
GreenCalligrapher571

I've been on teams where "We can't merge this until you get the test coverage a little higher", along with a bunch of quibbling about tests ("Too many mocks! Prefer an end-to-end test" followed by "This end-to-end test is too slow! Mock some things out!"), was regularly used to slow-roll work and make people look bad. I'm not on such a team now, but it's a weapon I've seen used in the past. It was basically straight out of the [CIA's Simple Sabotage Field Manual](https://www.corporate-rebels.com/blog/cia-field-manual), except to advance the career of Kevin the mediocre developer instead of the interests of a global superpower.

For what it's worth, when I'm leading projects we do have periodic discussions about our test coverage, both as instrumented and in terms of "How often do we have breaking changes that would have been caught by more thoughtful testing practices?" and "How much value are we actually getting out of these tests?" There are plenty of examples in the world of tests that were really useful for getting a feature written but now take significantly more time to maintain than they save by catching regressions... or tests that are duplicative... or tests that are so slow that if I want to run the whole suite I should go run a quick errand while waiting for it to finish. I've seen some terrible, terrible codebases with upwards of 90% test coverage.

"What's the actual cost of a bug sneaking into production? How does that compare to the cost of trying to prevent it?" is a great question for a team.


sho_bob_and_vegeta

Incredibly relevant. We do, in fact, have a library that takes 8 minutes to run all the tests when we want to build. Our unit tests get updated constantly anyway (including adding a null parameter in 6 test classes whenever a method gains a new parameter it only uses elsewhere). Our Director of Software set one of our team's goals to "hit x% code coverage." That is, unfortunately, the only numerical goal he set, so it's the one being cracked down on. This is the only reason I don't think it's from the Simple Sabotage Field Manual, and the exact reason I'm writing unit tests for what is essentially a global HashMap. In our case, a bug sneaking into production can lead to literal fires, so it's pretty important to cover our code. But this is some insane inanity.
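
For the curious, the tests in question look roughly like this minimal sketch (JUnit 5; the wrapper class and all names are invented for illustration):

```java
import static org.junit.jupiter.api.Assertions.*;

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.junit.jupiter.api.Test;

// Hypothetical stand-in for "essentially a global HashMap":
// a ConcurrentHashMap behind a tiny API.
class SharedRegistry {
    private final Map<String, String> entries = new ConcurrentHashMap<>();

    void put(String key, String value) { entries.put(key, value); }

    String get(String key) { return entries.get(key); }
}

class SharedRegistryTest {

    @Test
    void putThenGetReturnsStoredValue() {
        SharedRegistry registry = new SharedRegistry();
        registry.put("sensor-1", "ONLINE");
        assertEquals("ONLINE", registry.get("sensor-1"));
    }

    @Test
    void getUnknownKeyReturnsNull() {
        // The "trivial" negative case that pads the coverage number.
        assertNull(new SharedRegistry().get("never-stored"));
    }
}
```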


jaskij

> global

*spends the next two weeks figuring out how to test for race conditions*


sho_bob_and_vegeta

Java. ConcurrentHashMap. You got any of those over there in C?
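
And for what it's worth, "testing for race conditions" here mostly ends up as a crude smoke test like this sketch (JUnit 5; everything is illustrative). It can catch gross races, but it never proves their absence:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import org.junit.jupiter.api.Test;

class ConcurrentIncrementTest {

    @Test
    void parallelIncrementsAreNotLost() throws InterruptedException {
        ConcurrentHashMap<String, Integer> counters = new ConcurrentHashMap<>();
        int threads = 8, incrementsPerThread = 10_000;
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        CountDownLatch done = new CountDownLatch(threads);

        for (int t = 0; t < threads; t++) {
            pool.submit(() -> {
                for (int i = 0; i < incrementsPerThread; i++) {
                    // merge() is atomic; a plain get-then-put here would lose updates
                    counters.merge("hits", 1, Integer::sum);
                }
                done.countDown();
            });
        }
        done.await(30, TimeUnit.SECONDS);
        pool.shutdown();

        assertEquals(threads * incrementsPerThread, counters.get("hits").intValue());
    }
}
```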


jaskij

Not even in C++, I think. Not like I use standard library containers most of the time. Embedded FTW.


sho_bob_and_vegeta

Embedded. Ah yes, well... you have even stricter standards to deal with than we do.


jaskij

Depends on the actual application; thankfully I'm not dealing with anything safety-critical. Just stuff like "we can save you seven figures in repairs if our system predicts the fault correctly". Full-stack solution, all the way from analog signals to analysis and presentation. Most of the firmware I write doesn't even have a heap, so standard containers are a no-go.


LunaNicoleTheFox

Having access to higher-performing chips with enough RAM to actually have a heap is really nice. If it's reasonable, it might be worth considering ESP32 chips for future projects?


jaskij

Oh, I am working with high-performance chips; my current project uses an STM32H7, 480 MHz and a megabyte of RAM. And frankly, I will push back against using a heap unless it's absolutely necessary. It's just simpler without it. As for the ESP32... nah, thanks. Not enough pins, and probably missing a peripheral or two we'd need. And work is conservative about the chips we use anyway.


LunaNicoleTheFox

Understandable. We use the ESP32-S3 with 32MB of flash and 8MB of PSRAM, and they are great, aside from Espressif's ESP-IDF.


blaqwerty123

Easy. Just rewrite it in Rust.


many_dongs

It turns out software development doesn't happen effectively when the people in charge don't know shit about software development. Who could have foreseen this?


JackReact

For me, oftentimes writing tests made me realize certain edge cases I'd forgotten to account for. Aside from bringing existing behavior under coverage to catch later changes, it sometimes helps to approach a problem from a test-driven perspective rather than just the coding side.


HolyGarbage

By far my most common MR comment is: "Write tests for the negative-path cases as well." When the same developer writes the feature and then a test for it, particularly if they write the test after, it's sooo easy to get stuck in the mindset of how it's *supposed* to work, and only test the positive path, or whatever path the user story describes. This is why TDD or having dedicated testers is so valuable.
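
To make it concrete, the shape I keep asking for looks something like this sketch (JUnit 5; the parser and all names are made up):

```java
import static org.junit.jupiter.api.Assertions.*;

import org.junit.jupiter.api.Test;

// Hypothetical unit under test: parse a port number, reject bad input.
class PortParser {
    static int parsePort(String raw) {
        int port = Integer.parseInt(raw);
        if (port < 1 || port > 65535) {
            throw new IllegalArgumentException("port out of range: " + port);
        }
        return port;
    }
}

class PortParserTest {

    @Test
    void positivePath_validPortIsParsed() {
        // The path the user story describes; the one everyone writes.
        assertEquals(8080, PortParser.parsePort("8080"));
    }

    @Test
    void negativePath_outOfRangePortIsRejected() {
        // The path that tends to be missing from the MR.
        assertThrows(IllegalArgumentException.class, () -> PortParser.parsePort("70000"));
    }

    @Test
    void negativePath_garbageInputIsRejected() {
        assertThrows(NumberFormatException.class, () -> PortParser.parsePort("not-a-port"));
    }
}
```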


sho_bob_and_vegeta

I use TDD when coding algorithms, frameworks, most things actually. Which is why I was so caught off guard when they wanted tests for this. I had to refrain from saying: "I consistently write longer test classes than the SUTs (systems under test). Are you kidding me?"


NewbornMuse

Hm, sounds like you should write tests for your tests. That'll get your code coverage to >100%. That'll impress your boss!


eloquent_beaver

People hate writing unit tests, especially when it means writing tons of scaffolding/boilerplate code to set things up and you're just testing "trivial" scenarios, but they are absolutely crucial to long-term codebase health.

The modern pattern for development and deployment is *continuous* integration and continuous deployment. CI/CD requires two foundational ingredients: automation and good testing. You can't do CI/CD without automation, and automation is unsafe if you don't have good tests. Unit tests form the basis of a good test suite (the Google SWE Book recommends [the "pyramid" approach](https://abseil.io/resources/swe-book/html/ch11.html#googleapostrophes_version_of_mike_cohna), with most of your tests being unit tests). Even the trivial ones.

What they do is catch regressions in an environment where you might be committing 1000+ times a day. If you've worked in any large codebase being touched 1000x/day, you know all too well how easily bugs slip in that break the most obvious assumptions about trivial behavior. Unit tests asserting that trivial expectations you thought shouldn't need to be stated actually match the real behavior catch these. Being able to automatically catch regressions is a lifeline in a world where the code is changing at breakneck pace and every change is a potential prod candidate.


sho_bob_and_vegeta

I do agree. Actually, I completely understand it. If we build this service out more and more, bit by bit, it will eventually need testing. If I didn't start tests now, they'd probably get bypassed until someone had to spend a day or two figuring out the code, then actually setting up the "scaffolding/boilerplate", and then writing so many actual tests. By doing this now, I've created the test class, set up the mocks, and gotten started. Now it's easy as pie to add tests as the service grows.


Appropriate_Plan4595

When a metric becomes a target, it ceases to be a good metric.


jwadamson

I've come around on the coverage stuff a bit over time. One needs to keep perspective, but even "dumb" tests can still be useful, if only indirectly so:

1. Trivial code is trivial to cover.
2. Covering the trivial code helps show the places that *don't* have good coverage, assuming you have tools that present the metrics in a useful manner and someone pays attention to them.
3. If making a change somehow necessitates updating a bunch of the "trivial" tests, that code probably needs refactoring.


sho_bob_and_vegeta

> Trivial code is trivial to cover.

This is true. Hence why I just wrote the tests.

> If making a change somehow necessitates updating a bunch of the "trivial" tests, that code probably needs refactoring.

Another good point.


lmarcantonio

I read a paper where Homeland Security advocates the use of cyclomatic complexity for testing. *Path* coverage, not only code coverage. You have 3 ifs? That's 8 tests for you.
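
To make the arithmetic concrete: three independent ifs give 2^3 = 8 paths, even though a single call can touch every line. A toy sketch (hypothetical function; JUnit 5 parameterized test):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

// One call with all flags true gives 100% line coverage;
// path coverage demands all 8 combinations below.
class Pricer {
    static int price(boolean member, boolean bulk, boolean promo) {
        int cents = 1000;
        if (member) cents -= 100;
        if (bulk)   cents -= 200;
        if (promo)  cents -= 50;
        return cents;
    }
}

class PricerPathTest {

    @ParameterizedTest
    @CsvSource({
        "false, false, false, 1000",
        "false, false, true,   950",
        "false, true,  false,  800",
        "false, true,  true,   750",
        "true,  false, false,  900",
        "true,  false, true,   850",
        "true,  true,  false,  700",
        "true,  true,  true,   650",
    })
    void everyPathThroughTheThreeIfs(boolean member, boolean bulk, boolean promo, int expected) {
        assertEquals(expected, Pricer.price(member, bulk, promo));
    }
}
```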


tl_west

> You have 3 ifs? That's 8 tests for you.

Um, what if our software contains a little over 4,000 flags?


lmarcantonio

Software must be simple to be safe :D It was actually a paper against useless feature creep, but in the '70s there was a debate on how to decide the testing strategy. In pure waterfall you can devise the tests at the specification stage, because specs must be exhaustive.


blaqwerty123

What is pure waterfall? TIA


lmarcantonio

It's waterfall taken to the governmental-military-bureaucratic extreme: *nothing* is done before the previous step is discussed, checked, and sealed by the appointed committee. The commonly used waterfall is the Royce one, where you can (sort of) iterate up one or more levels when a snag happens; essentially you prototype and then redo it all the "right way", if possible. See [the antique diagram](https://en.wikipedia.org/wiki/Waterfall_model#/media/File:1970_Royce_Managing_the_Development_of_Large_Software_Systems_Fig10.PNG). Notice that even there the test plan is devised during design, well before coding. But then, at the time, coding was almost a mechanical translation from the design pseudocode to FORTRAN or whatever was in use.


tl_west

Flexible, powerful, cheap, sa…. did I say flexible?


lmarcantonio

Flexible in the '70s meant that an array could handle 9 elements instead of 10; 11 was out of the question :D Seriously, these days waterfall makes sense when you need a really solid piece of critical software that will be enshrined and has to work for the next 40 years without an update. Happened to us in railroad work: they needed a guarantee on spares and no software maintenance for 40 years.


vordrax

Anecdotally, our repos with 90%+ code coverage encounter significantly fewer issues in production than the old legacy code that no one bothered writing unit tests for (or writing in a way that could be unit tested). I've had a few devs argue against unit tests because they don't enjoy writing them, find them boring, etc., but again, anecdotally, those developers often encounter significantly more issues during dev testing and QA. Or they aren't even sure how to test their code, or how to write it in a testable way. Not saying that's the case with OP; I'm speaking more to my own experience.


engwish

Test coverage isn't for you, dingus; it ensures that the Thing continues to behave as expected, since we likely won't touch it for another 5 years.


sho_bob_and_vegeta

Ok my guy, calm down there. No need to call names.


akorn123

Chasing code coverage is fine, but in these instances I would use the time not just to make small additions but also to adjust existing tests to be better.


smutje187

The point that "Just Me" doesn't understand is that of course the test might not be of much use to them right now, but a test with 100% coverage is both living documentation of how their code works and a guard against accidental breaking changes later, because it can be run as part of every build to ensure the existing, accepted functionality remains the same.

Something that is just a concurrent map and a logger right now might get extended with lazy loading, or start using a distributed Redis instance, or send HTTP requests somewhere else. It's easy to change code that's transitively used by a lot of the codebase, and while tests are not 100% protection against breaking changes, they can at least point changed functionality out.


breischl

Huh, a service that has no network interface, authentication, parsing, serialization, or error responses? And is not going to be developed into anything more in the future? It's literally just a library that wraps a hashmap and writes a log, and will never be more? Why did anybody bother writing it?


dreamerOfGains

Because you don't know the future. What's stopping someone from introducing unintended changes? Are you going to be the gatekeeper for any and all changes, 24/7?


breischl

OK, but if you want it to have some particular behavior (e.g. maybe it's implementing a particular interface or something), then is it not worth testing for that behavior? If it's literally just a hashmap, it's not worth testing, or writing. If it's something else, then it's probably worth testing. I could imagine some cases where you literally just want logging around some internal structure, e.g. for debugging, but that's not very likely.


exseven

`# pragma: no cover`

Fixed coverage, boss.


This-Layer-4447

Why even have a service for something you're just putting a simple logging wrap around?


sho_bob_and_vegeta

Two separate parts of the code need to access it. Hexagonal Architecture and all that.


This-Layer-4447

Implement logging through aspects or decorators that can be applied as needed without embedding logging logic within the data management or business logic classes. This keeps the logging flexible and reusable across different parts of the application.
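
A minimal sketch of that decorator shape in plain Java (all names invented for illustration):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.logging.Logger;

// The interface both the real store and the decorator implement.
interface KeyValueStore {
    void put(String key, String value);
    String get(String key);
}

// The data-holding class: no logging logic embedded in it.
class InMemoryStore implements KeyValueStore {
    private final Map<String, String> data = new ConcurrentHashMap<>();
    public void put(String key, String value) { data.put(key, value); }
    public String get(String key) { return data.get(key); }
}

// The decorator: same interface, delegates everything, adds logging.
class LoggingStore implements KeyValueStore {
    private static final Logger LOG = Logger.getLogger(LoggingStore.class.getName());
    private final KeyValueStore delegate;

    LoggingStore(KeyValueStore delegate) { this.delegate = delegate; }

    public void put(String key, String value) {
        LOG.info(() -> "put " + key);
        delegate.put(key, value);
    }

    public String get(String key) {
        String value = delegate.get(key);
        LOG.info(() -> "get " + key + " -> " + value);
        return value;
    }
}

// Usage: callers that need logging wrap the store; others use it bare.
// KeyValueStore store = new LoggingStore(new InMemoryStore());
```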


SenorSeniorDevSr

I mean, I have a project now that has unit tests for things like regexes, etc. You know, twiddly things that can easily go wrong and need some strict love. But every single path through the system has an integration test that goes and does a whole "thing" with the system. It has caught *so many simple silly errors* that it's amazing. It has literally paid for itself many times over. And best of all? It doesn't really care much about refactorings, so you can move things around very quickly. I luv it.
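
The regex tests are the cheap kind; something like this sketch (pattern and cases are hypothetical; JUnit 5):

```java
import static org.junit.jupiter.api.Assertions.*;

import java.util.regex.Pattern;
import org.junit.jupiter.api.Test;

class OrderIdPatternTest {
    // Hypothetical format: "ORD-" followed by exactly 6 digits.
    private static final Pattern ORDER_ID = Pattern.compile("^ORD-\\d{6}$");

    @Test
    void acceptsWellFormedIds() {
        assertTrue(ORDER_ID.matcher("ORD-000123").matches());
    }

    @Test
    void rejectsNearMisses() {
        // Exactly the off-by-one cases where twiddly patterns go wrong.
        assertFalse(ORDER_ID.matcher("ORD-12345").matches());   // too short
        assertFalse(ORDER_ID.matcher("ORD-1234567").matches()); // too long
        assertFalse(ORDER_ID.matcher("ord-123456").matches());  // wrong case
    }
}
```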


settrbrg

I've been on teams that take tests seriously and teams that don't. The difference I've seen is that on the teams where testing was taken seriously, it was quite easy to write tests, because we always had tests in mind and we had a good setup for it. The coverage was also always good, and the team knew the tests well enough that they were all relevant. On the teams where it wasn't taken seriously, the tests were slow, horribly written, and a pain to work in. The coverage was bad, and the "coverage is bad" excuse was used very often.

My take is that good coverage just to have good coverage is bad, but if you're on a team that does testing well, you will have good coverage. And in those cases where 80% coverage isn't valuable, you can defend it, but only if you are good at testing. And in those cases you should be able to argue that the accepted coverage level should be decreased. I have never yet been on a team where tests were taken seriously and the accepted code coverage wasn't met. A team with good testing hygiene doesn't need to talk about coverage, nor excuse itself for it.

Edit: Achieving the test environment on the "good" teams was hard work. We as developers felt happy about it. Not sure it helped us release better software faster and cheaper, but at least we felt confident. And we could actually find bugs by reviewing the relevant tests and then making them pass.


Alarming_Rutabaga

I've said it before and I'll say it again: 100% code coverage is a curse.


FilterSoda

It's all fun until you find a bug in a built-in library.


gplusplus314

Make sure to say it’s powered by AI.


DiamondLebon

Good KPI you don't understand > performance


itachiWasANihilist

Ah, memories. I once wrote a function using the AWS library to write a JSON file to S3. Silly me, I figured unit tests were overkill for such an "obvious" case for integration tests. Oh, how wrong I was. The entire team insisted that mocking AWS and testing this function was non-negotiable. My manager even brought it up during my annual review, stressing that unit test code coverage is one of the most important measures of a codebase's quality. Apparently, skipping it makes me unworthy of a senior-level title. Who knew?
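
For reference, the kind of test they wanted looks roughly like this sketch, assuming the AWS SDK for Java v2 and Mockito (the uploader class and all names are invented):

```java
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import org.junit.jupiter.api.Test;
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;

// Hypothetical function under discussion: write a JSON string to S3.
class JsonUploader {
    private final S3Client s3;

    JsonUploader(S3Client s3) { this.s3 = s3; }

    void upload(String bucket, String key, String json) {
        PutObjectRequest request = PutObjectRequest.builder()
                .bucket(bucket)
                .key(key)
                .contentType("application/json")
                .build();
        s3.putObject(request, RequestBody.fromString(json));
    }
}

class JsonUploaderTest {

    @Test
    void uploadSendsOnePutToS3() {
        S3Client s3 = mock(S3Client.class);

        new JsonUploader(s3).upload("my-bucket", "report.json", "{\"ok\":true}");

        // Verifies the call happened without touching real AWS.
        verify(s3).putObject(any(PutObjectRequest.class), any(RequestBody.class));
    }
}
```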


Cometguy7

All I ever want someone to tell me is what they want their tests to tell them. I've asked every candidate for 10 years what they're looking to get out of their tests, and I've never gotten an answer.


PunDefeated

“To verify my code is correct?” What are you looking for?


Cometguy7

What is correct? There are so many ways to understand "correct", and that understanding determines how tests are arranged, what gets tested, and what doesn't.


kuros_overkill

To verify functionality, and to ensure future changes don't break existing functionality.


Psychpsyo

I want the test to tell me if my change causes non-obvious breakage because it violates some (probably subtle) assumption that was made when the code was written.


[deleted]

That sounds like a you issue though. I would have answered “to validate that the code does what it should when presented with expected and also unexpected inputs” If that wasn’t enough, I would have walked out. Nobody wants to work for a pedant.


Cometguy7

Of course it's a me issue. Everyone on my team is my responsibility. I want to know what people consider important, and that they're not going to just go through the motions.