I think it leads to arguing along the lines of “hey, this isn’t a unit test.”
Some of my tests are very un-unit tests. I will write automated tests that …
Of course, I will also write more unit-like automation, including …
Finally, I also write tests that are very production-oriented to verify the liveness of a running system, including canary and smoke tests.
Dave Thomas said recently (in a podcast with Elixir Wizards) that
If it has a name, it's probably wrong.
- Dave Thomas
Names are important, very important, but sometimes they can be distracting.
I prefer to focus on three qualities of good automated tests: fast, sturdy, and isolated.
My insights into testing are largely based on Kent Beck (see Qualities of a Good Test and Programmer Test Principles).
Fast is relative.
It’s hard to argue (unless you are trolling) against having fast tests. Speed is definitely relative, but the general thinking is that the faster your tests run, the faster you can get feedback from those tests. Slow tests will be run less often and by fewer people, and they will not provide the timely feedback that you need.
As your code base grows, even with the best intentions, some of your tests might need to be slow (as the test is more important than its speed), or you simply might have so many (high-value and very fast) tests that running the entire test suite is now way too (relatively) slow.
At this point, you offload those slow tests and no longer run them all the time. Just never let that become an excuse to stop trying to make your tests run fast.
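As a minimal sketch of what that offloading might look like, assuming a Python project using pytest (the marker name and test names here are hypothetical): mark the slow tests, deselect them in your fast local loop, and run the full suite less often, say in CI or nightly.

```python
import pytest

# Fast, high-value test: runs on every change.
def test_sales_tax_for_ontario():
    assert round(100.00 * 0.13, 2) == 13.00

# Important but slow, so it is offloaded rather than deleted.
# Register the marker in pytest.ini to avoid warnings:
#   [pytest]
#   markers =
#       slow: slow but important tests, run nightly or in CI
@pytest.mark.slow
def test_generate_every_report_template():
    ...  # imagine this renders hundreds of PDFs and takes minutes

# Fast local loop:  pytest -m "not slow"
# Full suite in CI: pytest
```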
A sturdy test is one that rarely (ideally never) fails with a false negative (i.e. the test failed but the code is just fine). With experience you will get better at knowing beforehand what may (or may not) make a test brittle, but even if you don’t, your brittle tests will eventually tell you how brittle they are.
Some examples of brittleness in tests might include depending on the current date or time, depending on the order tests run in, sharing mutable state between tests, or calling out to a real network service.
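To make that concrete, here is a small Python sketch of a time-dependent test, one of the most common sources of brittleness, next to a sturdier version where the clock is an explicit input. The Invoice class and its fields are hypothetical.

```python
import datetime
from dataclasses import dataclass
from typing import Optional

@dataclass
class Invoice:
    issued_on: datetime.date

    @property
    def due_date(self) -> datetime.date:
        return self.issued_on + datetime.timedelta(days=30)

    @classmethod
    def create(cls, today: Optional[datetime.date] = None) -> "Invoice":
        # The clock is an explicit, overridable input.
        return cls(issued_on=today or datetime.date.today())

# Brittle: reads the real clock twice. A midnight rollover between the two
# reads makes this fail even though the code is fine (a false negative).
def test_due_date_brittle():
    invoice = Invoice.create()
    assert invoice.due_date == datetime.date.today() + datetime.timedelta(days=30)

# Sturdy: pin the date so the result never depends on when the test runs.
def test_due_date_sturdy():
    invoice = Invoice.create(today=datetime.date(2024, 3, 1))
    assert invoice.due_date == datetime.date(2024, 3, 31)
```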
As you make changes to your code base, tests will eventually start to fail, and that’s a good thing. The test suite can act as a great tool to show you possible regressions. But it can also highlight spots where your test suite has unnecessary coupling to assumed behaviour.
Imagine that you are writing an invoicing app. You have tests that calculate the sales tax based on the shipping address. The accounting team lets you know that digital products calculate tax a bit differently. You write a new test to demonstrate the desired behaviour, and then, when you run your test suite, a bunch of (what you thought were unrelated) tests start failing: your tests about payment transactions, your tests about PDF generation, and your tests about overdue notices.
It turns out a lot of your tests were duplicating knowledge of the tax calculation; when things changed, those assumptions were no longer true and those tests started blowing up. You cannot fully isolate yourself from this scenario, and different testing frameworks give you different tools to deal with coupled tests, but it is something to be aware of and to work to minimize. I am not advocating any particular solution to dealing with coupled tests; there is no one solution for every scenario.
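Here is a sketch of what that duplicated knowledge can look like, and one (certainly not the only) way to reduce it. All the names here (Invoice, sales_tax, render_total, TAX_RATES) are hypothetical stand-ins for the invoicing app above.

```python
from dataclasses import dataclass

TAX_RATES = {"Ontario": 0.13}  # hypothetical single source of truth

def sales_tax(subtotal: float, region: str) -> float:
    return round(subtotal * TAX_RATES[region], 2)

@dataclass
class Invoice:
    subtotal: float
    ship_to: str

def render_total(invoice: Invoice) -> str:
    total = invoice.subtotal + sales_tax(invoice.subtotal, invoice.ship_to)
    return f"Total: ${total:.2f}"

# Coupled: re-states the 13% rule far away from the tax tests. When
# accounting changes the rule, this rendering test blows up too.
def test_render_total_coupled():
    assert render_total(Invoice(100.00, "Ontario")) == "Total: $113.00"

# Less coupled: the expected value is derived from the same tax function,
# so only the dedicated tax tests encode the actual rates.
def test_render_total_isolated():
    invoice = Invoice(100.00, "Ontario")
    expected = invoice.subtotal + sales_tax(invoice.subtotal, invoice.ship_to)
    assert render_total(invoice) == f"Total: ${expected:.2f}"
```

The second test still depends on sales_tax, so this is a trade-off rather than a cure: a tax rule change now fails in one obvious place (the dedicated tax tests) instead of all over the suite.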
Almost always, having tests is better than not having tests.
Try not to argue too much about whether a test is a unit test, an integration test, or a system test, and instead focus on whether the test is fast, sturdy, and isolated. Defer complexity in your build pipeline until it’s a must; it’s always easier to change a simple process than a complex one.