Testing is a bit odd. It has practical value, but 'philosophically' it does not seem to make any sense.
When you are about to write some code and a test for it, you are starting with one piece of information: what you want the code to do. So why don't you just translate that into code? What do you gain by translating it into two pieces of code, and comparing one with the other? How can one have authority over the other?
So testing must be about ensuring consistency: if you change the code and the tests fail, you have made an invalid change. But that raises the question: why do we allow changes to be invalid? Why don't we constrain code modification to only the kinds that maintain consistency?
Maybe testing is just one of the best things that is practically possible... but there is a nagging feeling that it does not make sense!
Languages with powerful type systems do allow the programmer to ensure consistency without duplicating code, to a degree. And I suspect that it does reduce the testing burden substantially when used correctly.
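A minimal sketch of the idea (the `Money`/`Currency` names are made up for illustration): by making invalid states hard or impossible to construct, the type structure rules out a whole class of bugs that would otherwise each need a test.

```python
from dataclasses import dataclass
from enum import Enum

class Currency(Enum):
    USD = "USD"
    EUR = "EUR"

@dataclass(frozen=True)
class Money:
    amount: int        # store cents, so fractional-cent bugs can't occur
    currency: Currency

def add(a: Money, b: Money) -> Money:
    # Mixing currencies is a logic error. A stronger type system could
    # reject it at compile time; in Python one runtime guard still
    # replaces many ad-hoc tests for mixed-currency arithmetic.
    if a.currency is not b.currency:
        raise ValueError("cannot add different currencies")
    return Money(a.amount + b.amount, a.currency)
```

The point is not the guard itself but the shape of the data: there is no way to build a `Money` with a typo'd currency string or a float amount, so no test for those cases is needed.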
There will always be some need for all of the following: static checking (e.g. compiler checking the types), tests, and code review. There's a simple economic reason for this: if you omit any one of those strategies, then the cheapest way to find the next bug will almost certainly be the one that you omitted.
You're right that tests are redundant (you could say the same thing about type annotations, perhaps), but redundancy is underrated. Redundancy aids readability, and it also helps catch mistakes when there's an inconsistency. If you put code and tests near each other, it might be helpful to think of it like: "<code>. In other words, <test>." Similar things could be said for type annotations, declarations, and constructors; or code comments.
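The "<code>. In other words, <test>." framing might look something like this (a toy example, not from the thread):

```python
def celsius_to_fahrenheit(c: float) -> float:
    return c * 9 / 5 + 32

# In other words: water freezes at 32 F and boils at 212 F.
assert celsius_to_fahrenheit(0) == 32.0
assert celsius_to_fahrenheit(100) == 212.0
```

The same fact is stated twice in different forms; an inconsistency between them signals a mistake in one or the other.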
I suppose testing could be understood as a special error-correcting code for a particular noisy information processor -- humans writing software.
But then one must begin to wonder, are they doing that job very well? Testing does not seem to be as carefully designed as Hamming codes and the like...
It's not as much about clarity of communication as it is clarity of thought.
Also, a program is not a single message being sent to the computer. A program is revised over time, and testing helps ensure the integrity of the program through revision.
> ensure the integrity of the program through revision
Yes, that is what I am thinking: each step is like sending the signal through a noisy channel (that also does some transforming -- it is not an exact analogy). But testing doesn't seem to be carefully designed to address the particular kinds of 'noise'/mistakes that humans make.
Tests express what we want the code to do, code expresses how we want to do it. It's the difference between giving someone directions to your house and checking whether they made it.
There is not really a difference in software. The 'what' is defined by the 'how'. Imagine you had a very clear idea of the 'what', so complete that it could be used to test every possible output of the 'how' -- in which case, you already know all the answers and you don't need to have that 'how'. (The purpose of software is to produce stuff you don't already know.)
You can compare the 'what's of two 'how's -- that is what testing is doing.
Hm, I think of testing as more of knowing a couple of 'what's' and checking whether our 'how' correctly reaches them. There's an infinite number of 'what's' and we obviously can't test them all, but with testing we generally take a hopefully useful sample of 'what's' and make sure that our 'how' reaches them correctly.
There is a style of testing where you devise two algorithms for the same thing and check whether they both reach the same result. I'm rather dubious of that style.