If it's taking you twice as long, you're doing it wrong(TM).
Or more likely, you're not measuring the time properly: you're estimating the difference from memory. It's easy to discount the time spent re-running the program after every change to check the result, and to conclude that writing tests cost you more time than it actually did, because it feels painful to you.
So: measure it rigorously, and pair-program with someone who knows how to do it properly.
Or maybe -- just maybe -- his problem domain might be harder to write tests for than yours is.
Look, as one of the Rakudo Perl 6 developers, I love our extensive test suite. It's an absolutely terrific development tool. Writing tests for it is usually quite easy, helps make sure we're getting the subtle details and interactions correct, and benefits not just Rakudo but every Perl 6 project. It's a huge win.
But I'm also working on an ABC music notation to sheet music program, using Perl 6 and Lilypond as tools. And I don't see how I could fully test the results of that without some sort of optical sheet music recognition software -- and even if I could get that up and running, it would still only verify that the musical symbols were correct, not that the sheet music actually looked good. If you can tell me how to set up an automatic test for those things that will take me less time than the 40 or so hours that writing the actual program took, I'm all ears.
you're not measuring the time properly: you're estimating the difference from memory. It's easy to discount the time spent re-running the program after every change to check the result, and to conclude that writing tests cost you more time than it actually did, because it feels painful to you.
Given my (admittedly small) sample of co-workers, it is most likely this. I see people constantly code, compile, run, type in a command, go "yeah, that's right" or "no, that's wrong", and then go back to step 1. I ask them why they aren't writing a test suite instead. "Because that takes way longer!" is the reply. They never actually keep books on the time they're spending. Their mind says "running it this way takes me 10 seconds", not "running it this way 50 times a day takes me 500 seconds". Selenium is amazing for web dev.
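The 10-seconds-versus-500-seconds point is easy to make concrete. A back-of-the-envelope sketch (every number here is an illustrative assumption, not a measurement):

```python
# Back-of-the-envelope cost comparison; all numbers are illustrative assumptions.
MANUAL_CHECK_SECONDS = 10    # one edit-compile-run-eyeball cycle
RUNS_PER_DAY = 50            # how often you re-check by hand
TEST_WRITING_HOURS = 4       # one-time cost to automate those checks
AUTOMATED_RUN_SECONDS = 1    # an automated suite runs with almost no attention

manual_per_day = MANUAL_CHECK_SECONDS * RUNS_PER_DAY        # seconds/day by hand
automated_per_day = AUTOMATED_RUN_SECONDS * RUNS_PER_DAY    # seconds/day automated
saved_per_day = manual_per_day - automated_per_day

break_even_days = TEST_WRITING_HOURS * 3600 / saved_per_day
print(f"manual checking: {manual_per_day} s/day")
print(f"test suite pays for itself after {break_even_days:.0f} working days")
```

With these made-up numbers the suite pays for itself in about a month and a half of working days; the point is that nobody's memory does this multiplication for them.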
If you can reduce your problem down to a sentence that sounds testable -- "When I press the login button, I go to the login screen" -- it probably is. If you can break a complex sentence into multiple smaller ones, you have a test suite. If your sentences are customer-focused and black-box, you have acceptance tests. That's sort of it. If you have a problem where that doesn't work, then that's OK. There is a small class of hard problems that you can't test that way, and good developers know what they are. Great developers know that doesn't mean they can't test the other bits ;)
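As a minimal sketch of that sentence-to-test translation (the App class here is a hypothetical stand-in for a real driver such as a Selenium WebDriver; all the names are mine):

```python
# Hypothetical application driver; in real web dev this role would be played
# by something like a Selenium WebDriver or an HTTP client.
class App:
    def __init__(self):
        self.screen = "home"

    def press(self, button):
        # Toy navigation logic, just enough to make the sentence testable.
        if button == "login":
            self.screen = "login"

def test_login_button_goes_to_login_screen():
    # "When I press the login button, I go to the login screen."
    app = App()
    app.press("login")
    assert app.screen == "login"

test_login_button_goes_to_login_screen()
print("acceptance test passed")
```

Each testable sentence becomes one function like this; breaking a complex sentence apart just means writing more of them.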
One day, I'm going to write a research paper where I watch people who refuse to write test cases and see if they actually are working faster, or whether they've just rationalized what they're doing in their minds.
I'll give an example. Imagine that you're writing an append method for a list, implemented as insert(x, size). TDD requires that you test the interface, not the implementation, so you'd end up with lots of test cases. How would the test code compare with the implementation?
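A sketch of what that might look like (class and test names are my own, not the commenter's):

```python
# A list whose only insertion primitive is insert(x, i); append is defined
# on top of it as an insert at position size.
class InsertList:
    def __init__(self):
        self._items = []

    @property
    def size(self):
        return len(self._items)

    def insert(self, x, i):
        self._items.insert(i, x)

    def append(self, x):
        # The whole implementation: one line delegating to insert.
        self.insert(x, self.size)

# TDD tests the interface, so the tests only know append's contract --
# and they already outweigh the one-line implementation they cover.
def test_append_preserves_order():
    xs = InsertList()
    for x in (1, 2, 3):
        xs.append(x)
    assert xs._items == [1, 2, 3]

def test_append_grows_size_by_one():
    xs = InsertList()
    xs.append("a")
    assert xs.size == 1

test_append_preserves_order()
test_append_grows_size_by_one()
print("append tests passed")
```

That imbalance is the commenter's point: interface-level tests for a trivial delegating method can easily be several times longer than the method itself.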
Hmmm... I dunno. A lot of times I like to bang out the design for something fairly significant without running the entire thing once. I might have an interpreter open and I'll play with a bunch of values trying to get the right form of everything up and going. Y'know, working on the details.
Then I start slapping it together, writing some tests on it, adding some fixtures for my data. Etc. etc. At the end, I've got all my tests done and my code working.
TDD or BDD is fine when you know what you're trying to build. If you've only got a few vague, high-level ideas about your design and you're just iterating on the problem space, it's a drag, man.
He is talking as a business owner, not as a programmer/coder. You should also consider his background in Lean Startup Circle. The fundamental rule of an MVP (minimum viable product) is to get your product out into the market as soon as you can, and writing tests pulls in the opposite direction. The goal of an MVP is to validate your idea with real users, to see whether it works at all in the first place. There is often very little time to write test cases or even look at edge cases. In that light, writing tests doesn't make much sense.
Interestingly, the same thing could be said about using a powerful type system or not.
It's easy to assume that the compiler errors you are getting are just wasting your time, forcing you to put arcane type declarations, annotations, and constructors around your code. But that time might reduce the testing burden, because the compiler can make more guarantees for you; and therefore it might actually save time.
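As a toy illustration of that trade (assuming Python with a checker like mypy; the unit types are my own example, not from the comment):

```python
from typing import NewType

# Distinct unit types: a checker such as mypy rejects mixing them before the
# program ever runs, so no test needs to cover that class of bug.
Meters = NewType("Meters", float)
Feet = NewType("Feet", float)

def meters_to_feet(m: Meters) -> Feet:
    return Feet(m * 3.28084)

altitude = Meters(100.0)
print(meters_to_feet(altitude))
# meters_to_feet(Feet(328.0))  # mypy error: expected "Meters", got "Feet"
```

The annotations cost a few keystrokes up front, but every unit-mixup test you would otherwise have written is now the type checker's job.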