Testing is waste of time, I know that my code works (progfu.com)
84 points by Anon84 on Dec 24, 2010 | 69 comments


90% of the code you write at a startup is exploratory. It's written to learn something about your customers. If you break it, you typically don't incur much of a penalty in terms of cost or time.

Testing every bit of code does cause overhead. I find it takes more than twice as long to write code when I test it. This overhead puts a big drag on the speed at which you can move, which can kill a startup.

For these reasons, I only test code that's proven its long term value.


As much as I've tried, I really can't get into "test first" development. I find that I make so many tiny changes when I am first working on a new project, and refactor so much, that the tests would really slow me down.

But once things get more solid for me (and, like you said, I know the code has some long term value), I'm a big fan of building a solid test suite before a full production launch.

I also think it is worth the time to use code coverage tools to find sections that are not being covered. Odds are your test suite only covers 60-80% of the total logic in your app.


For me it was a discipline thing. Then a mentor of mine convinced me that not writing tests was actually okay when you're exploring something. He suggested I only write tests for "production" code - something I would deploy.

So I routinely would whip something up, see that it works, then start a new project with tests and rewrite my experimental code.

You can imagine what ended up happening - I realized how much time this was wasting, so I started writing tests to test my experimental code so I didn't have to throw it away.

I got my brain hacked. Maybe that story will help someone else. :)


How can you refactor if you don't have a test suite? Do you manually check everything in your application after every change? That's the main value of having a test suite you can trust. It speeds up refactoring massively.


If it's taking you twice as long, you're doing it wrong(TM).

Or more likely, you're not measuring the time properly, using memory to estimate the difference. It's easy to discount the time re-running the program every time to check the result, and think you spent more time than you did writing tests because that seems painful to you.

So: measure it rigorously, and pair-program with someone that knows how to do it properly.


Or maybe -- just maybe -- his problem domain might be harder to write tests for than yours is.

Look, as one of the Rakudo Perl 6 developers, I love our extensive test suite. It's an absolutely terrific development tool. Writing tests for it is usually quite easy, helps make sure we're getting the subtle details and interactions correct, and benefits not just Rakudo but every Perl 6 project. It's a huge win.

But I'm also working on an ABC music notation to sheet music program, using Perl 6 and Lilypond as tools. And I don't see how I could fully test the results of that without using some sort of optical sheet music recognition software -- and even if I could get that up and running, it would still only be testing that the musical symbols were correct, and not that the sheet music actually looked good. If you can tell me how to set up an automatic test for those things that will take me less time to set up than the 40 or so hours that writing the actual program took, I'm all ears.


> you're not measuring the time properly, using memory to estimate the difference. It's easy to discount the time re-running the program every time to check the result, and think you spent more time than you did writing tests because that seems painful to you.

Given my (admittedly small) sample of co-workers, it is most likely this. I see people constantly code, compile, run, type in a command, go "yeah, that's right" or "no, that's wrong", and then go back to step 1. I ask them why they aren't writing a test suite instead. "Because that takes way longer!" is the reply. They never actually bookkeep the time they spend doing it. Their mind says "running it this way takes me 10 seconds", not "running it this way 50 times a day takes me 500 seconds". Selenium is amazing for web dev.

If you can reduce your problem down to a sentence that sounds testable ("When I press the login button, I go to the login screen"), it probably is. If you can break a complex sentence into multiple smaller ones, you have a test suite. If your sentences are customer-focused and black-box, you have acceptance tests. That's sort of it. If you have a problem where that doesn't work, then that's OK. There is a small class of hard problems that you can't test that way, and good developers know what they are. Great developers know that doesn't mean they can't test the other bits ;)

One day, I'm going to write a research paper where I watch people who refuse to write test cases and see if they actually are working faster, or whether they've just rationalized what they're doing in their minds.


I guess I'm not a true Scotsman.

I'll give an example. Imagine that you're writing an append method for a list, where append is implemented in terms of insert(x, size). TDD requires that you test the interface, not the implementation, so you'd end up with lots of test cases. How would the test code compare with the implementation?
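To make the comparison concrete, here is a minimal Python sketch of that situation. MyList and its methods are hypothetical stand-ins; the point is that the append implementation is one line, while interface-level tests need several cases:

```python
# Hypothetical list wrapper where append is defined via insert.
class MyList:
    def __init__(self):
        self._items = []

    def insert(self, x, pos):
        self._items.insert(pos, x)

    def append(self, x):
        # The implementation is a one-liner...
        self.insert(x, len(self._items))

    def items(self):
        return list(self._items)


# ...but testing the *interface* means covering several behaviors.
def test_append():
    xs = MyList()
    xs.append(1)                       # append to an empty list
    assert xs.items() == [1]
    xs.append(2)                       # append to a non-empty list
    assert xs.items() == [1, 2]
    xs.append(2)                       # duplicates are preserved
    assert xs.items() == [1, 2, 2]


test_append()
```

Even in this toy case the test body is already longer than the implementation, which is the trade-off being asked about.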


Hmmm... I dunno. A lot of times I like to bang out the design for something fairly significant without running the entire thing once. I might have an interpreter open and I'll play with a bunch of values trying to get the right form of everything up and going. Y'know, working on the details.

Then I start slapping it together, writing some tests on it, adding some fixtures for my data. Etc. etc. At the end, I've got all my tests done and my code working.

TDD or BDD is fine when you know what you're trying to build. If you've only got a few vague, high level ideas about your design and you're just iterating on the problem space, it's a drag man.


He is talking as a business owner, not as a programmer/coder. You should also consider his background in the Lean Startup Circle. The fundamental rule of an MVP (minimum viable product) is to get your product out into the market as soon as you can, and writing tests pulls in the opposite direction. The goal of an MVP is to validate your idea with real users, to see whether it works in the first place. There is often very little time to write test cases or even look at edge cases. In that light, writing tests doesn't make much sense.


Why can't they write a test that makes sure the basic case works?


Interestingly, the same thing could be said about using a powerful type system or not.

It's easy to assume that the compiler errors you are getting are just wasting your time, forcing you to put arcane type declarations, annotations, and constructors around your code. But that time might reduce the testing burden, because the compiler can make more guarantees for you; and therefore it might actually save time.


While I disagree with your hypothesis that "it takes more than twice as long to write code when you test it" (because you discount the bug fixes that you are deferring until later rather than discovering them with TDD), I do agree with one of your points:

"Testing every bit of code" is unwise.

I am a test-obsessed TDDer. That said, I do not test everything. I test proportionately to the cost of failure for a given particular feature.

Zero cost of failure -> zero to very few tests

High cost of failure -> many tests, sometimes with full path testing

This warrants discriminating degrees of test coverage:

* Partial coverage: not all of the code under test is executed

* Full coverage: all code under test is executed

* Path coverage: all code under test is exercised along every possible path it can take

Without path coverage, any suite of tests is potentially imperfect. Therefore, as Stu Halloway sometimes points out in his presentations, it is entirely possible to have failures while still having full test coverage.
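A small hypothetical Python example makes the full-vs-path distinction concrete: two independent branches give four execution paths, so line coverage can be complete long before path coverage is.

```python
# Hypothetical function with two independent branches (A and B),
# giving four possible execution paths.
def clamp_abs(x, limit):
    if x < 0:
        x = -x          # branch A: take absolute value
    if x > limit:
        x = limit       # branch B: clamp to the limit
    return x


# Full coverage: this single test executes every line (A and B both taken)...
assert clamp_abs(-10, 3) == 3

# ...but path coverage needs all four A/B combinations:
assert clamp_abs(-2, 3) == 2    # A taken, B not taken
assert clamp_abs(10, 3) == 3    # A not taken, B taken
assert clamp_abs(2, 3) == 2     # neither taken
```

A bug that only manifests on one of the untested paths would survive a suite that reports 100% line coverage.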


But that's the point of this article; starting your test suite is as simple as spending a couple of hours actually learning a test library, then instead of throwing away your hack tests, keep them. There's no way that's a 2x speed penalty, no way, and in practice it's faster to develop that way.

If you spend five minutes constructing some test cases for your code, you can either throw them away immediately, or continue to get leverage on your code with them for the next weeks, months, or years. Which do you think actually produces better velocity, and which might just make you feel like you're going faster?


Why does it take you twice as long to include unit tests?

Here is one possibility: when you don't test it, you don't know it's broken, so you don't spend time fixing it. When you do test, you do know it's broken, so you spend time fixing it. Hence, writing tests takes longer.

Edit: But, if you only use 10% of your code, it is probably fine if 50% of it is broken.


Not all code is as simple as an e-mail validator. I don't think anyone would argue that unit testing pure functions like this is so trivial that there is no argument against doing TDD.
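For reference, testing such a validator really is the trivial case. A hypothetical sketch (this regex is deliberately simplistic, nowhere near the full RFC 5322 grammar):

```python
import re

# Hypothetical, deliberately simple validator: one non-empty local part,
# an @, and a domain containing at least one dot. Not RFC-complete.
def is_valid_email(s):
    return re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", s) is not None


# Pure function, deterministic output: unit testing is effortless.
assert is_valid_email("alice@example.com")
assert not is_valid_email("no-at-sign.example.com")
assert not is_valid_email("two@@example.com")
assert not is_valid_email("trailing@dot.")
```

The comment's point is that very little interesting code looks like this.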

However, there is more to the world than pure functions, and there is more to the world than writing simple CRUD applications.

First, code that is meant as "glue" between disparate systems. For example, a piece of code that pops off a live queue, handles gracefully when the queue is slow or lagged, and then performs some action that has an additional side effect.

The retort here is that you should mock out the queue and whatever other systems. This can only get you so far though. At the end of the day you need integration tests that simulate the entire environment. Often it's not possible to build up and tear down an environment that simulates reality, and additionally it is not always possible to effectively simulate failures or load. In this case, I prefer to spin up EC2 instances with a replica of the real environment and manually test the code with real services under real load. This can be automated fairly well nowadays.

Second, there are classes of algorithms that are highly data-driven, require large amounts of data, and are qualitative in nature. Machine learning algorithms, search algorithms, etc. Building these algorithms usually requires a) a large dataset and b) subjective relevance feedback from a human. TDD is not going to help you here. Sure, you can unit test the pure functions within your algorithm (for example, it's common to compute tf*idf in search algorithms, so test your math there) but the "full stack" test and iteration process for this code generally requires you to manually look at results and make judgements. This isn't something you can just automate in a unit test, because regressions against the input data will be false positives or false negatives, depending on which approach you take to compute a diff :) You can get some headway by looking for large differences in RMSE or other metrics between commits, but at the end of the day your tests will still remain brittle under big improvements to the core algorithms.
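The "look for large differences in RMSE between commits" idea can at least be sketched. This is a hypothetical regression gate, not anything from the thread: it only flags large metric regressions, precisely because small diffs between commits are expected for data-driven algorithms.

```python
import math

def rmse(predicted, actual):
    # Root mean squared error between two equal-length sequences.
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual))

# Hypothetical gate: fail only on a *large* regression (default 10%),
# since exact-output assertions would be brittle for this kind of code.
def check_no_large_regression(baseline_rmse, new_rmse, tolerance=0.10):
    return new_rmse <= baseline_rmse * (1 + tolerance)

actual    = [1.0, 2.0, 3.0, 4.0]
baseline  = [1.1, 1.9, 3.2, 3.8]   # predictions from the old commit
candidate = [1.0, 2.1, 2.9, 4.1]   # predictions from the new commit

assert check_no_large_regression(rmse(baseline, actual), rmse(candidate, actual))
```

As the comment notes, this stays brittle under big improvements to the core algorithm: a legitimately better model can shift the metric enough that the threshold needs human re-judgement anyway.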

Finally, another class of algorithms that isn't amenable to TDD is computer graphics algorithms. The reason is the same: judging correctness generally requires a human. Again, unit test your vector math and so on. But don't expect TDD and unit testing to provide an exhaustive safety net to protect you from introducing subtle bugs in your rendering code. Looking at output is the best way to do that.


Agreed. Code that retains state does not lend itself well to unit testing. One of my latest projects at work was a MySQL node monitor with automatic failover. I did add unit tests to it, but I can tell you that I found more bugs, or rather quirks of MySQL's replication behavior and the MySQLdb driver through doing my manual testing. I also spent quite a bit of time fixing the test rather than the code, since the quirks meant I needed different behavior. As this piece of code is quite important to us, the ROI on it will be high. Then again the investment was sizable as well.

I think unit testing should be viewed in terms of ROI because, just like premature optimization, a lot of effort often goes into it where the return is negligible. Pure functions can be easily tested, so test your critical calculations every time. On the other end of the spectrum you have things that interact with external systems. Do you unit test an email notification function? How about a CSS layout? Can you easily test for an out-of-memory condition or a failed hard disk? Sometimes it is just cheaper to use human judgement.

Lastly, as many have pointed out, unit testing is not a silver bullet. Just because you did not break the unit tests does not mean the code still functions properly.


Absolutely. Unit testing is just one gun in your automated testing arsenal. When I was working as a web dev and I saw Selenium for the first time, my jaw dropped. I totted up all the hours I'd wasted of my life hitting refresh buttons, because "you can't run automated tests for a browser". How wrong I was!

Once you get into those subjective bits, you're in new waters. I think/hope there is some research going on here, using AI-based "critics". A lot of AI uses a generate/test paradigm, where a result is generated and then tested for fitness in some manner. One could imagine that testing in some of these fuzzier fields, like graphics, would be improved by independent AI critics. I had the opportunity to head into AI, and the inability to automatically test my algorithms and say "this really works" versus "oh no, it's broken" drove me crazy. However, this uncertainty is what the AI guys loved :)


I disagree (greatly) about testing machine learning code. I ruthlessly test my pure functions and even full-stack stuff, though of course my full-data experiments require interpretation. The difference is huge!

Without very solid separation between those two phases, and then without very solid actual testing, you cannot achieve trust or repeatability in your experiments. It's actually turning into a pretty interesting problem (for which I've patched together solutions using Fabric and Org-mode) to track experiments against versions of code while maintaining good testing practices.


The troll title, along with so many AdSense units scattered through the post, clearly shows that the purpose of the article is to grab attention. And make some bucks, maybe.


> computer graphics algorithms. The reason is the same . . .

Another reason is that the test code is (or will be, or might be) as numerically complex as what it is supposed to test -- how do you feel confident about it? Write tests for it? Hmmm . . .


In Haskell there are (nearly) only pure functions, and you will still write tests. Or, rather, you will write test specifications with QuickCheck.
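The property-testing idea behind QuickCheck doesn't depend on Haskell. Here is a hand-rolled Python sketch of the same pattern (all names hypothetical): state a property, then check it against many randomly generated inputs instead of a few hand-picked cases.

```python
import random

# Pure function under test.
def reverse(xs):
    return xs[::-1]

# Minimal QuickCheck-style driver: generate random inputs, assert a property.
def check_property(prop, gen, trials=200):
    for _ in range(trials):
        xs = gen()
        assert prop(xs), f"property failed for {xs!r}"

def gen_list():
    # Random lists of random ints, including the empty list.
    return [random.randint(-100, 100) for _ in range(random.randint(0, 20))]

# Specifications, not examples: reversing twice is the identity...
check_property(lambda xs: reverse(reverse(xs)) == xs, gen_list)
# ...and reversing preserves length.
check_property(lambda xs: len(reverse(xs)) == len(xs), gen_list)
```

Real QuickCheck adds shrinking of failing inputs and typed generators, which this sketch omits.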


I love this title because this was exactly the response I got from a contractor who was hired as a Technical Lead of a team I had to work with.

I found a bug in a module they released to my team, created a series of about 5 steps to make it perfectly reproducible and contacted the team lead. He adamantly REFUSED to believe me. Told him how to reproduce it and his response was "I don't need to do that because I know it works fine." I pointed out that until the bug was fixed, I couldn't make progress because my code was dependent on theirs working, he suggested it was somehow my fault. 'K then. I found a way to reproduce the bug without any of my code running (they had written a simple external dialog to demo the module) and the guy still maintains that "there are no bugs in that code, I've been running it here for weeks. You must not know what you're doing."

If he wasn't 1,000 miles away I swear I would have gone over there and beaten him over the head with a stack of Knuth!

Finally I said screw it and went over his head. I knew his manager (who used to be a programmer) pretty well and explained the problem and how I was being stonewalled. He figured it out in less than a minute on the phone and told me how to setup their configuration file so it wouldn't trigger the bug and promised the bug would be fixed. And apologized as a bonus.

Not surprisingly, we had tons of problems later with them not properly testing a bunch of common error paths. Finally the TL pissed off the wrong person and got himself fired.

OK, this is a bit OT, but I needed that rant :-)


My beef against testing is that I often see it adopted (and often subtly pushed) as an alternative to thinking through the logic carefully. Why mess with the if statements? My tests will catch the errors, if any. Should I index the array with i or i+1? I'll just test what works. Should I loop till n-1 or n-2? Let me just stick with n-2 and add a test...

The problem is that to write tests that really ensure that the logic is correct you really need to think through the logic really hard. Just spraying the code with a few tests that happen to come to the mind is not enough.

I have yet to see a convincing argument that coming up with a sufficient number of (unit) tests from a specification is any easier than getting the logic right.

Testing is well intentioned but often abused as an excuse to be sloppy.


This post makes the strongest argument I can think of in favor of testing: you're doing it anyway, so why not keep them? I don't know anyone who doesn't write a little main or side program to test out the code they have created. Unit testing can amount to nothing more than keeping those tests in a file that runs periodically and raises a notice when they no longer work. Usually the "problem" is that the code has changed and the test needs to be modified, but sometimes the problem is an error introduced into the code.

TDD is a slightly harder sell, because it isn't how most people normally code. TDD folks strongly believe that once you train your mind to think this way, it's the right way to go. I don't write code this way, though. I do what I described above.

The problem arises when it starts to become very difficult to write tests, and the scenario I described above no longer applies. If you're writing something that can be tested through simple main style program, no problem. But what if you need to test a UI element, or a service, or something immensely database oriented (that would require writing extensive mocks just to do the test). A good framework should make this easily testable, but they don't always. There's a limit to how much I'll bend over backwards to unit test something. In this case, maybe integration tests are the best way to still get some coverage.


>I don't know anyone who doesn't write a little main or side program to test out the code they have created.

We've not met in person, but here I am! As are some of my colleagues.

Most of my tests never leave the REPL. Those that do end up outside the REPL are "functional tests" - tests for a whole chunk of the system.


Why don't you just automate the REPL tests so they can be done for you? Whatever you type in the REPL, add it to the test suite. Whatever you check as output, that's your set of assertions. It will save you a whole lot of time in the future.
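One low-friction way to do exactly that in Python is doctest: the REPL transcript itself becomes the test. A hypothetical sketch (the slugify function and its behavior are made up for illustration):

```python
# Hypothetical: what got typed at the REPL...
#   >>> slugify("Hello, World!")
#   'hello-world'
# ...can be pasted into a docstring verbatim and re-run forever.
def slugify(title):
    """Turn a title into a URL slug.

    >>> slugify("Hello, World!")
    'hello-world'
    >>> slugify("  spaces   everywhere ")
    'spaces-everywhere'
    """
    words = "".join(c.lower() if c.isalnum() else " " for c in title).split()
    return "-".join(words)


if __name__ == "__main__":
    import doctest
    doctest.testmod()
```

The checked output is exactly what the REPL printed, so "whatever you check as output, that's your set of assertions" is literally true here.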


Why should I bother to convert tests from the REPL? I use the REPL for experiments along with testing. When I'm done experimenting, I'm also done testing. When I return, it will be for another experiment, and a significant part of my previous tests won't run again.

Also, I work mostly with pure functions and strong type systems. Those functions won't change their behavior if I change something elsewhere in the system, and the types won't let something bad slip through that would be hard to find.


I am a reformed non-tester. A couple of projects ago, I started writing some tests for some particularly gnarly pieces of code that I wasn't confident were going to stay working throughout the product's development cycle. Once I got those written, I found that I was over a hump - I was already set up with my test suite, object factories, net communications stubs, etc - all the hard work was done.

At that point, I found that it became faster to write my tests and then code to make them pass than to develop "traditionally". In cases where you do a lot of setup or teardown for a piece of code (what happens when a user adds X to his account? Manual testing involves adding, then testing, then clearing, then adding again...), automated testing really shines. What used to take 30 seconds of manual testing time per iteration now takes a ctrl-S and 3 seconds of tests. Multiply that by 20 iterations and the time savings are significant.

I'm at the point now that I write tests because it saves me time, not because it's the "right thing to do", and I'm much more confident in my code as a result. I don't test everything, but the stuff that's easily testable, high volatility, or mission critical, you betcha, that's getting tested thoroughly.

The problem is that if you write, manually test, and commit a piece of code once, you're only guaranteed that it works at the time you commit it. Two weeks down the road, when you change something only mildly related, you either a) retest that "known good" code, or b) make the potentially faulty assumption that it still works. Once an app reaches a certain level of complexity, any particular change is going to require a QA department to ensure that it didn't break something else. Automated tests are your first-line QA department. Your test suite can exercise all the important bits of your code in one fell swoop, so if you break anything, it'll let you know, and quickly. The value of this cannot be overstated, and once you've tasted it, you'll never want to go back.


Testing is a bit odd. It has practical value, but 'philosophically' it does not seem to make any sense.

When you are about to write some code and a test for it, you are starting with one piece of information: what you want the code to do. So why don't you just translate that into code? What do you gain by translating it into two pieces of code, and comparing one with the other? How can one have authority over the other?

So testing must be about ensuring consistency: if you change the code and the tests fail, you have made an invalid change. But that raises the question, why do we allow changes to be invalid? why don't we constrain code modification to only the kinds that maintain consistency?

Maybe testing is just one of the best things that are practically possible. . . . but there is a nagging feeling that it does not make sense!


Languages with powerful type systems do allow the programmer to ensure consistency without duplicating code, to a degree. And I suspect that it does reduce the testing burden substantially when used correctly.

There will always be some need for all of the following: static checking (e.g. compiler checking the types), tests, and code review. There's a simple economic reason for this: if you omit any one of those strategies, then the cheapest way to find the next bug will almost certainly be the one that you omitted.

You're right that tests are redundant (you could say the same thing about type annotations, perhaps), but redundancy is underrated. Redundancy aids readability, and it also helps catch mistakes when there's an inconsistency. If you put code and tests near each other, it might be helpful to think of it like: "<code>. In other words, <test>." Similar things could be said for type annotations, declarations, and constructors; or code comments.


I suppose testing could be understood as a special error-correcting code for a particular noisy information processor -- humans writing software.

But then one must begin to wonder, are they doing that job very well? Testing does not seem to be so carefully designed as Hamming codes or others . . .


It's not as much about clarity of communication as it is clarity of thought.

Also, a program is not a single message being sent to the computer. A program is revised over time, and testing helps ensure the integrity of the program through revision.


> ensure the integrity of the program through revision

Yes, that is what I am thinking: each step is like sending the signal through a noisy channel (that also does some transforming -- it is not an exact analogy). But testing doesn't seem to be carefully designed to address the particular kinds of 'noise'/mistakes that humans make.


Tests express what we want the code to do, code expresses how we want to do it. It's the difference between giving someone directions to your house and checking whether they made it.


There is not really a difference in software. The 'what' is defined by the 'how'. Imagine you had a very clear idea of the 'what', so complete that it could be used to test every possible output of the 'how' -- in which case, you already know all the answers and you don't need to have that 'how'. (The purpose of software is to produce stuff you don't already know.)

You can compare the 'what's of two 'how's -- that is what testing is doing.


Hm, I think of testing as more of knowing a couple of 'what's' and checking whether our 'how' correctly reaches them. There's an infinite number of 'what's' and we obviously can't test them all, but with testing we generally take a hopefully useful sample of 'what's' and make sure that our 'how' reaches them correctly.

There is a style of testing where you devise two algorithms for the same thing and check whether they both reach the same result. I'm rather dubious of that style.


I'm amazed at the number of people who don't test at all, but also at those who blindly believe that because they're told to test, they should always do it. I think it's important to pick and choose when it makes sense for you to test: when you are prototyping you might not need to, though in some cases it's actually quicker to write something to test an output than to keep trying it another way.

The real skill in testing is knowing when it should be done and how. It's a good article, and ultimately, until you have tested, you can never know when it's right to make use of testing and TDD.


Amen. I'm going to pass this article around to doubting colleagues of mine.

TDD saves my (and Forrst's) ass on probably a daily basis. The most recent debacle with our rather complicated post formatting library (which involves Markdown, autolinking emails, usernames, URLs [but ignoring all content in pre or code blocks, and not double-linking URLs in href or src attributes], XSS cleaning/sanitization, and tag rebalancing), would have likely been 1000x worse without a comprehensive test suite. There's just no way to test every possible case manually.


Automated testing gives you the confidence to make very invasive changes to code at any stage during the development process. This is especially important in environments with short iterations, where you are not necessarily designing for features that haven't even been conceived yet.

Note that I didn't say unit testing. The author gives the example of writing a test to verify that a bug exists, that the fix makes the test pass, and now as you grow the code you have confidence that the bug doesn't reemerge.

The same is true for features. You can write a test that verifies a feature exists, and that the feature continues to exist as you radically refactor the code to make new features fit better, as well as across such dangerous operations as branch merges.


Delicious link-bait title there.

The best articulation of the benefits of TDD/BDD that I've heard is that it shifts the pain to the start of the project, rather than the end, where it otherwise tends to reside.


I can understand how TDD can feel like an extra burden if you're a single developer working on a project. However, having tests is critically important if the project is passed on to another dev. I recently inherited a project that I had worked on at a previous gig. There were no tests, but at the time I was familiar with the code and it didn't matter much to me. Now the code has changed significantly, there are still no tests, and I'm always a bit nervous when I have to push to production, wondering if somehow I missed something that's going to break the app.

Even if you hate testing... do it for the sanity of the next dev on the project.


Where "next dev" includes "yourself, three months from now".


100% correct!


I absolutely adore unit tests because they mean I can change something down the line and be relatively sure that I haven't broken something else. It's worth writing the simple, 80%-coverage stuff for the peace of mind alone.


I'd like to do testing. The problem is that all the examples of these tests are of modules that work on simple data and produce deterministic outputs. In those cases, I can see how it's easy to set up an automated test.

But a large portion of my code is Monte Carlo, so the only way to see if it works correctly is to evaluate the distribution of the output, which will be correct only in a statistical sense. Moreover, the modules operate on classes with fairly complicated data, so it's difficult to mock up input data without running that part, too.
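That said, statistically correct outputs can still be tested statistically: assert that an estimate lands within a tolerance derived from its standard error. A hypothetical Python sketch (estimate_pi is made up; seeding makes the run deterministic):

```python
import math
import random

# Hypothetical Monte Carlo routine: estimate pi by sampling the unit square.
def estimate_pi(n, seed=42):
    rng = random.Random(seed)  # fixed seed => reproducible test runs
    inside = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0 for _ in range(n))
    return 4.0 * inside / n

# The output is only correct "in a statistical sense", so the assertion is
# statistical too: the estimator's standard error is 4*sqrt(p*(1-p)/n) with
# p = pi/4, and we allow a generous 4-sigma band around the true value.
n = 100_000
p = math.pi / 4
sigma = 4 * math.sqrt(p * (1 - p) / n)
assert abs(estimate_pi(n) - math.pi) < 4 * sigma
```

This doesn't solve the "complicated input data" half of the problem, but it shows one shape such a test can take: fix the seed, compute a tolerance from theory, and assert on the distribution rather than an exact value.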

Why doesn't anyone show a realistic example of setting up an automated test on a real-life piece of software?


There's a big difference between integration and unit tests. If you're having trouble writing unit tests, it's probably because your code is too tightly coupled, making it hard to separate one part from another.

When you try to write a project with tests (not necessarily test-first), it enforces a certain design. This is almost always a good thing, since you're forced to write modular code.


7) Just can't start that diet. I know it's a good idea, I certainly do it informally to some degree, but it's not part of the culture, building up the infrastructure is a bother, but basically just the inertia of day to day stuff holds me back. (sigh, maybe a new years resolution?)


To build a culture of quality, the best way to start is a mandatory code review system. Code doesn't go in until it's peer-reviewed by at least one person.

Code review is critically important: it instills a different attitude in the programmer (someone is going to read this, so I won't get away with sloppiness); and it puts the focus on readability. Tests provide at least two benefits to readability: the reviewer knows what the code is supposed to do (provides better context), and the reviewer also has greater confidence that you didn't break existing basic functionality.

Even without mandating tests, reviewers will soon start to return patches with comments like "Broken when X,Y,Z happen. Add a few tests around that tricky code path." Then, it will eventually escalate to general comments like "where are the tests?", because reviewers will get tired of testing basic functionality.


Start the cultural revolution yourself. The learning exercise alone is invaluable, and you might learn a lot just setting up, or building your own, testing infrastructure.


I'm a rank amateur coder working on a minimum viable product, so I've had to can testing to get something out there quicker. I understand its purpose and would love to know my code is squeaky clean, but under the circumstances it had to go.


You'd rather release a buggy, possibly completely dysfunctional application than spend time on tests? How much overhead do you imagine testing would bring?


Test suites can make you less careful because you start relying on the tests flagging any bugs. And then when the test suite does find a bug, you just hack the code until the test suite passes instead of fixing the underlying problem. This is the same problem as using microbenchmarks for performance work: you end up tuning to the benchmark instead of to the real world.


By "you", I think you may mean "I". If ssp is hacking the test suite to bypass a failing test, ssp has the problem, not the test suite. jerf does not have much trouble with that; jerf has experienced "the one failing test that turns out to reveal a major underlying problem, followed by a real fix that it would have taken him multiple customer-losing, hard-to-reproduce bugs in the field to learn about" multiple times.

Not that this is perfect, I've got just such a bug out there right now that simply refuses to be reproduced by anyone once a developer is looking at it, but I sure have far fewer of those than I would without the tests.


If ssp is hacking the test suite to bypass a failing test

I'm talking about hacking the application to pass the test suite, not hacking the test suite. You can often "fix" a failing test by doing

    if (condition that failed)
        whack the application state so that the test suite will pass.
without understanding what the actual bug was. And it's not always obvious to you that this is what you are doing.
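As a hedged illustration of that anti-pattern (a hypothetical `Cart` example, not from the thread): the tempting "fix" overwrites the state the test checks, rather than fixing the defect behind it.

```python
class Cart:
    def __init__(self):
        self.items = []

    def add(self, price):
        self.items.append(price)

    def refund(self, price):
        # Bug: the refunded item is never removed, so total() drifts.
        pass  # real fix: self.items.remove(price)

    def total(self):
        return sum(self.items)


def hacked_total(cart, expected):
    # The "whack the application state" move: if the number is wrong,
    # overwrite the state so the assertion passes anyway.
    if cart.total() != expected:
        cart.items = [expected]
    return cart.total()


cart = Cart()
cart.add(10)
cart.add(5)
cart.refund(5)
print(hacked_total(cart, 10))  # prints 10: test passes, refund() is still broken
```

The test suite reports PASS, but a second cart going through the same add/refund flow still totals 15, which is exactly the "bug free program" vs. PASS distinction below.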

ssp has the problem, not the test suite

Tuning to benchmarks is not some unique character flaw of mine. When you measure some aspect of people's behavior, they will optimize to that measurement. If the measurement is a boolean PASS/FAIL from the test suite, then they will optimize their behavior to get a PASS.

But the actual desired outcome is not PASS, it is "bug free program".


Hack the app, hack the test suite; I meant either, equally. I think my post makes it pretty clear that the main dichotomy is between "hack" and "real fix", and where the hack goes hardly matters.

Nevertheless, you are arguing that because people sometimes program to benchmarks, you are apparently better off without the benchmarks. I say that's nonsense. The solution is to use the benchmarks better. Are we programmers or automatons? (Or managers?) If you're going to be that defeatist about programming, you're not going to be a successful programmer under any circumstances; the entire field is a minefield of superficially appealing optimization opportunities!


Nevertheless, you are arguing that because people sometimes program to benchmarks, you are apparently better off without the benchmarks.

No, I am not.


The answer to the problem of optimization by proxy is not no optimization at all.


I used to be in the test-hater club, but testing has saved my bacon and made my life a lot easier since I got over my pride and sucked it up.

I'm writing quite a few libraries lately and doing refactoring as I go. Having a nice stockpile of tests built up along the way means that if I make some changes, I can know within seconds whether I inadvertently broke anything. This makes me braver when it comes to adding new features or reworking algorithms I've already written and "know" work.
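A minimal sketch of that safety net (hypothetical `slugify` helper, assuming Python): the original implementation passes the stockpiled test; a superficially equivalent refactor fails it within seconds.

```python
def slugify(title):
    # original implementation: split() collapses runs of whitespace
    return "-".join(title.lower().split())


def refactored_slugify(title):
    # "obviously equivalent" rewrite that breaks on doubled spaces
    return title.lower().replace(" ", "-")


# the stockpiled test: one tricky input, checked in milliseconds
for fn in (slugify, refactored_slugify):
    ok = fn("Hello   World") == "hello-world"
    print(fn.__name__, "PASS" if ok else "FAIL")
```

Without the test, the rewrite looks fine on every hand-checked single-space title; with it, the break surfaces before the change is even committed.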

Even if you're doing highly exploratory work, or if you suck at testing, even bad or cursory tests can surface the oddest of bugs. I've seen it firsthand, and it's what convinced me to really get into the practice. If even bad tests could save my bacon, imagine what I could do with better ones...


I've always wanted to try TDD, but for the stuff that I am working on these days -- wireless kernel drivers -- it isn't clear to me exactly how to do this. Does anyone have experience doing TDD at the kernel level?


My biggest hang-up is the ever-changing landscape of TDD tools (for Rails) and getting them to work with more complex components like authentication, authorization, emails, background processes, and OAuth connections and API calls.

It's frustrating at times getting all the proper tools lined up and working.

I like actual TDD, but getting it set up makes me want to skip/minimize it.


Regression tests for a parser or compiler are by FAR the best application of test suites; they don't make a good defense of testing in general. I wish the tests I was forced to write at MIT and Google -- I think I even wrote unit tests for Pair.getFirst and Pair.getSecond Java methods at one point -- were so defensible.
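To make the claim concrete, a sketch of why parsers and compilers fit regression testing so well (hypothetical cases, using Python's `ast` module as a stand-in front end): each test is just an input/expected-output pair, trivial to add and mechanical to check.

```python
import ast

# table of source/expected pairs; a real suite would load these from files
CASES = {
    "1 + 2 * 3": 7,
    "(1 + 2) * 3": 9,
    "-4 + 10": 6,
}


def evaluate(expr):
    # stand-in "compiler": parse the expression, then execute it
    tree = ast.parse(expr, mode="eval")
    return eval(compile(tree, "<case>", "eval"))


for src, expected in CASES.items():
    assert evaluate(src) == expected, f"regression in {src!r}"
print(f"all {len(CASES)} cases pass")
```

Every bug report becomes one more line in the table, which is why such suites grow cheaply and catch regressions for years.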


"Testing is waste of time" -- yes, we already know this.

Unit testing through skeptic's eye -- "It's OK Not to Write Unit Tests", as discussed here previously: http://news.ycombinator.com/item?id=1376417


I'm not sure if anyone else encountered it, but I got a 404 when clicking the submission link.

I was able to find the article here: http://progfu.com/testing/testing-is-waste-of-time/



Is anyone else annoyed that there is no information about the author? I would like to have some idea of the author's credentials and work experience.


So you're judging an idea not based on the idea itself, but based on who said it?


The author is to some extent making an argument from experience; it's not entirely unreasonable to ask what that experience is.

For example, if the author is still a student (I have no idea if that is the case) that would go some way to explaining the apparent belief that the job of a software engineer is only to solve already well-defined problems—which is to say, homework problems. We could then discount the advice accordingly.


That is true only to some extent. Information can be valid or invalid independently of its source.

If you have an idea in a dream, does that mean it won't apply in the real world? Even though the dream is nothing like reality, that doesn't inherently mean the idea is wrong; it just means it isn't guaranteed to be 100% right.



