Sadly, they rarely fit my needs, and they often reflect a poor understanding of what performance testing means. Here is a summary of my experience with those tests: how to detect them, and how to prevent them from leading developers to wrong conclusions.
When tests smell
One particular case you should be aware of is the use of console.* methods. Those methods should never appear in a test: they are very slow, and they tend to flatten test results, especially in tight performance tests. Unfortunately, it's very common to see tests using console.log.
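To see why, imagine a tiny timing harness (purely illustrative, not JSPerf's internals). In the noisy variant, console.log's I/O cost dwarfs the array lookup we meant to measure, so every candidate converges toward console.log's own timing:

```javascript
// Hypothetical micro-benchmark harness, only to illustrate the problem.
function bench(fn, iterations) {
  const start = Date.now();
  for (let i = 0; i < iterations; i++) fn();
  return Date.now() - start; // elapsed milliseconds
}

// BAD: mostly measures console.log, not indexOf.
const noisy = () => console.log([1, 2, 3].indexOf(2));

// GOOD: only the operation under test runs.
const clean = () => [1, 2, 3].indexOf(2);
```

Whatever you put next to a console.log, the log dominates, and the comparison becomes meaningless.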
By the way, if you test something, test only that. Prior revisions of the console.log test mentioned above were completely useless because they measured more than console.log itself. The purpose of a test is to reveal the overhead of the tested feature, so keep the footprint of your wrapping code as small as possible.
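One common way to shrink that footprint is to move setup work out of the timed region. Here is a sketch (an illustrative harness, not JSPerf's API; JSPerf offers a separate setup section for the same purpose):

```javascript
// Setup runs once, outside the timed region, so only `fn` contributes
// to the measurement.
function benchIsolated(setup, fn, iterations) {
  const ctx = setup(); // not timed
  const start = Date.now();
  for (let i = 0; i < iterations; i++) fn(ctx);
  return Date.now() - start; // elapsed milliseconds
}

// Measures only Array.prototype.push, not array allocation:
const elapsed = benchIsolated(() => [], (arr) => arr.push(1), 100);
```

If the array were created inside the timed function instead, the result would mix allocation cost with the push we actually wanted to compare.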
Another problem with previous revisions of this test is the use of a for loop to make the test more "massive": JSPerf already does this for you, so stop wasting your time.
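The harness repeats each snippet many times on its own, so an inner loop of K iterations adds no accuracy; it just divides the reported ops/sec by K. A quick sketch with made-up numbers:

```javascript
// Reported throughput is simply operations counted per elapsed second.
function opsPerSec(operationsCounted, elapsedMs) {
  return operationsCounted / (elapsedMs / 1000);
}

// Same real work, same elapsed time, but the looped test counts one "op"
// per 100 inner iterations, so it looks 100x slower:
const plain  = opsPerSec(1000000, 500);       // 2,000,000 ops/sec
const looped = opsPerSec(1000000 / 100, 500); //    20,000 ops/sec
```

The numbers are hypothetical, but the ratio is not: the inner loop only distorts the scale of the result.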
Another common issue is code that is valid but not well formed. It often leads to strange test results, so take the time to read the test code if you plan to rely on its results. Common mistakes of that kind are:
- forgetting to execute a function,
- an abnormal return or break instruction,
- bad logic,
- undefined identifiers.
Most of the time, those mistakes lead to better apparent performance, since parts of the code simply aren't executed.
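The first mistake is worth a concrete sketch (a hypothetical snippet): referencing a function without parentheses is a no-op, so the broken test measures nothing and looks impossibly fast.

```javascript
const parse = () => JSON.parse('{"a":1}').a;

function brokenTest() {
  parse;          // BUG: evaluates the reference, executes nothing
}

function fixedTest() {
  return parse(); // actually runs the code under test
}
```

A test case that "wins" by several orders of magnitude often contains exactly this kind of typo.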
Avoid creating bad tests
Anyone can create a bad test, but a few good practices reduce the risk:
- if you're a noob: don't write tests,
- read your code several times before submitting,
- if you made shit, mark it as shit (add a comment linking to the fixed revisions),
- if someone marks your test as shit, don't be hurt. Testing is not about you; it's about truth.
That's it, this post is over. If you have other good practices or another way to detect bad tests, let me know!