Sunday, November 21, 2010

Think Times for Performance Testing

After only 10 years in the software industry, I am probably naive, but a recent post by Dan Barstow has caused me to question the use of think times.

Our current performance testing approach does not use think times.  I have used think times in the past for various applications, and I have always found them somewhat arbitrary and subjective.  I can definitely see the value if your objective is to understand how many concurrent users the application can handle, but our objectives typically focus on the number of transactions a system can process.  Also, if you are using virtual users, you are not truly representing a browser.  A browser, depending on the type, may open around six parallel connections at a time to make requests.  So how do you adjust the think time between connection A fetching an image and connection B retrieving the JavaScript and executing it?
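To make the browser-parallelism point concrete, here is a minimal Python sketch (not from the original post; the resource names and latencies are made up, and `fetch` is a stand-in for a real HTTP request). It contrasts a browser-like parallel fetch over six connections with the sequential requests a typical virtual-user script makes:

```python
# Sketch: why a scripted think time between sequential requests does not
# map cleanly onto a browser fetching sub-resources in parallel.
import time
from concurrent.futures import ThreadPoolExecutor

def fetch(resource, latency=0.1):
    """Stand-in for an HTTP request: sleep to simulate network + server time."""
    time.sleep(latency)
    return resource

resources = ["page.html", "app.js", "style.css", "logo.png", "font.woff", "api.json"]

# Browser-like: ~6 parallel connections fetching sub-resources at once
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=6) as pool:
    results = list(pool.map(fetch, resources))
parallel = time.perf_counter() - start

# Virtual-user-like: one request after another
start = time.perf_counter()
results = [fetch(r) for r in resources]
sequential = time.perf_counter() - start

print(f"parallel ≈ {parallel:.2f}s, sequential ≈ {sequential:.2f}s")
# The parallel version finishes in roughly one request's latency; a per-request
# think time inserted into the sequential script would widen the gap further.
```

The parallel fetch completes in about one request's worth of latency, so there is no single obvious place in a sequential script to put a think time that reproduces that timing.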

Specifically, we look for the optimal TPS value at which 95% of request latencies are under 1 second and 99% of all requests complete in under 4 seconds.
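As a minimal Python sketch of that pass/fail check (not from the original post; the latency samples are made up, and this uses a simple nearest-rank percentile):

```python
# Sketch: checking latency samples against the stated criteria
# (p95 < 1 second, p99 < 4 seconds).

def percentile(latencies, pct):
    """Nearest-rank percentile of a list of latencies in seconds."""
    ordered = sorted(latencies)
    rank = max(1, int(round(pct / 100.0 * len(ordered))))
    return ordered[rank - 1]

def within_criteria(latencies):
    """True if p95 < 1s and p99 < 4s, per the thresholds above."""
    return percentile(latencies, 95) < 1.0 and percentile(latencies, 99) < 4.0

# Illustrative run: mostly fast responses with one slow outlier
samples = [0.2] * 94 + [0.9] * 5 + [3.5]
print(within_criteria(samples))  # True: p95 = 0.9s and p99 = 0.9s here
```

Note that with this criterion the single 3.5-second outlier falls above p99 and does not fail the run; only the percentile thresholds matter, not the worst case.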

In my previous career I was a chemist.  At some regular frequency we had to calibrate our analytical instrumentation.  We would generate a 5-point curve using known standards.  Typically we sought a linear relationship in this curve, where the low point was near the detection limit and the high point was near the maximum detection point of the analytical instrumentation (e.g., gas chromatographs).

When performance testing, ramping virtual users (VUs) in time intervals is similar to generating this calibration curve.  What I am typically looking for is how high we can push the curve while still staying below the high point of the instrument's capability, where in this case the instrument is a configuration of servers and services.
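The calibration analogy can be sketched as a stepped VU ramp. Here is a minimal Python example (not from the original post; the VU counts, five-point shape, and hold durations are illustrative assumptions):

```python
# Sketch: a 5-point load "calibration curve" as a stepped VU ramp,
# from a low point near the detection limit to a high point near
# the system's maximum. Levels and durations are illustrative.

def ramp_schedule(low_vu, high_vu, points=5, hold_seconds=300):
    """Evenly spaced VU steps from low_vu to high_vu, each held for a
    fixed interval while TPS and latency percentiles are recorded."""
    step = (high_vu - low_vu) / (points - 1)
    return [(round(low_vu + i * step), hold_seconds) for i in range(points)]

for vus, hold in ramp_schedule(10, 250):
    print(f"run {vus} VUs for {hold}s, record TPS and p95/p99 latency")
```

Each step is one point on the curve; plotting TPS (or latency) against VUs at each step shows where the relationship stops being linear, much like the high end of a calibration curve.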

I guess my philosophy is that by eliminating think time I can get to that high point a bit faster.  Since we are testing a system, my criteria apply to the total request and response.  So whether the request is for an image or a database object, as long as that request is returned within the criteria stated above, we are within calibration.

My opinion is that if you are using virtual users, then transactions per second is the important metric.  If you are using real browser users, then think times might be important, but still subjective.  Regardless, the objective is to do the best job possible to validate that your site gives customers a good experience.  I guess this can be achieved with or without think times.

For the record, I have done some performance testing of a JSON API where you could not even conduct the test without some level of think time.  So I will refine my conclusion: it depends on your objective and the application under test.

Please help educate me, because I am not in the think time camp unless it is necessary.


Andrew Lee said...


Although think times are subjective, they are important for performance testing: they are a little bit of realism you can add to the test for very little cost and effort.

Emulating a user interaction without think time represents the user rushing through the interaction, potentially leaving their data in the system caches more often than if you emulated think times.

Carl said...

Thanks Andrew! I agree with your statement if latency of the user experience is what we are interested in.

By removing the think times we are looking at scalability including the efficiencies of the caching mechanisms.