Friday, May 20, 2011

Defect Tracking

I still plan to post thoughts on the STP Conference in Nashville, but some recent posts have spawned some thoughts.

Lisa Crispin gave a presentation at Star East that sparked Gojko to write a post entitled "Bug Statistics are a Waste of Time".

I agree with the notion that we should clearly understand the business objectives and find ways to measure the value the features bring to a customer community.  I do not agree that looking at bug statistics is a waste of time.  History is one of the greatest oracles available to a tester.  How were we doing in the past compared to today?  Are there any lessons to be learned from a software system's past defects?  I certainly think so.

Let's assume that company X provides some value to someone, and that company X knows how to measure its business objectives and the value of those objectives using things like Net Promoter Score, Google Analytics, Agile Velocity, and Get Satisfaction.

What can inspecting defect metrics add to the cause of determining value?

I view metrics as flashlights into a dark cave.  How do you know what is there unless you look?

A simple inspection of the total number of defects in the backlog implies some level of technical debt.  I agree with Gojko that if defects simply sit in a backlog then we are wasting some time.  Teams should and must proactively triage, fix, or even throw away defects.  But failing to document them at all, or documenting them only in a non-searchable manner, would be detrimental to the team.

Teams should occasionally have retrospectives on their processes.  Data about defect groupings is a fantastic lever for continuous improvement.  Where have the majority of our bugs historically clustered?  Where you find defect clusters, you have the opportunity to change your process to reduce those clusters.  Agile teams especially should look back at some regular frequency.  How did we do last quarter?  How does the trend look?  Even a quick script against a tracker export, like the sketch below, can answer those questions.
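Here is a minimal sketch of what I mean, assuming a hypothetical defects.csv export with "component" and "created" columns (the column names are mine for illustration, not any particular tool's schema):

# A minimal sketch: count defects per component and per quarter from a
# hypothetical tracker export (defects.csv with "component" and "created").
import csv
from collections import Counter
from datetime import datetime

def defect_clusters(path):
    """Tally defects by component (clustering) and by quarter (trend)."""
    by_component = Counter()
    by_quarter = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            created = datetime.strptime(row["created"], "%Y-%m-%d")
            quarter = f"{created.year}-Q{(created.month - 1) // 3 + 1}"
            by_component[row["component"]] += 1
            by_quarter[quarter] += 1
    return by_component, by_quarter

if __name__ == "__main__":
    components, quarters = defect_clusters("defects.csv")
    print("Hot spots:", components.most_common(5))
    print("Trend by quarter:", sorted(quarters.items()))

Two small counters like these are enough to see where bugs cluster and whether the quarter-over-quarter trend is moving in the right direction.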

As a tester I have at times had an extremely difficult time advocating for a process change.  In several situations I have found the ammunition to influence change by showing historical trends.  Could we do this by carrying around a notebook?  Sure we could, but it would be difficult, especially as time passes and memories fade.

Let me toss this scenario into the mix.  You are a new tester at a large company, or even a consultant.  Your mission is to understand the quality of the software, and you have to do it fast.  It would be nice to shine a flashlight into the new cave and know the areas of risk.  Yes, you could put your hands on the application and start testing, but a sneak peek at the historical defect data, something like the quick hotspot ranking below, could help you zero in on the best place to start.
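As a rough illustration, and again assuming the same hypothetical defects.csv export, you could weight recent defects more heavily than old ones; the half-life weighting here is just one choice I am making up for the sketch, not a standard:

# A rough "where do I start?" view: score each component by its defect
# history, with recent defects counting more than old ones.
import csv
from collections import defaultdict
from datetime import datetime

def risk_hotspots(path, today=None, half_life_days=180):
    """Rank components by recency-weighted defect count."""
    today = today or datetime.today()
    scores = defaultdict(float)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            age = (today - datetime.strptime(row["created"], "%Y-%m-%d")).days
            scores[row["component"]] += 0.5 ** (age / half_life_days)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(risk_hotspots("defects.csv")[:5])  # top five places to start testing

The top few components on that list are where I would spend my first day of exploratory testing.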

Yes, some defect tracking tools really, really suck, but our ability as testers to search, learn, and educate provides great value, even to a Company X that already knows how to monitor and measure value.

I shared Gojko's link with a large community of developers and testers.  I shared the link not because I agreed with it, but because it made me think.  I received back a quote that really struck a chord with me.

"Sure hope this isn’t the future of QA."

Losing the oracle of history would be a huge mistake.  Using metrics prudently, adapting the metrics to changing business value, and having conversations around the findings are key elements of Continuous Improvement.

If we influence a change for the better using metrics, then we certainly are not wasting our time! 

Read Gojko's post and the associated comments; it should definitely spark some thought.

Happy Testing! 
