Sunday, September 15, 2013

Quality Artifacts Everywhere

Recently I came across a situation where I observed defects, tasks, and even stories documented in multiple places (a Google Doc, the issue tracking system, complex stories, a wiki...).  How can a team evaluate quality when there are so many lists?  I looked a little deeper and even found single defect entries in the issue tracking system that were actually lists of defects.

I am right there with the next person when it comes to not entering an item into an issue tracking system if I do not have to.  Once an issue artifact is created, it must be managed through to resolution.  But my guess is you have no clue about quality if you have lists buried within lists within other lists.

If you have 100 defects and 100 tasks left to complete in an iteration, then you can evaluate when you are near done.  If you have 5 lists buried within the 100 defects, 5 lists buried within the 100 tasks, and a Google spreadsheet with 75 more ideas, how do you ever know you are nearing done?

As much of a pain in the tail as it is, I recommend two approaches.

First, if you find an issue and do not want to put it into the tracking system, then fix the issue immediately and verify that it is fixed to your satisfaction.

Second, enter the issue into the issue tracking system.

My final recommendation is to settle on a specific process, follow it, and iterate on it, but do not create numerous processes within processes.

Keeping it simple helps to keep the team on the same page.

Happy Testing!