Wednesday, December 28, 2011

Making Testing Irrelevant

Yesterday I read a fantastic post by Marlena Compton called Seven Ways to Make Testing Irrelevant on Your Team.  On her blog Marlena references a companion article by Scott Barber called Scott Barber's Top 10 Things About Testing That Should Die.  Both are excellent reads, and I must say "Nice Bike" to both authors.

I did want to make a few comments on Marlena's article.  First, the term irrelevant struck a harsh tone with me.  Let's start with a definition via Webster online.

irrelevant - not important or relating to what is being discussed right now.

I was like "Wow," can the actions of testers really make Testing Irrelevant?  After reading Marlena's article I would say "Yes we can"!  It is very sad but true.  So true, in fact, that I must admit "I AM GUILTY".

One of the things I love is to inspect process and suggest changes that could result in improvement.  I would never intentionally force process on anyone.  Our government or management does enough of that already.  I abhor process for the sake of process.  But I can reflect and see that in my career there have been times when my suggestions could have been perceived as being forced upon a team.  Testers, please be honest with yourselves.  Have you ever stated that it is your way or the highway?  So I concur with Marlena: forced process can give testing a black eye.

Lock horns with teams on whether or not to release.  I agree again with Marlena.  Again, I am guilty of doing this in my career, but there have been times when I have really had to fight hard to prevent my company from making a huge mistake.  Delaying a release for a short period of time is a viable answer.  Unfortunately I have seen companies so set on hitting a date that they cannot see the forest for the trees.  So, depending on the context, my recommendation in this situation is to get all stakeholders to sign off on the decision to release despite the testing evidence or advice.  It sounds bureaucratic, but the reason I suggest it is that these stakeholders should conduct a retrospective post release, and during that retrospective collaborate on ways the release could have been better.

The third point of the article is complaining about a decision after the fact.  Honestly, I did this the other day.  Crap, I am so guilty!  Did I complain to anyone who mattered?  No, but I complained nonetheless.  So my solution here is also to coordinate a retrospective with the key stakeholders: why did we make that decision, and what could we have done better?  I will do my best to stop complaining.  Thanks, Marlena, for the reminder.

Geez!  At this point I am feeling sad, because Marlena called me out with her top three points.  Need I continue?  The guilt is killing ME!

"Insist that everything be perfect when you look at it".  Finally something on the list that I do not do today.  Oh wait!  Yes I have done this in the past.  Dang guilty again!  The reason I do not do this much today is that I know how hard rapid software development is.  If I can jump in extremely early on a new feature and close collaborate with developers, then I do not have to document defects.  I ask lots of questions and we fix things before a bug report even has to be generated.  The thing with this is that you have to work close with developers to prove to them that your skills are not irrelevant.  Sometimes developers do like another pair of eyes and thoughts.  Testers should collaborate early, often, and politely!

Dang it!  Marlena caught me again.  Point number five is spreading the attitude that developers are untrustworthy to test their own code.  It is not that developers are untrustworthy, but in the rapid software world sometimes great developers do not stop to smell the coffee.  Again, I would never intentionally state that developers are untrustworthy, but I would say that great testers can add a tremendous amount of value.  My comment to Marlena is that there are many developers who think testers are untrustworthy too.  Honestly!  I have stated in my past that developers do not know how to test.  Given the time constraints of rapid software development, I now MUST trust developers to test, and I trust that they are doing the best that they can!

Man!  Can I survive two more bullets of this article?

Assume developers don't care about testing and testers.  This is a neutral one for me, because in my heart of hearts I believe everyone is striving to put out quality code.  I have never been told on a development project that my skills were not needed, though I have heard developers state "we do not need testers".  Since the Agile Manifesto 10 years ago I have heard this off and on.  Heck!  Kent Beck practically stated this at STPCon Fall 2010 in his keynote.  So my position on this one is NOT guilty!  My main assumption is that the team can always get better at testing.  Testing is relevant, and it does not matter who does the testing.

The final bullet point from Marlena is telling people that developers are biased toward their own code, so they cannot test it.  I guess I am slightly guilty again.  At a previous company I believed this, but today not so much.  A great man in Austin, Texas, Neal Kocurek, once taught me a course on leadership at Radian Corporation.  In that course he introduced me to a new word, scotoma: a blind spot.  As leaders (no matter how great), we have blind spots.  I would contend that developers can sometimes be blinded.  This is most likely never intentional, but it is a reality.

So, Marlena, I must thank you for making me conduct this self-inspection.  The judge and jury rule that I have been GUILTY in the past of doing the things that you say make testing irrelevant.  I now want to move forward and cultivate the collaborative relationship you describe.  I am sentenced to a life of continuous improvement.  Kaizen!

I do believe today that collaboration is king, and I will do my absolute best not to make testing irrelevant!  By writing this blog I have done some testing: testing of oneself.  Thanks, Marlena and Scott, for the inspiration and the blunt bullet points.

Testing is definitely NOT irrelevant, nor should testers act in ways that make the act of testing irrelevant.

Testers - have you been guilty?

Monday, December 26, 2011

Acceptance Tests, who is responsible?

I have been thinking about what I plan to do differently as a tester in 2012.  I have come to the conclusion that I am going to promote acceptance tests.  Over the past few years I have seen numerous stories flowing through the software development life cycle without an inkling of testing thought.  I contend that for a story to be successful we must have some strategy for how to test it.

The objective is to have acceptance tests defined before a story is put into the backlog.  In the Kanban world perhaps the backlog is not the correct place, but by the time a story reaches the defined or ready state it should have acceptance tests.

So who is responsible for this testing thought?

The quick and obvious answer is the tester.  I believe the tester does carry much of the burden.  I think testers should be the leaders in defining acceptance tests.  A tester should not be a Buford Pusser, but we should halt stories that do not have acceptance tests and help lead the team to define the tests.  There is nothing like trying to find a product person or a developer just before a release to understand what the heck a story meant.

Does the developer have a role in defining acceptance tests? 

My opinion is absolutely.  How can a developer write the best code possible if they do not know what the feature should do?  Some serious thought should go into how a developer can prove to the stakeholders that she did a fantastic job writing the code.

Does a product manager have a role in defining acceptance tests?

No, because their only job is to put the stories into the queue.  Just kidding!  Absolutely, the product manager has a stake in the game.  The product manager needs to know exactly what it means for a feature to be "done".  In fact, the product owner has the most insight into whether a feature has been developed to expectations.  Notice I inserted expectations instead of the evil word, requirements.  I will save that for another post, but if a story is well drafted and coupled with great acceptance tests, my opinion is that detailed requirements may not be necessary!

In the software world you often hear the term the business team.  These are the dreamers who come up with the ideas.  Should they care about acceptance tests?  Absolutely!  How else do they know their dream has been realized?  I would contend that as a concept is being visualized, the business should also be dreaming about testing.

Does management care about acceptance tests?  This is a tough one, but I conclude the answer is a resounding, Yes!  Knowing that the team has defined a minimal set of success criteria should indicate some level of efficiency.  Isn't management all about quick road maps to success?

At this point in this little ramble I have two additional thoughts.  One is that I would conclude that EVERYONE is responsible for acceptance tests.  The other is that I never defined what an acceptance test is.

So let me conclude with my simple definition of acceptance tests.  Acceptance tests are the proof that teams are getting shit done (GSD) and that expectations are being met.
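
To make that definition a little more concrete, here is a minimal sketch of what an executable acceptance test might look like, written in Python with pytest-style assertions.  The ShoppingCart feature, its bulk-discount rule, and the test functions are all hypothetical, stand-ins for whatever expectations a real story would carry.

# Hypothetical acceptance test sketch: the story says orders of five or
# more items get a 10% discount. The ShoppingCart class below is a
# stand-in for the real feature under test.

class ShoppingCart:
    def __init__(self):
        self.items = []

    def add_item(self, name, price):
        self.items.append((name, price))

    def total(self):
        subtotal = sum(price for _, price in self.items)
        # Story expectation: five or more items earns a 10% discount.
        if len(self.items) >= 5:
            return round(subtotal * 0.9, 2)
        return subtotal


def test_bulk_discount_applies_at_five_items():
    # Given a cart holding five items at $10 each
    cart = ShoppingCart()
    for i in range(5):
        cart.add_item("widget-%d" % i, 10.00)
    # When the total is calculated, then the 10% discount is applied
    assert cart.total() == 45.00


def test_no_discount_below_five_items():
    # Four items do not qualify, so the full price is charged
    cart = ShoppingCart()
    for i in range(4):
        cart.add_item("widget-%d" % i, 10.00)
    assert cart.total() == 40.00

Run these with pytest and the team gets a simple yes-or-no answer to "are expectations being met?" for that story, which is exactly the kind of GSD evidence I am describing.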

So testers GSD by providing evidence of success in the form of acceptance tests!

Sunday, December 18, 2011

Walking Tall

Should a QA manager ever carry a big walking stick?  Is there a time to be Buford Pusser and bring the hammer down on development teams?



"He was going to give them law and order or die trying." This tag line from the 1973 version of Walking Tall is very telling.  Being asked to single-handedly clean up the development town can come with great sacrifice to a QA manager.  The quality cop school of testing is an extremely dangerous place to live.

In 2004 Dwayne "The Rock" Johnson played Chris Vaughn, and this tag line was born: "One man will stand up for what's right."  Often testers and test leaders find themselves in a position to advocate for what is right.  Testers have to gather evidence and build a strong case as to why things are not right.



Testers put up with a lot of stuff, but in the end a QA manager should walk tall.  Walking tall does not mean you have to carry the big solid 2 x 4.  In the testing world, walking tall is leading collaboration, pointing out areas for improvement, mentoring others, and providing solutions.  Testing might be a lot easier if we all had abdominal muscles like "The Rock"!

Metrics can become the 2 x 4 for a QA manager.  Do you simply communicate the numbers, or do you communicate the numbers aligned with the development team?  Just communicating the numbers does not seem to have influence or power.  Calling out the responsible teams is very close to carrying the 2 x 4.  Using a big stick against an overbearing sheriff or amphetamine-running mobsters is one thing.  Using a 2 x 4 against overworked, well-intentioned developers is something completely different.

Placing metrics on a billboard in the center of downtown development will open some eyes.  The data will cause angst.  The intention is to use these data as a spotlight and a vehicle toward continuous improvement.

Kaizen and Peace!





Sunday, December 11, 2011

Testers Act Like Cheerleaders!

It has been a long while since I have posted.  I have been torn as to the next topic, and my second excuse is that I have been fighting a cough for three weeks now.

Quickly, I need to comment on "Test is Dead".  Pradeep stated that this topic was a must if one wanted to be a respected tester.  Here is my statement: "Testing is NOT dead.  Testing simply MUST be DIFFERENT!"  Ponder that for a little while.  On to the real post ...

I have slowly but surely been reading "The Inmates Are Running the Asylum" by Alan Cooper.  In his book he states, "Programmers act like jocks."  I will not accept this as universal truth, because many developers I have the pleasure of working with do not haze testers, especially good testers.  They do not snap testers with a towel just for the fun of it.  Well, shooting testers with Nerf bullets is a close second to a towel pop.  I will state that many programmers are team players, but honestly some do act like jocks.

So, using the team sport theme, I ponder: what do testers act like?

At first I considered team manager or coach, but that seemed too gatekeeper-like to me.  We do try to mentor others on the nuances of testing, but I do not think coach is the primary role of a tester on a development team.

Hmmm!  Are we the Adam Sandler of software development, "Water Boys" (for the ladies, Water persons)?  I really do not think good testers cater to programmers, but sometimes it feels like that.

Do testers act like jocks too?  I had a conversation the other day where someone told me that a well-known tester is simply a bully, or jock-like.  It is their way or the highway.  I disagreed, because I think the tester in question is always looking for a challenge or a duel.  I think the testing jock has the acumen to compete with anyone, so the swagger has been earned.

What I have concluded is that testers most likely act like cheerleaders.  We are there during every software release supporting the success of the programmers.  As I occasionally say, we are there to make developers "look good".  We have nice legs and look good in skirts (inside joke from QA standup last Friday).  OK!  Maybe we do not all have nice legs, but we do aspire to be nice.  We cheer the jocks on by crafting delightful documents about the imperfections we uncover in software.  We cheer the team on as deadlines approach.  Most testers I know are glass-half-full people, so we smile regardless of the number of priority-one defects in the queue.

At this point in the blog I am wondering what are the characteristics of a cheerleader.  Do we really act like cheerleaders?

Here are three potential characteristics of a tester as related to the characteristics of a cheerleader.

Sportsmanship - Being able to deliver software with grace, being able to congratulate another team's success, and not spreading rumors or talking down about other teams' failures.

A positive attitude - Being ready and focused on testing, always being willing to try something new, and being friendly and cooperative.

Spirit - Having respect for your development team and representing your developers in the most positive manner you can.

A few months ago I found myself challenged by the Sportsmanship aspect of being a good tester.  Honestly, I failed miserably.  I was asked by C-level management, in front of a fairly large audience, which team had the poorest quality.  I answered from my gut and failed to put things in context.  The team I threw under the bus has the most complexity and integrations.  I could have resisted caving to the pressure and crafted a response with appropriate "safe" words.  Oh well!  After all, it is all about continuous improvement!

So are testers jocks, water boys, team managers, coaches, or are we really Cheerleaders?

Happy Testing!