Wednesday, December 28, 2011

Making Testing Irrelevant

Yesterday I read a fantastic post by Marlena Compton called Seven Ways to Make Testing Irrelevant on Your Team.  On her blog Marlena references a companion article by Scott Barber called Scott Barber's Top 10 Things About Testing That Should Die.  Both are great reads, and I must say "Nice Bike" to both of them.

I did want to make a few comments on Marlena's article.  First, the term irrelevant struck a harsh tone with me.  Let's start with a definition via Webster online.

irrelevant - not important or relating to what is being discussed right now.

I was like, "Wow, can the actions of testers really make testing irrelevant?"  After reading Marlena's article I would say "Yes we can"!  It is very sad but true.  So true, in fact, that I must admit "I AM GUILTY".

One of the things I love is to inspect process and suggest changes that could result in improvement.  I would never intentionally force process on anyone.  Our government or management does enough of that already.  I abhor process for the sake of process.  But I can reflect and see that in my career there have been times when my suggestions could have been perceived as being forced upon a team.  Testers, please be honest with yourselves.  Have you ever stated it is my way or the highway?  So I concur with Marlena: forced process can give testing a black eye.

Lock horns with teams on whether or not to release.  I agree again with Marlena.  Again I am guilty of doing this in my career, but there have been times when I have really had to fight hard to prevent my company from making a huge mistake.  Delaying a release for a short period of time is a viable answer.  Unfortunately I have seen companies so set on hitting a date that they cannot see the forest for the trees.  So depending on the context, my recommendation in this situation is to get all stakeholders to sign off on the decision to release despite the testing evidence or advice.  Sounds bureaucratic, but the reason I suggest it is that these stakeholders should conduct a retrospective post release and, during that retrospective, collaborate on ways that release could have been better.

The third point of the article is about complaining about a decision after the fact.  Honestly I did this the other day.  Crap, I am so guilty!  Did I complain to anyone that mattered?  No, but I complained nonetheless.  So my solution here is to also coordinate a retrospective with the key stakeholders.  Why did we make that decision and what could we have done better?  I will do my best to stop complaining.  Thanks Marlena for the reminder.

Geez!  At this point I am feeling sad, because Marlena called me out with her top three points.  Need I continue?  The guilt is killing ME!

"Insist that everything be perfect when you look at it".  Finally something on the list that I do not do today.  Oh wait!  Yes I have done this in the past.  Dang guilty again!  The reason I do not do this much today is that I know how hard rapid software development is.  If I can jump in extremely early on a new feature and close collaborate with developers, then I do not have to document defects.  I ask lots of questions and we fix things before a bug report even has to be generated.  The thing with this is that you have to work close with developers to prove to them that your skills are not irrelevant.  Sometimes developers do like another pair of eyes and thoughts.  Testers should collaborate early, often, and politely!

Dang it!  Marlena caught me again.  Point number five is about spreading the attitude that developers are untrustworthy to test their own code.  It is not that developers are untrustworthy, but in the rapid software world sometimes great developers do not stop to smell the coffee.  Again I would never intentionally state that developers are untrustworthy, but I would say that great testers can add a tremendous amount of value.  My comment to Marlena is that there are many developers who think testers are untrustworthy too.  Honestly!  I have stated in my past that developers do not know how to test.  Given time constraints in rapid software development I now MUST trust developers to test, and I trust that they are doing the best that they can!

Man!  Can I survive two more bullets of this article?

Assume developers don't care about testing and testers.  This is a neutral one for me because in my heart of hearts I believe everyone is striving to put out quality code.  I have never been told on a development project that my skills were not needed.  I have heard developers state "we do not need testers".  Since the Agile Manifesto 10 years ago I have heard this off and on.  Heck! Kent Beck practically stated this in his keynote at STPCon Fall 2010.  So my position on this one is NOT guilty!  My main assumption is that the team can always get better at testing.  Testing is relevant and it does not matter who does the testing.

The final bullet point from Marlena is telling people that developers are biased toward their code so they cannot test it.  I guess I am slightly guilty again.  At a previous company I believed this, but today not so much.  A great man in Austin, Texas, Neal Kocurek, once taught me a leadership course at Radian Corporation.  In that course he taught me a new word, scotoma, which means a blind spot.  As leaders (no matter how great), we have blind spots.  I would contend that developers can sometimes be blinded.  It is most likely never intentional, but it is a reality.

So Marlena, I must thank you for making me conduct this self-inspection.  The judge and jury rule that I have been GUILTY in the past of doing the things that you say make testing irrelevant.  I now want to move forward and cultivate that collaborative relationship you describe.  I am sentenced to a life of continuous improvement.  Kaizen!

I do believe today that collaboration is king.  And I will do my absolute best to not make testing irrelevant!  By writing this post I have done some testing, testing of oneself.  Thanks Marlena and Scott for the inspiration and blunt bullet points.

Testing is definitely NOT irrelevant nor should testers act in ways to make the act of testing irrelevant.

Testers - have you been guilty?

Monday, December 26, 2011

Acceptance Tests, who is responsible?

I have been thinking about what I plan to do differently as a tester in 2012.  I have come to the conclusion that I am going to promote acceptance tests.  Over the past few years I have seen numerous stories flowing through the software development life cycle without an inkling of testing thought.  I contend that for a story to be successful we must have some strategy as to how to test it.

The objective is to have acceptance tests defined before a story is put into the backlog.  In the Kanban world perhaps the backlog is not the correct place, but by the time a story reaches the defined or ready state it should have acceptance tests.
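
To make that concrete, here is a minimal sketch of what I mean.  Since I am a Cucumber fan I will express it in Gherkin, but a plain bulleted list of expectations would serve just as well.  The story, dates, and behavior below are invented purely for illustration.

    Feature: Traveler requests a booking
      # Hypothetical story: "As a traveler I want to request a booking
      # so that the owner can confirm my stay."

      Scenario: Request succeeds for available dates
        Given a listing that is available from "2012-03-01" to "2012-03-08"
        When a traveler requests those dates
        Then the owner receives a booking request
        And the traveler sees a confirmation message

      Scenario: Request is rejected for already-booked dates
        Given a listing that is already booked on "2012-03-04"
        When a traveler requests "2012-03-01" to "2012-03-08"
        Then the traveler is told the dates are unavailable

If a story carries even this much testing thought before it is ready, everyone downstream knows what "working" means.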

So who is responsible for this testing thought?

The quick and obvious answer is the tester.  I believe the tester does carry much of the burden.  I think testers should be the leaders in defining acceptance tests.  A tester should not be a Buford Pusser, but we should halt stories that do not have acceptance tests and help lead the team to define the tests.  There is nothing like trying to find a product person or a developer just before a release to understand what the heck a story meant.

Does the developer have a role in defining acceptance tests? 

My opinion is absolutely.  How can a developer write the best code possible if they do not know what the feature should do?  Some serious thought should go into how a developer should prove to the stakeholders that she did a fantastic job writing the code.

Does a product manager have a role in defining acceptance tests?

No, because their only job is to put the stories into the queue.  Just kidding!  Absolutely the product manager has a stake in the game.  The product manager needs to know exactly what it means for a feature to be "done".  In fact the product owner has the most insight on how to determine whether a feature has been developed to expectations.  Notice I inserted expectations instead of the evil word, requirements.  I will save that for another post, but if a story is well drafted and coupled with great acceptance tests, my opinion is that detailed requirements may not be necessary!

In the software world you often hear the term the business team.  These are the dreamers who come up with the ideas.  Should they care about acceptance tests?  Absolutely!  How do they know their dream has been realized?  I would contend that as a concept is being visualized the business should also be dreaming about testing.

Does management care about acceptance tests?  This is a tough one, but I conclude the answer is a resounding, Yes!  Knowing that the team has defined a minimal set of success criteria should indicate some level of efficiency.  Isn't management all about quick road maps to success?

At this point in the little ramble I have two additional thoughts.  One is that I would conclude that EVERYONE is responsible for acceptance tests.  Two is that I never defined what an acceptance test is.

So let me conclude with my simple definition of acceptance tests.  Acceptance tests are the proof that teams are getting shit done (GSD) and that expectations are being met.

So testers GSD by providing evidence of success in the form of acceptance tests!

Sunday, December 18, 2011

Walking Tall

Should a QA manager ever carry a big walking stick?  Is there a time to be Buford Pusser and bring the hammer down on development teams?



"He was going to give them law and order or die trying." This tag line from the 1973 version of Walking Tall is very telling.  Being asked to single-handedly clean up the development town can come with great sacrifice to a QA manager.  The quality cop school of testing is an extremely dangerous place to live.

In 2004 Dwayne "The Rock" Johnson played Chris Vaughn and this tag line was born, "One man will stand up for what's right."  Often testers and test leaders do find themselves in a position to advocate for what is right.  Testers have to gather evidence and build a strong case as to why things are not right.



Testers put up with a lot of stuff, but in the end a QA Manager should walk tall.  Walking tall does not mean you have to carry the big solid 2 x 4.  In the testing world walking tall is leading collaboration, pointing out areas for improvement, mentoring others, and providing solutions.  Testing might be a lot easier if we all had abdominal muscles like "The Rock"!

Metrics can become the 2 x 4 for a QA Manager.  Do you simply communicate the numbers, or do you communicate the numbers aligned with the development team?  Just communicating the numbers does not seem to have influence or power.  Calling out the responsible teams is very close to carrying the 2 x 4.  Using a big stick against an overbearing sheriff or amphetamine-running mobsters is one thing.  Using a 2 x 4 against overworked, well-intentioned developers is something completely different.

Placing metrics on a billboard in the center of downtown development will open some eyes.  The data will cause angst.  The intention is to use these data as a spotlight and a vehicle toward continuous improvement.

Kaizen and Peace!





Sunday, December 11, 2011

Testers Act Like Cheerleaders!

It has been a long while since I have posted.  I have been torn as to the next topic, and my second excuse is that I have been fighting a cough for three weeks now.

Quickly I need to make a post on "Test is Dead".  Pradeep stated that this topic was a must for a respected tester to address.  Here is my statement: "Testing is NOT dead.  Testing simply MUST be DIFFERENT!"  Ponder that for a little while.  On to the real post ...

I have slowly but surely been reading "The Inmates Are Running the Asylum" by Alan Cooper.  In his book he states "Programmers act like jocks."  I will not accept this as universal truth, because many developers I have the pleasure to work with do not haze testers, especially good testers.  They do not snap testers with a towel just for the fun of it.  Well shooting testers with Nerf bullets is a close second to a towel pop.   I will state many programmers are team players, but honestly some do act like jocks.

So using the team sport theme, I ponder what do testers act like?

At first I considered team manager or coach, but that seemed too gatekeeper-like to me.  We do try to mentor others on the nuances of testing, but I do not think coach is the primary role of a tester on a development team.

Hmmm!  Are we the Adam Sandler of software development, "Water Boys" (for the ladies, Water persons)?  I really do not think good testers cater to programmers, but sometimes it feels like that.

Do testers act like jocks too?  I had a conversation the other day where someone told me that a well-known tester is simply a bully or jock-like.  It is their way or the highway.  I disagreed, because I think the tester in question is always looking for a challenge or duel.  I think the testing jock has the acumen to compete with anyone, so the swagger has been earned.

What I have concluded is that testers most likely act like cheerleaders.  We are there during every software release supporting the success of the programmers.  As I occasionally say we are there to make developers "look good".  We have nice legs and look good in skirts (inside joke from QA standup last Friday).  OK! Maybe we do not all have nice legs, but we do aspire to be nice.  We cheer the jocks on by crafting delightful documents about the imperfections we uncover in software.  We cheer the team on as deadlines approach.  Most testers I know are glass half full people, so we smile regardless of the number of priority one defects in the queue.

At this point in the blog I am wondering what are the characteristics of a cheerleader.  Do we really act like cheerleaders?

Here are three potential characteristics of a tester as related to the characteristics of a cheerleader.

Sportsmanship - Being able to deliver software with grace, being able to congratulate another team's success, not spreading rumors or talking down about other teams' failures.

A positive attitude - Being ready and focused on testing, always being willing to try something new, being friendly and cooperative.

Spirit - Having respect for your development team, representing your developers in the most positive manner you can.

A few months ago I found myself challenged by the Sportsmanship aspect of being a good tester.  Honestly I failed miserably.  I was asked by C-level management, in front of a fairly large audience, which team had the poorest quality.  I answered from my gut and failed to put things in context.  The team I threw under the bus has the most complexity and integrations.  I should not have caved to the pressure, and I could have crafted a response with appropriate "safe" words.  Oh well!  After all it is all about continuous improvement!

So are testers jocks, water boys, team managers, coaches, or are we really Cheerleaders?

Happy Testing!



Saturday, November 05, 2011

Are you a Swiss Army Knife?

I have been thinking a lot lately about testing as a career and what the future has in store for Professional Testers.  I just returned from Dallas and the Software Testing Professionals Conference, and I am reading two books, The Agile Samurai and The Inmates Are Running the Asylum.  I am also looking very hard at software to manage the Kanban process and at how to make Session Based Test Management work efficiently within my context (which can vary greatly depending on the application under test).  With these current influences, I have come to the conclusion that to be a tester in the future you must model a Swiss Army Knife.  Wow!  I went to the Swiss Army Knife web site and there is a "basic" model with 17 blade utilities.  I think for now I am going to limit my model to 4 blades, but I am sure as testers we could use as many as we can get.

You must have the basic knife blade.  You need to have a grounded knowledge of the technologies that you will deal with day in and day out.  You also must understand testing techniques.  One great example is Black Box Software Testing (BBST).  Not only will you need these skills, you need some solid characteristics like motivation and leadership.  The big blade is solid and will be used a bunch.  Keep this blade clean and sharp.

You must have scissors.  You will have to gain the knowledge of processes so that you can cut your way through the bureaucracy.  There is a ton of waste that gets in the way of actually testing.  We must cut our way through the wasteland and eliminate waste.  Too bad a shovel will not fit into a pocket knife.  I would not include a pen in my knife because the last thing we need as testers is to create a plethora of documentation.

You must have one specialty tool.  In my mind that tool is the ability to write automation.  I am not talking about record and playback.  I am talking about actually writing code.  I am a big fan of Ruby and I suggest the book Everyday Scripting with Ruby by Brian Marick to get started.  I have become a fan of Selenium, but for those of you testers just getting started with automation I am going to suggest using Watir.  The reason I am suggesting Watir is that there is a great set of examples to get you started.  Start by looking at the unit tests the developers wrote.  You can certainly use PHP, Java, JavaScript, or some other language, but Ruby worked extremely well for me.  I know some JavaScript, but Java and I never got along.  I have a huge scotoma with respect to Java.  Find that specialty tool and keep it sharp.
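
To give a feel for this specialty blade, here is a tiny sketch of the kind of script I mean, using the watir-webdriver gem.  The URL and element identifiers are invented for illustration; this is a starting point, not production automation.

    require 'watir-webdriver'

    # Drive a browser through a hypothetical login form and report the result.
    browser = Watir::Browser.new :firefox
    browser.goto 'http://example.com/login'             # invented URL
    browser.text_field(:id => 'username').set 'tester'  # invented element ids
    browser.text_field(:id => 'password').set 'secret'
    browser.button(:value => 'Log in').click

    puts browser.title.include?('Welcome') ? 'PASS' : 'FAIL'
    browser.close

Once you can write a dozen lines like this, wrapping them in a proper test framework is the natural next step.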

The final tool on the knife that I recommend is the toothpick.  As a tester you need to be able to jab yourself in order to remind yourself that you must continuously be learning.  You must be willing to poke others with a barrage of questions.  You must be able to poke inside those tiny cracks in software to expose the lingering bugs.  The toothpick can get at things that other tools cannot.

Knowing how to test, how to apply diplomacy, how to write automation, and how to inquire are certainly the makings of a great knife to keep in your pocket.  As professional testers our Swiss Army knives are diverse, which is a great thing.  We should all continue to upgrade so that someday we have the 17-blade model.

What blade will you be adding soon?

Keep on Testing!


Saturday, October 29, 2011

Images from Speed Geeking at STP Conference

Here are pictures of the flip chart presentation I used during Speed Geeking - Breakfast Bytes at the 2011 STP Conference.  I must say that it turned out to be extremely fun.  A huge thanks to those who stopped by to listen.

As Testers in the World of Kanban remember these 8 disciplines.

Happy Testing!

Sunday, October 23, 2011

Why Go to a Testing Conference?

Today I will be packing and driving to Dallas to attend the fall session of the Software Testing Professionals Conference, STPCon.  I am extremely energized, but I got to thinking last night about why testers should go to conferences.

Before becoming a Software Tester I was an Analytical Chemist.  Although at times I could think outside the box, for the most part my life was run by SOPs, standard operating procedures.  When I started software testing I guess I expected the same: one and only one way to test software.  In October 2010, I attended my second testing conference.  Why did I go?

Sure I was blessed by the fact that my company was paying for the conference.  I am definitely indebted to my company.  But the primary reason I went to that conference was to learn!  How do other companies test software?  So in essence the second reason for going to the conference was to meet people.

Honestly some of the sessions I attended did not provide me with anything new.  Perhaps some of them educated me on the things I did not want to continue to do from a testing perspective.  But when you meet people like Jerry Weinberg, Michael Bolton, Kent Beck, Matt Heusser, Lanette Creamer, Adam Goucher, Dan Downing, Goranka Bjedov, and Scott Barber, a crazy thing happens.  You become inspired!

There are more names to add to this list, but conferences exist to educate, which was my original premise for going.  I had no clue that conferences could be so magical at inspiring.  At STPCon March 2011 I met more amazing people: James Bach, Jon Bach, Karen Johnson, and Janet Gregory.  I do not think I can even put into words how much more I was inspired.

So my conclusion as to why testers should go to conferences is twofold.  One is to learn!  Two is to become inspired.

There are many conferences and all may contain an inspiration.

If a conference is not affordable, then meet with your local testing community and hone the craft of testing.

I am anxious to see what inspirations I get from STPCon in Dallas this week.

Keep on Testing!

Sunday, October 16, 2011

Adventures in Medical Device Testing

On Wednesday March 23 I attended the session presented by James Bach called "Adventures in Medical Device Testing".

I do not remember all of the details, but I can recall a few key points.

James talked about how he was hired to test a medical device.  He subsequently was fired for not wanting to take the requirements and write test cases.  Soon he was rehired.  One of the main lessons for me was that James took it upon himself to become an expert with respect to this medical device.  As testers we must do our homework. 

I was also fascinated that he could get away without having to write zillions of test cases.  Having worked in a highly regulated industry in my past, I assumed he would have had to follow stringent and regulated standard operating procedures.  I have always wanted to skip writing test cases, because historically I spent a large chunk of my life wasting time developing test cases.  James showed us how to do excellent testing in new ways.

I walked out of this session completely charged up.  I was energized to learn more about Session Based Test Management, SBTM.  I was also inspired to learn more about the systems that I test.

Over six months have passed and I still find myself talking about this STP Conference session.

As testers we must always look for new ways to do things.  We must always sharpen our skills and evolve our craft.

On October 23 I drive to Dallas to attend my Third Software Testing conference.  I am excited because I always seem to take away something new.  I am looking forward to being inspired by the likes of Jason Huggins, Matt Heusser, Karen Johnson, Scott Barber, Pete Walen, Lanette Creamer, Dan Downing, and many others.

@James and Jon Bach - I truly want to thank you for your kindness and inspiration in Nashville.

Brief Book review of "The Goal"

Someone at Agile Austin QA SIG recommended reading "The Goal" by Eliyahu Goldratt.  I ordered a used copy from Amazon and finished reading it yesterday.

By the way someone else told me it was not worth reading.

There were times last weekend, while we got some rain here in Austin, that I could not put the book down.  I found it to be easy to read, and throughout the story there was some reasonable context around the Theory of Constraints.  There were points in the story where I got bored and lost interest.  The specifics on Alex's home life seemed ancillary, but that slight personal touch made me relate to the stresses work can put on your personal life.  That part of the book helped remind me of the importance of having a great work-life balance.

In general I did get value out of reading the book.  I had wished the testimonial section at the end included some applications within the software industry.  I admit I read them fairly fast, but I do not recall a story about applying TOC to software development.

I have been trying to learn and study how testing fits into Kanban, so I think this book helped me get a broader view.

Now I am starting to read two books concurrently.  That is a mistake for my two brain cells, but I am giving it a go.  I am now reading "The Agile Samurai" by Jonathan Rasmusson and "The Inmates Are Running the Asylum" by Alan Cooper.  The Alan Cooper book was loaned to me by an energetic colleague, Juliette Kimes.  Thanks Juliette!

Hopefully I will have some good things to say about those books.

Read on Testers!

Preparing for STP Conference Dallas 2011

As usual I have not made posting to this humble blog a priority.  As I sit here this Sunday morning I have three things I would like to cover.
  • Preparation for STP Conference 2011 in Dallas
  • Quick book review, "The Goal" by Eliyahu Goldratt
  • Attending James Bach talk on Testing Medical Devices
Maybe I am trying to tackle too much in this post, so I may just do three mini posts.

In the spring I spoke at the STP Conference in Nashville.  For those two presentations I was far more prepared at this point in the schedule.  I have one week left to prepare.  Yes I am a bit stressed, but it is a good kind of stress.  It is the high-energy stress where I strive for excellence.  Will I achieve excellence?  I have not a clue.  Will I learn something through this process?  Absolutely!

So thanks to an Austin developer, Mike Duvall at Hoovers.com, I was able to add a couple of key points regarding Kanban to my slide deck.  Mike did a fantastic job presenting Kanban at Agile Austin and he was kind enough to permit me to borrow a few of his slides.

Now this week I need to execute a dry run in front of some critical peers.  This is intended to be an introductory talk, so hopefully I can succeed in engaging discussion during my session on Wednesday.

I am a bit more freaked out in that I also volunteered to do a Speed Geeking session.  They have always been fun.  At past STP Conferences I really got charged up by folks like Adam Goucher, Lanette Creamer, and Scott Barber.

Today I need to go to an office supply store and get the materials.  I am excited, but it will take some time to prepare.  The preparation should be fun.  I hope I can bring the same high level of energy as some of my colleagues and, more importantly, value in 8 minutes.

I think I am scheduled to do the Speed Geeking and my presentation on the same day.  I guess that gives me even more stress on preparation.

Hopefully you will come listen to me speak and give me feedback.

Two more quick blog posts, then I had better prepare! 

Somehow this morning I must squeeze in a movie on Netflix with my lovely wife.  I do not have time, but some things are just extremely important!

Sunday, September 18, 2011

Continuous Integration

I will have to do a critical analysis as to why I do not blog more often, but I will save that analysis for another time.

STP Conference Fall 2011 is only about a month away.  I am actively trying to prepare.  I thought I would make an attempt to finish out my experiences from STPCon Spring 2011.  This blog post brings me to Session 702 - Turbocharge your Automated Tests with CI presented by Eric Pugh.

An interesting side note is that I mentioned Eric's name last Thursday when I noticed one of our development teams is implementing Solr.  From talking with Eric at the conference I knew he was an expert, so I tossed his name out as a potential resource for collaborating on complex implementations and bottlenecks.

I must say as I recall, Eric was a poised presenter and really knows his stuff!

Continuous integration is something I am a novice at, so Eric's presentation was a good one to attend.

He defined CI by using a quote from Martin Fowler: "A fully automated and reproducible build, including testing, that runs many times per day."  That makes great sense to me.

During the presentation Eric had the audience do an exercise: in just a few minutes of discussion with the people next to you, find out how many non-obvious things you had in common.  The sad part is that since March I do not recall the true intent of the exercise, but what I liked was that it got the audience engaged.

Eric then did a live demonstration of his CI system.  Doing a live demonstration is always a challenge at these events and he did a great job.  I will admit I did not understand all of the components, but it did put things more into perspective for me.

Another key takeaway for me is that he mentioned that the results should be readily available and readable at a glance.  He even talked about places where flashing lights and lava lamps would light up if a build ever failed.  There was even a photo of Agnes showing the team a gesture for breaking the build.

There was a ton of great information in this presentation.  I walked away from the presentation more informed and with a desire to learn more about how to best leverage CI.  Some of the automation that I do takes too long to execute, so those tests do not run with every build.  Currently we run them nightly.  So I am encouraged to take a closer look at automation and push into the build process the automation layers that make sense.
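
As a rough sketch of what that split might look like, a Rakefile can expose a fast suite for every build and a slower suite for the nightly job.  The task names and directory layout below are assumptions for illustration, not our actual setup.

    require 'rake/testtask'

    # Fast checks: intended to run on every CI build.
    Rake::TestTask.new(:smoke) do |t|
      t.libs << 'test'
      t.pattern = 'test/smoke/**/*_test.rb'   # assumed layout
    end

    # Slow, browser-driven checks: intended for the nightly job.
    Rake::TestTask.new(:nightly) do |t|
      t.libs << 'test'
      t.pattern = 'test/full/**/*_test.rb'    # assumed layout
    end

The CI server would then run rake smoke on every commit and rake nightly on a schedule.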

You can find Eric's presentation here if you seek greater detail - http://www.opensourceconnections.com/oscshare/eric/Core%20Principles%20of%20CI_Session.pdf

I still have a ton to learn about CI.

One other great side effect of meeting Eric Pugh is that he provided me with some great constructive feedback on my Cucumber presentation.  I am new to giving presentations, so I found it admirable that he was willing to kindly give me suggestions for improvement.  Eric did not even know me, but offered mentoring.  "Nice Bike" to Eric Pugh.  It was also nice to have a couple of cold beverages with Eric at the conference party at the Wildhorse Saloon!

Monday, August 22, 2011

Thoughts from Agile Austin QA SIG Meeting on August 17

I am taking a slight detour from catching up with my spring STP Conference experience.  Once a month a group of testers meets as part of Agile Austin.  The topic was TDD and BDD.  Jill Ott was the presenter and she did a good job introducing the topic.  For some reason I walked out of that meeting completely unsatisfied.

This is my attempt at a personal retrospective.

One comment was made saying that testers really do not get involved in TDD.  I consider myself a tester, and when collaborating on building an open-source testing framework I found TDD necessary.  TDD is an excellent practice to prove that your Ruby methods work.  One person did comment saying that testers could certainly add value by reviewing the tests created through TDD.  I agree with that statement.  The conversation did not advance beyond a couple of comments.  I guess I was not satisfied because I wanted more detail on how developers leveraging TDD create better code and increase tester confidence.  After recently learning TDD, I found it extremely valuable as a tester.  I guess I had expected more testers in the room to be developing automation code and that they would have an opinion on whether TDD was easy to do with automation code.

The conversation on BDD seemed to bounce around two themes: requirements and brittleness of automation.  One person gave a couple of great working examples of BDD: one example where it appeared to work extremely well and a counter example where things did not go well.  I did have to admit that the reason I considered BDD is because too many legacy requirements were in people's heads.

I guess I was not satisfied because I wanted to talk details with respect to Cucumber implementations and the processes surrounding the implementation.  We never got into the level of detail that I desired.

  • When in the process do Cucumbers get written?
  • Who is responsible for writing them?
  • Who is responsible for building the step definitions?
  • How well do teams get reuse out of their step definitions?
  • How do you avoid cucumbers being scripts and focus on specification?
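
To anchor those questions, here is the kind of detail I was hoping to dig into: a tiny, hypothetical Cucumber scenario with Ruby step definitions.  The URL and element ids are invented, and I am assuming a @browser object created in features/support/env.rb.  Reuse largely comes from keeping steps this small and generic.

    # features/search.feature
    #   Scenario: Visitor searches for a cabin
    #     Given I am on the search page
    #     When I search for "cabin"
    #     Then I should see results containing "cabin"

    # features/step_definitions/search_steps.rb
    Given /^I am on the search page$/ do
      @browser.goto 'http://example.com/search'        # invented URL
    end

    When /^I search for "([^"]*)"$/ do |term|
      @browser.text_field(:id => 'q').set term         # invented element ids
      @browser.button(:id => 'go').click
    end

    Then /^I should see results containing "([^"]*)"$/ do |term|
      raise "expected results for #{term}" unless @browser.text.include?(term)
    end

Who writes the feature, who writes the steps, and when in the process they get written are exactly the conversations I wanted to have.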

There were also threads within the conversation about the cost of automation.  Yes automation can be expensive, but I am of the opinion that automation is extremely necessary.  I tend not to pull cost into the game, but rather focus on the value that automation can deliver.  I concede that I probably should factor in cost a bit more, but I do not permit cost to be a deterrent.

So my conclusion as to why I walked away dissatisfied is that I did not get the value I had hoped for out of this meeting.

I am now pondering what could I have done differently to get more value out of this meeting.  There were 20 persons in the room and I knew 3 or 4.  I think if I attend this meeting again, that I should arrive early and meet more of the attendees.

I think I may have to either speak out more or get my questions on the table early.  I was a bit hesitant to chime in, which is unusual for me.  Approximately 4-5 of the 20 people tended to dominate the conversation.  I ponder how we can facilitate more participation.  I am wondering if I should have submitted the questions I was hoping to address in advance of the meeting.

Perhaps I should follow up with the user group with the questions posed above.

I think I will do that!

Saturday, August 06, 2011

"How to win an Unfair Fight..."

Continuing the efforts to catch up on what I learned at STP Conference in Nashville.

The Tuesday evening key note speaker was Garrison Wynn.  His presentation was entitled, "How to Win an Unfair Fight: Influencing People You Don't Have Authority Over".
  • Get an understanding of what people really value and how that impacts agreement
  • Develop strategic advocates and create your own personal "influence upward" plan
  • How to get people to agree with you
  • Why some people disagree with everything and what you can do about it
  • How to get people to listen to your ideas
The above bullets were the target points of Garrison's presentation.  He delivered a presentation with great poise, energy and most importantly humor.

Unfortunately I do not remember all of the key lessons, but I do remember laughing.  After the keynote I spoke with him in the hallway.  Of course I had to purchase an autographed copy of his new book, The Real Truth about Success.

I have slowly been reading this book and I have found some very insightful items.  Here is a quote testers can apply toward the continuous mission of honing our craft.

"Approach life talent first.  Find and create your personal advantage."

The book takes a close and humorous look at "what the top 1% do differently, why they won't tell you, and how you can do it anyway!"  There are examples of the traditional dressing for success, treating others as you would like to be treated, and shifting a negative attribute into a positive asset.

You too should find your secret weapon and not be afraid to use it.

Sunday, July 31, 2011

STP Conference in Nashville Continues - Virtual Systems

Choosing a session for Block 2 was just as difficult as Block 1.  There were three topics of interest to me and two familiar speakers.  There were sessions on Mobile Testing, Agile Testing, and Performance Testing.  Mobile testing certainly is the newest kid on the block, but I opted for performance testing.

There were three reasons I chose performance testing.  For one, my colleagues were going to the Agile Testing session by Bob Galen.  Second, I love learning more about performance testing.  And finally, the presentation was being given by Dan Downing, whom I consider a mentor in the field of performance testing.  The session was entitled "Performance Testing Virtualized Systems".

Dan introduced the topic by highlighting the pace at which organizations are moving to virtualization and the many pain points associated with this movement.  It was the next slide that captured my curiosity.

Dan would venture to explain the six critical factors for Testers.

The first was "anatomy of a Virtual System".  I will be honest in that I really do not know much about virtual systems.  So the key point for me was that I needed to know and understand the system under test.

Mapping workloads was the next topic.  Dan is brilliant enough to do this type of work.  This is something I would have to seek assistance on.  Nonetheless, if you are going to do performance testing you must know how data flows through the system and how various conditions can influence the data flow.

Dan's next point was regarding bottlenecks of a virtual system.  My first thought was that finding bottlenecks in the system is why we are performance testing.  True, that is why we would be performance testing, but Dan's point was to understand the applications and confirm they are properly distributed based on your knowledge of the dedicated system.  So for me this tied the first two points together: you need to understand the potential areas of risk and design your performance testing to properly measure the key areas of risk.

The next topic was about testing technique.  Dan talked about executing the tests in parallel.  Do not let time gaps call into question your test results, because system performance can have different influences depending on the time of day.  One example might be that late in the evening database backups take place, or ETL pulls move data from system A to system B.  Honestly, in my short performance testing career I do not think this technique had occurred to me.  It makes sense, because I have spent time in the past trying to understand nonsensical data.  If you have the resources to run in parallel, then I think this is a great idea.  If you do not have the resources, then it comes back to the first three points about knowing the system under test.  Often performance testing is influenced by many disparate systems, so you will still have potential for unexplainable results.

Mr. Downing talked about test execution.  He split the load into three different phases.  Start with a "light" load.  His point was to establish a baseline for each system where the system is performing optimally and there is limited competition for resources.  Then progress into a medium load test where everything is scaling properly and there are no bottlenecks, such as a database.  Finally move to the stress mode where you determine failure points and confirm that the systems fail in the appropriate manner.

The next section of the talk was of course regarding the zillions of metrics you should measure and monitor.  I think Dan is spot on mentioning all of the measurements.  And if you have the tools and resources you should absolutely look at everything.  Unfortunately, when I do have the time to help with performance testing I am squeezed for time.  I try to pick out maybe 5 key metrics to gather results on and use those results to determine if further monitoring is needed.  Performance results data can be overwhelming.  Dan certainly knows what tools to use and is a wizard at data analysis.  Having tools and systems in place to conduct real-time monitoring can certainly ease the data analysis paralysis.

Dan spent some time in this presentation on data analysis.  The key takeaway for me was his focus on comparative data.  The goal of this performance test was to compare a dedicated system to a virtual system.  Assuming you know the dedicated system's performance from a historical perspective, you have the best heuristic.  Show data for both systems in the results.  Differences become more apparent and you then can investigate those differences.

For me it was a thought-provoking presentation.  Dan concluded by stating how important it is for testers to keep up with the latest trends and develop the new skills to keep pace with these trends.  The performance testing fundamentals are the same, but testers need to stay ahead of the curve in order to provide value.

If you have never heard Dan speak, you should.  He has a great passion and enthusiasm for performance testing.  More importantly he likes to share this knowledge with everyone.  If you attend STPCon in Dallas and you love performance testing, introduce yourself to Mr. Dan Downing.

Sunday, July 24, 2011

Note on Performance Testing from Scott Barber

At the STP Conference in Nashville I had many daily decisions to make.  After James Bach's inspirational keynote came my first important decision.  There were two speakers, both of whom wrote featured articles in the book "Beautiful Testing".  One of my co-workers was going to listen to Karen Johnson about "The Strategy Part of the Test Strategy", so I chose to listen to Scott Barber.

Scott in his normal flamboyant style delivered a passionate presentation on performance testing, "A Performance Testing Life Story: From Conception to Headstone".  I also have a passion for performance testing, but I have a ton of learning to do.  Scott's presentation went right to the heart of the performance testing life cycle.  Here is a summary:

1.  When building a software product it is critical to consider performance within the architecture and design phase.  Performance should be part of the DNA: ask performance-specific questions as the software concept evolves.

2.  We should set performance targets then profile and test the code at the component level.

3.  We should continue profiling and performance unit testing, but also add in environmental performance testing and load or stress testing.

4.  There should be a tuning phase where we do our best to optimize performance prior to launch.

5.  We should performance test every patch, update, or configuration change.

6.  Even sunsetting applications should be monitored for performance.

Scott summarized his presentation by stating, "Successful people and applications are tested, monitored and tuned all of their lives."
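
Point 2 is the one I find easiest to start with.  As a rough illustration (the component, the pretend workload, and the half-second budget are all invented), even plain Ruby Benchmark calls can hold a component to a target:

    require 'benchmark'

    # Stand-in for the real component under test (a hypothetical search call).
    def search(term)
      sleep 0.2   # pretend work so the example runs on its own
      ["#{term} result"]
    end

    TARGET_SECONDS = 0.5   # invented performance budget

    elapsed = Benchmark.realtime { search('cabin') }

    status = elapsed <= TARGET_SECONDS ? 'PASS' : 'FAIL'
    puts format('%s: search took %.3fs (target %.2fs)', status, elapsed, TARGET_SECONDS)

Checks like this, run at the component level from the very first build, are one way performance gets into the DNA Scott described.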

At most companies I have worked for, performance testing is the last thing considered.  To some extent performance is thought about during the design phase, but not actually tested until the end of the life cycle.

Performance testing is hard!  A non-performant site can hurt a reputation, so we should fold performance into the corporate DNA.

Bust out the performance tools and "Git Er Done"!

Wednesday, July 20, 2011

James Bach at STP Conference in Nashville

I can honestly state that the reason I have not been blogging lately is James Bach.  Just kidding of course, but he certainly inspired me with his keynote "Notes from a Testing Coach".  I have been extremely busy mentoring, learning, implementing, collaborating, testing, and innovating.  It has been extremely fun and I owe the energy to Mr. James Bach.

In his keynote he opened by explaining the three kinds of practical credentials: portfolio (your past work), performance (demonstrating your ability), and reputation (stories told about you).  All of these things combine to establish credibility.  Testers should actively work on their portfolio.  Testers should consistently demonstrate their skills.  If you do these two things well, hopefully "good" stories will be told about you.

James mentioned that one of the things that can get in the way of mentoring is feelings.  He is very accurate in this assessment.  Once you get beyond feelings, testers have the ability to leap tall buildings with a single bound.

The coaching process involves building relationships, challenges, allowing things to happen, retrospective or diagnosis of the problem, and collaboration.  There will be setbacks as well as celebrations of success.

I recently visited London and I found myself hearing the words "Mind the Gap" in my sleep.  I remember Mr. Bach saying "mind the syllabus".  My interpretation of these words months after the keynote is that as a mentor you should have a plan for teaching, just as you should have a plan for session-based testing.  I may have this way out of context at this point, so I will need to do some research.

James also talked about how, as a mentor, you must be prepared to demonstrate to the student what you might do.  In other words, you may get to a point in your mentoring where you have to roll up your sleeves and lead by example.

Another huge lesson from this keynote was his demonstration of the hidden picture.  I think the main point was to mess with his brother, but the demonstration illustrated how a tester can explore, change focus, change approach or technique, and get reasonable coverage rapidly, yet not find a potentially large defect.

James gave an overview of the dice game.  I had inquired via email to James on how to execute the dice game.  Through email collaboration I got a reasonable idea of the intent of the game, but I was extremely fortunate to be able to learn more about this game in person with his brother Jon.  This hands-on experience had a huge impact on me.  The conversation and the approaches Jon took clearly illustrated how testers can benefit by rapidly assessing patterns.  I now try to show this game to every tester I encounter.  I think it is fun and, most importantly, it invokes thought.  I even went to a local game store and bought the Cast Elk puzzle.  I have yet to solve it!  I keep trying but no success.  I know the answer is on YouTube, but I refuse to cave in.  During my travels I now buy puzzle books and attempt puzzles I never thought I could do.  I am extremely amazed at how fun learning and challenging yourself can be.  Thanks James and Jon for this inspiration.  Jon also turned me on to a site, http://www.sporcle.com/.

This is some of the value I took away from this keynote presentation.  There was much more content that I do not recall.  I can honestly say that this keynote presentation was a true inspiration to me, not only as a tester, but in my everyday life.

A huge "Nice Bike" to James for the key note and to Jon for taking the time to experience the dice game with me.

Dice game ROCKS!

Monday, May 30, 2011

Workshop at STP Conference

 I attended a workshop prior to the STP Conference in Nashville called "Creating & Leading a High Performance Test Organization".  Bob Galen was the presenter.

Bob did a great job presenting a ton of material.  Honestly much of the material seemed like common sense; however, the information was packaged nicely.  The material served as that gentle kick in the pants to remind you that as test leaders we need to revisit our foundations.

Tons of things were covered, including recruiting, outsourcing, marketing testing, defect management, leadership, and communication.  I am going to focus in on two topics.


Effective Communication

Mr. Galen talked about knowing your audience and adopting their point of view.  This makes sense if you have some idea of who the person you are talking to is and what they do.  Even in this happy path situation how you phrase your communication can be extremely complex.  What happens if you meet the person for the very first time?  How do you get their point of view into context?

My conclusion about effective communication is that it is really hard.  We must always work on our communication skills.  Somehow in a conversation you must size up the moment.  In other words, put some context around the current environment, both the setting and the mood of the participants.

Bob's second point of "active listening" is probably the key to effective communication.  One approach might be to break the ice with an introductory statement, then carefully listen to the response.  Somehow we must grab the clues that help us to know our audience.  Honestly I tend not to be very good at the active listening part.  Just ask my wife!

One other point Bob had was "can your audience handle the truth and how much of the truth".  This is a tough one for me.  I have an opinion and I tend to share it regardless of the impact on the audience.  As a communicator I need to learn to be more judicious with my opinion and determine how much of the truth is appropriate for the audience at hand.

Effective communication is critical in everything we do.   It is not easy and communication is something we can always improve upon.

Defects

At this workshop there was some interesting discussion around defects: when to document defects and when not to document them.  One person in the audience felt it was critical to document every defect.  Others felt that there are times when it is appropriate not to spend the time documenting a defect.  I shall not continue this debate here, but I thought Bob had a couple of great points in his material.


"A good report is written, numbered, simple, understandable, reproducible, legible and non-judgmental."   I agree with this statement, but there is one attribute that stood out for me, non-judgmental.  As testers we need to do our best to remove emotion from a defect report.  We should be concise, state the discovery process, and add supporting material as facts.

Bob also provided a list of styles testers should consider when putting together a defect report.

1. Condense - Say it clearly but briefly
2. Accurate - Is it truly a defect?
3. Neutralize - Just the facts
4. Precise - Explicitly, what is the problem?
5. Isolate - What has been done to isolate the problem?
6. Generalize - How general is the problem?
7. Re-create - Essential environment, steps, conditions
8. Impact - To the customer, to testing, safety?
9. Debug - Debugging materials (logs, traces, dumps, environment, etc.)
10. Evidence - Other documentation proving existence

I felt like this was a pretty good reminder of how to write an effective defect report.  Perhaps I can develop this into a little acronym - CAN-PIG-RIDE. 

I need to do a better job of effective communication and part of that effort is making sure defect reports are communicated effectively.

Happy Testing!

Friday, May 20, 2011

Defect Tracking

I still plan to post thoughts on the STP Conference in Nashville, but some recent posts have spawned some thought.

Lisa Crispin gave a presentation at Star East that sparked Gojko to write a post entitled "Bug Statistics are a Waste of Time".

I agree with the notion that we should clearly understand the business objectives and find ways to measure the value the features bring to a customer community.  I do not agree that looking at bug statistics is a waste of time.  History is one of the greatest oracles available to a tester.  How were we doing in the past compared to today?  Are there any lessons to be learned from a software system's past defects?  I certainly think so.

Let's assume that company X provides some value to someone and that company X knows how to measure its business objectives and the value of those objectives using things like Net Promoter Score, Google Analytics, Agile Velocity, and Get Satisfaction.

What can inspecting defect metrics add to the cause of determining value?

I view metrics as flashlights into a dark cave.  How do you know what is there unless you look?

A simple inspection of the total number of defects in the backlog implies some level of technical debt.  I agree with Gojko that if defects simply sit in a backlog then we are wasting some time.  Teams should and must proactively triage, fix, or even throw away defects.  But failing to document them, especially in a searchable manner, would be detrimental to the team.

Teams should occasionally have retrospectives on their processes.  Finding data regarding defect groupings is a fantastic lever for continuous improvement.  Where are the majority of our bugs historically clustering?  Where you find defect clusters you have the opportunity to change your process to reduce those clusters.  Agile teams especially should do look-backs at some frequency.  How did we do last quarter?  How does trending look?

As a tester I have at times had an extremely difficult time advocating for a process change.  In several situations I have found the ammunition to influence change by showing historical trends.  Could we do this by carrying around a notebook?  Sure we could, but it would be difficult, especially given how time flies.

Let me toss this scenario into the mix.  You are a new tester at a large company, or even a consultant.  Your mission is to understand the quality of the software and you have to do it fast.  It would be nice to shine a flashlight into the new cave and know the areas of risk.  Yes, you could put your hands on the application and start testing, but a sneak peek at the historical defect data could narrow in on the best place to start.

Yes, some defect tracking tools really, really suck, but our ability as testers to search, learn, and educate provides great value even to Company X, who knows how to monitor and measure value.

I shared Gojko's link with a large community of developers and testers.  I shared the link not because I agreed with it, but because it made me think.  I received back a quote that really struck a chord with me.

"Sure hope this isn’t the future of QA."

Losing the oracle of history would be a huge mistake.  Using metrics prudently, adapting the metrics to changing business value, and having conversations around the findings are key elements of Continuous Improvement.

If we influence a change for the better using metrics, then we certainly are not wasting our time! 

Read Gojko's post and the associated comments; it definitely should spark some thought.

Happy Testing! 

Saturday, May 14, 2011

Funny thing Happened at STP in Nashville

Testers, I must find the time to catch up on my back log of posts.  I thought I would start out with an embarrassing tale.

Once upon a time there was a tester, Carl, who had signed up for a workshop at the STP conference in March 2011 in Nashville, TN.  It was a beautiful day and the coffee was flowing.  Carl happened to be running late for the workshop.  All Carl knew was that the workshop was labeled Pre-5.

Carl raced up the stairs (very hard to do with a tender knee).  There at the top of the stairs on the right was Pre-5.  The instructor's name was Bob.  Carl remembered his instructor's name was Bob.  The title of the workshop was Quality Monitoring and Coaching.  For some reason Carl seemed out of place.

Everyone introduced themselves.  Of course I introduced myself and why I was there.  What did I say?  "Hi, I am Carl Shaulis and I am a Test Manager at HomeAway.  I recently have been working closely with Customer Service to facilitate quality and I am here to continue learning how to bridge the gaps between our two teams."  Wait one second, that does not sound right.  Carl pulls out his conference brochure.  As he thumbs through the pages Fiona walks into the conference room.  Oh, Carl knows Fiona from STP Las Vegas.  Carl must be in the right place.  Carl finally finds the workshop he should be in, called "Creating and Leading a High Performance Test Organization" by Bob Galen.

Panic sets in!  I am in the wrong room.  I just gave a great introduction on how this presentation could benefit me.  What should Carl do?

Carl quickly packed up his gear and headed out of the room.  Carl found his intended workshop and enjoyed the Spring STP Conference 2011.

Carl did get some friendly ribbing and crap from Abbie and Fiona.  Deservedly so!  The secret was out of the bag that there were two concurrent conferences: Contact Center Conference Expo 2011 and Software Test Professionals.

The really fun part is that Carl probably would have learned a hell of a lot if he had stayed in the incorrect workshop.  However, interacting with the testers and learning from Bob Galen was the right move.

Happy Testing!

Tuesday, April 26, 2011

Defect Severity

I now have a huge backlog of blog posts, but here is one that has me losing some sleep.

Why do developers seem to get their feathers ruffled when we talk about defect severity?  I am not implying that developers are ducks, chickens, or geese.  I could have used the phrase why do developers get their panties in a wad when we talk about defect severity, but that did not seem PC.

First I will give a little background.  I was asked to inspect and assess team quality.  Hmm, that is a monumental task.  Where shall I start?  Let's take a look at how many severity one defects are in production.  If there are any severity one defects in production, then the quality status is RED.  If there are no severity one defects in production then the quality status is GREEN.  Seemed like a simple assessment, which would require some follow-up conversations if the values were red.  By no means do I mean to imply that this is the only indicator of quality, but it was the first flashlight used to look into the cave.
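
In code terms that first flashlight was no more sophisticated than this toy sketch (the defect structure is invented for illustration):

    # Toy sketch: status is RED if any severity-1 defect is open in production.
    def quality_status(defects)
      sev_ones = defects.select { |d| d[:severity] == 1 && d[:environment] == :production }
      sev_ones.empty? ? :GREEN : :RED
    end

    puts quality_status([{ :severity => 2, :environment => :production }])   # GREEN
    puts quality_status([{ :severity => 1, :environment => :production }])   # RED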

But wait: "We (development) do not care about severity; we focus all of our efforts on priority."  This has been echoed by at least three development leads.

My two brain cells are telling me that severity is the perception of how bad a defect is and priority is the process around getting a defect resolved.  Hmm, do I have this all wrong?  Is priority the only thing of importance?  I read Perfect Software by Jerry Weinberg a few months ago.  Did I not understand his writing?  Well I guess I had better do what any internet geek would do: "Google It"!

I found a great post by my friend and colleague Dr. Stan Taylor.  I did not see too much in that post that I could disagree with.  Next I stumbled on a Bug Advocacy video by Professor Cem Kaner.  I did not find a definitive answer there.  I read at least 10 other blog posts and all seemed to have a slightly different spin, but there did seem to be some consistency that severity is the perception of the user and priority is the decision process on when to fix a defect.

I am still perplexed as to why inspecting severity gets such a negative response from development.  Let me attempt to noodle out a couple of basic definitions.  Please note that I did not come up with these definitions, but I tend to agree with them.


What is Quality? -  “Quality is value to some person (that matters)”

What is a bug? – Something that bothers (bugs) someone who matters.

Severity - defines the impact the defect has on the customer and the likelihood of occurrence. 

S1 – Product is on fire.  We are getting sued.  We cannot take money
S2 – Breaking but we can offer a workaround
S3 – Minor functional defect (pain in the butt or poor user experience)
S4 – Cosmetic (we can live with it)

Priority - determines the order in which defects will be fixed/resolved and retested

            What do we fix?
            When do we fix it?

Priority decisions are based on impact on the customer (severity) and resolution difficulty (time & resources)

P1 – Resolve Immediately
P2 – High Attention
P3 – Normal
P4 – low

Those seem reasonable to me, and some of the definitions came from a PowerPoint presentation by Dave Whalen called "The Matrix".

So now I am thinking that priority cannot be established without taking into consideration the severity.
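
To noodle on that a bit more, here is a toy sketch (not a real triage rule, just an illustration with invented thresholds) of how a suggested priority might combine severity with an estimated fix cost:

    # Toy illustration: severity drives the conversation, estimated cost nudges it.
    def suggested_priority(severity, estimated_fix_days)
      case severity
      when 1 then 1                                   # product on fire: fix now
      when 2 then estimated_fix_days <= 2 ? 1 : 2     # cheap fixes jump the queue
      when 3 then estimated_fix_days <= 1 ? 3 : 4
      else 4
      end
    end

    puts suggested_priority(2, 1)   # => 1
    puts suggested_priority(3, 5)   # => 4

Even a crude model like this makes the point: you cannot fill in the severity argument without someone having judged the impact on the customer.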

I am still puzzled as to why developers do not want to track defects based on severity.  Somebody perceived the defect as having a negative impact on someone.

I am concluding that I need to have more conversations with developers!  Here is a reasonable post provided by one of the development leads.  One point made in our conversation was that when I used the phrase perception of the customer it inserted emotion into the concern.  The developer preferred impact on the customer.

I can certainly understand that with the pressures of today's rapid software development there is little time for fixing defects.  So the balancing act of meeting milestones and building great features is extremely priority-centric.

However, the person documenting the defect has no input with respect to priority at the time of discovery, so they must focus on setting a proper severity.  A well-documented defect can be leveraged as a sales tool to influence priority, and the person submitting the defect must be an advocate of the defect in order to see it through to resolution.

When severity and priority do not align, I would suggest it is extremely important to communicate the factors used to set the priority to the person who established the severity.

Hmm, have developers been burned in the past by incorrect values of severity being placed on the defects? Could that be the reason for the ruffled feathers?  I think this might be a good possibility.

Perhaps feathers are not ruffled at all, because based on all of the blog posts the ambiguity of severity and priority has been around for a long, long time.

Time to read "Perfect Software" again!