Thursday, November 28, 2013

Should Testers Learn Automation?

I have read various articles debating this topic, but I had never really formed my own opinion.  The more I think about the future of testing, the more I conclude the answer is yes.  Testers should learn automation.

One of the most influential testers today is James Bach.  He knows code.  His brother admitted in a recent article that code is not his forte, but if pressed I bet he knows code.  Writing code might not be absolutely necessary to be a great tester, but I think it provides context.

I recently reflected on my career as a tester.

Stan Taylor hired me at Excite@Home.  Stan gave me my start in the field and was a terrific mentor.  At first I learned how to set up multiple test environments, which gave me context about operating systems and browsers.  My next challenge was performance testing with Silk Performer.  I learned a proprietary language, 4Test, and dabbled in regular expressions.  I was then able to extend my 4Test knowledge to functional testing with Silk Test.  The next phase was interesting: I was moved to a development team to do UI work.  The sad part is that, once again, I learned a proprietary language, Dynamic Content Generation (DCG).  My current boss, Jack Yang, was my development mentor.  He educated me on development basics: loops, logical statements, repository branching, tagging, and command-line execution.  He gave me coding assignments that challenged my skills, pushing me beyond my comfort zone.  Soon I was making production-ready changes.

My next job was at a startup, again with Stan Taylor.  Stan had built a beautiful JavaScript library and leveraged Webload for functional test execution.  Not only did I learn JavaScript, but refactoring and reusable methods also became important.  Code reviews and collaboration were great practices.  One important lesson from this experience: it was the first time I got to do “white box” testing.  I got to pair with developers and inspect Java code, and I was permitted to make suggestions on how to enhance the unit tests.  My ability to understand code structure made this possible.

On my next adventure I got to learn some sound testing processes at a company that built complex telecom-oriented software.  Guy Lipof and Joseph Griffin leveraged efficient testing techniques, and I was exposed to collaborative testing in the form of test fests.  Eventually I ended up leading seven remote testers.  All of the testing was manual, and it took the eight of us five business days to execute 1,500 regression tests.  I came into work one day and was informed that we had to let the seven testers go for budgetary reasons.  Holy cow, this was a great team; how could I regression test this by myself?  We are talking eight weeks of busting hump.  The conclusion was automation.  I turned to Watir and Ruby.  The Watir library was very education friendly, and the forums and people were amazing.  The result: in one month I had automated 70% of the existing tests, tossed out 20%, and kept the remaining tests manual.  In the end I could do the complete regression plus test new features in five business days.  Was it the prettiest code in the world?  It definitely was not, but I was able to refactor out some common methods and modules.  I attended AWTA (2007), where I met an amazing group of testers: Bret Pettichord, Paul Rogers, Elisabeth Hendrickson, Brian Marick, Chris McMahon, Charley Baker, Andy Tinkham, Bob Cotton, Jim Mathews, and many more great testing minds.  It was at this conference that I learned the power of pair programming and collaborative thinking.  I was inspired to learn more by all of these people sharing their expertise.
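The "refactor out common methods and modules" step can be sketched in plain Ruby.  This is a hypothetical illustration, not code from that actual suite; the names (RegressionHelpers, assert_title) are made up for the example.

```ruby
# A made-up sketch of pulling repeated steps out of individual
# regression scripts into one shared module, so every test stops
# duplicating the same comparison logic.
module RegressionHelpers
  # Normalize a page title before comparing it in an assertion.
  def normalize_title(raw)
    raw.to_s.strip.squeeze(' ').downcase
  end

  # A tiny shared assertion helper reused across regression tests.
  def assert_title(expected, actual)
    normalize_title(expected) == normalize_title(actual)
  end
end

# Any test class can mix the helpers in instead of re-implementing them.
class SmokeTest
  include RegressionHelpers
end

test = SmokeTest.new
puts test.assert_title('Home Page', '  Home   page ')  # prints "true"
```

Even a small module like this cuts the maintenance cost when a comparison rule changes: it gets fixed once instead of in dozens of scripts.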

In 2009, I moved to my current company.  The mission was to help set up an automation framework.  We settled on Ruby because it was the language I was most comfortable with and there were tons of examples available.  We selected Selenium for its potential for cross-browser automation.  Building out automation is definitely a fun adventure from my point of view.  Some of the things I learned during this adventure were pair programming, factory patterns, page object patterns, mocks, and test-driven development.
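Of those patterns, the page object pattern is the one I would sketch first.  Here is a minimal illustration in Ruby; to keep it runnable without Selenium, a FakeDriver stands in for the real browser driver, and every name (FakeDriver, LoginPage) is invented for the example rather than taken from our actual framework.

```ruby
# Minimal page-object sketch: tests call intent-level methods on the
# page object; element locators stay hidden inside it.
Element = Struct.new(:value)

# Stand-in for a browser driver so the example runs anywhere.
class FakeDriver
  def initialize
    # Lazily create an element for any locator requested.
    @elements = Hash.new { |hash, key| hash[key] = Element.new }
  end

  def find(locator)
    @elements[locator]
  end
end

class LoginPage
  def initialize(driver)
    @driver = driver
  end

  # If the login form's locators change, only this class changes,
  # not every test that logs in.
  def login(user, password)
    @driver.find(:username).value = user
    @driver.find(:password).value = password
    self
  end

  def username
    @driver.find(:username).value
  end
end

page = LoginPage.new(FakeDriver.new)
page.login('tester', 'secret')
puts page.username  # prints "tester"
```

With a real driver the `find` calls would hit the browser, but the shape of the pattern is the same.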

Now as a Director of Testing, I do not find myself writing much code.  Recently I recognized how valuable this journey has been.  I also learned that if you do not practice you get left behind a bit.  So I am now trying to learn some new aspects of software development during my copious spare time.

I did not share this short journey to highlight myself.  I shared these experiences because I think parts of this journey are important to constantly improving as a tester.  So here is my message to any tester who happens to stumble on this blog post: reach for the stars and add an understanding of code to your toolkit.  The majority of the lessons are at your fingertips for free or at very low cost.  You never know what you might learn reviewing someone else’s code, and once others review your code you learn even more.

Happy Testing!

Sunday, October 06, 2013

Computing for Data Analysis

I am attempting to sharpen the saw.  I am taking an online course from Coursera, offered by Johns Hopkins University, called Computing for Data Analysis.  The course is turning out to be HARD!

The course assumes that I remember math from 30 plus years ago, which obviously is a bad assumption.  The course also assumes I have been exposed to statistics, which is also false.

I am learning the basics of the R programming language.  I am getting to learn RStudio.

Although it is hard, I think in the end I will have learned a tiny bit.  Let me share with you two lessons I learned today.

The runif function does not mean "run if"; it is short for "random uniform" and generates random numbers from a uniform distribution.

Quote - "less typing is always better because good programmers are always lazy"

Please note I thought the quote was funny.  I do not believe developers are lazy.  I do support the premise that developers try to write code as simply as possible.

I will continue to tough out this course because in the end I believe I will have learned something useful and applicable.

Sunday, September 15, 2013

Quality Artifacts Everywhere

Recently I came across a situation where I observed defects, tasks, and even stories documented in multiple places (a Google Doc, the issue tracking system, complex stories, a wiki...).  How can a team evaluate quality when there are so many lists?  I looked a little deeper and even found single defects in the issue tracking system that were actually lists of defects.

I am right there with the next person in not wanting to enter an item into an issue tracking system if I do not have to.  But once an issue artifact is created, it must be managed through to resolution.  My guess is you have no clue with respect to quality if you have lists buried within lists within other lists.

If you have 100 defects and 100 tasks left to complete in an iteration, then you can evaluate when you are nearly done.  If you have 5 lists buried within the 100 defects, 5 lists buried in the 100 tasks, and a Google spreadsheet with 75 more ideas, how do you ever know you are nearing done?
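The arithmetic problem can be shown with a toy example in Ruby.  The data below is entirely made up; the point is that the tracker's count and the real remaining work diverge as soon as items hide sub-lists.

```ruby
# Toy tracker data: each item is either a single issue or secretly
# carries a buried list of sub-items. All titles are invented.
tracked = [
  { title: 'Fix login redirect' },
  { title: 'Checkout bugs',
    sub_items: ['tax rounding', 'empty cart crash', 'slow spinner'] },
  { title: 'Update copyright year' }
]

# What the tracker reports versus the real remaining work.
visible_count = tracked.size
actual_count  = tracked.sum do |item|
  item[:sub_items] ? item[:sub_items].size : 1
end

puts visible_count  # prints 3 -- looks nearly done
puts actual_count   # prints 5 -- the real count
```

Multiply that gap across 100 defects and 100 tasks and "nearing done" becomes a guess.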

As much of a pain in the tail as it is, I recommend two approaches.

One: if you find an issue and do not want to put it into the tracking system, then fix the issue immediately and verify that it is fixed to your satisfaction.

Two: enter the issue into the issue tracking system.

My final recommendation is to settle into a specific process, follow the process, iterate on the process, but do not create numerous processes within processes.

Keeping it simple helps to keep the team on the same page.

Happy Testing!

Sunday, September 08, 2013

Do you have what it takes to argue?

I watched Jon Bach's keynote at CAST 2013 this morning.

As usual he has a fantastic way of delivering information, and the topic was on the money.  I agree that we should have more arguments.  The one challenge I have is that I may not have all of the skills to facilitate a sound argument.

I have a spouse who typically cannot lose an argument.  Her brother, who has a law degree, is equally acute.  Between the two of them they help sharpen my argument skills, but I lack the ninja tools to win consistently.  The software industry is full of extremely sharp people, and many have the chops to win an argument.  As Jon did for his keynote, I thought I had better do some research.

The first site stated that the first thing you should do is select the strongest side of the argument.  I am not sure this is the right advice unless I wanted to be a debate champion.  Normally I find myself in arguments because I believe in a certain concept, so for starters my position may not be the strongest side.  My takeaway from this suggestion is that I need to always be prepared to persuade the other side that I have a very compelling position.  I need to reflect more frequently, build out the key bullet points of my position, and store them in the part of my brain built for rapid recall.

Another site talked about sneaky tactics.  The points there were pertinent, but I am not sure I am clever enough to be sneaky.  The two points I need to add to my skill set are not diluting my position with weak points and consciously conceding valid points to my opponent.

Let's face it; the best way to win an argument is to avoid it altogether.  That is not the position Jon was advocating in his keynote.  What I take away from this statement is that if you do not have acute points to defend your position, perhaps it is time to agree to disagree and then go fill your arsenal with more context.  "Live to fight another day" may be more applicable.

In a couple of weeks I may have the pleasure of visiting Rice University.  I stumbled on this gem.  The first bullet point is "Drink Liquor".  Jon Bach probably would not support this position, since he kindly provided me his drink tickets at STP in Nashville.  Thanks, Jon!  I got a bit of a chuckle when I read this point, but I think the underlying tenet is that you need an element of confidence when stepping into an argument.  I also concluded from this post that humor can help in a good argument.  For me, confidence grows by having more information: "context".

I am the type of person who just puts stuff out on the table without necessarily thinking first.  I think I should learn to take my time, stay calm, and apply logic.  I am certainly not afraid of a great argument as long as the TRUST is there.  Jon referred to this as being in a safe environment.  I will probably continue more thinking and research on this topic.  I think I have some arguments coming my way, so preparation is probably a good thing.

Thanks, Jon, for sparking thought on this topic.  The next time I gather with testers, I think we should have some exercises that improve our argument skills.  I am going to have to give that concept a bit more thought.  Stay tuned ...

Happy Testing!

Sunday, May 12, 2013

Yet another round with defect Severity and Priority

Let me start with a quote from a developer: “We only focus on priority.”

I have had so many conversations around severity and priority.  Rather than banter about the difference and the usages, I came up with a new concept relative to defects.

Equality to all defects!

I would like to see teams implement TDD practices and a mantra of “We found it. We will fix it!”  Imagine a world where developers, testers, and product managers all treat every defect equally.  We cannot release new code while it is defective.

In the Agile development world the best teams fix their defects as they build the product.  Testers are catching them early in the process, so why not fix them?  It is true that there is no such thing as perfect software; however, if you happen to find an imperfection, should you ignore it?  The answer is no.

Do you ever hear “We are too busy to fix defects”?  Or “That is just cosmetic, so we will fix it later”?  These statements indicate that defects are not important.

All defects should be treated with equal importance no matter where they are found.  In fact, if a customer finds a defect, it should take on greater importance.  The defect should be fixed in the very next sprint.  If the team uses Kanban, then push the defect to the top of the queue.

If we treated all defects as equals, then we could throw out severity and priority.  Most importantly, we testers would no longer have to explain that there truly is a difference.

Sunday, January 27, 2013

What work is left is harder!

I am reading Slack by Tom DeMarco.  There is a chapter where he talks about process obsession.  I have always been against process for the sake of process.  At one of my previous jobs it took at least 8 hours a day just to get through the daily CMMI bullshit.  My point is not to rant about how inefficient institutionalized process is, but to share a quote from the book.  I found this statement interesting.

"When the new automation is in place, there is less total work to be done by the human worker, but what work is left is harder."

As an experienced tester I believe test automation is important.  I promote automation so that we have more time for cognitive testing.  I cherish the time available to execute well designed test sessions.  What never occurred to me is that the cognitive testing just might be harder than the automation.

From my experience automation is pretty darn hard.  Once I completed automation I did feel a great sense of satisfaction, but I never pondered that I had freed up some of my time to do stuff that was harder.  I viewed automation as giving me freedom.  I now had the freedom not to do scripted testing, but the confidence to explore.

The phrase "less total work" is also interesting and, from my point of view, somewhat misleading.  In theory, the more software automation you have, the more time you have to innovate on new concepts and ideas, so in essence the scope of work increases.  With more automation, I believe, maintenance increases as well.  Automation frameworks are constantly evolving as a technology, and at some point the team must refactor to stay current.

I am looking forward to the day when automation equals less work!  For some reason I think I am going to be waiting a very long time.  Does software automation truly give us slack?

Sunday, January 06, 2013

Vanishing Defects

I know I watch too much television, but the Allstate commercial during the Seahawks versus Redskins game gave me an idea.  Allstate has a concept called the vanishing deductible for safe drivers.  Can we use a similar idea with defects?

I would like to propose that if a defect is more than 120 days old without a resolution, we should make it vanish: Vanishing Defects.

Development teams typically invest the majority of their energy building new features.  In my 12 years of experience, little time is dedicated to fixing defects.  The problem is compounded by the defect ranking system.  If a defect's severity is marked annoyance, cosmetic, or non-essential, it becomes destined for the eternal defect pile.  Teams triage defects and set a priority; if a defect's priority is marked normal, or perhaps "some day", it too is destined for the eternal defect pile.  The vanishing defect model would automatically reduce the size of the pile.

When customer support inquires about a defect that is older than 120 days, the response would be simple.  Your defect vanished!

Some software development teams should be on the A&E TV show Hoarders.  I have seen backlogs with defects more than 2 years old.  Should defects be hoarded?  With the vanishing defect policy, intervention would not be required.

Ok!  I will make a compromise.  Instead of vanishing defects, perhaps defects should be archived after 120 days.
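The archiving rule is simple enough to sketch in a few lines of Ruby.  The defect data and field names (title, opened_on) below are invented for illustration; the point is the 120-day partition, not any particular tracker's API.

```ruby
require 'date'

# The "archive after 120 days" compromise: partition open defects by
# age instead of deleting them outright. All data here is made up.
AGE_LIMIT_DAYS = 120

defects = [
  { title: 'Typo on help page', opened_on: Date.today - 300 },
  { title: 'Crash on save',     opened_on: Date.today - 10 }
]

# Anything older than the limit moves to the archive; the rest stays
# in the active backlog.
archived, active = defects.partition do |defect|
  (Date.today - defect[:opened_on]).to_i > AGE_LIMIT_DAYS
end

puts archived.map { |d| d[:title] }.inspect  # prints ["Typo on help page"]
puts active.map { |d| d[:title] }.inspect    # prints ["Crash on save"]
```

Nothing vanishes; a customer asking about an old defect could still have it pulled back out of the archive.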

I think an even better model to consider would be Defect Regeneration, similar to a gecko growing a new tail: all defects must be fixed within 120 days.  Some geckos regenerate a tail in 2 days to 2 months, so 120 days should be plenty of time.

That concept will probably not fly either, so we are destined to see the giant landfill of defects forever!