Sunday, March 23, 2014

And what is your excuse?

I finally made my way through about 70 blogs.  Many educated me and a few were easily skipped.  It was this one that I found interesting - Top 5 Excuses for not having enough Testers Testing.

My Product is not finished yet:  I agree with the article that this excuse is silly.  The best, and perhaps the most important, testing happens at the beginning of the SDLC.

Quality is everyone's responsibility; no dedicated testers needed:  I very much believe quality is everyone's responsibility, and that quality is enhanced by having a dedicated tester leading the charge.

We have budget/time constraints:  Oh!  This excuse is so very true.  This is where experienced testers add a tremendous amount of value by executing risk-based testing and Session Based Test Management (SBTM).  In the world of continuous delivery, time constraints certainly play a more important role in the land of excuses, so creativity and automation are highly valued.

My product is perfect.  It does not need testing:  This one is just laughable.  Hand the team a copy of Perfect Software by Gerald Weinberg.  Honestly, it does not take much effort to find flaws in almost every software product today.

A separate QA team can build an 'Us vs Them' mentality, which is not healthy:  I have to admit that I have heard this one too.  And I agree with the article that this sentiment boils down to culture and style.  Agile software teams today should have a set of roles responsible for building great software regardless of the management structure.

I think there may be a couple more excuses floating around.

Our customers will let us know if we have bugs in our product:  This one is very sad, but I think it is true for some web applications.

Revenue is more important than product quality, just deliver it on time:  I think this may be a true excuse for young entrepreneurial companies.

The developers are doing enough testing:  In my opinion, adding a great tester to this team may just be humbling.

I set out to think of 5 additional excuses, but I am afraid I am going to fall two short.  

I think we should all focus more on making great software and a little less on excuses.  We are human, and yes, we all make mistakes.  I would rather a colleague catch my mistake than a customer.

Happy Testing!

Sunday, March 16, 2014

Testing versus Winning the Lottery

I just returned from a wonderful vacation at Seagrove Beach, Florida.  I really did not have a clue what to blog about until I reflected on the vacation.  On our road trip we stopped at a Subway/gas station.  I observed many interactions at this place, but the one that piqued my curiosity was the two ladies who spent $32 each on Powerball tickets and the family who sat at a table rapidly scratching their pile of scratch-off lottery tickets.  I guess I was amazed at how they could spend their time and hard-earned money on such long-shot purchases.

As it relates to testing, it seems like testers may spend most of their time rapidly looking for long shots.  Great testers typically do not rely on random luck, but I feel like there are some similarities.

Sometimes we testers throw money at the problem like the two ladies.

Sometimes we collaborate like the family all doing the scratch off tickets.

I did not observe the diversity of the scratch-off tickets, but I could assume that the family strategically selected the tickets they suspected might pay off.  We testers do the same thing using a risk-based approach to testing.

I believe the two ladies relied on the random generation of Powerball numbers.  We testers use random inputs all the time, hoping to hit the defect jackpot.
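If the analogy feels abstract, here is a minimal sketch of what buying a pile of random tickets can look like in code.  The function under test (shipping_cost) and the input mix are entirely made up for illustration; a real random-input or fuzzing pass would be tailored to the product.

```python
import random

def shipping_cost(quantity_text):
    """Toy function under test -- hypothetical, just here to have something to poke at."""
    quantity = int(quantity_text.strip())
    if quantity < 0:
        raise ValueError("quantity cannot be negative")
    return round(100 / quantity, 2)   # per-item bulk discount

def random_ticket():
    """One random 'ticket': a mix of ordinary and long-shot inputs."""
    return random.choice([
        str(random.randint(-5, 5)),            # small numbers, including zero
        str(random.randint(10**15, 10**18)),   # absurdly large orders
        "  " + str(random.randint(1, 9)),      # leading whitespace
        "",                                    # empty string
        "seven",                               # not a number at all
    ])

if __name__ == "__main__":
    jackpots = 0
    for _ in range(1000):
        ticket = random_ticket()
        try:
            shipping_cost(ticket)
        except ValueError:
            pass                               # expected, documented rejection
        except Exception as exc:               # anything else is the defect jackpot
            jackpots += 1
            print(f"{type(exc).__name__} for input {ticket!r}")
    print(f"{jackpots} jackpots in 1000 tickets")
```

Most of the tickets are losers, but every so often a long shot like "0" pays out with an unexpected crash.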

My conclusion is that testers gamble often.  Our jackpot just happens to be bugs!




Sunday, March 02, 2014

A/B Testing Experience Report

First I must start out with a huge apology to Lisa Crispin.  I had promised her a brief experience report on A/B testing a few weeks ago and I failed to deliver.

I thought it would be a good topic for a blog post.

First I would like to clear up some potential confusion.  Recently I have heard people describe A/B testing as Test Driven Development.  Although I think the phrase is applicable, it confused me because I think of Test Driven Development (TDD) as writing your tests first, then your code.

So in the spirit of A/B testing, you write your experimental design and then execute against that design.  The two certainly seem analogous, but I get confused in conversation and thought perhaps others might as well.

I first learned about A/B testing in 2002.  I was with a small start-up and we were trying to follow XP patterns at the time.  I recall doing a couple of successful A/B tests on new features, but I really do not recall the actual mechanics.  We did leverage a BigLever Software product called Gears to rapidly establish feature sets, but I was not privy to the server mechanics, or I simply do not recall.  The tool set did allow us to expose a customer base to feature A in a controlled fashion.

Today it is my opinion that A/B tests take a far more sophisticated, scientific approach to design.  You will also hear this technique referred to as multivariate testing (MVT).  I am amazed at how much design takes place today in order to have a successful A/B test.  The key piece, in my opinion, is having robust mechanics for measurement.  Sometimes a small statistical measure of variance can make a huge difference in the success of a business.

Here are the key components to a successful A/B test:

  • Hypothesis
  • Tools for Measurement
  • Mechanism for traffic control of the user experience
A/B tests can be extremely simple or extremely complex.  Most of the time the experiment is designed to evaluate user behavior with the hope of directing that behavior to improve a business result.  I think a guiding principle is to only adjust one variable at a time.
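To make those components and the one-variable principle concrete, here is a small sketch of an experiment captured as data.  The names (Experiment, button_px, and so on) are invented for illustration and are not tied to any particular tool.

```python
from dataclasses import dataclass

@dataclass
class Experiment:
    """The three components above, captured as data (illustrative names only)."""
    hypothesis: str        # what we believe will happen
    metric: str            # what the measurement tool will record
    traffic_split: float   # fraction of users routed to the variant
    control: dict          # the current experience
    variant: dict          # identical to control except for ONE variable

button_size_test = Experiment(
    hypothesis="A larger buy button increases clicks",
    metric="buy_button_clicks_per_day",
    traffic_split=0.10,
    control={"button_px": 40, "button_color": "blue"},
    variant={"button_px": 60, "button_color": "blue"},   # only the size changes
)

# Sanity check that exactly one variable differs between control and variant.
changed = [key for key in button_size_test.control
           if button_size_test.control[key] != button_size_test.variant[key]]
assert len(changed) == 1, f"expected one changed variable, got {changed}"
print(f"This experiment changes only: {changed[0]}")
```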

A hypothesis can come from anywhere in an organization, but in my opinion it takes a diverse team to evaluate the data and draw a meaningful conclusion that might improve the business.

Example hypotheses:
  • What if we increase the button size?  Will more people click it?
  • What if we change the checkout flow from vertical to horizontal?  Will the experience be better and will sales increase?
  • What if we change the color from blue to green?  Will we have more customer retention on the web page?
  • What if we use a larger image size?  Will more people buy the product?
  • What if we move widget A above the fold?  Will customers be more likely to use the widget?

Example tools for measurement:

Google Analytics

Mechanics to control traffic flow:

Load Balancer
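Whatever sits in front of the traffic, the splitting logic tends to boil down to something like the sketch below.  This is a hash-based bucketing illustration, not the configuration of any particular load balancer, and the 10% share mirrors the example that follows.

```python
import hashlib

def assign_variant(user_id: str, experiment: str, variant_share: float = 0.10) -> str:
    """Deterministically bucket a user into 'A' (control) or 'B' (variant).

    Hashing the user id, rather than flipping a coin per request, keeps a
    returning visitor in the same bucket for the life of the experiment.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = (int(digest, 16) % 10_000) / 10_000    # a number in [0, 1)
    return "B" if bucket < variant_share else "A"

if __name__ == "__main__":
    counts = {"A": 0, "B": 0}
    for i in range(100_000):
        counts[assign_variant(f"user-{i}", "larger-image")] += 1
    print(counts)   # roughly 90,000 A / 10,000 B
```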

There are many more tools, and even companies whose business model is based on multivariate testing.  However, these are the tools I am familiar with today.

The ability to conduct A/B tests is dependent on many variables, but here is the basic approach.  

You know from Google Analytics that X number of clicks on image A happen per day.  You would like to increase the number of clicks by 10%.  Your hypothesis is that image size will make a difference.  So you design a web page that has a 200 x 400 image and you also design a web page with a 400 x 800 image.  You deploy each of these web pages to separate web servers.  In order to get a statistically significant sample you divert 10% of the web traffic to web server B that has the design with the larger image.  You believe one week's worth of data will be statistically significant.  You measure the clicks per day over the course of that week in order to determine if you increased the number of clicks with the larger image.
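As a rough sketch of how that week of data might be judged, here is a two-proportion z-test in a few lines.  The click and view counts are invented, and this is only one of several reasonable ways to decide whether the difference is statistically significant.

```python
from math import sqrt

def two_proportion_z(clicks_a, views_a, clicks_b, views_b):
    """Two-proportion z-test: is B's click rate really different from A's?"""
    rate_a, rate_b = clicks_a / views_a, clicks_b / views_b
    pooled = (clicks_a + clicks_b) / (views_a + views_b)
    std_err = sqrt(pooled * (1 - pooled) * (1 / views_a + 1 / views_b))
    return (rate_b - rate_a) / std_err

# Invented week of data: control A keeps 90% of the traffic, variant B gets 10%.
z = two_proportion_z(clicks_a=4_500, views_a=90_000,
                     clicks_b=620,   views_b=10_000)
print(f"z = {z:.2f}")   # |z| above about 1.96 suggests roughly 95% confidence
```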

Unfortunately I am not able to share specific experimental designs or results, but I think multivariate testing is an important aspect of modern rapid software development.  If you get the desired result from your experimental design, it is just a matter of flipping a switch and your customers are on the new design.

Another cool aspect that I forgot to mention: part of your traffic serves as the control, seeing the normal experience.  There is a ton of literature on the topic, and I will never claim to be an expert.  But I do believe that well-thought-out tests coupled with accurate measurement can improve your business and customer experience.