Wednesday, November 26, 2014

What should testers do differently?

I had the awesome pleasure of hanging out with Peter Walen at Agile Testing Days.  Peter is a tester who really seems to enjoy life and is always willing to share experiences.  I learned that he has many experiences outside of testing that are wonderful to hear and ponder.  You can enjoy his work by reading his blog - "Rhythm of Testing".  Every tester should have a pint of beer with Mr. Walen, so if you get a chance, introduce yourself.

At some point during the conference Peter asked me a question - "What is the one thing testers should do differently in the future?"  I almost spit out the first thing that came to mind; however, I suspected a trap.  If you get a chance, ask Peter about the Super Ball test.  For some reason I asked for more time to think about the question.

Seriously, I knew Peter was not setting me up.  He was asking me a genuine question.  In hindsight, the conference was about the future of testing, so the question really makes sense.  I have had quite a bit of time to think about this topic, and I have honestly gone all over the place with my thoughts.

I think I have boiled my answer down to this: testers must earn the respect of their peers.  My definition of peers is anyone you encounter in the field of software testing or in life.

Earning respect can take many forms, such as being a team player, learning to code (or, better yet, always being willing to learn), and demonstrating your skills.  I believe once you have earned the respect of your peers you have gained trust, and trust is the key to doing some great things.  If you get a chance, read the works of Christopher Avery.

Respect and trust are hard to earn.  Once earned they are hard to keep. The rewards of earning respect are plentiful.  We learn from our mistakes.  Making mistakes together as a group and learning from those mistakes can be even more powerful.

I will conclude this post by saying "Thank you, Mr. Peter Walen" for asking the question.  I cannot wait to hear his thoughts on the question.  I also want to thank him for reminding me that we should have fun in what we do and that it is really important to have fun together.

Anyone have a different answer?

Happy Thanksgiving testers!

Saturday, November 22, 2014

Buccaneer Scholar to King

Well, I have not written in a while, so I will try to articulate some recent thoughts.  I am certainly not the best wordsmith or most articulate speaker, but I do have an opinion.

Some people you meet in life are inspirational.  They advocate for innovation, instilling drive and passion.  You read a great book about being a buccaneer scholar, pulling yourself up by your suspenders and achieving great things in life.  You attend a presentation at the 2009 STP Conference in Las Vegas and come away thinking, man, that person is brilliant.  You follow their blog posts and Twitter feeds, which lead to inspiration.  They teach you to enjoy games and attack challenges.  Today these pioneers seem to be taking a position of "my way or the highway": we are right and everyone else is wrong.

I am not sure that is the intent of the rhetoric or dogma as one colleague stated, but that has become my perception.

These pioneers have been extremely polarizing in their thoughts and critiques of others lately.  I am certainly OK with criticism and the elevation of thought; I just think it should be done in a kind, professional, and collaborative manner.  What happened to politely learning to agree to disagree?

Word choice is an important attribute when debating or collaborating.  I am not great at word choice when debating on the fly.  When someone says I am wrong, I can take it, and I can listen to their point of view.  But when someone continuously attacks and says your idea is wrong, it does not foster a learning environment.  I think we have missed the human side of a debate.

There are many people I respect and learn from in the industry of software testing.  Ideas should be challenged, but challenged in a human compassionate way.  We should push each other to be creative thinkers, but not at the expense of destroying relationships.

Although some people are more skilled than others, we should not put ourselves on a mountaintop and declare, "I am right; therefore, everyone else is wrong."  It is certainly OK to think that way, but not to belittle the thoughts of others along the way.

I hope the attitudes temper and we can get back to collectively improving our craft of software testing.

I will end with a humorous quote from a colleague - "I am Polish, so I know all about Czech's!"

Keep on Testing!

Sunday, September 21, 2014

ISO 29119 - What not to do!

This post is inspired by Michael Bolton's post here.

I am a software tester.  I am an advocate for rapid and creative testing.  Think and do not follow!

I am formerly a chemist; I managed an Ambient Air Analysis laboratory.  Our laboratory had many other divisions analyzing water, soil, and other samples that had to comply with EPA protocols.  All of these protocols were based on a documented government standard.  The irony of environmental analysis relative to protocols was that if you found a better way to test for something, you were WRONG!  You could get the governing body to draft an amendment to the protocol, or actually convert it into a new standard, but that would most likely take years.

Because some of us do things differently in testing software are we WRONG?

One funny story: to do environmental analysis you had to have "certified" reference standards.  I discovered on an audit/tour of a gas standards company that the standards they were selling were certified against an expired standard.  The further irony was that some gas standard companies certify their newly generated standards against standards they themselves had prepared: "I created the certification standard, and I sell 'certified' standards to the public."  Sure seems like a wolf in the hen house.

One of the most frustrating things about working in the environmental laboratory was the government audits.  If you did not follow protocol to the letter, you risked large fines and even loss of business.  Something as simple as failing to put your initials in a laboratory logbook, or an expired training record, could result in a fine.

I believe I was a good chemist solving real-world environmental problems.  When the lab was bought and I was told to run only certain kinds of samples, in compliance with certain standards, to maximize profit, I changed careers!  I was no longer permitted to innovate and solve environmental challenges.

When I hear debates like the one on ISO 29119, all of those laboratory frustrations resurface.  Today I lead a fantastic team that tests a family of web sites designed to help create fun vacations for families and groups of friends.  Does an ever-changing website really need to comply with some standard?  I think not.  Do we want a quality product that delights our customers?  Absolutely!

Now there may be software that requires a high degree of rigor and I get that. Just because you follow some guidance does not mean the software complies with a standard.

In the business of analyzing gas samples the oracle was a certified reference standard.  What is the oracle for "Perfect Software"?

I also worked for a software company that was attempting to achieve a high level of CMMI certification.  The work became sitting in review meetings eight hours a day, then finding time to actually test the software.  This routine involved test plans in Word documents, test matrices, change control on test cases, and so on.  I do not want to go back to that routine.

My vote is test software with a high degree of technical creativity, find the bugs, and never follow any mandated guidance.

Honestly if I get a copy of ISO 29119, I will probably read it as a reference of what not to do!

Happy Testing!

Tuesday, August 26, 2014

Reply to the Two Hour Challenge

I am finally getting around to my answer to the two hour challenge.  I appreciated the two people who attempted to tackle it.

Here would have been my approach:

  • 25 minutes exploring the site/application gathering context and testing ideas in a mind map
  • 5 minutes organizing the test ideas in order of importance or likelihood to uncover bugs
  • 60 minutes executing the test ideas in order of importance, taking notes during execution.  The notes would be tagged - bug, question, observation, enhancement, issue, and action
  • 30 minutes summarizing my findings in a report where the report includes a list of bugs, questions, observations, enhancement suggestions, issues, and future action items.  The list of action items would include recommendations on additional test ideas I would recommend the team execute.
Hopefully based on these findings, I would be contracted to do additional testing.
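For anyone who wants to make the structure concrete, here is a minimal Python sketch of the timeboxes and note tags above.  The function names are mine, purely illustrative, not from any real session tool:

```python
# Illustrative sketch of the two-hour session layout described above.
# Durations are in minutes; tags match the note categories in the plan.
SESSION_PLAN = [
    ("explore and mind-map test ideas", 25),
    ("prioritize ideas by importance/likelihood", 5),
    ("execute ideas, taking tagged notes", 60),
    ("summarize findings in a report", 30),
]

NOTE_TAGS = {"bug", "question", "observation", "enhancement", "issue", "action"}


def total_minutes(plan):
    """Sum the timeboxes - the plan should fit the two-hour budget."""
    return sum(minutes for _, minutes in plan)


def tag_note(tag, text):
    """Record a session note under one of the agreed tags."""
    if tag not in NOTE_TAGS:
        raise ValueError(f"unknown tag: {tag}")
    return {"tag": tag, "text": text}
```

The nice property of writing the plan down this way is that the two-hour constraint becomes checkable: `total_minutes(SESSION_PLAN)` should equal 120.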

Targeted and time-based sessions work!  Try it!

Saturday, July 12, 2014

Two Hour Challenge

It is very safe to say that I have completely blown my objective of writing a weekly blog post in 2014.  I could analyze the plethora of reasons, but that is really not important.  What is important is that I have been inspired to challenge the testing community.

In my opinion, testing has been reframed from "what testing does it require to generate a great product?" to "what great testing can be accomplished in a given timeframe to make the product great?"

The reframing could be worded better, but hopefully you get my point.

So here is the two hour challenge:

You are being hired to test a website.  You will be provided only a URL.  And you only have 2 hours to test.

How would you structure those two hours and what would you deliver?

Please post your testing approach as a comment.  Don't be shy because I am sure there are no wrong answers!

I will try my best to share my answer next week.

Sunday, March 23, 2014

And what is your excuse?

I finally made my way through about 70 blogs.  Many educated me and a few were easily skipped.  It was this one that I found interesting - Top 5 Excuses for not having enough Testers Testing.

My Product is not finished yet:  I agree with the article that this excuse is silly.  The best and perhaps the most important testing happens at the beginning of SDLC.

Quality is everyone's responsibility; No dedicated testers needed:  I very much believe quality is everyone's responsibility and quality is enhanced by having a dedicated tester leading the charge.

We have budget/time constraints:  Oh!  This excuse is so very true.  This is where experienced testers add a tremendous amount of value by executing risk-based testing and Session Based Test Management (SBTM).  In the world of continuous delivery, time constraints certainly play a more important role in the land of excuses, so creativity and automation are highly valued.

My product is perfect.  It does not need testing:  This one is just laughable.  Hand the team a copy of Perfect Software by Gerald Weinberg.  Honestly it does not take much effort to find flaws in almost every software product today.

A separate QA team can build an 'Us vs Them mentality', which is not Healthy:  I have to admit that I have heard this one too.  And I agree with the article that this sentiment boils down to culture and style.  Agile software teams today should have a set of roles responsible for building great software regardless of the management structure.

I think there may be a couple more excuses floating around.

Our customers will let us know if we have bugs in our product:  This one is very sad, but I think it is true for some web applications.

Revenue is more important than product quality, just deliver it on time:  I think this may be a true excuse for young entrepreneurial companies.

The developers are doing enough testing:  In my opinion, add a great tester to this team and it may just be humbling.

I set out to think of 5 additional excuses, but I am afraid I am going to fall two short.  

I think we should all focus on making great software and a little less on excuses.  We are human and yes we all do make mistakes.  I would rather a colleague catch my mistake than a customer.

Happy Testing!

Sunday, March 16, 2014

Testing versus Winning the Lottery

I just returned from a wonderful vacation at Seagrove Beach, Florida.  I really did not have a clue what to blog about until I reflected on the vacation.  On our road trip we stopped at a Subway/gas station.  I observed many interactions at this place, but the one that piqued my curiosity was the two ladies who spent $32 each on Powerball tickets and the family who sat at a table rapidly scratching their pile of scratch-off lottery tickets.  I guess I was amazed at how they could spend their time and hard-earned money on such long-shot purchases.

As it relates to testing, it seems like testers may spend most of their time rapidly looking for long shots.
Great testers typically do not rely on random luck, but I feel like there are some similarities.

Sometimes we testers throw money at the problem like the two ladies.

Sometimes we collaborate like the family all doing the scratch off tickets.

I did not observe the diversity of the scratch-off tickets, but I could assume that the family strategically selected the tickets they suspected might pay off.  We testers do the same thing, using a risk-based approach to testing.

I believe the two ladies relied on the random generation of Powerball numbers.  We testers use random inputs all the time, hoping to hit the defect jackpot.

My conclusion is that testers gamble often.  Our jackpot just happens to be bugs!

Sunday, March 02, 2014

A/B Testing Experience Report

First I must start out with a huge apology to Lisa Crispin.  I had promised her a brief experience report on A/B testing a few weeks ago and I failed to deliver.

I thought it would be a good topic for a blog post.

First I would like to clear up some potential confusion.  Recently I have heard people refer to A/B testing as Test Driven Development.  Although I think the phrase is applicable, it confused me because I think of Test Driven Development (TDD) as: write your tests, then your code.

So in the spirit of A/B testing, you write your experimental design and then execute against that design.  The two certainly seem analogous, but I get confused in conversation and thought perhaps others might.

I first learned about A/B testing in 2002.  I was with a small start-up and we were trying to follow XP patterns at the time.  I recall doing a couple of successful A/B tests on new features, but I really do not recall the actual mechanics.  We did leverage a BigLever Software product called Gears to rapidly establish feature sets, but I was not privy to the server mechanics, or I simply do not recall.  The tool set did allow us to expose a customer base to feature A in a controlled fashion.

Today, in my opinion, A/B tests take a much more sophisticated approach to scientific design.  You will also hear this technique referred to as multivariate testing (MVT).  I am amazed at how much design takes place today in order to have a successful A/B test.  The key piece, in my opinion, is having robust mechanics for measurement.  Sometimes a small statistical measure of variance can make a huge difference in the success of a business.

Here are the key components to a successful A/B test:

  • Hypothesis
  • Tools for Measurement
  • Mechanism for Traffic Control of the User experience
A/B tests can be extremely simple or extremely complex.  Most of the time the experiment is designed to evaluate user behavior with the hope of directing that behavior to improve a business result.  I think a guiding principle is to only adjust one variable at a time.

A hypothesis can come from anywhere in an organization, but in my opinion it takes a diverse team to evaluate the data and draw a meaningful conclusion that might improve the business.

Example hypotheses:
  • If we increase the button size, will more people click it?
  • If we change the checkout flow from vertical to horizontal, will the experience be better and sales increase?
  • If we change the color from blue to green, will we have more customer retention on the web page?
  • If we use a larger image size, will more people buy the product?
  • If we move widget A above the fold, will customers be more likely to use the widget?

Example Tools for measurement:

Mechanics to control traffic flow:

Load Balancer

There are many more tools, and even companies whose business model is based on multivariate testing.  However, these are the tools I am familiar with today.
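Besides a load balancer, traffic can also be split in application code.  A common pattern is to hash a user identifier so the same visitor always lands in the same variant.  Here is a hedged sketch; the function name and the 10% split are illustrative, not any particular vendor's API:

```python
import hashlib


def assign_variant(user_id: str, experiment: str, b_percent: int = 10) -> str:
    """Deterministically bucket a user into variant A or B.

    Hashing user_id together with the experiment name keeps the
    split stable per user but independent across experiments.
    """
    digest = hashlib.md5(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # a stable number in 0..99 per user
    return "B" if bucket < b_percent else "A"
```

With this approach roughly `b_percent` of users see variant B, and a returning visitor is never bounced between variants mid-experiment, which would contaminate the measurement.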

The ability to conduct A/B tests is dependent on many variables, but here is the basic approach.  

You know that X number of clicks on Image A happen per day, using Google Analytics.  You would like to increase the number of clicks by 10%.  Your hypothesis is that image size will make a difference.  So you design a web page that has a 200 x 400 image, and you also design a web page with a 400 x 800 image.  You deploy each of these web pages to a separate web server.  In order to get a statistically significant sample, you divert 10% of the web traffic to web server B, which has the design with the larger image.  You believe one week's worth of data will be statistically significant.  You measure the clicks per day over the course of that week in order to determine whether the larger image increased the number of clicks.
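Whether the week of data actually shows a real lift is a statistics question.  As a rough sketch (the click counts below are made-up numbers, and this is only one of several valid approaches), a standard two-proportion z-test can be computed with nothing but the standard library:

```python
import math


def click_rate_significant(clicks_a, views_a, clicks_b, views_b, z_crit=1.96):
    """Two-proportion z-test: is B's click-through rate different from A's
    at roughly the 95% confidence level?  Returns (significant, z)."""
    p_a = clicks_a / views_a
    p_b = clicks_b / views_b
    # Pooled proportion under the null hypothesis that A and B are equal.
    p_pool = (clicks_a + clicks_b) / (views_a + views_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / views_a + 1 / views_b))
    z = (p_b - p_a) / se
    return abs(z) > z_crit, z


# Hypothetical week: A got 900 clicks on 10,000 views (9%),
# B got 120 clicks on 1,000 views (12%).
significant, z = click_rate_significant(900, 10000, 120, 1000)
```

With those hypothetical numbers the difference clears the 1.96 threshold, so you would have evidence the larger image helped; with a smaller gap the same test would tell you the "lift" could just be noise.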

Unfortunately I am not able to share specific experimental designs or results, but I think multivariate testing is an important aspect of modern rapid software development.  If you get the desired result from your experimental design, it is just a matter of flipping a switch and your customers are on the new design.

Another cool aspect I forgot to mention: part of your traffic is the control, seeing the normal experience.  There is a ton of literature on the topic and I will never claim to be an expert.  But I do believe that well-thought-out tests coupled with accurate measurement can improve your business and customer experience.

Sunday, February 23, 2014

Are QA Managers necessary?

Unfortunately I did not attend the Agile Austin QA Special Interest Group meeting last Wednesday, but they did explore an interesting topic.  A couple of my colleagues did attend, so I got a bit of an overview.  Since the topic is somewhat hot off the press, I am going to explore the issue.

Are QA managers required in an Agile development world?  Since I am a Director of QA of course my first inclination is to say, “Yes”. 

Is it necessary for the team to include a QA Manager to create great software?  I would conclude, “No”.

Who makes the tough decisions?  I would like to think this is a role of a QA Manager, but a team could certainly make the decision.

Who looks out for the best interests of the testers?  Perhaps we need a union, but I would think this is a role of a QA manager.

Who provides mentorship?  Could be the QA manager, but I think anyone with an experience to share can be a mentor.

Who puts together the budget?  I think this could be the role of a QA Manager.

Who does the hiring?  The entire team should be involved, but I think the decision boils down to the QA Manager.

Who does the people management such as career growth or disciplinary actions?  I would conclude the QA Manager, but it could be any people manager.

Do QA managers get in the way of Agility?  They could, but I think a good QA manager would not get in the way and would be an advocate for innovation and change.

Why was this topic even pondered at a QA SIG?  For this answer I wish I had attended.  I suspect as companies grow, layers of bureaucracy cause frustration and a reduction in speed.

Without the QA Manager role, what would be a logical career path for testers?  Oh yes, I believe testing is a career, but a tester could be perfectly satisfied focusing on a technical growth path and not on management.  I do like the fact that over time testers have a choice.

I have pondered several questions in this exercise, but I do not think I have scratched the surface on this topic.  I am leaning toward the cop out answer of “it depends”.

If every tester were self-motivated, self-correcting, self-reflecting, and great at their craft, then I would have to say QA Managers are not needed.  However, with the inherent complexities within a company and various degrees of learning, I think QA Managers can serve a very important role.

I have been managing projects and people for almost 30 years, so I believe it is an important role.

Some of the software purists believe that there is no need for QA at all and that code can be so perfectly created that it encapsulates the requirements and essence of the product being delivered.

Many of the “Great” testers that I know are either managers or corporate consultants, so I would conclude leadership is vital from a quality perspective.

It is a delicate dance that probably depends on the size of the company, the structure of the company, and the complexities of the application under test.

Do you have an opinion?

Sunday, February 09, 2014

Eurostar 2014

Well I could not really find anything exciting to write about this week.  I did however submit a couple of topics to present at Eurostar 2014.

I am hoping the abstracts are written well enough to have a chance.

Here are the two abstracts for critique and input.  Although already submitted I would still welcome any thoughts or questions.

Track Talk - Leading Collaborative Testing at Scale

Carl Shaulis will share experiences and practices executing collaborative testing at a company-wide scale.  The testing dojo can take on many dimensions, from paired testing to team testing, and grow into a company-wide testing initiative.  The collaborative process is based on the principles of Session Based Test Management, Agile practices, and quality-focused leadership.  Carl will elaborate on the experience of managing the effort, reporting results, applying daily retrospectives, and actively involving stakeholders outside of QA or development.  Everyone will learn how to lead a large corporate-scale test dojo with a high level of confidence and success.

Discussion Topic - Test Case Nightmares

Everyone has experienced the nightmare of combinatorial mathematics as test cases and variance increase for modern applications.  The reality is that with rapid software development and many new devices you must randomize the testing or become good at determining risks.  Carl will discuss the pros and cons of test case management.  Carl will also offer techniques using mind maps, risk factors and a game that can help inject variance into test execution.  Everyone will walk away with effective ways to manage your test case nightmare and have a little fun along the way.

Sunday, February 02, 2014

Fun with Test Cases

The inspiration for this short post comes from two sources.  

  • A Tweet last week by Noah Sussman, "For instance: Q. How do you manage your test cases? A. Can't talk. Shipping Code."
So my direct answer to Noah was: review, then delete, when it comes to test cases.  Historically, testing practices advocated building a mountain of test cases so your product would be "completely" tested.  Should your test cases be Mount Everest or a small dirt pile?  I prefer the dirt pile because it would be easy to move in a reasonable amount of time.

I think it is OK to develop test cases as a source of your testing ideas, but you need to be willing to rapidly abandon some of those ideas.

I am toying with the concept that if a test case is executed 3 times and no issues are uncovered, then we abandon the test case.  Alternatively, we should change the test mission.  If it happens to be a test case that checks a critical element, then automate it.
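As a sketch of that "retire after three clean runs" heuristic (the field names, statuses, and threshold here are illustrative, not a real test management tool's API):

```python
# Sketch of the "abandon after 3 clean runs" idea described above.
CLEAN_RUN_LIMIT = 3


def record_run(case, found_issue):
    """Update a test case's run history and decide its fate."""
    if found_issue:
        case["clean_runs"] = 0  # reset: the case is still earning its keep
    else:
        case["clean_runs"] += 1
    if case["clean_runs"] >= CLEAN_RUN_LIMIT:
        # Retire it - or automate it instead if it guards a critical element.
        case["status"] = "automate" if case.get("critical") else "abandon"
    else:
        case["status"] = "active"
    return case["status"]
```

The key design choice is the reset on a found issue: a test case that keeps finding bugs keeps its place, while one that has gone quiet gets abandoned or, if critical, promoted to automation.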

Another idea I am toying with is what I call "Test Case Craps" (TCC).  In today's testing world there are numerous browsers and devices, so this game can allow you to randomly spice up your testing.

The game requires two dice and a list.  The list is mapped to the values of the dice.  In this example I mapped the most commonly used browsers to the most frequent dice totals.

2 - 2.78%     Firefox previous version (-1)
3 - 5.56%     Firefox latest version
4 - 8.33%     Chrome previous version (-1)
5 - 11.11%    Chrome latest version
6 - 13.89%    IE 10
7 - 16.67%    IE 11
8 - 13.89%    iPad (iOS 6)
9 - 11.11%    iPhone (iOS 6)
10 - 8.33%    Android tablet
11 - 5.56%    Android phone
12 - 2.78%    Safari latest version

Before you execute a test case, roll the dice.  Use the matching device or browser to execute the test case.  This list can be in whatever context you choose (e.g., the 12 most critical functions of the AUT), but it can add some randomness and fun to your testing.
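The dice roll itself is easy to simulate.  A small Python sketch, mirroring the browser table above (substitute whatever list fits your context; the function name is mine):

```python
import random

# Two-dice totals (2-12) mapped to test environments, mirroring the
# table above - the more probable totals get the more common browsers.
BROWSER_BY_ROLL = {
    2: "Firefox previous version", 3: "Firefox latest version",
    4: "Chrome previous version", 5: "Chrome latest version",
    6: "IE 10", 7: "IE 11",
    8: "iPad (iOS 6)", 9: "iPhone (iOS 6)",
    10: "Android tablet", 11: "Android phone",
    12: "Safari latest version",
}


def roll_for_browser(rng=random):
    """Roll two dice and return (total, environment to test on)."""
    total = rng.randint(1, 6) + rng.randint(1, 6)
    return total, BROWSER_BY_ROLL[total]
```

Because two dice naturally produce a triangular distribution peaking at 7, the mapping gives you weighted randomness for free: IE 11 comes up about one roll in six, while the edge-case browsers come up about one roll in thirty-six.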

So the moral of this short post is to rapidly abandon test cases that do not provide value and have fun!

Sunday, January 26, 2014

“Bull Shit” – Great testers do exist and they are important to the rapid delivery of excellent software!

The title uses the shout-out straight from "Cotton-Eyed Joe".  I do live in Texas.

I am going to explore a common thread found in these two articles discovered via Twitter.  The common thread is we must deliver software fast.

First I am going to offer an opinion on this article that Michael Bolton commented about on Twitter -

Second, I am going to share a couple of opinions on leveraging Kanban effectively.  In my opinion, delivering software fast does not necessarily impact creativity; rather, you embrace creativity.  This interview was discovered in a tweet by SmartBear Software linking to this interview -

I must admit Mr. Stanton’s article got me riled.  First of all, it seemed to imply that Google does not have testers, which is absolutely false.  In fact, Google has some amazing testers.  If you get a chance to speak with Ankit Mehta at Google, you will learn that integrated testers help keep the quality high on the more critical Google applications.  You will also learn that it is the community of testers that shows the developers how to test and implement automation effectively.  In my opinion they are well-trained ninjas.

James Whittaker, one of the authors of the book “How Google Tests Software”, seems extremely developer-centric in some of his writings and presentations.  I have to admit the book is still in my queue to read, but having listened to Mr. Whittaker speak, I think he would admit there are some great testers around and that testing is critical.

I would agree with one aspect about Mr. Stanton’s post and that is many developers do not know how to test nor do they think they should test.  Great testers can be important educators for teams like this, if the team is willing to learn, listen and continuously collaborate.  It truly does require a culture shift that includes great testers in the paradigm.

I will try to summarize my rebuttal with some key bullet points:
  • Whether Scrum or Kanban, try to remove the QA handoff by involving a great tester all along the way.
  • Great testers can critique design, offer strategic planning, conduct code reviews, extend unit tests, and even help write code, especially automation.
  • Great testers can assemble the release and deploy the product.  Once deployed, they know how to quickly sniff out risk and locate critical defects the team may have missed.

Embed a great tester and treat the tester as a first class citizen, then you will see some amazing results.

I do like this statement “Design a workflow that requires developers to wipe their own behinds, by writing automated tests for and testing their own code.”  A story should not be complete until a reasonable amount of test automation is in place.  When a critical defect is found by a customer, the developer should prepare to apply some Desitin cream to their behinds by increasing the test coverage and manually experiencing the final product for excellence.

Speaking of delivering excellence, I think that is what David Hussman is about.  David Hussman provided an excellent keynote at "Keep Austin Agile" conference in 2013. 

The interviewer in the link above implies that because we have a desire to deliver software fast that we skimp on creativity.  My opinion is if you skimp on creativity and innovation you may not be creating an excellent product.  Side bar – I do love Mr. Hussman’s use of music to describe building great software.

The interviewer also seems to imply that using Kanban there is no room for creativity.  Again I have to disagree.  Kanban is all about the flow of delivering value.  A story can sit on that board for a period of time as long as other value is being delivered or until the value of that story is truly recognized.  Teams today can easily deliver pieces of functionality in the “dark”, providing time for the team to be creative as they develop.  It is the sum of all the parts that creates excellent software.

Kent Beck predicted that delivering software would get faster in his keynote at the 2009 STP Conference.  Software is being delivered fast today, but delivering excellent software that delights customers is the key.  I think we must permit creativity.  In fact, I have seen teams with daisy-chained Kanban boards where the first board outlines the creative process of design and experimentation.  From my experience, developers and testers are extremely creative and nimble in their creativity.  Perhaps we can call it "Rapid Creativity".

Kent Beck was right and so is Jeremy Stanton in that we must adapt and change the way we test software in today’s rapid paced world.

Building a team that can deliver great software fast is a great challenge.  Building a great process integrating testing throughout is not easy.  But I think an important part of that team should be a great tester.  Great testers do exist!