Tuesday, December 28, 2010

Kanban - Part II

Yesterday we had a brainstorming session on what kanban would look like for the team.  I must say it generated a ton of good conversation.  Here are a couple of key points that I remember.

Someone mentioned that a team of testers could spend a majority of their time in the design phase, especially facilitating the development of Cucumber scripts.  One suggestion was to have a single tester focus on this area and rotate the responsibility weekly.  Another was to pair a tester and developer on each story.  The underlying premise was that if effective Cucumber scripts are developed, then testers further down the process will know what to test even if they did not participate in the design phase.

Another key point was with respect to the current defect backlog.  How do we keep the defect backlog from building?  The team concluded that we need a swim lane in the process dedicated to fixing defects.  David Anderson shows a good example in his book where X% of the capacity is reserved for defect resolution.

What happens if a story is full of defects?  The conclusion was that the story remains in the process and is not released.  Since the team inspects the process daily, they can make adjustments to provide potential solutions.  One approach might be paired development, which provides real-time training and ongoing retrospectives on what is happening with the story.  The ultimate conclusion was that even if the team only got one story deployed in a week, they were still adding incremental value to the business.  This should not happen often, but the bottom line is that it is OK to tweak the process on a daily basis and focus on deploying high-quality, valued features.

The one scary piece that we did not resolve was dealing with complexity.  Occasionally an Epic story comes into the team's backlog.  During the design phase we should be able to dissect and break the Epic into smaller stories, but the concern was how to make sure the stories get implemented in the right order and that all of the stories that comprise the whole get assembled, tested, and deployed.  Adjusting for complexity will be a fun challenge, but a challenge nevertheless.

We talked about buffer zones and capacity limits.  We have no clue yet where to set the capacity limits.  It was suggested that we minimize WIP so that we can gain confidence in the process and trust in the quality of the features being produced.

I must admit I am excited about this approach.  I know we will fall flat on our faces a couple of times, but we will get up, collaborate, and continuously improve the process.

Stay tuned for Part 3.  We plan to walk through some real scenarios.

Friday, December 24, 2010

Kanban

I am wide awake on Christmas Eve at 5:20 AM.  I should be sleeping!  Instead I am reading Kanban, a book by David Anderson.  Why, as a tester, am I reading about kanban?  Well, one of the teams I work closely with is about to implement kanban in 2011 in an attempt to improve their process.  So I ponder how testing has to change in order to fit into this paradigm.

I am not even going to attempt to describe kanban in this post, but I do want to convey that I got a bit excited when I saw the first step David Anderson recommends for moving to kanban.  He says the first step is to "Focus on Quality".  As a tester you have to like hearing those words.

I also agree with these points.
  • Both Agile and traditional development approaches to quality have merit.
  • Code inspections, if done right, can improve quality.
  • Collaborative analysis and design improve quality.  I believe testers should get involved early in the process!
  • Using design patterns improves quality.
  • Using modern development tools improves quality.
As pointed out in the book all of these things improve trust and confidence in the process.

The team using kanban will be doing weekly releases.  They have been doing weekly releases, but development and test cycles were taking four weeks (2 in development and 2 in test).  The product owners were not getting their new features in a timely manner.

In December the team has been focusing on reducing the backlog of defects and discussing the implications of kanban.  In my opinion, the biggest fear of this change is coming from the testers.

I contend there is nothing to fear.  A story will move through this flow:

Idea > Design > Work in Progress (code, unit test, code review, automation) > Acceptance Test > Final Test > Production

If there are 10 stories in the pipeline and 5 testers, then each tester is responsible for 2 stories.  Testers should be able to test 2 stories in a week, right?  OK, it does depend on the size of the story.  The fear comes from the regression test phase.  My philosophy is to automate regression tests where possible and focus on Exploratory Testing.  If testers cannot get over their fear, perhaps any manual regression testing should be focused on the area(s) of highest risk.

I predict that the process will be bottlenecked in two areas.  The first bottleneck will be in design.  As part of design the team will be writing Cucumber tests.  Through collaboration the Cucumber tests will make sure everyone understands the feature, and testers will know what to test early in the process.  This will take time.
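To make that concrete, here is a hedged sketch of what one of those Cucumber scripts and its step definitions might look like; the scenario wording, locators, and @browser wiring are illustrative, not our actual scripts:

# Scenario, in the .feature file:
#   Scenario: Searching returns a results list
#     When I search for "kanban"
#     Then I should see a results list

# Step definitions, assuming a @browser set up in the Cucumber World
# (selenium-client style, as used elsewhere on this blog):
When /^I search for "([^"]*)"$/ do |term|
  @browser.type "css=input[name='q']", term
  @browser.click "css=input[type='submit']", :wait_for => :page
end

Then /^I should see a results list$/ do
  @browser.element?("css=div.results").should be_true
end

The point is less the code and more the shared artifact: everyone agrees on the scenario during design, so the tester already knows what "done" looks like.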

I believe acceptance testing will go fast, but the second bottleneck will be the final testing phase.  Unfortunately testers are still focused on "Perfect Software".  I contend that the testers will have to implement test strategies that can be completed in 1-2 days.

I am looking forward to seeing this in action.

The extremely nice thing about kanban is that you know your pain points because you see them on the flow chart.  More importantly, kanban expects you to change the process in order to remove the pain points.

Over simplification?  Maybe!  As Larry the Cable Guy might say, "Git R done!"

I am definitely looking forward to this challenge in 2011.  I would welcome any comments from testers who are actively doing kanban.  I suspect there are not many.

Sunday, December 19, 2010

Waterfall vs Agile

This past week Lanette Creamer posted some statements on her great blog with respect to her current team.  She implies that Waterfall teams are better at communication than Agile teams.

I would contend that the methodology has little to do with it.  Take a motivated and well-spoken tester like Lanette, combine her with other motivated team members, and you will get good communication.

Motivated people make communication happen.  I will go one step further and say good communication makes for successful projects.

I have worked on waterfall, v-model, and agile projects.  If given the choice I will choose agile.

With waterfall and v-model I always found the massive amounts of documentation, document review, and the cumbersome process of reviewing changes to the documentation to be tedious.  In my opinion at times the process hindered communication.

With agile, lean, or Kanban we are afforded the opportunity to learn from our mistakes and quickly change the process.

But as to Lanette's point, agile projects can only be successful with excellent team communication.

So, everyone who would like their projects to be successful: sharpen your communication skills.

Monday, November 29, 2010

Books I am currently reading

All testers MUST read Perfect Software and Other Illusions about Testing by Gerald Weinberg.  I just finished reading this book and I found the content extremely useful and inspiring.  If you test software, read this book.

Now I am reading Lessons Learned in Software Testing by Cem Kaner, James Bach, and Brett Pettichord.  I am only a little ways into this book, but I am finding it to be spot on.

Next in the queue is Kanban - Successful Evolutionary Change for Your Technology Business by David J. Anderson.

If you are reading a great testing or development book, please make a comment so I can get more books into my queue.

Read on!

Sunday, November 21, 2010

Think Times for Performance Testing

After only 10 years in the software industry, I am probably naive, but a recent post by Dan Barstow has caused me to question the use of think times.

Our current performance testing approach does not use think times.  I have used think times in the past for various applications, and I have always found them to be somewhat arbitrary and subjective.  I can definitely see the value if your objective is to really understand the number of concurrent users the application can handle, but typically our objectives are focused on the number of transactions a system can process.  Also, if you are using virtual users you are truly not representing a browser.  A browser, depending on type, could leverage 6 threads at a time making requests.  So how do you adjust the think time between thread A getting an image and thread B retrieving JavaScript and firing the script?

Specifically, we look for the optimal TPS value where 95% of request latencies are less than 1 second and 99% of all request latencies are less than 4 seconds.
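To make those criteria concrete, here is a minimal Ruby sketch (mine, not our actual harness) that checks the two thresholds against a list of measured latencies:

# Hypothetical latencies in seconds collected during a run
latencies = [0.2, 0.4, 0.5, 0.7, 0.9, 1.1, 3.2].sort

# Simple nearest-rank percentile over a sorted list
def percentile(sorted, pct)
  sorted[((pct / 100.0) * (sorted.length - 1)).round]
end

puts "95th percentile: #{percentile(latencies, 95)} s (criteria: < 1 s)"
puts "99th percentile: #{percentile(latencies, 99)} s (criteria: < 4 s)"

In practice the latencies would come from the test tool's results log rather than a hard-coded array.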

In my previous career I was a chemist.  On some regular frequency we had to calibrate our analytical instrumentation.  We would generate a 5-point curve using known standards.  Typically we sought a linear relationship where the low point was near the detection limit and the high point was near the maximum detection point of the instrument (e.g., a gas chromatograph).

When performance testing, ramping virtual users in time intervals is similar to generating this calibration curve.  What I am typically looking for is how high we can push the curve while still staying below the high point of the instrument's capability, where in this case the instrument is a configuration of servers and services.

I guess my philosophy is that by eliminating think time I can get to that high point a bit faster.  Since we are testing a system, my criteria apply to the total request and response.  So whether the request is for an image or for a database object, as long as that request is returned within the criteria stated above, we are within calibration.
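As a toy illustration of reading a ramp like a calibration curve (the numbers below are invented), step up the virtual users, record the TPS at each step, and flag where throughput falls away from the linear trend:

# [virtual users, measured TPS] at each ramp step (hypothetical data)
steps = [[10, 95], [20, 190], [40, 380], [80, 610], [160, 640]]

# TPS per VU at the low point of the "curve"
baseline = steps.first[1].to_f / steps.first[0]

steps.each do |vu, tps|
  linear = baseline * vu
  flag = tps < linear * 0.8 ? "  <-- leaving the linear range" : ""
  puts "#{vu} VU: #{tps} TPS (linear would be ~#{linear.round})#{flag}"
end

The step where throughput stops scaling is the "high point" of the system's capability.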

My opinion is that if you are using virtual users, then transactions per second is the important metric.  If you are using real browser users, then think times might be important, but still subjective.  Regardless, the objective is to do the best job possible to validate that your site gives customers a good experience.  I guess this can be achieved with or without think times.

For the record, I have done some performance testing of a JSON API where you could not even conduct the test without some level of think time.  So I will refine my conclusion to state that it depends on your objective and the application under test.

Please help educate me, because I am not in the think time camp unless it is necessary.

Wednesday, November 17, 2010

Selenium Fury has been released!

In my humble opinion, the page object pattern can be extremely important for a test framework.

Do you have an ever-changing UI that causes your locators to frequently break?

With ASP.net, do your developers frequently move elements around the page?

Do you have a brand new page that you quickly want working locators for?

Selenium Fury just might be the tool to assist you.

I have posted in the past about generators and validators.  Scott Sims has rolled this functionality up into a nice gem.  The gem is easily extensible.

Check out Scott's post, use the gem and let him know what you think!

http://scottcsims.com/wordpress/?p=251

Oh, it is so fun being a tester!

Thursday, November 11, 2010

Pirates vs Ninjas make it to Prime Time

I was catching up on some recorded shows, watching my new favorite, The Defenders.  Funny that it is set in Las Vegas, the last location of STPCon.  At any rate, last night's episode involved a hacker, and the main characters mentioned pirates and ninjas when talking about him.  Were they observing Adam's and Lanette's lightning talks at STPCon?  I found it very funny to hear pirates and ninjas mentioned regarding software on a prime time show.

The lightning talks were one of the highlights of the conference.  They were put on as breakfast bytes by Matt Heuser.  I love the format of lightning talks.

Since STPCon I have done some thinking about "pirates being better than ninjas".  I have to respectfully disagree with Adam Goucher.  I think ninjas are better than pirates.  Pirates might support developers in a loud and boisterous manner.  Ninjas would support developers in a silent and stealthy manner.  The good news is that testers exist to support development.

I guess it was obvious that I lean toward ninjas, hence the title of this blog.  That reminds me, I need to get my ninjapus image uploaded to this humble blog.  My kids worked really hard prior to STPCon to help me create my logo.  I need to shrink it to appear on the blog, but I can certainly include it in this post.  You have to love the power of a ninja with 8 arms and multiple weapons for testing.  I think my kids, 11 and 14, did an excellent job!  Go Ninjas!

Wednesday, November 03, 2010

Tester and Developer Pairing

I know at STPCon Lanette Creamer gave a session on "Pairing with Developers".  In hindsight I truly wish I had attended that session.

Currently one of our development teams is pushing out two releases per week.  There are a couple of issues: the code is actually complete in the previous sprint, and it takes test 1-2 weeks to execute regression.  This is not meeting the needs of the business, so we have added "quick releases" with minimum code changes and minimal testing.  On top of that we are also injecting releases where we are conducting A/B testing (more on that later).  In order to keep up an efficient pace, the team is now considering daily releases.  Wow!

This is essentially a shift to kanban.  If you listen to Kent Beck, a team moving to this type of SDLC would essentially remove the testers.  I respectfully disagree.  Testers can still play a critical role in the reporting of quality and in making sure the releases are meeting business and customer needs.  As I ponder this paradigm shift, I believe "Pairing with Developers" becomes essential.

The developer grabs the next story and pairs with an experienced tester.  They work together to design the unit tests.  As the developer implements the unit tests, the tester designs the acceptance tests (Cucumber) and the developer does a code read.  As the developer writes the code, the tester implements the acceptance tests.  The tester should also collaborate with the developer to design and implement any browser-facing tests (Selenium); integration tests should also be considered.  At the end of the day the code should be complete and all of the tests should pass.  If all tests pass, then automated deployment pushes the release for UAT.  UAT testers evaluate the functionality in a non-customer-facing production environment.  If the user acceptance tests pass, then we switch the environment to be customer facing.  In the A/B world we can expose only 20% of the customers to the new feature.  If there are no issues raised by the customers, then we push live.
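As a sketch of the kind of browser-facing check the tester might contribute along the way (hypothetical page and locator, selenium-client style like the rest of this blog):

it "shows the new feature to a user on the A side of the test" do
  browser.open "/dashboard"                        # hypothetical page
  browser.element?("css=div#new-feature").should be_true
end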

There may be other ways to complete the test automation, but this paradigm makes sense to me.  Not all testers have automation experience, so this places the emphasis on education and training.  How quickly will we get there? I do not know.  Is it possible?  Absolutely!

I am looking forward to this challenge.  If any of you are actively doing this style of rapid development, please share your thoughts.

Gotta LOVE testing!

Sunday, October 31, 2010

Top Ten STP Conference Experiences

I plan to write up individual posts on some of the sessions attended at the STP Conference.  In the meantime I thought I would rank the sessions and experiences in order of value to me.
  1. Session 103 - Testers! Get out of the Quality Assurance business - Michael Bolton
  2. Keynote - G Forces in the Organization - Kent Beck
  3. Keynote - Nice Bike: Fueling Performance with Passion - Mark Scharenbroich
  4. Breakfast bytes - Agile Testing Ninjas - Lanette Creamer
  5. Breakfast bytes - Pirates are better than Ninjas Even in the Agile World - Adam Goucher
  6. Workshop - Hands on Performance Testing - Dan Downing and Goranka Bjedov
  7. Session 403 - Ever Been fooled by Performance Testing Results? - Mieke Gevers
  8. Session 305 - From Start to Success with Web Automation - Adam Goucher
  9. Keynote - Top Ten Leadership Skills: Surviving and Thriving in the 21st Century - Kelli Vrla
  10. Party on the Strip - Christian Audigier at Treasure Island (met many great Testers!)
If you attended, what were your top ten?

Friday, October 29, 2010

Better Late than never

I finally realized that there are many great blogs that I follow, so I had better add a blog list to my tiny space of free thought on testing.

What really inspired me to do this was this post:

http://cartoontester.blogspot.com/2010/10/monotonous-context-and-certification.html

The cartoon absolutely illustrates that testers can get caught in a rut.  Collectively we need to continuously remind ourselves to get out of the rut. 

Think, experiment, and do!

I wish I had the creative mind to do great cartoons like Andy Glover.

Tuesday, October 26, 2010

AWK and GREP are your friends!

I definitely plan to write up some lessons learned from the STP Conference, but I thought I would share this quick tidbit for performance testers.

Our website profile is constantly changing, so we inspect our log files to determine the correct mix for our performance testing.  There are many ways to do this, and you can get as complex as you want.  Typically we choose our peak traffic day.

You have an access log (call it access.log) and you want to know the ratio of the number of GETs to the number of POSTs.

This command should give you the total number of requests.

cat access.log | awk '{ print $7 }' | wc -l

This command should give you the total number of GETS.

 cat |grep \"GET | awk '{ print $7 }' | wc

This command should give you the number of POSTS.

 cat |grep \"POST | awk '{ print $7 }' | wc 

You can substitute "less" for the "wc -l" and you will see all of the GET requests on your screen.

You can use multiple grep segments to further refine your data. 

cat |grep \"GET | grep " 200 " | awk '{ print $7 }' | wc 

So for my specific test I am only interested in GETs with a response code of 200.  I could write these results to a file.

cat |grep \"GET | grep " 200 " | awk '{ print $7 }' | wc >> foo.txt 

Yes, some of you already know this stuff, but my main point is that you can learn a ton about your site traffic simply by dissecting the access logs.  You can generate a flat file that can be used by your performance tool of choice to randomly send requests to your system.  You can write a simple script that takes any access log and generates a quick breakdown of the request profile (see the Ruby sketch below).  This profile can be used to proportion your performance test traffic in a similar pattern to your current user base.
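Here is a rough sketch of that breakdown script in Ruby (hypothetical; it assumes a common/combined-format access log named access.log, where the sixth whitespace-separated field is the quoted method and the ninth is the status code):

# Tally method/status pairs across the whole log
counts = Hash.new(0)
File.foreach("access.log") do |line|
  fields = line.split
  method = fields[5].to_s.delete('"')   # e.g. "GET -> GET
  status = fields[8]
  counts["#{method} #{status}"] += 1
end

# Print the profile, most frequent first
counts.sort_by { |pair| -pair[1] }.each do |method_status, count|
  puts "#{method_status}: #{count}"
end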


Go ahead and karate chop your access logs!

 

Wednesday, October 20, 2010

Short Post on STP Conference

It is pouring down rain in Las Vegas.  Yes, I said pouring down rain.  The rain and the long taxi line are preventing us from going back to our condo.  So we are hanging out waiting to go to the big party at Treasure Island.  We will be true geeks and take our computer backpacks to the party.

So here are my short thoughts.
  • Great people
  • Extremely engaging
  • Some material unfortunately is not applicable to the things we are currently focused on
  • Some speakers I could listen to all day long
  • I am eating way too much food
  • I am doing tons of walking, but not offsetting the amount of food and mmmm alcohol
  • I am having a great time
  • And the organizers of the event are doing a FABULOUS job!
  • I hope I can be a presenter in the near future.
  • I hope I can continue to collaborate with some of these smart people.
  • And a final note - I wish I had a large volume of cash
Hopefully I will find the time to post future blogs on all of the things that are currently floating around in my two brain cells.

Have fun!

Thursday, October 14, 2010

STP Conference are you ready for me?

On Sunday I depart with 2 colleagues for Las Vegas to attend the STP Conference.  I am pumped up.  I am ready to learn what is happening in the wonderful world of testing.

On Monday I am attending an all-day hands-on workshop about performance testing.  I do know how to use JMeter, but I am hoping to learn more tricks and gain insight into how others approach performance testing.  Perhaps I have a few tricks up my sleeve to share!

Here are some of the other sessions I am hoping to actively participate in.
  1. Testers!  Get Out of the Quality Assurance Business by Michael Bolton
  2. Becoming an Automation Entrepreneur by Linda Hayes
  3. From Start to Success with Web Automation by Adam Goucher
  4. Ever Been Fooled by Performance Test Results? by Mieke Gevers
  5. Testing in an Agile Environment by Rob Walsh
  6. Test Faster: Model Your Test Process to Test Faster by John Ruberto
  7. Using Open Source Testing at Ford by Frank Cohen
  8. What You Need to Know About Performance Testing by Michael Czeiszperger
I may switch gears and attend other sessions, depending on the value.  There are so many excellent topics to choose from.  I am hoping to meet and network with as many testing gurus as possible.

There are a couple of keynote speakers I am curious about: Kelli Vrla and Kent Beck.  I am sure they will both be interesting.  The real key is to keep me away from the poker tables.  :o)

Stay tuned for future posts on various topics spinning in my two brain cells after the conference.

Happy Testing!

Sunday, August 22, 2010

Page Generator Example

Last time I posted I promised some code.  I finally got around to compiling the code into a format that could be placed on this blog.  I borrowed some of these methods from our framework library, but I think this code illustrates how efficient a generator can be for formatting page locators.  The page object pattern is a valuable approach to an automation framework.

Note that this code calls a test helper file.  I have not included that here, but basically it is a file that specifies the gems that are necessary.

require 'rubygems'
gem "rspec", "=1.1.12"
require "spec"
gem "selenium-client", ">=1.2.18"
require "selenium/client"
require "nokogiri"   # needed by the generators below to parse the page source


In the code below there are two generators.  One can be used to get all of the links on the page and the other is used for other html elements.

Once these generators are executed, the locators can be placed in a page file.  At the bottom of the output the labels are formatted into the correct accessor format for a page object file.

Below is a ton of code so enjoy!  Let me know if you have any questions.

One final comment: not all of the links on Google's home page get formatted nicely.  I did not have time to fix the formats.

Hopefully on the next post I will provide the page file and an example of using the page file.

Enjoy!


require File.dirname(__FILE__) + "/../../spec/test_helper"

describe "Generate home page locators for google" do

  append_after(:each) do
    browser.close_current_browser_session
  end

  it "should generate locators for check boxes, text boxes, text areas, input images,and select boxes for the google home page" do

    my_setup("http://www.google.com/")

    browser.open ("/")

    get_source_and_print_elements(browser)
  end

  it "should generate and format the links from the google home page" do

    my_setup("http://www.google.com/")

    browser.open ("/")

    html = browser.get_html_source

    generate_link_instance_variables_from_html(html)
  end


  #  Below are the methods that help to extract out the components from the html and format them into locators and accessors

  def my_setup(target)

    @browser = Selenium::Client::Driver.new \
    :host => ENV['Host'] || "localhost",
    :port => 4444,
    :browser => ENV['SELENIUM_RC_BROWSER'] || "*firefox",
    :url => target,
    :timeout_in_second => 60

    browser.start_new_browser_session

  end

  def browser
    return @browser
  end

  def get_source_and_print_elements(browser)
    html = browser.get_html_source
    # The generator method is defined below in this describe block
    html_elements_check_boxes = generate_instance_variables_from_html(:html => html, :locator_type => "css", :locator => "input[type='checkbox']")
    html_elements_text = generate_instance_variables_from_html(:html => html, :locator_type => "css", :locator => "input[type='text']")
    html_elements_select = generate_instance_variables_from_html(:html => html, :locator_type => "css", :locator => "select")
    html_elements_text_area = generate_instance_variables_from_html(:html => html, :locator_type => "css", :locator => "textarea")
    html_elements_image = generate_instance_variables_from_html(:html => html, :locator_type => "css", :locator => "input[type='image']")
    html_elements_radio = generate_instance_variables_from_html(:html => html, :locator_type => "css", :locator => "input[type='radio']")
    html_elements_form = generate_instance_variables_from_html(:html => html, :locator_type => "css", :locator => "form")
    merge_and_print_elements([html_elements_check_boxes, html_elements_select, html_elements_text,
                              html_elements_text_area, html_elements_image, html_elements_radio, html_elements_form])
  end


  def generate_instance_variables_from_html(options)
    if options.kind_of?(Hash)
      @html = options[:html]
      @locator_type = options[:locator_type]
      @locator = options[:locator]
    end
    doc = Nokogiri::HTML(@html)
    html_elements = {}
    if (@locator_type=="css")
      doc.css(@locator).each do |html_element|
        attribute_name =  html_element.get_attribute("id")
        attribute_value = html_element.get_attribute("id")
        if !attribute_name.nil?
          attribute_name.gsub!('input-', '')
          attribute_name.gsub!('select-', '')
#          attribute_name.gsub!(/([A-Z]+)/, '_\1')
          attribute_name.gsub!('\\', '')
          attribute_name.gsub!(' ', '_')
          attribute_name.gsub!('.', '_')
          attribute_name.gsub!('-', '_')
          attribute_name.gsub!('__', '_')
          attribute_name = attribute_name.to_s.downcase
          puts "@#{attribute_name} = \"#{attribute_value}\"" if $debug
          html_elements[attribute_name]= attribute_value
        end
      end
    end
    return html_elements
  end

  def merge_and_print_elements(page_elements_types)
    html_elements={}
    page_elements_types.each do |element_type|
      html_elements.merge!(element_type)
    end

    puts "found (#{html_elements.length} elements)"

    html_elements.keys.sort.each do |key|
      puts "@#{key} = \"#{html_elements[key]}\""
    end
    html_elements.keys.sort.each do |key|
      print ":#{key}, "
    end
  end

  def generate_link_instance_variables_from_html(html)

    doc = Nokogiri::HTML(html)
    links = {}
    doc.css("a").each do |link|
      links = format_link_text_from_label(link, links)
    end

    links.each_pair { |key, value| puts "@#{key} =  \"css=a:contains(\\\"#{value}\\\")\"" }
    links.each_key { |key| print ":#{key}, " }

    return links
  end

  # Alternate formatter (not used by the specs above): keys the cleaned-up
  # link text to the link's href attribute instead of its label text.
  def format_link_text(link, links)
    link_content=link.content.to_s.downcase
    link_content = link_content.strip
    link_content.gsub!(' ', '_')
    link_content.gsub!('"', '')
    link_content.gsub!('-', '_')
    link_content.gsub!('+', '_')
    link_content.gsub!('*', '')
    link_content.gsub!('\'', '')
    link_content.chomp!
    #puts "#{link_content} --->     #{link.get_attribute("href")} "
    links[link_content] = link.get_attribute("href")
    return links
  end

  def format_link_text_from_label(link, links)
    link_text=link.text.strip
    link_text=link_text.chomp
    link_content=link.content.to_s.downcase
    link_content = link_content.strip
    link_content.gsub!(' ', '_')
    link_content.gsub!('"', '')
    link_content.gsub!('-', '_')
    link_content.gsub!('+', '_')
    link_content.gsub!('*', '')
    link_content.gsub!('\'', '')
    link_content.gsub!('_&_', '_')
    link_content.gsub!('(', '')
    link_content.gsub!(')', '')
    link_content.gsub!(':', '')
    link_content.gsub!('.', '_')
    link_content.gsub!('@', '_')
    link_content.chomp!
    links[link_content] = link_text
    return links
  end

end

Sunday, July 25, 2010

Pages, Generators, & Validators

We have been working on a framework that leverages RSpec, Selenium GRID, and Team City.  As we wrote code for various projects, we settled on the Page Object Pattern.  I am not going to provide full code in this post, but I would like to share a few definitions.

Pages - A Ruby class that contains the locators and custom methods relative to a single web page.  The class is named after the page title or the common term the team uses to identify the page.  The class is initialized with locators, either XPath or CSS, that map to the objects on the page.  The locators are set as attribute accessors.  Also included in this class are any custom methods that add efficiency to navigating through the page objects.
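A minimal sketch of such a class (the page and locators are illustrative, not from our framework):

class SearchPage
  attr_accessor :search_box, :search_button

  def initialize(browser)
    @browser = browser
    # Locators that map to objects on the page
    @search_box    = "css=input[name='q']"
    @search_button = "css=input[type='submit']"
  end

  # Custom method that adds efficiency when navigating the page
  def search_for(term)
    @browser.type @search_box, term
    @browser.click @search_button, :wait_for => :page
  end
end

The specs then talk only to search_for and the accessors, so a locator change touches one file.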

Generators - A method or a series of methods that grab and format the objects from a page.  One approach is to simply copy the page source and save it as an HTML file, which can subsequently be parsed using Nokogiri.  This approach may be necessary if you cannot readily navigate to a page due to SSL or some other restriction.  The second approach is to navigate directly to the target page and get the source HTML.  Once you store the source HTML, parse it using Nokogiri.  The simplest form of a generator locates all of the ids on a page and formats them into a standard Ruby format.  Here is one example for a link.

@location_link = "css=li.'location-link' a"

A generator can produce the locator names and associated CSS selectors for links, radio buttons, buttons, check boxes, and select boxes.  All of the magic happens using Nokogiri to parse the HTML and format the locators.

Validators - An RSpec file that confirms the accuracy of the locators made by the generators.  A good validator can navigate to a page and execute browser.element? on every locator in the page class.  Any missing locators will be identified.  Validators are very helpful as we do test-driven development, but they are extremely important if you have a rapidly changing UI.  You can rebuild the page files as frequently as needed.  Your automated tests key off these locators, so keeping the locators current keeps your tests current.
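A rough sketch of a validator spec, reusing the hypothetical SearchPage above and a browser helper like the one in my generator post:

it "should find every locator defined on the search page" do
  page = SearchPage.new(browser)
  browser.open "/"
  missing = page.instance_variables.reject { |var| var.to_s == "@browser" }.
    select { |var| !browser.element?(page.instance_variable_get(var)) }
  missing.should be_empty   # any entries here are broken locators
end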

This has been a very effective pattern for our framework.  There are two important elements.  The first is a well-constructed UI.  The second is Nokogiri.  Hopefully in my next post I will provide some real and usable code.

Happy Testing!

Saturday, May 01, 2010

Did the test really pass?

I thought I would share a quick experience since I had not posted in a couple of weeks.  Last Friday I had my headphones on and fingers flying on the keyboard.  I have been really pumped to write code because my coworker has shifted my paradigm from hacking to test-driven development.  I am still learning this approach, but I feel it is the right approach and, more importantly, FUN!

So I created some factories that managed the data.  I created some tests against the factories to validate that the tests would get the correct data from the factories.  I then wrote a data-driven script.  The script iterates through numerous keywords and performs a comparative test between the UI and the back-end XML generation.

When I was done, every test passed with flying colors.  I was stoked.  The next morning it occurred to me that perhaps I should test a negative condition and force the test to fail.  I fed the script all kinds of data that should have resulted in a failure.  The test passed every single time.  What the heck was going on?

After a long pause I recognized that the URLs and HTTP requests I was building were flawed.  It was time to consult the client.  After a couple of quick questions the requests were corrected and the test script worked as expected.

So the lesson I share with you: always, and I mean always, do a karate chop on your test to make sure it fails when it is expected to fail.
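As a minimal illustration of that karate chop (not the original script, just the idea), deliberately feed the comparison data that must fail and make sure the spec actually goes red:

it "should fail when the UI and XML deliberately disagree (sanity check)" do
  ui_results  = ["kanban board"]   # hypothetical UI result
  xml_results = []                 # deliberately empty back end
  ui_results.should == xml_results # expected to FAIL; a pass means the harness is broken
end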

Continue to chop the code!

Wednesday, April 14, 2010

Simple But Effective

Thought I would toss out a quick post about a few simple lines of code that came to save the day.

Have no fear, If Else is here to save the day!

We do product line development where we have twelve unique products whose foundation is a common code base, but each product has variant points and is updated incrementally.

Recently we deployed 4 of the 12 products and numerous automation scripts broke.  A little investigation showed a difference in a single page flow.

To the rescue was:

if browser.element? "some_css_locator"
  # new page flow: click through the extra page
  browser.click "some_css_locator", :wait_for => :page
else
  # old page flow: nothing extra to click, carry on
end

All automation tests are now passing!

The really cool part is that this code will work even when the new page flow is deployed to the other 8 products.

Long live css locators and the If Else!

Friday, April 02, 2010

Do your Homework!

Good morning!  This is a quick post to my blog rather than a tacky post on a user forum.  I believe we all gain very valuable information and assistance from user forums.  I pay close attention to a couple of forums and groups, but here is a snippet of a post on the JMeter forum that gets my feathers ruffled.

"I wnat to perform load,performance and capacity testing for a web site. can
you guide me how to do that
."

Do you see anything wrong with this post?

OK, there are typing errors, but what I see is an individual who has not even attempted to learn about the topic.  These individuals need to install the tools, follow the examples, learn what they can, and then post intelligent questions.

So far the forum has explained to this individual the differences between load and performance testing and provided links to all the starting material.  Now with this post they expect the forum to do the work for them.

Please, people, put in a little bit of effort!

Sunday, March 28, 2010

Code Etiquette

Valuable lessons learned this week in the wonderful world of test coding.

Here are some of my habits, which I hope will soon be old ones!

1.  Write code
2.  Re-factor and compress repeat code into useful methods
3.  Comment out old code to validate new methods

Sounds reasonable, right?

Then I do the following:

1.  Fail to check in the code
2.  Get in a hurry to check in the code
3.  Leave commented out code when checking the code in
4.  Fail to execute the tests as a unit prior to checking in code
5.  Fail to do an update to the code base prior to developing new code

A couple of major flaws in my actions (or non-actions): I check in new code that is already outdated because I failed to do an update, and I check in ugly, crusty code full of meaningless commented-out lines.  My team has absolutely no clue as to where I left off or what demons I was battling.

So I need to start a new set of developer habits as a test developer.

1.  Start each effort updating the target project
2.  Execute a "Diff" on all files that have differences
3.  Closely examine the differences and make smart choices on which file is more current or when to merge
4.  Continue to write code, re-factor, and comment out old lines of code, BUT clean all of this up prior to checking in the code
5.  If I have demons I am battling during development (examples - JavaScript timing issues, or page builders), clearly annotate these areas with comments and TODO lists so team members can at least have a sense of where I left off
6.  Seek help if I am stuck for too long on a small area of code
7.  Test the code, test the code, test the code - meaning test it locally in isolation, test the suite of code locally, and test the code on our production system
8.  Check in my code at the end of each day regardless of whether it is complete.  The code should include clear, concise documentation of any non-functional areas and no crusty commented-out lines of code that serve absolutely no purpose.

I definitely have to find a way to build better developer habits.  The last thing we need is unnecessary tension within a great team.  I am not a developer by nature, so I really have to work at it.  But I do promise to always strive for continuous improvement.

If anyone has other suggestions for skills improvement please leave a comment!

Happy coding!

Saturday, March 20, 2010

Time to Refactor

It is a rainy day in Austin, Texas, and I decided it was high time to refactor my blog.  My test automation adventure started off learning commercial off-the-shelf tools: Silk Performer, Silk Test, Webload, and QTP.  Eventually I recognized the advantages of open-source testing tools and the communities that support them.  My first adventure was using WATIR.  I loved using WATIR and the community.  Unfortunately, when I changed companies the ramp time was quick, and WATIR at the time did not have all of the weapons we needed to accomplish our test automation task.

I have the absolute pleasure of working with a very sharp developer, Scott Sims.  In part I owe the new title of my blog to him.  One day he described a scene to me where his daughter "went all Kung Fu Panda" on his wife.  I also get a kick out of it when he hits some keys on his keyboard and, "Hiyaaa", a zillion application windows open for battle.  In my blog archives I also have a previous post regarding Kung Fu testing from when I worked with Cari Spruiell on a project.  We had to be nimble and fast on our feet to get that high-quality project out the door.

All in all I think it is an appropriate title for today's testing world.  If you do not have an arsenal of tools and tricks, you will not keep pace with rapid development.

Over the past six months I have collaborated with Scott, and we have ended up with a framework that I would describe like this: powered by Ruby, RSpec, and Selenium GRID; supercharged with testing by brand; finely formatted results stored in Rally; and page-driven test execution via Team City.  I will add posts describing our choices and the adventure as we advance this framework.

Over the past few months we had the challenge of preparing our site for the traffic generated by a Super Bowl ad.  We succeeded, but we would not have done it without the help of JMeter and Browser Mob.  Along the way I will add posts on performance testing.  I hope you will stop by this blog occasionally and share your thoughts.

Stay tuned!  Let's see if I can actually post weekly.