April 29, 2006

Things I Like to Have in my Test Automation Suites



I've used lots of different tools - some commercial, some open-source, some home-grown - for test automation.  I usually use a mix of such tools in my overall automation efforts.

Over the years, I have found some nice-to-have features and attributes that I end up looking for, or building, as I assemble a new Test Automation Suite.  Some of these attributes are part of the tools themselves.  Others come about because of the way I assemble my Test Suites and tools into a complete package.

(For the purposes of this article, assume I am talking only about Functional Test Automation, involving scripts.)

Some things are must-haves, and most are obvious:
  • Run in my environment
If I'm running in a Windows shop, I may not be allowed to introduce a bunch of Linux machines (and vice-versa).
  • Automate my System-Under-Test
My Test Suite must be able to automate the system I'm testing.  If the system is web-based, my scripts must be able to automate my browser (or sometimes, multiple types of browsers).  If the system is Java, the scripts must be able to automate a Java system.
  • Be able to "see" most of the objects in my System-Under-Test
Since I usually want my scripts to validate the contents of the system at various points during the automation, I need them to be able to "see" the contents.  Sometimes the scripts have a direct way to do this (usually with standard controls, it's built-in).  Sometimes, my scripts have to be creative (for example, with some non-standard controls, I might have to compare images, or copy data to the clipboard, in order to "see" the contents).
  • Usable by my test team
In general I don't favor having just "specialists" able to use the Test Automation Suite.  I strongly prefer that most of the team be able to use the test system, and contribute to building it up over time.
  • Be affordable
Obviously, the Test Suite has to be affordable.  The commercial software and maintenance fees have to be within my budget.  But the hardware needed to run it, the training required, and so on all need to be affordable as well.
  • Be generally more efficient than strictly manual testing
Seems pretty obvious.  Once everything is considered, if it's more efficient to perform the testing manually, then perhaps I don't need a Test Automation Suite after all.

Other things are nice-to-have:
  • Detect changes in the System-Under-Test
Bug reports, checkin comments, and build summaries provide clues as to what changed in the current build of my system-under-test.  But, often, they don't tell the whole story.

I like to be able to depend on my Test Suite to detect unexpected changes, so I can then dig in and find out if this was intentional or if it was a bug.

For example, when I build a Test Suite for a web-based system, I like to capture the non-dynamic text of each page, and compare it to a baseline.  If there's a difference, it might mean that I have an intentional change, or it might mean a bug.  If it's intentional, then I want to be able to easily update the baseline, so it's ready for the next test run. 
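
A rough sketch of that kind of baseline check, in Python - the URL, the baseline file name, and the strip_dynamic() helper are placeholders for illustration only:

    # Sketch: compare a page's non-dynamic text against a saved baseline.
    import re
    import urllib.request

    def strip_dynamic(text):
        # Strip obviously dynamic content (dates and times) before comparing.
        return re.sub(r"\d{1,2}/\d{1,2}/\d{2,4}|\d{1,2}:\d{2}(:\d{2})?", "", text)

    def check_against_baseline(url, baseline_file):
        current = strip_dynamic(urllib.request.urlopen(url).read().decode("utf-8"))
        try:
            with open(baseline_file, encoding="utf-8") as f:
                baseline = f.read()
        except FileNotFoundError:
            baseline = None
        if current != baseline:
            print("CHANGE DETECTED - intentional change, or bug?")
            # If the change turns out to be intentional, updating the baseline
            # for the next run is a one-liner:
            #   open(baseline_file, "w", encoding="utf-8").write(current)
            return False
        return True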
  • Create Smoke Tests which run after every build
I see this as one of the basic uses for my Test Automation Suite.  I want to be able to quickly run a test after the build, so I can assess whether or not my team should bother to dig in and spend their time testing it.  If the system-under-test passes the smoke test, we can proceed.  If not, we reject the build until it is fixed.

If the builds occur overnight, I like to be able to schedule this smoke test so that it runs after the build and so that the results are ready for me when I get in the next morning.  Sometimes, this allows me to run a larger overnight test and still have the results ready for the morning.
  • Run unattended
It's important that I don't have to sit and watch my Test Suite run, or to help it along.  Otherwise, I may not be saving much time.  If the Suite can run by itself, overnight, then I can take advantage of hours and machines that might otherwise be unused.
  • Run overnight, and have a report ready the next morning
There are really two parts to this - overnight, and results the next morning.  Running overnight allows me to take advantage of "free time".  But for this to be effective, I need a good post-run log of what happened during those overnight hours.
  • Automate the boring, repetitive stuff
This is where Test Automation Suites should shine.  They should be able to simply run the same things over and over.  People get bored doing this, or they get less attentive after they have seen the same thing several times.  Automated Scripts don't have this problem.
  • Run predictably and repeatedly
I need confidence that I can run my Test Suite and expect it to run correctly each time.  It seems obvious, but this means that the System-Under-Test needs to be running as well, along with any parts of the system it depends on.  If they are flaky, then I can't depend on being able to run my tests when I need them.

Additionally, I can't have a database that may or may not be available, or may have unpredictable data in it.  Ideally, I like to have my Test Suite start with an empty database, and use my scripts to populate it to a known state.  If I have to share my database with other testers, I want to be able to focus only on my part of the database, and not have other testers' actions cause my scripts to go awry.
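
A minimal sketch of that "start from a known state" idea - SQLite and the table shown here are just stand-ins for whatever database the System-Under-Test really uses:

    # Sketch: rebuild a test database to a known state before the run starts.
    import sqlite3

    def reset_test_database(path="test.db"):
        conn = sqlite3.connect(path)
        conn.execute("DROP TABLE IF EXISTS customers")
        conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
        # Populate to the baseline state every test run expects.
        conn.executemany("INSERT INTO customers (id, name) VALUES (?, ?)",
                         [(1, "Known Customer A"), (2, "Known Customer B")])
        conn.commit()
        conn.close()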
  • Randomize
Almost all scripting languages have a randomize function.  This often turns out to be very useful in varying wait times, varying the order that tests are run, and varying the data sent to the System-Under-Test.
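
For instance, a few of the ways I use it, sketched in Python (the ranges, test names, and data are only illustrations):

    import random

    wait_time = random.uniform(0.5, 3.0)          # vary wait times a little

    tests = ["login_test", "search_test", "checkout_test"]
    random.shuffle(tests)                         # vary the order the tests run in

    customer = "Customer_%05d" % random.randint(1, 99999)   # vary the data sent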
  • Perform timings
When I run my Test Suite, I want to time certain actions.  I use those timings to detect when things are starting to go awry in my System-Under-Test.

Unexpected changes in timings can point to new bugs, or sometimes just unexpected changes under the covers.
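
A small sketch of the kind of timing check I mean - the label, action, and threshold here are made up for illustration:

    import time

    def timed(label, action, warn_after=5.0):
        # Run one action, report how long it took, and flag unexpected slowness.
        start = time.time()
        result = action()
        elapsed = time.time() - start
        print("%s took %.2f seconds" % (label, elapsed))
        if elapsed > warn_after:
            print("WARNING: %s is slower than expected" % label)
        return result

    # Example use:  timed("Login", lambda: login("user", "password"), warn_after=3.0)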
  • Run some load, stress, and volume tests
As part of my suite of Test Automation tools, I need load testing capabilities.  Sometimes this can be fulfilled (perhaps only to a small extent) by my Functional Test Automation Suite.
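
When the Functional Suite has to double as a very small load tool, something along these lines is sometimes enough - the URL, thread count, and request count are invented for illustration:

    # Sketch: a modest "load" test driven from the functional side.
    import threading
    import urllib.request

    URL = "http://localhost:8080/"     # placeholder for the System-Under-Test
    errors = []

    def hit_server(times=20):
        for _ in range(times):
            try:
                urllib.request.urlopen(URL, timeout=10).read()
            except Exception as exc:
                errors.append(exc)

    threads = [threading.Thread(target=hit_server) for _ in range(10)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print("Errors:", len(errors))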
  • Isolate failures easily
My Test Suite needs to provide some way for me to easily isolate where bugs are occurring.  I don't want to run a long test that only has a Pass/Fail result.  Instead, I want my Suite to tell me where the real failure occurred, as much as possible.
  • Run many tests, in spite of unexpected failures along the way
Some Automated Test Suites are overly sensitive to failures.  That is, once a single problem occurs, the rest of the tests fail.  What was intended to be a 500-test Suite effectively can only test until the first failure occurs - the rest becomes useless.

But, the reason for running these tests is to find bugs!  Hopefully, I can find many of them - not just one!

I want my Test Suite to be able to recover and continue when it encounters a bug, or an unexpected situation.  This is not always possible, but the chances of continued testing can be greatly enhanced by having each test (or at least each group of tests) able to reset the System-Under-Test back to a known state and continue.  The better Test Suites can do this quite well, while others cannot.
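
One common way to get that behavior, sketched below - reset_sut() is a placeholder for whatever "back to a known state" means for your system:

    # Sketch: keep running after a failure; reset between tests.
    def reset_sut():
        # Placeholder: restart the application, reload the data, etc.
        pass

    def run_suite(tests):
        failures = []
        for test in tests:
            try:
                test()
            except Exception as exc:
                failures.append((test.__name__, exc))
            finally:
                reset_sut()        # the next test starts from a known state
        print("%d of %d tests failed" % (len(failures), len(tests)))
        return failures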
  • Start wide, build depth later
I like to build an evolving set of tests in my Test Suite.  When I first start out, I want to cover features lightly, so that I can get at least some coverage in a lot of areas.  Later on, I'll go back and add depth in the important areas.

I want a Test Suite that lets me do this simply - create a "small" script which can run and be useful, then later enhance the same script to make it more useful - without having to throw things away and start over.
  • Automate what users do first (Getting Started Manual?)
I like to try to automate important, useful things first.  When customers first use the System-Under-Test, I want them to have a good experience.  If we have a Getting Started Manual or equivalent Help page, that's often a good place to start.
  • Isolate the maintenance effort
Test Suites are constantly evolving - due to added tests, changing requirements, and changes in the System-Under-Test.  I want to be able to maintain these tests without having to constantly throw large chunks away and rewrite them.
  • Produce "readable" scripts
I want lots of people on my QA Team to be able to go in and at least understand what the Test Suite is doing.  That's often made simpler by having a scripting language that is readable.  It's often aided by having well-commented scripts, too.
  • Ability to reset the environment as needed
I like to have a Test Suite that's able to reboot a machine and continue.  I find that's often needed for a really full-featured Suite.

I also like to be able to re-initialize a database, or kill a stuck program or two.

These things allow me to create tests that can survive the unexpected, and run longer without manual intervention.
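
A sketch of the "kill a stuck program" part - the Windows taskkill command and the process name are assumptions here; substitute whatever fits your environment:

    # Sketch: kill a stuck program so the rest of the run can continue.
    import subprocess

    def kill_stuck_program(image_name="StuckApp.exe"):
        # "StuckApp.exe" is an invented name; taskkill is Windows-specific.
        subprocess.call(["taskkill", "/F", "/IM", image_name])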
  • Avoid false failures
If my Test Suite logs a lot of "false failures" then I will be forced to spend a lot of time investigating them, before I can determine if they represent real bugs or not.  So, I want a Test Suite that can accurately log an error when there is a real error, and not when there isn't.

Also, when a single failure occurs, I don't want every test after that to fail unnecessarily.  To that end, I need my individual Test Cases to be able to set my System-Under-Test to a known state - even if a failure occurred in the previous test.
  • Extensible - since we cannot predict all uses
I never used to think extensibility would be very important.  But over time, I find more unanticipated needs for my test tools.  So I want my tools to be as flexible as possible, and able to be extended to handle objects that I hadn't anticipated - perhaps non-standard objects that we haven't yet developed. 
  • Survive trivial changes to the System Under Test
When minor changes occur in my System-Under-Test, I don't want my Test Suite to decide that every change is a bug.  That's why, for example, I avoid full screenshots for verification points.  Too many things can change on the screen - many of which are just incidental and don't represent bugs.

I want to be able to create verification points for specific needs of my Test Case, and ignore everything else.
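
In practice that means checking just the field the Test Case cares about, and nothing else - a sketch, where get_field_text() is a stand-in for however your tool reads one control:

    # Sketch: a narrow verification point instead of a full-screen compare.
    def get_field_text(field_name):
        # Placeholder for however the tool reads one control's contents.
        return "Order accepted"

    def verify_order_confirmation():
        expected = "Order accepted"
        actual = get_field_text("status_message")
        assert actual == expected, "Expected %r, saw %r" % (expected, actual)
        # Everything else on the screen is deliberately ignored.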
  • Validate during tests, and at the end as appropriate
I want the option to validate aspects of my System-Under-Test as my Test Case runs, and optionally validate at the end of the run as well.

So I may need to "look" at parts of the system at any time, but I don't want my Test Suite to always look at everything.

I may want to drive my System through various actions, and check things along the way.

But sometimes, I just want to drive my System to a particular state, then look at a database export, for example.  While it's driving the System, I may not want to validate anything automatically along the way at all.
  • Ability to select and run subsets of the entire test suite
I often build a large, relatively complete regression suite.

But sometimes, I don't want to run that entire regression - I just want to run a subset.  Perhaps I want to quickly verify a fix, or a portion of my System on a new platform, etc.

If I've constructed my Test Suite correctly, it should be simple to select and run just a portion.
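
One simple way to keep that easy is to tag tests as they are written, then select by tag at run time - a sketch with invented tags and test names:

    # Sketch: tag tests, then run only the subset needed tonight.
    def test_login(): pass
    def test_search(): pass
    def test_nightly_report(): pass

    SUITE = [
        (test_login,          {"smoke", "regression"}),
        (test_search,         {"regression"}),
        (test_nightly_report, {"regression", "slow"}),
    ]

    def run_subset(wanted_tag):
        for test, tags in SUITE:
            if wanted_tag in tags:
                print("Running", test.__name__)
                test()

    run_subset("smoke")    # e.g. a quick check of a fix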
  • Ability to select and skip particular tests
It's often necessary to skip particular tests in a Test Automation Suite. 

Sometimes, it's necessary to skip a test until a bug fix is available.  Sometimes, the Test itself needs work.  Sometimes, the code that the Test exercises is being re-built and running the Test wouldn't be productive.

Skipping tests can sometimes be achieved by commenting out the statement that invokes that test; sometimes there are other methods.  Either way, this will happen, so it should be simple.
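
A skip list kept in one place is one such method - a sketch, with invented test names and reasons:

    # Sketch: skip named tests without commenting out code all over the Suite.
    SKIPPED = {
        "test_export": "waiting on a bug fix",
        "test_import": "feature is being rebuilt",
    }

    def maybe_run(test):
        reason = SKIPPED.get(test.__name__)
        if reason:
            print("SKIPPED %s (%s)" % (test.__name__, reason))
            return
        test()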
  • Variable log levels (Verbose, Normal, Minimal)
The ability to log minimally sometimes, and verbosely at other times, is very useful.

When I run a subset of my full Regression Suite in order to narrow in on the root cause of a bug, I want lots of details in my logs.  I want to know pretty much everything that was done, and what was seen along the way.

But when I just run my full nightly Regressions, I usually want much less information - usually just what Test Cases were run, and what errors were found. 
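
Python's standard logging module is one illustration of how cheaply variable log levels can be had:

    import logging

    # Pick the level per run: DEBUG when chasing a bug, WARNING for quiet nightly runs.
    logging.basicConfig(level=logging.INFO,
                        format="%(asctime)s %(levelname)s %(message)s")

    logging.debug("Clicked the Submit button")              # verbose detail
    logging.info("Test Case login_01 started")              # normal
    logging.error("Expected 'Welcome', saw 'Error 500'")    # always worth seeing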
  • Minimize dependencies between scripts
Ideally each script in a Test Suite is independent of all others.  It can run by itself, or in any position within the entire Suite.  That's the ideal.

In reality, it can be more efficient to have some dependencies.  So, for example, one script initializes the database, another starts the System-Under-Test, and a third populates the database to a base state.

In general, I don't want strong dependencies if it's not necessary.
  • Minimize learning curve
A QA team changes over time.  People leave, or move on to other roles.  New people arrive.

I want the QA team to be able to learn how to use the Test Automation Suite in fairly short order.  Part of that is hiring the right people.  Part of that is using a tool that can be learned relatively quickly, and that is well-documented. 
  • Minimize maintenance time
As my System-Under-Test changes, I don't want to spend too much time updating my Test Automation Suite in order to let it run against the new build.

I need to be sure that my Test Suite isn't too brittle - it must not be overly sensitive to minor changes in the System.  But changes inevitably happen, so the Test Suite still must be easy to maintain.
  • Minimize post-run analysis time
If I run my Test Suite overnight, unattended, then I must be able to come in the next morning and quickly understand what ran, and what problems were found.  I want that to be simple and quick.  I want to be able to quickly see where the errors were, and be able to dig in, write the relevant bug reports, and get on to the rest of the day's work. 
  • Minimize dependence on golden machines
While it's not always possible to avoid completely, I don't want my Test Suite to depend on being run only on a particular machine.  I want to be able to pick the Test Suite up, and move it to another machine (perhaps several other machines) as needed.

To that end, I want to avoid hard-coded wait times (which may be inappropriate on a faster or slower machine).  I also want to place any necessary machine-specific details into variables that can be easily adapted to a new home.
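
Two small habits help here: poll for a condition instead of sleeping for a fixed time, and keep machine-specific values in one spot - sketched below with invented names and paths:

    import time

    # Machine-specific details live in one place, not scattered through the scripts.
    CONFIG = {
        "app_path": r"C:\Apps\SystemUnderTest\sut.exe",   # example value
        "server":   "test-server-01",                     # example value
    }

    def wait_for(condition, timeout=30.0, poll=0.5):
        # Poll instead of hard-coding a sleep that only suits one machine's speed.
        deadline = time.time() + timeout
        while time.time() < deadline:
            if condition():
                return True
            time.sleep(poll)
        return False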
  • Record and Playback capability
I never depend on Record-and-Playback as the sole method of developing a Script or a Test Suite.  It's just not possible to develop robust, full-featured Test Suites that way.

On the other hand, a quick recording is very often a nice way to create Script code rapidly; code which can then be edited into the final form.

I've used Test Tools which didn't provide a recording capability.  They're still usable, but not nearly as efficient.

Comments:

  • Excellent compilation of requirements. Thanks for the write-up. Can I ask what suite you've found to be your best match and why have you stuck with it? Cheers

  • Thank you, snewyen. I've found lots of different tools over the years that fit within my Test Automation Suite. I'm currently using WinTask.