Category Archives: Test Process Improvement

Four Software Testing Challenges We Must Repeatedly Overcome

I recently came back from the Software Testing & Evaluation Summit in Washington, DC, hosted by the National Defense Industrial Association. The objective of the workshop was to recommend policy and guidance changes to the Defense enterprise, focusing on improving the practice and productivity of software testing and evaluation (T&E) approaches in Defense acquisition.

So, what do I have to do with the Defense enterprise? One of the features of the workshop is to invite software testing experts from both commercial industry and academia to share state-of-the-practice strategies, processes and software testing methods. I was honored when the Director of Defense Research and Engineering asked me to deliver a keynote as the commercial industry-recognized subject matter expert on software testing, test automation and evaluation.

What I am about to share, however, is not what I said at the conference; it's my takeaways from the summit. Although not surprised, I was intrigued that, while we have made progress, there is still much work to do to overcome the four software testing challenges we have been fighting for the last three decades:

  1. The software testing profession is a choice, not a chance.
  2. Testing is not free! Proper testing requires proper financing.
  3. Test automation is not about the technology or tool; it is about the methodology, that is, the effective application of technology.
  4. Testing transparency is essential to educate executives and win their support, and hence funding.

Test Everything all the Time…

Back from more training, I was up at a client in Bellevue and really enjoyed teaching a performance class to a world-class testing organization. I found that the students were very receptive to many of the concepts and ideas that the class offers.

One of the items we talked about was the nature of the service-based app delivery model. As we have all seen over the last 5 years, the cloud is really changing how software is developed and deployed. I think we will see most of our applications served through this new cloud model, but our development cycles will be 10x faster and more chaotic. Think about it: there are 75,000 apps in the iPhone App Store, and if I find a news application that is slow and buggy, I will simply choose another one in a matter of seconds. I don't put up with an application that does not deliver an acceptable customer experience.

So what does this mean? Well, I feel that if we aren't able to “test everything all the time” we are putting our QA groups at a disadvantage; that is, we become the bottleneck. Long, protracted development and QA cycles are the way of the dinosaur. If we can't make a change or feature enhancement and turn that feature or fix around almost instantaneously, we will be left behind and, I believe, lose credibility with our engineering teams. And if you read my last blog post, you know you can only do this with software test automation (at both the API/GUI and non-GUI levels).

It's not just me saying this; look at your customers. The customers of our software have an enormous choice of rich services they can tap into, and with choice comes competition, and with competition come winners and losers.

It's the quick and the dead in this deployment model. I, for one, am preparing our engineering teams to follow a continuous, iterative automation process, so that when the time comes we can tell our customers: sure, we test everything all the time, no problem.

Continuous Iteration in Automation

I've been teaching a lot lately: I was in India for a week, and I'm off to Seattle in two weeks to teach on performance topics. I thoroughly enjoy teaching; it allows me to stay sharp on current trends and provides a nice break from the “implementation focus” that I generally have day to day.

One topic, however, that has been gnawing at me lately is the notion of “continuous iteration” in automation. With virtualization, my feeling is that you should be running the automation you create all the time. I've coined a term for it: “Test Everything all the Time.”

I find that many automation teams shelve their automation until runs are needed, and I'm finding this to be a real bone of contention when it comes to automation credibility. The automation should always be in a state of readiness, meaning I can run all my tests at any time if I want to. We used to be plagued by machine constraints and “unstable” platforms, but I've found these issues are becoming increasingly rare, yet we still seem to be holding on to old automation ideas.

I've set up our labs here with machines that are constantly running our clients' automation, while the testers keep adding more tests to that bank of tests. Automation should free us up to do this, but I still see too many people babysitting automated test runs, which to me is like watching paint dry.
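
As a minimal sketch of what this kind of unattended, always-on execution can look like, the Python loop below keeps re-running whatever suites the testers have added and records machine-readable results that a readiness dashboard could pick up. The directory layout and the run_suite.sh runner script are assumptions for illustration only, not a reference to any particular tool.

```python
# Minimal sketch of an unattended "run everything, all the time" loop.
# run_suite.sh and the directory layout are illustrative assumptions.
import subprocess
import time
from pathlib import Path

SUITE_DIR = Path("suites")      # testers keep adding suites here
RESULTS_DIR = Path("results")   # machine-readable results for the dashboard

def run_all_suites() -> None:
    RESULTS_DIR.mkdir(exist_ok=True)
    for suite in sorted(SUITE_DIR.glob("*.suite")):
        outcome = subprocess.run(
            ["./run_suite.sh", str(suite)],   # hypothetical runner script
            capture_output=True, text=True,
        )
        status = "PASS" if outcome.returncode == 0 else "FAIL"
        (RESULTS_DIR / f"{suite.stem}.log").write_text(
            f"{status}\n{outcome.stdout}{outcome.stderr}"
        )

if __name__ == "__main__":
    while True:                  # keep the lab machines busy around the clock
        run_all_suites()
        time.sleep(60)           # short pause before the next full pass
```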

We have a “readiness” dashboard covering all of our tests at all times, so if I choose to run a set of tests immediately, we can dispatch those tests to a bank of machines and run them. It makes my job as a quality director easy, in that I never have to ask whether the automation is ready; I have that information at my fingertips, for all types of tests. I've attached a simple dashboard report (shown below) that I've used since I started working on automation projects in the early '90s.

Remember, your automation needs to be credible and usable, not one or the other.

7-Sep-09
Test Cases Created and in Development: 25
Test Case Creation Goal for Week: 40
Goal Delta: -15
Test Cases to be Certified: 150
Test Cases Modified (timing only): 5

TOTAL TEST CASES IN PRODUCTION: 1253
  Passive (Functional) Test Cases: 631
  Negative Test Cases: 320
  Boundary Test Cases: 40
  Work Flow Test Cases: 78
  User Story Based Test Cases: 24
  Miscellaneous Test Cases: 160

Test Cases Ready for Automation: 1065 (85%)
Test Cases NOT Ready for Automation: 188 (15%)
  Test Case Error/Wrong: 5
  Application Error: 23
  Missing Application Functionality: 59
  Interface Changes (GUI): 2
  Timing Problems: 70
  Technical Limitations: 6
  Requests to Change Test (change control): 23
  Sum Check: 188

TOTAL MACHINE TIME REQUIRED TO RUN: 23:45:00
TOTAL MACHINES AVAILABLE TO RUN TESTS: 7
TOTAL SINGLE MACHINE RUN TIME: 3:23:34
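
For illustration, the derived figures in a report like this can be computed directly from the raw counts. The sketch below uses the numbers from the sample report above; the variable names are mine, not part of any tool, and the per-machine figure assumes the load is spread evenly across the available machines.

```python
# Sketch: deriving the percentages and per-machine run time in the report above.
# Counts are taken from the sample report; variable names are illustrative.
from datetime import timedelta

total_in_production = 1253
ready_for_automation = 1065
not_ready = total_in_production - ready_for_automation               # 188

ready_pct = round(100 * ready_for_automation / total_in_production)  # 85
not_ready_pct = 100 - ready_pct                                      # 15

total_machine_time = timedelta(hours=23, minutes=45)                 # 23:45:00
machines_available = 7
per_machine_time = total_machine_time / machines_available           # ~3:23:34

print(f"Ready for automation: {ready_for_automation}/{total_in_production} ({ready_pct}%)")
print(f"Not ready for automation: {not_ready} ({not_ready_pct}%)")
print(f"Run time per machine across {machines_available} machines: {per_machine_time}")
```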

Successful Automation of Cloud Testing

The Cloud demands that we be as nimble as possible, delivering features and fixes in almost real-time fashion. Both customer and provider rely on software development that can maintain quality while being light on its feet and constantly moving. In addition, Cloud-oriented systems tend to be highly complex and dynamic in structure — more than our industry has ever seen before.

The traditional software development and testing models do not support this constant “diligence in motion”; therefore, a new Cloud delivery model must be adopted. Traditional models worked reasonably well in the world of client/server, since users were most often internal resources: user experience was downplayed, and glitches were tolerated.

The lengthy cycle of requirements generation and feature development, followed by a set of testing cycles, leaves extended periods of time without testing. These gaps do not fit the needs of Cloud consumers, for whom an ongoing, reliable, uninterrupted experience is everything.

An effective software delivery model for the Cloud pivots on one key moment: the instant of feature release. A provider or customer must be able to fix or change application features on the fly; that is, all tests for the fix or new feature must be complete at the moment of release.

The only way companies can realistically achieve this model is to have superior test sets that are fully automated, and to go about automation the right way. Otherwise, automation can quickly become unachievable and unmanageable.

“In the past 5 years, evaluating millions of tests for our clients, LogiGear has achieved automation percentages of over 95% of all tests.”

When automation efforts fail to achieve a high percentage of automated tests, the method is often blamed. But when test automation follows a few specific guidelines, its success can be repeated again and again.

Guidelines for Successful Cloud Test Automation

When an automation team spends a disproportionate amount of time on automating tests, and the resulting automation ends up covering only about 30% of the tests, the automation policy has failed. A much higher percentage is needed to “test everything all the time” in Cloud applications. Additionally, automation should not dominate the test process: the actual automation development, and more importantly the maintenance effort, should have only a modest footprint in terms of time and resources.

While many testing organizations mistakenly approach automation from the perspective of tooling or programming, an approach centered on automation-friendly, effective test design, combined with an agile test development process, yields far better results. When done right, the result is a set of automated tests with on-the-fly adaptability that readily meets the fast-moving requirements of testing in the Cloud.

Tools have their place in the process, but they frequently steal the center of attention and are viewed as panaceas. Primary focus goes to buying and learning “the tool” rather than spending the time, effort and cost involved in revisiting test design. But if a framework and test design process are not established, using a tool is like shooting in the dark: the Ready > Fire > Aim! approach adds up casualties quickly. The plan of attack must be mapped out before a proper weapon can be selected; otherwise “fire” can, and as we have seen usually will, turn into “backfire”.

Establishing a test design process makes a larger pool of tests readily available to run, giving development cycles more flexibility. The approach aims to have at least 95 percent of tests automated, and 95 percent of testers' time spent on test development rather than on automation.

These tests are not limited to regression or bug validation; they are calibrated to hunt for bugs using boundary conditions, state transitions, exploratory testing and negative tests.

The test design approach has six essential principles:

 

  1. No more than 5% of all tests should be executed manually

    The cost of introducing automation is usually significant. By maximizing the investment and automating a high percentage of test cases, a rewarding payoff is more likely.

  2. No more than 5% of all efforts around testing should involve automating the tests

    Creating more and better test cases is key to proper test design. When testers spend significant portions of their time programming automation, test cases tend to be shallow, addressing only the basic functionalities of the system.

    Allocating time for in-depth development allows testers to write more elaborate cases, using testing techniques such as decision tables or soap opera testing, as well as their imagination (a frequently underestimated asset). The result is better coverage with less effort at the tool end.

  3. Test development and automation must be fully separated

    To make sure that test cases are sufficiently in-depth, a distinction must be made between the responsibilities of testers and programmers. For successful Cloud test automation, testers must be dedicated only to testing.

  4. Test cases must have a clear and differentiated scope

    Each test case should be well-defined in its scope and purpose, and together test cases should map out comprehensive coverage while avoiding overlap and omissions.

  5. Tests must be written at the right level of abstraction

    Tools for conducting tests must be flexible enough to handle both the higher business level and the lower user interface (UI) level on demand (see the sketch after this list).

  6. Test methods must be simple

    The method used to achieve effective test design, and the subsequent high coverage automation, should be easy and straightforward. Most of all, it should not add to the complexity of automation.
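
To make the abstraction principle concrete, here is a minimal sketch of one way to keep a test readable at the business level while the UI knowledge lives in a separate action layer. The class names and the in-memory FakeApp driver are purely illustrative assumptions, not any particular tool's API.

```python
# Sketch of keeping tests at the business level while isolating UI detail.
# Everything here is illustrative: FakeApp stands in for a real UI or API driver.

class FakeApp:
    """Stand-in for a real UI/API driver, so the sketch is self-contained."""
    def __init__(self):
        self.fields, self.balances = {}, {}

    def fill(self, field, value):
        self.fields[field] = value

    def click(self, button):
        if button == "create":
            self.balances[self.fields["account_name"]] = self.fields["initial_deposit"]

    def read_balance(self, name):
        return self.balances[name]

# Interface level: the only code that knows which fields and buttons exist.
class AccountActions:
    def __init__(self, app):
        self.app = app

    def create_account(self, name, deposit):
        self.app.fill("account_name", name)
        self.app.fill("initial_deposit", deposit)
        self.app.click("create")

    def check_balance(self, name, expected):
        actual = self.app.read_balance(name)
        assert actual == expected, f"{name}: expected {expected}, got {actual}"

# Business level: reads as a test case, with no UI detail in sight.
def test_new_account_starts_with_its_deposit():
    accounts = AccountActions(FakeApp())
    accounts.create_account("Mary", 100)
    accounts.check_balance("Mary", 100)

if __name__ == "__main__":
    test_new_account_starts_with_its_deposit()
    print("ok")
```

The design choice is that only the action layer has to change when the UI changes; the business-level tests, which testers keep adding, stay stable.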

These principles of test design ensure the adaptability required for successful automated testing in the Cloud. Software development companies that have automated most of their testing, that focus most of their effort on test design, and that have dedicated testers trained to create comprehensive, flexible tests and produce useful, accessible results will be well equipped to travel at the speed of service.

Software Test Automation: Divide and Conquer

Divide and conquer was a strategy successfully employed by ancient Persian kings against their Greek enemies. It is a strategy that can still be used successfully today. Fundamentally, by dividing something into smaller pieces (in the case of the ancient Persians, the Greek city-states), it becomes much more manageable.

Test automation projects generally have two kinds of problems:

  1. Tests lack intelligence. Tests follow functional specifications step by step, and are not really “sharp”.
  2. Test automation is disappointing. Not many tests get automated, and the tests that are automated are difficult to keep “working”.

A major source of the problems is that organizations underestimate the different skill sets involved. To be successful, a team needs many skills, but the two most important are:

  1. Designing good tests
  2. Automating tests successfully

The root of the problem is that many view the automation of tests as a low-tech activity that the testers can take care of on top of their test design efforts. Unfortunately, many test tools on the market encourage this vision by making automation “friendly”, with nice-looking record-and-playback features and other support for end users to do their own automation. However, automation is in essence software development: you try to program a computer to do something that you no longer want to do yourself. As with any software, automated tests tend to be complex, and they can break when something unanticipated happens.

To create software, and to make it work, takes specific skills and interests. It takes experience and patience to find the cause of problems. This is even more the case with test automation than with other software:

  • The scale of the software is usually large. There are many tests to be done, and if all tests are automated, this can amount to a lot of automation code.
  • Test automation is software that needs to communicate with other software that is not necessarily stable and can introduce often subtle changes with every release. Getting software to communicate with other software is among the hardest tasks in IT, as every network engineer can attest.
  • Adding to this complexity are the many “moving parts”. When something goes wrong, the test can be wrong, the implementation of the test can be wrong, or the system under test might be wrong. There could also be a technology problem, such as the test tool being unable to correctly address a class in the user interface of the system under test.

These skills and experiences are not part of the profile of most testers, and the interest and patience needed to dive into ugly problems in test execution are not common among them either. Test automation works best when separate people on a team are tasked with it, and sometimes even specifically hired to do it.

Furthermore, it is a good idea to cooperate with the development organization. This way the automation engineer can get help and coaching, get information on specific classes, APIs and protocols used in the application under test, and give feedback on its testability. For example, automation engineers can negotiate extra hooks for the development team to incorporate into the software to make test automation easier and more stable.
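
As an illustration of such a hook, here is a minimal sketch, in Python, of a state-dump facility a development team might agree to add. The class, the environment flag, and the overall design are hypothetical assumptions for illustration, not taken from any specific product.

```python
# Minimal sketch of a testability hook negotiated with the development team.
# It exposes internal state in a stable, machine-readable form so the automation
# does not have to scrape it from the UI. All names here are illustrative.
import json
import os

class OrderService:
    def __init__(self):
        self._orders = {}

    def place_order(self, order_id, amount):
        self._orders[order_id] = {"amount": amount, "status": "placed"}

    # Test hook: only active when the deployment explicitly enables it,
    # so production builds are unaffected.
    def dump_state_for_tests(self):
        if os.environ.get("ENABLE_TEST_HOOKS") != "1":
            raise PermissionError("test hooks are disabled in this environment")
        return json.dumps(self._orders, sort_keys=True)

# Automated tests can assert on dump_state_for_tests() instead of
# reverse-engineering the state from screens, which tends to be more stable.
```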

Let us now look at the problem from the reverse angle: automation engineers are usually not well suited for test design. Testing is a profession that is often under-respected. In many cases testers are regarded merely as workers who convert system specifications (use cases, requirements, UI mockups, etc.) one-to-one into test cases. Good test design is, however, a highly developed art. Testers have to devise situations that make life difficult for the system under test, and express them in test cases that are not too tedious or too hard to automate. Test design also takes its own kind of patience: testers need to work their way through specifications and find the many things to verify. This is something automation engineers do not particularly like. Like most programmers, they usually prefer creating a new piece of software to patiently and methodically testing an existing one.

The bottom line is obvious: organize testing activities into at least two separate disciplines, 1) test design and 2) test automation engineering. Some people may be able to work on both tasks, but you should not count on it. Ask your team members what they like; they will usually diverge into individuals who like testing and individuals who like automation (or, conversely, into individuals who hate automation and individuals who hate testing). By doing this you will be able to organize your test automation team into an effective unit, with the appropriate people designing tests and the appropriate people automating them.

Four Drivers for Delivering Software Test Automation Success

Test automation provides great benefits to the software testing process and improves the quality of the results. It improves reliability while minimizing variability in the results, speeds up the process, increases test coverage, and ultimately can provide greater confidence in the quality of the software being tested. However, automation is not a silver bullet; it also brings problems of its own. The key is to first define the test methodology, then choose the right enabling technology to help you implement it. The chosen methodology should provide measurable improvements to the following four success drivers:

  • Visibility
  • Reusability
  • Scalability
  • Maintainability

The four drivers above have a direct impact on three factors: manageability, productivity and cost efficiency. These ultimately lead to the tangible benefits that we expect a successful automation program to deliver.

Consider Figure 1. This diagram shows the results of a good test automation methodology. Good automation provides optimum productivity to the software testing effort and hence leads to higher quality software releases. Test automation visibility provides measurability and control over the software development process, which keeps it manageable. With good visibility established, you can make effective management decisions about if, when, and how to do training and auditing to address the quality of tests. Reusability and scalability of test automation improve test productivity. Productivity, however, should be defined by (1) the quantity of tests (driven by reusability and scalability) and (2) the quality of tests (visibility into what the tests are actually doing helps improve them qualitatively); quality and quantity are two different things. When test automation is reusable and scalable, the issue of quantity is resolved. When test automation is highly maintainable, the cost of ownership is minimized: building test automation in a maintainable way lowers the total cost of ownership of the test assets, making the overall testing effort more cost effective.

 

Figure 1. Outflows of Test Automation.


The quality of tests is mostly affected by the training and skills of the test staff. Automating a bad test does not improve its quality; it just makes it run faster. Good test design is a critical and often overlooked aspect of test automation. Test automation visibility by itself does not provide high quality tests. It merely enables us to have a view into how well the test designers are trained. Addressing the training issues will help in addressing test case quality issues.


Securing and developing competency is a secret to automation success.


Visibility, reusability, scalability, and maintainability lead to productivity and are the drivers for the following benefits:

  • Improved time-to-market.
  • Improved quality of releases.
  • Improved predictability.
  • Improved Test/QA communication.
  • Higher test coverage.
  • Lower testing costs.
  • Earlier detection of bugs.
  • Lower technical support costs.
  • More effective use of testing resources.
  • Improved customer confidence and adoption.

After the test methodology and tools are set, the next step is to put the right people in place with the proper skills and training to do the work. The key to automation success is to focus your resources on test production, that is, on improving the quantity and quality of the tests, rather than spending too many resources on automation production.


The most essential element to achieve these benefits is the methodology, not the tools.


In evaluating the return on investment of test automation, you need to look at the big picture. Think of the return-on-investment (ROI) equation, with the benefits on one side and the costs on the other. For the benefits, consider productivity, in both the quantity and the quality of tests. For the cost side of the equation, think about the reusability, scalability and maintainability of the tests in the context of the phases of a test automation effort:

  • Deployment costs
  • Test automation creation costs
  • Execution costs
  • Maintenance costs

Evaluate the ROI by considering whether the costs are justified by:

  • Faster/more tests?
  • Faster/more test cycles?
  • Better test coverage in each cycle?
  • Higher quality of tests?

To make these evaluations, you need to have good visibility.
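
As a rough illustration of that equation, the sketch below computes ROI from the four cost phases listed above. The function name and all figures are hypothetical placeholders, not measured data; the point is only how the pieces combine.

```python
# Illustrative sketch of the ROI equation described above; all figures are
# hypothetical placeholders, not measured data.
def automation_roi(deployment, creation, execution, maintenance, benefits):
    """ROI = (benefits - total cost) / total cost."""
    total_cost = deployment + creation + execution + maintenance
    return (benefits - total_cost) / total_cost

# Hypothetical yearly figures (currency units are arbitrary):
roi = automation_roi(
    deployment=20_000,      # tools, environments, training
    creation=50_000,        # building the automated tests
    execution=10_000,       # machine and operations time
    maintenance=30_000,     # keeping tests working across releases
    benefits=165_000,       # faster cycles, earlier bug detection, coverage
)
print(f"ROI: {roi:.0%}")    # (165,000 - 110,000) / 110,000 -> "50%"
```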

Often, the costs and benefits of test automation are uncertain due to a lack of visibility. Management simply does not know how much money it will spend on automation, or how much benefit it will get from it. People are uncertain about how to quantify the ROI and how to set up a way to monitor whether they are on target. This is symptomatic of projects that start well and then get derailed six months later.


Excellent visibility leads to effective management of test automation production.


Defining the test methodology before choosing the right enabling technology should provide measurable improvements in visibility, reusability, scalability, and maintainability, yielding many tangible benefits from test automation efforts. To read more about this, see LogiGear’s book Global Software Test Automation: An Executive Snapshot of the Industry.

10 Essentials for Effective Test Automation

Test automation can provide great benefits to the software testing process and improve the quality of the results… but its use must be justified and its methods effective.

The reasons to automate software testing lie in the pitfalls of manual software testing…

As we all know too well, the average manual software testing program:

How to Identify the Usual Performance Suspects

When You’re Out to Fix Bottlenecks, Be Sure You’re Able to Distinguish Them From System Failures and Slow Spots

By Scott Barber

Bottlenecks are likely to be lurking in your application. Here’s how you as a performance tester can find them.

This article first appeared in Software Test & Performance, May 2005.

So you found an odd pattern in your scatter chart that appears to be a bottleneck. What do you do now? How do you gather enough information to refute the inevitable response, “The application is fine, your tool/test is wrong”? And how do you present that information conclusively up front so you can get right down to working collaboratively with the development team to solve the problem?

Key Success Factors for Keyword Driven Testing

By Hans Buwalda, Chief Technology Officer, LogiGear Corporation

Introduction

Keyword driven testing is a software testing technique that separates much of the programming work of test automation from the actual test design. This allows tests to be developed earlier and makes the tests easier to maintain. Some key concepts in keyword driven testing include:

Getting Automated Testing Under Control

By Hans Buwalda and Maartje Kasdorp

A practical approach to test development and automation.