Monthly Archives: June 2009

Successful Automation of Cloud Testing

The Cloud demands that we be as nimble as possible, delivering features and fixes in almost real-time fashion. Both customer and provider rely on software development that can maintain quality while being light on its feet and constantly moving. In addition, Cloud-oriented systems tend to be highly complex and dynamic in structure — more than our industry has ever seen before.

The traditional software development and testing models do not support this constant “diligence in motion”; therefore a new Cloud delivery model must be adopted. Traditional models worked reasonably well in the world of client/server, since users were most often internal resources. User experience was downplayed, and glitches were tolerated.

The lengthy cycle of requirements generation and feature development, followed by a set of testing cycles, leaves extended periods of time without testing. These gaps are at odds with the needs of Cloud consumers. For them, an ongoing, reliable, uninterrupted experience is everything.

An effective software delivery model for the Cloud pivots on one key moment – the instant of feature release. A provider or customer must be able to fix or change application features on the fly; that is, all tests for a fix or new feature are complete at the moment of its release.

The only way companies can realistically achieve this model is to have superior test sets that are fully automated – and to go about automation the right way. Otherwise it can quickly become unachievable and unmanageable.

“In the past 5 years, evaluating millions of tests for our clients, LogiGear has achieved automation percentages of over 95% of all tests.”

When automation efforts fail to cover a high percentage of tests, the method itself is often blamed. But when test automation follows specific guidelines, its success can be repeated again and again.

Guidelines for Successful Cloud Test Automation

When an automation team spends a disproportionate amount of time on automating tests, and the resulting automation ends up covering only about 30% of the tests, the automation policy has failed. A much higher percentage is needed to “test everything always” in Cloud applications. Additionally, automation should not dominate the test process: the actual automation development and, more importantly, the maintenance effort should have only a modest footprint in terms of time and resources.

While many testing organizations mistakenly approach automation from the perspective of tooling or programming, an approach centered on test design that is effective for automation, combined with an agile test development process, yields far better results. When done right, the result is a set of automated tests with on-the-fly adaptability that readily meets the fast-changing requirements of testing in the Cloud.

Tools have their place in the process, but they frequently steal the center of attention and are viewed as panaceas. Primary focus goes to buying and learning “the tool” rather than expending the time, effort and cost involved in revisiting test design. But if a framework and test design process are not established, using a tool is like shooting in the dark — the Ready > Fire > Aim! approach adds up casualties quickly. The plan of attack must be mapped out before a proper weapon can be selected; otherwise “fire” can — and, as we have seen, usually will — turn into “backfire”.

Establishing a test design process makes a larger set of tests readily available and gives development cycles the flexibility they need. The approach aims to have at least 95 percent of tests automated, and 95 percent of testers’ time spent on test development, not automation.

These tests are not limited to regression or bug validation; they are calibrated to hunt for bugs through boundary conditions, state transitions, negative tests and exploratory testing.
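
To make this concrete, here is a minimal pytest sketch of what such designed tests could look like. The create_account function, its module path, and the age limits are hypothetical stand-ins rather than anything from this article; the point is that boundary and negative cases are chosen deliberately instead of being transcribed from a functional specification.

import pytest

from myapp.accounts import create_account  # hypothetical module and function under test


# Boundary conditions: exercise both edges of an assumed valid age range (13-120).
@pytest.mark.parametrize("age", [13, 14, 119, 120])
def test_age_boundaries_accepted(age):
    account = create_account("alice", age)
    assert account.is_active


# Negative tests: values just outside the boundaries and malformed input
# should be rejected, not silently accepted.
@pytest.mark.parametrize("username, age", [
    ("alice", 12),      # just below the minimum age
    ("alice", 121),     # just above the maximum age
    ("", 30),           # empty username
    ("a" * 256, 30),    # oversized username
])
def test_invalid_input_rejected(username, age):
    with pytest.raises(ValueError):
        create_account(username, age)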

The test design approach has six essential principles:

  1. No more than 5% of all tests should be executed manually

    The cost of introducing automation is usually significant. By maximizing that investment and automating a high percentage of test cases, a rewarding payoff is far more likely.

  2. No more than 5% of all efforts around testing should involve automating the tests

    Creating more and better test cases is key to proper test design. When testers spend significant portions of their time programming automation, test cases tend to be shallow, addressing only the basic functionalities of the system.

    Allocating time for in-depth development allows testers to write more elaborate cases, using testing techniques such as decision tables or soap opera testing, as well as their imagination (a frequently underestimated asset). The result is better coverage with less effort at the tool end.

  3. Test development and automation must be fully separated

    To make sure that test cases are sufficiently in-depth, a distinction must be made between the responsibilities of testers and programmers. For successful Cloud test automation, testers must be dedicated only to testing.

  4. Test cases must have a clear and differentiated scope

    Each test case should be well-defined in its scope and purpose, and together test cases should map out comprehensive coverage while avoiding overlap and omissions.

  5. Tests must be written at the right level of abstraction

    Tools for conducting tests must be flexible enough to handle both higher business levels and lower user interface (UI) levels on demand; a brief sketch follows this list.

  6. Test methods must be simple

    The method used to achieve effective test design, and the subsequent high coverage automation, should be easy and straightforward. Most of all, it should not add to the complexity of automation.
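
The sketch below illustrates principle 5 (and, implicitly, principle 3): the same application can be driven at a business level or at a user interface level, and testers work mostly in the business-level vocabulary while automation engineers own the lower-level actions. All function names here are invented for illustration and do not describe any particular tool or framework.

# UI-level actions: owned by the automation engineer. In this sketch they only
# print what they would do; real implementations would drive the application's UI.
def enter(field, value):
    print(f"enter {value!r} into {field!r}")

def click(button):
    print(f"click {button!r}")

# Business-level action, composed from the UI-level ones. Tests that use it do not
# change when the screen layout changes; only this implementation does.
def place_order(customer, product, quantity):
    enter("customer", customer)
    enter("product", product)
    enter("quantity", quantity)
    click("submit order")

# A tester's test case stays at the business level...
def test_small_order():
    place_order("Widget Corp", "gear", 3)

# ...but can drop to the UI level on demand, for example to probe field validation.
def test_quantity_field_rejects_text():
    enter("quantity", "three")
    click("submit order")
    # an assertion on the expected error message would go here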

These principles of test design ensure the adaptability required for successful automated testing in the Cloud. Software development companies that automate most of their tests, focus most of their effort on test development, and employ dedicated testers trained to create comprehensive, flexible test designs with useful, accessible results will be well equipped to travel at the speed of the service.

Why Outsource Software Testing to Vietnam?

There’s a reason so many major companies are outsourcing to Vietnam. Intel is building the largest semiconductor plant in the world in Vietnam. IBM, Sony and dozens of other major corporations have opened centers in Vietnam in the last few years.

  • Business Week calls Vietnam “The new hot spot for IT outsourcing”
  • Gartner has added Vietnam to its list of top outsourcing destinations, and notes that Vietnam provides the lowest-cost resources among them.

Why Outsource To Vietnam?

  • Low Cost – Labor costs in Vietnam are over 30% lower than in India, and 70–80% lower than in the United States.
  • Workforce Size and Education – Large numbers of well-educated graduates with technology degrees, and low attrition rates, less than half those of countries such as India and China.
  • Political and Economic Stability – Rated the safest environment in Asia; another reason why companies like Intel, Anheuser-Busch, Cisco and IBM have made large outsourcing commitments in Vietnam.
  • Facility with the English Language – English is the most popular second language in Vietnam, and the Vietnamese language uses Roman characters.

LogiGear is the first choice of major corporations because of its worldwide leadership in software testing. Since most corporations realize that test cost is a large part of IT expenses, a common goal is to reduce these costs by utilizing software test automation. Recognized as a pioneer in software test automation and the author of leading books on software testing, LogiGear has been able to dramatically reduce IT costs with a combination of test automation and low-cost Vietnam engineering.

Because of LogiGear’s expertise, Ho Chi Minh City (Saigon) is ranked as one of the top four cities in the world for outsourcing software testing.

Software Test Automation: Divide and Conquer

Divide and conquer was a strategy successfully employed by ancient Persian kings against their Greek enemies. It is a strategy that can still be used successfully today. Fundamentally, by dividing something into smaller pieces (in the case of the ancient Persians, they divided the Greek city-states), it becomes much more manageable.

Test automation projects generally have two kinds of problems:

  1. Tests lack intelligence. Tests follow functional specifications step by step, and are not really “sharp”.
  2. Test automation is disappointing. Not many tests get automated, and the tests that are automated are difficult to keep “working”.

A major source of the problems is that organizations underestimate the different skill sets involved. To be successful, a team needs many skills, but the two most important are:

  1. Designing good tests
  2. Automating tests successfully

The root of the problem is that many view the automation of tests as a low-tech activity that the testers can take care of on top of their test design efforts. Unfortunately, many test tools on the market encourage this vision by making automation “friendly” with nice-looking record-and-playback features and other support for end users to do their own automation. However, automation is in essence software development – you try to program a computer to do something that you no longer want to do yourself. As with any software, automated tests tend to be complex and they can break when something unanticipated happens.

To create software, and to make it work, takes specific skills and interests. It takes experience and patience to find the cause of problems. This is even more the case with test automation than it is with other software:

  • The scale of the software is usually large. There are many tests to be done, and if all of them are automated this can amount to a lot of automation code.
  • Test automation is software that needs to communicate with other software that is not necessarily stable and can change, often subtly, with every release. Letting software communicate is among the hardest tasks in IT, as every network engineer can attest.
  • Adding to this complexity are the many “moving parts”. When something goes wrong, the test can be wrong, the implementation of the test can be wrong, and the system under test might be wrong. There could also be a technology problem with the test tool being unable to correctly address a class in the user interface of the system under test.

These skills and experiences are not part of the profile of most testers, and the interest and patience needed to dive into ugly problems in test execution are not common among testers either. Test automation works best when separate people on a team are tasked with it, and sometimes even hired specifically to do it.

Furthermore, it is a good idea to cooperate with the development organization. This way the automation engineer can get help and coaching, get information on specific classes, APIs and protocols used in the application under test, and give feedback on its overall testability. For example, automation engineers can negotiate extra hooks for the development team to incorporate into the software to make test automation easier and more stable.
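
As an illustration of such a hook, here is a hedged sketch assuming a web application tested with Selenium WebDriver, where the development team has agreed to add stable data-testid attributes to key elements. The URL and attribute values are placeholders, not a real application.

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.test/orders")  # placeholder URL

# Without a hook: a locator tied to page layout, which breaks whenever the markup shifts.
fragile_button = driver.find_element(By.XPATH, "//div[3]/form/div[2]/button[1]")

# With the negotiated hook: a data-testid attribute that survives cosmetic UI changes.
stable_button = driver.find_element(By.CSS_SELECTOR, "[data-testid='submit-order']")
stable_button.click()

driver.quit()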

Let us now look at the problem from the reverse angle: automation engineers are usually not well suited for test design. Testing is a profession that is often under-respected. In many cases testers are regarded merely as workers who convert system specifications (use cases, requirements, UI mockups, etc.) one-to-one into test cases. Good test design is, however, a highly developed art. Testers have to devise situations that make life difficult for the system under test, and express them in test cases that are not too tedious or too hard to automate. Test design also takes its own kind of patience. Testers need to work their way through specifications and find the many things to verify. This is something that automation engineers do not particularly like. Like most programmers, they usually prefer to create a new piece of software rather than patiently and methodically test an existing one.

The bottom line is obvious: organize testing activities into at least two separate disciplines, 1) test design and 2) test automation engineering. Some people may be able to work on both tasks, but you should not count on it. Ask your team members what they like. They will usually diverge into individuals who like testing and individuals who like automation. Or, conversely, they will diverge into individuals who hate automation and individuals who hate testing. By doing this you will be able to organize your test automation team into an effective unit, with the appropriate people designing tests and the appropriate people automating them.

Tactics for Successfully Leading Offshore Software Testing

Test Leads and Test Managers very rarely make the decision to offshore. It is typically not a choice, but rather a mandate from company executives who look to offshoring for significant cost reduction. Among US leads and managers responsible for offshore teams, management and oversight of those teams is now cited as their largest source of job stress.

Once the dictate is made to move your test effort offshore, what can you do to have it go as smoothly as possible? How can you minimize the headaches, late night phone calls, late or incorrectly executed tasks, and other hassles? What can you do to make offshoring work in the best way possible and reduce stress?

There are a number of things that you can do to maximize the productivity of your offshore testing team, get the most out of the testing resources that are available in-house, and minimize your stress. In this article we will look at what you can control:

  • Tools, infrastructure, and processes to foster good communication
  • Training
  • What to test offshore vs. what to test in-house

As teams become more distributed, the need for formalized tools and processes becomes even greater. For testing teams, it is critical to have centralized repositories that are accessible to all team members at any time. This can include systems for:

  • Defect tracking
  • Test plans
  • Test cases
  • Test results
  • Requirements and/or specifications

This is most commonly accomplished by the use of Web-based tools for document management, defect tracking, and test case management. Many teams are also finding blogs, wikis, project web pages, or internal portals essential for instant and easy communication.

One area that is often neglected for offshore teams is training. In order for your offshore testing teams to be effective, you must be sure that they have received adequate training in:

  • How to use the product under test
  • The business domain that the product serves
  • Software testing and quality assurance fundamentals
  • Technical skills for setting up test environments, analyzing bugs, etc.

Ensuring that the team has adequate knowledge and training is often best achieved by sending one or more senior team members to work onsite with your offshore team. In addition to ensuring adequate training, this is the best way to establish rapport between your onshore and offshore teams. Communication via email, phone, IM, and video conference will never be as effective at establishing working relationships as face-to-face communication.

Training is typically not a one-time effort; with turnover rates high in most offshoring locations, project leads should be prepared to train new offshore team members on an ongoing basis.

Finally, with most testing teams consisting of a mix of onshore and offshore testers, an important consideration is what to test onshore vs. what to test offshore.

On the whole, offshore testers are more likely to have a programming or computer science background, and therefore can excel when given technical testing tasks. In particular, I’ve seen offshore teams succeed at:

  • Technical testing, particularly test automation
  • Low-level API testing
  • Requirements-based testing, especially technical requirements
  • Functionality testing
  • Regression testing using existing, documented test cases
  • Performance/load/stress testing

Onshore testing team members will generally have a stronger domain knowledge and more ‘tribal knowledge’, gained through years of experience with your products. Onshore teams are best leveraged for:

  • Business requirements testing
  • User scenario / soap opera testing
  • Usability testing
  • Exploratory/Ad hoc testing
  • Any testing tasks that need fast turnaround or interaction with onshore developers

Through appropriate tools, processes, training, and task assignment, you can ensure that your global testing team is working in an optimal way, and even increase the capability of your team through a diversity of talents and backgrounds.

Four Drivers for Delivering Software Test Automation Success

Test automation provides great benefits to the software testing process and improves the quality of the results. It improves reliability while minimizing variability in the results, speeds up the process, increases test coverage, and ultimately can provide greater confidence in the quality of the software being tested. However, automation is not a silver bullet. It also brings some problems. The solution for test automation is to first define the test methodology, then choose the right enabling technology to help you implement the methodology. The chosen methodology should provide measurable improvements to the following four success drivers:

  • Visibility
  • Reusability
  • Scalability
  • Maintainability

The four drivers above have a direct impact on three factors (manageability, productivity and cost efficiency), which ultimately lead to the tangible benefits we expect a successful automation program to deliver.

Consider Figure 1 below. This diagram shows the results of a good test automation methodology. Good automation provides optimum productivity for the software testing effort and hence leads to higher-quality software releases. Test automation visibility provides measurability and control over the software development process, which keeps it manageable. With good visibility established, you can make effective management decisions about if, when, and how to do training and auditing to address the quality of tests.

Reusability and scalability of test automation improve test productivity. Productivity, however, should be defined by (1) the quantity of tests (driven by reusability and scalability), and (2) the quality of tests (visibility into what the tests are actually doing helps improve the tests qualitatively). Quality and quantity are two different things. When test automation is reusable and scalable, the issue of quantity is resolved. When test automation is highly maintainable, the cost of ownership is minimized: building test automation in a way that is more maintainable lowers the total cost of ownership of the test assets, making the overall testing effort more cost-effective.

 

Figure 1. Outflows of Test Automation.

The quality of tests is mostly affected by the training and skills of the test staff. Automating a bad test does not improve its quality; it just makes it run faster. Good test design is a critical and often overlooked aspect of test automation. Test automation visibility by itself does not produce high-quality tests; it merely gives us a view into how well the test designers are trained. Addressing the training issues will help address test case quality issues.


Securing and developing competency is a secret to automation success.


Visibility, reusability, scalability, and maintainability lead to productivity and are the drivers for the following benefits:

  • Improved time-to-market.
  • Improved quality of releases.
  • Improved predictability.
  • Improved Test/QA communication.
  • Higher test coverage.
  • Lower testing costs.
  • Earlier detection of bugs.
  • Lower technical support costs.
  • More effective use of testing resources.
  • Improved customer confidence and adoption.

After the test methodology and tools are set, the next step is to put the right people in place with the proper skills and training to do the work. The key to automation success is to focus your resources on test production; that is, improve the quantity and quality of the tests rather than spending too many resources on automation production.


The most essential element to achieve these benefits is the methodology, not the tools.


In evaluating the return on investment with test automation, you need to look at the big picture. Think of the return-on-investment (ROI) equation, with the benefits on one side, and the costs on the other. For the benefits, consider the productivity, both in quantity and quality of tests. For the cost side of the equation, think about the reusability, scalability and maintainability of the tests in the context of the phases of a test automation effort:

  • Deployment costs
  • Test automation creation costs
  • Execution costs
  • Maintenance costs

Evaluate the ROI by considering whether the costs are justified by:

  • Faster/more tests?
  • Faster/more test cycles?
  • Better test coverage in each cycle?
  • Higher quality of tests?

To make these evaluations, you need to have good visibility.
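
As a back-of-the-envelope illustration of that equation, the sketch below puts invented numbers on both sides of it. Every figure here is a placeholder meant only to show the shape of the comparison; real estimates would come from your own project data.

# Cost side, per release cycle, grouped by the phases listed above (person-days).
deployment = 20       # tool and framework setup, amortized over cycles
creation = 60         # building the automated tests
execution = 5         # running the tests and triaging results
maintenance = 15      # keeping tests working as the application changes
automation_cost = deployment + creation + execution + maintenance

# Benefit side: manual effort the automation replaces each cycle, plus the value of
# the extra cycles and coverage it makes possible (also person-days, estimated).
manual_effort_replaced = 90
extra_coverage_value = 30
benefit = manual_effort_replaced + extra_coverage_value

roi = (benefit - automation_cost) / automation_cost
print(f"ROI per cycle: {roi:.0%}")  # prints "ROI per cycle: 20%" with these made-up figures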

Often, the costs and benefits of test automation are uncertain due to lack of visibility. Management simply does not know how much money it will spend on automation, and how much benefit it will get from it. People are uncertain about how to quantify the ROI and how to set up a way to monitor whether they are on target. This is symptomatic of projects that start out well and get derailed six months later.


Excellent visibility leads to effective management of test automation production.


Defining the test methodology before choosing the enabling technology should provide measurable improvements in visibility, reusability, scalability, and maintainability, yielding many tangible benefits from test automation efforts. To read more about this, see LogiGear’s book Global Software Test Automation: An Executive Snapshot of the Industry.