
Four Fundamental Requirements of Successful Testing in the Cloud – Part III

Internet-based per-use service models are turning things upside down in the software development industry, prompting rapid expansion in the development of some products and measurable reduction in others. (Gartner, August 2008) This global transition toward computing “in the Cloud” introduces a whole new level of challenge when it comes to software testing.

Cloud computing, when it is done well, provides a reliable and single point of access for users. Consistent, positive user experience sells the service, and rigorous testing assures a quality experience. In order to produce reliable, effective results for users of many walks of life, exacting software testing standards must be met.

In a series of articles, LogiGear is identifying four fundamental requirements by which software testing in the Cloud can uphold these standards:

Four Fundamental Requirements for Successful Testing in the Cloud:

  1. Constantly Test Everything – the Right Way
  2. Know What’s Where – and Prove It
  3. Define Your Paradigm
  4. Don’t Underestimate the Importance of Positive User Experience

In this issue’s article, we address:

Requirement 03: Define Your Paradigm

Today’s companies increasingly find themselves in an ever more competitive market, especially in the drive to implement more robust, capable and pioneering Cloud-based products and services. Product delivery times are decreasing, customers demand higher and higher levels of product quality, and failure to deliver within the customer’s expectations can be swiftly punished with wholesale product abandonment and the erection of barriers to market reentry.

Companies who are leading creative efforts to address these volatile areas recognize that quality must be uncompromised, and indeed surpassed in every way.

Adopting a testing paradigm that is designed specifically for the requirements of Cloud-computing is a fundamental requirement for the new standards of quality being set by customer-driven demand.

The steps for defining a Cloud-friendly testing paradigm are in general: 

  1. Evaluate Your Current Testing Paradigm
  2. Define Paradigm Requirements that Meet Standards for Cloud-based Technologies
  3. Apply Proven Methods and Technology
For our discussion of the right tools and methodology, read the first installment of this series: Requirement 01: Constantly Test Everything – the Right Way

Our discussion about loss prevention and risk mitigation when testing in the Cloud can be read in the second series installment: Requirement 02: Know What’s Where – and Prove It

1. Evaluate Your Current Testing Paradigm

Each quality or development team has its own frame of reference by which it identifies the need for testing overall, and the specific kinds of testing required. This frame of reference, or paradigm, often reflects the higher organizational approach to quality assurance and the role that testing is perceived to occupy in customer satisfaction and loyalty.

Cloud computing introduces a need for a new paradigm – one that incorporates the 24/7 testing-on-the-fly required for successful Software as a Service implementation and other Cloud-based products.

Some organizations may find their current approach to testing adapts relatively well for the “test everything all the time” model. However, most will find it necessary to reassess not just their approach toward testing, but the fundamental principles that underlie that approach – ones they’ve inherited from traditional testing models.

For a more detailed look at the historical development of testing methodologies, read our white paper “The 5% Solution”

In the world of software testing, all the players used to start in pretty much the same place: manual testing. As the industry matured, more elegant testing solutions – record and playback, automated or scripted testing, and action-based testing – emerged over time. Each new approach to testing successively increased our testing efficiencies, reporting efficiencies and ability to handle larger and larger test loads, lending leverage to the leading few. Conducting a Current-State Assessment can give you the necessary insight into what works and doesn’t work in your current testing paradigm when it comes to testing in the Cloud.

Conducting a Current-State Assessment

Most companies have tried a variety of testing approaches, some manual, some scripted, and maybe even some attempts at automation. It is important for a firm to conduct an honest self-assessment about their current testing footprint, capabilities, limitations, and opportunities for improvement.

Many times firms have beliefs about their current testing approach and are surprised to find that those beliefs, albeit hopeful, are not fully rooted in practice. The following questions will help you thoroughly evaluate your testing paradigm.

  • Testing methodology – Is it largely manual or some combination of methods? Are those methods consistent?
  • Testing environment – Where are the applications located? Where is the data located? What are the network and performance constructs?
  • People resources and skill sets – What is the competency makeup of your current test team? What are the skill sets required for moving forward?
  • Time-to-deliver constraints – What are the barriers to testing efficiency? What are the efforts that take more time than others and why?
  • Reporting and progress visibility – Do you have consistent visibility into your testing status? Are you surprised when testing efforts are late? Are statuses verbal, or based on actual metrics?

2. Define Paradigm Requirements that Meet Standards for Cloud-based Technologies

Achieving your organization’s testing goals, in whole or in part, requires planning and mapping out your “testing in the Cloud” strategy and its requirements.

Clarifying Cloud Architecture

An initial understanding of the different types of clouds and what their high-level architectures offer will inform the kind of testing paradigm you will need to establish.

Sam Johnston’s integrated picture depicts the three cloud types:

  • Remote, Public or External Cloud – Public clouds are sometimes referred to as “regular” cloud computing. Completely separate from the user’s desktop and the corporate network they belong to, public clouds offer a pay-per-use service model because the user is leveraging outside compute resources for the particular service they are seeking. This approach offers economies of scale, but the shared infrastructure model can raise concerns about configuration restrictions, adequate security, and service levels (available uptime). These concerns might make you think twice before entrusting sensitive data that is subject to compliance or safe harbor regulations. Because public clouds are typically made available via the public internet, they may be free or inexpensive to use. A well-known example of a public cloud is Amazon EC2, which is available for use by the general public.
  • Internal or Private Cloud – Private cloud computing extends the same infrastructure concepts firms already have in their data centers. The motivation for private clouds appears to be resolving the security and availability concerns inherent in the public cloud paradigm. As such, private clouds seemingly are not burdened by the network bandwidth, availability issues or potential security risks that may be associated with public clouds. However, this thinking belies the very intent of cloud computing, which is predicated on hardware-software extensibility, dramatic reduction in infrastructure costs, and elimination of the management concerns governing private networks.
  • Mixed or Hybrid Cloud – Many of the leading engineering thinkers in the industry suggest that the most workable cloud computing approach is the “hybrid” approach, which combines the best of the public and private cloud paradigms. Considering that some applications may be too complex or too sensitive to risk operating from a public cloud, it makes sense for a firm to protect those application and data assets within the construct of a private cloud, where it has total control. Less sensitive applications and data can be migrated to a public cloud, freeing compute resources that can be repurposed for the complex applications that need to stay home. The hybrid approach does sound like the best of both worlds: it makes sense from a technology and economic standpoint, and it allows for control, flexibility and growth. The trick to managing hybrid clouds comes when you consider spikes in demand. When demand spikes pummel the performance of applications located within the private cloud and you need additional computing power (as web-based news media experience when critical events occur), you will need a management policy that governs when to reach out to the public cloud for those additional resources.

Defining the Future-State

Envisioning the results of your testing transformation requires solid understanding of your organization’s business goals and objectives, the Cloud computing paradigms that may help your testing effort contribute to those goals, and development of a sound plan to move in the new direction.

When documenting your planned future state, address each of these categories:

  • Architectural – Consider the different Cloud paradigms as they pertain to your business model, goals and objectives, and application and data sensitivity.
  • Organization – Ensure organizational review and consensus of new Cloud testing direction as it pertains to business goals and objectives and priorities.
  • Financial – Define the benefits, know where the real costs lie, and define the budget.
  • Implementation – Adopt an incremental improvement approach, and choose the correct tools and partners.
  • Monitor and measure – Develop a consistent set of metrics for measuring and monitoring your new foray into testing in the Cloud.

Other organizational goals and objectives that are important to include:

  • Provide sufficient support for distributed test teams
  • Increase your return-on-testing-investment
  • Significantly decrease time-to-market
  • Optimize the reusability of tests and test automation
  • Improve test output and coverage
  • Enhance the motivation of your testing staff
  • Increase managerial control over quality and testing
  • Have better visibility into quality and testing effectiveness

While setting your testing goals, consider that in the LogiGear white paper “The 5% Solution,” Hans Buwalda, LogiGear’s CTO, posits that “with good test automation you can have more tests and execute them any time you need to, thus significantly improving the development cycles of your project.”

With the repeatable tests under automation, he says, testers are free to work on more sophisticated testing, as well as testing focused on new features and functions, increasing the overall quality of the delivered result.

Buwalda establishes the 5% Goals:

  • No more than 5% of structured test cases should be tested manually
  • No more than 5% of testing efforts should be spent in the automation process

Conversely, the five percent goals suggest that 95% of your tests should be automated and 95% of your testing efforts should be spent on emerging functionality and higher complexity testing. Other organizational goals fall into some pretty predictable categories.
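As a rough illustration of the arithmetic behind the 5% goals (the counts below are hypothetical examples, not data from the white paper), a team can check its own metrics like this:

```python
# Illustrative check of the "5% goals" against a team's own metrics.
# The counts below are hypothetical examples, not data from the paper.

def meets_five_percent_goals(manual_tests, total_tests,
                             automation_hours, total_testing_hours):
    """Return (manual_ok, effort_ok) for the two 5% goals."""
    manual_ratio = manual_tests / total_tests
    effort_ratio = automation_hours / total_testing_hours
    return manual_ratio <= 0.05, effort_ratio <= 0.05

# Example: 30 of 1,000 structured tests run manually (3%, within goal),
# but 40 of 500 testing hours spent on automation work (8%, over goal).
manual_ok, effort_ok = meets_five_percent_goals(30, 1000, 40, 500)
print(manual_ok, effort_ok)  # → True False
```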

3. Apply Proven Methods and Technology

Implementing the changes necessary to adopt a new testing paradigm will bear out how well you’ve defined your requirements. Therefore, selecting well-matched testing tools and specific architectural approaches can have a dramatic impact on the results of your paradigm shift.

Architectural Approaches

If you are going to perform testing in the cloud it is important to understand the different architectural approaches that are available and the unique advantages that each approach offers. First, let’s start out with a quick list of the six basic cloud architectures or relationships.

Cloud Services Paradigm – Industry Standards

  • Cloud-accessible application – Communications (HTTP, XMPP); Security (OAuth, OpenID, SSL/TLS); Syndication (Atom)
  • Cloud-based client services – Browsers (AJAX); Offline (HTML 5)
  • Scalable infrastructure and computational arrays – Virtualization (OVF)
  • Application development and deployment platforms – Solution stacks (LAMP)
  • Utility services – Data (XML, JSON); Web Services (REST)
  • Data storage, duplication and backup
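To make the “Utility services” row concrete, here is a minimal sketch of parsing the kind of JSON payload a REST utility service returns. The payload, the field names, and the parse_location function are invented for illustration, not taken from any service named in this article:

```python
import json

# Minimal sketch of handling a JSON payload the way a REST utility
# service would return it. The payload and field names are invented
# examples, not the response format of any real service.
payload = '{"query": "plaza hotel", "lat": 37.422, "lng": -122.084}'

def parse_location(body):
    """Parse a JSON response body into a (lat, lng) tuple."""
    data = json.loads(body)
    return data["lat"], data["lng"]

print(parse_location(payload))  # → (37.422, -122.084)
```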

Cloud Accessible Application

This category of cloud computing technology focuses on applications that are hosted entirely at a cloud services application vendor. No installation of software is required on the user’s desktop; the application functionality is accessed through a web browser and the data is stored on the application host’s servers.

Cloud-based Client Services

Think of a device that is completely useless without a connection to internet services (like the iPhone) and you will have an understanding of what cloud-based client services are all about. Netbooks are another emerging computing approach, predicated on utilizing more cloud-based applications and services on a smaller, less powerful laptop computer.

Scalable Infrastructure and Computational Arrays

Imagine a sudden spike in demand for the products you want to sell. Good news, right? But let’s imagine that your computer room server just can’t handle the additional load. You lose money with every customer who abandons their purchase. With the right cloud infrastructure provider relationship, those worries might be a thing of the past. Vendors are increasingly building expandable infrastructure offerings to address this very problem. Good examples to keep in mind are Amazon and Sun Microsystems.

Application Development and Deployment Platforms

Picture the scenario: you have developed a ‘killer’ application that you know everyone is going to want, but you can’t deal with the cost and complexity of buying and managing the server and network equipment to run it. Web hosting services are a good example of this approach, allowing individuals and organizations to provide their own web sites accessible via the web. Still other platforms allow for remote application development and deployment.

Utility Services

MapQuest is a popular example of a cloud-based utility service: it is completely inaccessible and unusable if you are not connected to the web. Other examples include PayPal and Yahoo search.

Data Storage, Duplication and Backup

There are a number of variations of data services available via the cloud. Simple data storage allows for a pay-as-you-grow sort of service, supporting ever-increasing disk utilization as needed. Other services offer data duplication and mirroring, and data backup.


In this paper, we’ve presented the need to “define your paradigm” as a necessary prerequisite for pursuing a strategy for “testing-in-the-cloud”. We began with the common-sense directive of evaluating your current testing approach and setup. We then reviewed the different cloud types, their focus, and their typical use. Additionally, we offered the business objectives most firms would consider as foundation for making a move to a Cloud-based testing paradigm. Lastly, we presented the six common cloud computing paradigms, the industry standards that have emerged to support each, and introduced examples of services in each category.

Your testing-in-the-Cloud strategy should consider all of these approaches to determine what will work best for your organization. Your success will be determined by a disciplined approach and thorough implementation. Happy testing!


Testing in Agile Part 5: TOOLS

Michael Hackett, Senior Vice President, LogiGear Corporation

To begin this article, it would be a good idea to remember this key point:

Agile Manifesto Value #1
Individuals and interactions over processes and tools

Tools work at the service of people. People, particularly intelligent people, can never be slaves to tools. People talking to each other, working together and solving problems is much more important than following a process or conforming to a tool. This also means that someone who tells you a tool will solve your Agile problems knows nothing about Agile. A disaster for software development over the past decade has been companies investing gobs of money in “project management tools” or “test management tools” to replace, in some cases, successful software development practices in favor of the process the tool or tool vendor told that company was “best practice.” That made certain tool vendors rich and many software development teams unhappy.

If your team wants to be Agile, and work in line with the value outlined above, I have a couple of rules to remember when it comes to tools and Agile.

Rule #1 – Tools need to fit teams. Teams should not fit into tools!
Rule #2 – Test teams need to grow in their technical responsibility.

These two ideas sum up how I feel about most tools that are part of an Agile development project. For example, adding management tools adds overhead, reduces efficiency and limits the Agile idea of self-directing teams. However, communication tools can help in distributed development situations and when scaling into large organizations. Some tools work to provide services to teams and others do not. This article is mainly about tools and how they relate to test teams. We are going to talk a lot about the different toolsets that are most important to test teams. We won’t talk about unit test frameworks, since test teams very rarely manage them, but we will discuss test case managers in the Agile world.
I’m also going to talk about the changes that the test teams need to undergo to successfully implement or use these new tools.

Often when beginning an Agile project, there is a need for some re-tooling in order to be successful. The same set of tools we used in traditional development may not be enough to support rapid communication, dynamic user story change and ranking, and increased code development.

The groups of tools of specific use to test teams we will focus on are:

• Application Lifecycle Management (ALM) – such as Rally, Atlassian’s JIRA Studio and IBM’s Rational Team Concert
• Test Case Management Tools – such as HP’s Quality Center
• Continuous Integration Tools – such as CruiseControl, Bamboo, and Hudson (including source control tools such as Perforce and Visual SourceSafe (VSS) type tools)
• Automation Tools – we will not talk about specific tools in this section.

Application Lifecycle Management Tools (ALM)

Sometimes when we talk about tools, we discuss them in relation to the Application Lifecycle. Naturally then, the tool vendors refer to their large suites as Application Lifecycle Management Tools, or ALM. This is the major category of tooling and it is often an enormous undertaking to get these tools implemented. However, there are some minor category tools in the ALM spectrum that we can pull out that relate to testers.

Application Lifecycle Management (ALM) tools have been making a splash in the development world for a few years now. Approximately 15 years ago, Rational (before Rational/IBM) had an entire suite of tools we would now call ALM. From RequisitePro to Rose to ClearCase through ClearQuest, the idea was to have one set of integrated tools to manage software development projects from inception to deployment. Now, there are multitudes of tool sets in this space. The use of these tools has grown in the past couple of years as teams have become more distributed, faster and more dynamic, while at the same time coming under increased pressure from executives to be more measurable and manageable. When Agile as a development framework grew explosively, the anti-tool development framework got inundated with tools.

ALM tool suites generally manage user stories/requirements and call center/support tickets; they also help track planning poker, and store test cases associated with user stories and bugs. There are many tools that do this, but the big leap forward with current ALM tools is their integration with source control, unit testing frameworks, and GUI test automation tools. Lite ALM tools satisfy the need for speed required by the rapid nature of software development – at heart, an ALM suite is a communication tool.

There are great reasons for using ALM tools, just as there are great reasons not to use them! Let’s start with great reasons to use them:

  1. If you’re working on products with many interfaces to other teams and products, ALM tools can be very beneficial in communicating what your team is doing and where you are in the project.
  2. ALM tools are essential if you’re working with distributed teams.
  3. If you are working on projects with large product backlogs, or with dynamic backlogs often changing value, a management tool can be very helpful.
  4. ALM tools are useful if you have teams offshore or separated by big time zone differences.
  5. You are scaling Agile – for highly integrated teams, enterprise communication tools may be essential.
  6. If, for whatever reason, the test team is still required to produce too many artifacts – mainly high-overhead artifacts like test cases, bug reports, build reports, automation result reports, and status reports (ugh!) – most ALM tools will have all the information you need to report readily available.

There are many good reasons to use ALM tools. But the tool can’t control you or the use and benefit of going Agile will be lost.

Most Agile lecturers, including Ken Schwaber, will tell you that using Agile with distributed teams reduces efficiency. This does not mean Agile cannot be successful with offshoring, but it is tougher, and the chances of ending up with smiles all around are lower. Nothing beats everyone being in the same place, communicating easily and openly every day, and working on our software together in a bullpen in the office. If you are in that situation – good for you! However, this is the subject of another magazine article we will bring you in the New Year.

Now the great reasons not to use ALM tools:

  1. It is very easy for some people to become tool-centric, as though the tool is the project driver rather than the teams. That is completely waterfall – bad!
  2. Paper trails and the production of artifacts, “because we always did it this way,” are anti-Agile. Most times the artifacts become stale or useless, having little relevance to what is actually built.
  3. Management overhead is anti-Agile. The only “project management,” if I can even use that phrase, is a 15-minute daily scrum – not a giant, administrative-nightmare, high-cost-overhead project management process! When the ALM tool becomes a burden and its overhead slows down the team, you will need to find ways to have the tools support you rather than slow you down.

A quick word about measurement: the only measurement artifact in line with by-the-book Agile is the burndown chart. Extreme Programming espouses velocity. If your team is measuring you by test case creation, test case execution rates, or user story coverage, there is nothing Agile about it. End of discussion.
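The burndown and velocity measurements mentioned above are simple enough to sketch in a few lines of Python; the sprint figures below are invented for illustration:

```python
# Sketch of the data behind a sprint burndown chart: remaining story
# points after each day, plus the team's velocity (points completed).
# All figures are invented for illustration.

def burndown(total_points, completed_per_day):
    """Return remaining story points at the start and after each day."""
    remaining = [total_points]
    for done in completed_per_day:
        remaining.append(remaining[-1] - done)
    return remaining

def velocity(completed_per_day):
    """Velocity: total story points completed in the sprint."""
    return sum(completed_per_day)

daily = [3, 5, 0, 4, 8]        # points completed on each sprint day
print(burndown(40, daily))     # → [40, 37, 32, 32, 28, 20]
print(velocity(daily))         # → 20
```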

If you want to investigate ALM tools further, I recommend you look at a few – check out enterprise social software maker Atlassian, maker of the famous JIRA, and Rally, a great, widely used product with dozens of partners for easy integration. These two are at opposite ends of the cost spectrum.

Test Case Management Tools

Most test organizations have migrated, over the past decade, to an enterprise-wide test case management (TCM) tool for tracking test cases, bugs/issues and support tickets in the same repository. Many test teams have strongly mixed feelings about these tools, sometimes feeling chained to them as they inhibit spontaneous thinking and technical creativity.

To begin talking about test case tools in Agile, there are two important reminders:

  1. Test cases are not a prescribed or common artifact in Scrum.
  2. Most companies that are more Agile these days are following some lean manufacturing practices – such as ceasing to write giant requirement docs and engineering specs, and getting really lean on project documentation. Test teams need to get lean also; they can start by cutting back on, or cutting out, test plans and test cases. Do you need a tool to manage test cases in Agile? Strict process says no!

If you are not using a TCM yet and are moving to Agile, now is not a good time to start one. If you already use one – get leaner!
Still, most ALM tools have extensive functionality built around test case management that can become an overhead problem for teams focusing on fast, lean, streamlined productivity.

Continuous Integration

In rapid development, getting and validating a new build can kill progress. Yet there is no need for this! In the most sophisticated and successful implementations of Agile development, test teams have taken over source control and the build process. We have already talked about continuous integration in this series (see Testing in Agile Part 3: Automated Build and Build Validation Practice / Continuous Integration for more discussion), so let’s focus on tools here. Source control tools are straightforward, and many test teams regularly use source control systems.

There’s no reason why a test team can’t take over making a build. There is also no reason a test team does not have someone already technically skilled enough to manage continuous integration. With straightforward continuous integration tools such as Bamboo, CruiseControl and Build Forge, test teams can do it.

My advice is:

  • Learn how these tools work and what they can do
  • Build an automated smoke test for build verification
  • Schedule builds for when you want delivery

Test teams can use a continuous integration tool to build a complete build-validation process: scheduling new builds, automatically re-running the unit tests, re-running automated user story acceptance tests, and running whatever automated smoke tests have been built! Note, this does not mean writing the unit tests – it means re-running, for example, the JUnit harness of tests. Test teams taking over the continuous integration process is, from my experience, what separates consistently successful teams from teams that enjoy only occasional success. Easy.
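The build-validation loop described above (make a build, re-run the unit tests, run the smoke tests) can be sketched as a simple pipeline. Each step below is a stub standing in for whatever command your CI tool – Bamboo, CruiseControl, Build Forge – actually invokes:

```python
# Sketch of a build-validation pipeline as a test team might drive it
# from a CI tool. Each step is a stub standing in for a real command
# (build script, JUnit run, automated smoke suite); replace the bodies
# with your own tool invocations.

def make_build():
    return True   # e.g. call the build script

def run_unit_tests():
    return True   # e.g. re-run the JUnit harness

def run_smoke_tests():
    return True   # e.g. run the automated build-verification suite

def validate_build(steps):
    """Run the steps in order; stop and report at the first failure."""
    for name, step in steps:
        if not step():
            return "FAILED: " + name
    return "BUILD VALIDATED"

pipeline = [("build", make_build),
            ("unit tests", run_unit_tests),
            ("smoke tests", run_smoke_tests)]
print(validate_build(pipeline))  # → BUILD VALIDATED
```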

I suggest you look at Wikipedia’s comparison of continuous integration software to see a very large list of continuous integration tools for various platforms.

Test Automation

I’m going to resist going into detail about automation here. Automation in Agile will have its own large magazine articles and white papers in 2011. Remember, when discussing automation in Agile with team members, scrum masters or anyone knowledgeable in Agile: the part of testing that a test team needs to automate is not the unit testing – that automation comes from developers. Test teams need to focus on:

  • Automated smoke/ build validation tests
  • Large automated regression suites
  • Automated user story validation tests (typically a test team handles this, but there are very successful groups that have developers automate these tests at the unit, API or UI level)
  • New functionality testing can be difficult to automate during a sprint. You need a very dynamic and easy method for building test cases. This will be discussed at greater length in a future Agile automation methods paper.

In Agile development, the automation tool is not the issue or problem, it’s how you design, describe and define your tests that will make them Agile or not!

Here are the main points to get about test automation tools in Agile:

  • The need for massive automation is essential, clear and non-negotiable. How test teams can do that in such fast-paced development environments needs to be discussed fully in its own white paper.
  • That you need a more manageable, maintainable and extendible framework – like tools that easily support action-based testing – is clear.
  • Developers need to be involved in test automation! Designing for testability and designing for test automation will solve automation problems.
  • Training is key to building a successful automation framework.
  • Tools never solve automation problems – free or otherwise! Better automation design solves automation problems.

  • Tools in Agile need to fit people, not the other way around.
  • Tools need to be easy to use (from your perspective, not the tool vendor’s perspective!) with low time and management overhead.
  • Employ lean manufacturing ideas wherever possible – specifically, cut artifacts and documentation that are not essential.

Article by Michael Hackett, Senior Vice President, LogiGear Corporation

Testing in Agile Part 3: PRACTICES and PROCESS

Michael Hackett, Senior Vice President, LogiGear Corporation


Remember that Agile is not an SDLC. Neither are Scrum and XP for that matter. Instead, these are frameworks for projects; they are built from practices (for example, XP has 12 core practices). Scrum and XP advocates will freely recommend that you pick a few practices to implement, then keep what works and discard what doesn’t, optimize practices and add more. But be prepared: picking and choosing practices might lead to new bottlenecks and will point out weaknesses. This is why we need to do continuous improvement!

That being said, there are some fundamental practices particular to test teams that should be implemented; if they are not, your chances of success with agile are slim.

Being aware of possible, even probable, pitfalls, or implementing the practices most important to traditional testers, does not by itself guarantee agile success – or, separately, testing success – though it should help you avoid a complete collapse of a product or team, or, potentially as harmful, a finger-pointing game.

This third part of our Testing in Agile series focuses on the impact on testers and test teams of projects implementing agile, XP, and Scrum. In this installment, we’ll only focus on the practices that are the most important to software testers.

And, in the final analysis, even if you implement all the important processes — and implement them well — you need to review how successful they are in your retrospective and keep what works then modify, optimize, and change what doesn’t work. Your scrum master or scrum coach helps with this.


In this section, we will focus mainly on XP practices, which are the development practices, rather than on the Scrum/project management practices, since it is the development practices that affect test teams most.

To be blunt, if your developers are not unit testing, and the team does not have an automated build process, the team’s success in agile will be limited at best.

Unit Testing / TDD and Automated User Story Acceptance Testing

Unit testing and test-driven development are so fundamental to agile that if your team is not unit testing, I cannot see how testers could keep up with the rapid release of code and fast integration in agile projects. The burden of black- and gray-box testing in the absence of unit testing in very fast, 2-to-4-week sprints should frighten any sane person knowledgeable in software development. Your developers need to unit test most if not all of their code in rapid, agile projects – there is no way around it.

Automated user-story acceptance testing is secondary to unit testing in importance. This test validates that the task or goal of the user story has been achieved rather than validating code as in a unit test. Having that kind of test automated and available to re-run on successive builds and releases will enable a test team to focus on more effective exploratory testing, error guessing, scenario, workflow, error testing, varieties of data and alternative path testing that unit testing and user-story validation tests rarely cover. This leads to finding better bugs earlier and releasing higher quality software.
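To make the distinction concrete, here is a minimal sketch in Python of unit tests written TDD-style. The `slugify` helper and its behavior are hypothetical, invented purely for illustration; in TDD the tests would be written first and the implementation grown until they pass.

```python
# Hypothetical helper: turn a story title into a URL-safe slug.
# In TDD, the two tests below would exist before this function did.

def slugify(title):
    """Lowercase the title, drop punctuation, join words with hyphens."""
    cleaned = "".join(ch if ch.isalnum() else " " for ch in title.lower())
    return "-".join(cleaned.split())

# Unit tests validate the code itself, at the smallest useful scope.
def test_basic_title():
    assert slugify("Hello, World!") == "hello-world"

def test_collapses_whitespace():
    assert slugify("  Agile   Testing ") == "agile-testing"

if __name__ == "__main__":
    test_basic_title()
    test_collapses_whitespace()
    print("all unit tests passed")
```

An automated user-story acceptance test, by contrast, would exercise the whole feature the story describes, not this one function.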

100% developer unit testing is one of the most significant advancements within the agile development process. We all know this does not guarantee quality, but it is a big step toward improving the quality of any software product.

From the Testing State-of-the-Practice Survey we conducted, we can get a rough idea of how close we are to the ideal.

Here is one question regarding unit testing from our survey:

What percentage of code is being unit tested by developers before it gets released to the test group (approximately)?

Percentage of code unit tested    Percent answered
100%                              13.60%
80%                               27.30%
50%                               31.50%
20%                                9.10%
0%                                 4.50%
No idea                           13.60%

* out of 100 respondents

A vast majority of agile teams are unit testing their code, though only a fraction are testing all of it. It’s important to know that most agile purists recommend 100% unit testing for good reason. If there are problems with releases, integration, missed bugs, and scheduling, look first to increase the percentage of code unit tested!

Automated Build Practice And Build Validation Practice / Continuous Integration

With the need for speed, rapid releases, and very compressed development cycles, an automated build process is a no-brainer. This is not rocket science, and it is not specific to Agile/XP. Continuous integration tools have been around for years; there are many of them, and they are very straightforward to use. It is also common for test teams to take over the build process. Implementing an automated build process is a step forward by itself, but a team will realize more significant gains if it builds on that foundation with full continuous integration.
Continuous Integration includes the following:

  • Automated build process
  • Re-running of unit tests
  • Smoke test/build verification
  • Regression test suite
  • Build report
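As a rough illustration of how these stages chain together, here is a hedged sketch in Python. The stage names and pass/fail stand-ins are invented for illustration and are not tied to any real CI tool:

```python
# A toy continuous-integration loop: run each stage in order,
# stop at the first failure, and produce a simple build report.

def run_pipeline(stages):
    """Run CI stages in order; stop at the first failure and report."""
    report = []
    for name, stage in stages:
        ok = stage()
        report.append((name, "PASS" if ok else "FAIL"))
        if not ok:
            break  # a red build stops the line; later stages are skipped
    return report

# Illustrative stages; real ones would invoke the build tool and test runners.
stages = [
    ("build", lambda: True),
    ("unit tests", lambda: True),
    ("smoke test", lambda: False),   # pretend the smoke test failed
    ("regression suite", lambda: True),
]

for name, status in run_pipeline(stages):
    print(f"{name}: {status}")
```

The point of the sketch is the ordering: a failed smoke test means the regression suite never runs, and testers never waste time on a bad build.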

The ability to have unit tests continually re-run has significant advantages:

  • It can help find integration bugs faster
  • Qualifying builds faster will free up tester time
  • Testing on the latest and greatest build will save everyone time.

The positives of continuous integration far outweigh any resistance to implementing it. Test teams can take on more responsibility here: they can be more in control of the testing process — how many builds they take and when they take them — for starters.

Hardening Sprint / Regression Sprint / Release Sprint

The most successful agile teams have implemented a sprint that is, in effect, designed and tailored specifically to just test: quite simply, a “testing sprint”. Although this testing or integration sprint can go by many names, regression sprint and hardening sprint are the most common. Prior to releasing to the customer, someone usually has to do security, performance, accessibility, usability, scalability, perhaps localization, or many other types of tests that are most effectively done once the product is fully integrated. In most cases, this is when end-to-end, workflow, and user-scenario tests are done and when full regression suites are executed. It is a great bug-finding and confidence-building sprint. But! It’s late in the development cycle. “Bugs” found here may go directly into the backlog for the next release or cause a feature to be pulled from a release.

Estimating And Planning Poker Includes Testers

Testers participating in the sizing and estimating of user stories is fundamental to agile success. A few times I have run across companies trying to scope, size, and rank a backlog without test team input. This is a gigantic mistake. Let me tell a quick story.

I was at a company that was doing a good job implementing scrum, which it had piloted across a few teams. They still had some learning to do and were still implementing processes — overall, a good start!

The group that had the toughest time adapting to scrum, though, was the testers. This was because in their previous SDLC the test team was viewed as adversarial, composed mainly of outsiders. Some of those attitudes persisted, to the point where the product owner (a former “marketing person”) excluded testers from the user-story sizing and estimation process.

During a coaching session, I was reviewing some user stories with them. We were doing some preliminary sizing and doing a first pass, assigning only four estimates: small, medium, large, and extra large. (Note: it’s a great way to start. Some people call it grouping by shirt size to roughly estimate what can get done in a sprint.)

One story got sized as a large and another as a medium. From my experience with their product, I picked those two stories out and pointed out that the story ranked as a large was actually very straightforward to test; any tester knowledgeable about that area could do an effective test job pretty quickly. But the user story they sized as a medium was a test nightmare! I quickly ran through a list of situations that had to be tested — cross-browser, data, errors, etc. — all of those things that testers do! The team believed me, and we pulled in the test engineer to review our results. The tester quickly said the same thing I had and pointed to this as a sore point for testers. The stories would have been sized completely wrong for the sprint (as had been the problem for the previous test team) if the test team had continued to be excluded from sprint planning and the planning poker session.

Including testers would not have reduced the first story’s development complexity or moved it from a large to a medium. But it would have moved the supposedly medium story to a large, or even an extra large, because of its testing impact! The lesson learned here is that the test team needed to be included in the planning game. Attitudes had to change, or costly estimating mistakes would keep being made.

This practice is also crucial to the XP values of trust and respect! Sadly, in many situations I have seen testers excluded from the planning meetings, and invariably it is a trust problem! Deal with the trust and respect problem and get involved in the complete planning process!

Definition Of Done

We’re all used to milestone criteria, entrance criteria, and exit criteria in whatever SDLC we’re using. The term people on agile projects use that relates to milestone criteria is the definition of done. The problem with milestone criteria is that they are routinely ignored when schedules get tight, which often leads to frustration, bad quality decisions, and ill feelings.

I want to show a simple description of agile that will help us in the discussion.

We are all familiar with the traditional three points of the project triangle that every project must juggle: features (number of features, quality, stability, etc.), cost (resources), and schedule. Before agile, projects committed to feature delivery, then would add people (cost) or push out dates (schedule), and sometimes release buggy features to meet whatever constraint project managers felt had to hold!
Agile is different.

In agile, the cost, namely the size and resources of the team, is fixed. We know that adding people to a project reduces efficiency (Agile Series Part 1). The schedule is also fixed: never extend the time of a sprint. What can change, and what is estimated, is the set of features. This leads us back to the definition of done.
What gets done, the user stories/features, gets released. What does not get done goes into the backlog for the next sprint. Since sprints are so short, this is not as dramatic as pulling a feature from a quarterly release, where the customer would have to wait another quarter for that functionality. If a feature gets pulled from a sprint and put into the backlog, it can be delivered just a few weeks later. How do we know what’s done? A primary principle from the Agile Manifesto:

  • Working software is the primary measure of progress.

The definition of done varies from group to group. There is no one definition, though it commonly includes at least the following:

  • the ability to demonstrate functionality
  • complete (100%) unit testing
  • zero priority 1 bugs
  • complete documentation

Many teams also include a demonstration of the user story or feature before it can be called done.
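One way to keep the definition of done from being routinely ignored is to treat it as an executable gate. The sketch below, in Python, is illustrative only; the criteria keys mirror the checklist above, but the field names and the sample story are invented:

```python
# Hypothetical done criteria, mirroring the checklist in the article.
DONE_CRITERIA = {
    "demoable": "functionality can be demonstrated",
    "unit_tested": "100% unit testing complete",
    "no_p1_bugs": "zero priority-1 bugs open",
    "documented": "documentation complete",
}

def is_done(story):
    """A story is done only when every criterion holds; report any misses."""
    missing = [desc for key, desc in DONE_CRITERIA.items()
               if not story.get(key, False)]
    return (len(missing) == 0, missing)

# An illustrative story with one open priority-1 bug.
story = {"demoable": True, "unit_tested": True,
         "no_p1_bugs": False, "documented": True}
done, missing = is_done(story)
print("done" if done else f"not done: {missing}")
```

The design point is binary: a story that misses any criterion is simply not done, so it goes back to the backlog rather than being argued into the release.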

In the past, for most teams, it seemed like a nice idea to have a set of milestone criteria, but it was routinely ignored. In agile projects, though, with rapid releases, the risk of slipping on the done criteria could be catastrophic to the system. Done is a safety net for the entire scrum team, and indeed the entire organization.

In the past, many test teams had been the entrance and exit criteria police since, in later stages, milestone criteria are often based on testing, bugs and code freeze — items testers see and are responsible for reporting on. Now, it is the Scrum Master who enforces the “Done” criteria, not testers. It is much better to have the Scrum Master be the enforcer, rather than have testers act as naysayers and complainers! Every team needs a Scrum Master!

Small Releases

  • Deliver working software frequently.

In Scrum it is recommended that sprints last from two to four weeks at most. The practice of small, iterative releases is the very core of agile development. I have seen companies rename their quarterly release a sprint and say: “we’re agile!” No.

A three-month sprint is not a sprint at all. Sprints are meant to be very narrow in focus, able to demonstrate functionality before moving on to the next sprint, and backed by a prioritized and realistic backlog. These, among many other reasons, should keep your iterations short. Some companies have begun agile implementations with four-week sprints and a plan to reduce the sprint length to three or two weeks over a year, after some successful releases and retrospectives with process improvement. Ken Schwaber and Jeff Sutherland, the original presenters of Scrum, recommend beginning with a two-week sprint.

Measure Burndown And Velocity

I have brought up the phrases sustainable pace and burndown chart a few times. Let’s briefly discuss these practices.
First, two guiding ideas:

  • We have to work at a sustainable pace. Crazy long hours and overtime quickly lead to job dissatisfaction and low-quality output (see the Peopleware entry on Wikipedia). The main way we get an idea of sustainable pace is through measuring burndown.
  • Burndown charts are one of the very few scrum artifacts.

The only scrum artifacts are the product backlog, the sprint backlog, and the burndown chart. Velocity is not by-the-book scrum, but we will talk about it as well.
To briefly describe burndown charts, here are some points about them and their usage:

  1. They measure the work the team has remaining in a sprint and whether it can finish its planned work
  2. They quickly alert you to production risks and failed-sprint risks
  3. They alert you to potential needs to re-prioritize tasks or move something to the backlog
  4. They can be used during a sprint retrospective to assess the estimating process and, in many cases, the need for some skill building around estimating

To have healthy teams and high-quality products, people need to work at a sustainable pace. Track this using burndown charts and velocity.

Burndown charts plot the total number of hours of work remaining in a sprint on the y-axis against the days of the sprint on the x-axis.

Velocity measures the estimated total of successfully delivered user stories, or functionality from backlog items. Measured over many sprints, a stable, predictable number should emerge.

Velocity can be used to set realistic expectations for both “chickens and pigs” for future planning. Velocity is measured in the same units as feature estimates, whether that is “story points”, “days”, “ideal days”, or “hours” — all of which are considered acceptable. In simple terms, velocity in an agile world is the amount of work you can do in each iteration, based on experience from previous iterations. The Aberdeen Group, an IT research firm that has covered and published material on Agile development, claims that “When cost / benefit is measured in terms of the realization of management’s expectations within the constraints of the project timeline, it is the ability to control velocity that drives the return on investment.”
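The two measurements can be sketched in a few lines of Python. The sample hours and story points below are made up for illustration:

```python
# Burndown: hours of work remaining at the end of each day of a sprint.
def burndown(total_hours, hours_done_per_day):
    """Return the y-axis series of a burndown chart, day by day."""
    remaining, series = total_hours, []
    for done in hours_done_per_day:
        remaining -= done
        series.append(max(remaining, 0))
    return series

# Velocity: average story points delivered per sprint over past sprints.
def velocity(points_delivered_per_sprint):
    """A stable average emerges once several sprints have completed."""
    return sum(points_delivered_per_sprint) / len(points_delivered_per_sprint)

print(burndown(80, [10, 8, 12, 6, 9]))   # work left at the end of each day
print(velocity([21, 18, 24, 19]))        # points per sprint, averaged
```

If the burndown series is not trending toward zero by the last day, that is the early warning to re-prioritize or move something to the backlog.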

Measuring the burndown rate and calculating velocity will give you a reasonable amount of work for a team to do, at a pace that is conducive to happy teams releasing higher quality software. To repeat from the introduction piece to this series on Agile: “Teams working at a reasonable pace will release higher quality software and have much higher retention rates — all leading to higher quality and greater customer satisfaction.”

When the team feels really good about its abilities, it is encouraged to do better. The business starts to believe in the team, and this sets the team up to get into the zone. Once in the zone, the team can generally sustain its steady-state velocity month after month without burning out. And better yet, they get to enjoy doing it. Geoffrey Bourne, who writes for Dr. Dobb’s Journal, notes, “The essence of creating a happy and productive team is to treat every member equally, respectfully and professionally.” He believes Agile promotes this ethos, and I agree with him.

In conclusion, being agile means implementing practices that help product and development teams work most efficiently and be happy. There are many, many practices (again, XP has 12 core practices). Here, we discussed only the key practices for testing success. If your team calls itself agile and has not implemented some of these practices, it is crucial to bring them up in sprint retrospectives and talk about their benefits, and about the problems that not doing them has caused for your product and customers.

There are other practices that need to be in place for success, specifically for test teams to be successful and not be singled out in a blame game. These are covered in other installments, namely:

  • You have to have a scrum master.
  • Automate, automate, automate.
  • Use sprint retrospectives for process improvement.

Testing in Agile Part 2 – AGILE IS ABOUT PEOPLE

Michael Hackett, Senior Vice President, LogiGear Corporation

If your Agile implementation is not about people, you’ve missed the boat! The most profound impact of becoming more Agile is happier teams!

Agile manifesto Value #1:

* Individuals and interactions over processes and tools

Words like these do not show up in Waterfall or RUP SDLC process descriptions.

Agile cannot get more basic than this: people, the team members, you, are more important than the process documents, best practices or any standard operating procedures. People sitting in a room together, talking, hashing out an issue, live, beats out any project management tool every time.

At the core of the Agile movement are a few ideals that often get clouded by structure, the need-for-speed or just taken for granted. First, technology people and knowledge workers have to be happier at their jobs for long term productivity and higher quality work output. The hangover and subsequent bust from the dot-com boom have done more to spread Agile ideas than the cost savings aspects. This has been a grass roots movement.

As Kent Beck says in Extreme Programming Explained, XP is:

  • An attempt to reconcile humanity and productivity
  • A mechanism for social change

These are not casual statements. They are not meant to be taken lightly.


It is difficult to find a valuable discussion of Agile, Scrum or XP that does not highlight trust, empowerment, courage, or respect.  In my opinion, a groundbreaking aspect of the Agile/XP/Scrum movement is Chickens and Pigs.

This is an oft-repeated story, attributed to Ken Schwaber, concerning chickens and pigs. Here it goes:

A chicken and a pig are together when the chicken says, “Let’s start a restaurant!”
The pig thinks it over and says, “What would we call this restaurant?”
The chicken says, “Ham n’ Eggs!”
The pig says, “No thanks, I’d be committed, but you’d only be involved!”
(The Scrum Guide, Ken Schwaber, May 2009)

In terms of Scrum, the team members, the doers, are called “pigs.” They have skin in the game. Managers and salespeople who are involved in product delivery but do not produce it are “chickens.” In Scrum, chickens cannot tell pigs how to do their work.

In Agile, management and sales teams (chickens) back off from solely determining code delivery, since they do not write the code. The people developing the code (pigs) know how fast or slow ultimate software delivery will be, and they should be driving these schedules and milestones. The doers (pigs) and the business balance delivering value with working at a sustainable pace. The pigs are not dominated by the chickens. For many years, technology teams have complained about this kind of domination, which destroys teamwork. This is where trust is proven or broken.

This turns out to be a major stumbling block for many companies attempting to say they are Agile. Often, management and sales still want to tell development teams how long their work should take, most often without understanding the complexity. This legacy notion of top down management direction is not sustainable.

This refocus of development teams (pigs) influencing schedule and level of effort stops Taylorism (production-line management theory, where management dictates everything) dead in its tracks. Agile implies that we hire smart, creative, effective people (pigs), even experts! Let the pigs tell the managers (chickens) how long it will take to complete their work. Let the doers be creative and learn the best way to finish their tasks. Let the pigs find out what a sustainable pace is, rather than have the chickens impose one on them.

Working at a sustainable pace will eliminate or reduce one of the biggest sources of job dissatisfaction: overtime or, even worse, weekend work! Let us, the people actually doing the work, commit our feature development to delivery dates. Estimating, scoping, planning “poker”: this is all bottom-up, not top-down.

It’s pigs and chickens working together, not one dominating the other! Agile empowers people!

Where is my desk?

With such a high focus on easy communication, Agile development teams need to sit together, preferably in a bullpen, or at least be together to talk freely every day to get the full benefits of Agile; this must include “the business” — a customer or customer representative. And these “customers” need to participate in all facets of development; just participating in the daily scrum is not enough. It’s sitting together, working out problems together, talking to each other, asking each other questions with no delay or dead time waiting for email responses or the next weekly meeting. Communication as a constraint in today’s world is intolerable and frustrating.

Whatever your implementation of practices, keep in mind: communication, communication, and more communication. Focus on resolving communication problems, making communication easy and transparent, closing the loop on unanswered questions, and eliminating feelings of being ignored or dominated. Teams need to respect each other’s work and treat each other as humans. Simple as this seems, adversarial team members — people who never meet, do not know each other, or finger-point and blame — cannot make good products! Team members getting to know each other and sitting together with open communication is a major step toward happier teams making better products. This sounds nice, but in a world of remote teams, remote offices, home offices, and too-busy multi-taskers, software development is often de-humanized. This needs to stop.

Most books you find on Agile development include a chapter on how you sit and where you sit — looking out a window or facing the center. Sitting together, for pair programming and team building, is very important, and not only for Agile: focusing on getting along better will make everyone on the team stronger! The bottom line: sitting together fosters feedback and open communication.

The work environment has always been a key ingredient in job satisfaction.

If the number of meetings you attend, the hours spent in meetings, has not massively decreased, you have missed the Agile boat. Death by meeting is SO last century. Enough said.

Teams full of happy people…

Empowerment — the idea of self-forming, self-directing teams — is the foundation of Agile software development. What works for one team may not work for another, even if the teams sit next to each other and work on the same product. Each team has its own dynamic; oftentimes, creativity and problem-solving skills differ from group to group. Teams build skills, and when they find something that works, they can share it with other teams.

The extension of the Chickens and Pigs discussion is that teams commit to delivery. The whole team commits or no one commits. Everyone is responsible for quality – no finger pointing.

Multitasking is a failed notion: focus on one thing at a time! A quick web search on new research about how the brain works reveals that multitasking diminishes capability and productivity! Agile’s focus on a narrow scope makes teams more productive and happier.

If continuous process improvements are not coming out of your retrospectives, you’re breaking one of the primary tenets of Agile. No one gets it right the first time: learn by doing! Make it better over time. As the team and its skills grow, optimize and evolve your processes.

Key principles of Agile:

* Teams need to constantly reflect on how to improve their processes and methods.

Sprint reviews demonstrate finished functionality, but in retrospectives your group must also talk about lessons learned, how to make work more efficient in future sprints, how to reduce stress, and what new tools might help future development. Hopefully, you have put an end to useless postmortems.


We all want to be happy in our jobs. All managers want employee retention. You can’t get quality products from burned-out, stressed, distrustful, frustrated, or angry people. In software development, en masse, we finally realized this. Agile development concepts have been a significant step forward, and Agile ideas and methods have been a remedy for many companies that have struggled with bad team dynamics.

When thinking about Agile, remember…. Agile does not mean faster. Agile is about people.

Testing in Agile Part 1 – INTRODUCTION TO AGILE

Michael Hackett, Senior Vice President, LogiGear Corporation


In case you missed the first part of the series in our last magazine issue from Michael Hackett, Agile’s impact on software development teams is huge. For test teams it can be even more pronounced — and good, especially if your existing projects have been problematic.

There are many positive things about the move to be more Agile and responsive in software development, but like so many things the devil is in the details! What Agile actually means in your organization and what practices will be implemented vary dramatically from company to company. There is a rush to be Agile these days and I see it mis-implemented more than well implemented. I believe “Agile” itself needs to be re-understood, if not re-conceived.

Here is what I mean: Agile does not equal faster. It is not Scrum, it is not XP, nor is it Lean. Agile is not an SDLC, or how to manage a project, or how developers write code. Agile means too many different things to too many people. This series of articles is not meant to be an intro to Agile. The discussion here centers not on what Agile is or is not; it focuses only on those aspects which directly impact software testing and how traditional testers need to prepare to be Agile.

Most of the white papers and speakers you find these days on Agile, Scrum and XP focus on its practices. There is more than enough written about things like test-driven development, sprint retrospectives, and estimating user stories. My focus is different.

First, I will focus on software testing. And by this I mean testing in the traditional sense, not in the unit testing/user story acceptance testing sense. Secondly, I will focus on the areas of Agile development that are most overlooked but quite important to successful implementation of Agile, such as team dynamics, self-forming or self-directing teams, and tools.

Let me lay out some overview themes that guide my views on Agile and are critical points about this movement.

1- These ideas are not new. In 1987, DeMarco and Lister famously wrote about software development teams in their groundbreaking book, Peopleware:
Things that have a strong positive effect:

  • Fellow team member interactions
  • Workplace physical layout
  • Protection from interruptions

Things that have a negative effect:

  • Overtime
  • Lack of closure

Agile addresses these issues like no other set of ideas in software development or project management before it. There is no Agile, whatever project management mechanism you choose, that is not based upon teams sitting together and focusing on one set of tasks with excellent, full-team, daily communication.

2- Agile is not about getting faster!
Agile is about people and interactions being more important than processes and tools, working at a sustainable pace, and self-directed teams!

People working on Agile projects should be more satisfied with their work, do it at a more sustainable pace and feel better about the quality of the released product. If not, your company has really missed the boat. If Agile at your company is about a tool, or set of tools implemented, something is very wrong.

3- Testers have recommended many Agile ideals and practices for a long time.

  • “Unit testing is really good- we should do more!” How long have testers been saying that?
  • “Accept the fact that we always accept late changes rather than pretend we don’t.”
  • “I [formerly known as QA] don’t own quality- we all do!”
  • “You are going to stop calling my team QA.”
  • “We need to automate more.”
  • “We need to design the code for testability.”

The fact is that test teams have known many good Agile practices for a long, long time. In some cases, using more Agile development practices may not change the job of a traditional tester; the focus of testing may not change very much. But how the projects are managed will change.

The following series of discussions will explore how implementing Agile development practices and values can dramatically change your job satisfaction, level of respect, and work situation for the better — if the practices are implemented and not just given lip service.

The series is entitled: “Testing in Agile” – Please read on to learn more about how Michael tries to help testers new to Agile Development

April & May 2010: Testing In Agile, Part 1 Of Our Series On Agile Testing

by Michael Hackett, LogiGear Sr. VP and Certified ScrumMaster


Four Fundamental Requirements of Successful Testing in the Cloud – Part II


Four Fundamental Requirements of Successful Testing in the Cloud – Part I
