Category Archives: LogiGear Resource

Four Fundamental Requirements of Successful Testing in the Cloud – Part III

Internet-based per-use service models are turning things upside down in the software development industry, prompting rapid expansion in the development of some products and measurable reduction in others. (Gartner, August 2008) This global transition toward computing “in the Cloud” introduces a whole new level of challenge when it comes to software testing.

Cloud computing, when it is done well, provides a reliable and single point of access for users. Consistent, positive user experience sells the service, and rigorous testing assures a quality experience. In order to produce reliable, effective results for users of many walks of life, exacting software testing standards must be met.

In a series of articles, LogiGear is identifying four fundamental requirements by which software testing in the Cloud can uphold these standards:

Four Fundamental Requirements for Successful Testing in the Cloud:

  1. Constantly Test Everything – the Right Way
  2. Know What’s Where – and Prove It
  3. Define Your Paradigm
  4. Don’t Underestimate the Importance of Positive User Experience

In this issue’s article, we address:

Requirement 03: Define your Paradigm

Today’s companies find themselves in an ever more competitive market, especially in the drive to implement more robust, capable and pioneering Cloud-based products and services. Product delivery times are shrinking, customers demand ever higher levels of product quality, and failure to meet customer expectations can be swiftly punished with wholesale product abandonment and the erection of barriers to market reentry.

Companies that are leading creative efforts to address these volatile areas recognize that quality must not merely be maintained but surpassed in every way.

Adopting a testing paradigm that is designed specifically for the requirements of Cloud-computing is a fundamental requirement for the new standards of quality being set by customer-driven demand.

The steps for defining a Cloud-friendly testing paradigm are in general: 

  1. Evaluate Your Current Testing Paradigm
  2. Define Paradigm Requirements that Meet Standards for Cloud-based Technologies
  3. Apply Proven Methods and Technology

For our discussion of the right tools and methodology, read the first installment of this series: Requirement 01: Constantly Test Everything – the Right Way

Our discussion about loss prevention and risk mitigation when testing in the Cloud can be read in the second series installment: Requirement 02: Know What’s Where – and Prove It

1. Evaluate Your Current Testing Paradigm

Each quality or development team has its own frame of reference by which it identifies the need for testing overall, and the specific kinds of testing required. This frame of reference, or paradigm, often reflects the higher organizational approach to quality assurance and the role that testing is perceived to occupy in customer satisfaction and loyalty.

Cloud computing introduces a need for a new paradigm – one that incorporates the 24/7 testing-on-the-fly required for successful Software as a Service implementation and other Cloud-based products.

Some organizations may find their current approach to testing adapts relatively well for the “test everything all the time” model. However, most will find it necessary to reassess not just their approach toward testing, but the fundamental principles that underlie that approach – ones they’ve inherited from traditional testing models.

For a more detailed look at the historical development of testing methodologies, read our white paper “The 5% Solution”

In the world of software testing, all the players used to start in pretty much the same place: manual testing. As the industry matured, more elegant testing solutions – record and playback, automated or scripted testing, and action-based testing – emerged over time. Each new approach to testing successively increased our testing efficiencies, reporting efficiencies and ability to handle larger and larger test loads, lending leverage to the leading few. Conducting a Current-State Assessment can give you the necessary insight into what works and doesn’t work in your current testing paradigm when it comes to testing in the Cloud.

Conducting a Current-State Assessment

Most companies have tried a variety of testing approaches, some manual, some scripted, and maybe even some attempts at automation. It is important for a firm to conduct an honest self-assessment about their current testing footprint, capabilities, limitations, and opportunities for improvement.

Many times firms have beliefs about their current testing approach and are surprised to find that those beliefs, albeit hopeful, are not fully rooted in practice. The following questions will help you thoroughly evaluate your testing paradigm.

  • Testing methodology – Is it largely manual or some combination of methods? Are those methods consistent?
  • Testing environment – Where are the applications located? Where is the data located? What are the network and performance constructs?
  • People resources and skill sets – What is the competency makeup of your current test team? What are the skill sets required for moving forward?
  • Time-to-deliver constraints – What are the barriers to testing efficiency? What are the efforts that take more time than others and why?
  • Reporting and progress visibility – Do you have consistent visibility into your testing status? Are you surprised when testing efforts are late? Are statuses verbal, or based on actual metrics?
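As a sketch only – the categories below mirror the questions above, while the 0–5 scoring scheme is an invented illustration, not a LogiGear template – such an assessment can be captured in a simple structure so results are comparable across teams:

```python
# Minimal sketch of a current-state assessment. Categories mirror the
# question list above; the 0-5 scoring scheme is an invented example.

ASSESSMENT = {
    "Testing methodology": [
        "Is it largely manual or some combination of methods?",
        "Are those methods consistent?",
    ],
    "Testing environment": [
        "Where are the applications and data located?",
    ],
    "People resources and skill sets": [
        "What is the competency makeup of the current test team?",
    ],
    "Time-to-deliver constraints": [
        "What are the barriers to testing efficiency?",
    ],
    "Reporting and progress visibility": [
        "Do you have consistent visibility into testing status?",
    ],
}

def assessment_summary(scores):
    """Average the 0-5 scores recorded per category."""
    return {cat: sum(vals) / len(vals) for cat, vals in scores.items()}

# Example: made-up scores gathered from a team workshop.
scores = {"Testing methodology": [2, 3], "Reporting and progress visibility": [1]}
print(assessment_summary(scores))
# {'Testing methodology': 2.5, 'Reporting and progress visibility': 1.0}
```

The structure matters more than the numbers: it forces every category to be answered, not just the comfortable ones.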

2. Define Paradigm Requirements that Meet Standards for Cloud-based Technologies

Achieving your organization’s testing goals, in whole or in part, requires planning and mapping out your “testing in the Cloud” strategy and its requirements.

Clarifying Cloud Architecture

An initial understanding of the different types of clouds and what their high-level architectures offer will inform the kind of testing paradigm you will need to establish.

Sam Johnston’s integrated picture depicts the three cloud types:

  • Remote, Public or External Cloud – Public clouds are sometimes referred to as “regular” cloud computing. Completely separate from a user’s desktop or the corporate network they belong to, public clouds offer a pay-per-use service model because the user is leveraging outside compute resources for the particular service they are seeking. This approach offers economies of scale, but the shared infrastructure model can raise concerns about configuration restrictions, adequate security, and service levels (available uptime). These concerns might make you think twice about entrusting a public cloud with sensitive data that is subject to compliance or safe harbor regulations. Because public clouds are typically made available via the public internet, they may be free or inexpensive to use. A well-known example of a public cloud is Amazon EC2, which is available for use by the general public.
  • Internal or Private Cloud – Private cloud computing extends the same infrastructure concepts firms already have in their data centers. The motivation for private clouds appears to be to resolve the security and availability concerns inherent in the public cloud paradigm. As such, private clouds seemingly are not burdened by the network bandwidth, availability issues or potential security risks that may be associated with public clouds. However, this thinking belies the very intent of cloud computing, which is predicated on hardware-software extensibility, dramatic reduction in infrastructure costs, and elimination of the management concerns governing private networks.
  • Mixed or Hybrid Cloud – Many of the leading engineering thinkers in the industry suggest that the most workable cloud computing approach is the “hybrid” approach, which combines the best of the public and private cloud paradigms. Considering that some applications may be too complex or too sensitive to risk operating from a public cloud, it makes sense for a firm to protect those application and data assets within the construct of a private cloud, where it has total control. Less sensitive applications and data can be migrated to a public cloud, freeing compute resources that can be repurposed for the complex applications that need to stay home. The hybrid approach does sound like the best of both worlds. It makes sense from a technology and economic standpoint, and it allows for control, flexibility and growth. The trick to managing hybrid clouds comes when you consider spikes in demand. When demand spikes pummel the performance of applications located within the private cloud and you need additional computing power (as web-based news media experience when critical events occur), you will need a management policy that governs when to reach out to the public cloud for those additional resources.
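The demand-spike policy described for hybrid clouds can be sketched as a simple threshold rule. This is a toy model only: the capacity and threshold figures are invented, and a real policy would consult live monitoring data rather than a single demand number.

```python
# Toy sketch of a hybrid-cloud bursting policy: when demand exceeds a
# utilization threshold on the private cloud, the overflow is routed to
# the public cloud. Capacity and threshold values are invented examples.

PRIVATE_CAPACITY = 100   # requests/sec the private cloud handles comfortably
BURST_THRESHOLD = 0.80   # utilization at which bursting begins

def plan_capacity(demand):
    """Return (private_load, public_load) for a given demand level."""
    burst_point = PRIVATE_CAPACITY * BURST_THRESHOLD
    if demand <= burst_point:
        return demand, 0                     # private cloud absorbs everything
    return burst_point, demand - burst_point # overflow goes public

print(plan_capacity(50))   # (50, 0) - normal day, no bursting
print(plan_capacity(150))  # (80.0, 70.0) - spike day, public cloud takes overflow
```

The design point is that the decision is policy-driven and automatic, not a manual scramble when the spike arrives.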

Defining the Future-State

Envisioning the results of your testing transformation requires solid understanding of your organization’s business goals and objectives, the Cloud computing paradigms that may help your testing effort contribute to those goals, and development of a sound plan to move in the new direction.

When documenting your planned future state, address each of these categories:

  • Architectural – Consider the different Cloud paradigms as they pertain to your business model, goals and objectives, and application and data sensitivity.
  • Organization – Ensure organizational review and consensus of new Cloud testing direction as it pertains to business goals and objectives and priorities.
  • Financial – Define the benefits, know where the real costs lie, and define the budget.
  • Implementation – Adopt an incremental improvement approach, and choose the correct tools and partners.
  • Monitor and measure – Develop a consistent set of metrics for measuring and monitoring your new foray into testing in the Cloud.

Other organizational goals and objectives that are important to include:

  • Provide sufficient support for distributed test teams
  • Increase your return-on-testing-investment
  • Significantly decrease time-to-market
  • Optimize the reusability of tests and test automation
  • Improve test output and coverage
  • Enhance the motivation of your testing staff
  • Increase managerial control over quality and testing
  • Have better visibility into quality and testing effectiveness

While setting your testing goals, consider that in the LogiGear white paper “The 5% Solution,” Hans Buwalda, LogiGear’s CTO, posits that “with good test automation you can have more tests and execute them any time you need to, thus significantly improving the development cycles of your project.”

With the repeatable tests under automation, he says, testers are free to work on more sophisticated testing as well as testing focused on new features and functions, increasing the overall quality of the delivered result.

Buwalda establishes the 5% Goals:

  • No more than 5% of structured test cases should be tested manually
  • No more than 5% of testing efforts should be spent in the automation process

Conversely, the five percent goals suggest that 95% of your tests should be automated and 95% of your testing efforts should be spent on emerging functionality and higher complexity testing. Other organizational goals fall into some pretty predictable categories.
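The arithmetic behind the 5% goals is simple, but making it explicit helps when setting targets. A tiny sketch (the 1,000-test total is an invented example, not a figure from the white paper):

```python
# The 5% goals stated numerically: at most 5% of structured test cases
# run manually, so at least 95% are automated. Test counts are invented.

def five_percent_split(total_tests, manual_fraction=0.05):
    """Return (manual, automated) test counts under the 5% goal."""
    manual = round(total_tests * manual_fraction)
    return manual, total_tests - manual

manual, automated = five_percent_split(1000)
print(manual, automated)  # 50 950
```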

3. Apply Proven Methods and Technology

Implementing the changes necessary to adopt a new testing paradigm will bear out how well you’ve defined your requirements. Therefore, selecting well-matched testing tools and specific architectural approaches can have a dramatic impact on the results of your paradigm shift.

Architectural Approaches

If you are going to perform testing in the cloud it is important to understand the different architectural approaches that are available and the unique advantages that each approach offers. First, let’s start out with a quick list of the six basic cloud architectures or relationships.

The six paradigms, and the industry standards that have emerged to support each, are:

  • Cloud-accessible applications – Communications (HTTP, XMPP); Security (OAuth, OpenID, SSL/TLS); Syndication (Atom)
  • Cloud-based client services – Browsers (AJAX); Offline (HTML 5)
  • Scalable infrastructure and computational arrays – Virtualization (OVF)
  • Application development and deployment platforms – Solution stacks (LAMP)
  • Utility services – Data (XML, JSON); Web services (REST)
  • Data storage, duplication and backup

Cloud Accessible Application

This category of cloud computing technology focuses on applications that are entirely externally hosted at a cloud services application vendor. No installation of software is required on the user desktop. The application functionality is utilized through a web browser and the data is stored on the application host’s servers. is a good example of this type of application service.

Cloud-based Client Services

Think of a device that is completely useless without a connection to internet services (like the iPhone) and you will have an understanding of what cloud-based client services are all about. Netbooks are another emerging computing approach predicated on utilizing more cloud-based applications and services using a smaller, less powerful laptop computer.

Scalable Infrastructure and Computational Arrays

Imagine a sudden spike in demand for the products you sell. Good news, right? But let’s imagine that your computer room server just can’t handle the additional load. You lose money with every customer who abandons their purchase. With the right cloud infrastructure provider relationship, those worries might be a thing of the past. Vendors are increasingly building expandable infrastructure offerings to address this very problem. Good examples to keep in mind are Amazon and Sun Microsystems.

Application Development and Deployment Platforms

Picture this scenario: you have developed a ‘killer’ application that you know everyone is going to want, but you can’t deal with the cost and complexity of buying and managing the server and network equipment to run it. Web hosting services are a good example of this approach, allowing individuals and organizations to make their own web sites accessible via the web. Still other platforms allow for remote application development and deployment.

Utility Services

MapQuest is a popular example of a cloud-based service utility. It is completely inaccessible and unusable if you are not connected to the web. Other examples include PayPal and Yahoo search.
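Utility services like these are typically reached over plain HTTP with JSON payloads – the REST and JSON standards listed earlier. The sketch below shows the general shape of such a call using only the Python standard library; the host, path and response fields are hypothetical placeholders, not a real MapQuest or PayPal endpoint.

```python
# Sketch of calling a cloud utility service REST-style and parsing its
# JSON response. The endpoint and response fields are hypothetical.
import json
from urllib.request import urlopen

def geocode_url(address, host="https://geo.example.com"):
    """Build the request URL for a (hypothetical) geocoding utility."""
    return f"{host}/v1/geocode?q={address}"

def parse_geocode(body):
    """Extract coordinates from the service's JSON response body."""
    payload = json.loads(body)
    return payload["lat"], payload["lng"]

def geocode(address):
    """Full round trip: HTTP GET (REST) plus JSON parsing."""
    with urlopen(geocode_url(address)) as resp:
        return parse_geocode(resp.read())

# In a test, stub the HTTP layer and exercise the parser directly
# rather than depending on the live cloud endpoint being reachable:
print(parse_geocode('{"lat": 10.78, "lng": 106.7}'))  # (10.78, 106.7)
```

The split between transport and parsing is what makes such utility clients testable offline – a point that matters when the service itself lives in someone else’s cloud.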

Data Storage, Duplication and Backup

There are a number of variations of data services available via the cloud. Simple data storage allows for a pay-as-you-increase sort of service allowing ever increasing disk utilization as needed. Other services offer data duplication and mirroring, and data backup.


In this paper, we’ve presented the need to “define your paradigm” as a necessary prerequisite for pursuing a strategy for “testing-in-the-cloud”. We began with the common-sense directive of evaluating your current testing approach and setup. We then reviewed the different cloud types, their focus, and their typical use. Additionally, we offered the business objectives most firms would consider as foundation for making a move to a Cloud-based testing paradigm. Lastly, we presented the six common cloud computing paradigms, the industry standards that have emerged to support each, and introduced examples of services in each category.

Your testing-in-the-Cloud strategy should consider all of these approaches to determine what will work best for your organization. Your success will be determined by a disciplined approach and thorough implementation. Happy testing!

Introduction to Skilled Exploratory – Cem Kaner


Introduction to Skilled Exploratory

Cem Kaner – Director, Center for Software Testing Education & Research, FIT



Exploratory Testing – Jonathan Kohl – Part 3

Exploratory Testing

Jonathan Kohl - Co-founder of Kohl Concepts

In the third segment of this three-part series, Jonathan Kohl discusses how exploratory testing is becoming more ingrained in different kinds of projects. Find out how you can use test automation in an exploratory way. Don’t rely on unattended automated test scripts! Rather, find creative and cool ways to combine test automation with your exploratory testing skills to find hard-to-reproduce bugs.

VISTACON 2010 Keynote – Investment Modeling as an Exemplar of Exploratory Test Automation – #2




Testing in Agile Part 5: TOOLS

Michael Hackett, Senior Vice President, LogiGear Corporation

To begin this article, it would be a good idea to remember this key point:

Agile Manifesto Value #1
Individuals and interactions over processes and tools

Tools work at the service of people. People, particularly intelligent people, can never be slaves to tools. People talking to each other, working together and solving problems is much more important than following a process or conforming to a tool. This also means that someone who tells you a tool will solve your Agile problems knows nothing about Agile. A disaster for software development over the past decade has been companies investing gobs of money into “project management tools” or “test management tools,” replacing in some cases successful software development practices with the process the tool or tool vendor told that company was “best practice.” That made certain tool vendors rich and many software development teams unhappy.

If your team wants to be Agile, and work in line with the value outlined above, I have a couple of rules to remember when it comes to tools and Agile.

Rule #1 – Tools need to fit teams. Teams should not fit into tools!
Rule #2 – Test teams need to grow in their technical responsibility.

These two ideas sum up how I feel about most tools that are part of an Agile development project. For example, adding management tools adds overhead, reduces efficiency and limits the Agile idea of self-directing teams. However, communication tools can help in distributed development situations and when scaling into large organizations. Some tools work to provide services to teams and others do not. This article is mainly about tools and how they relate to test teams. We are going to talk a lot about the different toolsets that are most important to test teams. We won’t talk about unit test frameworks, since test teams very rarely manage them, but we will discuss test case managers in the Agile world. I’m also going to talk about the changes test teams need to undergo to successfully implement or use these new tools.

Often when beginning an Agile project, there is a need for some re-tooling in order to be successful. The same set of tools we used in traditional development may not be enough to support rapid communication, dynamic user story change and ranking, and increased code development.

The groups of tools of specific use to test teams we will focus on are:

• Application Lifecycle Management (ALM) tools – such as Rally, Atlassian’s JIRA Studio and IBM’s Rational Team Concert
• Test Case Management tools – such as HP’s Quality Center
• Continuous Integration tools – such as CruiseControl, Bamboo, and Hudson (including source control tools, such as Perforce and Visual SourceSafe (VSS) type tools)
• Automation tools – we will not talk about specific tools in this section.

Application Lifecycle Management Tools (ALM)

Sometimes when we talk about tools, we discuss them in relation to the Application Lifecycle. Naturally then, the tool vendors refer to their large suites as Application Lifecycle Management Tools, or ALM. This is the major category of tooling and it is often an enormous undertaking to get these tools implemented. However, there are some minor category tools in the ALM spectrum that we can pull out that relate to testers.

Application Lifecycle Management (ALM) tools have been making a splash in the development world for a few years now. Approximately 15 years ago, Rational (before Rational/IBM) had an entire suite of tools we would now call ALM. From RequisitePro to Rose to ClearCase through ClearQuest, the idea was to have one set of integrated tools to manage software development projects from inception to deployment. Now there are multitudes of tool sets in this space. The use of these tools has grown in the past couple of years as teams have become more distributed, faster and more dynamic, while at the same time coming under increased pressure from executives to be more measurable and manageable. When Agile as a development framework grew explosively, the anti-tool development framework got inundated by tools.

ALM tool suites generally manage user stories/requirements and call center/support tickets; they also help track planning poker, and store test cases and bugs associated with user stories. There are many tools that do this, but the big leap forward with current ALM tools is that they link to source control, unit testing frameworks, and GUI test automation tools. Lite ALM tools satisfy the need for speed required by the rapid nature of software development – at bottom, an ALM suite is a communication tool.

There are great reasons for using ALM tools, just as there are great reasons not to use them! Let’s start with great reasons to use them:

  1. If you’re working on products with many interfaces to other teams and products, ALM tools can be very beneficial in communicating what your team is doing and where you are in the project.
  2. ALM tools are essential if you’re working with distributed teams.
  3. If you are working on projects with large product backlogs, or with dynamic backlogs often changing value, a management tool can be very helpful.
  4. ALM tools are useful if you have teams offshore or separated by big time zone differences.
  5. You are scaling Agile – for highly integrated teams, enterprise communication tools may be essential.
  6. If, for whatever reason, the test team is still required to produce too many artifacts – mainly high-overhead artifacts like test cases, bug reports, build reports, automation result reports, and status reports (ugh!) – most ALM tools will have all the information you need to report readily available.

There are many good reasons to use ALM tools. But the tool can’t control you or the use and benefit of going Agile will be lost.

Most Agile lecturers, including Ken Schwaber, will tell you that using Agile with distributed teams reduces efficiency. This does not mean Agile cannot be successful with offshoring, but it is tougher, and the chances of ending up with smiles all around are lower. Nothing beats everyone being in the same place, communicating easily and openly every day, and working on the software together in a bullpen in the office. If you are in that situation – good for you! However, this is the subject of another magazine article we will bring you in the New Year.

Now the great reasons not to use ALM tools:

  1. It is very easy for some people to become tool-centric, as though the tool is the project driver rather than the Teams. That is completely waterfall, bad!
  2. Paper trails and the production of artifacts, “because we always did it this way,” are anti-Agile. Most times the artifacts become stale or useless having little relevance to what is actually built.
  3. Management overhead is anti-Agile. The only “project management,” if I can even use that phrase, is a 15-minute daily scrum – not a giant, administrative-nightmare, high-cost-overhead project management process! When the ALM tool becomes a burden and its overhead slows down the team, you will need to find ways to have the tools support you rather than slow you down.

A quick word about measurement: the only measurement artifact in line with by-the-book Agile is the burndown chart. Extreme Programming espouses velocity. If your team is measured by test case creation, test case execution rates, or user story coverage, there is nothing Agile about it. End of discussion.

If you want to investigate ALM tools further, I recommend you look at a few – check out enterprise social software maker Atlassian, maker of the famous JIRA, and Rally, a great, widely used product with dozens of partners for easy integration. These two are at opposite ends of the cost spectrum.

Test Case Management Tools

Most test organizations have migrated, over the past decade, to using an enterprise-wide test case management (TCM) tool for tracking test cases, bugs/issues and support tickets in the same repository. Many test teams have strongly mixed feelings about these tools, sometimes feeling chained to them as they inhibit spontaneous thinking and technical creativity.

To begin talking about test case tools in Agile, there are two important reminders:

  1. Test cases are not a prescribed or common artifact in Scrum.
  2. Most companies that are more Agile these days are following some lean manufacturing practices – such as ceasing to write giant requirement docs and engineering specs, and getting really lean on project documentation. Test teams need to get lean also. They can start by cutting back or cutting out test plans and test cases. Do you need a tool to manage test cases in Agile? Strict process says no!

If you are not using a TCM tool yet and are moving to Agile, now is not a good time to start. If you already use one – get leaner! Still, most ALM tools have large functionality built around test case management that can become an overhead problem for teams focusing on fast, lean, streamlined productivity.

Continuous Integration

In rapid development, getting and validating a new build can kill progress. Yet there is no need for this! In the most sophisticated and successful implementations of Agile development, test teams have taken over source control and the build process. We have already talked about continuous integration in this series (see Agile for Testers Part 3, Automated Build and Build Validation Practice / Continuous Integration, for more discussion), so here let’s focus on tools. Source control tools are straightforward, and many test teams already use source control systems regularly.

There’s no reason why a test team can’t take over making a build. There is also no reason a test team does not have someone already technically skilled enough to manage continuous integration. With straightforward continuous integration tools such as Bamboo, CruiseControl and Build Forge, test teams can do it.

My advice is:

  • Learn how these tools work and what they can do
  • Build an automated smoke test for build verification
  • Schedule builds for when you want delivery
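As an illustration of the second bullet, a build-verification smoke suite can be just a handful of assertions that the CI tool runs after every build. This sketch uses Python’s unittest; the application functions here are inline stand-ins for whatever your build actually produces.

```python
# Minimal build-verification ("smoke") suite, unittest style. The
# application functions are inline stand-ins; in practice you would
# import them from the build you just produced.
import unittest

def app_version():            # stand-in for the built application's API
    return "1.4.2"

def login(user, password):    # stand-in: a critical path worth smoke-testing
    return bool(user) and bool(password)

class SmokeTests(unittest.TestCase):
    def test_build_reports_a_version(self):
        self.assertRegex(app_version(), r"^\d+\.\d+\.\d+$")

    def test_login_happy_path(self):
        self.assertTrue(login("tester", "secret"))

if __name__ == "__main__":
    # A CI tool (Bamboo, CruiseControl, Hudson) runs this after each
    # build; a non-zero exit code marks the build as failed.
    unittest.main(exit=False)
```

The suite stays deliberately tiny: its job is to reject a broken build in seconds, not to replace the regression suite.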

Test teams can use a continuous integration tool to build a complete build validation process: scheduling new builds, automatically re-running the unit tests, re-running automated user story acceptance tests and whatever automated smoke tests are built! Note, this does not mean writing the unit tests – it means re-running, for example, the JUnit harness of tests. Test teams taking over the continuous integration process is, from my experience, what separates consistently successful teams from teams that enjoy only occasional success. Easy.

I suggest you visit Wikipedia’s comparison of continuous integration software to see a very large list of continuous integration tools for various platforms.

Test Automation

I’m going to resist going into detail about automation here. Automation in Agile will have its own large magazine articles and white papers in 2011. Remember, when discussing automation in Agile with team members, scrum masters or anyone knowledgeable in Agile, the part of testing that a test team needs to automate is not the unit testing. That automation comes from developers. Test teams need to focus on:

  • Automated smoke/ build validation tests
  • Large automated regression suites
  • Automated user story validation tests (typically a test team handles this, but there are very successful groups that have developers automate these tests at the unit, API or UI level)
  • New functionality testing can be difficult to automate during a sprint. You need a very dynamic and easy method for building test cases. This will be discussed at greater length in a future Agile automation methods paper.

In Agile development, the automation tool is not the issue or problem, it’s how you design, describe and define your tests that will make them Agile or not!

Here are the main points to get about test automation tools in Agile:

  • The need for massive automation is essential, clear and non-negotiable. How test teams can do that in such fast-paced development environments needs to be discussed fully in its own white paper.
  • It is clear that you need a more manageable, maintainable and extendible framework – for example, tools that easily support action-based testing.
  • Developers need to be involved in test automation! Designing for testability and designing for test automation will solve automation problems.
  • Training is key to building a successful automation framework.
  • Tools never solve automation problems – free or otherwise! Better automation design solves automation problems.
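Action-based testing, mentioned above, separates test design (rows of actions with arguments) from the automation code that implements each action. The sketch below shows only the dispatch idea; the action names and the tiny in-memory “application” are illustrative, not any particular tool’s actual format.

```python
# Minimal sketch of action-based test automation: a test is a list of
# (action, arguments) rows; each action keyword maps to one automation
# function. Action names and the in-memory "application" are invented.

ACTIONS = {}

def action(name):
    """Register an automation function under an action keyword."""
    def register(fn):
        ACTIONS[name] = fn
        return fn
    return register

state = {"logged_in": False, "cart": []}   # stand-in application state

@action("log in")
def log_in(user, password):
    state["logged_in"] = bool(user) and bool(password)

@action("add to cart")
def add_to_cart(item):
    state["cart"].append(item)

@action("check cart count")
def check_cart_count(expected):
    assert len(state["cart"]) == int(expected), "cart count mismatch"

def run_test(rows):
    """Interpret a test: each row is an action keyword plus arguments."""
    for name, *args in rows:
        ACTIONS[name](*args)

run_test([
    ("log in", "tester", "secret"),
    ("add to cart", "widget"),
    ("check cart count", "1"),
])
print("test passed")
```

The payoff is maintainability: when the UI changes, only the action implementations change, while the test rows – the part testers design – stay stable.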

Tools in Agile need to be used to fit people, not the other way around. Tools need to be easy to use (from your perspective, not the tool vendor’s!) and impose low time and management overhead. Employ lean manufacturing ideas wherever possible – specifically, cut artifacts and documentation that are not essential.

Article by Michael Hackett, Senior Vice President, LogiGear Corporation

Testing in Agile Part 4: SKILLS /TRAINING

Michael Hackett, Senior Vice President, LogiGear Corporation


Agile teams need training!
One of the missing links in the implementation of Agile development methods is the lack of training for teams. I noticed in our recent survey on Agile that only 47% of respondents answered “Yes” when asked whether they had been trained in the Agile development process, with over half responding “No.” That “No” number is way too high – it should be closer to 0%!

Have you been trained in Agile development? Percent answered
Yes 47.8%
No 52.2%

Check out last month’s Survey Summary Article

A situation that is too common these days: a team says “We are Agile” because their manager told them they are and they have daily meetings. They have cut down the size of their requirements docs. They call their development cycles sprints and their daily meeting a scrum. That’s about all the Agile there is in this development organization. No management backing off, no self-directed teams, same old tools, no more measurable unit testing or test-driven development (TDD), and no increased test automation – they now just call themselves Agile.

This is wrong. This is not Agile. This will lead to problems. What you want to prevent is a wave of the magic wand and… “abracadabra, you’re Agile!”

Training solves problems

Before starting any Agile implementation, the team must be trained in a variety of ideas, practices and methods: What are Agile, Scrum, and XP? Why are we doing this? How should we do it? What are the new “roles”? How do I do my job differently now?

To make a case for training, let’s put together a few foundation ideas of Agile development. Teams and individuals must be trusted to do their jobs. Without proper training, that trust will not happen. Simple. Also, in my experience with Agile teams, one common hard-learned lesson is the importance of changing and improving the way teams work and develop software through retrospectives and continuous improvement; that is, teams constantly reflect on what worked and what did not, and change practices, processes, and sometimes even teams. This is key to Agile success. Teams need training and coaching here. This is not your old-fashioned post-mortem.

There is a common phrase in the Agile world: fail early, fail often! This means fail fast: find out what does not work for you, remove inefficiencies right away, and get rid of the practices that do not work for your team. Try something, fail at it, fix it, make the process better, and change how you do things until people are comfortable with how things work.

To do this crucial piece of Agile/Scrum/XP most effectively, get training on what Agile is and why these practices are used. When your team has problems (all teams do), good training on the practices, on the processes to recognize and act on problems, and on recommended or alternative practices will help you fail fast and fix problems, getting more productive and happier faster!

Agile training for whom?

The main audience for this article is test teams. Most test teams do not dictate management training, but if you have the ability to make a suggestion, strongly suggest that your company’s management teams get training in understanding Agile and critical management topics such as: chickens and pigs, the importance of product owners, customers or customer representatives working with the Scrum team daily, empowering self-forming/self-directing teams, and team ownership of quality rather than the traditional single point of QC by a test team. Management understanding of these topics is a minimum starting point for supporting productive teams. Management needs Agile training too!

The whole development organization and Scrum team will need training and a new glossary. The practices and glossary quickly make for a laundry list of training topics: sprint, spike, scrum of scrums, done, release planning vs. sprint planning, what happens at a scrum meeting and what does not, cross-functional and self-directed team roles, the role of the scrum master, etc.

For non-test team members there can also be significant training: unit test training for developers, choosing a good unit test harness, TDD (test-driven development) vs. unit testing, and scrum master and product owner training.

Let’s pause here for a second: I want to make sure you are aware that I discuss many of the above skills, topics, and methods in detail in the previous three articles in this “Testing in Agile” series. I encourage you to review them, since they provide additional context for some of these suggested training areas.

Here are the links to those articles:

Testing in Agile Part 1 – Intro to Agile for Testers
Testing in Agile Part 2 – Agile is About People
Testing in Agile Part 3 – People and Practices

Skills for test teams

Let’s move on. Specific to test teams, in most organizations, after whole-team training on Agile development, the biggest burden for technical training and testing skills is learning to be much more dynamic and self-directed in test strategy.

The testing skills needed for Agile test teams are responsive, dynamic, think-on-your-feet methods rather than the follow-a-plan style of testing many test teams are accustomed to. The test team needs to move away from validation and into investigative, exploratory, aggressive, unstructured methods, such as “error guessing.” Test teams need new strategies that take more responsibility for reviewing design as well as hunting down bugs and issues!

Training is also needed on when and how you test. Teams need to split time and tasks between developers and testers for unit testing, user story validation, and UI testing, and devote focus to strategies around manual and automated testing. There are also documentation changes and ideas for leaner, more efficient practices that avoid documentation waste. Agile teams must take new approaches to writing test cases. These issues get solved with training.

In successful Agile organizations, traditional-style test plans are useless: they are not nearly dynamic and flexible enough for the speed of Agile projects. This does not mean people who test stop planning! It means, instead, that a different, leaner, more efficient set of test artifacts needs to be created. This, again, is a training issue.

And, as mentioned elsewhere, good test teams can and should take on more technical tasks, such as design review and build management/continuous integration (read more about continuous integration in my previous article on people and practices). This should be achievable for well-staffed test teams with good training and skill development.

Also important for most development teams is team building, communication, and “feel good about working together” training. Many organizations are moving from situations where the relationship between project management, development, and testing is strained. These frayed relationships generally stem from poor communication and are often exacerbated by individuals with different motives or unrealistic timelines. In Agile development frameworks, these bad dynamics kill any chance of success. Soft-skill training is yet another key to Agile success.

Here are some skills I have often found lacking in testers who are involved in an Agile development project:

  • Good, on-the-spot, rapid-fire estimation skills, for people who test and people who write code, product owners, and scrum masters
  • Depending on the existing skill set of the test team, some testers will need help and training to contribute successfully to software design critique
  • Many times in this series we have discussed the need for massive test automation. Most teams clearly need help finding easier, more effective, faster ways to do more automation. Test automation in Agile deserves its own white paper; I will, however, talk a bit about automation playback tools for Agile in my next article, the last one in the series.

Good Agile training should be full of learning games and learning activities; Agile is clearly “learn by doing.” Training for adults needs to be lighter, more fun, engaging, and immediately related to their jobs! Retention is significantly higher with learning activities than with lecture or reading alone. Most importantly, the activities and games are a perfect way to present a new way of thinking, teamwork dynamics, and principles that foster good team dynamics.

VISTACON 2010 Keynote – The Future of Testing by BJ Rollison



BJ Rollison – Test Architect at Microsoft


Testing in Agile Part 3: PRACTICES and PROCESS

Michael Hackett, Senior Vice President, LogiGear Corporation


Remember that Agile is not an SDLC; neither are Scrum and XP, for that matter. Instead, these are frameworks for projects, built from practices (XP, for example, has 12 core practices). Scrum and XP advocates will freely recommend that you pick a few practices to implement, keep what works and discard what doesn’t, optimize practices, and add more. But be prepared: picking and choosing practices might create new bottlenecks and will expose weaknesses. This is why we need continuous improvement!

That being said, there are some fundamental practices particular to test teams that should be implemented; without them, your chances of success with agile are slim.

Being aware of possible, even probable, pitfalls, or implementing the practices most important to traditional testers, does not guarantee agile success or testing success, though it should prevent a complete collapse of a product or team, or, potentially as harmful, a finger-pointing game.

This third part of our Testing in Agile series focuses on the impact on testers and test teams of projects implementing agile, XP, and Scrum. In this installment, we focus only on the practices that matter most to software testers.

And, in the final analysis, even if you implement all the important practices, and implement them well, you need to review how successful they are in your retrospective: keep what works, then modify, optimize, and change what doesn’t. Your scrum master or scrum coach helps with this.


In this section, we will focus mainly on the XP practices, which are development practices, rather than on the Scrum/project-management practices, since it is the development practices that affect test teams most.

To be blunt: if your developers are not unit testing and the team does not have an automated build process, the team’s success with agile will be limited at best.

Unit Testing / TDD and Automated User Story Acceptance Testing

Unit testing and test-driven development are so fundamental to agile that if your team is not unit testing, I cannot see how testers could keep up with the rapid release of code and fast integration on agile projects. The burden of black- and gray-box testing in the absence of unit testing in very fast, two-to-four-week sprints should frighten any sane person knowledgeable in software development. Your developers need to be unit testing most, if not all, of their code in rapid agile projects; there is no way around it.

Automated user-story acceptance testing is second in importance to unit testing. This test validates that the task or goal of the user story has been achieved, rather than validating code as a unit test does. Having that kind of test automated and available to re-run on successive builds and releases frees a test team to focus on more effective exploratory testing, error guessing, scenario, workflow, and error testing, and the varieties of data and alternative-path testing that unit tests and user-story validation tests rarely cover. This leads to finding better bugs earlier and releasing higher-quality software.

100% developer unit testing is one of the most significant advancements within the agile development process. We all know this does not guarantee quality, but it is a big step toward improving the quality of any software product.
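As a minimal sketch of what developer unit testing looks like in practice, here is a TDD-style example. The function under test, `apply_discount`, is purely hypothetical; the point is that small, isolated units like this get covered by developers before the code ever reaches the test group.

```python
import unittest

# Hypothetical function under test: the kind of small, isolated unit a
# developer would cover before code reaches the test team.
def apply_discount(price, percent):
    """Return price reduced by percent, rounded to cents."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    # In TDD, tests like these are written first and drive the implementation.
    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_zero_discount(self):
        self.assertEqual(apply_discount(80.0, 0), 80.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(50.0, 150)

# Run the suite programmatically so the example is self-contained.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ApplyDiscountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Tests like these are exactly what continuous integration re-runs on every build, which is what lets black-box testers trust each new release.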

From the Testing State-of-the-Practice Survey we conducted, we can get a rough idea of how close we are to that ideal.

Here is one question regarding unit testing from our survey:

What percentage of code is being unit tested by developers before it gets released to the test group (approximately)?

Percentage of code unit tested Percent answered
100% 13.60%
80% 27.30%
50% 31.50%
20% 9.10%
0% 4.50%
No idea 13.60%

* this was out of 100 respondents

A vast majority of agile teams are unit testing their code, though only a fraction are testing all of it. It’s important to know that most agile purists recommend 100% unit testing for good reason. If there are problems with releases, integration, missed bugs, and scheduling, look first to increase the percentage of code unit tested!

Automated Build Practice And Build Validation Practice / Continuous Integration

With the need for speed, rapid releases, and very compressed development cycles, an automated build process is a no-brainer. This is not rocket science and not specific to Agile/XP. Continuous integration tools have been around for years; there are many of them, and they are very straightforward to use. It is also common for test teams to take over the build process. Implementing an automated build process by itself is a step forward, but a team will realize more significant gains if it extends automated builds into continuous integration.
Continuous Integration includes the following:

  • Automated build process
  • Re-running of unit tests
  • Smoke test/build verification
  • Regression test suite
  • Build report

The ability to have unit tests continually re-run has significant advantages:

  • It can help find integration bugs faster
  • Qualifying builds faster will free up tester time
  • Testing on the latest and greatest build will save everyone time.

The positives of continuous integration far outweigh any resistance to implementing it. Test teams can take on more responsibility here: for starters, they can take more control of the testing process, deciding how many builds they take and when they take them.
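The continuous-integration steps listed above can be sketched as a simple pipeline. This is an illustrative toy, not a real CI tool: the step names and the placeholder lambdas stand in for a team’s actual build command, unit-test re-run, smoke test, and regression suite.

```python
# A minimal sketch of the CI loop: run each stage in order, stop at the
# first failure, and produce a build report. Step implementations here are
# hypothetical placeholders for real build and test commands.

def run_pipeline(steps):
    """Run named CI steps in order; stop at the first failure; return a report."""
    report = {}
    for name, step in steps:
        passed = step()
        report[name] = "PASS" if passed else "FAIL"
        if not passed:
            break  # a failed build or smoke test blocks the later stages
    return report

steps = [
    ("build", lambda: True),             # automated build process
    ("unit tests", lambda: True),        # re-running of unit tests
    ("smoke test", lambda: True),        # build verification
    ("regression suite", lambda: True),  # regression test suite
]

report = run_pipeline(steps)
for name, result in report.items():      # build report
    print(f"{name}: {result}")
```

The design choice worth noting is the early exit: a build that fails verification never reaches testers, which is precisely how CI saves the test team’s time.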

Hardening Sprint / Regression Sprint / Release Sprint

The most successful agile teams have implemented a sprint that is specifically designed and tailored to just test: quite simply, a “testing sprint.” Although this testing or integration sprint goes by many names, “regression sprint” and “hardening sprint” are the most common. Prior to releasing to the customer, someone usually has to do security, performance, accessibility, usability, scalability, perhaps localization, or many other types of tests that are most effectively done once the product is fully integrated. In most cases, this is when end-to-end, workflow, and user-scenario tests are done and when full regression suites are executed. It is a great bug-finding and confidence-building sprint. But! It is late in the development cycle. “Bugs” found here may go directly into the backlog for the next release or cause a feature to be pulled from a release.

Estimating And Planning Poker Includes Testers

Testers participating in the sizing and estimating of user stories is fundamental to agile success. A few times I have run across companies trying to scope, size, and rank a backlog without test team input. This is a gigantic mistake. Let me tell a quick story.

I was at a company doing a good job of implementing scrum, which they had piloted across a few teams. They still had some learning to do and were still implementing processes, but overall they were off to a good start!

The group that had the toughest time adapting to scrum, though, was the testers. In their previous SDLC, the test team had been viewed as adversarial, composed mainly of outsiders. Some of those attitudes persisted, to the point where the product owner (a former “marketing person”) excluded testers from user-story sizing and the estimation process.

During a coaching session, I was reviewing some user stories with them. We were doing preliminary sizing on a first pass, assigning only four estimates: small, medium, large, and extra large. (Note: this is a great way to start. Some people call it grouping by shirt size, to roughly estimate what can get done in a sprint.)

One story got sized as a large and another as a medium. From my experience with their product, I picked those two stories out and pointed out that the story ranked as a large was very straightforward to test: any tester knowledgeable about that area could do an effective test job pretty quickly. But the user story they had sized as a medium was a test nightmare! I quickly ran through a list of situations that had to be tested (cross-browser, data, errors, etc.): all of those things that testers do! The team believed me, and we pulled in the test engineer to review our results. The tester quickly said the same thing I had, and pointed to this as a sore point for testers. The stories would have been sized completely wrong for the sprint (as had been the problem for the previous test team) if the test team had continued to be excluded from sprint planning and the planning poker session.

This does not mean testing input would have reduced the development complexity (moving the first story from a large to a medium). But it would have moved the medium story to a large or even an extra large, and that would have impacted testing! The lesson learned is that the test team needed to be included in the planning game. Attitudes had to change, or costly estimating mistakes would keep being made.

This practice is also crucial to the XP values of trust and respect! Sadly, in many situations I have seen testers excluded from planning meetings, and it is invariably a trust problem. Deal with the trust and respect problem and get involved in the complete planning process!

Definition Of Done

We’re all used to milestone criteria, entrance criteria, and exit criteria in whatever SDLC we use. The related term on agile projects is the definition of done. The problem with milestone criteria is that they are routinely ignored when schedules get tight, which often leads to frustration, bad quality decisions, and ill feelings.

I want to show a simple description of agile that will help us in the discussion.

We are all familiar with the three points of the traditional project triangle that every project must juggle: features (number of features, quality, stability, etc.), cost (resources), and schedule. Before agile, projects committed to feature delivery, then would add people (cost) or push out dates (schedule), and sometimes released buggy features to meet whatever constraint project managers felt had to hold!
Agile is different.

In agile, the cost, namely the size/resources of the team, is fixed: we know that adding people to a project reduces efficiency (Agile Series Part 1). The schedule is also fixed: never extend the time of a sprint. What can change, and what is estimated, is the set of features. This leads us back to the definition of done.
What gets done, the user stories/features, gets released. What does not get done goes into the backlog for the next sprint. Since sprints are so short, this is not as dramatic as pulling a feature from a quarterly release, where the customer would have to wait another quarter for that functionality. If a feature gets pulled from a sprint and put into the backlog, it can be delivered just a few weeks later. How do we know what’s done? A guiding principle from the Agile Manifesto:

  • Working software is the primary measure of progress.

The definition of done varies among groups. There is no one definition, though it commonly includes at least the following:

  • the ability to demonstrate the functionality
  • complete (100%) unit testing
  • zero priority-1 bugs
  • complete documentation

Many teams also include a demonstration of the user story or feature before it can be called done.
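A definition of done is, in effect, a checklist that every story must pass. As a sketch, here is what an automated done-check might look like; the criterion names and the sample story status are illustrative, not taken from any real project.

```python
# A sketch of a "definition of done" check using the minimum criteria
# listed above. Criterion names and sample data are hypothetical.

DONE_CRITERIA = [
    "functionality demonstrated",
    "unit testing 100% complete",
    "zero priority-1 bugs",
    "documentation complete",
]

def is_done(story_status):
    """Return (done?, list of unmet criteria) for one user story."""
    missing = [c for c in DONE_CRITERIA if not story_status.get(c, False)]
    return len(missing) == 0, missing

done, missing = is_done({
    "functionality demonstrated": True,
    "unit testing 100% complete": True,
    "zero priority-1 bugs": False,   # one P1 bug still open
    "documentation complete": True,
})
print(done, missing)  # the story is not done: one criterion is unmet
```

The key point is that done is all-or-nothing: a story missing any agreed criterion goes back to the backlog, no matter how tight the schedule.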

In the past, for most teams, a set of milestone criteria seemed like a nice idea but was routinely ignored. On agile projects, though, with rapid releases, slipping on the done criteria could be catastrophic to the system. Done is a safety net for the entire scrum team, and indeed the entire organization.

In the past, many test teams acted as the entrance- and exit-criteria police, since late-stage milestone criteria are often based on testing, bugs, and code freeze: items testers see and are responsible for reporting on. Now it is the Scrum Master who enforces the “done” criteria, not the testers. It is much better to have the Scrum Master be the enforcer than to have testers act as naysayers and complainers! Every team needs a Scrum Master!

Small Releases

  • Deliver working software frequently.

In Scrum, it is recommended that sprints last two to four weeks at most. The practice of small iterative releases is the very core of agile development. I have seen companies rename their quarterly release a “sprint” and say: “we’re agile!” No.

A three-month sprint is not a sprint at all. Sprints are meant to be narrow in focus, to demonstrate functionality before moving on to the next sprint, and to work from a prioritized and realistic backlog. These, among many other reasons, should keep your iterations short. Some companies have begun agile implementations with four-week sprints and a plan to reduce the sprint length to three or two weeks over a year, after some successful releases and retrospectives with process improvement. Ken Schwaber and Jeff Sutherland, the originators of Scrum, recommend beginning with a two-week sprint.

Measure Burndown And Velocity

I have brought up the phrases sustainable pace and burndown chart a few times. Let’s briefly discuss these practices.
First, two guiding ideas:

  • We have to work at a sustainable pace. Crazy long hours and overtime lead quickly to job dissatisfaction and low-quality output (see the Peopleware entry on Wikipedia). The main way we gauge sustainable pace is by measuring burndown.
  • Burndown charts are one of the very few scrum artifacts.

The only scrum artifacts are the product backlog, the sprint backlog, and the burndown chart. Velocity is not by-the-book scrum, but we will talk about it as well.
To briefly describe a burndown chart, here are some points about them and their usage:

  1. They measure the work the team has remaining in a sprint and whether the team can complete its planned work
  2. They quickly alert you to production risks and failed-sprint risks
  3. They alert you to a potential need to re-prioritize tasks or move something to the backlog
  4. They can be used during a sprint retrospective to assess the estimating process and, in many cases, the need for some skill building around estimation

To have healthy teams and high-quality products, people need to work at a sustainable pace. Track this pace using burndown charts and velocity.

Burndown charts plot the total number of hours of work remaining in a sprint on the y-axis against the days of the sprint on the x-axis.

Velocity measures the estimated total of successfully delivered user stories or backlog-item functionality per sprint. Measured over many sprints, a stable, predictable number should emerge.

Velocity can be used by both “chickens and pigs” to set realistic expectations for future work. Velocity is measured in the same units as feature estimates, whether “story points,” “days,” “ideal days,” or “hours,” all of which are acceptable. In simple terms, velocity in an agile world is the amount of work you can do in each iteration, based on experience from previous iterations. The Aberdeen Group, an IT research firm that has covered and published material on agile development, claims that “When cost / benefit is measured in terms of the realization of management’s expectations within the constraints of the project timeline, it is the ability to control velocity that drives the return on investment.”
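The two measurements just described reduce to very simple arithmetic. Here is a sketch with invented numbers: a burndown series of hours remaining per day of a ten-day sprint, and velocity as the average story points delivered over the last few sprints.

```python
# Burndown: hours of work remaining, recorded each day of a sprint.
# Velocity: average story points delivered per sprint over past sprints.
# All numbers below are invented for illustration.

def remaining_hours(burndown):
    """Hours of work left at the latest recorded day of the sprint."""
    return burndown[-1]

def velocity(delivered_points_per_sprint):
    """Average story points delivered per sprint."""
    return sum(delivered_points_per_sprint) / len(delivered_points_per_sprint)

# Ten-day sprint: total remaining hours recorded each day (the y-axis
# values of a burndown chart, plotted against day number on the x-axis).
burndown = [120, 110, 104, 90, 82, 70, 55, 41, 22, 5]

# Story points completed in each of the last four sprints.
delivered = [21, 24, 23, 24]

print(remaining_hours(burndown))  # 5 hours left on the final day
print(velocity(delivered))        # 23.0 points per sprint, on average
```

A flat stretch in the burndown series, or a velocity that swings wildly between sprints, is exactly the early-warning signal the retrospective should dig into.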

Measuring the burndown rate and calculating velocity will give you reasonable amounts of work for a team to do at a pace that is conducive to happy teams, releasing higher quality software. To repeat from the introduction piece to this series on Agile, “Teams working at a reasonable pace will release higher quality software and have much higher retention rates — all leading to higher quality and greater customer satisfaction.”

When the team feels good about its abilities, it is encouraged to do better. The business starts to believe in the team, and this puts the team in the zone. Once in the zone, the team can generally sustain its steady-state velocity month after month without burning out; better yet, they enjoy doing it. Geoffrey Bourne, who writes for Dr. Dobb’s Journal, notes, “The essence of creating a happy and productive team is to treat every member equally, respectfully and professionally.” He believes Agile promotes this ethos, and I agree with him.

In conclusion, being agile means implementing practices that help product and development teams work most efficiently and be happy. There are many, many practices (again, XP has 12 core practices); here, we discussed only the key practices for testing success. If your team calls itself agile and has not implemented some of these practices, it is crucial to bring them up in sprint retrospectives and to talk about their benefits and the problems that skipping them has caused for your product and customers.

There are other practices that need to be in place for success, specifically for test teams to be successful and not be singled out in a blame game. These are covered in other installments, namely:

  • You have to have a scrum master.
  • Automate, automate, automate.
  • Use sprint retrospectives for process improvement.

Agile Testing Part 2 – New Roles for Traditional Testers in Agile

Video narrated by MICHAEL HACKETT – Certified ScrumMaster

This is Part Two of a Six Part Video on “New Roles for Traditional Testers in Agile Development”


Michael shares his thoughts on “A Primer – New Roles for Traditional Testers in Agile”