Monthly Archives: September 2009

Four Software Testing Challenges We Must Repeatedly Overcome

I recently came back from the Software Testing & Evaluation Summit in Washington, DC, hosted by the National Defense Industrial Association. The objective of the workshop was to recommend policy and guidance changes to the Defense enterprise, focusing on improving the practice and productivity of software testing and evaluation (T&E) in Defense acquisition.

So, what do I have to do with the Defense enterprise? One of the features of the workshop is that it invites software testing experts from both commercial industry and academia to share state-of-the-practice strategies, processes and software testing methods. I was honored when the Director of Defense Research and Engineering asked me to deliver a keynote as an industry-recognized subject matter expert on software testing, test automation and evaluation.

What I am about to share, however, is not what I said at the conference; it’s about my takeaways from the summit. Although not surprised, I was intrigued that, while we have made progress, there is still much work to do to overcome the four software testing challenges we have been fighting for the last three decades:

  1. The software testing profession is a choice, not a chance.
  2. Testing is not free! Proper testing requires proper financing.
  3. Test automation is not about the technology or tool; it is about the methodology and the effective application of technology.
  4. Testing transparency is essential to educate executives and win their support, and hence funding.

Test Everything all the Time…

Back from more training, I was up at a client in Bellevue and really enjoyed teaching a performance class to a world-class testing organization. I found the students very receptive to many of the concepts and ideas the class offers.

One of the items we talked about was the nature of the service-based application delivery model. As we have all seen over the last five years, the cloud is really changing the abstraction between software development and deployment. I think we will see most of our applications served through this new cloud model, but our development cycles will be 10x faster and more chaotic. Think about it: there are 75,000 apps in the iPhone App Store; if I find a news application that is slow and buggy, I will simply choose another one in a matter of seconds. I don’t put up with an application that does not deliver an acceptable customer experience.

So what does this mean? Well, I feel that if we aren’t able to “test everything all the time,” we put our QA groups at a disadvantage; that is, we become the bottleneck. Long, protracted development and QA cycles are going the way of the dinosaur. If we can’t make a change or feature enhancement and turn that feature or fix around almost instantaneously, we will be left behind and, I believe, lose credibility within our engineering teams. And as I argued in my last blog post, you can only do this with software test automation (at both the API/GUI and non-GUI levels), as in the sketch below.
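
To make that concrete, here is a rough sketch of what automation at both levels can look like. The news-feed URL, endpoint and page selectors are made up for illustration, and requests plus Selenium under pytest are just one common toolset among many, not a prescription:

import requests                                  # non-GUI / API-level check
from selenium import webdriver                   # GUI-level check
from selenium.webdriver.common.by import By

def test_headlines_api_is_fast_and_correct():
    # API level: the service answers correctly and quickly enough.
    resp = requests.get("https://example.com/api/headlines", timeout=2)
    assert resp.status_code == 200
    assert resp.elapsed.total_seconds() < 1.0    # crude responsiveness bar

def test_headlines_render_in_the_browser():
    # GUI level: the same feature works end to end in a real browser.
    driver = webdriver.Firefox()
    try:
        driver.get("https://example.com/news")
        assert driver.find_elements(By.CSS_SELECTOR, ".headline")
    finally:
        driver.quit()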

It’s not just me saying this; look at your customers… The customers of our software have so much choice among rich services they can tap into, and with choice comes competition, and with competition come winners and losers.

It’s the quick and the dead in this deployment model. I, for one, am preparing our engineering teams to follow a continuous, iterative automation process, so that when the time comes we can tell our customers: sure, we test everything all the time, no problem.

Continuous Iteration in Automation

I’ve been teaching a lot lately: I was in India for a week, and I’m off to Seattle in two weeks to teach on performance topics. I thoroughly enjoy teaching; it allows me to stay sharp with current trends and provides a nice break from the “implementation focus” that I generally have day to day.

One topic, however, that has been gnawing at me lately is the notion of “continuous iteration” in automation. With virtualization, my feeling is that you should be running the automation you create all the time. I’ve coined a new term lately: “Test Everything all the Time.”

I find that many automation teams shelve their automation until runs are needed, and I find this to be a real bone of contention when it comes to automation credibility. The automation should always be in a state of readiness, meaning I can run all my tests at any time if I want to. We used to be plagued by machine constraints and “unstable” platforms, but those issues are becoming increasingly rare; yet we still seem to be holding on to old automation ideas.
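
One way to keep automation in that state of readiness is simply to never stop running it. Here is a bare-bones sketch of the idea; the suite location, the pytest runner, the hourly schedule and the log file are placeholders, and any CI scheduler does the same job:

import datetime
import subprocess
import time

def run_full_suite():
    # Kick off the entire regression suite; pytest is just one possible runner.
    result = subprocess.run(["pytest", "tests/", "-q"])
    return result.returncode == 0

if __name__ == "__main__":
    while True:
        started = datetime.datetime.now()
        passed = run_full_suite()
        # Append every run to a history file so readiness is never a guess.
        with open("run_history.log", "a") as log:
            log.write(f"{started:%Y-%m-%d %H:%M} {'PASS' if passed else 'FAIL'}\n")
        time.sleep(60 * 60)   # start a fresh run every hour, around the clock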

I’ve set up our labs here with machines that are constantly running our clients’ automation, while the testers keep adding more tests to that bank of tests. Automation should free us up to do this, but I still see too many people babysitting automated runs, which to me is like watching paint dry.

We keep a “readiness” dashboard covering all of our tests at all times, so if I choose to run a set of tests immediately, we can dispatch them to a bank of machines and run them. It makes my job as a quality director easy: I never have to ask whether the “automation” is ready; I have that information at my fingertips for all types of tests. I’ve attached a simple dashboard report of the kind I’ve used since I started working on automation projects in the early ’90s.
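
The dispatching itself need not be complicated: deal the test bank out across whatever machines are free and roll the results up for the dashboard. A rough sketch follows; the machine names, the ssh-plus-pytest command and the seven-machine lab are illustrative assumptions, not a description of any particular client setup:

import subprocess
from concurrent.futures import ThreadPoolExecutor

MACHINES = [f"lab-{i:02d}" for i in range(1, 8)]          # e.g. a 7-machine bank

def run_slice(machine, tests):
    # Run one slice of the suite on a remote lab machine.
    cmd = ["ssh", machine, "pytest", "-q", *tests]
    return machine, subprocess.run(cmd).returncode == 0

def dispatch(all_tests):
    # Deal the tests round-robin across the bank, then summarize for the dashboard.
    slices = {m: all_tests[i::len(MACHINES)] for i, m in enumerate(MACHINES)}
    with ThreadPoolExecutor(max_workers=len(MACHINES)) as pool:
        results = list(pool.map(lambda m: run_slice(m, slices[m]), MACHINES))
    green = sum(1 for _, ok in results if ok)
    print(f"Machines green: {green}/{len(MACHINES)}")
    return results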

Remember, your automation needs to be credible and usable, not one or the other.

7-Sep-09

Test Cases Created and in Development           25
Test Case Create Goal for Week                  40
Goal Delta                                     -15
Test Cases to be Certified                     150
Test Cases Modified* (timing only)               5

TOTAL TEST CASES IN PRODUCTION                1253
  Passive (Functional) Test Cases              631
  Negative Test Cases                          320
  Boundary Test Cases                           40
  Work Flow Test Cases                          78
  User Story Based Test Cases                   24
  Miscellaneous Test Cases                     160

Test Cases Ready for Automation               1065
Test Cases Ready for Automation %              85%
Test Cases NOT Ready for Automation            188
Test Cases NOT Ready for Automation %          15%
  Test Case Error/Wrong                          5
  Application Error                             23
  Missing Application Functionality             59
  Interface Changes (GUI)                        2
  Timing Problems                               70
  Technical Limitations                          6
  Requests to Change Test (change control)      23
  Sum Check                                    188

TOTAL MACHINE TIME REQUIRED TO RUN        23:45:00
TOTAL MACHINES AVAILABLE TO RUN TESTS            7
TOTAL SINGLE MACHINE RUN TIME              3:23:34
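
As a sanity check, the bottom rows of the report appear to hang together arithmetically: the 23:45:00 of total machine time spread across the 7 available machines works out to roughly the 3:23:34 run time shown, and the 85% ready figure is simply 1,065 of the 1,253 test cases in production. The quick arithmetic, using only the numbers above:

# Per-machine run time when the suite is spread across the machine bank
total_machine_time_s = 23 * 3600 + 45 * 60       # 23:45:00 expressed in seconds
machines = 7
per_machine_s = total_machine_time_s / machines
hours, rem = divmod(per_machine_s, 3600)
minutes, seconds = divmod(rem, 60)
print(f"{int(hours)}:{int(minutes):02d}:{int(seconds):02d}")   # -> 3:23:34

# Share of the production test cases that are ready for automation
ready, total = 1065, 1253
print(f"{100 * ready / total:.0f}%")                           # -> 85%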