
QA: Software Test Automation

weather: cloudy
outside: 18.2°C
mood: ...
I've been trying to spend time dunking my head into test automation lately. It hasn't been easy.


Part of that is because of the high rate of distraction. And that really is just me. People seem to gravitate towards asking me those "quick questions" that are sometimes not so quick. Rather than just plain helping them, I have been working on helping others help themselves. Coaching/mentoring/guiding/tutoring/teaching was one of the goals from performance reviews in previous years. Judging by my last performance review, it's been working out really well. =)

But the other part of my struggle is that getting automation right is not easy, in and of itself. I've said before that you're not making a hard job easier by automating. You're making a hard job equally hard, but in a different way.

Test automation IS development. It's a programming job. It isn't solely a QA responsibility, and QA can't just do it in their spare time. The skill set you traditionally want for a QA position is not the same as what you'd want for a QA automation position. Not understanding this is probably the key reason most shops don't have good automated test suites.

A lot of developers don't understand this either. I remember the whole crew being hauled into the big boardroom to discuss the snowballing issue of regression getting more and more complex. All the developers are pulling a Hermione Granger and going "QA can automate it". And all the QA staff are staring daggers. =}

I see signs of Work getting it™ and moving towards a staff organizational structure that supports doing it right, so we're off to a really good start. I may or may not get into automation that deeply, I don't know yet. But just looking at what we have, before even creating anything, I could see where things needed to be done differently.

We needed source control on all our automation scripts and assets. Priority one.

We have to map out development procedures for what to check in and what NOT to check in. Anything that is generated by running the test scripts should NEVER be checked in. It drove me stark raving bananas to see temporary files, lock files, compiled intermediate binary files, and test run results checked in to the source repository. I cleaned out what I could. I tried to make sure that the Results folders weren't being used as a baseline for comparisons. I have a feeling there are other files that don't need to be checked in either, but I just wanted to get rid of the obvious ones. I need to do more homework and maybe toss out more unnecessary stuff.
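
To make that concrete, the policy I have in mind could be enforced by something like the little pre-commit check below. This is only a sketch, not our actual setup: it assumes a git-style repository, and the FORBIDDEN patterns are made-up examples. Every shop's list will be different.

    #!/usr/bin/env python
    # Sketch of a pre-commit hook that refuses generated files.
    # Assumes git; the FORBIDDEN patterns are invented examples.
    import subprocess
    import sys
    from fnmatch import fnmatch

    FORBIDDEN = ["*.tmp", "*.lock", "*.pyc", "Results/*", "*.log"]

    def staged_files():
        out = subprocess.check_output(
            ["git", "diff", "--cached", "--name-only"])
        return [f for f in out.decode().splitlines() if f]

    bad = [f for f in staged_files()
           if any(fnmatch(f, pat) for pat in FORBIDDEN)]
    if bad:
        print("Refusing commit; generated files are staged:")
        for f in bad:
            print("  " + f)
        sys.exit(1)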

From looking at what's checked in, we're still struggling with how to manage scripts that cover the same features of the same product but whose automation assets differ because of the underlying implementation. I've tried to cut down on the number of copies of the same thing all over the place as much as I can. But for now, there are times when there is no feasible or sane way to deal with something other than making a copy and remembering to make the same changes in two places.

I have a general sense of how to approach it, but I wanted to nail it down for myself before recommending it to everyone. I'm positive we can do something by branching and merging the source code. But "merging" will have to be done differently; it isn't going to mean the same thing as it does in the development source repository, because we're dealing mostly with binary files. Using the merge feature of the version control tool isn't going to work for us.

Alongside managing automation assets, the automated testing approach itself is an important direction to get right.

Automation testing tools will usually let you show them what to do (in a manner of speaking) and then do the same thing automatically. It's like recording a macro. You hit the Record button. You do whatever you want in whatever window you have open. You hit Stop. You hit Play and it does exactly what you just did.
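
If you've never seen one, a recorded script boils down to a literal replay of what the recorder saw. Here's a rough sketch of the equivalent in code, using Selenium WebDriver in Python as a stand-in for whatever tool your shop uses; the URL and element IDs are invented:

    # What record-and-playback produces, in essence: every click and
    # keystroke replayed verbatim, pinned to the exact widget IDs
    # that existed at recording time.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Firefox()
    driver.get("http://example.com/login")
    driver.find_element(By.ID, "username").send_keys("testuser")
    driver.find_element(By.ID, "password").send_keys("secret123")
    driver.find_element(By.ID, "loginButton").click()
    driver.quit()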

This is all well and fine, but as soon as something changes in the window or screen you recorded your steps in, the automation script also has to change. Otherwise it can't find the buttons or text fields, and the script run fails.

And when you're developing software, things change. A lot. And constantly.

It takes a lot more skill, training, and experience to create automation scripts robust enough to function correctly even with those changes constantly taking place. The person you hire to do this has to be a hybrid of developer and QA.
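
The standard trick for that robustness, and roughly what I have in mind here (the details below are a sketch, not our actual code), is to put a layer between the tests and the screen so a UI change gets fixed in exactly one place. The page-object pattern is the usual name for it:

    # Page-object sketch: tests talk to LoginPage, never to raw
    # widget IDs. When the UI changes, only the locators up top
    # change; every test that logs in keeps working untouched.
    from selenium.webdriver.common.by import By

    class LoginPage:
        USERNAME = (By.ID, "username")   # locators live in ONE place
        PASSWORD = (By.ID, "password")
        SUBMIT = (By.ID, "loginButton")

        def __init__(self, driver):
            self.driver = driver

        def log_in(self, user, password):
            self.driver.find_element(*self.USERNAME).send_keys(user)
            self.driver.find_element(*self.PASSWORD).send_keys(password)
            self.driver.find_element(*self.SUBMIT).click()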

These people are rarer and very difficult to interview for. I'm not certain whether you want to interview for a developer and hope to train them into a partial QA role, or interview for a QA and hope to train them to be somewhat of a developer.

In support of the automation effort though, I've begun to structure my test cases differently too.

Even without data-driven test automation in the picture, I had wanted to separate test procedures from test data. How many times have I written out test cases with exact Reproduction Steps only to have the user interface change and make a whole section of 500 test case repro steps wrong? It didn't take long to realize that test case descriptions need to be as general as possible while maintaining precision and correctness.

It also means that I have a test case structure that complements automation and is easier to interpret for creating automation scripts.

Here's what you do. And plug in these values for these data fields. If the "here's what you do" changes, then make those changes once. And it will still work for those same data values.
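
In automation terms, that maps straight onto data-driven testing. Here's a sketch of what I mean, using pytest's parametrize; the procedure and the data values are invented for illustration:

    import pytest

    # "Here's what you do": the procedure, written once.
    # Stand-in logic; imagine it driving the real UI instead.
    def attempt_login(username, password):
        valid = {"testuser": "secret123"}
        return valid.get(username) == password

    # "Plug in these values": the data, kept apart from the steps.
    CASES = [
        # (username, password, should_succeed)
        ("testuser", "secret123", True),
        ("testuser", "wrongpass", False),
        ("nobody", "secret123", False),
    ]

    @pytest.mark.parametrize("username,password,expected", CASES)
    def test_login(username, password, expected):
        assert attempt_login(username, password) is expected

If the login steps change, attempt_login changes once and all three data rows still run.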

So, yes. Not there yet. But on my way. Full steam ahead.

