Monday night I presented about “Test Automation” at the Bonn Agile Meetup, a group of now more than 100 people interested in all kinds of topics related to, well, ‘Agile’.
First of all, thank you all for coming! With a bit more than 30 attendees, this was, I heard, the largest turnout the Meetup has had so far. Second, a big thank-you to the folks at Doo for providing a great location, support, beverages & food. I really enjoyed the hour of speaking & coding as well as the three great hours of discussion afterwards.
Without further ado, these are the (German only) slides: test-automatisierung_bonn-agile_april-2012.
As the main technical part, I presented my preferences for organising automated tests (or checks, if you prefer that term) in Cucumber:
- I like the scenarios to be as high-level and as close to the ‘business language’ as possible.
If a user with certain rights or a given role needs to log in, I’d rather refer to that role or right in the scenario file, not to an explicit username & password.
- I also prefer the step definitions to be as clean as possible. Sure, we likely need to refer to a concrete user name here, and be clear about which steps (on the application level) are done in which order. Assertions (of whatever kind you prefer) are also what I like to see in the step definitions: what is actually done and what the expectations are, that’s what belongs here.
Code I don’t like in step definitions: Statements that create a browser (headless or otherwise), HTML or XML parsing code, SQL statements of whatever kind…
Whenever that appears in the step definitions, I lose track of what I’m about to check.
- As the details of dealing with the output of the system under test have to go somewhere, I usually create classes and/or modules wrapping all the details. This code actually implements the interface used to communicate with the tested system.
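As a sketch of this layering (the names `KnownUsers` and `AppDriver`, and the step text, are hypothetical examples, not from the talk): the scenario mentions only a role, the step definition resolves it and delegates, and all technical detail lives in a wrapper.

```ruby
# Hypothetical role-to-credentials lookup: the scenario only names a role;
# the concrete username & password live here, out of the .feature file.
module KnownUsers
  CREDENTIALS = {
    'administrator' => { username: 'admin',    password: 'secret' },
    'reporter'      => { username: 'reporter', password: 'secret2' }
  }.freeze

  def self.credentials_for(role)
    CREDENTIALS.fetch(role) { raise ArgumentError, "unknown role: #{role}" }
  end
end

# Hypothetical wrapper around the system under test. In a real suite this is
# where the browser driver, HTML parsing, SQL and so on would live -- and
# nowhere else. Here it only records the login for illustration.
class AppDriver
  attr_reader :logged_in_as

  def login(username:, password:)
    # ... drive the real application here ...
    @logged_in_as = username
  end
end

# The step definition stays thin: resolve the role, delegate, assert.
# In a Cucumber suite it might read:
#
#   Given(/^a user with the role "([^"]*)" is logged in$/) do |role|
#     @app = AppDriver.new
#     @app.login(**KnownUsers.credentials_for(role))
#   end
```

The point of the split: the feature file stays in business language, the step definition reads like a table of contents, and only the wrapper changes when the technical interface to the system changes.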
I didn’t come up with all of this myself, but I find it a good way to separate the technical from the business level. Most of the ideas in fact come from @unclebobmartin’s Clean Coders videos (and his books on the same topics).
While I was recently working on a test automation task, I had the feeling that automating the tests was more important than the actual automated tests (or checks) I created. This seemed very similar to the saying that the planning in agile projects (and likely in other projects as well) is more important, or more valuable, than the plan you get out of it. So I tweeted about it, and within seconds Lisa Crispin agreed.
It seems to me there is a pattern—actually a question—at work: Is doing something more important than the result you get out of that activity? There’s a book by Mary and Tom Poppendieck ‘Leading Lean Software Development’, subtitled ‘Results Are Not the Point’, that hints in the same direction.
Is actually doing something really more valuable or important than the result of this activity? I’m not sure about this.
The result of having done something seems obvious: We have the result, something that didn’t exist before and has some value to someone (hopefully). But what about the value of the activity itself? Two things come to my mind immediately:
- While actively working on a task, no matter whether it’s performing or rehearsing, we exercise it and get more used to doing it. We’re very likely to get better at it in the future.
- It’s also likely that we learn something about the task itself, some technology we’re using or social aspects of work life.
- A secondary result is a set of scripts that help me create a certain state in the system, a base camp from which I can start explorations into new areas of the system.
What do you think about this? What makes the doing itself valuable to you, apart from the outcome?
Let’s go back to creating automated tests: While I was looking at newly implemented parts of a software system, I noticed that as soon as an automated test executes and yields a reproducible result, it’s a regression test (or check, as some would say). Now, regression tests are not particularly likely to disclose new defects, so what’s the value of automating anyway? While I can’t offer a generally valid answer, the automation effort itself helped me uncover a number of bugs.
PS: An additional example: Ben Simo (@qualityfrog) tweeted this:
Read in 100+ yo teach book: teacher should script lesson, then throw away script at class start. Scripting useful for prep, not instruction.
Another great example of the pattern.
One pattern I often see is that people (me included) apply more of some X if the usual amount no longer yields the desired result.
Two examples from very unrelated fields:
- If a dog or other pet doesn’t react in the desired way to a command (‘sit’, ‘down’…), many people start shouting the command at the dog again and again, which may or may not work at first, but will eventually fail. Simply no longer shouting at the poor dog won’t solve the problem either.
- In software development I’ve seen rules and policies added to a given process again and again. In these cases the goal was to make the teams produce better products faster (and probably more predictably). That doesn’t lead to the desired results either. But merely reducing the number of rules is not enough to get teams out of trouble.
So when X doesn’t work, or doesn’t work anymore, applying more of it won’t help. Alas, merely reducing it (less or no shouting at the pet, removing process load from projects) won’t solve the issue either.
It seems much more promising to try an entirely different approach. After realising the issues with waterfall projects, we (as software developers) found agile methods to be helpful. In the case of the dog, the answer is to find a different way to help it learn the command; in some cases it may even be necessary to teach the dog its name first.
Have you seen anything like this? Let me know.
There will be more posts about, well, testing, patterns (& anti-patterns) and learning.