A Week of “Rapid Testing Intensive”

I signed up for the Rapid Testing Intensive event starting next week and am looking forward to a week of learning new things about software testing.

I’ll attend online, and if anyone would like to talk about it or team up with me, leave a note in the comments below or contact me via @S_2K on Twitter. If you’d like to follow the tweets about the event, the Twitter hashtag is #RTI1.

Edit: Since that hashtag is already in use, make that #RTI2012.

An Anti-Pattern When Using Continuous Integration & Automated Tests

The Setting

Once in a while I meet teams who use a Continuous Integration (CI) system for their builds and for executing the unit tests (yes, there are still teams that use CI to build the software but not to execute automated tests [or checks]). So this is the setting:

  • There is a CI system.
  • There may be unit tests being executed automatically, either after check-ins to the version control system or at least nightly.
  • In some cases there are failing tests (or checks).
  • Broken builds are displayed on the start page of the CI system in the (intra) net, but not ‘radiated’ so everyone can and will see them.

In this post I focus especially on the cases where the last two points are true.

The Observation

I think this is problematic at least, and likely dangerous: when failing tests and/or broken builds linger for some time, the team will very likely develop features based on broken software. Getting back to a clean, working system gets harder and harder as time goes on.

The Aftermath

A change in the behavior of an automated check can go unnoticed: since no one watches the first test fail, it’s unlikely that anyone will realize that other checks start failing, too. Furthermore, new features might come to depend on the brokenness somewhere else.

Now What?

Do not live with ‘broken windows‘, and apply a ‘Stop the Line‘ policy (or check out what Google says about Stop the Line): as soon as a problem becomes known (read: right after it’s discovered), solve it, and only then continue with other work. Like everything else, this should be done by the whole team. When everyone helps, things get solved quickly.

A leaving thought

In my opinion, introducing a Continuous Integration system and then ignoring its results is a major anti-pattern that can do more harm than good to the development process. Spending the effort to set it up and keep it running, and then ignoring the information it provides, is a form of waste. Is it possible that in such a case you’re better off not having a CI system at all? Honestly, I’m not sure.

Note, however, that I am strongly in favour of…

  • having a CI system,
  • actually using it to build the system and execute automated checks against it,
  • radiating the results to the team and
  • acting upon those results.

Ways & Places to Learn: Hackdays

There will be a hack day tomorrow, just before Euruko 2012, and I think events like this are a great opportunity to learn, no matter whether you’re a programmer, tester, or (visual) designer. I find it fascinating to see how other people approach tasks, be it programming, testing, or something unrelated to software development.

Here are a few ideas about what I could do…

  • Work on something that has bugged (sic) me for a while now… it could be a Sinatra or Rails app — or maybe something entirely different.
  • Offer a tester’s point of view to whoever asks for it. Just ask me (@S_2K on Twitter).
  • Work on some entirely new (to me) topic.

What would your choice be? Do you make plans about events like this? What are your expectations?

Sources of Learning

Over at Jeroen Mengerink’s blog (there’s the first of the sources to learn from), he talks about learning (and the lack thereof) among people in testing. Given this blog’s topic list, “On Testing, Patterns and Learning”, this topic is very interesting to me, too.

I recently attended a Scrum training, and one of the group exercises was to suggest ways to implement certain practices we were given. One of the topics was “focus on both technical excellence and good design, as it fosters agility” (my translation; the training was held in German). I started filling the board’s spot for possible immediate actions: conferences, user group meetings, books, blogs, podcasts… at which point a colleague stopped me so others could add to it. However, they didn’t.

So, if you’re looking for information about testing, here are some recommendations I have. For today, let’s cover the letter ‘B’ 😉

Jeroen also mentioned conferences, and especially a presentation by Alan Richardson (@EvilTester on Twitter): please go to Jeroen’s post and read it, then watch Alan’s presentation; both come highly recommended.

For now, I’d love to hear which book(s) and/or blog(s) you recommend that helped you learn about testing or improve your testing.

TDD and Manually Solving Tasks

Uncle Bob Martin recently wrote “Why is estimating so hard?”. Among other things, the article explains the difference between manually doing a task (in this case breaking a text into lines of a certain maximum length) and actually writing the program to do it.

The way (many) humans do this is by trial & error, as Uncle Bob says:

Why was it so hard to write down the procedure for doing something so basic and intuitive?

Answer: Because when we do it manually, we don’t follow a procedure. What we do instead is continuously evaluate the output and adjust it until it’s right.
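For contrast, here is what one written-down version of that procedure might look like — a minimal greedy word-wrap sketch in Ruby. This is my own illustration of the task the article discusses, not the code from Uncle Bob’s article:

```ruby
# Greedy word wrap: break text into lines of at most max_len characters,
# splitting only at spaces. Words longer than max_len are kept whole on
# their own line (a simplification this sketch does not handle).
def wrap(text, max_len)
  lines = []
  current = ""
  text.split.each do |word|
    if current.empty?
      current = word
    elsif current.length + 1 + word.length <= max_len
      current << " " << word
    else
      lines << current
      current = word
    end
  end
  lines << current unless current.empty?
  lines
end

wrap("four score and seven years ago", 10)
# => ["four score", "and seven", "years ago"]
```

Notice how much the explicit procedure has to decide up front (where to break, what to do with the last line) — exactly the decisions we make implicitly, by eye, when wrapping text manually.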

In an earlier article (on or before 7 Oct 2005), “The Three Laws of TDD”, Uncle Bob described three rules (or laws) of TDD:

Over the years I have come to describe Test Driven Development in terms of three simple rules. They are:

  1. You are not allowed to write any production code unless it is to make a failing unit test pass.
  2. You are not allowed to write any more of a unit test than is sufficient to fail; and compilation failures are failures.
  3. You are not allowed to write any more production code than is sufficient to pass the one failing unit test.
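To illustrate the rhythm these rules impose, here is one tiny red-green increment in Ruby. The method name and behavior are made up for this example; they are not from Uncle Bob’s article:

```ruby
# Rule 2: write no more of a check than is needed to fail.
# Run before `shout` exists, this raises NoMethodError - a failing check.
def check_shout
  raise "expected 'HI'" unless shout("hi") == "HI"
end

# Rules 1 and 3: write only the production code needed to make
# the failing check pass, and nothing more.
def shout(text)
  text.upcase
end

check_shout # the check now passes without raising
```

The point is the alternation: no production code without a failing check, and no more production code than that check demands.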

The similarity between these three rules and the non-procedural way we handle manual tasks surprises me: is it possible that the TDD way of writing software works so well because it models the way we approach solving problems manually? Thinking about it, my first reaction is this: sure, writing code (whether test-driven or not) is the manual work of solving some problem. So far, there’s not much news here.

But then, there seems to be more to it… To me, it is fascinating to think about the ‘meta level’ of what we’re looking at: writing code is the manual, sapient way of creating a procedure to solve some problem. It is neither automated, nor do we have a process or procedure for this kind of work.

To me TDD is an aid, a technique to help me write code in a more methodical and disciplined way. It is not at all a step to make software development more mechanical or predictable.

This brings me back to the starting point of this post: estimating. It’s hard to predict a complex system like software development, in which humans do creative work on hard problems, trying to find solutions to problems they have not solved before. This is hard (and, to me, fun) work.

Should it ever turn out that there’s an easy way to estimate it (reasonably) correctly, I will be very surprised.
