January 23, 2014

Does Agile Really Lead to "Amateur, Untrained, Unmotivated" Testers?

Amateur, Untrained, Unmotivated - and Agile?


Recently, I was reading an article on James Bach's blog titled "Test Jumpers: One Vision of Agile Testing". It's an interesting article and, like most of James' writing, thought-provoking. The notion of a "Test Jumper" (he compares it to the elite firefighters known as Smoke Jumpers) is intriguing, and seems like a lot of fun.

There was one point that really caught my eye, though. In this article, James wrote:
"The value of a role like this arises because in a typical dedicated Agile situation, everyone is expected to help with testing, and yet having staff dedicated solely to testing may be unwarranted. In practice, that means everyone remains chronically an amateur tester, untrained and unmotivated."
When I asked him to write a bit more about this point, he replied:
"This has been a longstanding philosophical difference between the programmer-dominated culture of Agile and the culture of skilled testers. Attitudes vary from place to place, of course. But what I'm saying is that to do any technical activity well you must study and strive to improve. If you are focused on programming, you study that. If you are focused on testing, you study that. It is possible to study both, but programming is so interesting and all-consuming that it is VERY rare for a programmer to study testing to any great degree. 
Some of them seem to be offended when I say that. I think that's because they honestly don't realize how deep I am speaking when I speak of studying. Many of them seem to see little worth learning in the testing sphere."
At my company, we are about to embark on several projects that for the first time will be Agile. I'm worried about the quality bar and the tester role in these projects, so I'll be on the lookout for the problems that James warns about.

I will be assigning professional testers to the projects, but for at least one of them this will be only a part-time assignment. I'm hoping that this will provide sufficient testing, but the Agile team itself will decide how the testing tasks are divided up, and how much professional and amateur time is devoted to testing versus all the other activities that must take place.

For those of you with Agile Testing experience:

  • Do you agree with James' assessment? 
  • Are you doing anything to counter that trend?
  • Any other suggestions?

I hope to write a follow-up a few months down the road as these projects progress.


This article originally appeared in my blog: All Things Quality
My name is Joe Strazzere and I'm currently a Director of Quality Assurance.
I like to lead, to test, and occasionally to write about leading and testing.
Find me at http://AllThingsQuality.com/.

13 comments:

  1. Joe, you're going to laugh, but in my experience... it depends.

    If the team is using the input of the test specialist(s) to drive the developer-centric tests (unit tests, low-level module tests and the like), the test specialist role ends up as a kind of hybrid between end-user advocate, system generalist (because in my experience testers end up knowing the breadth of a product, where developers tend to have more in-depth knowledge of specific areas), and communicator of gotchas, in addition to the actual testing. What happens in the best of circumstances is that everyone tests, but they test in the areas they are most familiar with. The product owner looks at usability and whether the deliverables after each sprint/iteration meet his needs (and gaps there are pretty common, because it can take a while to narrow down to the real need that the product is attempting to meet). The programming specialists build unit tests, gated continuous integration builds, and other automated tools to test the basic functionality they're implementing; they will also perform some integration tests to cover the steel thread of the user story they're working on. The test specialist(s) will look at more sophisticated test scenarios around the user stories, regression, and potentially automated functional regression of completed sprints.

    James' assessment is pretty close to what happens when a siloed development team that's used to throwing code over the "wall" to the test team moves to agile. Programmers who have no idea what testing involves tend to underestimate it and actively resent being expected to test. Programmers who know what a good tester brings to a project tend to be more willing to work with the tester to make everyone's life easier - but there are an awful lot of places where the testers are in their own silo and actively discouraged from working with the programmers.

    I won't say I've got techniques or methods to change programmer attitudes: just like each tester is an individual, so is each programmer. I've got a bit of an advantage when it comes to breaking down the wall because I've been a programmer, so I can flip between tester-mindset and programmer-mindset. One thing I'm always careful to do is phrase things with a bit of tact - that never hurts! I'll often phrase bug reports in terms of the users who will be impacted and how they'll be impacted, and when I can, I won't write up a bug report at all but will work directly with the programmer (it's amazing what "Hey, have you got a minute? Could I show you something I'm not sure about?" can accomplish - often a programmer will see what weird thing you're making happen, say something like "Oh, I never thought of that, give me a minute.", and the problem's fixed within an hour - no need for bug reports).

    Actually, that's one thing that does tend to happen with agile - bug reports tend not to happen as often. I'll typically only file a bug report if something I find isn't going to be fixed right away and has to go on the project backlog - or if the team decides it's out of scope for the project, in which case it needs to be recorded somewhere else and prioritized. It won't go on the project backlog if a developer can fix it within a day - so the overall load of open bugs tends to drop.

  2. Thanks, Kate - excellent response! (And yes, as always, "it depends")

  3. I like Kate's explanation and have nothing really to add. The test jumper idea sounds like a bit of a hero role, but sometimes I can see it being useful to have an outside coach helping when the team gets stuck. I think an underlying assumption is that there is no skilled tester on the team, which is what the early agile books missed. Now, I see most teams with testers who work with the team, developing their testing skills to help the team from the beginning of the project right through to delivery. I don't think jumpers can do that.

    1. Thanks, Janet. It appears that your experience differs from what James sees as a typical dedicated Agile situation.

    2. Many people have different experiences, but I see a lot of teams and most of them have dedicated testers.

  4. Anonymous - May 15, 2014

    I don't agree with James' assessment that there's a 'programmer-dominated culture of Agile'. One of the 4 values of Agile is 'Working software over comprehensive documentation'. Working software is production-ready software, and production-ready software has to have been tested.

    I've seen some really screwy implementations of 'Agile-in-name-only' that I could describe as 'programmer-dominated', but I don't see anything inherent in Agile that promotes that.

    Other suggestions: you refer to these projects as Agile projects, but Agile is a very broad church - have you actually chosen a flavour of Agile to implement (XP, SCRUM, something else)? And 3-4 months later, how's it going?

    1. Thanks, Simon.

      I guess I don't agree that "production ready software has to have been tested". From my viewpoint, "should have been thoroughly tested by a professional tester" would be a better goal. But I know of many companies where that simply isn't happening, and some where there's not a lot in the way of testing for production-readiness going on. And then there are always the companies who like to "test in production".

      Our initial Agile project has struggled badly, for a variety of reasons. We have yet to achieve any rhythm (or any real velocity). If I had to characterize the "flavor", I guess I'd call it SCRUM, although it certainly hasn't yet followed many of the practices I've seen in other SCRUM projects. We're still working on it...

    2. Anonymous - June 05, 2014

      Hi Joe,

      I take your point about testing; every place has its own definition of production-ready. Whatever production-ready means, that's the state the service ought to be in at the end of a sprint.

      I like SCRUM. I used it in an ad-hoc manner about 10 years ago, and then more formally over the past 4 years. In my view SCRUM has very few rules, and to get Agility out of SCRUM you need to stick to all of them (especially when you start out). With SCRUM teams, I always start strict, even if that means zero/low velocity, because 90% SCRUM can be chaos, especially when you're all just finding your way.

    3. Thanks, Simon! We are seeing some chaotic behavior. And we're still struggling to define what "done" means. Clearly we are still finding our way with this particular project.

  5. Anonymous - June 11, 2014

    I've learnt to start by getting the team to define the 'definition of done' at the start of the project (oh, how I wish I'd done that at the start of my first SCRUM project!). Define what done means as a team, before you start estimating (I have a feeling this advice might be late!). Last time I did this, I used this...

    http://www.scrumalliance.org/system/resource_files/0000/0445/Gupta_Figure_1.png

    ...as a boilerplate, to get the team talking. We crossed off half that stuff, and added a few more items that were specific to our team/project. I think being specific really helps; one project had "If it's an accounting story, Martin has successfully run accounting examples through the service on the test server and the examples are checked into SVN", because if that wasn't done, we didn't want it escaping.

    Then we printed it off, stuck copies on the wall, and mocked people if they set stuff to done that didn't meet the standard.

    If you've not documented your 'definition of done', it's never too late. Maybe raise it at the next retrospective? If you're not having retrospectives, be brave and start having them. I always equate them to 'sharpening the saw': http://nathanhoad.net/sharpening-the-saw

    1. Thanks again, Simon! During our recent Lessons Learned session, we declared that we had no "definition of done" in the first wave of the project, and are determined to do better this time around. It's one of the key factors that I'll be monitoring, as the project now has an imposed deadline, and the manpower has expanded significantly. I'm not running the project, and have only partial influence on how it is run. We'll see what happens this time...

  6. As a tester entering an agile team for the first time (I have been working in a team that is semi-agile, which means not really agile at all), and as the first professional tester on the team, I find the information provided in the comments very useful. Thanks!

    Joe, maybe you could write a blog post about your experience with Agile so far and the lessons learned? It seems to me that most of the testing world is struggling with establishing a good working testing baseline in an agile environment.

    1. Thanks, Ory!

      We are still in the middle of a so-far struggling, semi-Agile project.

      But I do intend to post about that experience when the time is right. At the moment it would be too negative. I'm hoping for improvement soon.
