I need a tester.
Why do I need a tester? I am a tester!
Still, I should have one. I am wearing a developer’s hat, today, and there are things a developer won’t do.
Here’s my situation…
Last week, I wrote a lot of code on the Tricentis project I’m working on. It took me 37 hours to write and debug this code. Well, the amount means nothing, I suppose, except it feels like a lot. It’s “back end” work, a parser that scans test notes and test code and indexes them in a particular way that lets me construct a narrative of the test process.
I’m writing code because I have a vision that is hard to communicate and not fully formed. Along with my Tricentis colleagues, I want to create a very particular tool to help testers. We don’t yet know for sure if the tool is feasible, so this is R&d with an especially big R and small d. The process is something like solving a Rubik’s Cube. (Disclaimer: I have not yet successfully solved a Rubik’s Cube. If you have, it’s because you googled how to do it, didn’t you, you cheater!)
As a subject matter expert, my job is to design, not implement. But although I could describe the tool to our official developer, I’m afraid I’d drive him crazy with my constant modifications and backtracking. It’s easier to code the prototype myself and work out the design problems as they come. The result of this process will be a precise, implementable specification for a product that I will already have experienced and shown to my colleagues. We can build the full-scale thing later, after we answer the most crucial question: do we even want a product like this?
As I write the code, I must also test a bit. Good thing I enjoy testing. Except, I don’t have time for that! I have features to add. A prototype might not need production quality, yet it must be robust enough to allow us to experiment with realistic test data. Notice the zig-zagging pattern of my words? Claims followed immediately by counterclaims. This is me pondering a tradeoff.
All developers are caught up in this tradeoff. We write instructions and quickly check them to see if we built what we think we built. There are various methods and tools for this. But all of them, I think, are intentionally shallow. By that I mean they reliably find only certain obvious bugs, not the subtle ones. Shallow testing is popular because it is non-disruptive. It’s polite. It’s not creepy. It doesn’t overstay its welcome or scare the dog.
If I want deep testing, I must carefully set aside my dainty white beaver developer hat, trimmed with velvet leaves and red berries. I must instead don my tester Fibrosport mask, gas up my tester chainsaw, and go on a spree. Of testing. That is a very different sort of process, and it can be rather messy and time-consuming.
That’s why developers won’t do it. Deep testing stops development. It’s not necessarily a matter of skill or interest. It is nearly always a massive distraction.
What specifically is distracting about deep testing?
I’ve been making notes as I go. I’m watching myself do some kinds of testing while avoiding other kinds. This is what I’ve been avoiding so far:
- Exploring amorphous limits. Back in “the day,” I wrote Assembly language and C code. I personally allocated and managed each byte of memory I used. These days, I don’t know. Elves do it? I command that an array be created and it is done. I call library functions like part_the_sea('red') and good stuff happens. I want things and I get them. But when you accept wishes from strange genies, there is often a catch. How much text will that field have to handle, and what happens as it gets very large? How many files will we have to process? How big will they be? What character encodings will be used in them? Discovering what happens as internal limits are pressed would take me a lot of time. I’d have to produce a huge amount of fake-but-kinda-real test data (a sketch of what I mean follows this list). Yes, as a developer I can do it…but I won’t.
- Exploring amorphous dependencies. Our products frequently interoperate with other products or rely on packages that we might have limited knowledge of, or control over. For example, the tool I’ve been coding lately interoperates with the popular Git version control tool. But it turns out the functions I’m using have undocumented conditional behaviors. Several times I’ve had to rewrite my code when the format of Git’s output changed because, say, a file was deleted or a filename was too long (the Git sketch after this list shows the sort of thing that bites). In full tester kit, I would systematically analyze and model Git. I would pore over its documentation. I would design a wide variety of file name and file change scenarios to drive away the risk of these unhappy surprises. Nothing stops a developer from doing this personally, except it’s priority three in a priority-one world.
- Exploring special/unusual/contingent conditions. As I write the code, I’m in a position to notice a great many strange conditions that could occur. Things like “maybe the file write operation will fail” or “maybe this text field will be empty” or “maybe there already is a file with that name.” I explore some of them right away, while others I put in a TODO list (the last sketch after this list gives a taste of them). Quite a few just slip from my mind. Why? Because shhh! I’m trying to concentrate.
- Exploring the subject matter world. I would like four more months, full-time, to do competitive analysis and consider different design concepts without the burden of active product development. I want to understand different kinds of users and how they might use the products we’re designing. Hearing this, my boss demurred with an unambiguous one-word response over Slack. I assume he was smiling as he wrote it (because I practice reckless positive thinking about such things). But the truth is that developers can do a lot without knowing a lot, and our bosses know at least that much. When I am unburdened by the need to produce code, when I’m fully invested in testing, I have more opportunity to study the terrors of the unknown. When I’m in development mode I’d rather just shout “analysis paralysis!” as if it were a Patronus charm.
- Exploring the “ilities.” The ilities are those areas of quality that go beyond mere capability (i.e., “can it work?”) and address the question of “will it work?”, in every way, in every situation, and into the stormy future. This includes usability, multi-usability, localizability, accessibility, compatibility, performance, security, and scalability, to name the most common ones. Each of these ilities is difficult to test to a high industrial standard. Most of them require specialist knowledge, and some are the subject of industry standards. Testers can make careers out of specializing in any one of them.
- Securing the past, coordinating with the present, and preparing for the future. The work of a professional tester is not just performing tests. It is also our responsibility to keep reasonable records, to become aware of how our work affects (and is affected by) the work of others, and to be ready for what is coming. That happens to be a lot of work. Especially the last part, which requires that I develop systems and put tools and data in place to test the next thing before the next thing is actually here. In developer mode, my response to that is a thousand-yard stare and quiet choking sounds until you end the Zoom call.
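Since I keep gesturing at these problems, let me make a few of them concrete. First, the fake-but-kinda-real test data. Here is a minimal sketch in Python (my prototype’s actual language and data don’t matter here; every value below is invented for illustration):

```python
import random
import string

def awkward_texts():
    """Yield fake-but-kinda-real inputs that press on amorphous limits:
    empty, tiny, huge, and oddly encoded."""
    yield ""                                    # nothing at all
    yield "x"                                   # the bare minimum
    yield "note " * 2_000_000                   # roughly 10 MB of text
    yield "naïve café 测试 деньги 🚀"            # mixed scripts and emoji
    yield "\ufeffBOM-prefixed line"             # a byte-order mark sneaks in
    yield "".join(random.choices(string.printable, k=10_000))  # noisy junk
```

Generating a few dozen inputs like these takes minutes to script, but investigating what actually happens at each limit takes hours. That is the time I keep declining to spend.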
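Second, the Git dependency. I won’t pretend this is code from my prototype; it’s a stand-in. But git status --porcelain makes a handy example of the conditional output formats I mean. The helper name changed_files is invented for illustration, and the unquoting is deliberately simplified:

```python
import subprocess

def changed_files(repo_dir):
    """Parse `git status --porcelain` output, which has conditional
    formats: renames and copies use "XY old -> new", and paths with
    unusual characters come back wrapped in double quotes. A naive
    line.split() silently breaks on both."""
    out = subprocess.run(
        ["git", "status", "--porcelain"],
        cwd=repo_dir, capture_output=True, text=True, check=True,
    ).stdout
    entries = []
    for line in out.splitlines():
        if len(line) < 4:
            continue                            # skip anything malformed, just in case
        status, path = line[:2], line[3:]
        if status[0] in "RC" and " -> " in path:
            path = path.split(" -> ", 1)[1]     # keep the new name of a rename/copy
        if path.startswith('"') and path.endswith('"'):
            path = path[1:-1]                   # simplified unquoting; real C-style escapes are hairier
        entries.append((status, path))
    return entries
```

Each branch in that loop exists because some real run surprises somebody. A tester in full kit would have mapped that territory before the surprises arrived.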
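Third, the contingent conditions. Again a made-up example, not code from my prototype: save_report guards against three contingencies that are easy to notice while coding and easy to forget to test.

```python
def save_report(path: str, text: str) -> None:
    """Guard against an empty field, a name collision, and a failed write."""
    if not text.strip():
        raise ValueError("refusing to write an empty report")
    # mode "x" fails if the file already exists, instead of silently clobbering it
    with open(path, "x", encoding="utf-8") as f:
        f.write(text)  # can still raise OSError, e.g. on a full disk
```

Writing such checks is the easy part. Designing tests that actually exercise them (an empty field, a name collision, a full disk) is the deep work that stops development.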
Then there is the mindset…
You better believe I’m ignoring all of those things right now because I’m creating a prototype. But in the back of my mind, I’m wondering if some of my design choices are going to be unworkable down the line. In the back of my mind, a slumbering tester mutters incoherently and rolls over.
It’s hard to explain the tester’s mindset to most people, because most people don’t want trouble. Testers crave trouble. It validates us. Being a good tester is like being a good conspiracy theorist—except one that is rational and helpful. If a developer tells me, “This is a product that does…” my first visceral response is “THAT’S what they WANT me to think!” I hope that I don’t say that out loud. But to be productive as a tester, I must approach every claim with active suspicion. I mean that: literally every claim. For normal people, this attitude is exhausting. Just as with unproductive conspiracy theorizing, there’s no limit to it; good testing can always go deeper. You could say that the urge for developers to stay in a positive mindset is the same instinct that says “don’t look down” when climbing a very high cliff. I like to think that testers look down so that everyone else can look up.
Twenty years ago, I worked with my brother Jon on software for our test project at HP. As I was coding, I would copy my latest build to a floppy disk (if you are too young to know about them, a “floppy disk” was a sort of wax tablet used by computer scribes) and literally toss it over my shoulder to him so he could test it. He stayed in the tester’s mindset: critiquing, looking for trouble. I stayed in the developer’s mindset: trusting my libraries, focused on overcoming the next obstacle. We worked concurrently. It was wonderful.
I could use a tester like Jon, right now…
****
James Bach is a consulting software tester and Technical Fellow at Tricentis. He is also the founder and CEO of Satisfice, Inc., a software testing consultancy. James has been in the tech field as developer, tester, test manager, and consultant for 38 years. He is a founder of the Context-Driven school of testing, a charter member of the Association for Software Testing, and the creator of the Rapid Software Testing methodology and Session-Based Test Management. He is also the author of two books: Lessons Learned in Software Testing and Secrets of a Buccaneer-Scholar: How Self-Education and the Pursuit of Passion Can Lead to a Lifetime of Success. For more about his work and online courses see https://www.satisfice.com/.