Movin’ Onward

February 24, 2015

I’ve moved Tester’s Notebook to a new location: http://testersnotebook.jeremywenisch.com/

Please update your bookmarks and/or subscriptions accordingly. I hope to have new content very soon. Thanks for reading!

Testing and Editing: An Analogy Analysis

April 14, 2014

There’s a well-known saying: “Those who can, do. Those who can’t, teach.” (George Bernard Shaw)

I tend to live by another saying, one that I just made up now: “Those who can, create. Those who can’t, analyze.”

I’ve always been drawn to writing (never the other way around), and I think I’m pretty good at it. I’ve also always been drawn to programming, and I’m not terrible at it. But where I’ve always seemed to excel is in analysis of other people’s creations: editing works of writing and testing software. Perhaps it’s my detail-obsessiveness and general anal-retentiveness; perhaps it’s my aversion to decision-making and attraction to questioning. I’m not sure. What I do know is that the sibling activities of editing and proofreading can be a useful analogy for testing and checking.

The standard definition of editing makes it out to be very similar to (and inclusive of) proofreading, but in my life it has been more helpful to define them as distinct activities. Proofreading is a low-level task — a hunt for misspellings, grammatical mistakes, punctuation problems, usage issues, and so on. This is the last thing you do to a piece of writing before it is sent off to its final audience. Before proofreading comes one or several rounds of what I personally define as editing. Editing is concerned primarily with high-level issues: structure, point of view, theme, audience, and so on. I usually think of proofreading as a “corrective” activity, wherein I make changes directly; but editing is more of a discussion with the author, during which I might make suggestions (“this paragraph might work better at the beginning”) and ask questions (“was it your intent to convey this message here?”). The goal in proofreading is to fix mistakes; the goal in editing is to help the author work their way to a better version of the piece.

Perhaps you can already see how being familiar with this distinction might help me organize my thoughts about testing and checking. Here are the definitions of testing and checking suggested by James Bach and Michael Bolton:

Testing is the process of evaluating a product by learning about it through exploration and experimentation, which includes to some degree: questioning, study, modeling, observation, inference, etc.

Checking is the process of making evaluations by applying algorithmic decision rules to specific observations of a product.

The end result of testing and editing is very similar, and may include a report of issues of concern to stakeholders, as well as questions and observations that may lead to the author or developer re-working part of the artifact in a significant way. Likewise, the basis for both checking and proofreading is algorithmic decision rules: Does clicking this button result in that outcome? Is this clause punctuated with that mark?
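To make “algorithmic decision rules” concrete, here is a minimal sketch of a check in Ruby (the Cart class and its tax rule are hypothetical stand-ins for illustration, not real product code):

    require "minitest/autorun"

    # Hypothetical stand-in for the product under test.
    class Cart
      def initialize
        @prices = []
      end

      def add(price:)
        @prices << price
      end

      def checkout_total(tax_rate:)
        (@prices.sum * (1 + tax_rate)).round(2)
      end
    end

    # The check: an algorithmic decision rule ("a $10.00 cart at 8% tax
    # must total $10.80") applied to a specific observation of the product.
    class CartCheck < Minitest::Test
      def test_total_includes_tax
        cart = Cart.new
        cart.add(price: 10.00)
        assert_in_delta 10.80, cart.checkout_total(tax_rate: 0.08)
      end
    end

A machine can apply that rule tirelessly and without judgment. Deciding whether 10.80 was ever the right expectation, and whether the rule matters to anyone, is the testing (or editing) part.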

There are a lot of differences between these pairs of activities, of course, but one of the more glaring, and useful, is that editing and proofreading are typically separate activities, with proofreading intentionally coming later — there’s not much point in proofreading a piece of writing if the process of editing may yet result in significant rewriting. This is different from a software project, in which (1) the reverse is often true, in that a lack of early low-level checking can result in broken software that isn’t worth testing yet, and (2) “testing” under the Bach/Bolton definition is a general term that includes checking.

The benefit of this analogy is some extra structure to my thinking whenever I try to distinguish among these activities as I do them. For example, when I first started testing several years ago, my experience giving feedback to other writers helped me find an appropriate tone for bug reports. And now that I have more experience testing, the two worlds regularly trade advice with each other.

What about you? What analogies do you use to make sense of your own testing world?

State of Testing Survey

December 3, 2013

You know something that has stood out to me at the two testing conferences I’ve attended, in BBST online classes, and in conversations in the Twitter testing community?

(Oh, hello, blog reader. It’s been a while, right? Good to see you, too.)

It’s how diverse the experiences and mini-worlds of testers are. For every broad idea from a tester to which I can relate, there are a dozen minor, but significant, details that seem entirely foreign to me. “Testing software” doesn’t paint a picture quite as clear or specific as “repairing automobiles.” Some testers work in giant QA departments, some work on small teams, and many work alone; every tester deals with a different palette of platforms, source code, and users; I could go on, but I don’t need to, because chances are good that you already know exactly what I’m talking about. Understanding the software tester’s world tends to be more about understanding a specific tester’s specific mini-world than it is about understanding “the software tester” in a general sense.

But, you know what?

QA Intelligence and Tea Time with Testers are trying to help paint a clearer picture of that general tester: Who we are, what we do, where we’re heading. The tool they’ve created is the State of Testing Survey, which they are launching this year and will continue each year going forward.

I encourage you to check out the link, share it with other testers, and participate in the survey when it’s released. This is a pretty great opportunity to learn more about the testing industry and about each other. I say we all take advantage of it.

(h/t TESTHEAD)

Insights from The Black Swan, Part 2

April 5, 2013

I am in the process of reading Nassim Nicholas Taleb’s The Black Swan and reflecting here as I encounter insights that excite me as a tester. If this news comes as a shock to you, please read this immediately.

Now, this:

“…that not theorizing is an act — that theorizing can correspond to the absence of willed activity, the ‘default’ option. It takes considerable effort to see facts (and remember them) while withholding judgment and resisting explanations. And this theorizing disease is rarely under our control: it is largely anatomical, part of our biology, so fighting it requires fighting one’s own self.”

First, it’s heartening to learn that my constant theorizing (about what is causing a bug, about how to reproduce a bug, about how a feature is or should be working, about how a user might respond to something) might be natural and largely out of my control.

Second, it’s disconcerting that my efforts to hold back judgment and explanation while collecting observations and information may be largely futile — and that I may in fact be fooling myself when I think I am succeeding.

Third… wait, do I actually try to hold back judgment and explanation while testing? Sometimes, yes — which may explain, according to Taleb, why a bout of intense testing and exploration can be so taxing. But perhaps more often, no, I think that I let my instincts run the show and theorize away. And it gets to be dangerous. When my brain wants to theorize, it’s like being trapped with a car salesman:

Me: “I want to investigate some factors before I start making any decisions. For example, what’s the price difference between the LS models and…”

Brain: “Yeah, yeah, sounds good. But wait! I think you’ll like last year’s sedans. C’mon, let’s take a look together.”

Me: “Fine, but then I want to get back to this.”

Brain: “No problem.”

Me: “Yeah, you know, these sedans look pretty good. I could be persuaded, let me just check the mileage…”

Brain: “OH! You know what you’d LOVE. This new SUV. C’mon, let’s go.”

Me: “Shoot, ok, but then I want to revisit these sedans, and also go back to my original questions…”

Brain: “This will only take a second, I SWEAR.”

Me: “Oh, you know what, this SUV is nice. Geez, I’m losing track of…”

Brain: “HEY! Let’s check your credit score. Super quick.”

Me: “Um, ok… wait, why did I come here again?”

And so it goes in my mind while I’m fact-collecting and trying to hold multiple theories in my mind, hoping that I don’t start to drop the threads, or worse, end up with a tangled ball of nonsense to show for my efforts.

Taleb suggests that fighting this natural tendency to theorize may not always be worth the effort. But what I’ve come to understand through this reflection is that I can at least train myself to simply be aware of it more. And, better yet, take note of the theories and possible explanations as they come to me.

My head is good at doing some things, but terrible at doing at least two things: Storing information and understanding my own thoughts. Paper and computers are far superior at accomplishing the former and enabling the latter. I know that when I jot down my theories as I test and move on, rather than hold them in my head (or fight them off), the results are much more useful and productive. I get to the exploration that I intended, and, not only do I not lose track of the ideas I had earlier, but I can consider them clearly later. I can act on them, expand on them, test them, even destroy them.

So, no, we can’t avoid spinning theories and explanations for the things we see while testing. But I think something as basic as effective note-taking can get them working for us instead of against us.

Insights from The Black Swan, Part 1

April 1, 2013

I am reading a book. (I’ll wait for your applause.)

(Thank you.)

I am reading Nassim Nicholas Taleb’s The Black Swan right now. I’m less than a hundred pages in, but I’m already convinced all human beings should read it. I could wait to finish the whole thing and write a tidy little recap here, but I decided it would be more fun to witness how long it actually takes me to read a book by regularly posting “insights” — nuggets that, as I read them, make my tester brain cells wriggle.

So here is the first bit that I found worthy of reflection. In this quote, Taleb describes what he calls the “round-trip fallacy”, by referencing his earlier example of a turkey being fed every day for a thousand days, until one day (the Wednesday before Thanksgiving) he is not.

“Someone who observed the turkey’s first thousand days (but not the shock of the thousand and first) would tell you, and rightly so, that there is no evidence of the possibility of large events, i.e., Black Swans. You are likely to confuse that statement, however, particularly if you do not pay close attention, with the statement that there is evidence of no possible Black Swans. Even though it is in fact vast, the logical distance between the two assertions will seem very narrow in your mind, so that one can be easily substituted for the other.”

Notice the distinction, which Taleb himself emphasizes: the difference between no evidence of a Black Swan (an improbable event with extreme consequences — in this case, the turkey’s unexpected demise) and evidence of no possible Black Swan. Is this not one of the critical thought and communication challenges of a software tester? When the testing of a product reveals no evidence of critical bugs, it is easy — and biologically natural, according to Taleb — to mistake that for evidence that there are no critical bugs present.

The former assertion — no evidence of possible bugs — has meaning and impact that is mostly dependent on context. The mission of my testing and the particular sampling of tests I’ve chosen and executed, among other factors, will have a lot to say about what “no evidence of possible bugs” actually means, including whether more testing, and what tests in particular, could be valuable.

But the latter assertion — evidence that no possible bugs exist — has no meaning. It only has truth in the isolated island nation of Simplestan, where there is but one computer and one user, and where the software is so simple that, not only are all possible risks known, but it is possible to develop a finite number of tests to cover all possible bugs. (You may know Simplestan by one of its other names: Paradise, or Boringville.) In the rest of the world, we have to train ourselves to remember that “evidence that no possible bugs exist” is a falsehood — a seductive one (it feels so similar to the other!), but one that can negatively impact the quality of the product when testers and stakeholders are led to believe in it.

Now, it feels like I’ve always been very aware of all this. But I think that may just be evidence of how good this book is.

Buckle Up: Let’s Talk Boredom

March 10, 2013

I’ve heard people say that software testing is boring. Testers complaining about their jobs, developers explaining why they wouldn’t want to test, friends and family questioning my career path. And as many times as I’ve heard this — possibly more — I’ve heard testers complain about people saying that testing is boring. So you won’t hear that from me. I know I have a great job; I don’t need to defend it.

I think that boredom is a natural part of any job. You’re a very lucky person if you’ve never, ever been bored at any job. In my experience, boredom at work usually means you’ve gotten too good at your job, you weren’t a good fit for the job in the first place, or much of your job is repetitive training. (I’m thinking even exciting professions like professional athlete and fighter pilot have got to include large stretches of boring time, right?) In the case of software testing — and probably many others, with which I am less familiar — boredom may well mean you aren’t doing a very good job.

I look at boredom as a tester smell. A “code smell” is a surface signal that something might be wrong deeper in your code. To me, boredom is a signal that something might be wrong with my testing at the moment. If I catch myself getting bored on the job, that’s my cue to step back and start asking questions:

  • Could I be looking at this from a different perspective? Whatever box I’m in, can I climb out and try something new?
  • Have I already run this test before? What am I gaining from running it again?
  • Should I move on from this area? If I’m no longer interested, perhaps there is no longer anything interesting here.
  • Am I doing something a machine could do for me? Automation could save me from boredom right now *and* in the future. (See the sketch after this list.)
  • Is it time for a jump start from something like the Heuristic Test Strategy Model or the Test Heuristics Cheat Sheet?
  • Would it be more productive at this point to get another person involved?
  • Do I need to step away and come back later? Maybe the work hasn’t become boring, but my mind has just become tired.

And so on.
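On the automation question in particular: even a few lines of Ruby can take a mind-numbing repetition off my hands. A rough sketch, assuming a made-up pile of exported CSV reports and an invented “report total must equal the sum of its line items” rule:

    require "csv"

    # Apply the same boring rule to every exported report, instead of
    # eyeballing each one by hand. (File layout and rule are hypothetical.)
    Dir.glob("reports/*.csv") do |path|
      rows = CSV.read(path, headers: true)
      next if rows.empty?

      computed = rows.sum { |row| row["amount"].to_f }
      reported = rows.first["report_total"].to_f

      if (computed - reported).abs >= 0.01
        puts "#{path}: reported #{reported}, but line items sum to #{computed}"
      end
    end

That solves the boredom twice: I don’t apply the rule by hand today, and the script keeps applying it tomorrow.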

I am fond of telling myself that I have a strict no-boredom policy. That’s a motivating catchphrase, but when I stop to reflect (I will at this point acknowledge that I’ve neglected this blog — and the space for reflection it gives me — for too long), I realize that zero boredom is impossible, and that, in fact, boredom is an excellent tool for a tester. If I’m bored, I’m not interested; and if I’m not interested, I’m probably letting interesting things slip by. So I train myself to recognize boredom’s approach, and use it as a signal to re-evaluate what I’m doing at the moment.

One hat too many

August 5, 2011

I often hear that a challenge for testers who started out as programmers is shifting their mindset from creation to testing. That makes sense to me. That’s a big switch, to go from “How do I get this to do what I want it to?” to “How might this bug somebody who matters? Does this solve the problem?”

I am experiencing the reverse challenge. I started in testing and am now learning to write scripts in Ruby to help my testing. The struggle I have is keeping my focus narrowed on getting the code to do what I want. Instead, I constantly throw up stumbling blocks for myself:

“Oh, but what if somebody tries to do this?”

“What if we don’t have that data?”

“Ack, will that design take too long to run?”

“Well, this will work, but would it be nicer this way?”

“Someone might want to be able to do this later. Right?”

It makes for slow, painful, and inefficient creation. In fact, it reminds me of the path my writing process takes at times. Instead of focusing on creation — words on paper — I stop and think about the effectiveness of a sentence or word, or consider theme or some other big picture concern. What I do get done is typically of good quality… but is it worth spending more time and getting less done?

I find I am much more efficient and productive when I put the ideas down — all of them — then revisit for editing and revising (or “testing” — determining what might need editing and revising).
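In scripting terms, I imagine that looks like writing the happy path first and parking the tester questions where I can revisit them, rather than chasing each one as it pops up. A rough sketch, with an invented log-counting task:

    # First pass: just make it work for the case in front of me.
    # Tester-brain questions get parked as notes, not chased right now:
    #
    # TODO: what if the file doesn't exist, or somebody passes a directory?
    # TODO: do we care about "WARN" lines too?
    # TODO: will this be fast enough on a multi-gigabyte log?

    error_count = 0
    File.foreach("app.log") do |line|  # hypothetical log file
      error_count += 1 if line.include?("ERROR")
    end

    puts "Errors found: #{error_count}"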

Hopefully I can find this mindset for scripting as well.