Sunday, August 9, 2015

How to Evaluate a Workshop

[Reformatted and revised slightly from my original post on the EuroSTAR blog.]

Why I prefer the workshop format

In the course of a typical year I speak and present about a dozen times at conferences, meet-ups and client sites. My preferred mode of presentation for most topics is a highly interactive workshop; I enjoy the interactions with my students and I always learn from them. Of course, workshops aren’t the only way to learn. For some topics they may not be the best way, but for many, a workshop can provide a deeper and more memorable learning experience than a lecture-style class like the ones we all endured at school.

There are always people who sign up for a workshop without understanding what to expect. Most participants dive in happily and enjoy the experience of working with, and learning from, their peers. But some feel cheated because they thought they were coming to acquire knowledge from an expert and there was no presentation to listen to. Where possible, I try to steer these attendees to alternative sessions that are a better fit for the way they prefer to learn.

In the subjects I teach there are no indisputable answers, no solutions that will apply to all contexts. That’s why I design workshops with opportunities for participants to explore the important areas of a subject and discover ways to arrive at answers that they can use in their own contexts.

So if you come to one of my workshops, what should you expect? How should you evaluate the workshop when it’s over?

What to expect at one of my workshops

I see a workshop as a collaborative effort. My role as the “presenter” is not to deliver material to passive learners, but rather to structure and facilitate experiences where everyone in the room has the opportunity to share knowledge and ideas and to learn new things.

You and the other students are active participants in the learning process with me. Interaction is central to the workshop model, as is the expectation that everyone has something valuable to contribute. While we won’t quite be peers in running the workshop, since I design and lead the session, some of the participants may well be my peers in knowledge or experience. They come to learn about the workshop topic, but they don’t expect all the learning to come from me as the leader. Instead, they expect to join with me in exploration, discovering new things (as well as reinforcing some old ones) through the interactions I have designed and will guide.

Evaluate a workshop using criteria that fit the format

Teaching well takes skill and practice. The skills required to design and facilitate a good workshop are significantly different from those needed to prepare and deliver a good lecture.

Like most presenters, I work hard to grow and refine my teaching skills. I rely on participants’ comments to help me learn about what has and hasn’t worked in a session, and how I might improve for the next time. But I find that the standard conference evaluation forms rarely elicit useful workshop feedback, perhaps because they were designed for lecture-style sessions. I need participants to apply different criteria for evaluating workshops.  

The primary consideration is whether you found the workshop useful. If yes, asking some of the more particular questions in the list that follows may help you articulate why it worked for you and how it might work even better.

If no, the same questions may help you articulate why not.

Either way, a comment or two will help me (and other workshop leaders) continue to grow and offer good sessions in the future.

Questions that can help you evaluate a workshop

  • Learning
    • Did I learn something useful, wonderful and/or important?

    • Did the workshop challenge me and others to think?
      • Did I discover new ideas and understanding?
      • Did it help me to see things I already knew in a new light?
      • Did it provide opportunities to interact and learn from others?

  • Comfort and safety
    • Did I feel safe in the workshop (even if it took me outside my usual comfort zone)?
      • Was it okay to opt out of exercises and observe if I wanted to?
      • Were group sizes varied so that I had at least some opportunities to work with my preferences?

  • Design and structure
    • Was the workshop engaging?
      • Were there interesting and useful exercises?
      • Were groups sized appropriately for each exercise’s purpose?

    • Was the workshop well-structured? Were there:
      • Exercises building on learning from previous ones?
      • Opportunities to reflect on and consolidate what I learned?

  • Pace
    • Did the workshop move along at a reasonable pace?
      • Did it keep us energised or allow boring lags?

  • Leadership
    • Was the workshop leader warm and welcoming? Did she:
      • Listen to participants and acknowledge contributions?
      • Provide opportunities for everyone to contribute (and not allow loud voices to dominate)?
    • Did she lead the workshop capably?
      • Did she exhibit firm but unobtrusive guidance?
      • Was she flexible and able to work with emerging ideas and participants’ energy?

    • Did the leader guide discussions and debriefs so as to facilitate learning? Did she:
      • Ask good questions?
      • Speak knowledgeably about the workshop subject?

And finally...

  • Was there anything else that struck you about the workshop?


Friday, April 10, 2015

Women and Conference Keynotes

How do we get more women to speak at conferences? Or, a more basic question, how do we get more women to make satisfying careers in tech that they want to stay in and grow with? 

I think the two questions are linked. I don’t have simple answers. I don’t think there are any simple answers, but I do believe there are positive steps we can take. Anne-Marie Charrett and I have embarked on one with Speak Easy. Another important one is to highlight role models for tech-minded girls and women who are actually in tech. One place to do this is at conferences with keynotes by successful women with interesting ideas.

We don’t see enough women giving keynotes at software testing conferences. What’s “enough”?  Well, one would be good for a start! At least one at each conference, in fact.

I hear a couple of contra-arguments here.

One is that most conferences are businesses. Organizers want keynote speakers they believe will be a “draw”, speakers who’ll bring in the punters. And that’s fair enough.

But I look at it this way. Maybe you’re missing out on female punters who’d like to see more people like them. From what I hear and see, there’s a market of women testers out there that you’re not really tapping into. And you’re not attracting nearly enough women to submit track session proposals: far more men than women submit, out of all proportion to the numbers of men and women in testing.

Are you scaring the women off? Or could it be simply that they don’t see enough other women speaking? They don’t see a culture where women are regularly on the keynote podium. Could it be that in a very important way, they don’t really feel part of the culture?

I don’t know. But I wouldn’t be surprised if this lack of visible role models were at least part of the answer.

Another argument I hear, primarily from younger women, is, “I don’t want to be chosen because I’m a woman. I want to be chosen for my experience and my ideas.”

You bet your booties, honey. So do I! So do we all. But (at the risk of sounding patronizing) I find young women’s post-feminist optimism shockingly naïve. Because before you can be chosen for your ideas, you have to have been considered. And men – and not only men, sometimes it’s women too – forget to consider women and their ideas more often than you’d think. Scouting for keynote speakers, they may forget you even exist. Still. In 2015.

I don't believe there's a vast male conspiracy in testing or test conferences, all joining together to keep good women down! In many ways, that would be easier to fight. I do think there's a general tendency to be oblivious: not to notice that there aren't (m)any women in the room or on the list, when actually, there should be. Because there are women of significant merit that they forgot to think about.

If it takes a quota to remind conference committees to expand their field of vision, then so be it. We all need reminders. We all have unconscious biases.

I think we are at a stage in human evolution where thoughtful people have to make conscious choices in order to overcome unconscious biases. It's natural for people to gravitate to the other people they feel most comfortable with, which very often means the people who are most like them.

A quota – say one woman keynote per conference – isn’t necessarily tokenism.

Let's say a bunch of people got together to do a job and suddenly realized that there weren't any men in the group. "Oh no", they said. "This looks terrible! People will accuse us of sexism if we don't have a man. Let's ask Paul. He won't make any waves (and we can get him to make the tea)."

But say the same group of people said instead, "Oh dear, this is starting to feel as if we've only looked at women candidates for our group. There's a whole pool of people we forgot to consider, and we know we would do a better job if we were more diverse. Paul would do a great job. He does excellent work, and is well respected in the community.  We know we will all work well together."

The first scenario is clearly tokenism, but is the second? Or is it simply a refocusing on the wider talent pool made possible by a conscious (however belated) attempt to overcome an unconscious bias?

I said on Twitter that it’s shameful to have no women keynotes at EuroSTAR 2015. I stand by that. The program committee chose excellent speakers. But given the talent pool of excellent women speakers, it’s disgraceful that the committee didn’t expand its field of vision and choose at least one.

And it’s not just EuroSTAR, by the way. Take a look at StarCanada, to pick just one other example. There are plenty more.

Saturday, August 23, 2014

Signing the ISO 29119 Petition and the Tester's Manifesto

I forgot to include these links in my previous post on ISO 29119. I have signed both. I urge all testers to read, consider the arguments and, if you agree, add your signature.

Why I oppose adoption of ISO 29119

I don’t oppose the idea of a testing standard, though I’d like to see a programming standard to accompany it. But ISO 29119 and its predecessors are not testing standards. They are fundamentally standards for documentation of testing and things called “testing processes”. There is little that goes into a testing process practiced by a skilled tester that a document about documents can capture or codify.

In a long career I have yet to see any indication that so-called “test” standards have done anything to improve the skill levels of testers or the quality of their testing. Instead, I’ve seen many organizations doing mediocre rote testing with testers who are forced to produce reams of impenetrable, repetitive documents that nobody outside the company’s testing circle reads. I repeatedly see test strategy documents showing not an ounce of strategy yet compliant with standards such as this. Those same organizations often insist that their testers obtain certification.

Whether or not this is the intent of the ISO 29119 proponents, it is how adoption will play out in real life in many organizations. As James Christie has pointed out, contract lawyers, internal auditors and managers who know nothing about testing will insist on the grand panoply of fat documents because it’s a standard and therefore must represent “best practice”. Nervous and unskilled test managers will embrace templates based on ISO 29119 because all those documents make them feel secure and important. People on Agile projects will struggle with the conflicting demands of their projects and the standards.

I have yet to see evidence that compliance with any “testing” standard equates to good testing.

Testing is a skilled activity. (James Bach calls it a “performance art”.) The only true measurement of testing is skill exhibited in live practice. Some proponents of ISO 29119 sneer at the “craftsman” (or craftsperson) mentality espoused by many of us. I wrote in an earlier post that I grew up thinking of craft as “skill fuelled by love and integrity”. You who sneer at the idea of craft and make snide jokes about medieval guilds should take a look at some highly-skilled professions in the modern world. Do you think a surgeon never speaks of, nor works to grow, her craft? Is a person licensed to perform surgery because of the fine strategies, plans and reports he compiles in templates?

I don’t doubt that surgeons must plan and devise strategies. They have procedures they must follow and forms they must fill. But ultimately, surgeons are evaluated—and licensed by their state-sanctioned governing bodies—based on their results and the skill they exhibit on a real live human, tools of the trade in hand. They must also pass exams on their knowledge of the human body and its pathologies, as well as a range of tools and techniques. But the exams surgeons undergo are much more rigorous than anything developed so far for a testing certification. And no-one becomes a surgeon merely by passing exams. Like other craftspeople, surgeons serve an apprenticeship: studying, practicing and exhibiting on the job the skills they must have to qualify for  their profession. As do lawyers.

I’m not pretending that software testers normally require the same level of skill as surgeons, nor as extensive an education program. But I do think that, scaled down, the analogy holds.

I would welcome a real testing standard. A true testing standard would focus on demonstrated skills assessed by qualified practitioners. It might set boundaries for the levels of testing skill required to work alone or under supervision, and the types of software that testers at differing levels could work on. Education to meet such a standard would combine classroom studies with on-the-job practical training and judging of live testing. At the successful conclusion of her education, a tester could be certified as a professional. Very skilled testers could become master testers, in demand for very high-risk software.

We aren’t nearly organized enough to devise a real testing standard in the near future. But I don’t see ISO 29119 as an acceptable substitute. It puts too much focus on the wrong things.

Tuesday, July 1, 2014

Off the top of my head - Some skills & personal qualities that a tester can benefit from

My previous blog post - on why I believe it's good for testers to learn to code - triggered discussion and some protests, especially from testers who argued that testing involves so much more than an understanding of coding. Which is indubitable (to me at least).

So I thought I'd post this mindmap, an undoubtedly partial - in both senses of the word - list of tester skills and personal qualities that I threw together a few years ago in an idle moment. These are all things I believe a tester can benefit from. It's a very high-level, i.e., superficial view. I'm sure I've missed some very important items. Of the items I did list, it's clear to me that not all testers need every item in every context.

Saturday, June 28, 2014

Why I believe it's good for testers to learn to code

Rob Lambert recently published a blog post with the title “Why Testers Really Should Learn to Code”. Rob’s principal argument is that “The market demands it and the supply is arriving.”

What follows is an expanded version of the comments I made on that post.

First, Rob is presumably speaking about the market he knows best, i.e., in the UK. I’m not currently seeing such a heavy emphasis on coding in the Canadian market, though I think it’s probably there in Agile circles.

But regardless of the market demands, I think there’s a larger concern: about testers growing their skills and expanding their toolkits.

Whether or not testers “should” learn to code seems to be a contentious issue in at least some parts of the testing community at the moment. I admit that I’ve been observing the controversy with amazement. I’m having trouble understanding why any tester would not want to learn to code. 

I’m now a test manager, consultant and strategist, and I haven’t done serious hands-on testing in many years. But when I was a tester I knew how to code, and I worked at learning the languages I needed to know to understand and at least read the code for the applications I was working with. Working as a technical writer before I even became a tester, I learned to code and it seemed to me then to be an essential skill.

As coding advocates keep saying, you don’t have to be able to turn out production-level code. But it’s enormously helpful for a tester to understand how a system is constructed from the inside out. When you’ve tried to write working code, you learn the kinds of mistakes that are easy to make in a given language, and that helps you find bugs in other people’s code. And when you can read code, you can often spot the place where an error occurred and see what else might be wrong around it.
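
For instance, here’s a sketch of one classic trap that a tester who has written some Python will recognize on sight (my illustration, not from the original post): the mutable default argument.

    # A classic Python pitfall: the default list is created once, when the
    # function is defined, and shared across every call that omits the argument.
    def add_tag(tag, tags=[]):        # buggy: mutable default argument
        tags.append(tag)
        return tags

    print(add_tag("smoke"))           # ['smoke']
    print(add_tag("regression"))      # ['smoke', 'regression'] -- surprise!

    # The conventional fix: default to None and create a fresh list inside.
    def add_tag_fixed(tag, tags=None):
        if tags is None:
            tags = []
        tags.append(tag)
        return tags

A tester who has stumbled into that trap in her own throwaway scripts will spot it in an application’s code far faster than one who has never written a line.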

When you can code, you can write routines to build data in bulk, and also to inject data. You can write routines that help you test (or check) faster, or make it possible for you to test a larger number of input variations than you could practically manage otherwise. You can write and run your own batch jobs. You can query a database directly, to find out what’s really getting written to it. (SQL is code, too.) You can make clever use of spreadsheets to boost your test capabilities.
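
To make that concrete, here’s a minimal sketch of the kind of throwaway helper I mean (my illustration: the table and column names are invented, and Python with SQLite stands in for whatever tools a real project would use). It builds a thousand records in bulk, injects them into a database, and then queries the database directly to check what was actually written:

    # Generate bulk test data, inject it, and query the result directly.
    # Everything here is from the Python standard library.
    import random
    import sqlite3
    import string

    def random_name(length=8):
        """Build a random lowercase name for a test record."""
        return "".join(random.choices(string.ascii_lowercase, k=length))

    def build_test_customers(count):
        """Generate `count` (name, balance) tuples as bulk test data."""
        return [(random_name(), round(random.uniform(0, 10_000), 2))
                for _ in range(count)]

    conn = sqlite3.connect(":memory:")   # a real project would point at a test DB
    conn.execute("CREATE TABLE customers (name TEXT, balance REAL)")

    # Inject the data in bulk.
    conn.executemany("INSERT INTO customers VALUES (?, ?)",
                     build_test_customers(1000))

    # Query directly to see what was really written. (SQL is code, too.)
    rows, top = conn.execute(
        "SELECT COUNT(*), MAX(balance) FROM customers").fetchone()
    print(f"{rows} rows inserted; largest balance: {top}")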

There are so many things a tester can do with code. And coding is FUN, folks! In fact, executing working code that you’ve written yourself is a blast! It’s almost as much fun as testing. (Okay, that’s highly subjective. But if you love software, why wouldn’t you love building some?)

There's a social aspect too. Being able to write code helps you understand your programmer teammates and it teaches you empathy and respect for their skills. (You want programmers to empathise with you and respect your skills too, don’t you?)

This is not to say that you can’t be a good tester without knowing how to code. Of course you can. I know lots of excellent testers who can’t code and don’t want to learn. I also know lots of terrific testers who don’t have (or don’t believe they have) exploratory testing skills, or visual thinking skills, and don’t want to learn those either.

In my experience, these and all your other skills give you tools you can use when the context calls for them. Not every tool is appropriate or useful in every context. But the more tools and skills you have at your disposal, the more flexible you can be and the more easily you can rise to the demands of different contexts. If you don’t have a particular skill, you may not even recognize how having it could help you test better in your context.

I don’t believe that the issue comes down to should or should not. Rather, I believe it’s about expanding your skills and your toolkit. Why wouldn’t you want to do that?

Tuesday, June 12, 2012

Breaking the Tyranny of Form - Part 1

Testing in many mainstream organizations is choked with low-value standardized documents that not only gobble up valuable thinking and testing time, but actively discourage thinking and impede good testing. While some testers can hope for relief from the document burden through the spread of Agile methods, this ridiculous situation isn't going away in a hurry. As a blog post by James Christie recently reminded us, the floodgates on "ISO/IEC 29119 Software Testing – the new international software testing standard" will soon open. I suppose it's possible that the new standard will wash away the documentation excesses we have now. I'm not holding my breath on that.

Test documents, whose sole purpose should be to serve the work, are instead driving and constraining the work. Absurdly, form is dictating substance. When, as in this case, the form is obese and bloated, it sucks up and squanders all the energy that ought to go into the real thing.

Some testers (many, I hope) refuse to be tyrannized by the supremacy of form. I want to reach into the mainstream where form dominates and help testers there to join us. I want to help them learn to think better and think for themselves. This blog post is a step in that ongoing effort.

Another step is the webinar I presented last week for EuroSTAR: "Unburdening Testing - Finding the Balance Point for Test Documentation". (The Q&A from that session are in blog form on the EuroSTAR site.) The webinar is an introduction to the interactive tutorial I will present November 6 at EuroSTAR 2012: "Right-sizing Test Documentation".

After presenting my own webinar, I watched the recorded version of Alan Richardson's excellent webinar "Thinking Visually in Software Testing". The thinking tools and practices he describes there are so uncannily like my own that watching it prompted me, in writing this post, to think and write about my own thinking. (I encourage you to watch Alan's webinar, if you haven't already done so.)

Form over Substance

When I first began working for a consulting company, my project manager gave me a mantra for billable work: “Never create anything that is not a deliverable to the customer”.

She was a brilliant PM who became an important mentor for me, but not all her advice was equally stellar. That statement in particular put horrible shackles on my work that I took years to shed completely.

The problem was that my customer deliverables were formal, standardized prose documents. For me to show value, I needed to end each day with sections and sub-sections populated with tidy paragraphs, building up to the finished product.

At first, I was lucky. My company was only dimly aware of standards purporting to govern test documentation, and so I created my own templates. But over time, my templates became our company standards. When I used them, I was so busy trying to adapt the structures for each project that I could no longer work easily with them. Like the templates for test documentation imposed in many companies, I found that the structure—the form—acted as a constraint on the substance. The tail was wagging the dog.

We Can't Test Without Thinking

Testing is all about thinking. We think and rethink constantly. We think about how best to gather information on our projects: what to read and look at, and who to talk to and when. We think about how to approach the software and we develop test ideas as we go. We create test models, often complementary models for testing different aspects of the same piece of software. We plan and replan and then plan again. We think about what we've discovered in testing and design what we're going to do next. If we're doing a good job, we don't ever stop thinking.

Prose documents in preset patterns inhibit thinking and creativity. It’s useful sometimes to have a checklist of important things to think about, but we can’t afford to let those checklists limit our thinking. Templates are not the best checklists.

Writing in sentences can sometimes help me to simplify and think through a tangled knot of ideas. I do occasionally write to understand what I’m thinking.  But I never set out to think in predetermined sections of formal standardized prose. Do you? Does anyone? Can anyone?

Visual Thinking

  [Image: A couple of years ago these drawings helped me think through a problem]

Most often, I think in scribbles and doodles, beginning with notes and drawings that acquire structure as I develop the ideas and concepts. Or I might start with a tentative visual structure to generate ideas and then modify or replace the structure as needed to fit my thinking. Sometimes I scrawl ideas on coloured sticky notes and move them around over several days on a board or double-page spread of my notebook, drawing connections and annotating as I go. I often use mindmaps. I may use several different techniques to get my hands around a difficult problem. 

Diagrams Emerge

What comes out of my initial thinking processes is rarely a customer deliverable. But over time, the result is usually some kind of structured diagram or set of diagrams that I can then use to communicate my ideas to other people. Rather than dictating and constraining the substance, the form of these diagrams emerges from the substance.

[Image: Example of a strategy diagram for an end-to-end systems integration test on a large project]

When—as is true on most projects—I must produce a formal document, I prefer to put diagrams front and centre. I want my documents to communicate, and I try to make them easy to read and understand. I use prose as sparingly as I can get away with, using tables and lists wherever possible.  I don't include boilerplate, and I never copy wodges of text from one document to another. (Why ever would I waste valuable project time on such useless make-work?)

[Image: This diagram shows the division of responsibility for testing on the same big project]

Vacuous Form Tyrannizes the Mainstream

Apparently, that's not how most testers and test leads develop test documentation. In my consulting work with clients, I constantly see mammoth documents stuffed with books' worth of stodgy, opaque, low-content prose. Often, I search so-called test strategy documents in vain for any actual strategy. I fall asleep looking at test scripts that dictate every point and click, hideously repeating over and over the most minute details of so-called "test steps" and their piddling expected results. It's very hard to believe that much thinking is represented therein, or will inform the testing that must unfortunately follow.

Nobody reads this stuff. It isn't useful. It certainly doesn't help anyone test well. So why do testers waste time and spirit churning it out? Why do their managers insist on the tyranny of form over the substance that only thought can produce?

Let's Take Testing Back!

I believe that thinking testers must take testing back from the process weenies and form merchants.  Many testers have done this, but it has yet to happen in the mainstream—in big companies, big banks, government projects...sometimes even in startups and small companies. 

In subsequent posts on this topic, I'll explore some ways we can do this.