Transcript
Kevin Montalbo
Our guest for today is a tester, toolsmith, and the Ministry of Testing Chief Operating Officer, or "OpsBoss", with over 10 years of experience providing testing expertise on award-winning projects across a wide range of technology sectors.
He is an advocate for modern, risk-based testing practices and trains teams in Automation in Testing, Behaviour Driven Development, and Exploratory Testing techniques. He co-founded the Ministry of Testing Essentials – a community raising awareness of careers in testing and improving testing education. He’s also the author of Testing Web APIs, which we’ll be having a giveaway of soon, courtesy of Manning Publications.
Joining us today for a round of cocktails is Mark Winteringham. Hi Mark, great to have you on the show!
Mark Winteringham
Hi, Kevin! Hi, David! Yes, a pleasure to be here.
Kevin Montalbo
Thank you for joining us. Now, before we dive into your book, can we first talk about what you do at the Ministry of Testing as an OpsBoss and, more interestingly, what you did as a DojoBoss before that? What do those titles mean?
Mark Winteringham
Yeah, I get asked that question a lot because as fun as the job titles are, they're not super descriptive if you don't really know about Ministry of Testing. So I joined Ministry of Testing about three years ago. I've kind of been involved with Ministry of Testing for years and years and years. Basically, Ministry of Testing, the way I like to think of it, is like a professional community of practice for testers. Until the pandemic hit, we ran conferences around the world, but we also had kind of an online presence. We had to pivot, obviously, when everything happened. So, we do quite a lot of online conferences, online training, just generally building ways to get people in the community, the testing community, talking to one another, sort of sharing.
And that might be casual conversations or actually more formalised training. So, when I joined, I'd been doing a lot of work for Ministry of Testing as an individual. And when I came on board, they wanted someone to push forward the online training aspect of it. So, that was the Dojo – our online learning space. We had the online learning space and the in-person events, and then as everything changed, my job suddenly grew from about 30% of the business to a hundred percent of the business. So yeah, I rose to the challenge. And then, about six to eight months later, I was, I say “promoted,” but we are pretty flat.
So, my role changed and I became the OpsBoss. So now, my focus is just the day-to-day running of the organisation: trying to support the team in what they're doing, trying to encourage more of an experimentation mindset, trying to get a bit more lean, and just improving that feedback loop between us as an organisation and our community, so we can react more to what they need and what they want. So yeah, it's a bit of a baptism of fire, as it's a step away from testing, but so far I'm really enjoying it. It's really good.
David Brown
Has your experience in Ministry of Testing led towards this book, Testing Web APIs? Or is that something that you've had in mind for some time?
Mark Winteringham
It’s something I've had in mind for quite a while. It is kind of connected with this new role, but it's almost like a culmination of my experiences over the last 10 years as a tester. I'd been playing around with this idea of doing the book for ages, but I kept thinking, “I just don't feel like I could do it” – understanding that it's a big undertaking and not necessarily wanting to take it on board.
But then I ended up watching, bizarrely of all things, a gaming stream on YouTube, and the guy who was doing the gaming was also an author, and he just said “a page a day.” That was the advice he was giving someone else. So I started a page a day, and after a month I had a chapter and a half and I thought, “Hey, I actually quite like this.” I like it as a way to present my thoughts. And it sort of, you could either say, progressed or snowballed, depending on your attitude. So yeah.
David Brown
Small steps. Good strategy. In your first chapter, you cover the reason why there's a need for testing web APIs. And you mentioned a test strategy model to get IT teams and stakeholders onto the same page. So can you tell us more about the model?
Mark Winteringham
Yeah, sure. So, it was kind of introduced to me by the original author, James Lyndsay, years ago, actually, at my first ever public speaking engagement. We have this idea of exploratory workshops in the testing community, where we get together and have proper sit-down discussions about progressing testing. It's quite formalised, but it's very collaborative and it’s quite exciting stuff.
So, that was actually my first time meeting James. He told me about this model and how he sees testing. The idea is, it's basically a Venn diagram, and the two parts of the Venn diagram are your imagination and your implementation.
So, the imagination is what you want to build. And some of that is explicit knowledge. So, that might be things that you've written down, things that you've said, things that you've emailed.
But it's also the stuff that's implied, the tacit stuff. So when we say we want a search feature built, behind the scenes we're like, “Well, we want the search feature built because we don't want users to go elsewhere. We want users to stay,” that sort of idea. So, it's a tester's responsibility to dig into the “why” and really understand the solutions, the designs and stuff. And then the implementation side is the product itself. We know some of how we expect the product to work, because this is where classic things like test cases and test scripts come into it – again, that sort of explicit information.
But then, we can do activities like exploratory testing, performance testing – anything that pushes the boundary so that we actually, really understand how the application works, not just confirm its correctness. And the idea is, the more we know about those two things, the more they can overlap. And anywhere there's a clash, that's where our issues are. That's where our bugs are, and that's what we need to resolve.
David Brown
Yeah, I think it's interesting. You talk about bringing stakeholders back into the process during the testing phase. Because in API-first design, obviously we talk about collaborating with stakeholders in terms of the design phase of building APIs, but I guess some people may be surprised that once the design phase and implementation phase is done with an API, you have a contract. And so, this concept of bringing stakeholders back in and understanding the expectations of the API and stuff like that, why not just test the contract? Why not just test the endpoint?
Mark Winteringham
Absolutely, absolutely. And that's something I talk about in the book, or kind of hint at, but it's a whole book on its own, right? You know, what drives testing skill? What is the core of a tester's expertise? It's that critical and lateral thinking; it's the questioning techniques. So, we question products – why can't we question ideas as well? Why can't we do the same thing, apply that knowledge, almost build a mental model in our minds of what's being presented, explore those eventualities, and actually have conversations around that sort of stuff? Because actually, I think when a lot of people in the testing space move into that, they go, “Oh, this feels familiar.” It might be a bit scary at first, but it feels familiar because you're stretching the same sort of mental muscles.
David Brown
You dedicated a chapter of your book to quality and risk. Run us through how quality and risk applies to our testing strategies.
Mark Winteringham
Sure. So, I was very much influenced by my colleague Dan Ashby on quality mindset. He's done a lot of work on that and I recommend everyone check out his stuff. And he helped me. He was like the co-founder of the essential stuff that we did. And the way that I see it is that quality and risk are two sides of the same coin. We want to build something of quality. If something has high quality, it is more valuable to our end users, our customers, our stakeholders. But then risk is the other side, and that's what's always potentially degrading our quality.
So traditionally, when we talk about testing, it tends to get stuck in this older thinking that it's just confirming requirements, confirming expectations. So, it's taking what has been explicitly stated, whether that's a big requirements document or a user story, and turning that into some test cases and test scripts and just running those. And you know, it has some success – I don't think it would've carried on as long as it has if it didn't have some success. But as we are being asked to deliver faster and the competition between businesses gets fiercer, we need to make sure that the actual testing we're doing is targeted.
So, that's where the quality and risk aspects come in within strategy. My focus is thinking about what quality means, and getting everyone else to think about what quality means, to our end users. Like, what do they care about? How do they define quality for themselves? And then I use that as the launching point for where I'm going to focus my testing, because I can't test everything. So, I want to be effective and laser-focused on the things that matter the most.
So yeah, that's what quality and risk are for me. They are sort of the north star of the testing that I do and the direction I take my strategy and my plans.
David Brown
If you want to address risk to improve quality, how do you go about mapping potential risks for your API?
Mark Winteringham
I think it depends on what activities you're doing. So, one good example, in the exploratory testing space, is the use of charters. Charters, for me, are ways of capturing risk, but written in a way that's almost like an invitation to explore. It's sort of a detail of, “We're concerned about this type of risk, so we'll do some exploratory testing around that.”
But that only focuses on one type of testing. My colleague Richard Bradshaw, who's the BossBoss at Ministry of Testing, and I run a course called “Automation in Testing,” where we talk a lot about how we actually use risks to help us identify what automation we're going to do.
So, some of it actually might be codified into automated checks. I might write some API automation. I might write some unit automation, contract testing. You know, we've talked about that – the risk of contracts actually drifting as things get more complicated, those sorts of things. So for me, how I track risk is in the testing activities I do and the plans that I set around that. I think it would be nice if we could see something where… So historically, we have risk registers, but they tend to sit separate from strategies. It'd be lovely to see something in the future where we can actually see quite clearly what risks we're concerned about and what things they're tied to.
But for me, personally, it's always kind of connected. It's justifying what I'm doing. So, if I can say I'm doing this because I'm concerned about this risk, it doesn't really matter where it's captured. It's almost like a symbiotic relationship: you ask me about the testing, and I rely on the other information to tell the story.
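To make the charters Mark describes concrete: a charter is usually a short prose prompt, and the link from a risk to an automated check can be made explicit in the test code itself. Here's a minimal, hypothetical sketch in Python with pytest and requests – the endpoint, payloads, and risk wording are all invented for illustration:

    # A charter is a short invitation to explore, framed around a risk, e.g.:
    # "Explore booking date handling with malformed and overlapping ranges
    #  to discover how the API reports validation failures."

    import pytest
    import requests

    BASE_URL = "http://localhost:8080"  # assumed local instance of an API under test

    # A custom pytest marker (registered in pytest.ini) ties the check to the
    # risk it covers, keeping strategy and automation visibly connected.
    @pytest.mark.risk("RISK-12: double booking on the same room and date")
    def test_second_booking_for_same_room_and_date_is_rejected():
        payload = {"room": 101, "date": "2022-06-01"}
        first = requests.post(f"{BASE_URL}/booking", json=payload)
        assert first.status_code == 201
        second = requests.post(f"{BASE_URL}/booking", json=payload)
        assert second.status_code == 409  # conflict: exactly the risk we care about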
David Brown
I'm guessing that collaboration with stakeholders when establishing those kinds of risks is where it also becomes really valuable. Because imagine a developer is thinking about technical risks – network exposure and denial-of-service attacks, those types of scenarios. But maybe a stakeholder who is more of a business stakeholder, thinking about the use cases of the API, may say, “Well, actually, we might have some potential users that might try and do this and this and this.” So, I guess the collaboration in that respect is going to help identify a much broader range of risks. Yeah.
Mark Winteringham
I think my eyes were opened a lot when I went to DevRelCon in London back in 2018 and saw how they had a whole track around community, but then also a whole track on just how you design your APIs. So, just because your APIs functionally work doesn't necessarily mean that that's where you stop. If you have APIs that have to be consumed by third parties, you need to make sure that your error codes are clear and that the feedback you provide from your APIs is intelligible, because if it's not, it becomes difficult to understand. People move on. They don't want to use your APIs, because they're going to go and find something that has less resistance, let's say, in terms of integrating it. And I think that's a good example there. That matters to the business. It's not necessarily a technical risk. It's more complex.
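To illustrate the point about intelligible feedback, here's a hypothetical before-and-after of the same validation failure, sketched as Python dictionaries – the field names and URL are invented:

    # Opaque: a third-party consumer has nothing to act on.
    unhelpful_error = {"error": "Bad Request"}

    # Intelligible: says what failed, where, and what to do next.
    helpful_error = {
        "error": "VALIDATION_FAILED",
        "status": 400,
        "field": "checkin",
        "detail": "Field 'checkin' must be an ISO 8601 date, e.g. '2022-06-01'.",
        "docs": "https://example.com/api/docs#booking-dates",  # hypothetical docs link
    }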
David Brown
One of the things I thought was really interesting in the book was this concept that testing should apply across the entire software development life cycle. You actually mention “even before a line of code is written.” How does testing come in even before a line of code is written?
Mark Winteringham
Well, that's that questioning concept – you know, sitting down with your team as you are designing your contracts, working out what sort of solutions you're going to come up with. And it's even more than that. It's actually questioning the problem as well. So, understanding what the problem is, trying to make sure everyone's on the same page and that there are no misconceptions. That idea, I think, is very much inspired by Janet Gregory and Lisa Crispin’s “Whole Team” approach. They talk quite a lot about that, and they've done a lot of work in that space on the idea of joining in as early as possible, at the point where we're discussing the ideas.
Because actually, it's a lot more fluid early doors. We haven't invested much time in things, so we can avoid certain biases like the sunk cost fallacy. Sometimes I find, personally, as a tester, if I say, “I think this might be a problem” at the start, it's more likely to be resolved than if we finish the work and we're about to release it, and it's like, “Oh, you know, we don't want to stop the release to do this thing.” So yeah, it's just about questioning ideas and questioning problems and getting everyone on the same page, really.
David Brown
Now, let's talk about automated testing. I'm a big fan of automated testing. It's not the be-all and end-all, but I find it’s a scalable process. You cover techniques such as functional API automation, contract testing, and automated acceptance test-driven design. Run us through these concepts: what you can do with automated testing, what we should be testing, and what the techniques are.
Mark Winteringham
So again, I think what's really interesting about this is that we can bring it back to risk. I think about those as three different entities, three different types of risk. So for example, on the functional side, the risk can be correctness. That’s where the classic sort of test scripts [are]. These are expectations, and we would like to confirm that they are still true, so we use those maybe in a regression capacity. The idea is that I have all these automated checks running, and if one of those fails, it's not necessarily telling me whether quality has gone up or down; it's just telling me something has changed in the system. And then I have to react to that change. I maybe have to go and explore it, find out what's actually happened, and determine if that change is good or bad for our quality.
And that's really inspired by a webinar by Michael Bolton. He talks a lot about regression testing and thinking about it as change detectors rather than this kind of safety net idea, in which case you're never going to have a safety net that covers everything in that space. So, on the functional space, it's thinking about ways in which the system works, risks that matter to us, core parts of the system, that sort of thing that we want to make sure that we're getting regular feedback on. So, as we make changes to the system, if those changes have a side effect, we can deal with that. And also as a side note, the way I measure that success is developer confidence. So, if a developer feels confident to make changes, I think that that's a sign of successful automation.
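As a sketch of that "change detector" idea: a functional check pins down a core behaviour, and a failure is a signal to investigate rather than a verdict on quality. A minimal Python example – the endpoint and fields are hypothetical:

    import requests

    BASE_URL = "http://localhost:8080"  # assumed instance of an API under test

    def test_room_endpoint_still_behaves_as_expected():
        # If this fails, it doesn't mean quality dropped; it means a core
        # behaviour changed, and a human should go and explore why.
        response = requests.get(f"{BASE_URL}/room/1")
        assert response.status_code == 200
        room = response.json()
        assert {"roomid", "type", "accessible"} <= room.keys()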
So we talked a bit already about the idea of contracts drifting between APIs. So, we work in a very remote world at the moment but if you've got one team in one time zone and another team in another time zone, and it's hard for them to have discussions with each other, if one changes their API and doesn't inform the other, everything's fine and green on their side until everything deploys and it all blows up. So, contract testing is kind of like an automated solution to a human problem. So, it's that same idea of a trigger. We use contract testing to make sure that our systems are honouring the contract that we've agreed with someone else. If it changes, that leads to a discussion and we either make the changes or we roll them back, that sort of thing.
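Tools such as Pact formalise this as consumer-driven contract testing; the sketch below is a deliberately simplified, hand-rolled version of the same mechanic, with a made-up endpoint and field list:

    import requests

    # What the consumer team says it relies on. In practice this would be
    # generated by the consumer's tests and shared via a broker or repository.
    CONTRACT = {
        "endpoint": "/room/1",
        "fields": {"roomid": int, "type": str, "accessible": bool},
    }

    def test_provider_still_honours_the_consumer_contract():
        response = requests.get("http://localhost:8080" + CONTRACT["endpoint"])
        assert response.status_code == 200
        body = response.json()
        for field, expected_type in CONTRACT["fields"].items():
            # A missing field or a changed type is the drift that should
            # trigger a conversation, not a silent deploy-time surprise.
            assert field in body, f"contract broken: '{field}' is missing"
            assert isinstance(body[field], expected_type)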
And then with the automated acceptance testing stuff, that's really interesting, because it's kind of a bit fuzzy in the testing community. I think it's clearer with developers, but testers sometimes mix up automated acceptance testing with just running good old test scripts. For me, the best analogy I always come up with is that it's like having a bowling alley with the bumpers on. It gives you the framework to deliver what it is that you're being asked to deliver, but it doesn't completely pigeonhole you. So as a developer, you have that freedom to build the system in the way that you want to build it, but still hit those business expectations. The barriers are there to prevent you from bowling a gutter ball.
David Brown
Nice analogy. I like it.
Mark Winteringham
Yeah, it's still up to you whether your design is a strike or a one pin or a split. But I see it as a different thing to regression testing, because there, I care about the risks of the implementation, whereas in the automated acceptance testing space, it's much more about the risk of delivery – the risk of not understanding what it is that you're being asked to deliver. So, using those tools to set the boundaries so you know that you're not overstepping the mark, I think, is really useful. I thought it was important in the book to talk about those as three separate things, because they tend to get lumped under automation. But as we've seen with performance testing, you could argue that's automation as well – you have to build an automated script to do the performance testing for you. So the banner of automation is actually quite wide-ranging, and there are lots of different risks and different solutions to those risks that we can think of.
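A hypothetical sketch of the "bumpers" idea, written as plain Python in a Given/When/Then shape rather than in a BDD tool – the endpoints and payloads are invented:

    import requests

    BASE_URL = "http://localhost:8080"  # assumed instance of an API under test

    def test_guest_can_book_an_available_room():
        # Given an available room (how availability is implemented is the team's choice)
        rooms = requests.get(f"{BASE_URL}/rooms", params={"available": "true"}).json()
        room_id = rooms[0]["roomid"]

        # When a guest books it
        booking = requests.post(
            f"{BASE_URL}/booking",
            json={"roomid": room_id, "firstname": "Ada", "lastname": "Lovelace"},
        )

        # Then the booking is accepted. This is the bumper: any implementation
        # that satisfies the business expectation stays in the lane.
        assert booking.status_code == 201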
David Brown
Is there a limit to automation and how much automation we should be doing?
Mark Winteringham
Yes, but I think it depends on the context of your situation. To put it very crudely: I used to work for a digital agency, and we used to have projects where I'd get half an hour of testing. So there, automation is just mad, because by the time I've installed a JDK and got my IDE up, I'm halfway through my testing time – I kind of just want to hit the ground running. Whereas if I'm on a longer-form project that's maybe quite complex, that's where I want to think about investing in automation. And again, there's that measure of confidence: if people don't have confidence in the automation, then it's not necessarily working for them, and it requires some sort of reevaluation.
But then, also, there is the factor of how much time I'm actually spending doing other stuff. If I'm spending all of my time fixing automation and creating new automation, then there are going to be gaps in my strategy. That's simply going to be the case. You have to spend a lot of time understanding what your context is, what the skills of your team are, what your project deadlines are, and what resources you have available to you. And all of that will determine how much automation – and how much testing in general – you can do.
So, it's the classic consultancy answer of “it depends.”
David Brown
It's always a good way out. I'm not sure if you touch on it in your book, but tell us about what your thoughts are on the evolution of testing frameworks for web APIs. What's the current state of those? Are there any particular frameworks you recommend people take a look at? Is there anything interesting happening in the space?
Mark Winteringham
I mean, I'm going to suggest something and then completely contradict myself. When we do our automation testing training, we always say, again, “Let context be your guide.” I want to pick the right tools that work for my team. So, if everyone's writing everything in Python, then I don't necessarily want to write all of my frameworks and stuff in Java. So, I let my context be my guide. I try to be someone who has a finger in many pies and is just aware of how each of these tools works.
And I propose some design patterns in the book as well: if you are building your API testing framework in code, this is a way of arranging it in a nice, clean, DRY way. It doesn't really matter what language you use for it, or what libraries you use – it's the same pattern. With that said, I kind of came into API testing just as Postman appeared, so I've always had a fondness for Postman. I know some of the people who work for them, and I've seen the things that they're doing with their tooling and how it's grown. What I like about Postman is that it sets the learning curve, the barrier to entry for people who have not done API testing before, quite low. Same with Swagger as well. We think of Swagger as a documentation tool, but a lot of people I know who get into API testing, the first thing they do is use the Swagger documentation and just create requests and stuff.
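Circling back to the design patterns Mark mentions: the book goes into the specifics, but as a flavour of the general idea, one common arrangement (hypothetical names throughout) is to hide the HTTP plumbing behind a small client class so that tests read as intent rather than mechanics:

    import requests

    class BookingClient:
        """Thin client for a hypothetical booking API. URLs, headers, and
        serialisation live here once, so tests never repeat that plumbing."""

        def __init__(self, base_url: str = "http://localhost:8080"):
            self.base_url = base_url

        def create_booking(self, payload: dict) -> requests.Response:
            return requests.post(f"{self.base_url}/booking", json=payload)

        def get_booking(self, booking_id: int) -> requests.Response:
            return requests.get(f"{self.base_url}/booking/{booking_id}")

    def test_created_booking_can_be_retrieved():
        client = BookingClient()
        created = client.create_booking(
            {"roomid": 1, "firstname": "Ada", "lastname": "Lovelace"}
        )
        assert created.status_code == 201
        fetched = client.get_booking(created.json()["bookingid"])
        assert fetched.status_code == 200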
But yeah, beyond that, I just prefer to use code rather than codeless, but that's because I'm comfortable with code – my background and experience make it easier. I know that some people have got a lot out of using the more codeless tools out there as well. It's certainly an exciting time for that space. At Ministry of Testing, we work with a lot of these tool vendors for sponsorship and on working with the community to get feedback. And yeah, the number of tools out there – not just API testing tools, but all sorts of frameworks – has just exploded. It's mad.
David Brown
That's why I asked the question – because it is exploding. There are so many choices. But that answer, to use the tool sets that you are comfortable with – we've had that answer on many subject matters, across various disciplines. So I think the answer is “it depends,” again. Now, on your website there's a big banner that flashes, “How's the book going, Mark?” Is this a question you're getting asked a lot?
Mark Winteringham
Yeah, it was at one point, and I was like, “This is a good way to put it to bed.” Now it just serves as a reminder, every time I go on my site, to get it over the line. It's almost there, actually. We just went through the second review process and I got lots of feedback, which was really useful. I've actually finished the first draft of the book, so the initial drafts of all the chapters are written. I've got the feedback for the first half of the book – that's all done – and I'm just waiting for some edits to come through. We're going to do a third review, and then after that, one last round of editing, going through the feedback and pushing through the imposter syndrome. And then I think we're close to publishing. So, I’m hoping for late spring or summertime to get it over the line.
David Brown
Fantastic. There are some excerpts of the book, Testing Web APIs, available on the Manning.com website. Mark, how can our listeners follow your progress on the book and your thoughts on social media and the like?
Mark Winteringham
Yeah, so the best place is to sort of get in touch on Twitter. So I'm “@2bittester” on Twitter and I'm also on LinkedIn as well. It's been interesting actually, I've learned that LinkedIn is actually a valuable space for something like this. But yeah, Twitter, LinkedIn.
David Brown
Why does that surprise you?
Mark Winteringham
It's just because, most of the time, I've used social media in a more casual way, whereas LinkedIn is a little bit more formal. But in this context that was really useful. I'm terrible at social media, so it was just an eye-opener – much to my wife's irritation, as she works in that space. And then there's the Ministry of Testing Slack. I'm always on there; I'm tied to it. So, if you want to just get involved in the testing community in general, check out Ministryoftesting.com.
David Brown
Should we expect Ministry of Testing to get back to face-to-face seminars?
Mark Winteringham
Yeah. So, we are taking small steps. We have like, over a hundred meetups around the world, so we're working with them to get them started again. We've got some small events in the UK and then we've got TestBash UK, which is our flagship conference. It's going to be in Manchester, UK in September. And it's exciting. We're all looking forward to seeing each other and stuff.
David Brown
So it's going to feel weird?
Mark Winteringham
It's going to feel very weird. Yeah. We have an EventsBoss who I've never met in person, so it's going to be very odd, but yeah, we're really excited. There are so many familiar faces, and new faces, that we haven't seen for a while, and I'm looking forward to seeing them.
David Brown
Mark Winteringham, thank you very much for joining us on Coding Over Cocktails!
Mark Winteringham
It's been a pleasure.