In the world of software development, testing is a necessary evil. Testers serve as a bridge between product developers and users, troubleshooting bugs and issues to ensure the quality of an application. Beyond ensuring software quality and reliability, testers can promote customer loyalty, save the costs of fault-ridden software launches, and build a company's confidence in its own products. In this episode, we are joined by The Evil Tester as we take a deep dive into testing and talk about the challenges that organizations commonly face with it. We also look at the practical skills that make good testers and testing teams, and touch on topics such as agile testing and its relation to agile development, automation, and the various testing models that we see today.
Transcript
Aaren Quiambao
Welcome to Coding Over Cocktails, a podcast by Toro Cloud. Here we talk about digital transformation, application integration, low-code application development, data management, and business process automation. Catch some expert insights as we sit down with industry leaders who share tips on how enterprises can take on the challenge of digital transformation. Take a seat. Join us for a round. Here are your hosts, Kevin Montalbo and Toro Cloud CEO and founder David Brown.
Kevin Montalbo
Welcome to episode 35 of the Coding Over Cocktails podcast. My name is Kevin Montalbo. With us, as always, is Toro Cloud CEO and founder David Brown. Hi, David.
David Brown
Good day, Kevin.
Kevin Montalbo
And our guest for today is known as The Evil Tester, working as an independent agile coach and software development consultant, helping companies improve their agile development and coding processes, their use of automation, and exploratory technical testing. He writes books around his expertise, creates online training courses, videos and podcasts, and has over 100 repositories on GitHub to keep his development skills up to date. Ladies and gentlemen, joining us today for a round of cocktails is Alan Richardson. Hi, Alan, welcome to Coding Over Cocktails.
Alan Richardson
Well, thanks very much and that's a great intro. I want to hire that person. He sounds good.
Kevin Montalbo
Oh, thank you. Thank you. All right. So let's talk about your nickname first: Evil Tester. That's interesting. How did you come up with that persona?
Alan Richardson
So, Evil Tester. I mean, it isn't strictly me; that's the little devil icon that appears on all the products. And he was born in the olden days when we did very structured projects and it was very annoying. So I used to draw little cartoons to get my frustrations out, kind of like a Dilbert thing that no one has seen. But over time, it evolved into a kind of attitude towards testing, and I wanted to approach that in more detail and free myself up, because I noticed that when I was speaking to people, they would hear the word evil and go, you know, "testing is not evil, this is bad", and they'd hang on to the word. And I thought it was really important that we try and move away from an association with words, because on projects it's not like there are things that are good or bad or evil; there are things that work or don't work, and we should be open to exploring those to try and make a difference on the project.
David Brown
It's interesting. I thought it must have come from "necessary evil": testing is a necessary evil because it's not everyone's first choice to do testing. It's kind of like documentation, right? It tends to be pushed to the end of a project and done reluctantly, particularly by developers; it's not the interesting part of the project, typically. So I thought it must have come from that evil necessity that has to be done at the end.
Alan Richardson
So that was also part of it. Where your role is a necessary evil, you go, "OK, well, I can live up to this. I can be as evil as is necessary for this." But also it's a case of, well, if everyone on the project is going down the same path, you need to have other people seeing things on the side, looking at alternatives, pushing it in different ways. So it was that alternative view. You've got the angel on one shoulder and the devil on the other, and you need both in order to keep on track.
David Brown
Yeah, fair enough. In 2016 you wrote a book, which was actually a compilation of articles and columns you'd had in The Testing Planet. What I really liked about the articles from The Testing Planet is that they take a really humorous tone to the questions; there was a lot of wit and sarcasm littered throughout the responses to each question. It took what was a relatively bland subject, in terms of testing, and made it entertaining. Was that your intention all along, to sort of spice up testing and make it more interesting?
Alan Richardson
So I've never viewed testing as a bland kind of subject, because I'm in it, I'm interested in it; there's always so much stuff to research. I can see that for other people it might be. But the aim was to try and put out something which exemplified the attitude that I constantly try to develop in myself and train people in. When we're working on teams, we're trying to get people to be more assertive, to come out with ideas, to not feel that because they're testing, they have to sit in the background. And I didn't feel that any testing book really exemplified that attitude and approach that we absolutely want from testers, particularly in newer environments. But also I wanted to make it more humorous, more of a book that you could give to people, that people who weren't necessarily in testing might read. They might not necessarily learn how to test from it, but they'll learn what to expect from testers and how they approach it. The style of the question and answer was also designed so that people would realize they probably already have the answers; what they lack is confidence in their answers to themselves.
So part of the reason for giving those kinds of odd answers was to help people see the nuances in the question, where they were ignoring biases or presuppositions in the question. If they had asked the question differently, they'd see they already knew the answer and would have the confidence to push it through. So you give them an answer that they will never accept; therefore, they're forced to mirror their own views off the answer and then go forward with something. If you give people bad advice, they have to try and come up with some good advice to follow. So it was that provocation, and a lot of it was built from a study of psychotherapy over the years, where you're trying to provoke someone into taking responsibility for their situation. You don't necessarily give them answers; you help them explore their situation.
David Brown
It's clever, you know; the answers are provocative. That's probably a better word for it. And I don't mean to belittle the subject matter as bland. I mean, our forte is APIs and application integration; they're pretty bland subjects as well, in my view, to the general populace. So it's all relative, right? And as you say, each topic is interesting to those that are involved in it. You've been asked loads of questions, whether through the column or through the conferences and talks that you do. Are there any questions where you actually thought, "actually, that's a really good question", something that made you reflect yourself?
Alan Richardson
Yeah. So that question you just asked is one of those questions that make you reflect on what have I been asked, because it's such a hard question to answer and consider. Questions that stand out are things like, well, how do you become the best tester? And what stands out from that for me is the concept that there might be a "best", that there's an absolute in there, when it's not about an absolute in the world; it's always relative to you: how do you continue to improve, and things like that. So the questions very often are interesting because they show biases. The questions that concern me, and I think are really important, are the ones that we get asked all the time: how do I get into testing, how do I get into automation? Because they are reflective of the world we have, where some people want to get into testing as a route into software development.
For some reason, it's viewed as an easy way, or a beginner's approach, or something else, when in fact every part of software development has a beginner level that you can enter at, because they're all tremendously involved skill sets. And the focus that people have to learn how to automate more keeps coming up again and again. But that to me is a separate discipline. It's a very important discipline, but it's a separate discipline from testing. The questions we get asked seem to suggest that we conflate testing and automating so much that people think, well, I can go into testing and that's easier because I don't have to automate, or I can go into automating and that's easier because I don't have to test. So they stand out because they demonstrate to me that we still don't really understand how to do software development well enough that testing is a natural fit into it, that people know when and how to test. So it's still odd for me. And that's partly why I came up with Evil Tester, to have that attitude of responsibility, so that people can take charge, own their role, whatever it is, and bring their full self into that role, because I think we don't encourage that.
David Brown
Yeah. It's interesting, isn't it. You mentioned you get questions about how do I get into testing, or how do I become the best tester. We are recruiting testers all the time ourselves for our own software platform. And you mentioned that sometimes the discipline is seen as an easier path, or maybe it's "if you're not good enough to be a programmer, then you become a tester", something like that. Whereas in actual fact, finding a really good tester is really hard. So in your opinion, what are the characteristics of a good tester?
Alan Richardson
So one of the things that's hard about that question is that the temptation is to focus on the attributes that are not related to testing, right? Because one of the attributes that makes a good tester is that they're good at testing: they've studied testing, they understand what testing is and what it's for. But in order to really do that, you have to understand software development, to know where testing fits in. So for me, a good tester is one that wants to study software development. Personally, there are so many stories behind Evil Tester, right? I could keep giving you reasons why Evil Tester exists.
One of the reasons is I started as a programmer, and I've been interested in software development; I view myself as a software developer, and I aspire to being able to do all the different roles and aspects and processes within software development. Testing is where I'm most known, and I like testing because it feeds into all of those different areas and gives me the chance to do all of those things. So the skill set of wanting to embrace software development in total is really important, I think, because otherwise you tend to get narrowly focused on just a testing role, and that can be very hard, because it can tend to isolate people when what we are trying to do is spread across the entire project. So: a deep understanding of feedback mechanisms, cybernetic systems, and software development in total, the processes, the history of software development, all the techniques that are involved, and then moving beyond those.
Now, this is hard for people who are beginners to listen to, right? Because what I'm expressing is the totality of what it takes to really be in software testing. The techniques that we start with then move you out into math. We've got things like boundary value analysis, which moves you into set theory; we have paths going through systems, which moves you into graph theory; we have to work out how much data we need, which takes you into probability theory. There's math involved in this. You don't have to go into depth, but it's something we should be interested in; there's a lot of technical information and study required that comes through when you go into it in depth. And there's also the tenacity to see things through. We want people to develop an attitude of being able to communicate directly when people don't want to hear the information that you have to convey. It's not necessarily the right role for people who want to be liked all the time, right?
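To make the boundary analysis point concrete, here is a minimal sketch, assuming a hypothetical validator that accepts quantities from 1 to 100; the class and test names are invented for illustration, and the tests use JUnit 5:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

// Hypothetical system under test: accepts order quantities in the range 1..100.
class QuantityValidator {
    boolean isValid(int quantity) {
        return quantity >= 1 && quantity <= 100;
    }
}

class QuantityBoundaryTest {
    private final QuantityValidator validator = new QuantityValidator();

    // Boundary analysis partitions the inputs into sets (too low, valid, too high)
    // and probes the values on either side of each boundary between those sets.
    @Test
    void lowerBoundary() {
        assertFalse(validator.isValid(0));  // just below the boundary
        assertTrue(validator.isValid(1));   // on the boundary
        assertTrue(validator.isValid(2));   // just above the boundary
    }

    @Test
    void upperBoundary() {
        assertTrue(validator.isValid(99));
        assertTrue(validator.isValid(100));
        assertFalse(validator.isValid(101));
    }
}
```

The set-theory connection Alan gestures at is visible even in this toy: the boundaries are where the "valid" set meets its neighbours, and those edges are where defects tend to live.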
There has to be the ability to challenge people: to challenge them with evidence, with the right language for that person, such that they take notice. So there's a lot of what I guess people would call social skills. But I didn't learn them as social skills; I learned them by studying psychotherapy and the language that therapists use in order to challenge people. And what you also get when people answer that question are my biases: those are the things that I think are important, because I had to develop them throughout my career. I'm ignoring all the things that I came in with as defaults, because I'm focusing on the things that I had to develop. So-
David Brown
Let me just see if I understand it correctly. They've got to be an architect, a programmer, have an understanding of testing and skills related to automation and the like; they have to be a psychoanalyst and a good communicator. It's a tough combination of skills to be a tester.
Alan Richardson
But also you don't need that at the start, right? You can develop these skills over time.
David Brown
We're talking about a Holy Grail here.
Alan Richardson
Also, you don't need all of those skills, right? Because the whole reason we create teams is that we expect the team to have all those skills. We bring someone in on a tester role because they have some of the skills that are missing from the team for some of those aspects, and hopefully they all unify together. If you don't have the team-building skills, hopefully someone else on your team does, to help pull all those skill sets together. That's also why we have Scrum masters and managers; hopefully they have those skills.
But that's to give yourself maximum flexibility. My job has required maximum flexibility, because I've moved between different roles and different companies doing consultancy, so I've needed to develop that. If you're working on one team building one thing, focus on the skill sets that are important for that team, and on the skill sets that are in general missing from that team at the moment, in order to maximize the value you can bring to it.
David Brown
Well, you're consulting with organizations all the time, going in there and evaluating their testing procedures and the like. What are the common problems that organizations face when they're setting up a testing procedure or building it into their agile sprints? Is it a personnel issue? Is it finding the right people? Is it process? Where is it?
Alan Richardson
So it's all of those things. And process is essentially a personnel issue, because personnel don't know how to build process. So it's generally always a people problem at the heart. But one of the issues when you focus on testing is that people don't really study testing anymore. When I started, because I came from a programming background, I had to study testing to figure out what on earth this was, because I was joining a test consultancy and building test tools for them. But I didn't know what testing was; I knew how to code. So I had to learn testing, and I studied it. Most people now don't seem to study testing. They don't seem to read testing books in terms of testing theory. What they read are possibly books on programming, books on automating, introductory books on testing which are still kind of rooted in "write some scripts and do big design specs", or they might read the agile testing books, which are very much about interpersonal skills and relationships.
So one of the issues is just that people don't know how to test, and they don't know how to adapt their testing to environments. And we very often have a lot of junior staff on projects, so the seniority, the experience that helps you adapt, isn't there. I think it's also the focus on testing, on "how do we test?", rather than, well, how do we effectively build this software project or product? How do we construct a software development approach such that it's most effective, and software testing will naturally fit into that? So part of the reason we have issues is not just because we don't know how to test; it's because we don't know how to construct contextual software development processes.
David Brown
Say that again, that last part: we don't know how to construct...
Alan Richardson
Contextual software development processes.
David Brown
What do you mean by that?
Alan Richardson
So every environment is different, every product we're trying to build is different, and every team has a different set of people on it with different skill sets and different attitudes. The software development process has to cover not just "here are the fundamental building blocks of building software". It's "here are the fundamental building blocks of building software within the constraints that we have for time and budget, our overall organizational strategy, the skill sets of this team, the tooling that we're using", all of this.
David Brown
I understand. You mentioned agile testing. Most organizations will be familiar with agile development: they have sprints, they have Scrum masters, they're listening to the market, iterating and releasing frequently, all this sort of stuff. Then there's this concept of agile testing. How does that fit in with the whole of agile development?
Alan Richardson
I'm not sure there is such a thing as agile testing, other than as a reminder that you are working on an agile project: do not attempt to bring in all the artifacts and processes that are associated with waterfall or structured development. However, most people now haven't worked on waterfall or structured development, so all they know is agile. So for them, agile testing is testing. And the issue is that every agile process is different, right? You talk about Scrum; people interpret that in different ways. Some teams don't have a lot of the control points that are required for an agile process. Agile is hard, right? Because you've loosened up a lot of the restrictions. It's about evolving, it's about feedback, it's about looking at what's working and what's not. Yet some teams drop retrospectives. Some teams don't retrospect until two or four weeks in, when they need to be doing it continually. Some teams don't pair, so they don't get that constant knowledge sharing. They don't pair across roles or disciplines, so they don't pair someone who's in a testing role with someone in a programming role.
They don't have a system in place. So for me, a lot of agile testing is trying to work out: what is the end goal of testing in this particular project? Is it a focus on acceptance criteria? Is it a focus on what the issues are? Is it a focus on gaps? Are we trying to help spread testing knowledge throughout the team, or are we trying to make sure that we bring it to each story? Remembering that we're not just testing stories; we're testing stories that are interconnected, because we're building a product and a system we deliver, which is a thing in its own right. If we only focus on stories, then we miss the interconnected parts between them. We miss that this story over here now conflicts with that story over there. So it's taking those kinds of views as well. It's hard for me to say what agile testing is, because to me it's just testing, but remembering that an agile approach to software development has risks that we also have to target, and testing is looking at those. It's constantly looking at risks.
David Brown
Yes. That's interesting. I thought you were going to say it's part of an agile development process: the testing needs to be incorporated within the sprint itself, and so you can't sign off a story as done until it's been tested. Is that just assumed? Is that part of agile testing?
Alan Richardson
For me, that's just assumed, right? If you wanted an answer to "describe what agile testing looks like on an agile project", it's things like: we have stories, we look at acceptance criteria, we try and figure out the risks of those acceptance criteria, what data I need for coverage, what the process is, and so on. But those are just derivation approaches for coverage. For me, the testing process itself is: how am I going to test this, and how do I work in this environment? And if we take the broader view of risk, then it's not just the risks that are in the story, because there's always a risk that an acceptance criterion might not be met. So we check that the acceptance criteria have been met, we automate that to the extent required, and we test around it to try and make sure nothing else slips through where there's a bigger focus. On some teams the programmers are going to do a lot of the testing like that; on some teams the business analysts or user representatives on the team are going to do a lot of it. So testers sometimes have to pick and choose which parts they're going to do and fit in. The concept of agile testing is quite broad.
David Brown
As a software company ourselves, we went through a process here; a bit of a confession. We used to do the testing after the sprint but before the product release cycle. So say we had a quarterly product release cycle but were in two-week sprints: we would just create a backlog of issues that needed to be tested before we did the product release. We've since changed and incorporated the testing as part of the sprint itself. But a challenge you have to address is that you have to allocate resources differently, and you have to accommodate the testing as part of the sprint planning so that it doesn't become a bottleneck in completing your sprint. So how do you recommend people accommodate testing as part of a sprint plan when they're moving from that kind of waterfall-type release cycle, like we were doing, with all the testing as a completely separate process, to testing within the sprint plan itself, and having to resource it accordingly?
Alan Richardson
Yeah. Your description of your initial agile process is why that question of what agile testing looks like is quite hard, because it was called an agile process, but many people would have argued that it wasn't one. And testing is always about fitting in with whatever process is there, looking at the risk, and trying to make it happen. But when people do have that concept of a definition of done, or sprints, then clearly it's not done until it's tested, unless we have a definition of done that says, yeah, it's kind of mostly done and we'll test later, or it's a promise to be done in the future, if it's written in JavaScript.
So we have to figure out what it means in our environment, and what our appetite for risk on release is, because we may choose to defer certain things; we're happy to accept the risk that something doesn't work because we can fix it fast. All of those things feed in as well. But in general, I prefer to see testing done as close to the point of development as you can, where "done" incorporates that definition: we've checked that the acceptance criteria are met, we've tried to automate it, we've explored around it, we've done exploratory testing, we've demoed it to the user, the user has done some testing on it and is happy with it; all the aspects that we think are in there.
David Brown
What about those automated tests? Should they be completed within that sprint cycle as well?
Alan Richardson
So with automation, there are a number of aspects to that. One aspect of automating, and the one that probably should be completed within the definition of done, is the concept of asserting that the acceptance criteria have been met, and continue to be met over time, right? That's the most basic level of automating, and that's probably what most people typically mean by automation or test automation: checking that the acceptance criteria have been met. But we also have test-driven development and unit tests, and some of those are kind of related to the acceptance criteria, but really they're related to the code and the architecture of the code that we're building. We expect that to be included, but it's more of a "we just expect that"; we very often don't see what it actually means. Then you've got the concept of not just automating acceptance criteria assertions: we also want to use automating to help us test further and explore more.
You've got the concept of feeding in a lot of data to cover existing paths, which people very often don't like to do because it takes time, and people have this concept that all our test automation should be done within the build process, for fast feedback. Whereas it is entirely possible to have your acceptance criteria assertions in the build process, but still have, in parallel, a longer-running set of exploratory-type runs: we're putting stuff through because we don't know what result we're going to get back, randomly generating data and feeding it through. It's possible to do that in parallel, and very often people don't, because they don't bring in that full extent of what testing might mean. Testing is: how can we best test this product and get as much information back as possible? One way to do that is to use the existing capabilities for automating that we built during the process, because very often what we're doing during that "done" state is building the capabilities to automate. And then very often we underuse them, because all we do is assert acceptance criteria, rather than build the capabilities that then allow us to really push this forward, throw in lots of data, and have it running with multiple users in parallel, reusing the abstractions that we built up. We tend to focus on testing, or acceptance criteria, or automating, and not that broader testing view that can incorporate the automating.
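A rough sketch of that split, under assumptions: DiscountCalculator stands in for whatever abstraction the team built while automating the acceptance criteria, and all names are invented for illustration. The fast assertion suits the build process, while the randomised run reuses the same abstraction in a parallel, longer-running job and asserts invariants rather than exact answers:

```java
import java.util.Random;

// Hypothetical system under test: 10% off orders of 50 items or more.
class DiscountCalculator {
    int priceFor(int quantity, int unitPrice) {
        int total = quantity * unitPrice;
        return quantity >= 50 ? total - total / 10 : total;
    }
}

class DiscountRuns {
    // Fast check for the build process: asserts the acceptance criterion directly.
    static void acceptanceCheck(DiscountCalculator calc) {
        if (calc.priceFor(50, 10) != 450) {
            throw new AssertionError("50 items at 10 each should cost 450 after discount");
        }
    }

    // Longer-running randomised run for a parallel job: inputs are unknown in
    // advance, so we assert invariants rather than exact expected values.
    static void randomisedRun(DiscountCalculator calc, int iterations) {
        Random random = new Random();
        for (int i = 0; i < iterations; i++) {
            int quantity = random.nextInt(1000) + 1;
            int unitPrice = random.nextInt(100) + 1;
            int price = calc.priceFor(quantity, unitPrice);
            // A discount should never make the price negative or raise it.
            if (price < 0 || price > quantity * unitPrice) {
                throw new AssertionError(
                    "odd price " + price + " for " + quantity + " x " + unitPrice);
            }
        }
    }

    public static void main(String[] args) {
        DiscountCalculator calc = new DiscountCalculator();
        acceptanceCheck(calc);
        randomisedRun(calc, 1_000_000);
        System.out.println("no invariant violations found");
    }
}
```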
David Brown
Automation is seen as highly scalable, because you're building up a bank of repeatable tests that can obviously be executed without human intervention, or with little intervention. Is there a limit to how much we should be automating in our testing?
Alan Richardson
Probably, and that will depend on the environment and the product. So I think we can say things like: there is a minimum, and the minimum is that we should automate the acceptance criteria, because we want to make sure they continue to work longer term. In terms of the maximum, it's going to depend on how you do it. If you drive all the automated execution from BDD or Cucumber or something like that, which people do regardless of whether people want them to or not, and everything is in there, then you have a set of abstractions that are hard to reuse. So then there are limits, right? There are limits to how much you should do, because the maintenance of it is really hard. If you architect your automating in such a way that it's possible to put the pieces together in different ways, then you're building capabilities that can be harnessed randomly.
As an example, a couple of years ago I was experimenting with bots for testing. Rather than writing test scripts, I would write bots that used abstraction layers; they would randomly choose actions and randomly choose data, which were implemented by the abstraction layers. I could just throw these bots at the systems I was testing, and they would explore the crude model that they had in multiple ways and give me feedback. So then it's a case of: are there any limits to what should be done with that? And the answer is, well, no; I can just leave that running forever, because it has assertions built in, it will report when it finds something odd, and it is robust, so I don't have to maintain it. We've also got to distinguish between what we should do and what we could do, because we have capabilities in terms of automating: can we automate this? Can we automate it in a way that is robust and doesn't fail? Very often we make statements like "we should not automate this because it is hard to automate" or "because it is flaky"; that's actually a capability problem, not a risk decision about whether we need the information on an ongoing basis. So we just have to be careful that we ask the right question, and whether we answer it with a risk decision or a capability decision.
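A minimal sketch of what such a bot might look like, under assumptions: the TodoList class stands in for whatever abstraction layer drives the real system (the names are invented), and the bot keeps its own crude model of expected state so it can report as soon as the application disagrees:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Stand-in for an abstraction layer over the real system under test.
class TodoList {
    private final List<String> items = new ArrayList<>();
    void add(String item) { items.add(item); }
    void removeFirst() { if (!items.isEmpty()) items.remove(0); }
    void clear() { items.clear(); }
    int size() { return items.size(); }
}

class TestingBot {
    public static void main(String[] args) {
        TodoList app = new TodoList();
        Random random = new Random();
        int expected = 0; // the bot's own crude model of the system state

        for (int step = 0; step < 100_000; step++) {
            int action = random.nextInt(3); // randomly choose an action
            if (action == 0) {
                app.add("item-" + random.nextInt(1000)); // randomly chosen data
                expected++;
            } else if (action == 1) {
                if (expected > 0) expected--;
                app.removeFirst();
            } else {
                app.clear();
                expected = 0;
            }
            // Assertion built in: report when the model and the app disagree.
            if (app.size() != expected) {
                throw new AssertionError("expected " + expected + " items but found "
                        + app.size() + " at step " + step);
            }
        }
        System.out.println("bot finished without finding anything odd");
    }
}
```

Because the checks are invariants rather than scripted expectations, a bot like this can be left running indefinitely against a robust abstraction layer, which is the property Alan describes.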
David Brown
In a lot of your blogs and resources, you talk about models and building testing models: models like requirements, acceptance criteria, risk, flow, functionality. Can you tell us about these testing models?
Alan Richardson
So there's a huge amount to testing models and what modeling might mean to testers, right? Because modeling is simply our way of understanding the world, which in this case is the system. And depending on the model that we have, we will view the system differently, or we will limit the way that we approach it. As an example, if I built a graph model of a system that showed the flow of actions through that system, I'm focusing very much on the structure of the system. I'm limiting myself to linear paths through the system, rather than trying to figure out, well, how can I get from here across to this other point fast? Could I just change the URL and go directly to a part of the application? Is that valid? Is it not? My model is focusing my attention. So I think it's very important that we build multiple models to view the system from different angles.
One of the things that's hard about this is that, again, people very often haven't studied older computer science books, so they're not aware of all the different types of models that people had in the past. They may not know what a data flow diagram is. They may not know the concept of a graph. They may not understand that a state transition model is different from just a model of flow through a system. We have all these different aspects, and I think it's really important to try and study modeling like that. But then you also have informal modeling. A lot of people would view a mind map as a model, a model of someone's understanding of a system. Then we have to understand: is that a model that was designed to help us understand, or is it a model designed to help us communicate? Because very often we conflate the two. We build a mind map to help us understand the system, but then we try and use it to communicate, and it doesn't match anyone else's model of the system, because we're using different language and different terms. When we use behavior-driven development and we use Cucumber, those Gherkin scripts are models. They're a high-level abstraction of the system.
Some of them are procedural, some of them are declarative; we can argue about which is best and which is most appropriate. They probably should be declarative, but it's quite possible that under some circumstances a procedural model would be useful. There's a ton of stuff to understand in there, and it's such a huge area to study, because the more we can formalize the models, the more we can actually use them automatically in our testing. I mentioned bots earlier on; those bots were built around the concept of state transition models, so I could just let them loose on the system. If it was an informal model of the system, it would be hard to build a coverage approach, a way of executing those models and building on them to do stuff. So we have to decide whether it's a formal model or an informal model, whether it's for our understanding or for communication, whether we're using it to drive execution or for coverage.
Because a lot of the time, every model can be used for coverage: we can do stuff, then look back at our model and ask, have we covered this aspect of the model? Whether it's a mind map, a state transition diagram, or a list of stuff in a checklist, it can still be used as a coverage model. We also have to understand that there are limits to what we've written in that model, because there are ambiguities and modes of interpretation. If it's a checklist, we can interpret each point in that checklist differently; so having achieved some coverage of it, it may not be enough coverage, because we could interpret it in different ways. So I think modeling is a massively rich area, and it's something I continue to look at, and why I continue to study systems theory and cybernetics and all the different approaches, and try to get my head round Petri nets and all the different mathematical models. But I have to focus in on what is ultimately practical and usable. And we're also limited by the tooling that we have, because you can't just build your own tools all the time to explore models; then you'd have to formalize them.
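To illustrate how a formal model can drive a coverage question, here is a small sketch: a state transition model represented as plain data (the login states and actions are invented for illustration), with a check for which transitions a test session never exercised:

```java
import java.util.Map;
import java.util.Set;

// Sketch of using a formal state transition model as a coverage model.
class TransitionCoverage {
    public static void main(String[] args) {
        // The model: current state -> (action -> next state).
        Map<String, Map<String, String>> model = Map.of(
            "LoggedOut", Map.of("login", "LoggedIn", "badPassword", "LoggedOut"),
            "LoggedIn",  Map.of("logout", "LoggedOut", "timeout", "LoggedOut")
        );
        // Transitions a test session actually exercised, recorded as "state:action".
        Set<String> exercised = Set.of("LoggedOut:login", "LoggedIn:logout");

        // Coverage question: which transitions in the model were never taken?
        model.forEach((state, actions) ->
            actions.keySet().forEach(action -> {
                String transition = state + ":" + action;
                if (!exercised.contains(transition)) {
                    System.out.println("not covered: " + transition);
                }
            })
        );
        // Prints (in some order): not covered: LoggedOut:badPassword
        //                         not covered: LoggedIn:timeout
    }
}
```

The same data structure could feed a bot like the one above: because the model is formal, it can both choose the next action to execute and answer the coverage question afterwards, which an informal mind map cannot.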
David Brown
Alan, like most things, when you scratch the surface of a topic like this, you realize how much is underneath, and what an enormous discipline this is when you really start to hear you talk about it. You're providing some amazing resources for people, both those already in the field and those looking to get into it. Can you share with our audience your social URLs and other channels where people can follow you?
Alan Richardson
Yeah, so the easiest one to look at is eviltester.com. And then I think I've got the EvilTester handle on most of the social media, so that's the easiest place to find me; they're usually jumping-off points to anything else that I'm doing.
David Brown
Alan Richardson, thank you so much for joining us today. It was really interesting speaking to you, and you've made me realize, as much as I thought I knew about testing, how little I really knew, and that I need to go back to the books myself. Thank you so much for joining us today.
Alan Richardson
Thank you very much. We're always learning; that's what we do. But thanks for having me on. That was really good. Thank you very much.
Kevin Montalbo
All right, that's a wrap for this episode of Coding Over Cocktails. To our listeners: what did you think of this episode? Let us know in the comments section on the podcast platform you're listening to. Also, please visit our website at www.torocloud.com for a transcript of this episode as well as our blogs and our products. We're also on social media: Facebook, LinkedIn, YouTube, Twitter and Instagram. Talk to us there, because we listen; just look for Toro Cloud. On behalf of the team here at Toro Cloud, thank you very much for listening to us today. This has been Kevin Montalbo for Coding Over Cocktails. Cheers.