How It's Tested
39 MIN

Ep. #13, The Evolution of Testing and QA with Katja Obring

about the episode

In episode 13 of How It’s Tested, Eden is joined by QA expert Katja Obring. Together they discuss Katja’s 20-year career journey in the software testing and QA industry, her experiences across various companies, the evolution of testing practices, and her perspectives on current trends, including automation and AI. Lastly, Katja shares her thoughts on the future of software testing, the importance of critical thinking, and collaboration between developers and testers.

Katja Obring is a seasoned quality assurance expert and software consultant with over 20 years of experience in the tech industry. Having worked with companies like Linden Lab, Disney, and various startups, she specializes in testing strategy, Agile methodologies, and coaching teams to improve their QA practices. Katja is currently a Power Skills Coach at Skiller Whale and founding director of Kato Coaching Ltd.

transcript

Eden Full Goh: Hey Katja, thank you so much for joining me here on the How It's Tested podcast. How's it going today?

Katja Obring: Thank you, I'm good, how are you? Thank you for inviting me.

Eden: Of course, I'm really excited for the conversation we're about to have.

I think you have a lot of relevant experience and perspective that I would love for you to share with our audience of just your experience in quality assurance and testing and software development and delivery.

So I would love for you to maybe introduce yourself and just give us an overview of kind of like the journey that you've had in building software and what kind of led you to where you are today.

Katja: Love to. So I still call myself a tester, even though testing really isn't my main occupation these days anymore, but I stumbled into software testing after being a chef.

And then I took a role in, maybe you remember, a weird virtual world called "Second Life" that was going to change the internet. It didn't in the end, but that's where I started, in a mix of a customer support and community management role really.

And there a quality role became available, and my manager, I think a little bit annoyed with me at that point, or a little bit tired of me complaining about all the things that I found that weren't working in the software, encouraged me to apply.

So I did that. Because I knew the software really well, I had a lot of domain knowledge, and I am somebody who has a never-ending supply of questions. I think these were really the main qualifications I had at the time.

So they hired me and the rest is history as they say. So I went on to work for some very big companies. I worked for Disney Online for a little while. I worked for some very small companies.

I got headhunted into a startup in Gibraltar in the online gaming space. Brexit happened, so I left Gibraltar and went back to the UK, because I thought that, since I am originally from Germany, it may be hard for me to get back into the UK if I didn't get in there before Brexit was executed.

Worked here for one of the very few UK unicorns. Then got into a consulting company in the end, where my role broadened a little bit from being mainly a tester to more of a quality consultant, more involved in delivery.

And from there I joined another startup as a head of delivery, and these days I run my own consulting and coaching company.

Eden: That's awesome, I guess I'm curious because yes, starting from your role at "Second Life" to Disney to all the other companies you've worked for since, a lot of things have happened in the last 20 years.

The introduction and proliferation of, everyone can build web apps more easily, CI/CD, Agile became really popular. Mobile started becoming more of a thing, tablets were adopted and now with the advancements in AI, AI is being also applied to testing as well.

Do you feel like there were a few sort of like key events in the last 20 years that you feel like really changed your career or the industry that you think are kind of worth highlighting?

Katja: I do think so, yes, I have been very lucky. I, from the beginning, with the exception of one company, all of my employers were into Agile.

And the one that wasn't, I remember very well, because during the interview I expounded all my great knowledge about testing in Agile, which at the time was not super well known, to a group of five people interviewing me, who listened with fairly stony faces, and I was beginning to get the feeling there was something really weird going on.

Until one of them, who would become my manager's manager in the future, eventually said, "Well, we don't do Agile and we never will." Which of course wasn't true; two years later when I left, they were transitioning into Agile.

So that is definitely something that has been with me from the beginning.

But what has changed through the years, I think, is the focus on automated testing. That was a thing when I started 20 years ago, but it certainly wasn't the thing it is today. And obviously, recently, AI is going to make testers redundant, again, just as automation did 10 years ago. I think there was a big wave of, "Oh my god, automation is going to make all the testers redundant," yet not really, because very much like AI, test automation is a tool, and we all know what they say: "A fool with a tool is still a fool."

So yes, a tool can be great, a tool can change a lot, but you still need somebody to use the tool. So I think that is the biggest change I have seen.

Apart from that, I think for the entire time there has been a certain amount of trench warfare between manual and automated testing, and a certain amount of trench warfare between "you need to do scripted testing" and "no, you need to do exploratory testing."

So personally I am fairly pragmatic and I think it very much depends on your context. You need to choose the right tool for whatever your context is and then get on with it.

Eden: "A fool with a tool is still a fool," I love it.

It's interesting how like, yeah, these conversations, even in the last 20 years, we're still having similar conversations like you're saying just with newer, different tools, there's always going to be a new testing framework that's coming out, new AI applications coming out.

I think QA and testing is one of those functions in teams where at least I've realized that the strategic input is so important, the context on navigating the business, knowing what aspects of the product are actually worth prioritizing because an AI or whoever, an automated testing framework can go out and find 400 bugs in one release if you really wanted to, but what of those 400 bugs are actually meaningful?

Are you testing in the most realistic way that a human would actually do something? It just feels interesting that we sometimes as a society are repeating the same conversations as if we haven't learned anything, even though the technology is getting better and better.

Katja: I think that is very much spot on. And to be fair, from what I've seen of your tool, that is actually one of the few real inventions I have seen during the entire time.

So I think that is really, really interesting because like you say, a lot of the rest of it is iterations on tools and that is not a bad thing, of course you can make tools better.

I mean the fact that the wheel was invented, I don't know, back in the stone age, doesn't mean we are not producing much, much better wheels these days because we do. But essentially it's still a wheel and wheels have not meant that people stop walking.

I think humans have a tendency to jump into the most extreme conclusions of what could be or what could be the result of an improvement, a new invention, which generally I don't think is where it ends.

Eden: When you were first in your career, I guess like when Selenium and I know there's like Cypress and Playwright and Appium and Detox and all these testing frameworks for web and mobile that have come out, when they were first sort of announced and started really increasing in adoption, do you feel like you had any difficulty convincing your teammates and employers to adopt these new tools?

How did you sort of learn how to use these tools yourself? What was the mindset? Was it just like looking things up online?

How did you kind of like really wrap your head around some of these new advancements at the time and even with the AI advancements that are happening today, how are you kind of approaching that as well?

Katja: I always was driven by a desire to learn. I just like learning new stuff.

So for me, I see something, it catches my interest because, I don't know, maybe I have an automation problem to solve and I see a tool that says, hey look, I do this particularly well. Then I will have a look at it, and if I find it interesting enough, if I find it accessible enough and if it looks promising enough to solve my problem, then I will invest some time in it.

As for adopting new tools, at least with the teams I worked with, I found it was harder to stop people from trying to adopt every new tool than to convince them to try out new tools.

That changed a little bit when I went into the consultancy, because there I worked with a slightly different set of clientele. The companies where I joined directly as a tester in a team, I chose, obviously, companies that worked on things that I found interesting and companies that seemed to have a company culture that I thought would agree with me, and they obviously thought I would agree with them, otherwise they wouldn't have hired me.

Whereas in a consultancy you often have a slightly different setup, and we did have clients that were very reserved about new technology and new tools. That is a little bit harder.

In that case, what I usually try to do is build a proof of concept, build something that I can show them that functions and solves a real problem they have. That usually helps a lot more than a lot of talking and explaining and showing them graphs and numbers and all of that.

You also need to provide them with information obviously, but really showing them, look, if you do this with this tool then this problem of yours just goes away. And I find that a really impactful demonstration.

Eden: Of all of the different teams that you've worked on throughout your career, which of the companies do you feel like most exemplifies the ideal, if you will, quality assurance process or team or function?

And I don't know if that's necessarily the place you enjoyed working at the most and I would be curious like if those two things are aligned and just curious for you to maybe use a case study or an example from your career and tell us a little bit more about the specifics like how that team was set up, why that was maybe a good example.

Katja: Well, the place I enjoyed working most was definitely Linden Lab, which was "Second Life." I think what they did extremely well is that they hired very, very well; they hired extremely smart people.

To this day, everybody I talk to from that time, and I am still friends with quite a few of them, we all have the same feeling, which tells you something about the kind of people they hired, because we all felt we were surrounded by people who were all smarter than we are, which is always a nice feeling.

So if you work in a team where everybody has something to teach you, that is my ideal team really. In terms of QA, it was my first QA job so I didn't know any better. I thought they did it well. In retrospect, I don't think we did it super well.

Funny enough, I think the team I worked with that did it best was the consultancy, which of course was called Infinity Works. They were later absorbed by a bigger consultancy. Partially that was because it was the first time that I worked in a setup that intentionally declared developers responsible for quality.

So that was very much a setup where developers were hired partially with a view to: if you don't like writing tests, you are not the right kind of developer for us. That was the first place where I worked where every developer spent more time writing tests than writing code, and where quality really became a shared responsibility.

I think I mentioned this before, that's really where I transitioned more into a coaching or quality coach position because they were definitely better at writing code than I was. I can write code and I enjoy it. It's never been my main kind of thing that I did.

So I can work with them figuring out what we need to test, how we test it most efficiently. So what kind of level of tests do we want to write, how do we write it? And tying this back to the previous question, what is the best tool to achieve that?

Eden: Yeah, when you worked at Infinity Works, there were way more tools available to you at your disposal that you could pick from compared to when you worked on "Second Life" at Linden Lab 20 years ago.

I would assume that at Linden Lab, especially with something as complex as "Second Life," there was a lot of manual testing involved. And also, because of how interactive "Second Life" was, did you literally have to have multiple people manually testing at the same time to talk to each other? Is that how you guys had to do it?

Katja: Funny that you should mention this, because one of my first actual projects was when we had just made a big change to the voice system.

So when I started working at Linden Lab, or at "Second Life," we didn't have voice chat. Random anecdote: I interviewed for my role at Linden Lab in a text chat online in the virtual world.

Eden: Wow.

Katja: I was looking at my avatar, dressed, being somewhat businessy, talking to two other pixel people in a text chat, that was my interview.

Eden: That's so funny.

Katja: It was hilarious in retrospect, but it was also great because it gave me a really, really good opportunity to actually go back and revisit my interview.

You don't often get the chance to really look at what did you say, how did you say that? Could you have done this better? So I really love that.

Anyways, one of the first things I was involved in was testing the voice chat, and yes, in fact a colleague who was sitting on the other end of the office and I were talking to each other through "Second Life." That was part of our testing process.

Eden: Yeah, I think there are additional tools that have been developed to sort of like execute automated tests in parallel and simultaneously.

But I also know just like yeah, by the time that you sort of landed at Infinity Works, I would imagine that there was also the introduction of mobile. I think I'd seen in your profile that you worked on a number of mobile apps as well.

How do you sort of think about the tooling differences between web and mobile, and what's available to quality assurance experts in 2024, even in the last few years?

Katja: Well, I think the tooling for web is still vastly superior, which is shocking, because I think mobile is now a much bigger market really. So it is a little bit surprising that the tooling for web is still so much better.

And obviously one of the biggest challenges with mobile is always the fact that you have the two main operating systems, which don't play well with each other, Android and iOS. And then for both of them, Android even more so than the Apple variety, you have many different versions of the same operating system on the market, running on vastly different hardware.

So that I think always was the biggest challenge with mobile and maybe one of the reasons why the tooling is still a little bit behind because it's really hard to find good solutions to cater for all of that.

So basically you have a choice to make: either run your testing on simulators, which is relatively fast and relatively cheap, relatively because you still need to run your tests on a whole lot more variations than you need for web.

Or you try to be more realistic and test on actual device farms and then it gets really, really either very expensive or very difficult, or both.

Eden: Yeah, I think fundamentally the web has just been around for longer. There are more humans over the course of time that have spent time building tools for web.

It's also a lot easier and so I think that's definitely part of it but yes, I would also agree with you that I think the challenges of just like how iOS and Android kind of limit what you can do, they are controlled by two companies, it makes it just, yeah, more challenging to develop for.

And compared to how long web has been around, mobile still feels fairly new, which, because we use it so much and it's so widespread day to day, we sometimes forget: it has only really been mainstream for the last 10 or 15 years.

Whereas web has been around for way longer than that, and a lot of web stuff is open source, and definitely in the last couple of years we have seen a lot of interesting advancements in testing frameworks and tooling.

But yeah, sometimes I kind of have to take a step back and remind myself: why is it so much easier for web? Well, it's because it's been around for so much longer. We have so much more experience as an industry with that tech stack.

Katja: It is, and like you say, it is literally just easier.

I think the mind boggling part to me is that from our relatively privileged position in the west, I have a laptop, I have a mobile, I have a tablet and I have a very smart TV as well. I have a very smart watch, so I have a choice of access points to enter the wide, wide world of the worldwide web. However, a big part of the world does not have that choice and that's where I'm saying, that's where the much bigger mobile market comes in. I think it's easy from here to forget that sometimes if you create something that you want to work well for say a population in Africa or in big parts of Asia, you probably want to optimize for mobile because they may not all have access to computers.

And I think it's very interesting.

Eden: One thing I'm also curious, your perspective on, especially because you worked at Linden Lab and "Second Life," which was honestly, I feel like kind of ahead of its time in just how interactive and virtual the product and the experience was.

And it feels like, as a society, when we look at technology trends and how we're using technology in the physical world or to replicate the physical world, whether it's AR, VR, or voice assistants, peripherals like smart watches, using iOS CarPlay and Android Auto, software is getting more and more complex again, and it's not just a single screen that you interact with in sort of a one-dimensional way.

I'm curious how you think that will change the role of testers, and it probably already has, how it's going to change the quality assurance industry or just software development broadly.

I think it's going to require everyone to think more critically about how the whole process works, but yeah, how do you think it's going to change in the next three to five years?

Katja: I think the ability to think critically about what you're doing has always been really the main trait a good tester needs. I think the easier it gets to run more and more tests, the more important it is that you really, really understand what it is you're trying to achieve with your testing.

Coincidentally, I had a conversation about this just recently. Somebody posted on LinkedIn saying something like, "These are the API smoke tests I run, which are exactly two, namely: is it up, so does it return the status code I'm expecting, and does it send me a response? That's it, that's my smoke test."

And there was a whole conversation thread coming off of it, with lots of people saying, "Yeah, but you also want to test the schema and the response time and this and that and that and that," all of which was completely misplaced, because this is a smoke test.

It has the sole purpose of checking that your last deployment actually deployed, that the API is in a functional state after you have finished your deployment. That's it.
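As a rough illustration, the two-check smoke test described here might look something like this in Python, using only the standard library. The health-check URL and expected status are hypothetical placeholders, not the poster's actual code:

```python
# A minimal sketch of a two-check post-deployment smoke test:
# check 1: is the service up (expected status code)?
# check 2: does it send any response at all?
# Anything deeper (schema, response-time budgets) belongs in other suites.
from urllib.request import urlopen
from urllib.error import URLError


def smoke_test(url: str, expected_status: int = 200, timeout: float = 5.0) -> bool:
    """Return True only if the deployed service is up and responding."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            is_up = resp.status == expected_status  # check 1: is it up?
            responds = len(resp.read(1)) > 0        # check 2: does it answer?
            return is_up and responds
    except URLError:
        # Connection refused, DNS failure, timeout: the deployment is not up.
        return False
```

A CI/CD pipeline would call something like `smoke_test("https://api.example.com/health")` immediately after each deploy and fail the deployment step if it returns `False`.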

So everything you add at that point is literally a waste of time. Yes, you are wasting milliseconds potentially, but if you are Google then you are wasting 10,000 times those milliseconds per day.

That becomes then a bigger chunk of time; time costs money, and you run this on hardware you need to provision and pay for. So it is not negligible, this "Yeah, but it just takes 10 milliseconds more for me to check this other thing as well."

Of course you want to test it, just not there, just not then. And I think these conversations really will become more and more important the more and more advanced our testing setups and our test tools become.

The easier it is to do something, the more I think we are tempted to just do it all the time rather than asking does this serve a purpose here? And if so, what is the purpose it serves? The cost benefit analysis of this. Is it positive or am I doing it just because I can?

Eden: That's a great point. Leaning in on your specific example, if a smoke test is not the time and place to add the more sort of complicated, functional testing, what in your mind is the division of responsibility?

Who on the team, and in what part of the release and delivery process, do you have the other types of tests? And typically, how do you work with organizations to really figure that out and navigate that as a coach and consultant?

Katja: So I think thinking and talking about testing is everybody's responsibility and I like to shift that as far left as I can.

So I really like to work with teams and teach them to talk about testing the moment they start developing a story.

So one of the tools I really like for that is a very simple two-axis risk analysis matrix. Just map out what you think can go wrong, put that in your matrix, and then focus really on the highly likely and highly destructive things that you think could happen, and that's where you start.
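One minimal way to sketch such a two-axis matrix is to score each failure mode on likelihood and impact and start where the combination is worst. The 1-to-3 scale and the example risks below are invented for illustration, not part of the conversation:

```python
# Sketch of a two-axis risk analysis: each risk is a
# (description, likelihood, impact) tuple, both scored 1-3.
def prioritize(risks):
    """Sort risks so the most likely *and* most destructive come first."""
    return sorted(risks, key=lambda r: r[1] * r[2], reverse=True)


story_risks = [
    ("payment taken twice on retry", 2, 3),   # fairly likely, destructive
    ("logo slightly misaligned", 3, 1),       # very likely, low impact
    ("data loss on sync conflict", 1, 3),     # rare, destructive
]

for description, likelihood, impact in prioritize(story_risks):
    print(f"score {likelihood * impact}: {description}")
```

The top of the sorted list is where testing effort starts; the bottom may get no dedicated tests at all.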

From then on, I am very much in favor of automating everything that can be automated. There is stuff that doesn't make sense to automate obviously.

But I think developers need to not commit code that doesn't have unit tests, that doesn't have integration tests. I think developers also should write end-to-end tests. I think all of this should be written before you commit your code to your main trunk.

But I also think that developers shouldn't sit alone in their corner and write all this stuff. So I'm very much in favor of pairing, mobbing, ensembling, whatever you want to call it, specifically, ideally actually with multiple disciplines sitting together.

So I think a tester and developer sitting together to write all of the tests is probably the best way you can go. If you are writing functional tests, it may make sense to even bring in somebody from product, somebody who can tell you exactly what's going to happen there.

All of this should then be triggered automatically in a CI/CD pipeline, and I think that can be augmented by actual people looking at the product because you still need a human to interact with whatever app you're developing.

Humans do things very, very well that computers that are running automated tests just don't do well. Computers are great at everything that needs repeatability. A computer will do the same thing every time unless something goes wrong in the system, and then you know, which is amazing.

Whereas humans doing the same thing every time? They're probably not doing the same thing every time. They get distracted, they may have a bad day, they may have a sick kid at home, lots of things that are distracting, and you miss a tiny detail, yada yada yada. So humans are just not very good at that.

Humans on the other hand are extremely good at maybe looking at a website and knowing like in a flash whether the thing looks right or not, that's just what we do. Humans are really good at context, humans know.

So I know if I look at something whether it is culturally offensive to me, as will you. Those are things computers are not good at, and that leads back to it: you really need to think about what you're trying to achieve with your test, and then determine how you can most efficiently run these tests, because they all need to happen.

Eden: Yeah, that was excellent. But I want to click in on what you said about just like determining the decision process that you work with a team to determine if something should be automated or not. If you could elaborate on that a little bit, what are the trade-offs?

There's obviously like the easy stuff that everyone knows should be automated, but I think where sometimes I've had trickier conversations with folks is just because it could be automated, is it worth investing the resources? Is it a flaky test? Why is it flaky or is it just better done manually?

Why is it better done manually? Is there sort of a framework or decision process that you kind of use and kind of like work with other teams on for navigating when is something worth automating or is it even automatable?

Katja: So I think in my head it's kind of a set of sieves with increasingly smaller holes, basically. I start by assuming everything will be automated, because that is just my default stance.

So the vast majority of tests you want to run with every commit. There are tests, however, that you don't want to run with every commit. They are the first candidates to look at: does this need to be automated?

Is this a performance test? Yes, that should still be automated, just maybe outside of your normal CI/CD, at least other than a subset; you probably still want some of them in the CI/CD, but there may be a larger set of tests that you want to run only occasionally.

Is it something that is really, really complicated to automate? Are you looking at implementing the test at the right level? Is there a way to split this testing so that it is no longer complicated?

Can this move down a level? Can this be mocked? Should this be mocked? If we are mocking it, is it a mock that will require more maintenance than the test is worth?

Those are some of the questions you want to ask. Then, is this something that only needs to happen once a year? That's probably not worth the effort. Just document it well and do it manually once a year.

Eden: Makes sense.

Katja: Or maybe it is something that runs once a year, but it is really, really a very complex thing that is easy to get wrong and requires lots of math. So some tax systems come to mind.

Then you probably still want to automate that because it is just too error prone to be run by a human. But on the bright side you have a year to get it in place, so there's that.

So these are the questions I ask. Generally speaking, I think everything that is worth documenting, like writing down the steps on how to do it, is probably worth automating.
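The "set of sieves" walked through above could be paraphrased as a small decision helper. The categories and wording below are a summary of the conversation, not a formal framework:

```python
# Sieve-style placement of a test, defaulting to full automation.
# Each flag describes the test under consideration; the first matching
# sieve decides where it lives.
def place_test(performance: bool = False, hard_to_automate: bool = False,
               yearly: bool = False, error_prone: bool = False) -> str:
    if yearly and error_prone:
        # e.g. complex tax math: too risky for a human, and you have a
        # year to get the automation in place.
        return "automate anyway; too error-prone for a human"
    if yearly:
        return "document well and run manually once a year"
    if performance:
        return "automate, but run most of it outside the normal CI/CD cadence"
    if hard_to_automate:
        return "re-examine the level: split it, mock it, or weigh mock maintenance"
    return "automate and run on every commit"  # the default stance
```

For example, `place_test()` lands in the default sieve, while `place_test(yearly=True)` falls through to the manual once-a-year bucket.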

Eden: Specifically related to mobile where it is harder to automate things. I know you worked on a number of mobile apps when you were a consultant at Infinity Works and also at Adobo as well in your previous role, I'm not sure if you did anything on mobile at Disney, but curious like, what were some things you could automate easily on the mobile side?

I think for a lot of web stuff, almost everything is automatable, but what about on the mobile side like what was automatable, what were sort of like specific use cases or test cases that you found were a little bit harder?

Katja: I think the biggest issue I ran into was when I was in the consultancy, I worked with a building society, so essentially a bank, and we built a mobile app for them.

They didn't have one in 2019, which was a bit shocking. Anyways, we made a mobile app for them, but because of the setup within the company, they worked with another consultancy that was also providing hardware, provisioning the servers on which the entire authentication system ran. We added two-factor authentication, obviously, and that was a proper nightmare to deal with on mobile. On web it's quite easy: I get a token, I inject it into my tests, I'm done.
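The web-side pattern mentioned here, fetching a token once and injecting it into each test request, can be sketched as a tiny helper. The bearer-token header shape is an assumption for illustration, not something stated in the conversation:

```python
# Sketch of web-test token injection: obtain an auth token once (out of
# band, e.g. from a test login endpoint), then add it to every request's
# headers before handing them to the HTTP client.
def with_auth(headers: dict, token: str) -> dict:
    """Return a copy of the request headers with the auth token injected."""
    injected = dict(headers)  # copy, so the caller's dict is untouched
    injected["Authorization"] = f"Bearer {token}"
    return injected
```

An automated web test would then pass `with_auth({"Accept": "application/json"}, token)` straight to its HTTP client for every call.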

Unfortunately that's not how it works on mobile. So that was really, really tricky, and I don't think we actually did automate that bit.

Eden: I see, so you had a separate test that you would run manually, but then you could run the authentication process without 2FA in the automated test? You just had to kind of bypass it?

Katja: Yeah, we had a special test environment where we just didn't have the 2FA.

Eden: I know some folks have like a test code, just enter 1, 2, 3, 4, 5, 6 or something.

Katja: Yeah, well that would have been our preferred choice, but because the whole authentication system was maintained and run by this other consultancy and they were not super easy to work with, it kind of got lost in politics a little bit I think.

So basically whenever we wanted to test our mobile app in the live environment, somebody had to actually sit down and do that manually.

So smoke tests after deploys for example, could not be automated for the live environment because we just couldn't.

Eden: That sounds very familiar to me and I feel like a lot of the other teams that we work with at Mobot have a similar experience and part of it, it's not even about the two-factor authentication itself.

It's about sort of this broader trend that your system, your product is interacting with a different person, a different organization's product and code that you don't control, an API that you didn't develop.

And I suspect that is something that's going to happen more and more as there are more companies spinning up, more products spinning up, it's easier than ever for someone to build and launch their own API or train a new AI model or things like that.

I feel like we're moving into a world where everything is increasingly collaborative, but then it does mean that if you didn't build everything yourself from the ground up, there can be these unexpected hurdles that you have to overcome when you're trying to release your product.

Katja: Absolutely, yes. And especially again, if this happens then under the umbrella of say a larger organization that works with multiple providers for parts of their system, there's always going to be politics involved and that can make things extra tricky then.

Because people may not be happy to relinquish control over a part of the system that they are providing, so that somebody else can do a great job with their part, when that part is also something they would like to provide.

Eden: Yeah, totally get it.

Katja: Yeah

Eden: We've had a great conversation so far, but maybe one more question I have for you: you've seen so much over the last two decades, and now you are in a position where you are able to evangelize best practices and shape the next generation of testers, QA analysts, experts, test engineers, and software developers in test that are up and coming in our industry.

What do you think are sort of like the few key points that you're really trying to convey and evangelize for this next generation?

Katja: When I started, as I said, domain knowledge and a certain kind of critical curiosity got me really far, but I did have to learn to write code, and I would highly recommend to anybody who wants to get into testing these days to focus on learning software engineering principles, at least to the point where they're comfortable reading code, ideally to the point where they're comfortable writing code.

This cannot come, however, at the expense of being a critical thinker, because there are differences in focus between what a software developer does and what a tester does. And I don't think it's that you can only do one or the other; I think most people can do both of these things, but most people get trained in either one or the other. And I don't think that is a good combination for the modern day.

I think people really need to learn both sides of this and need to learn how to consciously switch between: I'm now thinking about how I'm going to solve this problem creatively with code, and now I'm going to think about what can go wrong with this and how I can demonstrate that these things are not going to go wrong with my code.

Eden: Makes sense.

Katja: Often, and this is, I think, maybe a third thing, there is still this myth, A, that testers and developers are kind of on opposing teams, which, nothing could be further from the truth, they need to be on the same team, but also, B, that these activities are somewhat solitary.

And I don't think that is the case at all. I think people really need to learn to be more sociable about these things, because the easiest way really to swap these perspectives is to sit with somebody and do this whole, I play the one side, you play the other side, then we turn around.

Because that gives you the best, I think the best access to the different skills people have and to the different thinking patterns people have. So I think they're probably the most important ones.

And then maybe if I may, one more, and that's one I feel fairly passionate about and it kind of goes into the whole AI discussion a little bit.

I think there is a tendency in many companies to try to solve problems with tools and tools are never going to solve your problem. Tools can help you solve your problem. And understanding what your problem is will help you choose the tool that solves your problem. But if you don't know what the problem is you're trying to solve, if you haven't learned how to think about a problem or to define a problem, then you will never be able to choose the right tool to solve that.

Eden: Yeah, I think continuing to, especially in the age of AI, us humans need to be able to reason from first principles and really be able to understand the root of every issue, of every opportunity, because that's the context, that's the insight and the creativity and the ingenuity that makes us different than just putting something in an LLM and seeing what comes out the other side.

So totally agree with everything you said. I have a lot of hope and optimism for our industry that there are evangelists and coaches and consultants like you who are sharing this perspective and it's backed by the decades of experience that you do have.

You can call on real world scenarios, real world situations that you've had working with these teams, it's really awesome.

Katja: I share your optimism. It is easy to be pessimistic, especially if you look at certain platforms like social media platforms where everything is black and white and AI is either going to solve all our problems or make humanity superfluous in the next five years. It's probably not going to be either.

Eden: That's right. Katja, thank you so much for the amazing conversation today. It's been a pleasure having you on our podcast. This is just the start of many conversations that I hope we'll have.

Katja: Thank you very much for inviting me and yes, I would love to continue this, it was really great.