O11ycast
34 MIN

Ep. #66, Building Observability Platforms with Iris Dyrmishi of Miro

about the episode

In episode 66 of o11ycast, Jess and Martin speak with Iris Dyrmishi of Miro. They dive deep on what it takes to build an observability platform with open source tooling. Additionally, they explore the expense of outsourcing observability, the journey from logs to traces, tips for adopting new tools, and the spread of the FinOps movement.

Iris Dyrmishi is a senior observability engineer at Miro, based in Portugal. She began her career as a backend engineer and segued into DevOps engineering when she became a site reliability engineer at Worten Portugal.

transcript

Iris Dyrmishi: I love observability because it's such an important part of the platform engineering of a company that we don't even realize it, and I made it my life's mission to be an advocate and to promote it, to actually show how cool it is and how important it is. Because there are so many companies, so many engineers that say, "Ah, observability, what do we need that for?" Until something happens, and you realize, oh, okay, yeah, I needed that then. I needed some good observability there.

Jessica Kerr: Nice. What was that for you? What experience made you decide that you need observability?

Iris: Well, first, my career. I started as a backend engineer, I was trained as one, and then I started working in a company that offered services to other companies, so they wanted a DevOps engineer and I was trying to become one. It was the best move I could make in my career. That's why I had a lot of interest and a lot of exposure to a lot of tech stacks and how difficult it was to get things done for them.

Especially when you just entered there, you were new and you had absolutely no idea what was happening. There were no metrics, logging was not great. I'm not the biggest fan of logging, that's something that I guess I have in common with Martin. They were logging, but it was a bit not great. Tracing was almost nonexistent, so it just made me more interested to go and look for solutions, and then I actually had the opportunity to start at a company that was doing observability.

It had a whole observability team, I'm like, "Okay, this is amazing." And my love for it, that's how it started. I saw a company that didn't have it, I saw some very cool things online that, wow, these things can be done. Imagine how much they can improve the lives of all those engineers I have worked with? Yeah, I got hooked, let's say.

Martin Thwaites: It is, it's addictive. Isn't it?

Iris: It's amazing.

Martin: It can get very, very... analyzing data and seeing trends and those Ah-Ha moments of nice, little spikes and that kind of stuff is really addictive at times.

Iris: Yeah. I just started working at my new company, at Miro, a month ago. During my onboarding, one of my coworkers was like, "There is a lot of information, maybe you're going to be bored." And I said, "I never get bored of observability. Bring it on." It's just amazing stuff. I am very passionate about it, as you can tell.

Jessica: Great. Can you tell us something specific, a story of something you saw that got you excited?

Iris: Well, the thing that got me excited the most was traces. In my previous company when my full observability journey started, I actually started working with traces heavily. I was using Jaeger, and then you would open a trace and see all the spans and how everything went and you could understand everything that was going on there. I was like, "Wow, this is amazing. We need more of this." So that's definitely the one that really amazed me, and it started my passion for tracing in general as well.

Martin: So I think this would be an amazing time for you to introduce yourself and who you are, what you're doing right now, that kind of stuff.

Iris: I'm Iris, Iris, Iris again. It's a pretty international name. I'm a senior observability engineer at Miro. I am currently based in Portugal and my team is based in Amsterdam, so it's a very interesting, multinational team.

What we're doing right now is we are trying to build our own observability platform with open source tooling, providing some amazing modern tools for our engineers to be able to rely on our system to debug, to troubleshoot, and, why not, to avoid having some issues in the first place.

So that's pretty much it. My main focus is usually on tracing. Because I speak so much about it, usually wherever I go it's, "Okay, you want to work with tracing? Work with it." So it kind of becomes my thing, so that's my focus and what I'm most passionate about. But yeah, observability is my thing. I also like writing, speaking about observability, and connecting with the community in general to learn more and to keep an open mind about what's coming and what's new.

Jessica: Great. What goes into building an observability platform?

Iris: Well, first of all, you need to be a good platform engineer in the sense that you need to know the environment that you're working with. You need to be a very fast learner because, from my experience, observability is something that moves very fast: one year you have this technology, and then one year later you have this amazing new one that you really need to get on and learn.

Take OpenTelemetry: none of the companies that I worked with were using it, and all of a sudden, boom, it became the next thing and you had to learn and adapt, so that's also very important. Also, to build an observability platform, not so much on the technical side but more on the people side, you need to have people working on it that are good communicators.

For me, building an observability platform is not just the tech part, it's also knowing your engineers, knowing what they need, helping, assisting and becoming like a service provider. So I think that's the next important thing, other than of course knowing what you're doing technically. It's not just about talking and getting into agreements.

Jessica: Right. Observability is about communication, it's about our software communicating to us so to do it well you also need to communicate to developers.

Martin: It's almost like communications and interactions is at the core of everything that we do.

Iris: Absolutely.

Martin: So let's talk about logs, because I love talking about logs. You're like me, you're into traces, and we've been talking to a few people on the o11ycast recently about their journey from logs to traces and why they went through that kind of journey. What was your experience of the difference? I have my own opinions, I will probably share them because it's who I am. But I'd like to hear what your opinion is and why you moved from logs to traces.

Iris: Well, one thing that I have to say about logs is that I have not completely abolished them forever. They're good, sometimes they're necessary, it's good to have them. But at times you have this huge amount of information that is not telling you anything, and sometimes you have to search for a needle in a haystack. If they're not formatted properly, if there is not some kind of good formatting, it makes it even more difficult.

I've seen some logs that were... I mean, I'm making a gesture right now, but this big, that you cannot read through them and you have to spend hours trying to find what is happening there. It can be very, very painful, troubleshooting something. Sure, sometimes it's necessary. But traces, I mean, what can I say about tracing? You have the whole call path it passes through, you can very easily find the bottlenecks, find what was in error and what was not.

Sometimes it's even easy for an engineer who doesn't know what the system actually does to go there, see what's happening, and actually help you. It's much easier in general, just to troubleshoot. For example, if you are under a time constraint and you have to choose between traces and logs, I would always go to traces because it's much easier, it's a very intelligent way of troubleshooting in my case.

I feel like for tracing the instrumentation is also done in a more standard format; usually teams do not really bother with making different fields for different things, so tracing is pretty much standardized and works much better. But logging, because it's such an old pillar and it has been used for years and years, sometimes an application was created 10 years ago, some logging was added there, and it still exists.

So that's one of the reasons that I don't even like it. In general, it's just too much information, very difficult to filter and to process, and sometimes you don't find what you need. That's all my rant about logs.

Martin: I mean, a lot of that is the historic context.

We have to pay homage, logs got us here. Logs, if you do them well, you do them structured, you have standards around them and that kind of stuff, they can get you a long way. That doesn't mean that logs are useless because they are useful in a lot of contexts. My main point is that traces provide so much more on top of what you get from logs and sometimes you can't have traces, and that's fine.

But like you say, the logs are what we've had for ages. Everything has logs, and sure, logging systems have logs, even trace systems have logs and you can use logs to debug your traces. So logs can be useful in a lot of contexts.

I have, in the past, been very anti-logs and that's really just because I believe that people can get so much more out of moving to traces. Not because they can't get a lot from logs. So I think you've kind of hit the nail on the head with what you've said there.

Jessica: Moving to traces. That gets to something I wanted to ask you about, Iris. You said that as an observability engineer, you need to learn very fast because it changes a lot and it does. There's a lot of advancement right now but you knowing about it doesn't get it into all the existing software so how do you do that? Even once you understand OpenTelemetry, how do you work that migration path?

Iris: What I would say is you need to be a very good politician. Observability needs some politics. We were actually having a discussion about this the other day with my team, about introducing OpenTelemetry more widely, and the answer is: do it by showing. In my case, I'm using OpenTelemetry because usually when I'm about to migrate a system to OpenTelemetry, tracing comes first, so that's where I'm focusing.

So do it by showing, provide a very nice example. In our case we always go for tracing: we implement OpenTelemetry, nice collectors, the SDKs, the OpenTelemetry Operator, and just show these cases, and people get hooked. When you see how much information you can get and how easily it can be done, you actually don't need to change much of your ways. It becomes easier.

But of course it's not just about politics and trying to convince people, because they have their own things to do, their own OKRs, and we cannot just push them with, "Oh, please update, please update." Of course we have to make it as convenient as possible for them, so their migration will be flawless. The first thing is to show something beautiful and how useful it can be for them. The second is to provide a very easy and convenient way for them to migrate to it.
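For readers who want a sense of what "showing" can look like in practice, here is a minimal sketch of a manual OpenTelemetry tracing setup in Python. The service name, endpoint, and attributes are illustrative placeholders, not anything from Miro's actual setup, which the episode does not detail.

```python
# pip install opentelemetry-sdk opentelemetry-exporter-otlp-proto-grpc
# A minimal sketch of manual OpenTelemetry tracing setup in Python.
# Service name and Collector endpoint below are illustrative placeholders.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# Identify the service so its spans are easy to find in the trace UI.
provider = TracerProvider(resource=Resource.create({"service.name": "checkout"}))
# Ship spans to a local OpenTelemetry Collector over OTLP/gRPC.
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4317", insecure=True))
)
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout")

def handle_request(order_id: str) -> None:
    # One span per unit of work; attributes make it queryable later.
    with tracer.start_as_current_span("handle_request") as span:
        span.set_attribute("order.id", order_id)
        # ... business logic goes here ...

if __name__ == "__main__":
    handle_request("order-123")
    provider.shutdown()  # flush any remaining spans before exit
```

On Kubernetes, the OpenTelemetry Operator Iris mentions can inject much of this configuration automatically, which is part of what makes the "easy and convenient" migration path she describes possible.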

Jessica: Nice. Make it useful, make it easy.

Martin: So I think that's at the core of what software is, it's about making the easy thing the right thing, and making the wrong thing the hard thing. If the easy thing is the right thing, and that's also going to give them a lot more value as well, then you get a win-win and everyone is going to adopt it. I've said this for quite a while: if you're a platform team or a centralized tooling team and you have to force people to use your tool, you're obviously not providing the value that they need.

You're providing what you want to give. So making that easy to adopt and making it provide them with the right value means that you don't have to be, and you mentioned politics, you don't have to be the politician there, because you're providing them with a service that's essentially free, that will give them value. Who's going to say no to that?

Jessica: But they do stay busy.

Iris: That's very true.

Jessica: Do you see people using observability during development or is it only in production that they look at stuff?

Iris: Honestly, as much as I don't like it, I see people using observability in the form of logs in the development phase, but, yeah, it's mostly in production. I mean, sometimes it makes sense. You don't want to put as much effort into a development environment, and when you're developing it's normal to look at the logs more. You're not going to go and try to have traces and metrics. But predominantly it's in production, that has been my experience.

Martin: I'm going to challenge that.

Jessica: Logs work when it's only you doing exactly one thing in your test environment. Logs do not scale.

Martin: They also don't work when you're talking about things like multithreading and maybe 15 requests. You load up a website locally and it's got 17 JavaScript files, it makes 15 API calls to your backend. Trying to make sense of what called what by looking at your logs, you just see a big stream of things. Tracing provides so much more value even there.

Iris: I completely agree.

Jessica: We know you like it, Martin. But that brings us to this: you've talked about tracing and collectors and things like that. Is it all backend, or do you have frontend observability as well?

Iris: Mostly backend. Frontend is what I'm actually more interested in investing now in my new change, but mostly it has been backend. But I'm very curious about learning more about frontend.

Jessica: Yeah, tell us about that because Miro, that's an interesting frontend. It's like a whiteboarding app, right? So it's very interactive and multiplayer.

Iris: It is very interactive. Yeah. That's one of the things that we are actually wanting to improve there, the frontend monitoring. I'm very new, so don't take everything for granted because I'm learning as I go. It's only been one month for me. But as far as I can see, because it's such an important part, we have a, let's say, completely separate or isolated part just for the frontend at the moment.

But the plan is to unify the whole thing together. Yeah, our biggest plan right now is to unify everything together and to put more emphasis on the frontend. But, to be honest, that's all my experience right now with the frontend monitoring. But yeah, I'm getting there, I'm getting there. I'll get there.

Martin: So I've got a question about the data side. You're building an observability backend for your engineers to be able to use. That's a lot of data.

Iris: It is.

Martin: And that's a lot of complex compute and all of that kind of stuff. How are you going about thinking about your volume and scaling and thinking about how you're going to make this work at scale? Miro is not a small thing, and when we talk about frontend observability as well on top of that, there's going to be a lot, a lot of data there. How are you going about thinking about that?

Iris: There is a lot of data. Let's say we are on a path where we need to make a lot of decisions regarding observability. Currently, of course, we have our own backend. We're using open source tooling: for example, we're using Jaeger for tracing, the Jaeger UI, and OpenSearch for the backend, and of course it is a very big challenge, so that's something that we are considering right now. Will it be able to keep up as we are right now, knowing how challenging it is to scale? It's the same where we're using Victoria Metrics, which is a great solution for us, but if the load of our metrics increases, will it still be able to go further?

Will it be worth it to do all this by ourselves or have the help of, for example, a provider that will do the processing for us? We do the information collection and everything that happens in the background, but then we have a provider that will do the processing for us.

So it's kind of like we are on that path, still deciding and seeing how everything is going. Right now we have managed to scale and everything is working smoothly, and of course we have our challenges day to day. You can imagine, it's like having 100 different technologies there.

But yeah, long term? It's a conversation that we're having, that we need to make some decisions, because the bigger the company gets, the more information we need to process and, of course, there's never enough observability information. Especially in tracing, you can always get more, and more valuable, information. So yeah, it's not something that we are taking lightly. A lot of decisions just need to be made.

Martin: So I suppose the question is there then, do you have observability for your observability? How are you knowing that you need to scale? How are you knowing that your customers, your customers being your developers, how do you know that they're having a problem? Have you tackled that yet?

Iris: Yeah. Well, we do have observability for our observability. What we have done right now is, for example, use Victoria Metrics for our metrics pipeline, and we are also running Prometheus alongside that, which is completely separate and just for everything that is ours. It doesn't have as much information; basically, if there is an incident, there is a lot less data there, let's say.

So it helps us prevent these major incidents, or that's how we know. For example, in some cases before, we were using Prometheus as the main observability tool; you cannot do that anymore, of course. So we were looking into solutions using OpenTelemetry and Thanos Ruler to send alerting to Grafana or to an Alertmanager. In my mind, it is always like that: if you have an observability system, you need something smaller on top of that to look at it because, yeah, otherwise it can be bad.

Martin: Yeah, it's who watches the watchers? And who watches the watchers who watch the watchers?

Jessica: And the coupling, how do you decouple your observability tools from production so that if production goes down, your observability doesn't?

Iris: That's a very good question. Usually we try to keep everything separate, so basically for the observability stack we have our own instances. For example, if something is happening, we will not be affected, and if something is down, we will have another instance spin up, a brand new observability system to take care of that. But it's not foolproof, of course. It can happen, but we try our best to keep it as separate as possible, because if you don't decouple it from other applications, yeah, we would be blind a lot of the time.

Jessica: Right. Yeah, at Honeycomb, of course we use Honeycomb to monitor Honeycomb, but it's a completely separate instance of Honeycomb and a different VPC and everything.

Martin: Yeah. And that one is then monitored by another instance of Honeycomb.

Jessica: Which in turn is monitored by production.

Martin: Yeah. And they're all heavily sampled because otherwise that would just be diabolical in terms of load and we'd get ourselves into some really nasty infinite loops of telemetry data.

Jessica: So sampling, that's a way that we deal with quantity of data and also with cost. How do you measure and control the costs of this whole observability solution? And you can speak from previous jobs if you want to.

Iris: Yeah. In my previous company, I could say more, because here at Miro I'm still not very familiar with the processes, but in my previous company we had a big FinOps movement regarding that. So everything that was run in a cloud environment had costs attached to it, and of course we had conversations about which team they belonged to. Of course, for the observability team, all the costs were on us, which is, of course, not ideal because there are other teams that are spending our resources.

But we were there, so basically everyone had access to this and we always acted with that in mind, that we have to keep the costs to a certain level. Of course the company had OKRs that said, "Okay, this year we have to save this much or we have to keep the costs optimal," so of course we acted accordingly. Every time we had a new technology that we wanted to introduce, there was a financial plan: how much we're going to spend, how much we're going to save, basically.

When it comes to the amount of data for tracing, we were using OpenTelemetry heavily, so we were using sampling, tail sampling mostly. I think that's how I learned about it, from an article by Jess from Honeycomb about tail sampling. She had mentioned that you can keep all the errors and then do some probabilistic sampling, and I really liked it.

When we implemented it, it was great. That covered traces; for metrics it was kind of more expensive, but we were always trying to see the needs of the teams for certain metrics, to see what was actually used on all of those nodes. That's pretty much it. I always tried to use the technologies that were cost optimized.

For example, for tracing we were using Cassandra, which was very expensive and very hard to maintain, and then Grafana Tempo, which uses S3 or some other kind of object storage; you can run it on Kubernetes and it is very scalable and a lot cheaper, so we were like, "Okay, let's go for that one. It's an amazing tool." So every day brings a different challenge, and every day a different solution, I would say.
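The tail-sampling policy Iris mentions, keep every trace that contains an error and probabilistically sample the rest, is normally configured in the OpenTelemetry Collector's tail_sampling processor. As a purely illustrative sketch of the decision rule itself, not the Collector configuration, in Python:

```python
# Toy illustration of the tail-sampling policy described above: keep every
# trace that contains an error, and keep a fixed percentage of the rest.
# In practice this logic lives in the OpenTelemetry Collector's tail_sampling
# processor; this standalone sketch only shows the decision rule.
import random
from dataclasses import dataclass

@dataclass
class Span:
    name: str
    is_error: bool

def keep_trace(spans: list[Span], keep_ratio: float = 0.1) -> bool:
    """Decide, after the whole trace has finished, whether to keep it."""
    if any(span.is_error for span in spans):
        return True                      # always keep traces with errors
    return random.random() < keep_ratio  # probabilistic sampling for the rest

# Example: an error trace is always kept, a healthy trace only ~10% of the time.
error_trace = [Span("GET /checkout", False), Span("charge card", True)]
healthy_trace = [Span("GET /health", False)]
print(keep_trace(error_trace))    # True
print(keep_trace(healthy_trace))  # True roughly 10% of the time
```

The important property is that the decision is made only after the whole trace has finished, which is what lets error traces be kept at 100% while healthy traffic is cut down to a fraction.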

Martin: Yeah. I think one of the problems with product strategies is that a lot of the time they don't factor in the cost of increasing the observability team's budget to cater for the extra data. It was exactly the same with data engineering teams, though that's obviously changed now: "Oh yeah, we're sending tons more data to you." "Oh right, we need another X thousand pounds to store all that data." "Oh, but we don't have that money."

And I feel that that's possibly what you're alluding to there, where we've got these products: new initiative, great, 72 new services, all of them generating 10,000 new metric data series, now we need 50 new servers for our Victoria Metrics clusters. "Oh, we don't have that money, you're just going to have to make it work." I feel like there's a bit of a turnaround coming with that, that people are starting to understand that observability needs to be a bit more first class, thanks to people like yourself who are really pushing that in the industry. But yeah, I get what you're saying.

Jessica: Well, there's a reason that when you outsource your observability, if you pay Honeycomb to store your data, we charge by event. Your observability team, do you report on which services are sending you the piles of data?

Iris: That's the thing. We have a rough idea, because you know your observability system, you know where most of the data is coming from, but we don't know exactly. That was one of my aggressive ideas, let's say: let's start charging teams for the information that they send. It's kind of difficult to calculate how much each team is sending, but I had the idea that it was going to make them more aware of the information they were sending, whether they really need it or not, and to actually optimize their instrumentation as well.

That, for me, would be ideal for an observability system, that you know exactly how much each team or each application is sending. For example, at the end of the day there is a conversation about, "Oh, observability is the main spender. Look how much money you guys are spending." And you're like, "There you go. Here's the paper for it."
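A report like that can start out as something very simple: count telemetry events by the team or service that emitted them, so the cost conversation comes with numbers attached. A minimal sketch in Python, with hypothetical attribute names rather than Miro's or any vendor's actual schema:

```python
# Illustrative sketch of a per-team telemetry volume report: count events by
# the team that emitted them. The attribute names and the in-memory event
# list are hypothetical; real counts would come from the observability backend.
from collections import Counter

events = [
    {"service.name": "checkout", "team": "payments"},
    {"service.name": "boards-api", "team": "boards"},
    {"service.name": "boards-api", "team": "boards"},
    {"service.name": "auth", "team": "identity"},
]

volume_by_team = Counter(event["team"] for event in events)
for team, count in volume_by_team.most_common():
    print(f"{team}: {count} events")
```

Even a rough breakdown like this is enough to show each team what their share of the data volume, and therefore the cost, looks like.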

Jessica: Right, right. If you have a report.

Iris: Yeah. It's a shared cost. We're not using it to play, it's your data that we're processing. I think that's something very important that needs to be understood, and if you have the data to back it up, it makes it easier for the teams to understand and to take accountability, and to actually cut us observability engineers some slack about our costs.

Jessica: Right, right. A lot of times you don't need a formal method to make the teams pay for it, if you just show them the numbers they can be like, "Ooh, yeah. That went up last month. Maybe I don't need all those span events," or whatever it is.

Iris: Yeah, exactly.

Martin: And it maybe means they come to you next time and say, "We're going to need to send some more data. Is that okay?" And at least you've got some prior knowledge.

Jessica: Which is way better than being surprised.

Iris: Exactly. That's why I'm a big fan of the FinOps that is spreading in all the companies right now.

Jessica: Ooh, tell me about that.

Iris: I have noticed, in a lot of companies where I worked in the past, and right now where I work as well, that there is a great movement to see where costs are going. So not only are the higher-ups and the management aware of what is being spent, but the teams themselves, every engineer, have access to the costs of their stack. It's not to pressure people, like, "Oh, look how much you're spending."

But it's good to know what you are spending, where you can optimize, where you can make things better. I really, really like that it's becoming a common practice in companies, because you spend like crazy for years and years and all of a sudden it's like, "Ah, okay. Now, with the crisis, the pandemic, you have to spend $4 million less." Those are just imaginary numbers, I don't know, it sounded like a lot, $4 million.

And you don't know, you're like, "What is this? How am I supposed to do that all of a sudden?" When you know and you're aware, you can take measures, know what is happening, and be full owners of your stack. You are not a full owner until you actually know fully what is happening.

Martin: It's accountability, isn't it? That idea that you are accountable for the cost, you can't just say, "I made this run really, really fast." "How did you do it?" "Well, every single customer has their own server." That's not okay.

Jessica: Accountability doesn't have to be, "Oh, your job depends on this or you're going to be punished for this." I like what you said, Iris, about just making it visible to people, and then they can balance, they can balance performance, which is visible because they have observability, against cost.

Iris: Absolutely. We manage costs in our day to day life, that's how I see it. We know how to manage, we know how to cut corners, how to make our lives better. Why not do that with the thing that we're doing eight to nine hours a day, which is our work, as well?

Jessica: Yeah, that's great. One thing that I love about observability and being able to look at production is finding out whether people are using the features that I made and how they're using them, and so you get the whole picture from what it costs to how it's working, to what value it provides.

Iris: Absolutely.

Martin: At Honeycomb we do a lot of that, coming down to even Lambda costs and being able to see the nanoseconds on a Lambda that we end up shaving off, which saves costs. It's almost like bragging rights, it's like, "I saved X thousand pounds because I optimized this Lambda." I've been in organizations where people have code golfed, I don't know whether you're familiar with that term, where they've got it down to four lines of code in this thing.

But it's not actually saved any money, it's not made customer journeys faster. Whereas saving masses of money actually becomes quite decent bragging rights in an organization, but only if you actually monitor it, which I don't think people used to do. Like you said, that FinOps movement has really pushed people towards it.

Jessica: Yeah. This is the beauty of cloud. When you have a data center you've got this big pile of resource and you're using some of it. But with cloud, the billing is specific, and serverless especially. You really do pay for the CPU second, which corresponds to some power usage and heat output, which is bad for the environment. So we really can optimize for things that matter, at least in some way. If you measure it, people will optimize it, so don't measure lines of code.

Martin: Except all editors measure them in numbers of characters, because you can only have 80.

Iris: In my previous company we had a newsletter from the FinOps team about the item of the month: for example, a team that had a huge cost saving initiative that month and did a good job. I was actually featured in one of them, and that was the proudest moment of my career, even if that newsletter only had like 100 clicks because, of course, it was internal.

Jessica: How did you achieve that? Can you tell us the story?

Iris: Yeah, absolutely. We did a rebranding, or a revamp, of the tracing, as I mentioned earlier. We removed Jaeger completely and introduced OpenTelemetry, and for the backend we removed Cassandra and put Tempo there, so we actually saved around 80% of our tracing costs, and we increased sampling for that as well.

I actually liked Grafana Tempo a lot more; of course, it's the open source version. I liked it a lot, it was very nice for the users as an experience and had a lot more features than the Jaeger UI, so it was a win-win for our developers and also a win-win on the cost side, so it was a very proud moment. It was me and an architect of the company who pushed the initiative, together with the whole team, and I just looked at it and I was so proud. It was a great moment that I want to have again.

Martin: I like the gamification of FinOps. Somebody will create a tool for it soon. They'll listen to this podcast and there'll be a tool that's on the market using AI next week.

Jessica: Yeah. Well, Martin, you posted in our internal channels the other day a screenshot of a dollar amount that you were worried about costing the company a lot of money, and I think our sandbox AWS bill for the last month was $1.21.

Martin: No, it was $1.01. Don't over exaggerate things.

Jessica: We can fix that. We can fix that, I'll spin up a Kubernetes cluster.

Martin: I mean, this has been enlightening to hear about that journey because I think that's a journey that a lot of people are on right now.

Jessica: Right. And the visibility at so many levels, from what's going on in your code to how much that is costing you.

Iris: How much you can do to improve it.

Jessica: Yeah. So Iris, if you had one wish, say for OpenTelemetry and observability tooling in general, what would your wish be?

Iris: Well, it might sound a bit controversial, but I wish OpenTelemetry had a UI, because right now it's amazing, I have no complaints, you can do everything with it: transport, collect, modify, transform your data, everything. But imagine if OpenTelemetry had alerting capabilities, had a UI where you could go and see your spans, your metrics, your logs.

Instead of being just a tool that collects everything and sends it somewhere else, it would also store it as well. That would be amazing to me, because I am such a fan of OpenTelemetry that I would immediately try to adopt it. I would be like, "Okay, I trust the other parts of it, so why not the UI part as well?" I don't know how realistic it is, and it's probably not designed to be that way, but it's a dream.

Jessica: So you want a frontend in OpenTelemetry?

Iris: Yeah.

Martin: Frontend, a backend, some sides. Just make it a big box.

Iris: To be able to fit everything, logs, traces and metrics.

Martin: And profiles, profiles is the next one.

Jessica: Yeah. User funnels in the frontend.

Martin: Yeah. We need all of it.

Jessica: There's so many ways to go.

Iris: The OTel moment hasn't ended, that's for sure.

Martin: Just getting started.

Jessica: Nice. Is there anything else you'd like to say to our listeners?

Iris: I'm always preaching about observability.

I just want to say, talk to your observability engineers, talk to your observability team, ask them for what you would like them to offer you. See what tools they're using, how you can benefit from them, because for me the biggest thing is that observability is not something done by just a bunch of people that are in one team, doing observability only and making these beautiful things. It's something that everyone needs to contribute to.

If you are thinking, "Okay. Observability in my application is really bad, I don't have anything going on," it's probably your fault too. I believe that every team should be an owner of their observability signals, of what they're sending, the alerts they're creating, the dashboards they're creating. Of course we're there to help, to provide the tools and guidance, but I think the teams are the ones that need to do their part and not always say, "Oh, observability, observability." Do your part, tell us what you need, give us feedback, and we'll try to do better.

Jessica: And instrument your code. Put the important attributes in there.

Iris: Yes, please.

Jessica: Yes, because you can provide the platform but not the data. Where can people find your work?

Iris: They can find me on Medium, or on LinkedIn, but mostly on Medium, yes. The best way to reach me is through LinkedIn; it's Dyrmishi, the same name everywhere. Yeah, I don't have nicknames yet.

Jessica: Excellent. Thank you so much for coming on the show today.

Iris: Thank you so much, I had a lot of fun.