Ep. #20, Smells Like ML with Salma Mayorquin and Terry Rodriguez of Remyx AI
In episode 20 of Generationship, Rachel Chalmers is joined by Salma Mayorquin and Terry Rodriguez of Remyx AI. Together they unpack Remyx's mission to democratize machine learning by addressing common hurdles like data scarcity and resource limitations. The episode also highlights the critical balance between automation and human judgment, emphasizing the need for AI to handle repetitive tasks while humans focus on creativity and decision-making.
Salma Mayorquin, Co-Founder of Remyx AI, is an expert in machine learning and is skilled in advising MLOps strategies for small teams to Fortune 500 companies.
Terry Rodriguez is a Co-Founder of Remyx AI, with over 8 years of experience in applying machine learning to various domains such as healthcare, robotics, and content recommendation.
Transcript
Rachel Chalmers: Today I am very excited to welcome Salma Mayorquin and Terry Rodriguez to the show. Salma is an expert in machine learning and skilled in advising MLOps strategies for small teams to Fortune 500 companies.
Some of her notable achievements include delivering technical guidance and demos to clients on leveraging Databricks for advanced analytics as a solutions architect and ML/deep learning subject matter expert; creating a smart mirror that uses image recognition to teach and correct yoga postures, which was featured in "Magpie Magazine"; and open sourcing Action AI, a toolkit for building lightweight custom action recognition models by coupling pose estimation and LSTMs, which gained recognition from NVIDIA's developer community.
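For readers curious what coupling pose estimation with an LSTM can look like, below is a minimal, hypothetical sketch in PyTorch: per-frame keypoints from an upstream pose estimator are treated as a sequence and classified into actions. The shapes, class count, and layer sizes are illustrative assumptions, not Action AI's actual architecture.

```python
# Hypothetical sketch: classify actions from sequences of pose keypoints.
# Not Action AI's actual architecture; sizes below are illustrative.
import torch
import torch.nn as nn

class PoseActionClassifier(nn.Module):
    def __init__(self, num_keypoints=17, num_actions=5, hidden_size=64):
        super().__init__()
        # Each frame is a flattened (x, y) vector over all keypoints.
        self.lstm = nn.LSTM(input_size=num_keypoints * 2,
                            hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, num_actions)

    def forward(self, keypoint_sequence):
        # keypoint_sequence: (batch, frames, num_keypoints * 2)
        _, (hidden, _) = self.lstm(keypoint_sequence)
        return self.head(hidden[-1])  # logits over action classes

# Example: a batch of eight 30-frame pose sequences from a pose estimator.
logits = PoseActionClassifier()(torch.randn(8, 30, 34))
```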
We're also welcoming Terry. After brief stints building surfboards in Hawaii, teaching Latin and ballroom dance, and catching king crab off Alaska, Terry turned towards his original passion, mathematics.
He concentrated in analysis, operations research, and scientific computing before discovering applied machine learning, where he has focused for the last 10 years.
Together, Salma and Terry are the founders of Remyx AI, which is committed to democratizing tailored machine learning by streamlining the process of AI customization and deployment.
Remyx aims to overcome traditional hurdles to AI adoption, including data scarcity, talent shortages, and resource limitations. Salma and Terry, welcome to the show. It's so great to have you.
Salma Mayorquin: Rachel, thanks for welcoming us. It's an absolute privilege to be here on the show and we're excited to chat.
Rachel: Before we talk about Remyx, can we talk about your machine learning consultancy, the wonderfully named Smells Like ML? What did you do there and which enormous AI companies recognized the work that you did?
Salma: Sure. I think Smells Like ML was really born out of curiosity from both of us to dive deeper into machine learning and machine learning applications, especially early on when things were just breaking in computer vision and NLP. We really expanded that blog to let us partner up with teams like NVIDIA and Arm on promoting some of their software and hardware platforms, showcasing AI applications.
Terry Rodriguez: Yeah, I guess we've been interested in using it as a vehicle to advance our own understanding and get experience in different industry problems and different data types.
And I think in our experience working with different groups and advising them on their AI initiatives, we commonly found a lot of the problems that you hinted at before: the difficulty of data wrangling, and even simple things like integrating all the software and environments together, packaging the ML artifacts, and building a system around that so you can be confident that the changes you're making, the investment you're making in the application, are moving in the right direction.
Salma: Yeah, I think it started even from a gap that we identified in the content that was available around AI and applications.
There was a lot of content in the beginner realm for folks who were just getting started and trying to wrap their heads around some of the basics, and some content at the very advanced level, but there wasn't a lot of intermediary content from people who were in the midst of figuring out how to build more complex applications and then sharing that. So that's where we found a place where we could share more of our journey.
Another gap that we identified was that there's a lot of information out there on how to use specific libraries and frameworks, but not a lot of content on the intuition for how to actually go about solving a problem with artificial intelligence.
How do you integrate it with other software systems? What kind of limitations do they have and how can you work around those to build a robust application?
Rachel: Yeah, I've run into that missing middle a lot. I'm a very bad programmer, and I've done a lot of the introductory level courses, and then you go to the next step and it's like, now build a distributed operating system for running rovers on Mars. And I'm like, there's a step missing here.
Terry: Something between "Hello, World!" and an arXiv paper.
Rachel: Yeah, exactly. Exactly. And I loved what you said about intuition, Salma, because something I've seen in the really productive creators that I've worked with in my career is this ability to do systems thinking, to take a step back from, you know, how do we write a program to do this and say, what problem is it we're really trying to solve?
And I think that's something that engineers acquire later in their careers: they're able to reason about larger and larger abstractions. I love the idea that what you're building can help people start to develop that skillset.
Salma: Yeah, I think that's the aim: encoding a lot of the intuition that was hard-earned over our years, and that I'm sure lots of peers like ourselves have bumped into and learned along the way. How do we build tools or systems, processes, platforms that encode more of that, so that more people don't have to go through the hard-earned lessons that we did?
Rachel: I am the only non-gamer in my household, so for my kids and my partner's sake, I've got to ask you about the work that you did with Riot Games.
Terry: Well, you know, at the beginning of the, I guess, explosion of excitement around generative AI, when Stable Diffusion was open sourced and the text-to-image applications were being explored, of course they were very interested in that: text-to-image, text-to-3D, text-to-video.
They were interested in synthesizing audio and voice. So for them it was interesting to explore how they could use their content catalog and provide tools that help their developers explore the fitness of AI for what they're building, and where the technology is moving.
So it was a really awesome experience to get to work on that breadth of problems, explore the limitations of the technology, and understand how to bring a technology like that to a creative audience.
Rachel: That's very cool. Tell my kids that what I'm working on is actually interesting.
Salma: Absolutely.
Rachel: How did all of this work that you did with Smells Like ML, all of this building and learning in public, inform your decision to take a bet on a startup and build Remyx?
Salma: I think it was, you know, sharing all of those lessons over time and trying to bring them to a broader audience. We've purposely put a lot of our work out into GitHub repositories and blog format instead of maybe a traditional arXiv paper, just because it has broader reach.
And I think that we could even take some of these lessons and package code and recipes that we've earned over the years and explore that further through a platform.
I think there's a way to get more developers onboard and accelerated through a collection of tools that are more batteries included.
Rachel: And we're seeing this strong desire on the part of app developers to retrain and incorporate more machine learning models. We've had Swyx on the show talking about the rise of the AI engineers. Is that something that you're seeing in your user base?
Terry: Absolutely. I think, you know, one of the most exciting things catalyzing all the momentum behind AI is that it seems more batteries included to work with foundation models.
But as much as that may seem to be true, especially early on when you're just exploring the fitness for AI in your application, you may have a demo working.
But as you go to scale that up, you run into the other problems and failure modes that you're going to encounter in a production deployment. It's important to use the same kinds of tools and technologies that went into the development of those models, and to borrow from the state of the art that existed before generative AI and foundation models became so popular, to give people a mental model, a framework, and tools to help them measure the changes and understand their effect.
You know, it's a notoriously difficult thing to assess goodness for AI applications. And so we've been able to lean into what generative AI has enabled to reduce that cost, whether by applying LLM evaluators or using synthetic data.
So there's a lot of space for a new kind of framework or platform that is oriented around the workflows that people are using, like transfer learning and parameter-efficient fine-tuning, to make these model artifacts really work for their application and really optimize them.
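As a concrete illustration of the parameter-efficient fine-tuning Terry mentions, here is a minimal sketch using the Hugging Face transformers and peft libraries. This is not the Remyx platform's implementation; the base model and LoRA hyperparameters are illustrative assumptions.

```python
# Minimal LoRA fine-tuning setup (a sketch, not Remyx's implementation).
# The base model and hyperparameters are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model = "facebook/opt-350m"  # hypothetical choice of base model
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# LoRA trains small adapter matrices instead of updating all weights,
# which keeps fine-tuning within reach of small teams.
lora_config = LoraConfig(
    r=8,                                  # adapter rank (illustrative)
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # attention projections in OPT
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all weights
```

The point of a setup like this is that only the small adapter matrices are trained, which is what makes iterating on a foundation model feasible without the compute and data budgets behind full fine-tuning.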
Salma: It's even similar to that idea of a missing middle ground that I think is ripe for exploration. There are a lot of tools that maybe are making it really simple, or a bit too simple, where there's little room for you to improve upon that system.
And then on the other side of that coin, on the very far end, you have, you know, research papers and folks who have tons of compute access and data access to be able to move foundation models forward.
But there is a middle ground that I think we could explore more, to allow teams with fewer resources and fewer full fine-tuning options, maybe using more parameter-efficient techniques, to iteratively improve their solutions beyond what maybe a ChatGPT API call could do.
Rachel: How cool would it be if these tools become something like the first affordable synthesizers and enable people to record the beginnings of EDM in their backyard studios and start a whole movement that way?
Salma: I'd love that.
Rachel: What are people using Remyx for? What are some of the coolest applications that you've seen?
Terry: Well, you know, at first we were the main users and one of the first users that we tell people about was a company called Teramina. They were looking to apply computer vision to a food production product.
And I think that was pretty exciting. More recently we've been building out more of the platform's features around supporting foundation model engineering workflows.
And so we're finding small teams with a data scientist, or I guess a non-tech company, able to pull off things that organizations like that really couldn't do before, because they have some of the best practices put together in a platform.
So we're finding people who are able to analyze the content that they're presenting to users. Maybe they didn't have all the resources that would've gone into content recommendation teams and pipelines, and yet, through using generative AI, they're able to assess the quality of the content and present the best content to their users.
So I think that's a pretty exciting, very general kind of use case, and it really highlights our thesis that there are a lot of mid-sized companies that don't have big data teams, but they of course want to use AI and they want it to be at a high production quality for their use.
Salma: I think it's an exciting prospect to see that thesis materialize in real time, where folks who were traditionally boxed out or blocked from being able to move forward are finding new ways of applying even the LLMs beyond just fine-tuning and using the model directly. Can we use it in other processes?
Just, you know, capturing metadata about the data lake that they currently have so they can get past cold start problems, I think, is super exciting.
Rachel: Yeah, say a little more about that, that food production use case. Because I know that's like beyond gen AI, it's a computer vision example.
Salma: Yeah, I think that's another example of where there's room for a combination, or a mixture, of generative AI and more classical machine learning applications, as in that food production solution.
They were trying to estimate yields off of really small fish or even shrimp. And the problem is tough because they want to allow folks to submit an image and then return a count of very, very small objects in the field of view.
Historically, this was something that farmers did manually, which took an entire day of, you know, manually counting all of these things. But through something like an object detector that is robust enough to detect very many small objects, you can get those yield counts in just seconds.
So that was an exciting thing that we were able to help Teramina deploy from just a handful of examples. And that's another thing: for these kinds of solutions, traditionally you would need a large data lake of labeled data, right?
And if you're trying to do that manually, that would take a ton of time. That would be all the work that those farmers were doing individually, now done digitally. But being able to jump over all of that work and build a model that's actually beating Teramina's competitors, who use specialized hardware, is super exciting.
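To make the yield-counting idea concrete, here is a minimal sketch of counting detections from an off-the-shelf object detector in torchvision. It is not Teramina's or Remyx's actual pipeline; the image path, confidence threshold, and choice of a pretrained Faster R-CNN are illustrative assumptions, and in practice the detector would be fine-tuned on a handful of labeled examples of the target organism.

```python
# Sketch: estimate a count of small objects in one image with a pretrained
# detector. Not the actual Teramina/Remyx pipeline; values are illustrative.
import torch
from torchvision.io import read_image
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
preprocess = weights.transforms()

image = read_image("pond_sample.jpg")  # hypothetical input image
with torch.no_grad():
    prediction = model([preprocess(image)])[0]

# Count detections above a confidence threshold (0.5 here is an assumption);
# a model fine-tuned on the target organism would replace the COCO classes.
count = int((prediction["scores"] > 0.5).sum())
print(f"Estimated count: {count}")
```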
Rachel: It is, it is. And this is one of the rare cases where I wish a podcast was a visual medium because you showed me that picture of these little organisms and I was like, I can't count those.
And it's such a great example of, you know, a lot of people are talking about this technology as replacing writing and replacing music. Music and writing are fun to do; counting tiny shrimp is not fun. Let's give the robots the boring jobs.
Salma: Exactly. I think even in creative work, we've had conversations with folks who are in video production about removing some of the repetitive tasks, like finding the clips that are most interesting on a reel that you want to go ahead and further edit down, right?
How do we allow more folks to do the fun parts of their jobs and reduce all the boilerplate, really repetitive work that's boring, right? That's the work machines are actually great at automating.
Rachel: And this comes back to what you were saying, Terry, about how it's hard to evaluate the goodness of something. That's what a human job should be: having taste, having judgment, and having a sense of the fitness of something.
Terry: Yeah. I think that kind of speaks to the heart of where people have been struggling to find fitness for generative AI. It's the excitement of the idea of being able to automate everything versus understanding what people are motivated by. They want to be at the interesting part of the problem. They want to be making executive decisions, and they want to feel like the decisions that they made mattered.
And so rather than being tied down with boilerplate and running down bugs or little technical issues, integrating different tools together... when we see people talking about this stuff in different channels, on Reddit, on Twitter, we see the kinds of decisions that they want to be focusing on.
They want to be thinking more in systems design. They want to be composing tools like this to get to the point where AI is providing value, and they're not bogged down by the minutiae.
Salma: Yeah.
I think hopefully we're moving into a world where AI helps us become more architects of our domains instead of mechanical turks of those domains. I think that's where more of the exciting work lies and it's definitely the more fulfilling, creative work to iteratively improve those systems.
Rachel: That's perfectly put. Let these agents give us more agency.
Salma: Yeah.
Rachel: If you're a regular software developer right now, how might you use a tool like Remyx to level up into using ML and maybe becoming an AI engineer?
Terry: Well, I think the way we've organized it right now, you can learn a workflow. It's learning which tools have the most leverage for improving your model, and making all of that transparent by integrating everything to where you can see the output in terms of evaluation and assessment, and how that changes depending on the input.
And through this platform, we're able to bring a lot of the key decisions or touch points into a different tool to support that part of the workflow.
So I think people would use it and understand what the machine learning workflow looks like: that it's an empirical science, that it's something you iterate on, and that there are certain tools that are higher leverage where you want to concentrate your attention. Things like data modeling and data curation are still some of the biggest parts of the problem.
When we're putting a tool like this out there, there are parts of the problem that are easy, kind of canned recipes that anyone could apply. They're not, I guess, special in the sense that we're doing something different from best practices, but I think it's through the integration and the opinionated design around a workflow that people will be able to get up to speed on how they can improve their AI applications.
Salma: I also think that as we continue developing the Remyx production agent, or essentially a guidance copilot, there are already ways we can present a workflow that we've reasoned through and put together, and show the user: based off of what you had in the conversation and the context you provided, we think these are the steps that we could execute, or that you could take, to go ahead and build that solution.
And I think we can encode even more conversation with that production copilot buddy to explain, you know, why the agent thought a specific game plan made sense for your application.
And then also have a conversation back, giving your input as you learn what matters most for your application.
I think we want to expand the Remyx Studio to be a place where you learn from your past experiments, where your coworkers, the other machine learning engineers or data scientists or software engineers working on a solution with you, can learn from those past iterations and then figure out a way forward based off of that information.
Rachel: So continuing with the Remyx and studio theme, the copilot is like your producer. He's your Steve Albini or Brian Eno saying, well, the Beatles did it this way, but Pink Floyd did it this way, so maybe you could try sampling this.
Salma: Yeah, exactly.
Terry: When we're thinking about how the app is built, it goes back to that point before about the future of work and where someone is going to want to spend their time.
It's in a place where they can collaborate. It's in a place where they have that institutional knowledge about what's worked in the past and where they have the tools to help them move the product as efficiently as possible in the right direction.
Salma: Yeah. A place where you could even brainstorm, too. There's been some conversation or talk that LLMs or generative AI aren't necessarily very creative, but they can bring more of the creative juices out of us by providing a kind of like a-
Terry: Rubber ducky.
Salma: Yeah, exactly. A rubber ducky or being able to bounce back ideas with it and then learn something new or explore a new area.
Rachel: A bad first draft that we can iterate on.
Salma: Yeah, yeah, exactly. It's very patient for that, so we should use it to our advantage.
Rachel: As founders with now years and years of ML experience, are you worried about the gen AI hype wave? What's keeping you up at night?
Terry: Well, I do think a lot of people are asking the question now: where's the value to business coming from? You know, the initial hype was about not being able to believe that an LLM could produce something so realistic, or that you could generate images from text, and now people are looking for valuable and practical applications of that.
For us, it's been a lot about understanding how these tools can unblock people on data access and data wrangling. That's still one of the major pain points to actually putting AI into production. And so that's how we want to address that challenge that still remains.
But obviously people's interest in and expectations of a technology can change and wane, and there'll be something new that people are excited about. I'm confident that our approach to this is well principled, and that it's going to be useful and valuable and enduring as an approach to working with machine learning.
So that's where we're focused. But I can see why there's a lot of, I guess, tension about where the value from AI is coming from. It's still pretty early in its story of making an impact in real systems and applications for people.
Salma: And I think as more companies are trying to figure out where generative AI could help them and actually impact their business KPIs, the thing that keeps me up at night is that we may be slowly forgetting the hard-earned lessons of the last 10 years, learned by the folks who were successful in deploying maybe more classical machine learning applications and models.
I would hate to see all that institutional knowledge get lost, with tools that maybe don't bring forward the people who haven't been steeped in it, because the technology is still very data dependent whether we like it or not.
And I think that's one of the best places of leverage to improve systems and get them to a point where they do make a difference for organizations. So I hope that we don't lose that knowledge through the next wave of generative AI, even though it's getting better and has more knowledge baked in upfront.
I guess I hope that the new generation of software engineers, for example, who maybe have fewer data priors and are jumping into AI applications, can expand their horizons into those fundamental lessons, because they are very useful. I would hate for them to be kind of siloed off into their own area where maybe they have fewer tools or less experience or knowledge to move these things forward.
Rachel: Yeah, for an industry that's all about information, we do sometimes have a very short memory.
Salma: Yeah.
Rachel: What are some of each of your favorite sources for learning about AI?
Terry: I think certain Reddit communities. You know, LocalLLaMA is a really popular one. You can find people from beginners all the way up to, you know, preeminent experts talking about how to do all kinds of things with foundation models.
So LLMs and VLMs and all of that, it's a hotbed of discussion around that. We learn a lot from seeing what people are interested in doing and what challenges they're running into. I'd say that's my favorite right now.
Salma: Generally speaking, even from the very early days, I found that the best way to learn is just by doing a project that's really interesting to you. Whenever I chat with folks who are trying to figure out a way into the industry, that's my number one piece of advice: instead of necessarily reading too many books, go ahead and find something that is, you know, really calling you.
I think, for example, the YogAI project that we did really early on was super interesting for us: can we figure out how to level up our own fitness journeys through something that is also interesting to us in AI, and see if there's a way to marry those two interests together?
I think that's a way for you to explore further and get past some of the areas that may be hard to get through when you're early on and still exploring or figuring out your bugs. It gets you motivated, and it's also easy to translate if it's something that matters to you or impacted your own life.
Rachel: That's awesome. Yoga is a good warmup.
Salma: Yeah.
Rachel: And Terry, I jumped into LocalLLaMA at your recommendation and it's really lively and it's a really high signal to noise subreddit for anyone who's looking for that.
Terry: Absolutely.
Rachel: Okay, so I'm making you guys god emperors of the solar system. You get to dictate everything that happens for the next five years. What does the world look like in the near future?
Salma: I think the world looks like we're more architects and designers of our domains rather than mechanical turks, where we have tools at our disposal, jam-packed with information and knowledge, that help us iterate over the systems designs we want to build, whether that be creative work or technical work or even social work.
I think there is a place to use this technology to remove more of the busy work that we all kind of dislike, and help us actually build things that are super interesting to us and iterate over those solutions. I think we get bogged down by some of the little details of implementation, and I hope to be bogged down by fewer of those.
Rachel: Sounds good to me. Build an agent to do my taxes.
Salma: Exactly. See, that's a great application for it.
Rachel: And my favorite question, if you had a colony ship off journeying towards Alpha Centauri, what would you name it?
Terry: Something like the Far-Ranging Composer I think would resonate.
Salma: Yeah, that was riffing on the music theme that we've been playing with through Remyx. I think emphasizing creativity and, you know, reaching for the stars, reaching far ahead, is where we want to aim our company and the efforts that we're building towards.
We want to encourage people to think outside the box, to explore ideas that maybe sound, you know, dumb in the beginning but could be wild and great, and then remove the constraints that prevent people from exploring those.
I think that we want to figure out ways and tools to help more people get past some of those hurdles and explore and experiment.
Rachel: I don't know if you've heard this, but there has never been a human culture discovered that didn't have some version of music and singing and dancing.
So one thing we can be absolutely sure of is if we do become a multi-planetary, multi-stellar civilization, there will be music.
Salma: Yeah. I hope to spread the space jams far and wide.
Rachel: Salma and Terry, it's always fantastic talking to you both. Thank you so much for being on the podcast.
Salma: Likewise. It's been a pleasure, and we hope to join you again soon. This has been fantastic. We really appreciate it.