Ep. #4, The Serverless Framework & AWS Lambda
In Episode 4 of JAMstack Radio, Brian and Ryan are joined by engineer David Wells who explains the Serverless Framework and automation using AWS Lambda. The three cover topics including potential pain points of complex microservices, advantages of event-driven architectures, and writing Kanye skills for Amazon’s Alexa. Plus a new round of JAMPicks.
David Wells is a full stack software developer at Serverless, where he gives developers the tools to build and operate serverless architectures.
Transcript
Brian Douglas: Welcome to another installment of JAMstack Radio. On the podcast we've got Ryan Neal and we've got David Wells from Serverless.
David Wells: Good to be here.
Brian: Way to address the crowd. So, David. I asked you to come on a month ago because I wanted to find out about your ideal serverless setup. Then, in between that, you actually joined Serverless, the team behind the framework. Do you want to explain what serverless is?
David: Serverless is this new paradigm of how to build out your applications. Instead of worrying about managing your servers, maintaining them and scaling them out, the serverless approach is basically that you can build out your application and deploy that to a service provider like AWS or Microsoft Azure or Google, and basically not have to worry about that stuff. That's the core of it.
Once that stuff is abstracted away from you, you're really just focusing on your application logic and the value you're actually providing with your app, rather than all the little tiny things you have to worry about with infrastructure, if you're managing that yourself.
Brian: I guess, to really crack that nut open, for lack of a better example.
David: Let's crack that nut.
Brian: So, it's truly serverless. You mentioned Lambda; I'm not actually spinning up a PHP server or an Apache server or anything like that to manage some sort of web form. How does that work?
David: You're still writing your code in your language of choice. Right now, Amazon supports JavaScript via Node, Java and Python. But yeah, you're basically not spinning up that server yourself. Amazon is actually running that code on demand for you.
When your function gets invoked, it will do magic behind the scenes, spin up a containerized instance of that, run your code and then shut down. So that's kind of how it runs.
At the end of the day, there is a server somewhere. It's just you don't have to think about it anymore. That's all abstracted away from you.
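For listeners who haven't seen one, here's a minimal sketch of the Node handler shape Lambda invokes on demand (the payload and message are just placeholders):

```js
// handler.js -- a minimal Node.js Lambda handler sketch.
// Lambda invokes this on demand; "event" carries whatever triggered it.
exports.handler = function (event, context, callback) {
  console.log('Received event:', JSON.stringify(event));

  // Application logic goes here; hand the result back to Lambda when done.
  callback(null, { message: 'Hello from Lambda' });
};
```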
Brian: Ryan, I have you on because you're a perfect complement to this talk about servers. You're our head of infrastructure at Netlify.
Ryan Neal: Yeah, I actually run all of our machines and deal with all this stuff that makes it look serverless for everyone. So, I guess the question then for me is how do you do long-term running jobs or persistent stuff in serverless architecture?
David: Right now there's a limitation on how long a Lambda function can run. The max right now is five minutes. I believe they're working on extending that out.
But if you're running something longer than that, the approach that people are taking is breaking down whatever that job is into smaller pieces. Or if you do need to run a longer job, you would still be running your EC2 instance or what have you.
Ryan: And then just put your own binaries out there, because that's actually what we do for some of our longer ones. We have EC2 instances, which are just running binaries. But a lot of them just need to be always present, and that's something that you couldn't do with Lambda.
David: When you invoke a Lambda function, it basically has a lifespan where it stays "warm" for around 10 minutes, I think. And there's ways you can keep the function warm. But after that timespan, it spins down, so there's no concept of state. It's like a stateless architecture.
If you do need state between Lambda functions or what have you, you'd be calling from a database like a DynamoDB or something like that. So that is a limitation. That's also another point to bring up, though.
For a lot of jobs that don't run that long, the way things are done now, instances are just running idle with that stuff kind of sitting there. Whereas this is like the flip-side approach where you're just running it on-demand and you're just paying for the actual execution time of that function.
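Because the containers are stateless, anything you need between invocations lives in an external store. A rough sketch of the DynamoDB approach David mentions (the "sessions" table and key names are hypothetical):

```js
var AWS = require('aws-sdk');
var db = new AWS.DynamoDB.DocumentClient();

exports.handler = function (event, context, callback) {
  // Look up whatever state a previous invocation stored for this session.
  db.get({ TableName: 'sessions', Key: { id: event.sessionId } }, function (err, data) {
    if (err) return callback(err);
    callback(null, data.Item || {});
  });
};
```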
Ryan: We have actually had full machines that are just kind of waiting for something to happen right now, or have really long, persistent jobs. How do you then deal with the performance implications? You were talking about keeping functions warm; is this not meant to be something that happens in user-facing time? For instance, submitting a form or something like that?
David: If your function hasn't been hit recently, there's this cold startup time which adds a little bit of latency. To keep a function warm, you basically have to ping it. That's one of the use cases of running a Lambda function: you can set it on a cron.
You can basically just set a cron to keep these functions warm. Typically they're used for asynchronous processes or data transformation or stuff like that. You can use it for a lot of different things, and we can jump into the use cases in a little bit.
Brian: Actually, let's get into the use cases. Because I'm really curious. Users of the Serverless Framework, what are they using in real-life situations?
David: The use cases vary tremendously. As we mention, you can't really do super long-running processes. But what you could do, if you have those long-running processes, is you can actually use the Lambda function to invoke the EC2 instance to spin up for you, run that longer-term process, then via the events of that EC2 instance, trigger another Lambda function to send a text message to you, "Process is done," or whatever. That's kind of like a use case there, automating DevOps stuff behind the scenes and acting off of the provider-specific events in your infrastructure.
Another good example of that would be, and this is a really common use case of Lambda, is if a user uploads a profile image, that will go into your S3 bucket. That S3 bucket triggers an event of, "There's a new item or new object in this bucket," that can trigger a Lambda function to resize the image and put it back into the bucket and then basically store that URI, send it back to the app to use. So that's one of the most common ones that we see.
Some other things that you can do are basically using it for a back-end for a web- or a mobile- or IoT-type of app. A really good example of this is a company called A Cloud Guru. Their entire website is serverless, meaning they use static web hosting, which you guys know a lot about, and their entire backend application is running via Lambda functions.
They're uploading video tutorials on running on different cloud providers, so they'll upload their videos into S3 buckets, those will get resized dynamically by their Lambda functions. And the actual user authentication, I believe they're using Auth0, I'd have to double check on that.
But basically, when you log into the app and are going around to the different pieces of content, that's all being driven by, I think they're using Dynamo behind the scenes and API Gateway to kind of wrap those calls. I'm not 100% sure on their tech setup, but I know that it is serverless and they're a really good example of that.
Like I mentioned before, you can also do cron stuff. So, let's say that you want to run a job every six hours, every 24 hours, whatever it is. Instead of having that instance just running there eating up cost, you could have it trigger via cron, with a normal cron statement, and just trigger that Lambda function. It runs for however long, spins back down, so you're not paying the, whatever, eight, 10 bucks a month for that box. Well, depending on the box size.
Ryan: I'm actually now going to do that. We have to clean up our Elasticsearch cluster. We can use that to do it.
David: That was actually the most compelling thing to me. When I first heard about the whole serverless idea, it really struck a nerve with me. I had a huge pain point. I built this app using Node and Express, went through all that stuff, and then I got to the point where I'm like, "All right, I'm ready to release this thing."
But now I've got to figure out how to set up a load balancer, how to basically have redundancies in the database server. Following this microservices approach, all these different pieces need their own server. Which is the, you know, I'm doing bunny ears, "the right way to build the app."
I really ran down this rabbit hole of how to do this without basically learning the entire world of DevOps.
And having this thing scale if it does hit, which it didn't, but that's a whole other story. So this serverless approach is like, I can have all these side projects or weekend projects that aren't costing me like 10 bucks a month each. Because that cost adds up really quickly.
Brian: You mentioned Node processes. I think of things like Slackin. If you're unfamiliar with Slackin: Slack is a tool for communities to talk to other people, almost like IRC but with a GUI interface, in case the listeners didn't know by now.
We had a Slack group that I created with some other people, and I built this Node service so that every time someone filled out a Typeform, it would take their email and invite them to Slack. It was a Node service that auto-invites them, using Slack's API.
I never had to go through the process of clicking "accept" on links. We had started this group, this was what the group was, was it "Kitten Gym"? No, it wasn't. A gym for kittens. So kittens can sign up through Slack. Just kidding.
David: That sounds amazing.
Brian: Anyway, this group that we had, we actually launched it, and we knew we were going to have a lot of people sign up in a short amount of time. We did not want to have to click "accept" for every single person we invited to Slack. I bring this up because
I'm thinking with serverless, I can create a Lambda function to accept these Slack invites, and use the Slack API to bring them into the fold, our community.
David: Absolutely. That's actually one of the kind of Lambda boilerplates that AWS has, like how to connect into a chatroom. Lambda launched, I don't know, probably two years ago now. And then Amazon came out with API Gateway, so you could basically put that REST API on top of the triggers for your Lambda functions.
So now everything's exposed. And you could basically, via a webhook or an API call from the front end, an AJAX request, trigger any Lambda function to do any kind of custom logic. I'm a full-stack JavaScript guy, so I write everything in Node. So you can import the Slack SDK and trigger whatever you want with these webhooks.
In the callback, you could either shut it down or give a 200 response or whatever, or trigger other Lambda functions. And that's what's hard about talking about the use cases of this, because it's really anything you can imagine, minus longer-running jobs.
Ryan: But even those you usually try to avoid in general, because when they crash, you have a problem, a real big problem. If you can break it down into really small, atomic functions, that's really useful. It's just a different way of designing things.
Brian: One thing that really sold me on Serverless was when I finally, I don't want to say bit the bullet, but finally sat down and did a tutorial and got it all set up on my machine. Before that, I only had experience with Lambda directly. Because I don't know if you know this, but Amazon Echo, the Alexa project, is all built on Lambda functions.
So all the different skills are actually Lambda functions. Whenever you say, "Alexa, tell me about water," she'll find the Lambda project that someone put up as a skill, and they can hit that. So when I actually made my first skill, which was a Kanye skill, it was a side project when they had announced that everything was open-source.
It'd be like, "Alexa, tell me about Kanye," and it would tell you an actual fact about Kanye, which, you know, is really intriguing.
Ryan: That's pretty cool.
Brian: I'm a Kanye fan.
Ryan: Same. He's a genius.
Brian: Amazon's GUI is really bad. I don't know if anybody's actually used it.
Ryan: The console is painful.
Brian: I mean, if they want to hire me for a lot of money to redo the UI, call me. My email is brian@netlify. But anyway, their UI is horrible. And it's really hard to figure out how to get these Lambda functions working and how to connect them.
At the end of the day, all you want to do is upload your .zip file and your JavaScript and get your endpoint. And with Serverless, you can do that without even logging in. Well, you have to log in to get your keys first to set it up, but once you get that done, I can get all my Lambda functions working.
I can get it, if I wanted to continue to make Kanye skills for Alexa, I can. I don't have to struggle half a day trying to figure out why Lambda is not working at a certain point and I can't debug certain things. That's basically what I went through.
I went through a Saturday night trying to figure out why my Lambda function wasn't working. It wasn't being tested properly, and I just finally gave up because I don't want to deal with Lambda and try to figure that out.
David: That's exactly why Austin created the framework, because he was feeling that pain. Basically, if you're going through the AWS Console, via their interface, you have to upload that zip and configure API Gateway. You have to go in, set up all the requests and responses, and there are just a lot of things to wire together.
Whereas with the framework now, you can just basically say, "Okay, I have this function that runs at this endpoint," then you type "serverless deploy," and it wires all that stuff up for you. And then you can version it and roll it back and stuff like that.
Brian: Versioning it and rolling it back is amazing. Because I'm not even sure they have that on Lambda functions, if you can go back to your previous incarnation of your code.
David: I think it's stored in an S3 bucket, the previous versions. There's also this concept of aliasing functions, and that's one of the things that the framework helps a lot with as well.
So if you're deploying to a dev environment, you can deploy to that dev alias, and that endpoint will be the same. But for prod, it's like your-api/prod, or whatever the endpoint is on the API.
So it's pretty easy to keep those things separate so you're not deploying perhaps-broken code to a production environment. And we just released the beta: Version 1, Beta 2, today.
Brian: Of Serverless?
David: Yeah, yeah.
Brian: That's cool.
David: We're doing, I think it's every two weeks, we're releasing a new version. We're on this cadence right now.
Ryan: The Mozilla cadence, that release cycle. That's a quick release cycle.
David: Yeah, it's moving fast. The open-source framework, we're about to hit 10,000 stars.
It's pretty insane, the community that we have behind this thing. I was stoked to be a part of it.
Brian: So all your code for Serverless is open-source?
David: Yeah, the Serverless framework is completely open-source. You can dig in there, do a pull request. It's also a pluggable architecture: if there's something that the framework doesn't handle out of the box right now, we basically have hooks so you can build that yourself.
So, a good example of a plugin: somebody built an optimizer plugin that basically minifies and removes dead code so your Lambda executes faster. There was another one to help you manage secrets, like secret API credentials and stuff, across teams.
Ryan: The Serverless framework handles handing out tokens and making sure that that's also committed to your Git hooks?
David: I'm not too sure on the specifics of the secrets plugin, that was for Version 0.5. But I think the guys that made it are going to do it for Version 1.
But yeah, another example of a plugin: we added CORS support. So when you deploy an API and endpoints to API Gateway, you have to turn on CORS for it to actually be accessible from a front-end interface. That's something that we just added in through this plugin model.
Brian: And what's the code written in, for Serverless?
David: It's all in Node. So you can get in there and check it out.
Brian: I'll try to break things.
Ryan: I would just look at it, from a far distance. Like, I assume it works. I'm not a JavaScript guy.
David: The other thing that we're focusing on in Version 1: When Serverless first came out, it used to be called JAWS, the JAWS framework, with a shark logo.
Brian: I remember that.
David: Which I believe was for JavaScript AWS or something like that. But we soon realized that all these other providers, Google Cloud Functions, IBM, Microsoft, all have these pay-per-execution, function-as-a-service kind of things coming out.
The idea now is to connect with all those. We have a pull request right now, from the Microsoft Azure team and IBM OpenWhisk, so you can use the Serverless framework to deploy to those as well.
Ryan: That's really helpful, avoid lock-in with AWS and stuff like that.
David: That was one of the main things. The other benefit there is, I was talking about this with Austin the other day, is some providers might not have certain services that others do. For example, Google has a voice API or an image-recognition API that you can tie into.
So if you had some of your infrastructure on Amazon and you wanted to tie into those vendor-specific APIs, you don't need to have everything running in one place.
You can deploy this over there and this over there and glue them together with this event-driven model.
Brian: That's nice to be opened up. I mean, JAWS actually sounds like a really cool name.
Ryan: I actually liked that, but I guess "Serverless" is a better descriptor.
Brian: Serverless makes sense, especially if you want to add other providers. You don't want JAWS or whatever attached to Azure at the end of it.
David: I can't remember exactly why we changed it. It might've been a legal thing. Like, JAWS with a shark as a logo? Steven Spielberg came knocking on our door.
Ryan: That's not a phone call you really want to get.
Brian: Well, he's not doing anything right now. He's watching his properties, making sure no one's making any E.T. frameworks.
David: We're thinking about renaming it Avatar, but yeah, that also is a no go.
Brian: With open source, let's talk about the Serverless community. Now, you guys have pull requests from other people adding other architectures to be working with Serverless. Do you guys have a pretty strong community? 10,000 stars?
David: We have a ton of people in our Gitter chatroom talking. It's very hard to keep up with, to be honest. There are so many pull requests.
Our team is distributed so we have half of our team in Europe, we have a guy in Japan right now and four people here in San Francisco. I'll wake up at, you know, eight in the morning, and there'll just be this huge long list.
Because I'm watching the repo, obviously, all these pull requests, all these new issues, either feature requests or discussions around how the framework should be formed. It's pretty amazing to see.
That CORS functionality, that was contributed by the open source community. That wasn't our core team doing that.
Ryan: Does that mean your core team now just spends a lot of time reviewing PRs, time they could spend coding themselves?
David: We have a couple guys working full-time on the framework. And then we're also developing some commercial products around it as well. We're a venture-backed company. We do have some products in mind to throw into the ecosystem as well.
Brian: Is the framework your only product at the moment that you talk about?
David: Yeah, that's our only product right now, totally open-source and free.
Ryan: With the open-source community, you've got a lot of contributions, great PRs coming in. How do you both vet the PRs and also share the roadmap of where you guys want the framework to go?
David: Our CTO, Flo, would be better at answering this. But basically, every single PR that comes in, there's typically a discussion thread that usually gets pretty long with people throwing in their ideas.
A good example is we're introducing environment variables in your Serverless functions. So there's a pretty long thread, both from the core team and from people in the community, on how that syntax should look and what different use cases we should support.
There'll typically be a pretty long discussion, and then I think it's Flo making the calls. But again, it's also the community of who's using this, and what their use cases are. It really ranges, too, who's using it.
We have everyone from hobbyists, "Oh, I need to have a contact form on our site," Kanye apps, custom Alexa functions, whatever, to Nordstrom using the Serverless framework to do heavy-duty prod stuff. So we kind of have to run the gamut there.
In terms of roadmap stuff, we have milestones on the GitHub repo, so we strive for those milestones. It's interesting because, I wouldn't say it's a brand-new space, because Lambda came out like two years ago. But it seems like it's finally hitting and people are starting to be like, "Oh, what is this thing?"
And all these other providers, a good example is, Microsoft Azure didn't have this function as a service product at the beginning of this year, and I think they started working on it in February. Now it's launched or whatever, but
people are waking up to this lower-cost way to do things and event-driven models.
Brian: But with an event-driven model, you mentioned earlier that microservices architectures have the same issue. Is there any way to wrangle them? It's notoriously hard to figure out monitoring, or how many are executing, or whether something is really popular or not. Is there any stuff built into Serverless or Lambda that helps you with that?
David: That's one of the biggest challenges with this. A good example, and Austin always brings this up, is Netflix. Netflix, they have, you know, 1,000 different microservices or whatever. I don't think they're using the framework now. Maybe.
But let's imagine we take those 1,000 microservices and break them into smaller pieces of those functions. Now you have this pretty complex system. So that's actually one of the biggest pain points. That's one of the products that we're developing, a monitoring and metrics tool around that.
The other side is, how do you actually visualize that architecture of what the map looks like? I have all these functions, what is triggering what? And in what order and where are the errors happening?
Ryan: Yeah, when do they break.
David: Exactly. That's definitely something that we're working on.
Ryan: That's something that's not an easy problem to solve, for sure.
David: Right. But wouldn't it be so cool if you could see this map? You're just like, "Oh, what is this service doing?" And then you just see the map of every piece of architecture that it's touching and how it's flowing and the error rates on each thing.
Brian: Lambda does track how many times a function has errored and stuff like that. So I wonder if, maybe down the road, Serverless can talk to those APIs and have some sort of dashboard.
David: There's logging with Lambda functions via CloudWatch, and that's a free thing from Amazon. And you can pull it in, with the CLI, with the Serverless CLI, and you can actually tail it in your CLI.
When the function's running, there's a lot of console.log-ing and stuff like that. You can watch the logs from the command-line interface, which is a little bit nicer than going through the console in AWS.
Ryan: Anything to keep me out of the AWS Console. That's our goal.
Brian: If someone wanted to start using Serverless today, what would be their first step?
David: The first step is "npm install serverless -g." Install it globally, and you're off to the races. If you go to the GitHub repo, github.com/serverless/serverless, there's a "Quick Start" section where I recorded a video which I'm not too crazy about now, because I just rewatched it and I start the video out like, "Hey, everybody!"
Brian: We can start a podcast that way.
David: That'll walk you through basically getting set up with your first function and deploying that to a live API endpoint, which really only takes about 30 seconds.
There is a longer window for AWS CloudFormation, because we're using CloudFormation under the hood to spin up the infrastructure and set up API Gateway. So that takes about three minutes right now; that's kind of something we can't control.
Then you basically have this live API endpoint that, by the way, is infinitely scalable. You can throw as much traffic at it as you want, and it's going to stay there. That, to me, was basically the aha moment, when I saw the actual demo of the CLI at the AWS Loft a couple of months back. Austin was doing it, and I remember getting tingles. Like, "Oh my gosh, this is amazing."
Ryan: The potential.
David: And then I started stalking him until he hired me.
Brian: Well, that's one way to get a job. Well, I think this was a good conversation. I think it's a good intro to Serverless, and hopefully the listeners have a good idea of what they can do with it. Hopefully they can reach out to you.
But before we get into how to contact you, I wanted to jump into Picks. Basically these JAMPicks are anything that keeps you going, things that you wanted to share with the listeners about how awesome something is.
I'll go first. My JAMPick is actually an HBO show called "The Night Of." It was on my radar a little bit when Game of Thrones was on, they had showed this trailer. And I jumped in. Actually, I have to give respect to my wife. She jumped into it first and then told me about it.
She gave me a really bad sell on what the show was about. And I was like, "Oh, okay, I'll watch it." And I watch it and I'm hooked. It's like basically the new Game of Thrones for me. I'm there every Sunday, ready to watch it. Highly recommend "The Night Of." I won't even tell you anything about it. I think you should just watch the first episode.
Ryan: Just go on Netflix?
Brian: Actually, HBO. David, do you have a Pick?
David: I was going to say Stranger Things, but it was already taken earlier. So my pick, honestly, you guys aren't paying me to do this. But my pick this week is Netlify.
I'm rebuilding the Serverless site on Netlify, and I must say the integration with Git is pretty phenomenal.
I'll just push my changes up to master and then, within like 30 seconds, it's on the live site. I haven't even tried the different branch setup yet, but I'm pretty excited about that.
Again, this was not paid. I'm doing a talk on this in two weeks, and I'm going to walk through how I built it. I built it using Phenomic.
Brian: A talk here at SF?
David: Yeah. I think it'll be recorded. And then the other one I wanted to shout out: as well as the vibrant community submitting PRs to Serverless, there are also a ton of Serverless consultants out there.
One of them, they're called Trek10, they just did a webinar on, basically, Serverless architectures. It's like 50 minutes long but it's super solid in terms of what you can really do with it.
They go into a lot more use cases, so I wanted to just shout that out. I just watched it the other day, and it was pretty good. And then Boosted Boards, because I want to get one, the electric boards.
Ryan: They're a lot of fun to ride. Actually, one of our designers had one, and I got to borrow it a little bit. It was a little terrifying to take through the city at times, but fun still.
My Pick? Lately, I just got back from Yosemite, so I've just been going through images and tracking what different national forests I can go to now, because I'm kind of hooked on it. Going out to Wyoming, hopefully. After I fix all the build issues.
Brian: Make sure you squash some bugs before you go and disappear in Wyoming.
Ryan: Exactly. Absolutely no cell signal. That's my Pick.
Brian: David, if they want to contact you to find out more about Serverless, where can they go?
David: Go to Serverless.com. I actually don't think we have a contact form there; there will be one on the new site. Email me at david@serverless.com or just tweet at me @davidwells on Twitter. I'd be happy to point you in the right direction for anything you're looking for.
Brian: David, thanks for coming onto the podcast. Hopefully you can continue to spread the JAM.