Ep. #19, Measuring Security with Allison Miller
In episode 19 of The Secure Developer, Guy meets with Allison Miller to discuss the ways technology and security have intersected throughout her career.
Allison Miller has worked as Program Chair for the O’Reilly Security Conference and as Product Strategy, Security for Google. She has also served on the boards of KeyPoint Credit Union and (ISC)².
Transcript
Guy Podjarny: Hello everybody. Thanks for tuning back in. Today we have an awesome guest with us that I've long wanted to bring on the show, Ali Miller. Thanks for joining us, Ali.
Allison Miller: Thank you for having me. Very excited to be here.
Guy: So Ali, you've got a long and interesting history in the world of security. Can I ask you to give us some context? How did you get into security? What's the short story of your life here in the security world?
Ali: Absolutely. I think I have tried a few times to figure out where I got bitten by the initial bug. Every time I think about it I start going back further and further into childhood, where that paranoia or interest in protecting things came from. But I largely got interested in it in college.
I was studying finance and economics and I was interested in e-commerce, which was just starting to grow. So how technology was going to be applied to the needs of business became interesting to me. No one around me knew where it came from, but something in me said, "That's going to go wrong," or, "That is going to get exploited."
I became really interested in the security implications, the economic implications of electronic money. And of commerce, and transactions, and payments. I was very interested in that in college. There was nothing to research. These were the days of e-cash. This was long before cryptocurrency became a thing. So that's where my interest gelled, but then I didn't know how to do that for a living.
It was more of an academic curiosity than a particular idea of what I could do for work. So I ended up going into IT. I used the Information Systems and Decision Science part of my degree, and left the Business Economics and Criminology aspects of what I studied to the side for a little while.
Within six months I got a reputation as the girl who asks all the weird questions about cryptography and security.
When the company that I worked for decided, or rather realized, that they needed a more specific security strategy and that they were going to build a security department, I became the first hire into that department and was able to help build that practice from the ground up in partnership with the CEO and a couple of other folks. And then I took the show on the road and decided to see what other opportunities I might be able to find.
Interestingly enough, I ended up at Visa, which was one of the places that I had thought was interesting from the outside, and from an academic point of view, because that's where payments were happening. They looked at my resume, and while you very kindly described it as interesting and lengthy, when they saw it they just thought it was weird and they wanted to chat with me.
That's how I moved out to Silicon Valley. Not to work for a Silicon Valley startup; perhaps to work for one of the original Silicon Valley startups, which was Visa. And then ended up going to other .com--
Guy: Environments, yeah. You went on from there and this got you into the risk category. We'll continue a little bit along this journey. But what did risk encompass in those days? What type of activity falls under that mantle?
Ali: My career has been a jungle gym. Swinging from one side to the other. I went to Visa to work on technology risk. They were interested in using new technologies and making payments work. Things like making payments work online, chip cards, mobile technologies. And my job was to help them figure out how to employ the new technologies safely.
I very quickly became interested not just in the technology implications, but in how the design of the financial products themselves created or resisted risk and exploitation.
And so I moved from technology risk to product risk. I don't know if you noticed, but compared to, say, a startup, Visa moves a little bit slowly, so at some point I had actually gotten through and risk assessed all of their new and emerging products. I realized I didn't have anything new to work on, and that perhaps I wanted to go where there was a lot of risk being dealt with directly. And that's what led me to make the jump over to PayPal.
At PayPal, the game of risk was very different. It wasn't about, "Allison's going to go in and take apart the design of this product and figure out where all the weak spots are." It was all math and statistical modeling: anti-fraud technology as played by large banks, who had been developing those modeling techniques and capabilities for years and years.
So it was quite structured, and required that I learn on the job fairly quickly how to use tools like SAS, which is similar to SPSS, and R, and those types of things. It was a very different approach from how I had initially thought about understanding how risk works, which was from that technical point of view, where you're deconstructing a design and figuring out where there are flaws in it.
To go from that all the way over to the pure math approach. But I did that at PayPal for a while and learned a lot. I worked in that anti-fraud risk function for many years at PayPal, and then went back to the technology side, helping design how mobile authentication mechanisms were going to be incorporated into the login flow.
For example, using 2FA and getting all of these different factors of authentication. Adding in the type of identity validation that we needed to do for new account signups, and other technology that we were bringing to bear to protect the accounts or to reduce the fraud risk. That approach to technology.
I was working a lot more with engineers. I was working as a product manager in some contexts. I was also still doing a lot of analysis and working with the modeling team, but I then shifted away from the math and back more squarely into the technology. Since that experience, using data has infused or informed almost every job I've had, in some form or another.
I've worked a lot on detection technologies, and the underlying math of that is often similar. I have continued working with engineers in every role that I've had subsequently, because for the risks I've been battling, the controls get embedded in the fabric of whatever the platform is.
At PayPal it was fairly straightforward. You're either trying to prevent a fraudulent transaction or you're trying to prevent someone from logging into someone else's account. Those were the primary risks that I was looking at. But every platform has their own version of that.
Communication platforms deal with spam. Gaming platforms deal with cheating or griefing. And then in advertising you can have bad actors trying to get ads into the system.
Or use other people's accounts to put ads into the system. Account security has also been a common thread. So those are, I guess, the broad strokes of my career: I went from the economics to the technology, to the technology of economics, to the economics of technology, and back to the technology. And there's still economics in it, plus data. It's just amazing how everything is interconnected. When people look at my resume their reaction is, "Wow. You've had a weird journey." But it all makes sense in retrospect.
Guy: I think the acknowledgement of the combo of data and engineering, and making data-driven decisions, is something that permeates more and more of the world of machine learning. The notion of using data for risk assessment was an early player there. It doesn't matter if you call it ML or AI; it's driving technology to act on the data. At the end of the day I think you're quite right.
Ali: And in a fit of pique I've said, "ML is just a fancy name for statistics." I caught a lot of heat for that. Yes, I know the difference between the different algorithms and supervised versus unsupervised, but at the end of the day a lot of folks end up realizing what works best. And that's the trick. The trick is, you're trying to optimize the performance of this decisioning system. Sometimes the ML helps you, sometimes a neural net might help you.
But in a lot of cases you're just going to end up with a rules engine with a bunch of heuristics, and no matter how wonderful the Bayesian learning network is, sometimes you're just going to end up with a logistic regression model. It's okay, folks, it's okay.
Use the tool that helps you best get to the optimization level that is your horizon. Push the envelope. It doesn't matter which tool you use. No one gets bonus points for using a cooler algorithm.
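[Editor's note: to make the tradeoff concrete, here is a minimal sketch in Python of the two approaches Allison contrasts: an auditable rules engine and a logistic regression score over the same transaction. The feature names, thresholds, and weights are invented for illustration, not taken from any real system. Either one can drive the same decision; the point is to pick whichever best optimizes the system.]

```python
import math

# Hypothetical transaction features; names and values are illustrative.
txn = {
    "amount_usd": 740.0,
    "account_age_days": 2,
    "ip_country_matches_card": 0,  # 1 if the IP geolocation matches the card's country
}

def rules_engine(t):
    """A bunch of heuristics: each rule is a simple, auditable condition."""
    if t["account_age_days"] < 7 and t["amount_usd"] > 500:
        return "review"  # big spend from a brand-new account
    if not t["ip_country_matches_card"]:
        return "review"  # geography mismatch
    return "approve"

def logistic_score(t):
    """Logistic regression with hand-tuned (hypothetical) weights.
    A real model would learn these from labeled transactions."""
    z = (-3.0
         + 0.002 * t["amount_usd"]
         - 0.05 * t["account_age_days"]
         - 1.5 * t["ip_country_matches_card"])
    return 1.0 / (1.0 + math.exp(-z))  # fraud score between 0 and 1

print(rules_engine(txn))              # "review"
print(round(logistic_score(txn), 3))  # a probability-like score, ~0.17 here
```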
Guy: Indeed. You just need to sound impressive, and I think that's pretty much the way it is; that's where the term "machine learning" kicks in. You've done a lot of these different risk assessments. Is there any low-hanging fruit? If you think of somebody building a solution, and granted, this is probably a very broad question, but a solution that has a large volume of transactions.
Are there early suspects for reducing risk? Have you seen some trend of, "If you only did this, you'd have an initial hit at eliminating noise," or at catching the most blatant abuse? Something equivalent to the input validation of web attacks, but for risk reduction, or bad transaction reduction?
Ali: Well, I guess it really does depend on the system. But I suppose there are a few rules of thumb, or things I would recommend to folks who are starting out. You have a platform, you deal in something, so you may have someone figure out how to exploit whatever your version of a transaction is. Instrument the heck out of everything. In a sense, that is the way these decisioning technologies work: off of telemetry, or what would be considered telemetry.
To give you an idea of what to instrument: in my head I'm looking at these horrifyingly long tables. That's what I'm imagining. When an event happens, you have a timestamp, who attempted it, what they were attempting, what the result was, etc. That's how I think. I know that not everybody is using relational databases; thank goodness not everybody is using relational databases for everything. But in my mind it looks like logs and log files.
You want a record of what happened, so that when you start to build these decisioning technologies you have data, where you can start to process and look for what you then learn are unusual behaviors in transactional systems.
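[Editor's note: a rough sketch of what "instrument the heck out of everything" can look like in practice: one structured record per event, appended to a log that decisioning systems can later learn from. The schema and field names here are illustrative assumptions.]

```python
import json
import time
import uuid

def log_event(actor_id, action, result, log_path="events.jsonl"):
    """Append one structured event record: who did what, when, with what result.
    Field names are illustrative; real systems capture far richer context
    (IP address, device, session, counterparty, amounts, etc.)."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "actor_id": actor_id,
        "action": action,
        "result": result,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Record a login attempt so later decisioning has data to learn from.
log_event(actor_id="user-1842", action="login", result="success")
```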
If we're talking about something like payments, or even to an extent communications, there are a couple of places where risk tends to cluster or bundle. One is when you have newness, for example a new account. You don't know much about it. It could be real, it could be a bot. So what a brand new account or actor does initially, and what you let it do on your system, there's an interesting place there.
And then when you have accounts, or features or processes that you've had for a very long time, and suddenly they're doing something new? That's also a place to look for risks. So, that's very abstract.
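[Editor's note: abstract, but it translates into very concrete checks. A hedged sketch of the two "newness" signals just described, with hypothetical thresholds and field names:]

```python
def newness_flags(account_age_days, action, seen_actions, new_account_days=30):
    """Flag the two clusters of risk described above. The threshold and
    field names are illustrative assumptions, not recommendations."""
    flags = []
    if account_age_days < new_account_days:
        flags.append("new_account")        # could be real, could be a bot
    if action not in seen_actions:
        flags.append("first_time_action")  # an old account doing something new
    return flags

# A three-year-old account suddenly making its first withdrawal.
print(newness_flags(account_age_days=1100,
                    action="withdraw",
                    seen_actions={"login", "deposit"}))
# ['first_time_action']
```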
Guy: Yeah. But super practical I think. I feel like the first recommendation you gave is very DevOps in nature, "If it moves, measure it. If it doesn't move, measure it in case it moves." Very oriented at accumulating data so you can later establish right from wrong. And I find newness to be fairly concrete. Not very abstract, but very clear cut when a new entity gets created and a new action is done for the first time. That's when you scrutinize.
Ali: An example that I like to use: I worked with Skype fairly early on, before it was owned by Microsoft. Folks forget it was owned by eBay for a couple of years. When Skype first started up and was offering voice over IP, many folks might not have thought about the fact that VoIP minutes were highly monetizable and very attractive to fraudsters.
Fraudsters being folks who use other folks' credit cards to buy things. And Skype did something very interesting. Providing voice over IP is not expensive; it's pretty cheap, which is why they were doing it. But it wasn't free, interestingly, because there were telecommunications providers that had to be paid for connecting calls and things like that.
So what Skype did is, if you were a new paying customer, meaning you wanted to make calls from Skype out to phones or accept phone calls in, which was the paid portion of the service, you were allowed to pay for £15 worth of calls in the first 90 days that you were a subscriber. It was very specific, and there was a reason for it.
Which was: if you had stolen someone's credit card and you had maxed out your £15, then they expected that they would receive a chargeback from the legitimate cardholder's issuer within 90 days. At least at that time, the average chargeback return time was somewhere around 45 days, so they figured they would get 80-90% of the chargebacks in within that window.
If you were a normal, innocent customer, like most folks are, maybe you wanted more, and you would complain, "Why can't I have £30? £15 is not enough for me to make all the phone calls I want to make." Those customers were very annoyed. But the annoyance of the good customers was just something they had to risk, because the attraction of the system to fraudsters was such that they put in that really strict, draconian measure.
Because new customers and new credit cards to their system, those were the riskiest ones.
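[Editor's note: the Skype policy described above amounts to a hard exposure cap keyed to the chargeback window. A minimal sketch of such a control, using the figures from the story; the function and parameter names are hypothetical.]

```python
from datetime import date, timedelta

CAP_GBP = 15       # maximum paid usage for a new subscriber
WINDOW_DAYS = 90   # roughly covers the expected chargeback return window

def allow_purchase(signup_date, spent_gbp, amount_gbp, today=None):
    """Return True if the purchase fits under the new-customer cap.
    After WINDOW_DAYS, most chargebacks for a stolen card would already
    have arrived, so the cap no longer applies."""
    today = today or date.today()
    if (today - signup_date).days >= WINDOW_DAYS:
        return True  # account has aged past the risky window
    return spent_gbp + amount_gbp <= CAP_GBP

# A 10-day-old account that has already spent £12 tries to buy £5 more of calls.
signup = date.today() - timedelta(days=10)
print(allow_purchase(signup, spent_gbp=12.0, amount_gbp=5.0))  # False
```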
Guy: Yeah. This is risk, and we got into it, and you've done that whole gig. The current leg of your journey is a little bit more on the security engineering side, right?
Ali: Yes, that's right.
Guy: Is that indeed a transition? How is that a transition? What's this new world, and what made you try it out?
Ali: I think yes. And a lot of folks, when they think of me they think of fraud. That's an interesting distinction.
Guy: It's not a bad association if you can dismantle it, you know? If you're not the fraudster yourself.
Ali: So, to put it into context, I want to mention what I had been doing just prior to this new role that I'm in, which is that I had been working as a product manager. I guess that's how I describe it: I was doing strategy for some of the engineering teams working on security at Google. Specifically, I had been working with a few teams on things I can't necessarily talk about, but I had been doing a lot of work with the Safe Browsing team.
The Safe Browsing team. The public version, what people see, is: "Safe Browsing. They're the ones who make Chrome show a red warning page if there's a phishing link or a malware link that I just clicked on." That's the Chrome experience it creates. But interestingly, all of the major products at Google use the results of the Safe Browsing network.
In the end, some of them have customer-facing experiences and some just have back-end things they've done to make their product safer.
Search uses it. Chrome uses it. Android uses it. Ads uses it. Gmail uses it. The Big Five, as I thought of them. And most of the other products, too. And what Safe Browsing does on the back-end to create this list of URLs that are hosting harmful content is they crawl the whole internet. In pieces, of course. There's no way to snapshot the whole thing and process it overnight.
So they crawl, they sample, and then they evaluate what they see. For a phishing page, maybe they're evaluating the content on the page. But for malware they evaluate the behavior of the software. They have these enormous pipelines set up to understand the behavior of the software itself, which means they're downloading it and running it. "If it moves, measure it. And if it doesn't move, measure it in case it moves."
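[Editor's note: to illustrate the idea of evaluating behavior rather than content, a toy sketch in which features extracted from a sandboxed run are scored much like transaction features. The features, weights, and threshold are invented; this is not a depiction of the actual Safe Browsing pipeline.]

```python
# Hypothetical behavior features extracted from running a sample in a sandbox.
sample_behavior = {
    "writes_to_system_dirs": 1,
    "modifies_startup_config": 1,
    "outbound_connections": 14,
    "deletes_own_binary": 0,
}

# Invented weights; a real system would learn these from labeled samples.
WEIGHTS = {
    "writes_to_system_dirs": 2.0,
    "modifies_startup_config": 3.0,
    "outbound_connections": 0.1,
    "deletes_own_binary": 4.0,
}

def malware_score(behavior):
    """Score observed behaviors, the same shape as scoring a transaction."""
    return sum(WEIGHTS[feature] * value for feature, value in behavior.items())

score = malware_score(sample_behavior)
print(score, "flag_for_review" if score > 5.0 else "ok")  # 6.4 flag_for_review
```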
The thing that just blows my mind about that, is that all of the behavioral analytics techniques that I learned in a transactional environment like payments, all of those techniques can also be brought to bear to understand or to make evaluations about the behavior of software.
Guy: Yeah. Letting that sink in a little bit. All of those can help measure the behavior of software. Software, or the humans using that software?
Ali: Malware classification is a classification that's being done based on the behaviors of the software, because you ran it. You ran the software and were able to extract data out of the resulting behaviors. So you created a transaction by making the software run, or by taking some data associated with what else is on the page. You can classify the "behavior," and I'm doing air quotes for anyone who can't see me, by what you observe.
It just blew my mind. This idea that, "Content analysis associated with spam? Fine. Behavioral analytics on transactions? Those are events. Something's in motion already, but fine." The idea that it could then go back to the behavior of software, and be right back into the security use cases that I'd left a decade ago, it just blew my mind. I felt like I was home, to a certain extent.
Because it was one of the things that I kept thinking to myself when I was in payments at PayPal, or in any of the other gigs where I'd been doing anti-fraud: I wish I could bring this expertise back to information security. Because so much of information security feels like guessing.
It's hard to quantify how many attacks you have diverted. It's hard to justify the investments you're going to make in protections beyond compliance.
Compliance is the backstop, bottom-line answer for a lot of shops.
Guy: Yeah. Protect yourself from audits.
Ali: And so I was always hoping I could bring something back related to that quantified understanding of the exposure and the performance of what you'd built. When I worked with Safe Browsing, what blew my mind is that the data-driven approach I had been using, operationalized, could also be helpful there. The quantification? Awesome, yes, I still want to pursue that. But so cool: I had developed this expertise in this approach, in this technology, and it wasn't just an anti-abuse, anti-fraud thing.
It could also be useful in the core guts of what information security is: either the software is broken, it's vulnerable and there are exposures, or the software is bad and it's coming for you.
In any case, I certainly got a good taste of the bad software, the malware. I never thought I would be working on anything that someone might call antivirus. In my life, I never thought that I would be working on that. But malware and phishing and a lot of the things that folks stumble onto on the web: to me, that was back in core InfoSec. I still wasn't spending a ton of my time working on what a lot of folks think of when they think of core InfoSec, which can be boiled down to AppSec: understanding the vulnerabilities in your software and fixing them.
But here I was, back again. So it was a nice homecoming. And the role that I'm just moving into is one where I am working as a technologist in an information security context, and we are engineering and building the protections that get incorporated to protect the organization and all of our technology. So I'm back in the thick of it, and it's a full circle, because I started in IT security way back: IT security, then technology risk, then product risk, then anti-fraud, then anti-spam and anti-abuse.
Back to security with Safe Browsing, and now I'm back in IT security in an enterprise context for real. But I've brought with me all of the data-driven goodness, all of the platform engineering, the building things in and measuring, and that learning-system approach. I'm excited to see how it's going to play out in an enterprise context.
Guy: In some sense this is the inverse of artificial intelligence. If software is trying to behave a little bit more human-like, well, it's created by humans, so there are probably some human attributes there, and there are models, and statistics, and data-driven angles that we can use on it.
But we're now using human techniques to analyze software. Techniques you would use to analyze human misbehavior, to analyze software misbehavior.
Ali: I see what you're saying.
Guy: Like the AI version of--
Ali: Right. I think I know what you're saying. There's a lot of IT security that is humans, and it's manual to a certain extent, trying to deal with the implications of the software. Versus where I was at, where I was using the software to deal with the bad or malicious humans. So you're absolutely right, in that I am full speed ahead trying to figure out how to apply computational power to identifying bad behaviors and solving the problems of security and anything that can be automated.
I want the software to do the analytics and reserve the hard stuff for the human analysis.
That is definitely influencing how I am approaching it for sure.
Guy: So, we've talked a lot about going from risk analysis to defense, to an extent. You've also been involved in a lot of education with the (ISC)² organization, and you helped create O'Reilly Security. I wanted to touch briefly on this notion of defenders, or defending techniques, in the security world. Do you want to share a few thoughts on how you see the world evolving? And a little bit about sharing these techniques, your involvement with O'Reilly Security or similar conferences?
Ali: Yeah. Thank you for mentioning that I was involved with (ISC)². I think that's a good organization and it's doing a lot of good work to arm our practitioners, if you will, with a baseline set of skills, and to try to connect those folks so that our rising tide can lift all boats.
The other piece that you mention, O'Reilly. To me that's also a homecoming. Because back in college, when I was studying things for which there was no major, no corpus of research, I was just hunting around the bookstore for the books with the animal covers. And I have probably collected most if not all of the yellow series, which were the ones that were security related.
So I have the one with the safe on the cover, and the one with the Bobbies, the British police, on the cover. I thought of them as the Keystone Cops, but I realize that's not what they were. The security series from O'Reilly was one of those things that was so exciting to read, and when I thought about technology and reference materials I would always think of O'Reilly. They were foundational to my self-education, learning Perl and understanding Unix and all of those things. A lot of that was self-taught.
There were no courses in school to learn those things. Those were things that I was just interested in because I wanted to understand how the underlying technology that I was using worked.
And I appreciated it so much. So in the past few years, what I had realized is that O'Reilly was also in the conference space, and some of the conferences that a lot of my friends were excited about were Strata and Velocity. These were both really interesting events for me because I was working with data. There was what was happening at Strata, with the data scientists who were talking and presenting there; in fact, I helped a CTO I worked with prepare some of the materials he ended up using there.
And then Velocity, with high-performance web operations, which was about instrumenting everything, optimizing and scaling everything to the high heavens. To me it seemed as though this was the perfect time: if O'Reilly was ever going to get interested in doing a security event, this would be the perfect time to do it. Because where data and DevOps were dancing, security needed some of that.
Guy: Yeah.
Ali: And so I was very excited to attend, or even just hear about, those events. And when O'Reilly did decide they were going to dip their toe in, I thought, "How wonderful. I hope they consider how these communities they're already exploring work, and how that could drive or infuse a security event." I made a few comments to that effect, and they liked them so much that they brought me on to help lead the security events.
Courtney Nash was the person I worked with originally on the concept: the idea that this was about building, about engineering defenses and providing defenders with the tools and capabilities they needed. That was the spirit in which we pursued it, and I was so happy with how the events turned out. I think that we were able to bring folks in.
One of the things that we talked about is how folks who are tasked with defending systems and organizations today don't necessarily self-identify as IT security folks. You have folks who are looking at problems of privacy, or problems of compliance. They're the ones who are building the software, not just the ones who are auditing it.
All of these folks have to be empowered with the right information and the right incentives to build better systems, more defensible systems, and systems with fewer inherent vulnerabilities.
And so the events themselves went well. I think that we were able to infuse the conference with excitement, creativity, and collaboration: the idea of sharing and of how to do things better. It felt to me that a lot of the shows and conferences in the space had been more focused on the narrative of the breaker, the idea that you have to know how an attacker thinks in order to defend against them.
I don't disagree with that. But I also think defenders have things they need to do in addition to living with that "breaker over their shoulder" mentality. Resilient systems, optimized systems, scaled systems: there is an art to that, and an emerging science around it, in and of itself. In addition to the fact that there are always attackers on the way.
It was refreshing to be able to have conversations with folks who are thinking about doing things differently, always with that build mindset, as opposed to the constant idea of the defender as the ultimate reactionary. I was really excited to have been a part of that, and I think it's changed the industry a little bit. I see a lot more events trying to home in on this "by defenders, for defenders" idea, or adding SDLC tracks, or doing more outreach to developers. I think it's fantastic and I'm so excited to have been a part of it.
Guy: I fully agree. And I like that the journey you're describing with O'Reilly and maybe the whole ecosystem is somewhat similar to your own journey. Going from understanding software to appreciating risk, to using data to combat that risk, and then to bring all of that back into technology solutions and into the technology practitioners that do it. So I'm a fan, both of the approach and of O'Reilly Security, which I try to help out a bit as well.
And I hope that vibe continues: events that are oriented at defenders, using data and technology to help us build things that are indeed more resilient to attacks, not just to downtime, within those same communities. So I appreciate the effort there and I appreciate you sharing the journey. I'll leave you with just one quick question, on a pet peeve. I like to ask every guest: if you had one quick word of advice, or a pet peeve about security that you wish people would do or stop doing, what would that be?
Ali: If we're talking about everyday, average folks: I like password managers. I'll just say that I know they're not perfect, but I really do like password managers, for multiple reasons. But I think you're really asking that question on behalf of the technologists. And I guess I would say, I'm not sure if there's already a nice DevOps saying for this, but your designs are not done if you've only considered the happy path.
What I mean is that people are, on average, good, so design assuming good intentions, but always make sure that you look out for the outlier abuse cases and failure cases, and instrument them and make sure that there's a path for them as well.
Guy: Very valid advice. Ali, thanks a lot for your time joining us today.
Ali: Thanks.
Guy: Thanks everybody for tuning in. Join us for the next one.