Is fuzzing for the cybersec elite, or should it be accessible to all software developers? FuzzCon panelists say join the party as they share fuzzing wins & fails.

LAS VEGAS – In 2014, two teams of security researchers independently started fuzz testing OpenSSL. Within days, the advanced black-box software-testing technique led them to an exploitable vulnerability in OpenSSL: namely, the Heartbleed vulnerability.

What is fuzzing? That’s what the FuzzCon event is all about. Black Hat wasn’t the only game in town last week: FuzzCon threw a bunch of software security experts and industry leaders into a black box and shook them up to see what fuzzing – an emerging trend in continuous software testing that automates white-hat hacking – is all about.

Fuzzing is an elite tool, so it makes sense that Heartbleed – one of many bugs uncovered with fuzzing – was discovered and confirmed by elite code testers: Google’s Neel Mehta discovered the vulnerability, while the Finnish company Codenomicon (now part of Synopsys) confirmed it.

Fuzzing is a technique for finding implementation bugs by injecting malformed or semi-malformed data in an automated fashion. It may well be advanced, but these days there are many free, open-source tools that the non-elite can use as they establish their own security testing programs.
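In its simplest form, that automated malformed-data injection can be sketched in a few lines. The toy `parse_record` target below and its null-byte bug are hypothetical stand-ins, purely for illustration; real fuzzers such as the ones discussed here add coverage feedback and instrumentation on top of this basic mutate-and-run loop:

```python
# Minimal sketch of a mutation-based fuzzer. The target and its bug
# are hypothetical; real tools add coverage guidance and crash triage.
import random


def parse_record(data: bytes) -> str:
    """Hypothetical target: a toy parser that mishandles null bytes."""
    if b"\x00" in data:
        raise RuntimeError("parser crash on embedded null byte")
    return "parsed"


def mutate(seed: bytes) -> bytes:
    """Randomly flip bits in, insert bytes into, or delete bytes from a seed."""
    data = bytearray(seed)
    for _ in range(random.randint(1, 3)):
        op = random.choice(("flip", "insert", "delete"))
        if op == "flip" and data:
            i = random.randrange(len(data))
            data[i] ^= 1 << random.randrange(8)  # flip one random bit
        elif op == "insert":
            data.insert(random.randrange(len(data) + 1), random.randrange(256))
        elif op == "delete" and data:
            del data[random.randrange(len(data))]
    return bytes(data)


def fuzz(seed: bytes, iterations: int = 10_000) -> list[bytes]:
    """Feed mutated inputs to the target and collect the crashing cases."""
    crashes = []
    for _ in range(iterations):
        candidate = mutate(seed)
        try:
            parse_record(candidate)
        except Exception:
            crashes.append(candidate)
    return crashes


if __name__ == "__main__":
    found = fuzz(b"OK-header-body")
    print(f"{len(found)} crashing inputs found")
```

Each crashing input the loop collects is a candidate bug report; production fuzzers go further, minimizing the input and tracking which code paths each mutation reaches.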

At FuzzCon’s “Fuzzing Real Talks!” session last week, a panel of experienced application and product security leaders discussed the ins-and-outs of establishing a successful security testing program, including tool selection, value justification, getting organizational buy-in, building a strategy and more.

Two of the panelists, Damilare D. Fagbemi of Resilience Software Security and Anmol Misra of Autodesk, dropped in to the Threatpost podcast to give us a preview of fuzzing tips, tricks and cautionary tales they’d be presenting on Thursday night.

As far as Fagbemi and Misra are concerned, this isn’t an invitation-only party. “I think if we really want to be successful, we need to hand it off to developers, or QA, at least,” Misra said. “The thing that I’ve seen success in, in the past, is when QA work [on] code coverage pieces, as if you were [all neighbors]. It crosses the company.”

He pointed to examples: Microsoft has enabled fuzzing, as has, of course, Google: a company that’s had “some amazing successes,” Misra said, Heartbleed being a case in point.

Listen to the Podcast, Get the Tool

For a look at Misra’s and Fagbemi’s fuzzing tips, tricks and cautionary tales, you can download the podcast here, listen to the episode below, or scroll down to read a lightly edited transcript.

Also, here’s a link to the fuzzing tool, Mayhem – a tool to automate white-hat hacking that triumphed in DARPA’s 2016 Cyber Grand Challenge – mentioned in the podcast.

 

Lightly Edited Transcript

Lisa Vaas: My guests today are Damilare D. Fagbemi of Resilience Software Security and Anmol Misra of Autodesk. They dropped in on the podcast [last week] to give us a preview of a session being held Thursday night at FuzzCon called Fuzzing Real Talks. FuzzCon, which takes place virtually and in Las Vegas at the same time as Black Hat, is all about autonomous security, application security and the role fuzzing plays in securing code.

Welcome to the Threatpost podcast.

On the panel, there will be four experienced application and product security leaders who will discuss the ins and outs of a successful security testing program. You guys are going to give your tips, tricks and cautionary tales on everything from tooling selection to value justification, organizational buy-in and strategy building.

But first let’s back it up to the real basics. Could you just describe briefly what fuzzing is, who uses it and why?

Damilare D. Fagbemi: In a nutshell, the premise of fuzzing is to supply different kinds of inputs to the interfaces of a system or software to verify and to improve the robustness.

Interfaces are the entry points to the system. So you want to see: How does the system behave when it gets unexpected input? How does it hold up, and how can you improve how it reacts to unexpected, bad input? So that’s what fuzzing is, and that’s how I’d describe it.

Anmol Misra: I think that’s an accurate description of it. The only thing I’ll add to that is: It depends on what you are doing fuzzing for. That’s the other aspect, right? Fuzzing can do a whole lot of things, and among the list of things you can use fuzzing for, you know, what is the use case that makes more sense?

This is one thing that I’ll add because in some cases, fuzzing doesn’t make sense.

Lisa Vaas: So who uses it? Penetration testers, software engineers, the Global 500?

Anmol Misra: Fuzzing is used by a spectrum of stakeholders, but primarily by security people – product security.

So, software security engineers doing testing during the development, or SDL, phases, and then penetration testers – at least some advanced versions of that – in the production environment. Those are the two I would say are the key stakeholders who do this kind of testing. As far as which companies do it:

A lot of companies try to do it. How well they do it, or what kind of fuzzing they are able to do at scale – that’s really, to me, the more relevant question, because just doing fuzzing won’t give you the results that we are talking about.

Lisa Vaas: Please give me a preview of the tips, tricks and cautionary tales you’re going to pass on to participants.

Anmol Misra: The things that I would lay out for people who are building fuzzing programs for the first time: Make sure you understand why you are doing fuzzing. Are you doing it for code coverage? Are you doing it for other reasons? That gives you a good sort of technical understanding.

And of course you look at threat models and trust boundaries. That should give you a technical starting point. The other one, to me, is a cultural one, right? How does fuzz testing fit into the overall security testing portfolio that we have?

You go to a doctor and they give you a bunch of tests, and there is a rationale: You do one test first, and then, if needed, you go and do the second test. Right? We need to make sure when we do security testing, we know what each test is covering.

Damilare D. Fagbemi: When we talk about fuzzing – why do we fuzz? For starters, you mentioned code coverage: You know, how much of the code has been explored effectively to see how it behaves. And, really, it’s about interfaces – how the code interfaces [with the world].

So we’re talking about that in terms of: Where do we fuzz? How do you determine where you should be doing fuzz tests? And what are the opportunities that are available with fuzz testing? In terms of payoff: What kind of issues were found through fuzz testing that the team did not find already using other testing techniques? How do fuzzing findings feed into the models of software development where people are trying to release software super fast on a continuous basis?

I’ve got a question for Anmol as well. Earlier you mentioned how oftentimes it’s pen testers or product security folks who use fuzzing. Should it be reserved for those folks, or should it be something accessible to any software developer as well – a technique they can use to ascertain the robustness of their interfaces?

Anmol Misra: Yeah, this is awesome. I think if we really want to be successful, we need to hand it off to developers or QA at least.

The thing that I’ve seen success in, in the past, is when QA work [on] code coverage pieces, as if you were [all neighbors]. It crosses the company. And I think you can see examples: Microsoft has enabled it, and I think Google has enabled it, and they have had some amazing successes with those programs.

Lisa Vaas: That’s great. I don’t want to let you guys go before we hear the opposite of success, though. Cautionary tales, where do practitioners screw up? And what’s the result?

Anmol Misra: Yeah. I think it’s the people who make fuzzing the most important thing they do.

And I think this is where we need to again, talk about why you are doing it, first thing, and what your landscape looks like and what outcomes you want. The single biggest mistake I’ve seen people doing … static analysis, dynamic analysis, pen testing, all sorts of testing, fuzzing, without thinking, what is the return on investment?

And the other thing I’ve seen is, when you add fuzzing, are there other things you can take away – from static analysis or other places – that you may not need? So really, calibration of fuzzing’s place in the testing [portfolio] is where I’ve seen first-time programs falter, or people who are new to fuzzing not taking that into account right off the bat.

Lisa Vaas: Well, what happens when things fail, if your fuzzing program fails? What are the results?

Damilare D. Fagbemi: Sometimes companies just have fuzzing as a requirement – a checkbox, almost: “OK, you’ve got to fuzz” – and there isn’t enough awareness or even buy-in by development teams.

People try to fuzz just because they’re told they have to, and don’t have the right funding or resources or guidance.

Anmol Misra: What I would look at is: Your credibility comes into question with your stakeholders – with developers. Meaning, the question could be asked: Do you really know what you’re doing?

The same goes for static analysis and other places, too. So it’s a credibility issue that comes in, in the end. However, there’s another aspect: We miss a lot of [flaws] that will not be detected, that then can be exploited in production or in the environment. And to me, that would be terrible.

Lisa Vaas: How do you justify value and how do you get that organizational buy-in?

What are the metrics you throw around or the results that you point to?

Damilare D. Fagbemi: I’m always talking about things like code coverage, as against finding actual weaknesses with interfaces and software systems. So an example is: Within a given time period of testing, or across different products that are tested, what issues were found that were not found otherwise?

So that comes right into my mind.

Anmol Misra: The other thing to me – I think I’ve spoken to it slightly earlier – is what kind of coverage, apart from code, you’re looking for: your interfaces, as you mentioned, your trust boundaries. When you are putting this sort of program [in place], that’s where you start collecting metrics. Before you put this in place, you think through what you’re going to show to developers. For example, when the fuzz testing ends, how many issues did you find? Were they critical? And how do they stack rank against other types of testing we have done? [Consider] if fuzzing only finds medium- or low-severity issues and really nothing [else], or if fuzzing finds issues that require you to do far more work to identify the root cause.

Then I think, you know, those metrics won’t give you the optimal standing with your stakeholders. You need to make sure the fuzzing is on the spot about the issues we want to fix, and not just giving developers, “Hey, here’s the result.” Those are the kinds of things that I would look for in a successful report to the stakeholder.

Lisa Vaas: Is there anything else you guys wanted to add? I know there’s much more to delve into, including how to select tools and tricks of the trade, but what are the big takeaways you want to leave us with?

Anmol Misra: Fuzzing is probably one kind of security testing that doesn’t get as much attention. Not many people understand it that well, and I think that limits our ability to use it, to do security in the real world. That’s really where I’m coming from.

Damilare D. Fagbemi: Even the name fuzzing: What is fuzzing? We’re simply supplying a lot of inputs – many times, bad inputs – to an interface of a software system, to see how it performs under the test.

And a lot of bugs – like, say, Heartbleed, the OpenSSL bug, and other bugs you can find on mobile phones and operating systems – have been found with this very technique. And there are many free fuzzing tools and services today that allow folks to fuzz-test open-source software with services that are always running.

I believe ForAllSecure has also released a freemium version of the Mayhem fuzzing tool, such that startups and small and medium businesses have an opportunity to experiment with fuzzing without a lot of investment.

Lisa Vaas: Well, thank you so much. It’s been a real pleasure to have you on. I’m going to let you go, and thank you for taking the time to share a preview of the panel with us. It was a pleasure, and I hope you both have a wonderful time on the panel.