A View Into DEF CON 25 CFP…

First, this post is not sanctioned by DEF CON in any way. I am a member of the CFP team who decided to keep some rudimentary statistics on the submissions this year. I did this to give the team a feel for just how many submissions we got, how many talks we accepted, and primarily to track the way we voted. This greatly assists the powers that be (the amazing Nikita) in more quickly determining which talks are well-received. The longer I kept up the spreadsheet, the more ideas I had on what to track. Other team members said “you should track…”, and I typically did. So this blog is meant to give some insight into the entire CFP process, with a solid slant on statistics about the submissions.

First, a few basics:

  • DEF CON 25 CFP opened on February 01, 2017
  • DEF CON 25 CFP closed on May 01, 2017
  • 17 talks were submitted after the closing date and were still considered, for various reasons
  • We received 536 submissions
  • Three of the submissions were retracted by the end of CFP
  • For comparison, BlackHat received 1,007 submissions this year

Next, who are we? There were technically 31 DC CFP reviewers this year, and you can read their fun profiles now (mouse over stuff here and there, call it an Easter egg)! Ten of them are considered ‘specialty reviewers’, who typically review talks on a very specific topic such as ‘social engineering’ or ‘legal’. These are generally topics where the submissions are either too numerous and potentially murky to figure out if they are worth accepting (social engineering), or a topic that most of InfoSec aren’t really experts on, even when some of us are the #1 armchair lawyer in InfoSec. The specialty reviewers are usually expected to review only their topic, while a few are open to reviewing multiple topics. That means there are 21 reviewers who are expected to review ‘as many talks as you can’, understanding that we may DEFER on a given submission if we feel it is out of our wheelhouse, and remembering that this is extremely time-consuming and we all have day jobs. Some of us have night jobs, and some of us have social lives (not me).

Every year we come up short on reviewers who are truly qualified to give solid feedback on a given topic. This year DC CFP put out a call for more volunteers and we hit a bit of gold, getting several new reviewers who are high quality and put in a crazy amount of time. Next year? We know there are topics we need help on, so if you are sharp, kind of special(ty), or at the top of your game in a popular field… come join us. I can’t stress enough how important this is. Instead of just working on a talk or doing a thing, you have the ability to help influence the presentations given at a conference with some 20,000+ attendees. That is a lot of power, a lot of influence, and the potential to do a lot of good. Personally, that is why I still sacrifice the incredible time I do.

Shout outs! The only way to start this paragraph is to call out Nikita for handling almost all CFP submission-related emails. Incoming submissions, replies saying “you didn’t follow directions”, second attempts, replies saying “no really you ‘brilliant hacker’, you didn’t read our guidelines”, posting them to the CFP platform, watching for the CFP team to say “I have questions” and us largely forgetting to flag it back to her, following up with the submitter, repeating several times in some cases, posting their replies, looking for the CFP team to ask more questions… hopefully you get the picture. The amount of work she fields in a three-month span, just related to CFP, is insane. I say that as someone who has worked more than 80 hours a week in this industry for the last twenty years. Oh, did I mention that she also voted on 60% of the talks? Meanwhile, five ‘full’ reviewers voted on fewer talks than she did.

A plea! If you didn’t see the numerous Tweets and requests to get your talks in early, I cannot emphasize enough how much it benefits you, more than us. When a talk comes in during the first few weeks, it gives us plenty of time to not only review and ask questions, but to give feedback in the way of suggestions. In some cases, one of the team will break away from the board and work with the submitter to improve their submission. This year, I did that once with someone whose original two submissions garnered a single yes vote. After working with them and giving feedback on how to combine the talks and home in on the areas of interest, the re-submission received 12 yes votes and zero no votes. In an ideal world, that would happen for every submission, but a significant number of talks are submitted in the last two days.

Meaningless numbers! Because our industry loves to work with statistics that it doesn’t fully understand, or that have little meaning without serious caveats and disclaimers (PPT), let me throw out a few. For the 536 submissions we received, the CFP team voted yes 1,223 times, no 3,555 times, maybe 186 times, deferred 945 times, and abstained 54 times. Again, we defer if we feel that a topic is not one we can fairly judge based on our expertise, and rely on the rest of the team to review it. We abstain when there is a potential conflict of interest: we work with the submitter, we contributed to the submission, or we have a negative personal past with the submitter.
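For those who like to check the arithmetic, here is a minimal sketch of what those tallies work out to, using only the numbers quoted above:

```python
# Simple arithmetic on the vote tallies quoted above; nothing beyond the post's numbers.
votes = {"yes": 1223, "no": 3555, "maybe": 186, "defer": 945, "abstain": 54}
submissions = 536

total_votes = sum(votes.values())
print(total_votes)                           # 5,963 votes cast in total
print(round(total_votes / submissions, 1))   # ~11.1 votes per submission on average
```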

Meaningful numbers! We requested feedback from submitters 125 times and changed our votes 61 times. Answering our questions, accepting our feedback, and working with us to build a better presentation benefits everyone. As Nikita tweeted, more than 60 of the accepted talks were from first-time DEF CON speakers. Given there were ~110 accepted talks (and 422 rejected), that is quite a lot. It is encouraging to see this many new speakers given some of the past submissions from egotistical industry veterans who felt they deserved a speaking slot on the back of a weak submission, simply because of “do you know who I am?!”

More meaningful numbers! Of the 536 submissions, 185 (34.77%) said they would release a new tool. Only 56 (10.53%) of the submissions said they would release a new exploit, and some of those claims were questionable. It is common for people submitting to DEF CON to also submit to BlackHat and/or BSidesLV. This year, 218 (40.98%) of the submissions were also submitted to BlackHat and 65 (12.22%) were also submitted to BSidesLV. For various reasons, often around the ability to get to Las Vegas, some submitting to BlackHat will submit to DEF CON but say that acceptance at DEF CON is contingent upon acceptance at BlackHat. This year, 36 (6.77%) talks were submitted to us with that caveat. In a somewhat arbitrary categorization, overall I felt that 200 (37.31%) of the talks were ‘red’ (offensive), 88 (16.41%) were ‘blue’ (defensive), and 38 (7.09%) were ‘black’. By ‘black’, I mean that the topic really had little merit or benefit for red-teaming and was really in the realm of criminals.
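As a hedged aside, here is a rough way to reproduce those percentages. The exact denominator used above is not stated (some figures appear to use submissions minus retractions), so treat the output as approximate:

```python
# Approximate reconstruction of the category percentages quoted above.
# Assumption: the denominator is the 536 total submissions; some of the quoted
# figures appear to use a slightly smaller base (likely excluding retractions).
counts = {
    "new tool": 185,
    "new exploit": 56,
    "also submitted to BlackHat": 218,
    "also submitted to BSidesLV": 65,
    "contingent on BlackHat": 36,
    "red (offensive)": 200,
    "blue (defensive)": 88,
    "black": 38,
}
submissions = 536

for label, n in counts.items():
    print(f"{label}: {n} ({n / submissions:.2%})")
```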

Even more meaningful numbers! Some of the most basic stats that can be generated for your ocular pleasure. First, these are arbitrary categories that were developed as we received submissions. Nothing formal, and some talks were hard to classify:

From there, I broke it down further by some topics that aren’t necessarily specific to the red or blue domain. Again, kind of arbitrary and based on seeing the submissions as they came in; note that one talk may have been flagged as more than one topic:

When building a schedule over four days and across five tracks, while considering if it is better to suggest a talk for a village or alternative venue (e.g. Skytalks), Nikita has to play Tetris of sorts based on the accepted talks, the requested time, and the schedule. This is what she had to work with:

One of the more popular questions this year, after increased awareness and public discussion around diversity in InfoSec, is the gender breakdown for submissions:

Finally, a general picture of the submissions by month. Recall what it looked like for the April breakdown above and you once again get a good idea why we would like more submissions earlier in the process:

Lastly, a quick note on a common perception of InfoSec conferences and talks in general. Given the drastic rise in the number of conferences popping up, there is a saturation that demands more submissions to fill the schedules. That means that veteran speakers can typically shop their talks around or be selective in where they submit based on the venue they find appealing. That also means more new speakers are submitting, which results in a wide range of topics and submission quality. That led me to push back on this Tweet and remind people that a conference can only work with what is submitted. Personally, I feel that the overall quality of submissions to DEF CON (and a couple other conferences I review for) has gone down this year and last. That means that DEF CON ended up accepting some talks that I personally did not care for.

Bottom line? If you are researching a cool topic, submit a talk on it. Have a unique perspective or done more digging on something? Share your work. Never submitted before? Submit early and let us work with you if you need it. If a security conference is lacking, it is due to the community as much as anything else.

Analysis Of The RANDom Report on Zero-days and Vulnerability Rediscovery

[This was originally published on RiskBasedSecurity.com.]


On March 9, 2017, RAND released a report (PDF) titled “Zero Days, Thousands of Nights: The Life and Times of Zero-Day Vulnerabilities and Their Exploits” by Lillian Ablon and Andy Bogart that received a fair amount of press. The RAND press release goes on to describe it as “the first publicly available research to examine vulnerabilities that are still currently unknown to the public”. While the report covers many topics and angles around this discussion, one specific bit that caught our attention was the data put forth around vulnerability rediscovery. From the press release:

Based on the dataset, RAND researchers have determined that zero-day vulnerabilities have an average life expectancy—the time between initial private discovery and public disclosure—of 6.9 years. That long timeline plus low collision rates—the likelihood of two people finding the same vulnerability (approximately 5.7 percent per year)—means the level of protection afforded by disclosing a vulnerability may be modest and that keeping quiet about—or “stockpiling”—vulnerabilities may be a reasonable option for those entities looking to both defend their own systems and potentially exploit vulnerabilities in others’.

Specifically, their analysis suggests that collisions among researchers are rare, a rate which seems entirely too low even at first glance. Unfortunately, they use a closed data set, making it impossible to validate their findings. This low collision rate caught the eyes of others in the industry as well. Many researchers and others involved in vulnerability aggregation generally have the impression that rediscovery isn’t that rare, but as RAND notes, there isn’t a lot in the way of research on this topic. In 20 years of aggregating vulnerabilities in one form or another, RBS has not specifically looked to track this data point. We track a great deal of metadata about each vulnerability disclosed, but do not currently have a classification for research collisions.

RAND is presenting at BlackHat Briefings USA in Las Vegas this year on the back of public discussion and some solid criticism of their report. We at RBS were also curious about the rediscovery rate. Since the dataset used for the RAND report is not public and anecdotal ‘data’ does not cut it, we decided to create a new public dataset based on VulnDB to help us better evaluate their report and findings; more on that later.

First, let us examine how RAND describes their own dataset:

In an effort to address the question, RAND obtained rare access to a dataset of information about zero-day software vulnerabilities and exploits. It is a rich dataset, as some of these exploits have been found by others, but others have not. The dataset spans 14 years (2002–2016) and contains information about more than 200 zero-day exploits and the vulnerabilities they take advantage of, over half of which are publicly unknown. The data we received had a final count of 207 exploits, after approximately 20–30 were removed due to operational sensitivity. [..] This dataset includes exploits for 64 vendors, with a range of 1 to 55 exploits per vendor. Microsoft (n = 55), Linux (n = 39), Apple (n = 14), and SUN/Oracle (n = 11) each had the most number of exploits, while the majority of the other vendors each had one or two exploits. These include well-known vendors such as Mozilla, LinkSys, Google, Citrix, AOL, Ethereal, Adobe, Alt-N Technologies, CryptoCat, and RealPlayer/RealServer, as well as some lesser-known vendors.

For ease, there are several key points pulled from this statement:

  • 207 “zero-day” exploits, with over half of them not publicly disclosed; we interpret this to mean that just under half of the zero-day exploits are now publicly known
  • 14 years worth of data (2002-2016)
  • 20-30 exploits “removed” for operational sensitivity
  • 64 different vendors had exploits included

Reviewing the key points becomes very interesting: in the world of exploits, it suggests the data came from an exploit broker or, more likely, a government-run or government-funded shop that produces a considerable number of high-quality exploits itself. However, later, when talking about the limitations of the research, they write:

Ideally, we would want similar data on Red (i.e., adversaries of Blue, or other private-use groups), to examine the overlap between Blue and Red, but we could not obtain that data.

This quote suggests that the data was captured by a Blue team, one using a level of technology that allowed it to identify a high number of zero-day exploits used in the wild against its network.

RAND analyzed the dataset described above and produced the following results:

Finding #4: For a given stockpile of zero-day vulnerabilities, after a year, approximately 5.7 percent have been discovered by an outside entity. [..] In our analysis, collision rates changed significantly depending on the interval time used (from 40 percent to less than 1 percent), and so the timing of “flushing” a stockpile of dead vulnerabilities matters. We found a median value of 5.76 percent overlap (6.79 percent standard deviation) given a 365-day time interval, and a median value of 0.87 percent overlap (5.3 percent standard deviation) given a 90-day time interval. A 14-year interval (i.e., all of our data in one time interval) yielded a 40 percent overlap. With the exception of the 14-year interval, our data show a relatively low collision rate. This may be because those in the private exploitation space are looking for different vulnerabilities from those hunting for vulnerabilities to share as public knowledge, as well as using different techniques to find the vulnerabilities (e.g., vulnerabilities found via fuzzing, or automatic software testing, are often different than those found via manual analysis).

There is a lot going on in this single finding, and it is worth dissecting further. The overlap value as compared to the interval time is certainly interesting and a great data point, but without knowing anything more about the dataset used, it becomes kind of pointless.

Here are a few points to consider:

  • The overlap varies from less than 1% to 40% depending on the interval. That wide range alone makes one question the credibility of this claim (a toy sketch of this interval dependence follows the list).
  • On the surface it may sound logical that as more time passes, more vulnerability collisions occur. However, consider that over the span of 14 years, many versions of the products assumed to be in the dataset (e.g. Windows, Linux, Office, Flash Player) may have become obsolete.
  • To properly perform this analysis, the dataset would require having the vulnerability discovery and exploit creation date for the first and second researcher.
  • Based on RAND’s description, they would have the date for the first researcher, but likely not the second. If they worked with vendors to determine the collision rate, they would have the second researcher’s date for when it was reported to a vendor, or when it was discovered internally by the vendor, but most likely not have the discovery and exploit creation date for most of the vulnerabilities.

RAND makes one other observation about the low collision rate they observed, saying this “may be because those in the private exploitation space are looking for different vulnerabilities from those hunting for vulnerabilities to share as public knowledge“.

Finding #5: Once an exploitable vulnerability has been found, time to develop a fully functioning exploit is relatively fast, with a median time of 22 days

On the surface, this claim does not appear to have much merit. Due to a wide range of factors during exploit development, it can be completed very quickly or take a significant amount of time; based on our past experience, that range is anywhere from half an hour, to days, to several weeks or longer to ensure a reliably working exploit. With such a small dataset, and without additional metadata, such a claim, while maybe valid for their analysis, simply cannot be used as a metric for larger analysis of the issue.

In VulnDB, we have implemented our own standard to track and provide Vulnerability Timelines and Exposure Metrics (VTEM). We believe it is key to better understand metrics on how vendors respond to vulnerabilities, e.g. the average time organizations have to apply available fixes before exploits are published. While one can surmise that the exploits in question are high-end (e.g. overflows, memory corruption, use-after-free), it is simply impossible to determine if 22 days for exploit development is consistent with any other vulnerability rediscovery dataset. In a larger view of vulnerability research, finding a vulnerability may be part of your day job, but that job may not require you to write a functioning exploit for it. Just to highlight the sample bias potentially at play: it would be trivial to create a dataset twice the size of the one used that ‘proves’ a median time of a single day for exploit generation after the vulnerability was found. Just come up with a list of 500 cross-site scripting (XSS) vulnerabilities and use that as your dataset. Voila!
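A few lines of code make that sample-bias point plain; the numbers below are invented purely for illustration:

```python
import statistics

# Hypothetical "days to working exploit" values; the median depends entirely on
# which kinds of bugs a dataset happens to contain.
complex_bugs = [22] * 200   # e.g. memory-corruption issues taking weeks of work
trivial_bugs = [1] * 500    # e.g. XSS issues exploitable in about a day

print(statistics.median(complex_bugs))                  # 22
print(statistics.median(trivial_bugs))                  # 1
print(statistics.median(complex_bugs + trivial_bugs))   # 1 -> the trivial bugs dominate
```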

Finding #1: [..] In the course of investigating life status for our vulnerabilities, we found that Common Vulnerabilities and Exposure (CVEs) do not always provide complete and accurate information about the severity of vulnerabilities.

This is not a new finding by any stretch, but it does demonstrate that more and more of our industry are realizing the shortcomings of relying on CVE/NVD for vulnerability intelligence. If you want to understand the gap further just check out any of our VulnDB QuickView reports.

To contrast their data, RAND cites the previous research on this topic as:

Literature on collision rate focuses mostly on vulnerabilities reported to vulnerability reward programs or publicly found and reported within a code base. Finifter, Akhawe, and Wagner (2013) found that roughly 2.25–5 percent of all vulnerabilities reported to vulnerability reward programs had been discovered by others. Past RAND rough order-of-magnitude estimates put the probability that a vulnerability is discovered by two parties within a year at approximately 10 percent (Libicki, Ablon, and Webb, 2015). Researchers in 2015 created a heuristic model that found a 9 percent overlap in non-security-tested software and 0.08 percent in more secure software (Moussouris and Siegel, 2015).

Shortly after the RAND paper was published, another paper titled “Taking Stock: Estimating Vulnerability Rediscovery” by Trey Herr, Bruce Schneier, and Christopher Morris was published. Their conclusions are considerably different from RAND’s, and they are based on a much larger dataset.

From the paper’s description:

This paper presents a new dataset of more than 4,300 vulnerabilities, and estimates vulnerability rediscovery across different vendors and software types. It concludes that rediscovery happens far more often than previously estimated. For our dataset, 15% to 20% of vulnerabilities are discovered independently at least twice within a year. For just the Android sample, 13.9% of vulnerabilities are rediscovered within 60 days, rising to 19% within 90 days, and above 21% within 120 days. Chrome sees a 12.87% rediscovery within 60 days; and the aggregate rate for our entire dataset generally rises over the eight-year span, topping out at 19.6% in 2016. We believe that the actual rate is even higher for certain types of software.

With RAND’s closed dataset, and Herr et al.’s dataset created from open data (but not distributed with the paper), RBS thought it would be interesting to offer a dataset focused on Microsoft that is open and available. It was created to show that serious bias can be introduced based on a given data set. We are not stating that this dataset is accurate and any other is not; simply that many factors must be considered when creating vulnerability statistics. Without their datasets published, other researchers cannot validate either paper’s findings or build on their data. While the RBS dataset is being made available, it is a draft / proof-of-concept more than anything. The dataset, titled “PoC Vuln Rediscovery Dataset Using Microsoft Creditee Data“, has additional notes and caveats to illustrate some of the pitfalls when performing this type of research.
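To give a feel for how such a creditee-based dataset can be assembled, here is a minimal sketch; the file name and column layout are assumptions for illustration only, since Microsoft’s real acknowledgement pages require scraping and a fair amount of manual clean-up (see the caveats listed further below):

```python
import csv
from collections import defaultdict

# Assumed input: one row per acknowledgement with columns "cve" and "creditee".
acknowledgements = defaultdict(set)  # CVE ID -> set of credited researchers/organizations

with open("ms_acknowledgements.csv", newline="") as fh:
    for row in csv.DictReader(fh):
        acknowledgements[row["cve"]].add(row["creditee"].strip())

totals = defaultdict(int)
collisions = defaultdict(int)

for cve, creditees in acknowledgements.items():
    year = cve.split("-")[1]          # "CVE-2014-1799" -> "2014"
    totals[year] += 1
    # More than one independent credit is treated as a possible rediscovery; this
    # naive check cannot spot the same researcher credited via two programs.
    if len(creditees) > 1:
        collisions[year] += 1

for year in sorted(totals):
    rate = collisions[year] / totals[year]
    print(f"{year}: {collisions[year]}/{totals[year]} ({rate:.1%}) possible rediscoveries")
```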

The following two charts based on our dataset show the Total Vulnerabilities in Microsoft Advisories and the approximate Percentage of Vulnerabilities Rediscovered by Year:

While creating this dataset, we made some observations and came up with questions for the industry to consider:

  • Microsoft changes their acknowledgement format from time to time and does not use a consistent one, so this dataset could be enhanced.
  • Microsoft does not always credit the person who found a vulnerability, for example if it wasn’t reported directly to them or the researcher disclosed before Microsoft was ready.
  • When there is a collision, Microsoft does not indicate which researcher reported first.
  • In 2016, Microsoft changed their format to make it more difficult to determine collisions from the same company. While fairly rare, a company that maintains several research groups (e.g. HP, Qihoo, Tencent) may be the source of a collision.
  • Starting in 2016, Microsoft became less precise in acknowledgements, not even listing the collision researchers together.
  • We found evidence of “double dipping” in third-party bounty programs, something long suspected but we don’t believe has been proven. (CVE-2014-0270 credits Jose A. Vazquez of Yenteasy working with HP’s Zero Day Initiative and separately as working with VeriSign iDefense Labs).
  • There may be additional collisions, but they are hard to determine based on MS and ZDI advisories. For example, CVE-2015-6136 is credited to “Simon Zuckerbraun, working with HP’s Zero Day Initiative” and “An anonymous researcher, working with HP’s Zero Day Initiative”. Of the seven ZDI advisories covering this one CVE, one is attributed to anonymous and the rest to Zuckerbraun, suggesting they are distinct reports. Having to evaluate dozens of cases like this to refine a dataset is time-consuming.
  • Did either the RAND or Herr et al’s dataset show a big uptick in vulnerability rediscovery in 2014 like our Microsoft dataset does?
  • Nine different researchers discovered a single vulnerability in one case (CVE-2014-1799). Did the prior research account for the number of researchers in a single collision?

What is quite clear from each of the datasets is that vulnerability rediscovery is not only prevalent, but – depending on a number of factors – may be considerably more frequent than we imagine. To further illustrate this, consider that on June 28th, HackerOne tweeted out a statistic of their own regarding rediscovery.

The HackerOne statistic makes it pretty clear (quite concerning actually!) that rediscovery, or duplicates as they are known in the bug bounty world, is extremely prevalent. A higher collision rate for bug bounty programs run by HackerOne or other companies actually makes a lot of sense for the most part. The expectation for this data set is mostly lower-maturity bugs that are easier to find, such as XSS and SQLi, and that are heavily focused on company websites.

We have spent a lot of time researching bug bounties over the years, and researchers are definitely motivated by money. Researchers look for bugs when there is money to be made, or when there is some very cool technology or fame to be gained. With more and more bounty programs encouraging more researchers to look for vulnerabilities, we would therefore also expect more eyes looking for the exact same issues, often using the same tools.

In contrast, the expectation for the assumed RAND dataset is high-end, much more complex vulnerabilities than those seen in most bug bounties. These higher-end vulnerabilities typically require more technical skill to find, and it therefore makes sense to see a lower rediscovery rate. We at RBS have been working hard on implementing our Code Maturity metrics into VulnDB. Once completed, we expect to see that rediscovery rates are much lower for products with higher code maturity.

The concept of finding higher severity bugs in more mature products, as well as websites, also applies to bug bounties. In the “Taking Stock: Estimating Vulnerability Rediscovery” paper, the authors cite data from Bugcrowd, which found that:

Rediscovery happened least often with their highest severity bugs, 16.9% of the time. For second- and third-tier vulnerabilities (based on a five-tier system), the rediscovery rate jumped to 28.1% and 25.8%, respectively.

Based on RAND’s report and our own impromptu dataset, it is clear that the data you work with can strongly influence the observations and results of your research. As we have long maintained, vulnerability research of this nature is never perfect, largely due to the challenges of maintaining high-quality vulnerability data. With this in mind, such research should definitely move forward! However, it is extremely important to disclaim and caveat the data and resulting findings to make it clear where the original data may be lacking or how a conclusion may be based on incomplete data.

If you are attending Black Hat and are interested in further discussions on this topic, then consider attending a panel moderated by Kim Zetter that plans to dive further into the topic and reports.

The Steady Rise of Bounty Programs, and the Counterpart

[This was originally published on RiskBasedSecurity.com.]


Companies that said seven years ago they would not pay for vulnerability information have been steadily expanding their programs to pay for more and more of it, and recently made Edge bounties permanent.

Service-oriented companies like Uber, which rely on a significant amount of user interaction and transactions via mobile apps, also utilize bounty programs. Surprisingly, many industries, and even the U.S. military, have entered the bug bounty game; the Department of Defense (DoD) announced its own vulnerability bounty program in 2016, to be run through HackerOne.

As can sometimes happen with high-profile bug bounty programs, such as United Airlines’ in 2015, they can come with a bit of drama. The initial announcement of the DoD program faced some turmoil. According to the DoD’s own press release, “participants in the bug bounty will be required to register and submit to a background check prior to any involvement with the pilot program.” An Ars Technica news article goes on to enumerate more requirements, citing the HackerOne project for the bounty:

  • You must have successfully registered as a participant through this security page.
  • You must have a US taxpayer identification number and a Social Security number or an employee identification number and the ability to complete required verification forms.
  • You must be eligible to work within the US; meaning you are a US citizen, a noncitizen national of the US, a lawful permanent resident, or an alien authorized to work within the US
  • You must not reside in a country currently under US trade sanctions.
  • You must not be on the US Department of the Treasury’s Specially Designated Nationals list [a list of people and organizations identified as being involved with terrorism, drug trafficking, and other criminal activities].

Interestingly enough, when we originally viewed the HackerOne DoD Bounty page (dead link) after the announcement, it showed:

This type of confusion and lack of guidelines can cause a serious lack of trust among security researchers, who are notoriously paranoid, and often for good reason. For a program that the DoD describes as “innovative“, the lack of organization coupled with the registration requirements was concerning for many.

While the excitement of launching a bug bounty can lead to a rushed implementation, it is critical to be prepared and have clear guidelines in place from the very beginning. To be clear, the page has now been updated and provides the expected guidelines and other information. Interestingly enough, there are also some response efficiency metrics posted publicly on the HackerOne bounty page. You can see that their average first response is three days, but it does take approximately four months on average for resolution. The continued debate on vulnerability disclosure exists in the bug bounty world as well.

Over the past few years, almost in line with the increase in bounty programs, we’re seeing more open sales of vulnerability information, as well as of the information obtained by using those vulnerabilities. An article from mid-2016 in The Register cited Dell SecureWorks research that boiled down hacking activity into a menu-style offering:

  • Banking credentials are sold for between 1% and 5% of the account balance.
  • American Express Cards fetch $30, towards the upper end of prices for plastic card credentials, which start at $7 and rise depending on the type of card and the amount of associated data offered for sale.
  • DDoS (distributed denial of service) attacks can be contracted for as low as $5 an hour, the same price as remote-access trojans.
  • Angler exploit kits – a common hacking tool that’s used to sling malware from compromised or hacker-controlled websites – are licensed from $100.
  • Crooks are also expanding their offerings with a greater range of products, including ATM skimming devices for $400 or less.
  • Hacking a corporate email account costs $500 per mailbox, about four times the price of hacking into a Gmail or Hotmail account ($123).

Even on social media, users are becoming very open about their hacking activity. This includes the rather benign poking at notable media characters like Edward Snowden, and computer security professionals such as Kevin Mitnick.

The same person, @1x0123 on Twitter (account now suspended, so links are 404 but kept for the record), announced the compromise of naughtyamerica.com with 150,000 accounts and plaintext passwords, an unnamed nasa.gov system with remote access, implications that the New York Times web site has an SQL injection vulnerability, and claims that the Los Angeles Times web site has a remote vulnerability that allows full access.

If that wasn’t enough, consider that this was apparently one person, and they also announced their own exploit market to “improve safety & security products”. This comes full circle, thinking back to the Department of Defense’s bounty program announcement and the likely requirements. The same person also demonstrated a couple of cross-site scripting vulnerabilities on Pentagon web sites.

Since the initial DoD program, more government bug bounty programs have been launched and/or expanded. In January 2017, the U.S. Army launched a bounty and the U.S. Air Force launched a bounty-based contest for a given period of time. While the Army program seems to be part of the broader Hack the Pentagon initiative, there are no signs yet that the Navy, Marines, or Coast Guard have started a bounty program. Additionally, it is interesting to note the number of valid reports as compared to the payout for the Hack the Pentagon bounty:

Out of all the submissions, 138 were found to be “legitimate, unique and eligible for a bounty,” and resolved according to Defense Secretary Ash Carter. Those vulnerabilities earned $75,000 in total bounty rewards, paid promptly by HackerOne at the conclusion of the program.

That means the average payout was $543.48, and the total program payout of $75,000 was only 0.005% the cost of some military ordnance. Formal bug bounty programs are largely considered to be on the side of good and the domain of whitehat hackers, and increasingly considered “worth the risk”. It is important to note there is also a steady rise in their counterparts (as always, take commentary on the sales of exploits with a grain of salt).
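For anyone checking the math, a quick back-of-the-envelope calculation (the ordnance comparison is the article’s; the implied price tag is simply back-calculated here):

```python
total_payout = 75_000
valid_reports = 138

print(f"average payout: ${total_payout / valid_reports:,.2f}")    # ~$543.48
print(f"implied ordnance cost: ${total_payout / 0.00005:,.0f}")   # ~$1,500,000,000
```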

More exploits, and the fruits of their use, are being sold on the open or black market to the highest bidder. Every day, RBS hopes that more data comes to light about these sales, even if only after the fact. We have been following the U.S. Government’s foray into the bug bounty world closely and have been pleased to see the continued expansion. It does raise the question of how many other countries will view the U.S. results as a success and begin to create programs of their own.

Recently, the UK Government announced it was working with Katie Moussouris from Luta Security on its first vulnerability coordination pilot. Meanwhile, tracking vulnerabilities and the subsequent data breaches continues to prove a healthy challenge. There have already been over 10,500 new vulnerabilities and more than 2,200 data breaches reported in 2017. Software vulnerabilities and the ‘businesses’ they create are not going away anytime in the near future. Organizations of all sizes in all industries need to address network protections and focus the necessary resources on vulnerability discovery and remediation.