Forbes: Lazy Vulnerability Reporting & A Bit of Bias

It may have been almost two decades ago that I joked with colleagues that many Information Security news articles could just be done via Mad Libs. We later joked that breach notifications often appeared to be done via Mad Libs, using the same phrases with different organization names and numbers of affected customers. Over the last few years, it seems Forbes has gotten lazy in their reporting on computer vulnerabilities.

First, a bit of background by querying Risk Based Security’s VulnDB, which I work on. While we track news articles on vulnerabilities, it is important to note that it is done on a best-effort basis. We try to capture higher-profile articles both in the bigger publications within InfoSec and those outside the proverbial “echo chamber”, which includes Forbes, the New York Times, the Washington Post, Fox, etc. So by no means is this comprehensive, but it is important to understand the methodology, which relies on Google Alerts keyed on “CVE” strings. This started several years ago, maybe around 2015 give or take. Articles from before that were added as they came across social media, were referenced in modern disclosures, or surfaced in some other current manner, regardless of publication date.
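An alert keyed on “CVE” strings is easy to reproduce because the identifier format is rigid. A minimal sketch in Python of pulling CVE IDs out of article text (illustrative only, not the actual VulnDB tooling):

```python
import re

# CVE IDs are "CVE-" + 4-digit year + "-" + sequence number
# (the sequence portion was widened past 4 digits starting in 2014).
CVE_RE = re.compile(r"\bCVE-\d{4}-\d{4,}\b", re.IGNORECASE)

def extract_cve_ids(text):
    """Return the unique CVE IDs mentioned in an article, normalized to upper case."""
    return sorted({m.upper() for m in CVE_RE.findall(text)})

article = "Google patched CVE-2021-21224 and cve-2021-21225 in Chrome this week."
print(extract_cve_ids(article))  # ['CVE-2021-21224', 'CVE-2021-21225']
```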

The first Forbes article we have associated goes back to June 17, 2001, covering a vulnerability in a GE Healthcare device. Up to 2010, almost every Forbes article we have covers a GE device, along with one about Oracle and one about the Linux kernel. That alone is kind of interesting. From 2010 to 2020 we have Forbes articles covering a wide variety of vendors including Google, Onity, GE, Apple, Magento, PLX, and more. They also included articles covering big disclosures that spanned multiple vendors of DVR systems, SIM cards, microprocessors, and more. Last year, in 2020, Forbes produced a steady stream of articles on all the big vendors including Cisco, Microsoft, Apple, Google, Intel, Citrix, Zoom, and more.

This year though, it seems like Forbes got lazy. Perhaps it is burnout from writing what is essentially the same article? You might think so, but no, because writing the same article is exactly what they started doing. Coverage is heavily based around Google Chrome and components in it, as disclosed via Google Chrome’s blog. Of the 48 vulnerabilities in 2021 cataloged by VulnDB that have an associated Forbes article, only 12 are in non-Chrome products. What’s the gist of their coverage? Here are three examples; see if you notice the similarities.

You may see the common phrase, “2 Billion Chrome Users”. Don’t worry, in a recent article that got increased to 2.6 billion! If it isn’t in the headline, you can find the phrase in almost every article talking about Chrome vulnerabilities. I get that these articles are repetitive, because there are only so many ways you can say Google fixed vulnerabilities in their browser.

That said, what’s more interesting to me is that they appear to have published only a single comparable article about Mozilla Firefox vulnerabilities in all that time, while continuing to encourage users to ditch Chrome. If I didn’t know better, I might think Forbes has chosen a side in the browser wars.

Commentary on Radware’s Top Web Exploits of 2020

At the close of each year we see at least one article covering the top vulnerabilities / exploits from the prior year. These are usually written on the back of large detection networks across the Internet that provide a comprehensive view of exploitation. It’s a great way to get real intelligence on criminal hacking activity. Unfortunately, we often see a breakdown when it comes to conveying that information in a useful manner. I know there is an argument to be made that the companies releasing such blogs are primarily after PR, sure. But they also have an opportunity to help their clients and the rest of the world by ensuring the blogs contain more useful and actionable information.

For this commentary, I’ll examine Radware’s blog, “The Top Web Service Exploits in 2020” published December 23, 2020 and covered almost verbatim by Security Magazine on January 5, 2021. I don’t have a view into exploit activity itself, but I do have a good view into the vulnerability disclosure landscape that is a cornerstone of this commentary.

We’ll start by setting out a few ground rules for any such blog. First, each exploit should be tied to a unique vulnerability; otherwise it should explain that it is an exploit chain and clearly delineate each vulnerability in the chain, or explain what it represents if it is not a pure vulnerability. Second, it should provide at least one external reference for each vulnerability: either a CVE ID, a vendor advisory, or a commonly accepted third-party advisory such as one from US-CERT or a similar body. This is what allows the reader to quickly determine whether their organization has patched against the vulnerability. If I have to spend considerable time trying to determine which vulnerability is being described, many organizations may be at a complete loss trying to figure it out.
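Both of those ground rules are mechanical enough to check automatically. A rough sketch of such a check, with hypothetical field names rather than any vendor's actual report format:

```python
# Minimal completeness check for a "top exploits" style report entry.
# The field names here are hypothetical, not from any vendor's real format.
ACCEPTED_REF_PREFIXES = ("CVE-", "https://", "http://")

def missing_fields(entry):
    """Return a list of problems that would leave readers unable to act."""
    problems = []
    if not entry.get("vulnerability"):
        problems.append("no vulnerability identified")
    refs = entry.get("references", [])
    if not any(r.startswith(ACCEPTED_REF_PREFIXES) for r in refs):
        problems.append("no CVE ID, vendor advisory, or third-party advisory")
    return problems

entry = {"vulnerability": "Apache Hadoop YARN RCE", "references": []}
print(missing_fields(entry))  # ['no CVE ID, vendor advisory, or third-party advisory']
```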

With that, let’s look at the top 10 exploited vulnerabilities in 2020, according to Radware, and try to figure out some additional information for perspective. I will also be very clear that Radware’s blog is extremely frustrating and not immediately helpful, instead requiring a lot of extra work. The fact that they only attributed three exploits to a CVE ID is a dismal commentary on the CVE ecosystem. This analysis of their analysis will serve as a reminder that comprehensive vulnerability intelligence is the foundation of any good security program.

Service Exploit #1: /ws/v1/cluster/apps/new-application

Based on their description, this appears to match VulnDB 184750 “Apache Hadoop YARN ResourceManager REST API Request Handling Remote Command Execution“. The first thing of interest is it was disclosed on October 19, 2016 and does not have a CVE assignment over four years later. No wonder many organizations aren’t aware of this vulnerability and have not sought out their own remediation strategy.

Service Exploit #2: /manager/html

This is summarized as “Apache Tomcat Manager Application Upload Authenticated Code Execution” and goes on to describe it as “This module can be used to execute a payload on Apache Tomcat servers that have an exposed “manager” application. The payload is uploaded as a WAR archive containing a JSP application using a POST request against the /manager/html/upload component.”

Despite this description, that does not cleanly map to any vulnerability in VulnDB. The closest matches are CVE-2017-12615 and CVE-2017-12617, which are abstractions for different platforms but fundamentally “Apache Tomcat HTTP PUT Method JSP File Upload Remote Code Execution“. On the surface this is a match, with Apache Tomcat, a JSP application, and a POST request to achieve code execution. However, those two CVEs cover a JSP file upload, not a WAR archive, and do not mention the /manager/html/upload component. So we’re left wondering if the exploit described is simply a misconfiguration scenario (i.e. intended functionality not secured) or an actual disclosed vulnerability.
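The two scenarios leave different traffic, so telling them apart in logs is straightforward. A rough triage sketch based on the distinction above (the paths and logic are illustrative, not a real detection signature):

```python
def classify_tomcat_request(method, path):
    """Rough triage of Tomcat-related exploit traffic. CVE-2017-12615 /
    CVE-2017-12617 abuse the HTTP PUT method to drop a JSP file (often
    with a trailing slash or similar trick), while the Radware description
    matches the manager application's intended WAR-deployment feature
    being exercised, e.g. left exposed or behind default credentials."""
    if method == "PUT" and path.lower().rstrip("/").endswith(".jsp"):
        return "CVE-2017-12615/CVE-2017-12617 (HTTP PUT JSP upload)"
    if method == "POST" and path.startswith("/manager/html/upload"):
        return "manager app WAR deploy (misconfiguration / default creds?)"
    return "unclassified"

print(classify_tomcat_request("PUT", "/shell.jsp/"))
print(classify_tomcat_request("POST", "/manager/html/upload"))
```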

Service Exploit #3: /level/15/exec/-/sh/run/CR

Based on the description, this is a misconfiguration scenario where an administrator sets up a Cisco router with the HTTP admin interface enabled, but without password protection. This allows an attacker to use the legitimate functionality to run arbitrary commands.

Service Exploit #4: /admin/assets/js/views/login.js

Radware says this “resource belongs to Sangoma FreePBX code and it looks like the attackers are trying to detect vulnerable FreePBX servers and exploit one of the known vulnerabilities.” The first issue is that the script doesn’t immediately track to a VulnDB entry based on titles, which typically reflect the script name. However, let’s consider the URL being seen: … login.js. Rather than attempting to exploit “one of the known vulnerabilities“, I would suggest instead that they are trying default credentials. At least back around 2000, the tried-and-true default credentials of admin/admin were all you needed to access the interface.

This one is curious to me because presumably a company that was detecting exploit traffic and could see e.g. POST requests as demonstrated in Service Exploit #2, would also see that the attackers were trying the default credentials. So we’re left with Service Exploit #4 being of little help and only creating confusion over what is being exploited.

Service Exploit #5: /ftptest.cgi?loginuse=&loginpas=

Radware attributes this to “many cheap Wireless IP web cameras use the same genetic code based on the GoAhead code (the tiny, embedded web server).” This tracks cleanly with VulnDB 181032 “Axis Multiple Products axis-cgi/ftptest.cgi Multiple Parameters Remote Command Execution Weakness“. This is actually a fun rabbit hole, as this disclosure originally comes from an audit of an AXIS A1001 Network Door Controller, and exploitation of this issue requires privileged access to the management interface. With that in mind, we’re back to a default credential scenario that may be the actual issue. Back in 2001, defaults for Axis network cameras were covered by CVE-2001-1543.
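Notice that the probe itself carries both credential parameters but leaves them blank, consistent with credential-less or default-credential probing. Spotting that pattern in logs takes only the standard library; a small sketch:

```python
from urllib.parse import urlsplit, parse_qs

def blank_credential_params(url):
    """Return query parameters that were supplied but left empty,
    e.g. the loginuse/loginpas pair in the probe above."""
    query = urlsplit(url).query
    # keep_blank_values=True is required, or empty params are dropped entirely
    params = parse_qs(query, keep_blank_values=True)
    return sorted(k for k, v in params.items() if all(val == "" for val in v))

print(blank_credential_params("/ftptest.cgi?loginuse=&loginpas="))
# ['loginpas', 'loginuse']
```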

[Update: Z Balazs points out that this finding is likely due to Persirai botnet activity and links to more information.]

Service Exploit #6: /service/extdirect

This is the first of only three exploits covered that they include a CVE ID for. CVE-2019-7238 maps to VulnDB 198437 “Nexus Repository Manager /service/extdirect Insufficient Access Control Request Handling Remote Code Execution“. But, is that really the right ID? If we look at CVE-2020-10204 we are given a very brief summary of “Sonatype Nexus Repository before 3.21.2 allows Remote Code Execution” and a link to the vendor advisory. However, VulnDB 226228 also maps to this and is summarized as “Nexus Repository Manager /service/extdirect Request Handling Remote Command Execution“. We immediately see the /service/extdirect from Radware’s finding in both titles. The vendor’s advisory does not include this endpoint though, but we find it in an exploit published on GitHub that tracks with CVE-2020-10204, and we see it in a different exploit for CVE-2019-7238.

CVE-2019-7238 was fixed in Nexus Repository Manager version 3.15.0 and CVE-2020-10204 was fixed in version 3.21.2. Due to the vague vendor advisories it is difficult to tell if this was a regression situation or something else. But the CVE-2020-10204 vendor advisory gives us the interesting bit in the context of exploitation: “The vulnerability allows for an attacker with an administrative account on NXRM to execute arbitrary code by crafting a malicious request to NXRM.” That is an important distinction! So this is likely CVE-2019-7238 as Radware says, unless there are default credentials which would allow for exploiting CVE-2020-10204 as well.
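Given those two fix versions, triaging a deployment comes down to a version comparison. A naive sketch assuming simple x.y.z version strings (real version parsing needs more care):

```python
def nexus_cves_for_version(version):
    """Which of the two CVEs a Nexus Repository Manager 3.x install is
    missing fixes for, per the fix versions above (3.15.0 and 3.21.2)."""
    v = tuple(int(p) for p in version.split("."))
    cves = []
    if v < (3, 15, 0):
        cves.append("CVE-2019-7238")
    if v < (3, 21, 2):
        cves.append("CVE-2020-10204")  # requires an admin account to exploit
    return cves

print(nexus_cves_for_version("3.14.0"))  # ['CVE-2019-7238', 'CVE-2020-10204']
print(nexus_cves_for_version("3.20.1"))  # ['CVE-2020-10204']
print(nexus_cves_for_version("3.21.2"))  # []
```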

Looking at the NVD entry for CVE-2020-10204 we also see that they scored their CVSSv3 metrics incorrectly, as ‘Privileges Required’ should be ‘High’, not ‘Low’ as they have it.
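That difference is not cosmetic. Re-deriving the base score from the CVSSv3 specification's formula shows how much that one metric moves it; the weights below are from the spec (Scope: Unchanged), and the comparison assumes the rest of NVD's vector is AV:N/AC:L/UI:N/C:H/I:H/A:H:

```python
import math

def roundup(x):
    """CVSS spec 'Roundup': smallest number with one decimal place >= x."""
    return math.ceil(x * 10 - 1e-9) / 10

def cvss3_base(av, ac, pr, ui, c, i, a):
    """CVSSv3.x base score for Scope: Unchanged, weights per the spec."""
    iss = 1 - (1 - c) * (1 - i) * (1 - a)
    impact = 6.42 * iss
    exploitability = 8.22 * av * ac * pr * ui
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

# Spec weights: AV:N=0.85, AC:L=0.77, PR:L=0.62, PR:H=0.27, UI:N=0.85, C/I/A High=0.56
as_scored = cvss3_base(0.85, 0.77, 0.62, 0.85, 0.56, 0.56, 0.56)  # NVD's PR:L
corrected = cvss3_base(0.85, 0.77, 0.27, 0.85, 0.56, 0.56, 0.56)  # vendor's "admin account" -> PR:H
print(as_scored, corrected)  # 8.8 7.2
```

Dropping from 8.8 (High) to 7.2 (High, but barely) changes how many organizations would prioritize the patch.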

Service Exploit #7: /solr/admin/info/system?wt=json

For this one, we get an Apache Bug ID (SOLR-4882) and CVE-2013-6397 as references which is great. That said, it would be very helpful if Radware would link to these resources to make it easier for their readers.

Service Exploit #8: /vendor/phpunit/phpunit/src/Util/PHP/eval-stdin.php

This is the third exploit they match to an ID, CVE-2017-9841, which was disclosed June 27, 2017. Another good reminder that vendor fixes for disclosed vulnerabilities are not being applied, leaving many organizations as low-hanging fruit in the exploit world.

One little nitpick is that the full path they include is likely not how this would manifest on a server. Everything after “src” would be the endpoint being scanned presumably: /Util/PHP/eval-stdin.php

Service Exploit #9: /hudson

With this, we run into another mess and rabbit hole. Radware summarizes this as “Hudson continuous integration tool – multiple vulnerabilities” and further describes Hudson as “a continuous integration tool written in Java, which runs in a servlet container, such as Apache Tomcat or the GlassFish application server. Over the years the project was replaced by Jenkins. The final release, 3.3.3, was on February 15, 2016. Today Hudson is no longer maintained and was announced as obsolete in February 2017.”

Based on this description, this could be any one of at least 50 vulnerabilities going back to February 2014, one of which does not have a CVE ID. 41 of these are in the Jenkins software mentioned above.

Other Service Exploits

This is a curious conclusion to the “top 10” list, as it states “In addition to the new items that we covered in this list, we have also seen items that we already saw and covered in our previous blog Top 10 Web Service Exploits in 2019 such as /ctrlt/DeviceUpgrade_1, /TP/public/index.php and /nice%20ports%2C/Tri%6Eity.txt%2ebak.”

That isn’t exactly a #10 on this list; rather, it is a catch-all for “other stuff we saw including…“. The first listed tracks with VulnDB 170573 “Huawei HG532 Routers /ctrlt/DeviceUpgrade_1 NewStatusURL Element Remote Command Execution (Satori)” which is notable as it is used in Satori, a Mirai botnet variant.

The second tracks with VulnDB 194379 “ThinkPHP /public/index.php call_user_func_array() Function vars[1][] Parameter Remote Code Execution“. Note the different exploit path; according to analysis of the vulnerability by the Knownsec 404 Team, it can actually be exploited via several endpoints.

The third doesn’t immediately track with an entry in VulnDB. Radware gives us “/nice%20ports%2C/Tri%6Eity.txt%2ebak” which we can decode to a more friendly “/nice ports,/Trinity.txt.bak“. A quick Google for that request finds a blog from Dragos titled “Threat Hunting With Python Part 2: Detecting Nmap Behavior with Bro HTTP Logs” explaining this request:

The request for “/nice ports,/Trinity.txt.bak” comes from Nmap’s service detection routine testing how a server handles escape characters within a URI. The actual request is “GET /nice%20ports%2C/Tri%6Eity.txt%2ebak HTTP/1.0\r\n\r\n”.

So this isn’t an actual exploit; rather, it indicates that attackers are using the Nmap port scanner. This is a good reminder that “exploit scanning” doesn’t always cleanly map to a specific vulnerability.
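Decoding and flagging that probe is nearly a one-liner with Python's standard library:

```python
from urllib.parse import unquote

# Nmap's service-detection probe decodes to this fixed path (a Matrix
# reference, not an exploit).
NMAP_PROBE = "/nice ports,/Trinity.txt.bak"

def is_nmap_service_probe(raw_path):
    """Flag Nmap's escaped-URI service-detection request."""
    return unquote(raw_path) == NMAP_PROBE

print(is_nmap_service_probe("/nice%20ports%2C/Tri%6Eity.txt%2ebak"))  # True
```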

Detecting exploitation is critical for every organization, whether via on-premises devices or a managed detection service. What is more critical is having comprehensive and timely vulnerability intelligence that can turn what you detect into actionable information. This is how you not only detect, but evaluate and remediate, assuming of course the vulnerability is known to the vendor or a mitigation can be enacted.

Microsoft, CVE, MITRE, ETERNALBLUE, Headache…

2019-02-14 Update: Thanks to Chris Mills of MSRC (@TheChrisAM), who has been working behind the scenes since this blog was published and has brought clarity to these assignments! MSRC is still potentially touching up some additional documentation to make it easier to see these associations, but here is the definitive answer from him:

CVE-2017-0143 ShadowBrokers : EternalSynergy (Blog)
CVE-2017-0145 ShadowBrokers : EternalRomance (Blog)
CVE-2017-0144 ShadowBrokers : EternalBlue (Blog)
CVE-2017-0146 ShadowBrokers : EternalChampion (Blog)

Note that only the EternalChampion blog does not reference the associated CVE, but he is working on getting that updated. I have also recommended that MSRC update MS17-010 to use the codenames in that advisory as well. Apparently editing the actual bulletins takes a bit more work, but he’s on it! I can’t thank Chris enough for running with this and helping bring clarity to these assignments.

There was initially a lot of confusion over the Equation Group disclosure. Which were legitimate vulnerabilities, which were new, which were known, which were patched, and ultimately how would they be referred to other than by their leaked nicknames? That is the purpose of the Common Vulnerabilities and Exposures (CVE) project (originally Common Vulnerability Enumeration): to give a unique ID to a specific issue so that you can reference a vulnerability without question. A year and a half later? We’re still wondering, apparently.

I contacted Microsoft Security Response Center (MSRC) on August 6, 2017 asking for clarification on the CVE assignment for one of the Equation Group vulnerabilities codenamed ETERNALBLUE, because their own resources contradicted each other. From my mail:

Per an older blog [1], the vulnerability known as ‘EternalBlue’ is assigned CVE-2017-0145. From the blog:

However, in this unique case, the ransomware perpetrators used
publicly available exploit code for the patched SMB “EternalBlue”
vulnerability, CVE-2017-0145, which can be triggered by sending a
specially crafted packet to a targeted SMBv1 server.

A newer blog [2] now lists it as CVE-2017-0144, which I believe to be incorrect. From the blog:

The new ransomware can also spread using an exploit for the Server
Message Block (SMB) vulnerability CVE-2017-0144 (also known as
EternalBlue), which was fixed in security update MS17-010 and was
also exploited by WannaCrypt to spread to out-of-date machines.

Can you confirm the correct assignment for ‘EternanBlue’ [sic], and due to the second blog, the assignment for ‘EternalRomance’, and update your blog(s) accordingly?

All this time later? MSRC never answered my mail, and never fixed one of the two blogs. CVE’s description does not mention the nickname in either entry. So the assigning CVE Numbering Authority (CNA), Microsoft, and the core CVE project (MITRE) still don’t answer this question. To date, the Microsoft advisories for those two CVE IDs still don’t mention the nickname. To add more confusion? Try using Google to find it, and you get a third CVE ID it may be (screenshot below). Although that one result doesn’t actually have ‘EternalBlue’ in it, making us wonder why it is the sole result. The blog that MSRC originally published to add some clarity to the Equation Group disclosure still only references MS17-010 (and a dead link now). Looking at the new location for MS17-010 doesn’t find the nickname in the advisory either.

To this day, I am still fairly sure ETERNALBLUE is CVE-2017-0145 and attribute it as such, but it sure would be nice if MSRC would clean up and clarify this mess.

Further, I have had to chase down two more errant CVE assignments by MSRC in recent months, which was fairly painful. After getting the runaround on both, being told to go ask Microsoft Support via a forum (despite MSRC being the definitive source for this information), not getting a reply, opening a new ticket with MSRC, and reminding them that I was still waiting… those two finally got resolved after a month or more. I really don’t like casting shade on MSRC as over the years, in total, they have been wonderful to deal with. However, the last couple of years have seen a serious decline in their handling of this type of incident, which should be ‘Vulnerability 101’, and a serious uptick in their resistance to clarifying assignments when asked. Finally, if you are wondering why MITRE doesn’t provide some kind of oversight to this? Well, they basically never have, despite repeated requests for just that. Their only oversight is a ‘CNA Report Card’ that is more about statistics of assignments and such, and does not deal with the quality of assignments, incidents of confusion like this, or anything else that would be helpful to the community.

The only upside to all of this? I got to [sic] my own typo from the quoted email.

New libssh Vulnerability – No Logo But Plenty Of Attention

[This was originally published on]

Earlier this week, Andreas Schneider announced the release of a new version of libssh addressing an important security issue, “an authentication bypass vulnerability in the server code”. Pretty quickly we saw several news articles published that covered this issue, as well as third-party blogs that added commentary on the technical side of the vulnerability. Since we were following the issue closely, we wanted to share some of the meta-information we tracked as well as commentary from the ‘social side’ of this disclosure.

First, a few basics and a quick recap. This appears to be the first libssh vulnerability disclosed in 2018. Last year there were at least two vulnerabilities disclosed in libssh2, a different project. Prior to that, in February 2016, a vulnerability was disclosed that impacted both libssh and libssh2, likely due to common code. For this new vulnerability, there was immediate speculation on how bad it is, considering GitHub and others might be using the code, with GitHub providing a quick response addressing the concerns. While there is a CVE assigned for this vulnerability, it lacks many of the references that would give additional information on the technical details and products impacted. Others are wondering why an issue this serious didn’t earn a CERT advisory or, since so many believe it is a critical issue, at least a CERT VU. As expected, regardless of the potential severity, we are seeing some people getting fed up with sensational headlines around vulnerabilities trying to scare consumers to get those ad-revenue-generating clicks.

How prevalent is libssh? According to their homepage, it is used in KDE, GitHub, and X2GO. Additionally, we know that most of the Linux distributions use it, including Debian Linux, Red Hat Enterprise Linux, Ubuntu, SUSE, and openSUSE. Further, Puppet Enterprise, F5 BIG-IP, and F5 BIG-IP AFM also include it in their products. Based on a cursory look at VulnDB entries for libssh and libssh2, it appears that more companies adopt libssh2, including IBM, Xerox, Oracle, and Symantec. As vendors take time to process the libssh vulnerability, we will start to see their own advisories on the issue, such as from F5 confirming that some of their Big-IP Advanced Firewall Manager products are vulnerable and could allow unauthorized logins.
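As vendors triage, the logic for this vulnerability is simple: the libssh advisory for CVE-2018-10933 lists 0.6 as the release that introduced the flaw and 0.7.6 / 0.8.4 as the fixed releases. A naive sketch of that check (simple dotted version strings only, not real advisory tooling):

```python
def libssh_vulnerable(version):
    """Is a libssh version affected by CVE-2018-10933? Per the libssh
    advisory, 0.6 introduced the flaw; 0.7.6 and 0.8.4 fixed it.
    Naive numeric comparison for illustration only."""
    v = tuple(int(p) for p in version.split("."))
    return (0, 6) <= v < (0, 7, 6) or (0, 8) <= v < (0, 8, 4)

for ver in ("0.5.5", "0.7.5", "0.7.6", "0.8.3", "0.8.4"):
    print(ver, libssh_vulnerable(ver))
```

Remember that, as GitHub's response shows, a vulnerable version number alone does not prove the flaw is reachable in a given deployment.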

Authentication Bypass Vulns Everywhere!

Steve Christey Coley, who worked on the CVE project for 17 years, points out that people are quick to make fun of “easy auth bugs” because “yes, some are simple”. He also points out there is “very little research in detection & prevention for these kinds of logic/control/state-machine flaws vs buffer overflows, injection, etc.”, citing in the thread a trivial remote authentication bypass vulnerability from 1994 that impacted AIX, IRIX, and Slackware Linux. He makes a great point. This type of vulnerability should be just as easy for researchers to find as any other, given the time that has passed and the tools readily available today. But as we have learned, this libssh issue is over four years old and is only now being found and disclosed.
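To see why these logic flaws slip past review, consider a simplified, hypothetical sketch of the libssh-style bug: a packet dispatcher that honors SSH2_MSG_USERAUTH_SUCCESS, a message only a server should ever send, regardless of who sent it. This is illustrative Python, not libssh's actual implementation:

```python
# Hypothetical sketch of the flawed state-machine pattern, NOT libssh's code.
MSG_USERAUTH_REQUEST = 50  # message numbers per RFC 4252
MSG_USERAUTH_SUCCESS = 52  # should only ever be SENT by the server

class FlawedServerSession:
    def __init__(self):
        self.authenticated = False

    def handle_packet(self, msg_type, payload=None):
        # Bug: the dispatcher never checks whether this message type is one
        # a CLIENT is allowed to send, so a client can simply claim
        # "authentication succeeded" and be believed.
        if msg_type == MSG_USERAUTH_SUCCESS:
            self.authenticated = True
        elif msg_type == MSG_USERAUTH_REQUEST:
            self.authenticated = self.check_credentials(payload)

    def check_credentials(self, payload):
        return False  # stand-in for real credential verification

session = FlawedServerSession()
session.handle_packet(MSG_USERAUTH_SUCCESS)  # attacker-sent packet
print(session.authenticated)  # True -- auth bypassed without credentials
```

The fix, conceptually, is a per-state allowlist of which message types each side may send, which is exactly the kind of state-machine reasoning Christey Coley notes gets little research attention.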

Dominic White shared “a brief and incomplete history of embarrassing auth bypass bugs:”

It’s clear that some researchers are now focused on finding more of these types of issues though. Twitter user Aris Adamantiadis points out the “exact same bug was found in paramiko a month ago” and that it is an “interesting pattern”. It’s true that the particular Paramiko vulnerability he references is eerily similar to the new libssh issue. In fact, a quick skim of VulnDB vulnerability titles reveals remote authentication bypass vulnerabilities in Juniper Junos, Responsive FileManager, Cisco Digital Network Architecture, Cisco HyperFlex System, Cisco IOS XE, IBM Rational Engineering Lifecycle Manager, Neo4J Server, and Symantec Messaging Gateway, all disclosed in the past month!

Doom & Gloom?

When a vulnerability is disclosed that potentially impacts a lot of organizations, or is trivial to exploit, we tend to see a wide variety of headlines and commentary about how the issue is critical. Does this new libssh disclosure warrant the “doom & gloom” type of Tweets and headlines that suggest a vulnerability is really bad without disclosing additional details that may mitigate the concern? For example, in response to the libssh vulnerability, GitHub confirmed that while they do use the libssh code, they are not vulnerable due to how they use it. In a further statement, GitHub shared that they are actually using a custom version of libssh.

This example stresses that it is important to understand how a third-party library is integrated into a product, whether it has been customized, and whether a vulnerability can ultimately even be triggered. GitHub has provided a clear example of why some vendors will release advisories on such issues that ultimately say “we are not vulnerable”.

Twitter user Bob Rudis cites Project Sonar Scan data that suggests there are around 5,500 Internet-facing vulnerable libssh nodes, saying that amount “isn’t too bad”, but then immediately concludes they are “all vulnerable to the auth bypass issue, so consider them pwnd”. Twitter user Rob Graham points out that “while SSH is used everywhere, libssh is not so common” and that “it’s usually client-side”. The important point here is that this new libssh issue affects server side implementations only.


Ultimately, over the coming months we suspect that we will see additional vendors address this issue in advisories or release notes as they determine if they are impacted. Until then, use common security practices like network segregation, ACLs, and internal auditing, and test whether a system is vulnerable either manually or with a newly-published scanner.

The Anthropomorphization Of A Vulnerability

Most commonly associated with human behavior toward animals or material objects, to anthropomorphize, or to “ascribe human form or attributes to (an animal, plant, material object, etc.)”, can also apply to computer activity. Perhaps one of the best known examples of this is an interaction captured in a single-panel XKCD cartoon:

With the disclosure of the libssh issue, one of the curious trends that caught our eye was that this vulnerability was anthropomorphized quite a bit, almost always in the same manner. There is clear value in doing this in many cases, as it can sometimes better explain the simplicity of the exploit to a non-technical crowd. But as we all know, we must also be careful when over-simplifying and using analogies that are too far-reaching, as they could lead to misinterpretation.

The common theme in the libssh tweets can be seen in Tweets from @dev_console, @0xAmit, and @sphinxgaiaone, but our favorite came from @DAkacki:

This takes a fairly simple vulnerability, in concept, and converts it to a human interaction to explain it. As you can see from the Likes and Retweets, it has received positive attention. This shows that the approach can be an effective way to explain an issue while also underscoring its severity, or at least the severity of its exploitation. However, most anthropomorphisms that we have seen thus far don’t attempt to speak to the actual impact by addressing how many servers are actually vulnerable or whether there are mitigating circumstances, among other things.

What’s Next?

This new libssh vulnerability didn’t get a name or a fancy logo, but it sure did receive media attention as if it had. It was also the focus of quite a few blog posts and several articles that made it appear that this vulnerability was going to cause substantial impact to organizations, and at first, incorrectly, to all GitHub users. While this particular libssh vulnerability has been rated a base CVSS score of 10, there is still debate in the security community as to whether this bug has been overhyped. Regardless, if you are running the server-side implementation of libssh we recommend that you do your own analysis to see if the vulnerability can be triggered and update accordingly.

RIP – You Will Be Missed

[This was originally published on]

On February 22, Will Dormann tweeted that the main CERT Coordination Center (CERT/CC) website had been shuttered. Upon checking ourselves, we found the website now redirecting to the Software Engineering Institute at Carnegie Mellon, the parent group of CERT. As a 14-year veteran at CERT/CC, Dormann understandably had some feelings about the situation.

When we further asked if there was a press release from CERT/CC, Carnegie Mellon, or DHS on this change, Dormann replied that there was none. Days later, there are still no apparent press releases or even news articles covering this event. After publishing this blog, someone from the SEI contacted us to say that there had been banners on both the SEI and CERT sites linking to an article announcing upcoming changes. While that is true, day-to-day use of the CERT KB site gave no indication of the changes that we noticed, nor did any mainstream news article, social media chatter, or general word-of-mouth.

We were immediately curious if the CERT Vulnerability Notes Database would continue to operate, which Dormann confirmed it would. He went on to say that the site was apparently “deemed to be unnecessary” and expressed that he suspects the next phases will be that the “World forgets that CERT is a thing” and then “profit”. For those not familiar with CERT/CC, or who perhaps forgot the legacy they represent, the organization is a non-profit founded in 1988 under DARPA’s direction to help respond to the Morris worm incident. It was the first such organization designed to offer help in responding to computer security incidents. Over the years they have provided a wide variety of help to countless organizations that found themselves victims of computer crime. Almost three years later, CERT/CC created their own domain that acted as a clearinghouse for early incident response information.

Their initial website was simple and to the point. It remained that way for years, as seen in a 1997 copy captured by the Internet Archive. Over the years it evolved into a bigger variety of offerings including news, a vulnerability database of sorts, security podcasts, training, and more. CERT/CC is part of the CMU Software Engineering Institute, which receives money from the DHS as well as Hanscom Air Force Base to the tune of 1.7 billion US dollars a year per a 2015 Freedom of Information Act (FOIA) request. It will be interesting to see what becomes of the CERT/CC organization as a whole. After publishing this blog, SEI reached out to clarify our statement that the SEI receives $1.7 billion per year from DHS and Hanscom AFB: “The contract you cite in the article is with the DoD and administered by Hanscom. The contract is for five years with a possible extension for five additional years. The $1.7 billion figure is the total value of contract plus extension (i.e. 10 years), not an annual amount.”

While the standalone CERT/CC website has vanished with very little warning to many of us, their Vulnerability Notes Database remains up for now. While it is by no means intended to be a comprehensive database, it remains a steady source of major vulnerability disclosures that some organizations coordinate through CERT/CC. For now, removing the CERT/CC website appears to be a simple consolidation of web presence, with most of the content copied over, yet many are still pondering what will come next for the organization.

As we saw with the recent government shutdown and the National Vulnerability Database (NVD) not processing vulnerabilities, there is a potential that organizations relying on such government-funded databases will no longer have a reliable source of vulnerability intelligence. We believe that the biggest concern is ensuring that the Vulnerability Reporting Form remains available and that CERT/CC has the resources necessary to assist with coordinating the disclosure of vulnerabilities having widespread impact across multiple vendors. If you are feeling a bit of nostalgia and want to visit the most recent version of the site before it was redirected, you can still see some of the content using the Wayback Machine.

March 6 Update: Richard Lynch, a PR manager with the Software Engineering Institute at Carnegie Mellon, contacted us shortly after this blog ran. Yesterday we updated the blog to reflect some of his feedback and today we wanted to share his full feedback:

Hello. I am the PR manager for the Software Engineering Institute. I was just made aware of your non-bylined article titled “RIP”. Your article contains some serious errors, and I would appreciate it if you would make corrections. First, regarding your statement that the SEI receives $1.7 billion per year from DHS and Hanscom AFB: The contract you cite in the article is with the DoD and administered by Hanscom. The contract is for five years with a possible extension for five additional years. The $1.7 billion figure is the total value of contract plus extension (i.e. 10 years), not an annual amount. Second, you state that the site vanished with no warning and that there were no press releases or even news articles. However, on January 25, about a month before the new website launch, we announced the upcoming change through a news article on both the SEI and CERT websites. The article was accompanied by banner announcements on both sites for a month prior to the change over.

Researchers Find One Million Vulnerabilities?!

[This was originally published on]

No researcher has yet claimed to find one million vulnerabilities, but we are sure to see that headline in the future. Every so often we see news articles touting a security researcher who found an incredible number of vulnerabilities in one product or vendor. Given that most disclosures involve a single vulnerability, or sometimes a dozen or two, a headline claiming ‘thousands’ of vulnerabilities is eye-catching, suspect, and problematic to the industry.

Perhaps one of the biggest cases of this came between May and July in the form of headlines such as “‘Thousands’ of known bugs found in pacemaker code” (BBC) and “Code Blue: Thousands of Bugs Found on Medical Monitoring System” (Security Ledger). The headlines were clear, thousands of vulnerabilities in a critical medical device.

Reading past the headline of the Security Ledger article, however, the story wasn’t so clear: “In brief: The Department of Homeland Security warned of hundreds of vulnerabilities in a hospital monitoring system sold by Philips. Security researchers who studied the system said the security holes may number in the thousands.” After another mention of “in the thousands”, a less dramatic paragraph followed, saying that ICS-CERT warned of 460 vulnerabilities, while one of the researchers again emphasized the bigger number:

The Department of Homeland Security’s Industrial Control Systems Cyber Emergency Response Team (ICS-CERT) issued an alert on July 14 about the discovery of 460 vulnerabilities in the Philips Xper-IM Connect system, including 360 with a severity rating of “high” or “critical” severity. But an interview with one of the researchers who analyzed the Xper system said that the true number of vulnerabilities was much higher, numbering in the thousands.

After digging into these claims a bit, it came to light that a majority of them were due to the use of outdated third-party libraries. While these library vulnerabilities may impact a device like a pacemaker, the opportunity to actually exploit any one of them may be limited or non-existent. If an attacker can’t reach the vulnerable code, then it likely isn’t an issue. As such, while there are real issues with vulnerabilities in third-party libraries, claims of ‘thousands’ of vulnerabilities are often creative at best, and untrue at worst.
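The reachability point above can be sketched in code: before counting a third-party library flaw against a product, a first-pass check is whether the application references the vulnerable routine at all. This is a minimal, purely illustrative triage filter; the file contents and the `parse_packet` symbol are hypothetical, and a lexical hit still needs confirmation that attacker-controlled input can reach the call site.

```python
import re

def references_symbol(source_files, vulnerable_symbol):
    """Return the files that mention the vulnerable library symbol at all.

    A lexical match is only a first-pass filter; a hit still requires
    manual or static-analysis confirmation of a real attack vector.
    """
    pattern = re.compile(r"\b" + re.escape(vulnerable_symbol) + r"\s*\(")
    return [path for path, text in source_files.items() if pattern.search(text)]

# Toy example: only app.c actually calls the (hypothetical) flawed routine.
sources = {
    "app.c":  "int main(void) { parse_packet(buf); return 0; }",
    "util.c": "/* no use of the vulnerable routine here */",
}
print(references_symbol(sources, "parse_packet"))  # ['app.c']
```

Even this crude filter would cut a “thousands of bugs” count down to the subset a device actually ships and executes.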

The alarming headlines don’t help anyone with a potentially vulnerable pacemaker, and the lack of proper analysis of those flaws to determine which are critical is a disservice to the medical and InfoSec industries.

The Curious Case of Tizen OS Security

Tizen is an operating system that many have likely never heard of. Based on the Linux kernel and first released on January 5, 2012, it is designed to offer a consistent user experience regardless of the device running it.

According to Wikipedia, it “works on a wide range of devices, including smartphones, tablets, in-vehicle infotainment (IVI) devices, smart TVs, PCs, smart cameras, wearable computing (such as smartwatches), Blu-ray players, printers and smart home appliances (such as refrigerators, lighting, washing machines, air conditioners, ovens/microwaves and a robotic vacuum cleaner).” As such, this operating system is poised to have a massive digital footprint moving forward, even more so than the millions of Samsung TVs that run it currently.

Since it is based on Linux, one might expect it to be fairly mature code from the start, and not prone to serious vulnerabilities. While Linux has had its share of vulnerabilities over the years, a majority of them are local issues resulting in a denial of service or information disclosure. For its first five years, Tizen certainly seemed more mature, with a single low-risk vulnerability disclosed in 2012. This year, however, has seen a spectacular explosion in Tizen vulnerabilities… maybe?

In April, researcher Amihai Neiderman told Vice “it may be the worst code I’ve ever seen” and told Threatpost that he “found 40 bugs, and most of them look exploitable”. Neiderman presented his findings at the Kaspersky Security Analyst Summit in a 20-minute talk that gave details on only four of the issues, alluding to many others. During his talk, he also confirmed that he had verified only a single vulnerability as exploitable, and that the rest merely look exploitable. All of that produced just six actionable vulnerabilities based on the information made public. Last month, Tizen hit the news again, this time with a spectacular headline that the operating system contains 27,000 bugs, according to researcher Andrey Karpov!

From the article: “After finding almost a thousand bugs in Tizen code, Karpov contacted Samsung to pitch for the sale of static analyser PVS-Studio software, but Youil Kim from Samsung declined the offer.” You may note that he contacted Samsung after finding “almost a thousand bugs”, a far cry from the 27,000 in the headline. The Register goes on to explain this disparity better:

It does look bad. According to Andrey Karpov, founder and CTO of Program Verification Systems, the Russia-based maker of static code analyzer PVS-Studio, Tizen’s codebase contains approximately 27,000 programming blunders. This is, though, based on extrapolating from 900 errors found in 3.3 per cent of the 72.5 million lines of C/C++ code (excluding comments) that compose the Tizen project.
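The extrapolation behind the headline is simple to reproduce using only the figures quoted above (not Karpov’s raw data), and doing so makes the key assumption obvious: the 27,000 figure presumes defect density is uniform across the entire codebase, and says nothing about exploitability.

```python
# Back-of-the-envelope reproduction of the headline math.
errors_found = 900           # defects PVS-Studio flagged
fraction_scanned = 0.033     # 3.3% of the ~72.5M lines of C/C++ analyzed

# Linear extrapolation assuming uniform defect density across the codebase.
estimated_total = errors_found / fraction_scanned
print(round(estimated_total))  # ≈ 27273, reported as "about 27,000"

# Applying Karpov's own blogged 10-15% false-positive estimate trims it further.
print(round(estimated_total * 0.85))  # ≈ 23182 at a 15% false-positive rate
```

Neither number distinguishes code-quality defects from vulnerabilities with legitimate attack vectors, which is the crux of the complaint that follows.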

This is certainly an eye-catching figure and one that might scare the most seasoned user of the operating system, if they actually even knew they were running it. What isn’t mentioned in the news articles or any form of disclosure from Karpov is the reality of such claims. While he has shared a somewhat detailed list of the nature of the flaws, there is no indication which of them, if any, are exploitable.

As we often see, and as we disclaim in many of our vulnerability entries in VulnDB, issues found via static code analysis cannot be taken at face value without additional validation. Since Karpov used PVS-Studio to find these code defects, the same disclaimer applies. In fact, when questioned on the false positive rate of his findings, Karpov blogged that 10–15% may be invalid.

First, even if these flaws are buffer overflows, memory corruption issues, or other serious flaws that can lead to code execution, it doesn’t mean that any of these discovered or extrapolated issues have legitimate attack vectors.

Second, the more time you spend in vendor bug trackers watching the discussion of such reports, the more you are exposed to “vulnerabilities” that are relegated to a “theoretical” status as no one, researcher or developer, can demonstrate a user-controlled code path to reach the flaw.

Yes, we’re well aware of the pitfalls around calling a vulnerability “theoretical”! In the meantime, we strongly encourage news outlets to report such stories, but to do so in a more mindful and responsible way. Explosive and potentially misleading headlines simply do not help the world of security. As Brian Krebs recently pointed out, in a very similar vein to the above, “beware of security by press release”.

Analysis Of The RANDom Report on Zero-days and Vulnerability Rediscovery

[This was originally published on]

On March 9, 2017, RAND released a report (PDF) titled “Zero Days, Thousands of Nights: The Life and Times of Zero-Day Vulnerabilities and Their Exploits” by Lillian Ablon and Andy Bogart that received a fair amount of press. The RAND press release goes on to describe it as “the first publicly available research to examine vulnerabilities that are still currently unknown to the public”. While the report covers many topics and angles around this discussion, one specific bit that caught our attention was the data put forth around vulnerability rediscovery. From the press release:

Based on the dataset, RAND researchers have determined that zero-day vulnerabilities have an average life expectancy—the time between initial private discovery and public disclosure—of 6.9 years. That long timeline plus low collision rates—the likelihood of two people finding the same vulnerability (approximately 5.7 percent per year)—means the level of protection afforded by disclosing a vulnerability may be modest and that keeping quiet about—or “stockpiling”—vulnerabilities may be a reasonable option for those entities looking to both defend their own systems and potentially exploit vulnerabilities in others’.

Specifically, their analysis suggests that collisions among researchers are rare, a rate that seems entirely too low even at first glance. Unfortunately, they use a closed dataset, making it impossible to validate their findings. This low collision rate caught the eyes of others in the industry as well. Many researchers and others involved in vulnerability aggregation generally have the impression that rediscovery isn’t that rare but, as RAND notes, there isn’t a lot in the way of research on this topic. In 20 years of aggregating vulnerabilities in one form or another, RBS has not specifically tracked this data point. We track a great deal of metadata about each vulnerability disclosed, but do not currently have a classification for research collisions.

RAND is presenting at the Black Hat Briefings USA in Las Vegas this year. On the back of public discussion and some solid criticism of their report, we at RBS were also curious about the rediscovery rate. Since the dataset used for the RAND report is not public and anecdotal ‘data’ does not cut it, we decided to create a new public dataset based on VulnDB to help us better evaluate their report and findings; more on that later.

First, let us examine how RAND describes their own dataset:

In an effort to address the question, RAND obtained rare access to a dataset of information about zero-day software vulnerabilities and exploits. It is a rich dataset, as some of these exploits have been found by others, but others have not. The dataset spans 14 years (2002–2016) and contains information about more than 200 zero-day exploits and the vulnerabilities they take advantage of, over half of which are publicly unknown. The data we received had a final count of 207 exploits, after approximately 20–30 were removed due to operational sensitivity. [..] This dataset includes exploits for 64 vendors, with a range of 1 to 55 exploits per vendor. Microsoft (n = 55), Linux (n = 39), Apple (n = 14), and SUN/Oracle (n = 11) each had the most number of exploits, while the majority of the other vendors each had one or two exploits. These include well-known vendors such as Mozilla, LinkSys, Google, Citrix, AOL, Ethereal, Adobe, Alt-N Technologies, CryptoCat, and RealPlayer/RealServer, as well as some lesser-known vendors.

For ease, there are several key points pulled from this statement:

  • 207 “zero-day” exploits, with over half of them not publicly disclosed; we interpret this to mean that the rest are now publicly known
  • 14 years worth of data (2002-2016)
  • 20-30 exploits “removed” for operational sensitivity
  • 64 different vendors had exploits included

Reviewed together, these key points become very interesting: in the world of exploits, they suggest the data came from an exploit broker or, more likely, a government-run or -funded shop that produces a considerable number of high-quality exploits itself. However, later, when discussing the limitations of the research, the authors write:

Ideally, we would want similar data on Red (i.e., adversaries of Blue, or other private-use groups), to examine the overlap between Blue and Red, but we could not obtain that data.

This quote suggests that the data was captured by a Blue team, which used a certain level of technology that allowed them to identify a high number of zero-day exploits used in the wild, against their network.

Using the dataset described above, RAND analyzed it and produced the following results:

Finding #4: For a given stockpile of zero-day vulnerabilities, after a year, approximately 5.7 percent have been discovered by an outside entity. [..] In our analysis, collision rates changed significantly depending on the interval time used (from 40 percent to less than 1 percent), and so the timing of “flushing” a stockpile of dead vulnerabilities matters. We found a median value of 5.76 percent overlap (6.79 percent standard deviation) given a 365-day time interval, and a median value of 0.87 percent overlap (5.3 percent standard deviation) given a 90-day time interval. A 14-year interval (i.e., all of our data in one time interval) yielded a 40 percent overlap. With the exception of the 14-year interval, our data show a relatively low collision rate. This may be because those in the private exploitation space are looking for different vulnerabilities from those hunting for vulnerabilities to share as public knowledge, as well as using different techniques to find the vulnerabilities (e.g., vulnerabilities found via fuzzing, or automatic software testing, are often different than those found via manual analysis).

There is a lot going on in this single finding, and it is worth dissecting further. The overlap value as a function of the interval time is certainly an interesting data point, but without knowing anything more about the dataset used, it is of limited use.

Here are a few points to consider:

  • The overlap varies from < 1% to over 40% depending on the interval. That wide range alone makes one question the credibility of this claim.
  • On the surface it may sound logical that as more time passes, more vulnerability collisions occur. However, consider that over the span of 14 years, many versions of products assumed to be in the dataset (e.g. Windows, Linux, Office, Flash Player) may have become obsolete.
  • To properly perform this analysis, the dataset would require having the vulnerability discovery and exploit creation date for the first and second researcher.
  • Based on RAND’s description, they would have the date for the first researcher, but likely not the second. If they worked with vendors to determine the collision rate, they would have the second researcher’s date for when it was reported to a vendor, or when it was discovered internally by the vendor, but most likely not have the discovery and exploit creation date for most of the vulnerabilities.
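To make the interval sensitivity from Finding #4 concrete, here is an illustrative computation on a handful of synthetic discovery records (this is emphatically not RAND’s closed dataset, and the dates are invented). The point it demonstrates is that the very same records yield wildly different “overlap” percentages depending on the window chosen:

```python
from datetime import date

# (vuln_id, private_discovery_date, public_disclosure_date or None)
discoveries = [
    ("V1", date(2015, 1, 10), date(2015, 3, 1)),   # rediscovered in 50 days
    ("V2", date(2015, 2, 1),  date(2015, 8, 15)),  # rediscovered in ~195 days
    ("V3", date(2015, 5, 5),  None),               # never publicly found
    ("V4", date(2015, 6, 1),  date(2016, 4, 1)),   # rediscovered in ~305 days
]

def collision_rate(records, window_days):
    """Fraction of privately held vulns independently found within the window."""
    collided = sum(
        1 for _, found, public in records
        if public is not None and (public - found).days <= window_days
    )
    return collided / len(records)

print(collision_rate(discoveries, 90))   # 0.25 — only V1 collides within 90 days
print(collision_rate(discoveries, 365))  # 0.75 — V1, V2 and V4 collide within a year
```

With only the window changing, the “collision rate” triples, which is exactly why the 0.87% vs. 5.76% vs. 40% spread in the RAND finding needs the underlying data to be meaningful.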

RAND makes one other observation about the low collision rate they observed, saying this “may be because those in the private exploitation space are looking for different vulnerabilities from those hunting for vulnerabilities to share as public knowledge”.

Finding #5: Once an exploitable vulnerability has been found, time to develop a fully functioning exploit is relatively fast, with a median time of 22 days

On the surface, this claim does not appear to have merit. Due to a wide range of factors during exploit development, it can be completed very quickly or take a significant amount of time; based on our past experience, that range is anywhere from half an hour to days, or even several weeks or longer to ensure a reliably working exploit. With such a small dataset, and without additional metadata, such a claim, while perhaps valid within their analysis, simply cannot be used as a metric for larger analysis of the issue.

In VulnDB, we have implemented our own standard to track and provide Vulnerability Timelines and Exposure Metrics (VTEM). We believe it is key to better understand metrics on how vendors respond to vulnerabilities, e.g. the average time organizations have to apply available fixes before exploits are published. While one can surmise that the exploits in question are high-end (e.g. overflows, memory corruption, use-after-free), it is simply impossible to determine whether 22 days for exploit development is consistent with any other vulnerability rediscovery dataset. In a larger view of vulnerability research, finding vulnerabilities may be part of your day job, but it may not require you to write a functioning exploit. Just to highlight the sample bias potentially at play: it would be trivial to create a dataset twice the size of the one used that ‘proves’ a median time of a single day for exploit generation after the vulnerability was found. Just come up with a list of 500 cross-site scripting (XSS) vulnerabilities and use that as your dataset. Voila!

Finding #1: [..] In the course of investigating life status for our vulnerabilities, we found that Common Vulnerabilities and Exposure (CVEs) do not always provide complete and accurate information about the severity of vulnerabilities.

This is not a new finding by any stretch, but it does demonstrate that more and more of our industry are realizing the shortcomings of relying on CVE/NVD for vulnerability intelligence. If you want to understand the gap further just check out any of our VulnDB QuickView reports.

To contrast their data, RAND cites the previous research on this topic as:

Literature on collision rate focuses mostly on vulnerabilities reported to vulnerability reward programs or publicly found and reported within a code base. Finifter, Akhawe, and Wagner (2013) found that roughly 2.25–5 percent of all vulnerabilities reported to vulnerability reward programs had been discovered by others. Past RAND rough order-of-magnitude estimates put the probability that a vulnerability is discovered by two parties within a year at approximately 10 percent (Libicki, Ablon, and Webb, 2015). Researchers in 2015 created a heuristic model that found a 9 percent overlap in non-security-tested software and 0.08 percent in more secure software (Moussouris and Siegel, 2015).

Shortly after the RAND paper was published, another paper titled “Taking Stock: Estimating Vulnerability Rediscovery” by Trey Herr, Bruce Schneier, and Christopher Morris appeared. Its conclusions differ considerably from RAND’s, and it uses a much larger dataset.

From the paper’s description:

This paper presents a new dataset of more than 4,300 vulnerabilities, and estimates vulnerability rediscovery across different vendors and software types. It concludes that rediscovery happens far more often than previously estimated. For our dataset, 15% to 20% of vulnerabilities are discovered independently at least twice within a year. For just the Android sample, 13.9% of vulnerabilities are rediscovered within 60 days, rising to 19% within 90 days, and above 21% within 120 days. Chrome sees a 12.87% rediscovery within 60 days; and the aggregate rate for our entire dataset generally rises over the eight-year span, topping out at 19.6% in 2016. We believe that the actual rate is even higher for certain types of software.

With RAND’s closed dataset, and Herr et al.’s dataset created from open data (but not distributed with the paper), RBS thought it would be interesting to offer a dataset focused on Microsoft that is open and available. It was created to show that serious bias can be introduced by a given dataset. We are not stating that this dataset is accurate and any other is not; simply that many factors must be considered when creating vulnerability statistics. Without their datasets published, other researchers cannot validate either paper’s findings or build on their data. While the RBS dataset is being made available, it is a draft / proof-of-concept more than anything. The dataset, titled “PoC Vuln Rediscovery Dataset Using Microsoft Creditee Data”, has additional notes and caveats to illustrate some of the pitfalls when performing this type of research.

The following two charts based on our dataset show the Total Vulnerabilities in Microsoft Advisories and the approximate Percentage of Vulnerabilities Rediscovery by Year:

While creating this dataset, we made some observations and came up with questions for the industry to consider:

  • Microsoft has changed their acknowledgement format from time to time and does not use a consistent one, so this dataset could be enhanced.
  • Microsoft does not always credit the person who found a vulnerability. Examples include if it wasn’t reported directly to them or researchers disclosed before Microsoft was ready.
  • When there is a collision, Microsoft does not indicate which researcher reported first.
  • In 2016, Microsoft changed their format to make it more difficult to determine collisions from the same company. While fairly rare, a company that maintains several research groups (e.g. HP, Qihoo, Tencent) may be the source of a collision.
  • Starting in 2016, Microsoft became less precise in acknowledgements, not even listing the collision researchers together.
  • We found evidence of “double dipping” in third-party bounty programs, something long suspected but we don’t believe has been proven. (CVE-2014-0270 credits Jose A. Vazquez of Yenteasy working with HP’s Zero Day Initiative and separately as working with VeriSign iDefense Labs).
  • There may be additional collisions, but they are hard to determine based on MS and ZDI advisories. For example, CVE-2015-6136 is credited to “Simon Zuckerbraun, working with HP’s Zero Day Initiative” and “An anonymous researcher, working with HP’s Zero Day Initiative”. Of the seven ZDI advisories covering this one CVE, one is attributed to anonymous and the rest to Zuckerbraun, suggesting they are distinct reports. Having to evaluate dozens of cases like this to refine a dataset is time-consuming.
  • Did either the RAND or Herr et al’s dataset show a big uptick in vulnerability rediscovery in 2014 like our Microsoft dataset does?
  • Nine different researchers discovered a single vulnerability in one case (CVE-2014-1799). Did the prior research account for the number of researchers in a single collision?
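The collision-counting idea behind our PoC dataset can be sketched in a few lines: any CVE credited to more than one distinct researcher is a candidate rediscovery. The acknowledgement strings below are illustrative stand-ins, not actual Microsoft advisory text:

```python
from collections import defaultdict

# Illustrative (cve, creditee) pairs; real advisory text differs.
credits = [
    ("CVE-2014-1799", "Researcher A"),
    ("CVE-2014-1799", "Researcher B"),                   # collision
    ("CVE-2014-0270", "Jose A. Vazquez via ZDI"),
    ("CVE-2014-0270", "Jose A. Vazquez via iDefense"),   # "double dipping"
    ("CVE-2015-0001", "Researcher C"),                   # single creditee
]

# Group distinct creditees per CVE; more than one distinct credit = candidate
# rediscovery (which still needs manual review, as the caveats above show).
by_cve = defaultdict(set)
for cve, who in credits:
    by_cve[cve].add(who)

collisions = {cve: sorted(who) for cve, who in by_cve.items() if len(who) > 1}
print(sorted(collisions))  # ['CVE-2014-0270', 'CVE-2014-1799']
```

As the bullet points note, the hard part is not this counting step but normalizing inconsistent acknowledgement formats and deciding which multi-credit cases are genuinely independent reports.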

What is quite clear from each of the datasets is that vulnerability rediscovery is not only prevalent, but – depending on a number of factors – may be considerably more frequent than we imagine. To further illustrate this, consider that on June 28th, HackerOne tweeted out a statistic of their own regarding rediscovery.

The HackerOne statistic makes it pretty clear (quite concerning, actually!) that rediscovery, or duplicates as it is known in the bug bounty world, is extremely prevalent. A higher collision rate in bug bounty programs run by HackerOne or other companies actually makes a lot of sense for the most part. The expectation for this dataset is mostly lower-maturity bugs that are easier to find, such as XSS and SQL injection, heavily focused on company websites.

We have spent a lot of time researching bug bounties over the years, and researchers are definitely motivated by money. Researchers look for bugs when there is money to be made, or when there is some very cool technology or fame to be gained. With more and more bounty programs encouraging researchers to look for vulnerabilities, we would expect more eyes looking for the exact same issues, often using the same tools.

In contrast, the assumed RAND dataset is expected to contain high-end, much more complex vulnerabilities than most bug bounties see. These higher-end vulnerabilities typically require more technical skill to find, so it makes sense to see a lower rediscovery rate. We at RBS have been working hard on implementing our Code Maturity metrics into VulnDB. Once completed, we expect to see that rediscovery rates are much lower for products with higher code maturity.

The concept of finding higher-severity bugs in more mature products, as well as websites, also applies to bug bounties. The “Taking Stock: Estimating Vulnerability Rediscovery” paper cites data from Bugcrowd, which found that:

Rediscovery happened least often with their highest severity bugs, 16.9% of the time. For second- and third-tier vulnerabilities (based on a five-tier system), the rediscovery rate jumped to 28.1% and 25.8%, respectively.

Based on RAND’s report and our own impromptu dataset, it is clear that the data you work with can strongly influence the observations and results of your research. As we have long maintained, vulnerability research of this nature is never perfect, largely due to the challenges of maintaining high-quality vulnerability data. With this in mind, such research should definitely move forward! However, it is extremely important to disclaim and caveat the data and resulting findings to make it clear where the original data may be lacking or how a conclusion may be based on incomplete data.

If you are attending Black Hat and are interested in further discussions on this topic, then consider attending a panel moderated by Kim Zetter that plans to dive further into the topic and reports.

The Steady Rise of Bounty Programs, and the Counterpart

[This was originally published on]

Companies that said seven years ago they would not pay for vulnerability information have been steadily expanding their programs to pay for more and more of it, recently making Edge bounties permanent.

Service-oriented companies like Uber, which rely on a significant amount of user interaction and transactions via mobile apps, also utilize bounty programs. Surprisingly, many industries, and even the U.S. military, have entered the bug bounty game; in 2016 the Department of Defense (DoD) announced its own vulnerability bounty program, to be run through HackerOne.

As can sometimes happen with high-profile bug bounty programs, such as United Airlines’ in 2015, they can come with a bit of drama, and the initial announcement of the DoD program faced some turmoil. According to the DoD’s own press release, “participants in the bug bounty will be required to register and submit to a background check prior to any involvement with the pilot program.” An Ars Technica news article goes on to enumerate more requirements, citing the HackerOne project for the bounty:

  • You must have successfully registered as a participant through this security page.
  • You must have a US taxpayer identification number and a Social Security number or an employee identification number and the ability to complete required verification forms.
  • You must be eligible to work within the US, meaning you are a US citizen, a noncitizen national of the US, a lawful permanent resident, or an alien authorized to work within the US.
  • You must not reside in a country currently under US trade sanctions.
  • You must not be on the US Department of the Treasury’s Specially Designated Nationals list [a list of people and organizations identified as being involved with terrorism, drug trafficking, and other criminal activities].

Interestingly enough, when originally viewing the HackerOne DoD Bounty page (dead link) after the announcement it showed:

This type of confusion and lack of guidelines can cause a serious lack of trust among security researchers, who are notoriously paranoid, and often for good reason. For a program that the DoD describes as “innovative“, the lack of organization coupled with the registration requirements was concerning for many.

While the excitement of launching a bug bounty can lead to a rushed implementation, it is critical to be prepared and have clear guidelines in place from the very beginning. To be clear, the page has now been updated and provides the expected guidelines and other information. Interestingly enough, there are also some response-efficiency metrics posted publicly on the HackerOne bounty page. You can see that their average first response is three days, but resolution takes approximately four months on average. The continuing debate over vulnerability disclosure exists in the bug bounty world as well.

Over the past few years, almost in line with the increase in bounty programs, we’re seeing more open sales of vulnerability information, as well as of the information obtained by using those vulnerabilities. An article from mid-2016 in The Register cited Dell SecureWorks research that boiled hacking activity down into a menu-style offering:

  • Banking credentials are sold for between 1% and 5% of the account balance.
  • American Express Cards fetch $30, towards the upper end of prices for plastic card credentials, which start at $7 and rise depending on the type of card and the amount of associated data offered for sale.
  • DDoS (distributed denial of service) attacks can be contracted for as low as $5 an hour, the same price as remote-access trojans.
  • Angler exploit kits – a common hacking tool that’s used to sling malware from compromised or hacker-controlled websites – are licensed from $100.
  • Crooks are also expanding their offerings with a greater range of products, including ATM skimming devices for $400 or less.
  • Hacking a corporate email account costs $500 per mailbox, about four times the price of hacking into a Gmail or Hotmail account ($123).

Even on social media, users are becoming very open about their hacking activity. This includes the rather benign poking at notable media characters like Edward Snowden, and computer security professionals such as Kevin Mitnick.

The same person, @1×0123 on Twitter (account now suspended, so links are 404 but kept for the record), announced the compromise of with 150,000 accounts and plaintext passwords, an unnamed system with remote access, implications that the New York Times web site has an SQL injection vulnerability, and the Los Angeles Times web site has a remote vulnerability that allows full access.

If that wasn’t enough, consider that this was apparently one person, and they also announced their own exploit market, to “improve safety &amp; security products”. This comes full circle to the Department of Defense’s bounty program announcement and its likely requirements. The same person also demonstrated a couple of cross-site scripting vulnerabilities on Pentagon web sites.

Since the initial DoD program, there have been more government bug bounty programs launched and or expanded. In January 2017, the U.S. Army launched a bounty and the U.S. Air Force launched a bounty-based contest for a given period of time. While the Army program seems to be part of the broader Hack the Pentagon initiative, no signs yet that the Navy, Marines, or Coast Guard have started a bounty program. Additionally, it is interesting to note the number of valid reports as compared to the payout for the Hack the Pentagon bounty:

Out of all the submissions, 138 were found to be “legitimate, unique and eligible for a bounty,” and resolved according to Defense Secretary Ash Carter. Those vulnerabilities earned $75,000 in total bounty rewards, paid promptly by HackerOne at the conclusion of the program.

That means the average payout was $543.48, and the total program payout of $75,000 was only 0.005% of the cost of some military ordnance. Formal bug bounty programs are largely considered to be on the side of good and the domain of whitehat hackers, and are increasingly considered “worth the risk”. It is important to note there is also a steady rise in their counterparts (as always, take commentary on the sales of exploits with a grain of salt).
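The arithmetic above can be sanity-checked with a short script; note that the $1.5 billion comparison cost is only what the 0.005% figure implies, not a sourced price for any particular ordnance:

```python
# Back-of-the-envelope check of the Hack the Pentagon figures.
total_payout = 75_000   # USD, total bounty rewards paid out
valid_reports = 138     # submissions deemed legitimate, unique, and eligible

average_payout = total_payout / valid_reports
print(f"Average payout per valid report: ${average_payout:,.2f}")  # $543.48

# Working backward: what comparison cost would make $75,000 equal 0.005%?
# (Hypothetical implied value, not a sourced figure.)
implied_cost = total_payout / 0.00005
print(f"Implied comparison cost: ${implied_cost:,.0f}")  # $1,500,000,000
```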

More exploits, and the fruits of their use, are being sold on the open or black market to the highest bidder. Every day, RBS hopes that more data comes to light about these sales, even historical ones. We have been following the U.S. Government’s foray into the bug bounty world closely and have been pleased to see its continued expansion. It raises the question of how many other countries will view the U.S. results as a success and begin to create programs of their own.

Recently, the UK Government announced it was working with Katie Moussouris of Luta Security on its first vulnerability coordination pilot. Meanwhile, tracking vulnerabilities and the subsequent data breaches continues to be a healthy challenge. There have already been more than 10,500 new vulnerabilities and more than 2,200 data breaches reported in 2017. Software vulnerabilities, and the ‘businesses’ they create, are not going away anytime soon. Organizations of all sizes in all industries need to address network protections and focus the necessary resources on vulnerability discovery and remediation.

Electronic Voting; an Old but Looming Threat

[This was originally published on]

VulnDB E-Voting

As everyone on the planet knows, U.S. politics are in full swing, with primaries almost every week and a presidential election coming in November of this year. At Risk Based Security, we find it curious that one of the most dangerous topics seems to evade the 24-hour news cycle every election period: the dangers we face as a democratic society relying on electronic voting machines. Despite the U.S. news being saturated with slogans, petty bickering, and the silly antics of candidates seeking to win elections, it was actually news out of Costa Rica that brought the topic back to mind.

Costa Rica’s Supreme Court of Elections is said to be investigating many electoral contests in Latin America over the past decade. Now-jailed hacker Andres Sepulveda claims to have used a wide variety of hacking techniques along with social media to gather information and manipulate elections through several means. While this is considerably different from tampering with an election by manipulating electronic voting machines, it speaks to the severity of such activity. These types of issues are not isolated and rare, either. The Election Commission of India (ECI) ran into a case of malfunctioning electronic voting machines during the first phase of the Assembly elections in Bengal this month.

In the U.S., there has been a growing trend of claims of “rigged electronic voting machines” in past elections, ranging from anecdotes to videos of such machines flipping votes in action. Claims of voter fraud are leveled at both major parties; Democrats and Republicans alike have been accused. Some analysis of these claims moves beyond rumor and examines the statistics behind the tallied results. For example, an engineering professor and statistician from Wichita State University has found “voting irregularities” that indicate tampering of electronic voting machines.

Going back farther, evidence has been aired that exit polling data has been manipulated specifically to benefit one candidate. Focusing on what many consider the real threat in such elections, electronic voting machines have been found to contain an incredibly wide variety of traditional vulnerabilities. Based on published research, no manufacturer of these machines is immune. Advanced Voting Solutions (AVS), Election Systems & Software (ES&S), Digivote, Sequoia, Premier Election Solutions, Hart InterCivic, and Diebold have all been put to the test (note: some of these companies have been acquired by one another over the years).

As time permits, Risk Based Security reviews the academic research papers examining these machines, to include the relevant findings in VulnDB. This begins to give a solid picture of just how fragile these machines are when it comes to manipulation and tampering, which can directly affect election outcomes. To date, we have cataloged over 260 vulnerabilities in electronic voting machines, with more research papers in queue for analysis. This supports articles we see from time to time, where a researcher or interested party speaks out about the insecurity of voting machines.

Take the report from 2008, in which researchers showed that Election Systems & Software (ES&S) machines could be calibrated to favor a candidate of choice. Reading the actual report shows that the machines contained a backdoor that could be exploited with a Factory QA PEB (Personalized Electronic Ballot). Or consider the 2008 audit of Sequoia AVC Advantage machines, which required a court case before the results could be published. That report yielded nine distinct vulnerabilities that were added to Risk Based Security’s VulnDB.

More recently, we’ve seen reports from Virginia telling readers that tampering with U.S. voting machines is “as easy as ‘abcde’”. The more detailed report on the voting equipment is somewhat of a dry read, but should be alarming all the same. An article from Wired reminds us that many of these machines are over a decade old, and “ripe for tampering, breakdowns”. Of the 260+ electronic voting machine vulnerabilities we are aware of, spanning February 2004 to July 2015, 154 still have no known solution. Regardless of the election or your affiliation, as you visit your polling place to vote, be mindful of the machines you are using and check that any paper trail provided matches the vote you actually cast. With all this evidence of potential voter fraud, where is the “Fair and Balanced” news coverage?

112 Years of Vulnerabilities: How did we get here, knowing what we know?

I gave a presentation on computer vulnerability history at BSides Delaware in November 2013. Shortly after, I gave the presentation a couple of times at Westchester Community College and the University of Pennsylvania, along with a brief version for the Invisible Harms conference at UPenn. The linked version is the revised copy after my initial run at BSidesDE. The talk gives a history of computer vulnerabilities starting in 1902 (for real!) and continues up to modern day, looking at how long we have been subject to them and asking why we still see them in modern software. Video from the BSidesDE presentation is available courtesy of Irongeek. A copy of the original BSidesDE presentation is available, but I recommend the revised copy above, from Shakacon in June 2014. As always, there are extra comments and tons of references in the PPT files.