CVSSv2 Shortcomings, Faults, and Failures Formulation

[This was originally published on the OSVDB blog.]

The Open Security Foundation (OSF) and Risk Based Security wrote an open letter to FIRST regarding the upcoming Common Vulnerability Scoring System (CVSS) version 3 proposal. While we were not formally asked to provide input, given our expertise in managing vulnerability databases, along with our daily use of CVSS, we felt our feedback would provide valuable insight to improve CVSS in the future.

Some of the areas discussed include:

  • Introducing four levels of granularity
  • Better definitions of terminology for more accurate scoring
  • Re-examining the pitfalls of “Access Complexity”
  • Limitations of the current Access Vector breakdown
  • The challenge of scoring authentication
  • And a variety of other considerations to improve vulnerability scoring

Our conclusion points to the need for CVSS to be overhauled, as CVSSv2 has too many shortcomings to provide an adequate and useful risk scoring model. You can download the full letter in PDF format.

Why I Don’t Attend the RSA Conference

For years now, I have been asked if I will be at the RSA Conference (RSAC). Invariably, I answer no, because I will not subject myself to it or support the conference in any way.

The short answer as to why is that it is basically the “Comdex” of InfoSec: overly large, full of flash, and mostly a waste of time. Rather than real value or progress, RSAC offers the same buzzwords and claims of innovation that fail us year after year. The same technology from last year, five years ago, and often ten years ago is rebranded, given a new interface, and sold to us as if it is the next great miracle that will magically solve all of our security woes. Every year, security gets worse, attackers get better, and more systems are compromised. RSAC is doing nothing for us.

And then there are the keynotes. The biggest names in InfoSec! People who make the news, lead the biggest companies, and boldly take on the title of “visionary” or “thought leader”. They give banal talks that rehash the same ideas that are supposed to be the fundamental core of our business. Rather than providing real help, they offer crappy analogies and the latest buzzwords. These platitudes fill the seats with professionals who are excited to be there, walk away feeling they got some kind of value, and return to providing mediocre services that consistently fail to secure the networks they consider so valuable.

Watching tweets from the conference is absolutely disgusting: the blatant fan-boy attitudes, the excitement over free giveaways, the bragging about parties attended. Attendees live-tweet talks that frequently offer the same platitudes and buzzwords as the keynotes. The worst part? They don’t even realize they are part of the problem.

Speaking of the parties, this year has around 70 of them crammed into one week. Remind me, as an industry, what exactly are we celebrating? A record number of data breaches, almost a thousand vulnerabilities disclosed every month, endless malware, new types of attacks that are harder to detect, compliance initiatives that waste time and offer no lasting security. Are we celebrating that? Or that these security companies continue to make stupid amounts of money selling inferior products and solutions?

So no thanks, not interested in attending the security cesspool.

Subway, the Missing Inch, and Karma

In case you hadn’t heard, Subway is embroiled in a lawsuit over serving 11″ sandwiches while advertising them as 12″. While it doesn’t sound like much, those missing inches add up over time. There is also the whole truth-in-advertising issue.

I’ve been going to Subway for a long, long time. My first experience was in the early ’90s in Albuquerque. I’ve gone through their phases, including the “V” cut phase, while consistently ignoring the concept of making “every bite equal”. These days, it involves between one and four trips a week, and I’ve been on a first-name basis with a dozen employees of my local Subway over the last 8 years. I offer this to establish that I am a loyal customer and frequent their stores quite often.

My experience at Subway today was so absurd as to be laughable, rather than making me angry. I ordered a 6″ sandwich like usual. The employee took out what appeared to be a 6″ piece of bread, put it on the counter, and considered it. Remember, Subway serves 6″ and 12″ sandwiches, nothing in between. After considering it for a second, she cut off about an inch and a half of the bread and put that piece back in the case. Uh… what?

First, if that wasn’t a 6″ piece, it means the person before me didn’t get a 6″ sandwich. Second, if it was, it means she arbitrarily decided to short me an inch and a half. Finally, why keep a 1–1.5″ piece of bread that can’t be used for another sandwich? There is simply no logical reason to do that to a customer, especially when Subway is catching grief over shorting people.

In conclusion, Subway: after shorting customers for years, is it really such a bad idea to show some good sandwich karma and give a customer an extra inch?

Selling out, a bit at a time…

I sold out when I signed up for Google, Gmail, Facebook, and Twitter… might as well sell out a bit more and use WordPress. While guest-blogging recently, I found that the managed WordPress platform is actually pretty well done: stable and mostly intuitive. It will also help ensure my spew stays around for years to come, even if attrition.org goes away for some reason. Realistically, I won’t run blog software on that domain, and writing static HTML for every little quick blog, gripe, or musing is not efficient.

Time permitting, I may post and backdate content from other sources and previous years as well, since it is so spread out.

CVE Vulnerabilities: How Your Dataset Influences Statistics

[This was originally published on the OSVDB blog.]

Readers may recall that I blogged about a similar topic just over a month ago, in an article titled Advisories != Vulnerabilities, and How It Affects Statistics. In this installment, instead of “advisories” we have “CVEs”, and the inherent problems when using CVE identifiers in place of “vulnerabilities”. Doing so is technically inaccurate, and it negatively influences statistics, ultimately leading to bad conclusions.

NSS Labs just released an extensive report titled “Vulnerability Threat Trends; A Decade in Review, Transition on the Way”, by Stefan Frei. While the report is interesting and the fundamental methodology is sound, Frei uses a dataset that is not designed for true vulnerability statistics. Additionally, I believe that some of the factors Frei attributes to trends are incorrect. I offer this blog as open feedback, to bring additional perspective to the realm of vulnerability statistics, which is a long way from maturity.

Vulnerabilities versus CVE

In the NSS Labs paper, they define a vulnerability as “a weakness in software that enables an attacker to compromise the integrity, availability, or confidentiality of the software or the data that it processes.” This is as good a definition as any. The key point here is a weakness, singular. What Frei fails to point out is that the CVE dictionary is not a vulnerability database in the same sense as many others. It is a specialty database designed primarily to assign a unique identifier to a vulnerability, or a group of vulnerabilities, to coordinate tracking and discussion. While CVE says “CVE Identifiers are unique, common identifiers for publicly known information security vulnerabilities”, it is more important to note the way CVE abstracts, which is covered in great detail. From the CVE page on abstraction:

CVE Abstraction Content Decisions (CDs) provide guidelines about when to combine multiple reports, bugs, and/or attack vectors into a single CVE name (“MERGE”), and when to create separate CVE names (“SPLIT”).

This clearly denotes that a single CVE may represent multiple vulnerabilities. With that in mind, every statistic generated by NSS Labs for this report is not accurate, and their numbers are not reproduceable using any other vulnerability dataset (unless it too is only based on CVE data and does not abstract differently, e.g. NVD). This distinction puts the report’s statements and conclusions in a different light:

As of January 2013 the NVD listed 53,489 vulnerabilities ..
In the last ten years on average 4,660 vulnerabilities were disclosed per year ..
.. with an all-time high of 6,462 vulnerabilities counted in 2006 ..

The abstraction distinction means that these numbers aren’t just technically inaccurate (i.e., terminology), they are factually inaccurate (i.e., the actual stats when abstracting on a per-vulnerability basis). In each case where Frei uses the term “vulnerability”, he really means “CVE”. When you consider that a single CVE may cover 66 or more distinct vulnerabilities, it invalidates any statistic generated from this dataset in this manner. For example:

However, in 2012 alone the number of vulnerabilities increased again to a considerable 5,225 (80% of the all-time high), which is 12% above the ten-year average. This is the largest increase observed in the past six years and ends the trend of moderate declines since 2006.

Based on my explanation, what does 5,225 really mean? If we agree, for the sake of argument, that CVE averages two distinct vulnerabilities per CVE assignment, that is now over 10,000 vulnerabilities. How does that, in turn, change any observations on trending?
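To make the abstraction problem concrete, here is a toy sketch. The CVE identifiers and per-CVE counts below are invented purely for illustration; the point is how the same set of disclosures yields very different totals depending on whether you count CVE assignments or the distinct vulnerabilities behind them:

```python
# Toy illustration (invented CVE IDs and counts): the same disclosures,
# counted per CVE assignment versus per distinct vulnerability.
vulns_per_cve = {
    "CVE-2012-XXX1": 1,   # a SPLIT: one CVE, one flaw
    "CVE-2012-XXX2": 3,   # a MERGE: one CVE covering three distinct flaws
    "CVE-2012-XXX3": 66,  # an extreme MERGE, like the case noted above
}

print("per-CVE count:          ", len(vulns_per_cve))           # 3
print("per-vulnerability count:", sum(vulns_per_cve.values()))  # 70
```

Any yearly total, average, or trend computed over the left-hand count says nothing reliable about the right-hand one.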

The report’s key findings offer seven high-level conclusions based on the CVE data. To put all of the above in perspective, I will examine a few of them using an alternate dataset, OSVDB, which abstracts entries on a per-vulnerability basis. With those numbers, we can see how the findings stand. NSS Labs report text is quoted below.

The five year long trend in decreasing vulnerability disclosures ended abruptly in 2012 with a +12% increase

Based on OSVDB data, this is incorrect. Both 2009 (7,879) -> 2010 (8,835) and 2011 (7,565) -> 2012 (8,919) showed an upward trend.
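For anyone who wants to check the arithmetic, a quick sketch computing the year-over-year change from the OSVDB totals quoted above:

```python
# Year-over-year change computed from the OSVDB yearly totals quoted above.
totals = {2009: 7879, 2010: 8835, 2011: 7565, 2012: 8919}

for prev, cur in [(2009, 2010), (2011, 2012)]:
    change = 100.0 * (totals[cur] - totals[prev]) / totals[prev]
    print(f"{prev} -> {cur}: {change:+.1f}%")  # +12.1% and +17.9%
```

Note that the 2011 -> 2012 jump in OSVDB data is even larger than the +12% the report derives from CVE counts.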

More than 90 percent of the vulnerabilities disclosed are moderately or highly critical – and therefore relevant

If we assume “moderately critical” maps to the “Medium” criticality later defined in the report as CVSSv2 4.0–6.9, then OSVDB shows 57,373 entries with CVSSv2 scores of 4.0–10.0, out of 82,123 total, or roughly 70%. That means “more than 90 percent” is considerably higher than we show. Note: we do not have complete CVSSv2 data for 100% of our entries, but we do have it for all entries affiliated with the ones Frei examined, and more. If “moderately critical” and “highly critical” refer to different ranges, then they should be more clearly defined.
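The share is a one-line check from the totals above:

```python
# Share of OSVDB entries with a CVSSv2 base score of 4.0 or higher.
in_band, total = 57373, 82123
print(f"{100.0 * in_band / total:.1f}%")  # 69.9% -- well short of "more than 90%"
```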

It is also important to note that this finding is a red herring, due to the way CVSS scoring works. A remote path disclosure in a web application scores a 5.0 base score (CVSS2#AV:N/AC:L/Au:N/C:P/I:N/A:N). This skews the scoring data considerably higher than many in the industry would agree with, as 5.0 is the same score many XSS vulnerabilities receive despite having more serious impact.
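To illustrate, here is a minimal sketch of the CVSSv2 base equation, with metric values and formula taken from the CVSSv2 specification. The path disclosure vector is the one cited above; the persistent XSS vector is my own representative example:

```python
# Minimal CVSSv2 base-score calculator (metric values and base equation
# from the CVSSv2 specification). Shows how a read-only path disclosure
# and a persistent XSS both land on 5.0.

AV = {"L": 0.395, "A": 0.646, "N": 1.0}    # Access Vector
AC = {"H": 0.35, "M": 0.61, "L": 0.71}     # Access Complexity
AU = {"M": 0.45, "S": 0.56, "N": 0.704}    # Authentication
CIA = {"N": 0.0, "P": 0.275, "C": 0.660}   # Conf./Integ./Avail. impact

def cvss2_base(vector: str) -> float:
    """Base score for a vector like 'AV:N/AC:L/Au:N/C:P/I:N/A:N'."""
    m = dict(part.split(":") for part in vector.split("/"))
    impact = 10.41 * (1 - (1 - CIA[m["C"]]) * (1 - CIA[m["I"]]) * (1 - CIA[m["A"]]))
    exploitability = 20 * AV[m["AV"]] * AC[m["AC"]] * AU[m["Au"]]
    f_impact = 0.0 if impact == 0 else 1.176
    return round(((0.6 * impact) + (0.4 * exploitability) - 1.5) * f_impact, 1)

# Remote path disclosure: partial confidentiality loss only.
print(cvss2_base("AV:N/AC:L/Au:N/C:P/I:N/A:N"))  # 5.0
# Persistent XSS: partial integrity loss only -- the same 5.0.
print(cvss2_base("AV:N/AC:L/Au:N/C:N/I:P/A:N"))  # 5.0
```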

9 percent of vulnerabilities disclosed in 2012 are extremely critical (with CVSS score>9.9) paired with low attack/exploitation complexity

This is another red herring, because any CVSSv2 10.0 score means that “low complexity” was already factored in. The wording in the report implies that a >9.9 score could be paired with higher complexity, which isn’t possible. Further, CVSS is scored for the worst-case scenario when details are not available (e.g., CVE-2012-5895). Given the number of “unspecified” issues, this may seriously skew the number of CVSSv2 10.0 scores.
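A quick check with the same base equation shows why. Holding every other metric at its most severe value, access complexity alone caps the score; a sketch, again using the CVSSv2 specification values:

```python
# With full impact (C:C/I:C/A:C) and AV:N/Au:N fixed at their most severe
# values, access complexity alone determines whether a score can exceed 9.9.
impact = 10.41 * (1 - (1 - 0.660) ** 3)        # full C/I/A impact: ~10.0

for label, ac in [("AC:L", 0.71), ("AC:M", 0.61), ("AC:H", 0.35)]:
    exploitability = 20 * 1.0 * ac * 0.704     # AV:N, Au:N
    base = round(((0.6 * impact) + (0.4 * exploitability) - 1.5) * 1.176, 1)
    print(label, base)                         # AC:L 10.0, AC:M 9.3, AC:H 7.6
```

Anything scored above 9.9 therefore has AC:L baked in by definition; “paired with low complexity” adds no information.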

Finally, there is one other element of this report, used in the overview and again later in the document, that attributes a shift in disclosure trends. From the overview:

The parallel and massive drop of vulnerability disclosures by the two long established purchase programs iDefense VCP and TippingPoint ZDI indicate a transition in the way vulnerability and exploit information is handled in the industry.

I believe this is a case of “correlation does not mean causation”. While these are the two most recognized third-party bug bounty programs around, there are many variables at play. In the bigger picture, shifts in these two programs do not necessarily mean anything. Some of the factors that may have influenced disclosure numbers for these programs include:

  • There are more bug bounty programs available. Some may offer better prices or incentives for disclosing through them, stealing business from iDefense and ZDI.
  • Both companies have enjoyed their share of internal politics that affected at least one program. In 2012, several people involved in the ZDI program left the company to form their own startup. It has been theorized that since their departure, ZDI has not built the team back up, and that disclosures were affected as a result.
  • ZDI had a small bout of external politics, in which one of their most prolific bounty collectors (Luigi Auriemma) had a serious disagreement over ZDI’s handling of a vulnerability, as it relates to Portnoy and Exodus. Auriemma’s shift to disclosing via his own company would dramatically affect ZDI’s disclosure totals alone.
  • Both of these companies have a moving list of software that they offer a bounty on. As it changes, it may result in spikes of disclosures via their programs.

Regardless, iDefense and ZDI represent a small percentage of overall disclosures, and it is curious that Frei opted to focus on them so prominently as a reason for changing vulnerability trends without considering these influencing factors. Even during a good year, 2011 for example, iDefense (42) and ZDI (297) together accounted for 339 of 7,565 vulnerabilities, only ~4.5% of overall disclosures. There are many other trends that could just as easily explain relatively small shifts in disclosure totals. Making statements about trends in vulnerability disclosure, and how they affect statistics, isn’t something that should be done by casual observers; they simply miss the low-level details you glean from day-to-day vulnerability handling and cataloging.

To be clear, I am not against using CVE/NVD data to generate statistics. However, when doing so, it is important that the dataset be explained and qualified before going into analysis. The perception and definition of what “a vulnerability” is changes based on the person or VDB. In vulnerability statistics, not all vulnerabilities are created equal.