A critique of the summary of “Latent Feature Vulnerability Rankings of CVSS Vectors”

"What do you think of this?" It always starts out simple. A friend asked this question about an article titled Summary of "Latent Feature Vulnerability Rankings of CVSS Vectors". This study is math heavy and that is not my jam. But vulnerability databases are, and that includes the CVE ecosystem, which encompasses NVD. I am also pretty familiar with the limitations of the CVSS scoring system, and colleagues at RBS have written extensively on them.

I really don't have the time or desire to dig into this too heavily, but my response to the friend was "immediately problematic". I'll cliff-notes some of the things that stand out to me, starting with the first graphic included, which she specifically asked me about.

  • The header graphic displays the metrics for the CVSSv3 scoring system, but is just labeled "CVSS". Not only is this sloppy, it obscures an important point of this summary: the paper's work is based on CVSSv2 scores, not CVSSv3. They even qualify that just below: "We should note the analysis conducted by Ross et al. is based upon the CVSS Version 2 scoring system…"
  • "Ross et al. note that many exploits exist without associated CVE-IDs. For example, only 9% of the Symantec data is associated with a CVE-ID. The authors offered additional caveats related to their probability calculation." That sounds odd, but it is readily explained above when they summarize what that data is: "Symantec's Threat Database (SYM): A database extracted from Symantec by Allodi and Massacci that contains references to over 1000 vulnerabilities." First, that data set contains a lot more than vulnerabilities. Second, if Symantec is really sitting on over 900 vulnerabilities that don't have a CVE ID, then as a CNA they should either assign them an ID or work with MITRE to get an ID assigned. Isn't that the purpose of CVE?
  • "Ross et al. use four datasets reporting data on vulnerabilities and CVSS scores…" and then we see one dataset is "Exploit Database (Exploit-DB): A robust database containing a large collection of vulnerabilities and their corresponding public exploit(s)." Sorry, EDB doesn't assign CVSS scores, so the only ones that would be present are those given by the people disclosing the vulnerabilities via EDB, some of whom are notoriously unreliable. While EDB is valuable in the disclosure landscape, serving as a reliable source of CVSS scores is not one of its strengths.
  • "About 2.7% of the CVE entries in the dataset have an associated exploit, regardless of the CVSS V2 score." This single sentence is either very poorly written, or it is all the evidence you need that the authors of the paper simply don't understand vulnerabilities and disclosures. With a simple search of VulnDB, I can tell you at least 55,280 vulnerabilities have a CVE and a public exploit. There were 147,490 live CVE IDs as of last night, meaning almost 38% have a public exploit. Not sure how they arrived at 2.7%, but that number should have been immediately suspect.
  • "In other words, less than half of the available CVSS V2 vector space had been explored despite thousands of entries…" Well sure, but this statement doesn't account for one major reason for that. Enumerate all the possible CVSSv2 metric combinations and derive their scores, then look at which numbers don't show up on that list; a score of 0.1 through 0.7 is not possible, for example (see the sketch after this list). Then weed out the combinations that are extremely unlikely to appear in the wild, such as most of those with "Au:M", and a lot of the possible vector space disappears.
  • "Only 17 unique CVSS vectors described 80% of the NVD." Congrats on figuring out a serious flaw in CVSSv2! Based on the 2.7% figure above, I would immediately question the 80% here too. That said, there is a serious weighting of scores, primarily in web application vulnerabilities, where e.g. an XSS, SQLi, RFI, LFI, and limited code execution could all overlap heavily.
  • "Input: Vulnerabilities (e.g., NVD), exploit existence, (e.g., Exploit-DB), the number of clusters k" This is yet another point where they introduce a dataset they don't understand and make serious assumptions about it. Just because something is posted to EDB does not mean it is a public exploit. Another quick search of VulnDB tells us there are at least 733 EDB entries that are actually not vulnerabilities. This goes back to the reliability of the people submitting content to the site.
  • "The authors note their approach outperforms CVSS scoring when compared to Exploit-DB." What does this even mean? Exploit-DB does not do CVSS scoring! How can you compare their approach to a site that doesn't do it in the first place?
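To make the vector-space point concrete, here is a minimal sketch (mine, not the paper's) that enumerates every possible CVSSv2 base vector using the base score equations and metric weights from the FIRST CVSSv2 specification, then prints which scores can never occur:

```python
from itertools import product

# CVSSv2 base metric weights, per the FIRST CVSSv2 specification.
AV  = {"L": 0.395, "A": 0.646, "N": 1.0}   # Access Vector
AC  = {"H": 0.35, "M": 0.61, "L": 0.71}    # Access Complexity
AU  = {"M": 0.45, "S": 0.56, "N": 0.704}   # Authentication
CIA = {"N": 0.0, "P": 0.275, "C": 0.660}   # Conf/Integ/Avail impact

def base_score(av, ac, au, c, i, a):
    impact = 10.41 * (1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a]))
    exploitability = 20 * AV[av] * AC[ac] * AU[au]
    f_impact = 0.0 if impact == 0 else 1.176
    return round(((0.6 * impact) + (0.4 * exploitability) - 1.5) * f_impact, 1)

# Enumerate all 3^6 = 729 possible base vectors and bucket them by score.
scores = {}
for av, ac, au, c, i, a in product(AV, AC, AU, CIA, CIA, CIA):
    vector = f"AV:{av}/AC:{ac}/Au:{au}/C:{c}/I:{i}/A:{a}"
    scores.setdefault(base_score(av, ac, au, c, i, a), []).append(vector)

print(len(scores), "distinct scores out of 729 possible vectors")
# Scores between 0.0 and 10.0 that no base vector can ever produce:
print(sorted({round(n / 10, 1) for n in range(101)} - set(scores)))
```

Run it and you will see that no vector can produce 0.1 through 0.7, so a chunk of the "unexplored" space was never reachable in the first place.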

Perhaps this summary is not well written and the paper actually has more merit? I doubt it; the summary seems comprehensive and captures the key points, but I don't think the summary's author works with this content either. Stats and math, yes. Vulnerabilities, no.

More authorities, more CVEs; Oh, and more commentary.

On November 10, TechBeacon published a great article by Rob Lemos titled “More authorities, more CVEs: What it means for app sec teams” in which I was quoted, along with several other people.

Like many articles of this nature, those who provide input will often talk for as long as half an hour and ultimately get a couple lines quoted. We do it to provide background and context on the topic, as well as to have an open discussion on vulnerability trends. That means there are 'outtake' opinions and facts, as well as our potential reactions to other parts of the article that did not include our input. So this blog just covers some of my random observations to complement the article.

Until 2016, more than 80% of software security issues assigned a CVE identifier belonged to only 10 classes, or weaknesses, as classified by their Common Weakness Enumeration (CWE) category. But in 2019, the top 10 weaknesses only accounted for 59% of reported vulnerabilities.

The Common Weakness Enumeration (CWE) index is interesting to me, and I wonder if it has gotten so big that it degrades its own value. Consider that there are now 891 CWE identifiers as of August 20, in version 4.2 of the framework. Per the article, only 10 of them account for 59% of vulnerabilities, which will no doubt include XSS, SQLi, and CSRF as examples. That makes me question the value of abstracting so much, as it means that hundreds of those CWEs will represent a handful of vulnerabilities at most.

Digging into the 2,298-page PDF documenting version 4.2, you can jump toward the end of the CWE list and see that several have been created but have no "Observed Examples". In fact, searching for that phrase only yields 397 hits. Does that mean that out of 891 CWE IDs representing weaknesses, MITRE has only come up with 397 that match known vulnerabilities? I certainly expect otherwise and hope this is just a documentation shortcoming, as I feel that every CWE ID should be linked to a concrete real-world example.

I'd love to see a simple breakdown of the top 100 CWEs along with how many vulnerabilities are associated with them (via NVD, since MITRE doesn't actually apply CWE to entries) and what percentage of the overall vulnerabilities that represents. It might be very telling just how useful CWE is and whether the project is being pushed too heavily from an academic standpoint. Before you judge that comment, let me know how useful this CWE report from MITRE is, and make sure you load it in Chrome.
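For the curious, here is a rough sketch of how that breakdown could be generated from one of NVD's JSON 1.1 data feeds; the feed file name below is just an example, and non-CWE assignments like "NVD-CWE-noinfo" will show up in the counts too:

```python
import json
from collections import Counter

# Tally CWE assignments from an NVD JSON 1.1 data feed, e.g. the 2019 feed
# (nvdcve-1.1-2019.json) downloaded from the NVD data feeds page.
with open("nvdcve-1.1-2019.json") as f:
    items = json.load(f)["CVE_Items"]

counts = Counter()
for item in items:
    for ptype in item["cve"]["problemtype"]["problemtype_data"]:
        for desc in ptype["description"]:
            counts[desc["value"]] += 1  # e.g. "CWE-79" or "NVD-CWE-noinfo"

# Top 10 CWEs and the share of all assignments they represent.
total = sum(counts.values())
for cwe, num in counts.most_common(10):
    print(f"{cwe:<16} {num:>6}  ({num / total:.1%})")
```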


It’s an open question whether the addition of coding repositories will lead to another expansion in the number of vulnerabilities.

I don't think that is an open question at all. I think the number of vulnerabilities will go up as a result of more coding repositories becoming CNAs. But that isn't really the issue here. Instead, the real questions should be centered around what quality of CVE entries they will provide, whether they will adhere to CNA standards, and whether MITRE will enforce CNA policy on them.

Based on history, the short answers to those questions are: quality will go down, no, and no. Until MITRE provides a breakdown of how many IDs were published by each CNA, it is difficult to know for sure. Speaking of which, why hasn't MITRE published such statistics? Rhetorical question, apologies.


Open source vulnerabilities: Do you know what’s in your software?

I, along with many others, can't stress this enough! Please make sure you understand what third-party software your developers are using. This affects your organization from both a vulnerability standpoint and a legal accountability standpoint. Using a third-party library against its license could open you up to some hardships.

The only reason I quoted this section is because I just read an article in the latest Wired that mentions Bootstrap is thought to be used on nearly 20% of all web sites across the Internet. That is incredible.


Patch Tuesday is becoming a bottleneck

There is a lot more that can be said on this topic. It reminds me of a 2015 blog I wrote, which actually goes back farther to 2007, when this problem was predicted long before the nightmare IT teams now experience once a month. It's only going to get worse as more vendors jump on this patch schedule, and the only ones who will suffer are their paying customers. But hey, let's let them keep using the term "responsible" disclosure too.


But exploitability alone doesn’t solve the problem—three quarters of the 17,300 vulnerabilities identified ranked CVSS exploitability rating of 8.0 or higher.

I'm not sure a more perfect example exists of why CVSS has become worthless. On its own, especially using the Base score only, is it really helpful that so many vulnerabilities are 'High Risk'? This is also a good reminder of another blog I have been meaning to write for a while that outlines the distribution of CVSSv2 versus CVSSv3 scores and how it impacts scoring. With a couple charts, you will get a nice visual of just how poorly thought out parts of the framework were. Of course, this has already been evaluated by others years back as well.


Finally, because I don’t hold the copyright to the picture used in the TechBeacon article header, I offer my version:

Picture of me by D2d in November, 2018 at Tomayo, in Denver, CO.

Why @anacondainc Doesn’t Fully Understand CVEs

It's worrisome that in 2020 we still have people in influential technical roles who don't understand CVE. A friend told me earlier this year he was in a meeting where someone said that CVE IDs are assigned in order, so CVE-2020-9500 meant there were 9500 vulns in 2020 so far. Of course that is not how it works, and it reflects a dangerous misunderstanding of CVE.

I ran across an article written by Nick Malkiewicz of Anaconda titled "Why Understanding CVEs Is Critical for Data Scientists". This article has several bits that show a lack of understanding of what CVE is. One of the biggest is equating a CVE with a vulnerability. Yes, many vulnerabilities directly map to a single CVE identifier, but a CVE is the identifier, not the vulnerability. Additionally, sometimes one vulnerability can track with multiple CVE IDs, or one CVE ID can track to multiple vulnerabilities. So lines like the following are concerning:

When someone finds a CVE, they report it to a CVE Numbering Authority (CNA).

When someone finds a vulnerability, they report it to MITRE or a vendor, which may be a CNA but more often is not. That vendor can then ask MITRE for an ID via a web form.

CNAs assign identification numbers to CVEs and list them in publicly accessible databases.

A CNA is required to inform MITRE after a CVE-assigned vulnerability has been disclosed. That is actually a fairly recent rule, implemented in the last few years. For most of CVE’s history there was no requirement or specific communication channel for a CNA to notify MITRE of this. That was one of many failings of the CVE ecosystem and directly led to companies being breached, as they relied on CVE to be ‘complete’ and timely.

Each vulnerability listed in a CVE database has a score from .1 to 10, 10 being the highest risk level. These scores are based on exploitability, impact, remediation level, report confidence, and other qualities.

Technically, not even the first line is true, as NVD can score a vulnerability as 0.0, meaning it is not a vulnerability and poses no risk. This occurs when a researcher or vendor discloses a vulnerability but doesn't fully understand the issue or the subsequent impact. This happens hundreds of times a year, although many are not included in NVD. The second sentence from Anaconda is also incorrect, as NVD only scores CVSS Base metrics. The exploitability (exploit code maturity), remediation level, and report confidence metrics are part of the Temporal score and are not included. You can see an example with CVE-2020-2800, published by Oracle and given a CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N score by both Oracle and NVD. This misunderstanding of NVD CVSS scoring is all the more baffling, as Anaconda links to the same FIRST CVSS document I do in this paragraph.
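For anyone who wants to check the math, that vector works out to a 4.8 using nothing but the Base equations from the FIRST CVSS v3.1 specification; here is a minimal sketch:

```python
import math

def roundup(x):
    # CVSS v3.1 "Roundup": the smallest one-decimal value >= x
    return math.ceil(x * 10) / 10

# Base metric weights for CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:L/A:N
av, ac, pr, ui = 0.85, 0.44, 0.85, 0.85  # Network / High / None / None
c, i, a = 0.22, 0.22, 0.0                # Low / Low / None

iss = 1 - (1 - c) * (1 - i) * (1 - a)    # Impact Sub-Score = 0.3916
impact = 6.42 * iss                      # Scope Unchanged
exploitability = 8.22 * av * ac * pr * ui

base = 0.0 if impact <= 0 else roundup(min(impact + exploitability, 10))
print(base)  # 4.8 -- purely a Base score; no Temporal metrics involved
```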

Anaconda goes on to talk about other factors at play, including looking at the history of a package, how fast vendors respond, and more. This is great advice and critical for companies facing tens of thousands of vulnerabilities a year. Unfortunately, they slide into the "more lipstick on that pig" problem:

The good news is, there are tools that automate the CVE monitoring process.

This is true. But more ways to manipulate bad data still leaves you with bad data. In addition to CVE missing several thousand vulnerabilities a year, their push for quantity in the last few years has led to a serious hit on quality. There are some CVE IDs with descriptions missing critical information like the vendor, affected versions, or impact. All the data wrangling and fancy analysis of that data is still based on bad or incomplete information. All the lipstick in the world on that pig still leaves you with a pig.

Finally, I will quote one other line from their blog that is curious:

Hacking open-source software also has a bigger payoff because many more people use it.

I understand and appreciate the sentiment, and finding a vulnerability in a library like OpenSSL obviously has a huge impact. However, that and a couple dozen other libraries are still the outliers in the bigger picture. Compare a vulnerability like EternalBlue to a standard open source library vulnerability and they are not even close as far as how "many more people use it".

The problem with SCADA goes deeper…

[This was originally published on the OSVDB blog.]

We know SCADA is virtual Swiss cheese, ready to be owned if someone can reach a device. We have preached airgaps for decades, even before we knew how bad the software was. Back then it was just, "this is so critical, it has to be separate!"

The last five years have proven how bad it is, with the rise of SCADA vulnerabilities. Sure, we can overlook the bad coding, proprietary protocols, lack of evidence of an SDLC, and the incredible amount of time it can take to patch. For some silly reason we put up with "forever-day bugs" because something is so critical it can't be rebooted (forgetting how absurd that design choice is). But what if we go a step beyond that?

ICS-CERT advisory 14-084-01, released yesterday on vulnerabilities in Festo products, is a good reminder of just how bad the problem is, and how much deeper it goes. First, the product has a backdoor in the FTP service allowing unauthenticated access (CVSSv2 9.3). This can allow a remote attacker to crash the device or execute arbitrary code. Second, the device is vulnerable due to bundling the 3S CoDeSys Runtime Toolkit, which does not require authentication for admin functions (CVSSv2 10.0) and has a traversal flaw that allows file manipulation leading to code execution (CVSSv2 10.0). Those two issues were reported in January of 2013, making this report, as it relates to Festo products, over a year late.

So we have a vendor backdoor, unauthenticated administrator access, and a way to gain privileges by bypassing authentication where it does exist. So realistically, what types of organizations does this potentially impact? From the ICS-CERT advisory:

This product is used industrywide as a programmable logic controller with inclusion of a multiaxis controller for automated assembly and automated manufacturing. Identified customers are in solar cell manufacturing, automobile assembly, general assembly and parts control, and airframe manufacturing where tolerances are particularly critical to end product operations.

Now to dig the hole deeper. Under the “Mitigation” section, we see how serious Festo considers these vulnerabilities. Paraphrased from two lines in the advisory:

Festo has decided not to resolve these vulnerabilities, placing critical infrastructure asset owners using this product at risk … because of compatibility reasons with existing engineering tools.

The two 3S CoDeSys vulnerabilities have a fix available and just need to be integrated into the Festo products. What does “compatibility with existing engineering tools” really mean in the context of software? The ICS-CERT advisory also says:

According to the Festo product web page, other products are using newer versions of CoDeSys software and may not be vulnerable to the CoDeSys vulnerability, but this has not been evaluated by the researcher.

The researcher already spent time finding the issues, reporting them to a coordinating body, and following coordinated disclosure practices. Expecting them to also evaluate which products are not vulnerable is ridiculous. This is a case of the vendor just being lazy and irresponsible.

A company makes vulnerable critical components that affect our infrastructure and directly impact our safety, yet refuses to fix them. Why is this allowed to exist in our society?


Mobile Devices and Exploit Vector Absurdity

[This was originally published on the OSVDB blog.]

The last few days have seen several vulnerabilities disclosed that include serious gaps in logic with regard to exploitation vectors. What is being called "remote" is not. What is being called "critical" is not. Here are a few examples to highlight the problem. We beg of you, please be rational when explaining vulnerabilities and exploit chaining. The biggest culprit in all of this is the "need for a user to install a malicious app" to then allow a vulnerability to be exploited. Think about it.

Number One

We start with an H-Online article titled "Critical vulnerability in Blackberry 10 OS". First word: critical. In the world of vulnerabilities, critical means a CVSSv2 score of 10.0, which essentially allows for remote code execution without user interaction. Consider that standard and widely accepted designation, and read the article's summary of what is required to exploit this vulnerability:

As well as needing Protect enabled, the user must still install a malicious app, which then compromises a Protect-component so that it can intercept a password reset. This password reset requires the user, or someone who knows the BlackBerry ID and password, to go to the web site of BlackBerry Protect and request the password. If the attacker manages that, then the Protect component, compromised by the earlier malicious app, can let the attacker know the new password for the device. If he has physical access to the device, he can now log on successfully as the actual user. Otherwise, the attacker can only access Wi-Fi file sharing if the actual user has activated it.

The only thing missing from this exploit chain is the proverbial chicken sacrifice at midnight under a full blue moon. Want to get the same result much easier? Find your victim and say "Wow, that is a slick new phone, can I see it?" Nine out of ten times, they unlock the phone and hand it to you. Less work, same result.

Number Two

There were a few disclosures out of Japan’s JVN system, run by JPCERT. Two examples, both the same fundamental vulnerability, are summarized below:

#1 – CVE-2013-3643 (NVD Entry) – JVN 99813183 / JVNDB-2013-000056
#2 – CVE-2013-3642 (NVD Entry) – JVN 79301570 / JVNDB-2013-000055

#1 – The Galapagos Browser application for Android does not properly implement the WebView class, which allows attackers to obtain sensitive information via a crafted application.

Despite all these references, users are left with either incorrect or very misleading information. First, CVE says "an attacker" instead of qualifying it as a local attacker. I only call them out because they are historically more precise than this. Second, NVD calls this a "context-dependent" attacker via the CVSSv2 score (AV:N/AC:M/Au:N/C:P/I:N/A:N), saying it can be exploited over the network with moderate user interaction. NVD also says this affects confidentiality 'partially'. JVN goes so far as to say it can be exploited "over the Internet using packets" with "anonymous or no authentication".

The Reality

The reality of these vulnerabilities is that they are not remote. Not in any form, under any circumstances that the vulnerability world accepts. For some reason, VDBs are starting to blur the lines of exploit traits when it comes to mobile devices. The thought process seems to be that if the user installs a malicious application, then the subsequent local vulnerability becomes 'remote'. This is absurd. Just because that may be the most probable exploit vector and chaining does not change the fact that getting a user to install a malicious application is a separate, distinct vulnerability that cannot have any scoring weight or impact applied to the vulnerability in question. If you can get a phone user to install a malicious application, you can do a lot more than steal 'partial' information from the one vulnerable application.

Let me put it to you in terms that are easier to understand. If you have a Windows local privilege escalation vulnerability, it is local. Using the above logic, if I say that by tricking a user into installing a malicious application it can then be exploited remotely, what would you say? If you have a Linux kernel local DoS, it too can become remote or context-dependent if the root user installs a malicious application. You can already spin almost any of these local vulnerabilities into remote ones by saying "remote, authentication required" and assuming it can be done via RDP or SSH. To do so, though, devalues the entire purpose of vulnerability classification.

Any doubts? Consider that CVE treats the exact same situation as the mobile browser vulnerabilities above as a local issue in Windows, even when a "crafted application" is required (see the IDs below). The only difference is whether the local user writes the application (Windows) or gets the user to install the application (mobile). Either way, that is a local issue.

CVE-2013-1334, CVE-2012-1848, CVE-2011-1282, CVE-2010-3942, CVE-2009-1123, CVE-2008-2252

CVSSv2 Shortcomings, Faults, and Failures Formulation

[This was originally published on the OSVDB blog.]

The Open Security Foundation (OSF) and Risk Based Security wrote an open letter to FIRST regarding the upcoming Common Vulnerability Scoring System (CVSS) version 3 proposal. While we were not formally asked to provide input, given our expertise managing vulnerability databases, along with our daily use of CVSS, we felt the feedback would provide valuable insight to improve CVSS in the future.

Some of the areas discussed include:

  • Introducing 4 levels for granularity
  • Better definitions for terminology for more accurate scoring
  • Re-examining the pitfalls of “Access Complexity”
  • Limitations of the current Access Vector breakdown
  • The challenge of scoring authentication
  • And a variety of other considerations to improve vulnerability scoring

Our conclusion points to the need for CVSS to be overhauled, as CVSSv2 has too many shortcomings to provide an adequate and useful risk scoring model. You can download the full letter in PDF format.

Advisories != Vulnerabilities, and How It Affects Statistics

[This was originally published on the OSVDB blog.]

I've written about the various problems with generating vulnerability statistics in the past. There are countless factors that contribute to, or skew, vulnerability stats. This is an ongoing problem for many reasons. First, important numbers are thrown around in the media and taken as gospel, creating varying degrees of bias in administrators and owners. Second, these stats are rarely explained to show how they were derived. In short, no one shows their work, discloses potential bias, caveats, or other issues that should be included as a responsible security professional. A recent article has highlighted this problem again. To better show why vulnerability stats are messy, but important, I will show you how trivial it is to skew numbers simply by using different criteria, along with several pitfalls that must be factored into any set of stats you generate. The fun part is that the words used to describe the differences can be equally nebulous, and they are all valid, if properly disclaimed!

I noticed a Tweet from SC Magazine about an article titled "The ghosts of Microsoft: Patch, present and future". The article is by Alex Horan, security strategist at CORE Security, and discusses Microsoft's vulnerabilities this year. Reading down, the first line of the second paragraph immediately struck me as being incorrect.

Based on my count, there were 83 vulnerabilities announced by Microsoft over the past year. This averages out to a little more than six per month, a reasonable number of patches (and reboots) to apply to your systems over the course of a year.

It is difficult to tell if Horan means "vulnerabilities" or "patches", as he appears to use the same word to mean both, when they are quite different. The use of '83' makes it very clear: Horan is referencing Microsoft advisories, not vulnerabilities. This is an important distinction, as a single advisory can contain multiple vulnerabilities.

A cursory look at the data in OSVDB showed there were closer to 170 vulnerabilities verified by Microsoft in 2012. Doing a search for entries with a reference containing "MS12" (used in their advisory designations) yields 160 results. This made it easy to determine that the number Horan used was inaccurate, or at least his wording was. If you generate statistics based on advisories versus independent vulnerabilities, results will vary greatly. To add a third perspective, we must also consider the total number of disclosed vulnerabilities in Microsoft products. This means ones that did not correspond to a Microsoft advisory (e.g. perhaps a KB only), did not receive a CVE designation, or were missed completely by the company. On Twitter, Space Rogue (@spacerog) asked about severity breakdowns over the last few years. Since that would take considerable time to generate, I am going to stay focused on 2012, as it demonstrates the issues. Hopefully this will give him a few numbers though!

If we look at the 2012 Microsoft advisories versus 2012 Microsoft CVEs versus 2012 Microsoft total vulnerabilities, and do a percentage breakdown by severity, you can see heavy bias. We will use the following breakdown of CVSS scores to determine severity: 9.0 – 10.0 = critical, 7.0 – 8.9 = important, 4.0 – 6.9 = moderate, 0.0 – 3.9 = low.

Base Source             Critical        Important      Moderate      Low
2012 Advisories (83)    35 (42.2%)      46 (55.4%)     2 (2.4%)      0 (0.0%)
2012 CVE (160)          100 (62.5%)     18 (11.3%)     39 (24.4%)    3 (1.8%)
2012 Total (176)        101 (57.4%)     19 (10.8%)     41 (23.3%)    15 (8.5%)
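If you want to reproduce this sort of breakdown from your own data, the bucketing is trivial. A minimal sketch, using hypothetical scores as input:

```python
# Map CVSSv2 base scores into the four severity bands used above, then
# compute the percentage distribution. The input scores are hypothetical;
# the real input would be the scores behind each row of the table.
def severity(score):
    if score >= 9.0:
        return "critical"
    if score >= 7.0:
        return "important"
    if score >= 4.0:
        return "moderate"
    return "low"

def breakdown(scores):
    counts = {"critical": 0, "important": 0, "moderate": 0, "low": 0}
    for s in scores:
        counts[severity(s)] += 1
    return {band: f"{n} ({n / len(scores):.1%})" for band, n in counts.items()}

print(breakdown([10.0, 9.3, 7.2, 6.8, 4.3, 2.6]))
```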

It isn't easy to see the big shifts in totals in a table, but it is important to establish the numbers involved before displaying any type of chart or visual representation. If we look at those three breakdowns using simple pie charts, the shifts become much more apparent:

The visual jump in critical vulnerabilities from the first chart to the latter two is distinct. In addition, notice the jump from the first two charts to the third with regard to low severity vulnerabilities, which didn't even make an appearance on the first chart. This is a simple example of how the "same" vulnerabilities can be represented, based on terminology and the source of data. If you want to get pedantic, there are additional considerations that must be factored into these vulnerabilities.

In no particular order, these are other points that should not only be considered, but disclaimed in any presentation of the data above. While it may seem minor, at least one of these points could further skew vulnerability counts and severity distribution.

  • MS12-080 only contains 1 CVE if you look at the immediate identifiers, but also contains 2 more CVEs in the fine print related to Oracle Outside In, which is used by the products listed in the advisory.
  • MS12-058 actually has no immediate CVEs! If you read the fine print, it covers 13 vulnerabilities. Again, these are vulnerabilities in Oracle Outside In, which is used in some Microsoft products.
  • Of the 176 Microsoft vulnerabilities in 2012, as tracked by OSVDB, 10 do not have CVE identifiers assigned.
  • OSVDB 83750 may or may not be a vulnerability, as it is based on a Microsoft KB with uncertain wording. Vague vulnerability disclosures can skew statistics.
  • Most of these CVSS scores are taken from the National Vulnerability Database (NVD). NVD outsources CVSS score generation to junior analysts from a large consulting firm. Just as we occasionally have mistakes in our CVSS scores, so does NVD. Overall, the number of scores with serious errors is low, but they can still introduce a level of error into statistics.
  • One of the vulnerabilities (OSVDB 88774 / CVE-2012-4792) has no formal Microsoft advisory, because it is a 0-day that was just discovered two days ago. There will almost certainly be a formal Microsoft advisory in January 2013 that covers it. This highlights a big problem with using vendor advisories for any statistic generation. Vendors generally release advisories when their investigation of the issue has completed, and a formal solution is made available. Generating statistics or graphics off the same vulnerabilities, but using disclosure versus solution date will give two different results.

These are just a few ways that statistics can be manipulated, often by accident, and why presenting as much data and explanation as possible is beneficial to everyone. I certainly hope that SC Magazine and/or CORE will issue a small correction or explanation as to what the "83" number really represents.

OSVDB – Search Filters & Custom Exports

[This was originally published on the OSVDB blog.]

Last week, OSVDB enhanced its search results by adding a considerable amount of filtering capability, a simple "results by year" graph, and export options. Rather than draft a huge walkthrough, open a search in a new tab and do a title search for "microsoft windows".

As always, the results will display showing the OSVDB ID, disclosure date, and OSVDB title. On the left, however, are several new options. First, a summary graph will be displayed showing the number of vulnerabilities by year, based on your search results. Next, you can toggle the displayed fields to add CVE, CVSSv2 score, and/or the percent complete. The percent complete refers to the status of the OSVDB entry and how many fields have been completed. Below that are one-click filters that let you further refine your search results by the following criteria:

  • Reference Type – only show results that contain a given type of reference
  • Category – show results based on the vulnerability category
  • Disclosure Year – refine results by limiting to a specific year
  • CVSS Score – only show entries that are scored in a given range
  • Percent Complete – filter results based on how complete the OSVDB entry is

Once you have your ideal search results, you can then export them to XML, a custom RSS feed, or CSV. The export will only work for the first 100 results. If you need a bigger data set to work with, we encourage you to download the database instead.

With the new search capability, you should be able to perform very detailed searches, easily manipulate the results and even import them into another application or presentation. If you have other ideas of how a VDB search can be refined to provide more flexibility and power, contact us!

OSVDB – Search Enhance: by CVSS Score or Attribute

[This was originally published on the OSVDB blog.]

Using the 'Advanced Search', you can now search the database by entering a CVSSv2 score range (e.g., 8 to 10) or by a specific CVSSv2 attribute (e.g., Confidentiality: Partial). To search for entries with only a 10 score, use the search range 10 to 10.

Using this search mechanism, we can see there are 3,217 entries in the database with a score of 10 and 9,266 entries that involve a complete loss of availability.

We hope this flexibility allows for even more refined searches to better help your project or organization. Stay tuned, this is one of many new search features planned.

OSVDB Now Supports CVSSv2 Scoring

[This was originally published on the OSVDB blog.]

OSVDB now displays CVSSv2 scores, mostly as calculated by the National Vulnerability Database (NVD).


Along with the score, we display the date that NVD generated it and give users a method for recommending updates if they feel the score is inaccurate. While this is long overdue, it is one of many CVSS related features we will be adding in the near future. For those wondering about the delay in adding CVSS support, the cliff-notes answer is "we had reservations about the scoring system". Back in 2005, Jake and I had a long chat with a couple of the creators of CVSS and brought up our concerns. Our goal was to create our own scoring system, but internal debate (and procrastination) led to neither being implemented. Rather than creating our own system, we finally opted to use what has become an industry standard. Some of our planned CVSS score enhancements on the to-do list, in no particular order:

  1. Method for adding our own CVSS score. There are thousands of entries in OSVDB that do not have a CVE assignment, and as a result, no NVD based CVSS score.
  2. A more robust moderation queue to handle proposed changes. This may optionally have a one-click method for us to notify NVD of our change so they may consider revising their score.
  3. Ensure the CVSSv2 score is part of the database dumps, available for download.
  4. Method for tracking CVSS score historically. As NVD revises their score, or we do, a user should be able to see the history of changes.
  5. Compare our/NVD scores with other public tools, display discrepancy if different. For example, if a Nessus plugin scores an issue differently than NVD, show both scores so users may consider which is more accurate.
  6. Track researcher generated CVSS score. While infrequent, some advisories provide scoring. If different than NVD, display the discrepancy.

As always, if you have ideas on how we could better handle CVSS scoring, or have additional ideas for features, please contact us!