A Day in the Life of a Security Bulletin

[This was originally published on the OSVDB blog.]

Hi all- Alexandra Huft here again! I thought you might find it interesting to see “behind the scenes” of how a security vulnerability eventually becomes a security bulletin.

So, I’ll start way back at the beginning. We receive reports from many different finders on issues that may or may not be a vulnerability. The first thing that we do is work to determine that we are able to duplicate what the finder has reported. Sometimes this is very simple, other times we need to go back to the finder for additional information, but whenever possible we try and recreate what they’ve discovered with our own research. We work with the affected product teams and our own experts on the Secure Windows Initiative team (SWI) to reproduce these reports. We also try to keep the finder updated with as much information as we can provide, so that they are aware of where we are in the process. We then work on determining the severity, which is not always the easiest thing. Like you, we all have our opinions, which lead to many a heated discussion in the MSRC Situation Room where we meet several times a week. We all want the best decision for all of our customers.


I’d be interested in seeing the same topic covered by Sun Microsystems, HP, Oracle, and other vendors with large product bases.

An Analysis of Reputational Risk

[This was originally published on the OSVDB blog.]

Kenneth Belva of Franklin Technologies United, Inc. announced a paper titled “How It’s Difficult to Ruin a Good Name: An Analysis of Reputational Risk”. The paper was delivered as the keynote address at the FiTech Summit 2005. In his announcement, he states “This paper should be regarded as a starting point for further, positive discussion” and he is right; even as a starting point, it is an excellent first step.

From the paper:

What is the impact of an information security breach both monetarily and on one’s reputation if the breach is publicly disclosed? And, just as important, why does it happen in the way that it does? What are the factors that lead to the results (outcomes)? This becomes especially relevant as most States are beginning to pass laws similar to California’s SB1386.

The title of my presentation — How It’s Difficult to Ruin a Good Name — may have hinted at my conclusion.

Another person commented that this follows an article by Richard Menta titled “A need to know” which goes into breaches, investors and consumer confidence. The article ends:

As an illustration: on February 14, 2005 information aggregator ChoicePoint announced hackers had breached its network and stolen the personal information of up to 500,000 people.

How did Wall Street react? The firm’s shares plummeted 15 percent.

A few years back, a couple of journalists and security professionals brought this same thing up on a mail list, but questioned the impact of vulnerabilities on companies. Would the release of a nasty remote vuln impact a company like Microsoft? Would the release of a vulnerability in a security product affect a security company? How about if that same vulnerability was made into a worm with a destructive payload?

There is sketchy evidence that such vulnerabilities and subsequent worms can affect the value of a company. While I don’t have hard data to say this for sure, it is a project I’ve long wanted to take up. All it requires is a good timeline of vulnerabilities (OSVDB), a good sense of media/popular opinion of the events (ISN), and access to stock prices over the years (favorite broker). Mapping the bigger vulnerabilities, or the ones that made more press (even if less serious than others), combined with stock prices would make for some interesting research. Bottom line: can Joe Random Hacker release vulnerability information and negatively impact the value of a company?
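As a rough sketch of the kind of analysis I mean: take a disclosure date, find the closing price just before it and a few days after, and measure the move. The prices and dates below are hypothetical (loosely modeled on the 15 percent ChoicePoint drop mentioned above), not pulled from any real ticker.

```python
from datetime import date, timedelta

def price_impact(prices, disclosed, window=5):
    """Percent change in closing price from the last trading day
    before disclosure to roughly `window` days after it.

    `prices` maps date -> closing price; dates with no entry
    (weekends, holidays) are skipped by walking to the nearest
    day that does have one.
    """
    def nearest(d, step):
        # walk backward (step=-1) or forward (step=+1) to a priced day
        for _ in range(10):
            if d in prices:
                return prices[d]
            d += timedelta(days=step)
        raise KeyError("no price near %s" % d)

    before = nearest(disclosed - timedelta(days=1), -1)
    after = nearest(disclosed + timedelta(days=window), 1)
    return 100.0 * (after - before) / before

# Hypothetical closing prices around a breach disclosed on a Monday
prices = {
    date(2005, 2, 11): 40.0,   # Friday before disclosure
    date(2005, 2, 14): 40.2,   # disclosure day
    date(2005, 2, 15): 37.5,
    date(2005, 2, 18): 34.0,
}
print(round(price_impact(prices, date(2005, 2, 14), window=4), 1))  # → -15.0
```

Run that over every entry in a vulnerability timeline, bucketed by how much press each one got, and you have the skeleton of the research project.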

Vulnerability Classification Terminology

[This was originally published on the OSVDB blog.]

Local or remote: it seems so simple when classifying a vulnerability. The last few years have really thrown this simple distinction for a loop. Think of a vulnerability that occurs when processing a file, such as a browser rendering a JPG or GIF, or a program like Adobe Reader processing a PDF file. On one hand, you could argue that a browser has to remotely load an image, or that a PDF must be e-mailed to a user before it can be opened. On the other hand, what happens when the malformed file is given to you on a floppy disk? What if you are using MSIE to locally browse files on the hard disk? It’s not that local or remote are *wrong*, just not descriptive enough.

This debate has popped up on mail lists in the past year, and has been discussed at every VDB, I guarantee you. After a couple years of discussing it internally at OSVDB, we haven’t been able to come up with a better classification scheme. Why? Everything we come up with is just as nondescript or overly complex. We can’t seem to find a good middle ground to cover such distinctions.

Recently, Steven Christey of CVE has come up with a middle ground and begun using it in some entries. For attacks that require external help to somehow deliver hostile material to a victim, he has begun using “external user-complicit attackers” and it seems to be a good fit.


MusicPlasma for Vulnerabilities

[This was originally published on the OSVDB blog.]

A couple years back, I ran across musicplasma. For those not familiar with the engine, it allows you to type in your favorite music artist/band, and see “related” artists. So I type in “portishead” (mmmm) and see related bands like Tricky, and Sneakerpimps. These are all considered “trip-hop” so the links are expected. Moving a bit farther out, I start to see new bands (at the time) like Zero7, Air, or Hooverphonic (many of which are now on my playlist). So using this graphical representation, it is easy to see related bands and this type of tool is incredible for finding new music.

Shortly after, I started wondering what it would be like to use such an engine on vulnerabilities. What would it do, would it be valuable, would it help anyone? Two years later I still have the same questions, but lean toward the idea that it would be invaluable for vulnerability research, statistical analysis, and trending. People like CVE or OSVDB would love such a tool, and we’ve discussed the idea in the past. This most recently came up when Steven Christey (CVE) mailed asking what rules we adhered to for related OSVDB links within our database. As I said to him in e-mail, the CliffsNotes answer to whether we have rules governing this is “no”. I know, bad VDB! Despite that, there is a definite intention and desire for such links that would be used more strictly and consistently if we had developers to help us integrate our ideas into the actual database and front end. The gist of the related links is to eventually move toward an engine like MusicPlasma for vulnerabilities. Instead of rewriting portions of the mail I wrote, I’ll lazily quote some relevant parts:

Obviously a *great* tool for music given it is hard to find similar bands to the ones you like, given that most music reviews won’t even disclose if the lead singer is male or female, let alone the real style of the music beyond some pretty broad categories like “rock” or “rap”. Anyway, on an abstract level, using something like this to chart vulns and make an interface for users to chart vulnerabilities would be interesting. You visit osvdbplasma, click on PHP-Nuke, then graphically browse the issues but instead of just ‘similar’, you do it by age and severity. The closest to the PHP-Nuke ring would be the remote code execution on latest versions, then you could follow that out to older issues. You could choose a different path for XSS, Path Disclosure and other classes.

Like I said, maybe not so useful but it would look really cool(tm), and would make it more understandable to end users without much security experience (a long term goal of OSVDB).


Yep, another idea I had a while back, tracking the history of vulns in a set of products. Pick a few that cover a wide range .. Windows, Oracle, PHP-Nuke, John’s Blog. Then look at the vulnerabilities discovered in them, focusing on the types (SQL, PD, XSS, Overflow, etc). See if there are trends in the types discovered, then cross match it with (very rough) dates of when the class of vulnerability was discovered/announced (a task unto itself). Do any of these products get better? Worse? Are there trends on folks discovering the same types as they become ‘popular’ to research? All kinds of neat research to do here.

Not surprisingly, Christey replied quickly, saying that he too had thought of this type of model for viewing vulnerabilities, and added his own ideas for the reasons and features of such a project. I don’t think he took me seriously when I suggested mugging top DHS officials to fund such a project.
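The tallying behind that trend-tracking idea is trivial to sketch. A minimal example, with entirely made-up records (the products, years, and counts below are hypothetical, chosen only to show the shape of the output):

```python
from collections import Counter

# Hypothetical disclosure records: (product, year, vulnerability class)
records = [
    ("PHP-Nuke", 2003, "SQL injection"),
    ("PHP-Nuke", 2004, "XSS"),
    ("PHP-Nuke", 2004, "XSS"),
    ("PHP-Nuke", 2005, "XSS"),
    ("Windows", 2004, "overflow"),
    ("Windows", 2005, "overflow"),
]

def trend(records, product):
    """Count vulnerability classes per year for a single product."""
    tally = Counter((year, cls) for prod, year, cls in records
                    if prod == product)
    return dict(tally)

print(trend(records, "PHP-Nuke"))
```

Cross-matching those per-year counts against when each class of vulnerability first became “popular” to research is the part that would take real work; the counting itself is the easy bit.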

A couple weeks ago, HexView Security Research brought this to life with the first generation of such an engine. Check out their vulnerability maps. Done in Java, they tie products and platforms to vulnerabilities, showing how they are related. Currently, mousing over a vulnerability only offers a title and no additional information, but this is the first step! It’s very cool to see other companies and researchers looking into modeling this type of information.
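For the curious, the core of such a relatedness map is just a graph walk. A toy sketch, using hypothetical OSVDB-style records (IDs, products, and classes invented for illustration), where entries are linked when they share a product or a vulnerability class, and the “rings” around a starting entry are its graph distance:

```python
from collections import defaultdict, deque

# Hypothetical OSVDB-style records: id -> (product, vulnerability class)
vulns = {
    1001: ("PHP-Nuke", "SQL injection"),
    1002: ("PHP-Nuke", "XSS"),
    1003: ("PHP-Nuke", "remote code execution"),
    1004: ("PostNuke", "XSS"),
    1005: ("Windows", "overflow"),
}

def build_links(vulns):
    """Link two entries when they share a product or a vulnerability class."""
    links = defaultdict(set)
    ids = list(vulns)
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            if vulns[a][0] == vulns[b][0] or vulns[a][1] == vulns[b][1]:
                links[a].add(b)
                links[b].add(a)
    return links

def rings(links, start, depth=2):
    """Breadth-first walk from one entry; maps each reachable entry
    to its 'ring' (graph distance from the starting entry)."""
    seen, queue = {start: 0}, deque([start])
    while queue:
        v = queue.popleft()
        if seen[v] == depth:
            continue
        for w in links[v]:
            if w not in seen:
                seen[w] = seen[v] + 1
                queue.append(w)
    return seen

links = build_links(vulns)
print(rings(links, 1001))  # 1005 never appears: nothing ties it to PHP-Nuke
```

A real engine would weight the links (same product counts for more than same class, say) and hand the result to a renderer, but the browsing model is exactly this: pick an entry, fan out ring by ring.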

Anyway, all of this goes back to a long running gripe OSVDB has about the industry and VDBs specifically, and that is lack of evolution. These types of projects would be incredibly fun to work on, and potentially offer great insight into vulnerabilities, research and product history. On the off chance someone reading this knows about rendering such data or has time/expertise, contact us! We’d love to abuse your knowledge and get you involved in making this project happen.

“OSS means slower patches” – huh?!

[This was originally posted on the OSVDB blog.]

OSS means slower patches
Chris Jenkins
SEPTEMBER 19, 2005

This was posted to Full-Disclosure, where I first replied, and ISN picked it up. Articles like this do nothing positive for our industry. Jenkins should not waste his time writing fluff pieces like this; he should do some digging or at least question other sources. Of course, this is not the first time Symantec’s vuln stats have been questioned either. Since that post, no one at Symantec has given any insight as to how they derive their statistics and what led to their conclusions.

I haven’t had time to read the full report mirrored here, but I have a feeling it will bring more questions than answers like the previous one did.

Full text of my reply:

The obvious criticism:

“The Mozilla family of browsers had the highest number of vulnerabilities during the first six months of 2005, with 25,” the Symantec report says. “Eighteen of these, or 72 per cent, were rated as high severity. Microsoft Internet Explorer had 13 vendor confirmed vulnerabilities, of which eight, or 62 per cent, were considered high severity.”

Microsoft IE had at least 19 vulnerabilities from 2005-01-01 to 2005-06-30. Why does Symantec make the distinction of “X vulnerabilities in Mozilla” vs “MSIE had X *vendor confirmed vulnerabilities*”? This all too conveniently allows the silently patched vulnerabilities to slip through the cracks of our statistics. Does Mozilla’s honesty in acknowledging vulnerabilities come back to bite them in the ass?

Mozilla browsers had more than 25, but are 72 per cent really “high severity”? Download information spoofing x2, File extension spoofing, URL restriction bypass, DoS x2, redirect spoofing, XSS, link status bar spoofing, Dialog overlapping, URL Wrap Obfuscation.. are all of these really “high severity”? Is that theoretical, practical, or hype?

Now, the media/Symantec driven propaganda (for lack of better word?):

THE growing popularity of open-source browsers and software may be responsible for the increasing gap between the exposure of a vulnerability and the provision of patch to fix it, security software vendor Symantec has said.

Mr Sykes said the increasing popularity of open source software, such as the Mozilla Foundation’s Firefox browser, could be part of the reason for the increase in the gap between vulnerability and patch, with the open source development model itself part of the problem. “It is relying on the goodwill and best efforts of many people, and that doesn’t have the same commercial imperative,” he said. “I’m sure that is part of what is causing the blow-out in the patch window.”

The growth in Firefox vulnerability reports coincides with its increasing popularity with users. “It is very clear that Firefox is gaining acceptance and I would therefore expect to see it targeted,” Mr Sykes said. “People don’t attack browsers and systems per se, they attack the people that use them,” he said. “As soon as large banks started using Linux, Linux vulnerabilities started to get exploited.”

The premise of this article is that open source software is to blame for longer vendor response times. In layman’s terms, blame vendors like Mozilla for patching vulnerabilities more slowly? Err, compared to what? This shallow article doesn’t even qualify that statement! Slower than previous vulnerabilities? Slower than non open source? Given the article directly compares Mozilla browsers to Microsoft IE, it is trivial to assume the claim is made in relation to closed source vendors such as Microsoft. So then what .. 30 days “blown out” to 54 days is some huge time gap compared to Microsoft IE patches? What clueless *moron* really believes this crap they are shoveling? Is it Symantec or Chris Jenkins or Australian IT?

Given that Symantec won’t even quote previous statistics: “Symantec had not published previously statistics on the average time required to produce patches, but Mr Sykes said data showed the lag had previously been about 30 days.” Given that Jenkins/AusIT/Symantec won’t give us any statistics (even questionable ones) regarding MSIE patches, we’re supposed to take this at face value? It is *well documented* that Microsoft takes well over 30 days to patch vulnerabilities. It is also becoming crystal clear that Microsoft is hiding behind their “30 day patch cycle” to imply that is the longest they go before patching a vulnerability, when it simply is not the case. Taking a look at a *single vendor* [1] and their experience with reporting vulnerabilities to Microsoft, we see that they give MS a 60 day window to patch vulnerabilities, and Microsoft is consistently overdue. As of this mail, the worst is *ONLY* 114 days past due (we’ve seen it closer to 250 days before). So again, where are these implications coming from? Where does this statement/conclusion/observation that “OSS causes slower patches” come from exactly?

[1] http://www.eeye.com/html/research/upcoming/index.html

Scary Oracle Numbers

[This was originally published on the OSVDB blog.]


On Security, Is Oracle the Next Microsoft?
September 16, 2005
By Paul F. Roberts

While [Oracle CSO Mary Ann Davidson] acknowledges that some of the criticism from Litchfield and others is valid, outsiders aren’t privy to the 75 percent of product holes that Oracle discovers and fixes internally.

OSVDB has listings for roughly 330 Oracle vulnerabilities. If we take Davidson’s comment at face value and believe the number isn’t inflated, that means those 330 represent 25% of the vulnerabilities in their products. So according to Oracle, they have over 1,300 vulnerabilities in their products that they know of.
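The arithmetic behind that figure, as a quick sanity check:

```python
public = 330         # rough count of Oracle entries in OSVDB at the time
public_share = 0.25  # per Davidson, the other 75% are found and fixed internally

total = public / public_share  # implied total vulnerabilities known to Oracle
internal = total - public      # found internally, never publicly disclosed
print(int(total), int(internal))  # → 1320 990
```

So the 330 public entries imply roughly 990 more that only Oracle has ever seen, for a total north of 1,300.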

.. and the debate keeps raging

[This was originally published on the OSVDB blog.]

ZDNet Asia recently ran an article titled “Bug hunters, software firms in uneasy alliance” which brought up the age old full disclosure (or ‘responsible’ disclosure) debate. This prompted a Slashdot thread with various comments.

My favorite pop tart, Mary Ann Davidson (chief security officer at Oracle) managed to get quoted again. As usual, she still seems to have this serious disconnect between “responsible disclosure” and “responsible patching”. Let me quote a small portion of the article, see if it jumps out at you too.

Mary Ann Davidson, chief security officer at Oracle, sees security researchers who threaten vendors with disclosure of bugs as a problem, she wrote in a recent perspective piece on News.com. “The reality is that most vendors are trying to do better in vulnerability handling. Most don’t need threats to do so,” Davidson said.

Alexander Kornbrust specializes in security of Oracle products. He went public with details on six security vulnerabilities in Oracle software in July, about two years after he reported the bugs to the software maker and fixes still had not been provided.

Oracle chided Kornbrust as irresponsible for disclosing the data.

These vulnerabilities were disclosed to Oracle on 2003-07-31 and disclosed to the public on 2005-07-19. Three of them were Cross-Site Scripting (XSS), considered by most to be trivial to patch. Who is irresponsible here?
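For the record, the exact gap is easy to compute from those two dates:

```python
from datetime import date

reported = date(2003, 7, 31)   # reported to Oracle
published = date(2005, 7, 19)  # disclosed publicly, still unpatched

gap = (published - reported).days
print(gap)  # → 719 days, just short of two full years
```

Seven hundred nineteen days to not patch a handful of XSS bugs, and the finder is the irresponsible one.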

Vuln Info Disclosure via Blogs

[This was originally published on the OSVDB blog.]

Recently, Juha-Matti Laurio questioned whether there is a trend toward releasing vulnerability information via blog entries. While he is right that we are seeing it a bit more frequently, I don’t think it is any different from the dozens of “hacker” or security message forums that consistently seem to be the first point of disclosure. The other point in the post was how such disclosures may suffer from varying report formats, unofficial comments, and vendors not being able to keep up with such blogs. My thoughts:

1. There is already a huge disparity in vulnerability disclosures as far as the format. Even vendor advisories can vary quite a bit, making it increasingly difficult to parse the information, receive the same type of info, etc. There have been several attempts to standardize such disclosures, and it is even something I harped on at the last CanSec conference. Trying to get such a diverse group of researchers to use a single format, or even include a base amount of information is likely a pipe dream.

2. Unofficial comments are something that would affect not only blogs, but message forums and even mail lists. There are times when someone will post to Bugtraq, but subsequent replies are cross-posted to Vuln-Dev or other lists for further discussion. Some vulnerability databases also tend to miss new information (and even new vulnerabilities) in such replies, as if anything with “re:” in the subject gets ignored.

3. Vendors can’t keep up with blog entries, there is no question about it. Hell, *we* can’t either, as there are dozens of blogs and message forums where people disclose new vulnerabilities. That is the value of having one or two primary sources for such information (Bugtraq & Full-Disclosure, for example). One thing folks can do to help is, if they run across such a blog/forum post, dump the contents to one of the bigger mail lists. Include not only the URL but the text as well (many sites tend to vanish; mail list archives are all over).

Vulnerabilities becoming more mainstream?

[This was originally published on the OSVDB blog.]

Before 2005, it was fairly rare to see a news article specifically covering a vulnerability. They would usually pop up if a vuln was used in a mass compromise, was the basis of a propagating worm, or affected large vendors such as Microsoft and Oracle. This year, however, it seems more and more news is being written about vulnerabilities. Some of this may be explained by vendors being considered more mainstream (Mozilla & Apple), while the rest may reflect attention being paid to underlying technology that drives more mainstream applications or high profile lists. Two examples of this can be seen in Mailman [OSVDB 13671, Article] and CPAINT [OSVDB 18746, Article].

More recently:
OSVDB 19255: Firefox flaw found: Remote exploit possible
OSVDB 19227: New Cisco flaw could pose threat to Net
OSVDB 19089: Microsoft Investigates New IE Hole
OSVDB 18956: Reports: Long Registry Names Could Hide Malware

Additionally, it is getting to be routine to see articles covering monthly patch cycles:
Microsoft patches IE, Word, Windows
Microsoft to release six patches, some ‘critical,’ next week
Major Oracle Patch Covers Enterprise Products, Database Server
Apple unloads dozens of fixes for OS X

To stay even more current, there are now articles covering ‘0-day’ vulnerabilities still in various stages of the disclosure cycle.