Perlroth & The First (Zero-Day) Broker

I am currently reading “This Is How They Tell Me The World Ends” by Nicole Perlroth, only on page 60 in Chapter 5, so I have a long way to go before completing the 471-page tome. I hit Chapter 4, titled “The First Broker”, and it was of particular interest to me, prompting this (second) blog on the book. A broker is defined as “a person who buys and sells goods or assets for others”, so I was never a vulnerability broker by that definition, and I am not trying to claim to be the actual first broker of zero-days in that context at all. Instead, I would like to share a couple of my own stories that are adjacent to the topic. This is all to the best of my recollection, but my memory isn’t the best due to being a diabetic and not having it under control for several years. If anyone involved in any of these stories remembers things differently, please feel free to comment or reach out directly and I will update this blog accordingly.


First, I was someone who ‘brokered’ deals in the sense of trading zero-day vulnerabilities for a few years in the mid-’90s. I was part of multiple hacking groups, in some an actual member and in others an honorary one, and my role in several of those groups was not writing the zero-days, because I simply wasn’t a coder and didn’t have that skill. Instead, it was to barter: to try to gain access to specific zero-days a group or member wanted, with other zero-days we had as my currency. While I couldn’t code, my social network of hackers was sizable.

Some of what I was authorized to trade for was toward the goal of obtaining e.g. “any remote zero-day in $target operating system”, while in other cases it was “trade anything and everything we have for $specific-zero-day”. I acted as a go-between for the groups I was in and a liaison to the general hacker scene. Many knew me to have a well-rounded vulnerability collection, and we already traded more pedestrian exploits, some of which weren’t public but were definitely circulated more widely in such groups.

Back then it was just hackers and groups, not companies, so we didn’t have “duffel bags stuffed full of half a million dollars in cash to buy zero-day bugs” (p.49). Instead we had other zero-day bugs, which were just as valuable between groups and acted as the ideal currency. Just as Perlroth describes in relating the story of “Jimmy Sabien” (p.43), not his real name, the vulnerabilities had serious value back then too. Some were very closely guarded, to the point of not being shared even with the holder’s own group. For example, Sally may have shared 99% of her exploits and zero-days with her group but held one back because it was so valuable. That one she would use sparingly herself, so as not to burn it, or authorize it to be traded only for a vulnerability of equal value. In those rare cases I would know just enough about the vulnerability to try to arrange a trade on her behalf, sometimes never seeing the vulnerability myself.

There were rumors at the time that some hackers had sold vulnerabilities to specific agencies in European governments. There were also rumors that some were trading zero-day exploits to a European law enforcement agency as a proffer, or as part of a plea, to avoid being charged for hacking activity. But those were just rumors at that point. To me, that was the precursor to the more financially driven zero-day market.


Later in the ’90s, I was one of the two founders of a startup called Repent Security Inc. (RSI or RepSec). We were three people and started out trying to be a penetration testing shop. This was still early in the world of commercial penetration testing, and we were going up against companies that either had an established business reputation, like a couple of the ‘Big 5’ at the time, or were pioneers in the game, like The Wheel Group. We also created software for securely streaming logs over an encrypted tunnel so that if a system was popped, you had the logs on a remote host with timestamps, including your shell histories (which didn’t have timestamps natively). That software was partially outsourced to a renowned “InfoSec luminary” who had it developed by one of his interns on a compromised .edu machine, and who later essentially stole the software after RSI imploded. But that story is for another day because it isn’t part of the zero-day world; it’s part of the Charlatan and Errata world.

One thing RSI had of real value was the vulnerability database that I had been maintaining since 1993. It was first maintained for the hacker group I was part of (TNo), where it originated with other members. When I took over maintaining it, I worked on further organizing it, adding several points of metadata, and expanding it. After that group drifted apart I kept maintaining it while a member of w00w00 and an honorary member of ADM, where I brokered some trades. I did not maintain the databases for either of those groups, which were separate from mine, but I was privy to some of their exploits and shared some of what I had. Members of both groups would frequently ask me to check my database for exploits specific to an operating system or service they were targeting, as this was before Google, and Yahoo! didn’t aggregate much in the big picture. Even though a majority of vulnerabilities were posted to Bugtraq, you couldn’t just skim it quickly to determine what was there that you could use for your purpose. Someone who had them all sorted in a database with metadata was fairly valuable. To this day, many friends and colleagues still ask me to do vulnerability lookups, now with VulnDB.

Throughout my hacker days I maintained that database, and then continued to as I transitioned into a career doing penetration testing. Like Perlroth documents in her book about the early days of iDefense and the outfit that “Sabien” worked for, we all primarily scoured Bugtraq for our information. I had the benefit of several circles of hackers and hackers-turned-legit that still traded vulnerability intelligence (vuln intel). Essentially the grey market, back when the currency was still vuln intel, not those duffel bags of cash. By that point, the database that RSI had was unparalleled in the commercial world. It was initially created before, and maintained during, Fyodor’s Exploit World and Ken Williams’ Packetstorm. The RSI database came before the ISS X-Force database, before BID, before NIST’s ICAT Metabase, and before MITRE’s CVE. That said, it was heavy on exploit code but light on proper descriptions or solutions, so it was geared toward penetration testing and compromising machines rather than mature vulnerability intelligence.

As RSI struggled to get penetration testing gigs and opted to work on the “Secure Remote Streaming” (SRS) product, we took a trip to Atlanta to talk to ISS about selling a copy of our database to their relatively new X-Force penetration testing team (I forget who we met there, but I would love to remember!). That deal did not happen, and we soon found ourselves in talks with George Kurtz at Ernst & Young, one of the ‘Big 5’. While most or all of the ‘Big 5’ had penetration testing teams, their reputations weren’t the best at the time. That was primarily because their testers were frequently traditional auditors turned penetration testers, rather than ‘real’ testers: people who came up through the hacking ranks.

It is also important to remind everyone that back then these companies “did not hire hackers”. They literally printed it in advertisements as a selling point that they did not hire, and would not consort with, so-called black hats. This was almost always an outright lie. Either the company knew the background of their team and lied, or they did not know the background and conveniently overlooked that their employees had zero experience around that skillset on their resumes, yet were magically badass testers. Years of companies claiming this also led to what we see now, where many security professionals from that time still refuse to admit they used to hack illegally, even 25 years later.

Anyway, back to George and E&Y. It made sense that a shop like that would want to get their hands on RSI’s database. If their testers came primarily from the auditor / bean-counter side of things, they would not have had their own solid database. Even if they had hackers, it didn’t mean those hackers came with the same vuln intel we had. As best I recall, the negotiations went back and forth for a couple of weeks and we settled on a one-time sale of the RSI database for $75,000, with the option to revisit selling ‘updates’ to it as we continued to maintain it. I believe this would have been the first commercial vulnerability intelligence feed at the time, in early 1999. Then, disaster.

The FBI raided the offices of RSI, which was my apartment. At the time, that was a death sentence for a penetration tester’s career. Regardless of guilt, the optics were those of black hat / criminal hacking, and finding someone who would trust you to break into their systems was not happening. RSI dissolved and I found myself struggling to find work of any kind. So I reached back out to George about the deal we had on the table, the one we were close to signing, and said I was fine with the price, let’s do it. Suddenly, Kurtz had a change of heart.

He didn’t have a change of heart as far as doing the deal; his change was in the price. Instead of $75,000, he came back and said we could do the deal for $25,000, just a third of what we had agreed to. He knew I was in a tight spot and needed the money, and he took full advantage of that. This is someone who had a reputation for being a friend to hackers, someone who had bridged the gap between the business world and hackers to put together a reputable team at E&Y. He even had his name on a book about penetration testing, co-authored with names other hackers recognized. He was also very explicit that he knew I had no real power at that point, and he refused to budge on his one-third offer.

So when he had a chance to honor the deal we originally worked out, a chance to be a friend to a hacker at no expense to himself? He opted to screw me. Since I was out of options and my limited savings were dwindling, I had to accept the offer. That takes me full circle, via a meandering path I know, to likely making one of the largest vulnerability sales of the time. While it wasn’t a single exploit, a $25k deal that was originally set to be $75k is pretty impressive for that era. If RSI had made it, odds are we would have become a software (SRS) and vulnerability intelligence shop rather than a penetration testing shop.

We were already doing many of the things Perlroth describes about the early days of iDefense and “Sabien’s” shop, with a lot fewer people than they claimed: aggregating information from Bugtraq and other sources, writing exploits for some of the vulnerabilities, and then beginning to try to sell that information. I guess it isn’t a big surprise I ended up in the vulnerability intelligence business eventually.

CVE Vulnerabilities: How Your Dataset Influences Statistics

[This was originally published on the OSVDB blog.]

Readers may recall that I blogged about a similar topic just over a month ago, in an article titled Advisories != Vulnerabilities, and How It Affects Statistics. In this installment, instead of “advisories”, we have “CVEs” and the inherent problems when using CVE identifiers in the place of “vulnerabilities”. Doing so is technically inaccurate, and it negatively influences statistics, ultimately leading to bad conclusions.

NSS Labs just released an extensive report titled “Vulnerability Threat Trends; A Decade in Review, Transition on the Way”, by Stefan Frei. While the report is interesting, and the fundamental methodology is sound, Frei uses a dataset that is not designed for true vulnerability statistics. Additionally, I believe that some of the factors Frei attributes trends to are incorrect. I offer this blog as open feedback, to bring additional perspective to the realm of vulnerability stats, which is a long way from approaching maturity.

Vulnerabilities versus CVE

In the NSS Labs paper, they define a vulnerability as “a weakness in software that enables an attacker to compromise the integrity, availability, or confidentiality of the software or the data that it processes.” This is as good a definition as any. The key point here is a weakness, singular. What Frei fails to point out is that the CVE dictionary is not a vulnerability database in the same sense as many others. It is a specialty database designed primarily to assign a unique identifier to a vulnerability, or a group of vulnerabilities, to coordinate tracking and discussion. While CVE says “CVE Identifiers are unique, common identifiers for publicly known information security vulnerabilities”, it is more important to note the way CVE abstracts, which is covered in great detail. From the CVE page on abstraction:

CVE Abstraction Content Decisions (CDs) provide guidelines about when to combine multiple reports, bugs, and/or attack vectors into a single CVE name (“MERGE”), and when to create separate CVE names (“SPLIT”).

This clearly denotes that a single CVE may represent multiple vulnerabilities. With that in mind, every statistic generated by NSS Labs for this report is inaccurate, and their numbers are not reproducible using any other vulnerability dataset (unless it too is based only on CVE data and does not abstract differently, e.g. NVD). This distinction puts the report’s statements and conclusions in a different light:

As of January 2013 the NVD listed 53,489 vulnerabilities ..
In the last ten years on average 4,660 vulnerabilities were disclosed per year ..
.. with an all-time high of 6,462 vulnerabilities counted in 2006 ..

The abstraction distinction means that these numbers aren’t just technically inaccurate (i.e. terminology), they are factually inaccurate (i.e. actual stats when abstracting on a per-vulnerability basis). In each case where Frei uses the term “vulnerability”, he really means “CVE”. When you consider that a single CVE may cover as many as 66 or more distinct vulnerabilities, it really invalidates any statistic generated using this dataset as he did. For example:

However, in 2012 alone the number of vulnerabilities increased again to a considerable 5,225 (80% of the all-time high), which is 12% above the ten-year average. This is the largest increase observed in the past six years and ends the trend of moderate declines since 2006.

Based on my explanation, what does 5,225 really mean? If we agree for the sake of argument, that CVE averages two distinct vulnerabilities per CVE assignment, that is now over 10,000 vulnerabilities. How does that in turn change any observations on trending?
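To make the abstraction problem concrete, here is a small illustrative sketch. The CVE identifiers and per-CVE counts are made up for illustration, not taken from NVD or OSVDB; the point is only how the totals diverge depending on whether you count CVE entries or individual vulnerabilities:

```python
# Hypothetical disclosures: each CVE identifier mapped to the number of
# distinct vulnerabilities it was assigned to cover (a MERGE covers several).
cve_entries = {
    "CVE-XXXX-0001": 1,   # one flaw, one identifier
    "CVE-XXXX-0002": 3,   # three related flaws merged into one identifier
    "CVE-XXXX-0003": 66,  # a large batch of similar flaws under one identifier
}

cve_count = len(cve_entries)            # what a CVE/NVD-based report counts
vuln_count = sum(cve_entries.values())  # what a per-vulnerability database counts

print(cve_count, vuln_count)  # 3 vs. 70 -- same disclosures, very different totals
```

Any per-year trend computed from the first number says nothing reliable about the second unless the merge ratio stays constant, and there is no reason to assume it does.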

The report’s key findings offer 7 high-level conclusions based on the CVE data. To put all of the above in more perspective, I will examine a few of them and use an alternate dataset, OSVDB, that abstracts entries on a per-vulnerability basis. With those numbers, we can see how the findings stand. NSS Labs report text is quoted below.

The five year long trend in decreasing vulnerability disclosures ended abruptly in 2012 with a +12% increase

Based on OSVDB data, this is incorrect. Both 2009 (7,879) -> 2010 (8,835) as well as 2011 (7,565) -> 2012 (8,919) showed an upward trend.
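For what it’s worth, the year-over-year change is easy to compute from those OSVDB totals (a quick sketch using the numbers above):

```python
# Year-over-year change in the OSVDB disclosure totals cited above
totals = {2009: 7879, 2010: 8835, 2011: 7565, 2012: 8919}

for prev, cur in [(2009, 2010), (2011, 2012)]:
    change = (totals[cur] - totals[prev]) / totals[prev] * 100
    print(f"{prev} -> {cur}: {change:+.1f}%")  # +12.1% and +17.9%
```

So by OSVDB’s per-vulnerability counting, the 2012 increase is actually larger than the +12% the report highlights.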

More than 90 percent of the vulnerabilities disclosed are moderately or highly critical – and therefore relevant

If we assume “moderately” means “Medium” criticality, later defined in the report as 4.0 – 6.9, then OSVDB shows 57,373 entries with CVSSv2 scores of 4.0 – 10.0, out of 82,123 total. That works out to roughly 70%, so the reported 90% is considerably higher than what we show. Note: we do not have complete CVSSv2 data for 100% of our entries, but we do have it for all entries affiliated with the ones Frei examined, and more. If “moderately critical” and “highly critical” refer to different ranges, then they should be more clearly defined.

It is also important to note that this finding is a red herring, due to the way CVSS scoring works. A remote path disclosure in a web application scores a 5.0 base score (CVSS2#AV:N/AC:L/Au:N/C:P/I:N/A:N). This skews the scoring data considerably higher than many in the industry would agree with, as 5.0 is the same score you get for many XSS vulnerabilities that can have more serious impact.
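For readers who want to check the math, here is a minimal sketch of the CVSSv2 base score equation using the standard metric weights from the v2 specification. It shows the path disclosure vector above, and a comparable low-complexity XSS (partial integrity impact only), both landing on 5.0:

```python
# Minimal CVSSv2 base score calculator (sketch); only the metric values
# needed for these examples are included.
AV = {"N": 1.0}                            # Access Vector: Network
AC = {"L": 0.71, "M": 0.61}                # Access Complexity: Low / Medium
AU = {"N": 0.704}                          # Authentication: None
CIA = {"N": 0.0, "P": 0.275, "C": 0.660}   # Impact: None / Partial / Complete

def cvss2_base(av, ac, au, c, i, a):
    impact = 10.41 * (1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a]))
    exploitability = 20 * AV[av] * AC[ac] * AU[au]
    f_impact = 0 if impact == 0 else 1.176
    return round((0.6 * impact + 0.4 * exploitability - 1.5) * f_impact, 1)

# Remote path disclosure: AV:N/AC:L/Au:N/C:P/I:N/A:N
print(cvss2_base("N", "L", "N", "P", "N", "N"))  # 5.0
# A low-complexity XSS scored as partial integrity impact lands on the same 5.0
print(cvss2_base("N", "L", "N", "N", "P", "N"))  # 5.0
```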

9 percent of vulnerabilities disclosed in 2012 are extremely critical (with CVSS score>9.9) paired with low attack/exploitation complexity

This is another red herring, because any CVSS 10.0 score means that “low complexity” was factored in. The wording in the report implies that a > 9.9 score could be paired with higher complexity, which isn’t possible. Further, CVSS is scored for the worst case scenario when details are not available (e.g. CVE-2012-5895). Given the number of “unspecified” issues, this may seriously skew the number of CVSSv2 10.0 scores.
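Reusing the cvss2_base sketch above illustrates the point: with full confidentiality, integrity, and availability impact, a remote unauthenticated issue only reaches 10.0 when access complexity is Low; the moment complexity rises to Medium, the score caps at 9.3.

```python
# A 10.0 requires AC:L; the same full-impact vector with AC:M tops out at 9.3.
print(cvss2_base("N", "L", "N", "C", "C", "C"))  # 10.0
print(cvss2_base("N", "M", "N", "C", "C", "C"))  # 9.3
```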

Finally, there is one other element of this report, used in the overview and later in the document, to attribute a shift in disclosure trends. From the overview:

The parallel and massive drop of vulnerability disclosures by the two long established purchase programs iDefense VCP and TippingPoint ZDI indicate a transition in the way vulnerability and exploit information is handled in the industry.

I believe this is a case of “correlation does not mean causation”. While these are the two most recognized third-party bug bounty programs around, there are many variables at play here. In the bigger picture, shifts in these programs do not necessarily mean anything. Some of the factors that may have influenced disclosure numbers for those two programs include:

  • There are more bug bounty programs available. Some may offer better prices or incentives for disclosing through them, stealing business from iDefense/ZDI.
  • Both companies have enjoyed their share of internal politics that affected at least one program. In 2012, several people involved in the ZDI program left the company to form their own startup. It has been theorized that since their departure, ZDI has not built the team back up and that disclosures suffered as a result.
  • ZDI had a small bout of external politics, in which one of their most prevalent bounty collectors (Luigi Auriemma) had a serious disagreement about ZDI’s handling of a vulnerability, as relates to Portnoy and Exodus. Auriemma’s shift to disclose via his own company would dramatically affect ZDI disclosure totals alone.
  • Both of these companies have a moving list of software that they offer a bounty on. As it changes, it may result in spikes of disclosures via their programs.

Regardless, iDefense and ZDI represent a small percentage of overall disclosures, so it is curious that Frei opted to focus on them so prominently as a reason for vulnerability trends changing, without considering other influencing factors. Even during a good year, 2011 for example, iDefense (42) and ZDI (297) together accounted for 339 out of 7,565 vulnerabilities, only ~4.5% of overall disclosures. There are many other trends that could just as easily explain relatively small shifts in disclosure totals. Making statements about trends in vulnerability disclosure, and how they affect statistics, isn’t something that should be done by casual observers. They simply miss a lot of the low-level details you glean from day-to-day vulnerability handling and cataloging.
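The ~4.5% figure is easy to verify (a quick sketch using the 2011 numbers above):

```python
# iDefense and ZDI's share of 2011 disclosures, per the OSVDB numbers above
idefense, zdi, total_2011 = 42, 297, 7565
share = (idefense + zdi) / total_2011 * 100
print(f"{idefense + zdi} of {total_2011} = {share:.1f}%")  # 339 of 7565 = 4.5%
```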

To be clear, I am not against using CVE/NVD data to generate statistics. However, when doing so, it is important that the dataset be explained and qualified before going into analysis. The perception and definition of what “a vulnerability” is changes based on the person or VDB. In vulnerability statistics, not all vulnerabilities are created equal.

iDefense VCP as seen through OSVDB

[This was originally published on the OSVDB blog.]

In 2002, iDefense started their Vulnerability Contributor Program (VCP). The VCP was created to solicit vulnerability information from the security community and pay researchers for the information. By paying up to US$15,000 for a vulnerability or exploit, iDefense proved, after years of debate, that there was a significant market for such information. The VCP also served as a stark reminder that researchers do not have an obligation to report vulnerabilities to vendors, and that doing so is a courtesy.

The VCP pays for “actionable research”, meaning exploits in prominent software (e.g., Microsoft, Oracle) and infrastructure devices (e.g., Cisco). With the information in hand, iDefense in turn leverages the researcher’s time by notifying their customers as an early warning system, while handling the responsible disclosure of the information to the vendor. This activity can save a world of time for researchers who are long since tired of the headache that often comes with disclosure.

The list of vulnerabilities disclosed by iDefense is impressive. They attribute the large number of advisories to “250 security researchers worldwide”.

In the past few months, an OSF employee (Nepen) has begun to add creditee information for many vulnerabilities in prominent software. This has resulted in creditee information being added for all of the iDefense vulnerabilities. Using OSVDB, we can now look at their advisories in a new light.

iDefense employees have released 131 advisories, credited to 11 unique researchers and “iDefense Labs”. The VCP program has released 479 advisories, credited to 78 unique researchers and “anonymous”. If we assume the 250 researcher figure is an estimate that includes both iDefense and VCP contributors, then 89 researchers are distinct and publicly named. That means the “anonymous” submissions represent approximately 161 unique people and cover 326 of the 479 advisories released.
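A quick back-of-the-envelope calculation with those figures (a sketch, assuming the 250 number covers both programs):

```python
# Credited vs. anonymous researchers, assuming iDefense's "250" covers both programs
labs_named, vcp_named = 11, 78      # researchers named in iDefense Labs / VCP advisories
claimed_total = 250                 # iDefense's "250 security researchers worldwide"

named = labs_named + vcp_named      # 89 distinct, publicly credited researchers
anonymous = claimed_total - named   # ~161 researchers behind the "anonymous" credits
print(named, anonymous)             # 89 161
```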

Using OSVDB’s new creditee system, we can see a neat timeline of the advisories as related to both iDefense and their VCP:

iDefense VCP (79 researchers, 479 advisories): http://osvdb.org/affiliations/1139-idefense-labs-vcp

iDefense Labs (12 researchers, 131 advisories): http://osvdb.org/affiliations/1091-idefense-labs

This is one of many neat ways to use the enhanced creditee system. Over time, as more information is added to the database, we can begin to look at other researchers and organizations.

Responsible Disclosure – Old Debate, Fresh Aspects?!

[This was originally published on the OSVDB blog.]

Earlier this evening, there was a Twitter debate regarding ISO/IEC 29147, a proposed standard for responsibly disclosing a vulnerability. Dino Dai Zovi brought up a fresh angle: that the “responsible disclosure” name itself completely ignores the aspect of the vendor practicing “responsible remediation”. That term should really be closer to the center of our minds and discussion. The lack of “responsible remediation” is why so many researchers are fed up with dealing with vendors. That is one reason some use services like ZDI or iDefense, not just the cash.

The “responsible disclosure” debate is stale for the most part. We’ll never agree on how much time is ‘right’ for a vendor to fix a vulnerability. Some researchers think it’s days, others think weeks or months. In the paraphrased words of some female vendor on some boring responsible disclosure panel a few years back, “if I can have a kid in 9 months, I should be able to fix a vulnerability too”. Yet 9 months isn’t reasonable to some vendors like HP, who routinely break the 1,000-day mark, even for simple XSS.

Martin McKeay brought up another aspect of the responsible disclosure debate that was actually fresh, asking what part consumers play in the disclosure process. While I believe it is a neat aspect and something most haven’t considered, I personally believe it is quickly answered by “consumers can put financial pressure on vendors that don’t play well with others”. In reality, consumers are lazy. It takes more than a few bad acts to get us to spend the time and energy finding a new vendor. Short of anally raping us with a router and pouring lemon juice in our festering wound, nine times out of ten, we will not find a new vendor.

Back to Dino. He is right: any standard for disclosure should be as centered on the vendor as it is on the researcher. Researchers can easily fall back on RFP’s “rfpolicy” disclosure policy and change X days to something they believe in. The framework is still perfectly valid and outlines the process; the time frames are always up for debate.

What if we carried this one step further? How about making the ISO standards apply to any and every vulnerability, regardless of who found it? If BigVendor finds a vulnerability during internal testing and fixes it, don’t consumers have a right to know? When BigVendor says “upgrade to Service Pack 18” and only gives us a reason of “big stability enhancements!!”, shouldn’t we have a right to know that those enhancements translate into 17 remotely exploitable vulnerabilities discovered during internal testing and QA? Wouldn’t *that* knowledge be a more significant reason to upgrade and apply the service pack?

I realize it is a pipe dream to think that most vendors would ever offer that level of transparency, even months (years?) after a given issue is fixed. In reality, though, they are the proverbial large mythical flightless birds who stick their heads in the sand rather than face a difficult situation (ostriches are real and don’t actually bury their heads). It has been proven countless times that serious vulnerabilities in big vendors’ products (e.g., Microsoft, Apple, Adobe) are being discovered by multiple parties. No one with an inkling of common sense and rational thinking can believe that the ‘bad guys’ aren’t also discovering some of these bugs. We’re long past the point of vendors honestly thinking they can get away with some notion that they have a reputation for ‘security’. Add it up, and we’re at the point where the big vendors should be disclosing vulnerabilities discovered during their internal QA / SDLC process. The reputation of insecure software really can’t hurt them any more, and transparency is finally the one thing that could buy back some degree of consumer confidence.

Perhaps now is the time when ‘responsible disclosure’ should apply equally to hackers, security researchers and vendors, and extend to ‘responsible remediation’. Because really, some 20 years after the disclosure debate got going, do we really think we need to apply more guidelines to researchers giving away $250/hr consulting work, or to “hackers” posting vulnerabilities as a hobby? Vendors that have tried to label or apply policy to these people were simply blame-shifting from day one, while not applying that desired policy to themselves.

“high price bug brokering market just isn’t viable”

[This was originally published on the OSVDB blog.]

On January 17, 2007, SnoSoft / Netragard LLC announced a new Exploit Acquisition Program designed to compete with iDefense, TippingPoint and others. Nothing special or different, other than the suggestion that they would pay more for high-end vulnerabilities. A little over a year later, they announced they were shutting down the Exploit Acquisition Program. From their post:

We regret to say that its true, we’ve shut down the Exploit Acquisition Program. The reason for the shutdown was that it was taking our buyers too long to complete a single transaction and it wasn’t fair to the researchers. While we’d expect a single transaction to take no more than a month, the average transaction time for our buyer was 4 months. The last transaction that we attempted took 7 months at which point the issues were silently patched and the transaction was dead. As it stands right now, we can’t justify asking anyone to wait that long to move a single item. So until the end players learn how to move faster, the high price bug brokering market just isn’t viable.

No offense to SnoSoft / Netragard, but their competitors have proven that the market is viable. I guess the trick is how you ‘sell’ the information. For iDefense it is early warning for their customers in case the same vulnerability is being exploited by others. For TippingPoint it is early warning and IPS signatures. For WabiSabiLabi it is more like the SnoSoft program, where one buyer gets exclusive rights to the information, and it appears to be working to some degree.

The value of 0-day…

[This was originally published on the OSVDB blog.]

Another interesting article regarding the value of 0-day vulnerabilities. Rob Lemos relates the stories of a few researchers who sold their 0-day vulnerability/exploit information for big dollars. The twist here, which is news to some, is who purchased it (the .gov) and for how much (as high as $80k). This is significantly more than vulnerability purchase shops iDefense and ZDI (3Com/TippingPoint) currently offer. The only catch? The big spenders aren’t advertising, so you have to have contacts to make such a sale. The scary part? We all know how cheap the U.S. government can be… so how much are other governments paying?

Matousec’s Vulnerability Value

[This was originally published on the OSVDB blog.]

Since the debate about pay-for-disclosure started, some folks have wondered what vulnerabilities are worth. We’ve seen companies like Verisign/iDefense and Tipping Point/ZDI offer serious money for vulnerabilities in the past. Adding to the mix, matousec.com has published a purchase page with prices of some of their vulnerability research information:

* Full analysis of reviewed personal firewalls

Visit Windows Personal Firewall analysis methodology page to get information about what the full analysis is. The full analysis is preferentially offered to the product vendor. If the vendor buys the analysis it is given 30 days protection for all private information included in this analysis.

Prices:
o ZoneAlarm Pro 6.1.744.001 analysis – 1,500 ($ 1,950)
o Kerio Personal Firewall 4.3.246 analysis – 500 ($ 650)
o Norton Personal Firewall 2006 version 9.1.0.33 analysis – 1,500 ($ 1,950)
o BlackICE PC Protection 3.6.cpj analysis – 1,500 ($ 1,950)

* Single bugs of reviewed personal firewalls

Visit Windows Personal Firewall analysis methodology page to get information about what the single bug is.

Prices:
o ZoneAlarm Pro 6.1.744.001 bugs – visit ZoneAlarm Pro 6.1.744.001 – Review
o Kerio Personal Firewall 4.3.246 bugs – visit Kerio Personal Firewall 4.3.246 – Review
o Norton Personal Firewall 2006 version 9.1.0.33 bugs – visit Norton Personal Firewall 2006 version 9.1.0.33 – Review
o BlackICE PC Protection 3.6.cpj bugs – visit BlackICE PC Protection 3.6.cpj – Review

Security expert dubs July the ‘Month of browser bugs’

[This was originally published on the OSVDB blog.]

Security expert dubs July the ‘Month of browser bugs’

By Greg Sandoval

Each day this month, a prominent security expert will highlight a new vulnerability found in one of the major Internet browsers.

HD Moore, the creator of Metasploit Framework, a tool that helps test whether a system is safe from intrusion, has dubbed July the Month of Browser Bugs. Already, the security researcher has featured five security flaws, three for Microsoft’s Internet Explorer and one apiece for Mozilla’s Firefox and Apple Computer’s Safari.

Thirty-one days later, MoBB is done! By far one of the more interesting vulnerability disclosure projects we’ve seen this year. I have a strong feeling that the real ramifications won’t be realized until months later, when someone does a more thorough analysis. Until then, my random thoughts.

First, HDM and I chatted almost every single day during the month, mostly to coordinate the pre-assignment of OSVDB IDs for each bug. Due to the schedule I keep, it was usually easy to check the blog around midnight every night, and for 30 of the 31 days, he was right on time releasing the next bug. Only on the 31st day did he finally fall behind by a whole two hours (jeez, what a slacker!) in releasing the final bug. Ok ok, it wasn’t due to slacking; he had been working for hours trying to isolate the exact details to fully understand and document the bug he had been researching in Safari.

31 browser bugs, what’s the final breakdown?

  • MSIE: 25
  • Apple Safari: 2
  • Mozilla: 2
  • Opera: 1
  • Konqueror: 1

I’ll let you make any conclusions you want. If I hadn’t posted this, we’d no doubt see at least one article saying how much more insecure MSIE is than X, and that this is just proof of it. Hopefully the fact that I posted that last line might actually make a journalist stop and think, “why, is it something else?!” GLAD YOU ASKED! Ok not really, but there is more to it than “W bugs in X browser vs Y bugs in Z browser, so X must be more insecure than Z!@$#!” If you can’t think of any such reasons, quit your job and go to art school.

What if he had…

  1. followed ‘accepted’ vulnerability disclosure guidelines? (the project would have been dubbed the YoBB?)
  2. sold his findings to the shops like ZDI or iDefense that pay for such information? (he’d be rich?!)
  3. sold his findings to a Russian spam syndicate? (he’d be able to buy a new iPod?!)
  4. never posted a single bug in any fashion? (he and a dozen others would all be sitting on this information)
  5. provided even more easy point-and-drool exploitation? (we’d be reading another CNET article about the latest spyware/adware that exploited..)

Want another month of browser bugs? Yes, he could continue on into August without a problem. The number of browser bugs out there is stupid. Apparently, the idea of writing a basic fuzzer is still lost on the authors. The good news: HDM will be releasing the fuzzer he used to find all of these to the public. Will an insane rush of browser bugs follow? We can hope!

Want another month of browser bugs? Then do it yourself. While it may sound easy, researching each one to the degree HDM did is not easy and it isn’t fast. If you can devote between 15 minutes and 3 hours a day for 31 days, then go for it! Until then, as my friend major says, “never lick a gift whore in the mouse.”

Vulnerability Purchasing

[This was originally published on the OSVDB blog.]

Several years ago, iDefense started purchasing vulnerabilities from freelance researchers and created its Vulnerability Contributor Program. Find a vulnerability, disclose it to iDefense under mutual NDA, and they would act as a mediator between you and the vendor for disclosure. After a patch was available, iDefense would release an advisory and pay you. Ignoring the fact that they would sit on the information for up to a year before disclosing it to the vendor, this program rewarded people for finding and disclosing vulnerabilities.

Months back, David Endler left iDefense to join TippingPoint, a division of 3Com. Shortly after, TP announced its “Zero Day Initiative”. Like iDefense, the ZDI would pay for vulnerabilities, but it also created a ‘loyalty’ program for continuing to disclose vulnerabilities through them (wonder if they give out keychain thingies like my grocery store does?).

Now, Digital Armaments is also offering a “pay for vuln” program. Instead of just offering cash for 0-day, they also offer trade-in credit so you can receive other 0-day in return for your own. This deviates quite a bit from the path of “responsible disclosure” (already questionable under the other two models).

Zero Day Vulnerabilities – Sell Your Soul?

[This was originally published on the OSVDB blog.]

There have been several Vulnerability Sharing Clubs (VSCs) in the past, including iDefense, Immunity and others. For those who question this business model, consider that Verisign just purchased iDefense for US $40 million. Still not a believer? Consider that 3Com/TippingPoint is now offering a new VSC called the Zero Day Initiative. Now, instead of just selling an exploit for cash, you can earn points and trade them in for cash and prizes! Since this new program is being led by David Endler, who was an early participant in the creation of the iDefense VSC, this business model appears to be very sound (for the time being). In response, iDefense/Verisign has announced that not only is it continuing its program, it is beefing it up and offering more money for 0-day. For the skeptics out there, you are not alone. Frank Knobbe wrote a really good response to the 3Com/TP announcement, questioning the nature of the vulnerabilities that would be shared. I tend to agree with many points of it.

Other random thoughts:

  • VSCs typically receive a 0-day vulnerability, share the info with their clients, then disclose the vuln to the vendor, give the vendor all the time it wants for a patch, and eventually publish the information (presumably when it has little or no value). Verisign may now give iDefense a better view of when the 0-day is worthless, via the customer networks it monitors. Once they see the vulnerability in the wild, they know it isn’t 0-day anymore and the value drops.
  • With the above model in mind, we now know that Verisign doesn’t care about the ethical dilemma of having 0-day vulnerability information and not immediately disclosing it to the vendor. Even if they do share with the vendor immediately, they also share this information with clients, who can leak the information out to other people.
  • With the above model in mind, we know that 3com/TippingPoint also doesn’t care about the ethical dilemma.
  • Is this the start of a trend regarding vulnerabilities, disclosure and the bottom line?
  • Will this be the precursor to half a dozen other companies offering similar programs?
  • If there are a dozen VSCs like this, are the vendors expected to pay for the information to receive it before the VSC decides to “responsibly disclose” said information to the vendor? (Remember, the vuln info usually stays in the hands of the VSC and its clients for months before vendor notification.)