Perlroth and the History of Microsoft Vulns

While reading “This Is How They Tell Me The World Ends“, early in the book I ran across a single line that made me double-take. I took a note to revisit it after a complete read since it was so early in the book. For those familiar with my blogs, I tend to write about vulnerability statistics and this one fits the bill. This blog is a bit different in that a single line provoked it, but re-reading the section for clarity still takes me down other rabbit holes. Ultimately, this is a good example of how one sentence can have a lot of interpretations depending on how you read it, what statistics you use, and the deeper context that the sentence is embedded in.

Below are some additional lines that offer the full context of the line in question:

The first shift in the wind was Bill Gates’s memo. In 2002, after a series of escalating attacks on Microsoft’s software and customers, Gates declared that security would become Microsoft’s top priority. (P35)

On January 15, 2002, just as iDefense was getting going, Gates fired off the cybersecurity equivalent of the “shot heard round the world.” From that point on, Gates said, security would be the company’s “highest priority”. (P37)

What the security community wrote off as a stunt became an economic force. Microsoft froze new products and dredged up existing ones, ripping its software apart and training nearly ten thousand developers to build it back up again with security principles at the core. For the first time, procedures were put in place to embrace the hacking community. Microsoft set up a customer service line for hackers, tracked each caller and even logged their psychological quirks, noting which hackers needed to be handled with kid gloves, which had rock-star status, and which were just trolls. It instituted a regular system for rolling out software patches, releasing them on the second Tuesday of every month – “Patch Tuesday” – and offered customers free security tools.

And while plenty of zero-day bugs were still discovered, the frequency and severity of Microsoft bugs started to dry up. (P38)

For those not familiar with the memo, titled “Trustworthy computing”, it can be read in full here. The question that came to mind was, did the frequency and/or severity of Microsoft bugs go down? Before we answer, remember that this is fairly broad since it encompasses all Microsoft bugs, not specific to Windows or Internet Explorer for example. It is also important to note that Perlroth says they started to dry up, but not for how long. On the back of the Gates memo it would be expected that some researchers may change their attitude toward disclosure if they could sell the exploits for a higher payout. Finally, all of what follows is kind of moot because Perlroth’s statement is made on the back of a known unknown. That is, we know there are zero-day bugs discovered, but by nature, they are only zero-days if not publicly known.

Perlroth says two more lines that essentially tip her hand, I believe, demonstrating that her comments were made in hindsight based on extrapolation, not fact. First, she qualifies that she joined the security beat eight years after this memo. Second, she says:

The ripple effect of Gates’s [sic] memo could be seen far from Redmond, in underground dark web forums and in hotel rooms at the big security conferences.

The dark web barely existed in 2002. Tor was not released until September of that year, so the first hints of dark web sites would only have been emerging; in fact, Gates' memo was published eight months before Tor was released. It's hard to imagine that there were already established, well-known forums to trade or sell vulnerabilities that would have shown a noticeable change at that point. With all of that in mind, I think the rest of this rabbit hole is academic at best, but it illustrates why we must be careful when describing vulnerabilities in such a manner.

The Stats

All Microsoft Vulns, 2001 – 2005, per VulnDB

There was a significant drop in volume from 2002 to 2003, so it is easy to make this assessment if you look at a very limited picture. But by 2004 it was back up quite a bit. Given what I outlined above about her tenure on the security beat, along with the questionable statements around the dark web and statements made on the back of unknown factors, the question here is how she arrived at this conclusion. Further, the severity did not drop from 2002 to 2004 either.

The stats above are from VulnDB, with the advantage of hindsight and a comprehensive collection of disclosures from that period. If someone made such a conclusion based on disclosures at the time, it likely would have been based on CVE. Looking at only disclosures with a CVE ID, the picture does not change for either disclosure trends or severity.

Microsoft Vulns w/ CVE ID, 2001 – 2005, per VulnDB
Microsoft Windows Vulns, 2001 – 2005, per VulnDB
Microsoft Internet Explorer (MSIE), 2001 – 2005, per VulnDB

We see a dip in disclosures from 2002 to 2003 for both Windows and MSIE, but both rebound to varying degrees in 2004. Then Windows shoots up higher in 2005 while MSIE drops in 2005, which could just have been the browser war with Firefox and Opera heating up. That leads us to one more section from page 38.

Finally, did the bugs dry up, or did their perceived value go higher, so that people were less likely to disclose them or to sell them at lower prices? For a book that dives deep into the value of 0-days, I figured this would be the hot take. Oh wait, it is; right after saying the frequency and severity dried up, Perlroth says:

Then, in the shadows, a growing number of defense contractors, intelligence analysts, and cybercriminals started doling out higher rewards to hackers who promised to keep their bug discoveries secret. In these subterranean circles, people started assigning a far higher value to Microsoft zero-day exploits than what iDefense was paying. 

So the fun part is to go back to the charts and speculate. If the premise is that the Gates memo caused bugs to dry up because they were perceived as more valuable, as outlined shortly after by Perlroth, why did the disclosures rebound in 2004? Did Microsoft suddenly stop caring about security a year later? Was 2003 just an abnormal, albeit coincidental, year for disclosures? Were there other factors at play?

There are a lot of questions that neither Perlroth nor the vulnerability statistics answer.

That Vulnerability is “Theoretical”!

[This was originally published on the OSVDB blog.]

A few days ago, while writing a draft of a different blog, I made a reference to an old story and said "we're well aware of the pitfalls around calling a vulnerability 'theoretical'"! I wanted to link off to what I was referencing: a case where security researchers found a vulnerability in a big vendor's products and were told it was "theoretical". Of course, they in turn provided a working exploit for the vulnerability, proving it wasn't just theory. Thus, around 1995, the researchers took on the slogan "L0pht, Making the theoretical practical since 1992." After digging, I couldn't find a concise story of the details around that event, so I took to Twitter. Over the course of a couple hours, with input from many people, including some involved with the story, I collected the details. I told those in the conversation that if I had the information I would blog about it to better preserve that slice of history.

My bad memory had me believing the vulnerability was in Sendmail, and that Eric Allman said it to Mudge. Royce Williams dug up the Sendmail exploit I was thinking of, and the header text from Mudge suggested I was on the right track.

*NIX Sendmail (8.7.5) – Buffer Overflow – Newest sendmail exploit
:
# Hrm… and Eric Allman told me to my face that there were *no* buffer
# overflows in 8.7.5 — .mudge
# This works on systems that have the chpass program runable by
# users. Tested on FreeBSD, though the vulnerability exists in all
# Sendmail8.7.5. Granted you need to be able to change your gecos field 😉
#
# The problem is in buildfnam() which lives in util.c – it treats
# the static allocated array nbuf[MAXSIZE+1], from recipient.c, in
# an unbounded fashion.

Next, Royce reminded me that I had actually referenced that vulnerability in a prior blog post from 2006… oops! Mark Dowd was the first to challenge me on that, saying he believed it was related to a vulnerability in Microsoft's products involving RAS or CHAP, and he was right about the vendor (which specific vuln it was is still not confirmed). Next, Space Rogue, an original L0pht member, chimed in saying he thought it referred to Microsoft and the NT/Lanman vulnerability, and that the 1992 part of the slogan simply referred to when the L0pht was formed. DKP further confirmed this by digging into the wonderful Internet Archive, finding the slogan and quote on the L0pht's page.

“That vulnerability is completely theoretical.” — Microsoft
L0pht, Making the theoretical practical since 1992.

From here, Weld Pond and Mudge, original members of the L0pht, joined the conversation. First, Weld said it might be a RAS or CHAP vulnerability, though he wasn't certain, and that the slogan came from a Microsoft spokesperson who was quoted saying "that vulnerability is theoretical", which resulted in the exploit being written to prove otherwise. Weld further confirmed what Space Rogue had said, that the slogan "was coined in 95 or so but made retrospective to the founding of the L0pht."

DKP continued digging and found the quote in Bruce Schneier's "Secrets and Lies", and pointed out Schneier worked with Mudge on an MS-CHAPv2 vulnerability. The paper on that vuln is from 1999 though, suggesting it wasn't the "theoretical" vuln.

With that, the “theoretical” vulnerability is mostly uncovered. It would be great if anyone could confirm exactly which vulnerability it was that prompted the response from Microsoft. If anyone else recalls details about this, please share!

In the meantime, we also get to wonder about the Sendmail story, where this saga started, which apparently is also interesting. Space Rogue mentioned there was a separate story around that, but couldn't remember details. Mudge jumped in confirming it was a Sendmail 8.6.9 exploit, "in response to in-person 'discussion' w/ [Eric] Allman at a Usenix Security in Texas. Witnesses, but no writeup." Mudge added that "a very similar 'quote' happened in person with Allman quite some time prior to the MS issue. It wasn't a throw away quote. I/we lived it 🙂"

August 14, 2017 Update:

Mudge provided more insight into another issue, also a 'theoretical' risk. During the Twitter thread, there were questions about L0phtcrack. Mudge says: "In a nutshell – a MSFT PR article in NT magazine said l0phtcrack was a theoretical risk but not an actual one. I responded with LC rant. Soon after we coined the phrase to describe the PoCs I felt it was crucial to write and release". DKP dug up a Wired article that better illustrates how vendors can dismiss security researchers.

The Duality of Expertise: Microsoft

[This was originally published on the OSVDB blog.]

The notion of expertise in any field is fascinating. It crosses so many aspects of humans and our perception. For example, two people in the same discipline, each with the highest honors academia can grant, can still have very different expertise within that field. Society and science have advanced so that we don't just have "science" experts; medical doctors can specialize to extreme degrees. Within Information Security, we see the same thing, where there are experts in penetration testing, malware analysis, reverse engineering, security administration, and more.

In the context of a software company, especially one that does not specifically specialize in security (and, it is trivial to argue, was late to the security game), you cannot shoehorn them into any specific discipline or expertise. We can all agree there is an incredible level of expertise across a variety of disciplines within Microsoft. So when Microsoft releases yet another report that speaks to vulnerability disclosures, the only thing I can think of is duality; especially in the context of a report that mixes expertise they are uniquely qualified to speak on with a topic that pre-dates Microsoft and that, to some degree, they certainly aren't qualified to speak on.

A Tweet from Carsten Eiram pointed me to the latest report, and brought up the obvious fact that it seemed to be way off when it comes to vulnerability disclosures.

[Image: Carsten Eiram's tweet questioning the vulnerability disclosure numbers in the MS SIR]

The “MS SIR” he refers to is the Microsoft Security Intelligence Report, Volume 21 which covers “January through June, 2016” (direct PDF download).

It’s always amusing to me that you get legal disclaimers in such analysis papers before you even get a summary of the paper:

[Image: the legal disclaimer from the Microsoft Security Intelligence Report]

Basically, the takeaway is that they don't stand behind their data. Honestly, the fact I am blogging about this suggests that is a good move and that they should not. The next thing that is fascinating is that it was written by 33 authors and 14 contributors. Since you don't know which of them worked on the vulnerability section, it means we get to hold them all accountable. Either they can break it down by author and section, or they all signed off on the entire paper. Sorry, the joys of academic papers!

After the legal disclaimers, you start to get the analysis disclaimers, which are more telling to me. Companies love to blindly throw legal disclaimers on anything and everything (e.g. I bet you still get legal disclaimers, which carry no merit, in the footer of emails you receive). When they start to explain their results via disclaimers while not actually including their methodology, anyone reading the report should be concerned. From their "About this report" section:

This volume of the Microsoft Security Intelligence Report focuses on the first and second quarters of 2016, with trend data for the last several quarters presented on a quarterly basis. Because vulnerability disclosures can be highly inconsistent from quarter to quarter and often occur disproportionately at certain times of the year, statistics about vulnerability disclosures are presented on a half-yearly basis.

This is a fairly specific statement that speaks as if it is fact that vulnerability trends vary by quarter (they do!), but it potentially ignores the fact that they can also vary by half-year or year. We have seen that impact not only a year, but the comparison to every year prior (e.g. Will Dormann in 2014 and his Tapioca project). Picking a 'quarter' or 'half-year' does not demonstrate experience in aggregating vulnerabilities; it is a rather arbitrary and short time-frame. Focusing on a quarter can easily ignore some of the biases that impact vulnerability aggregation, as outlined by Steve Christey and me in our talk titled "Buying Into the Bias: Why Vulnerability Statistics Suck" (PPT).
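To make the point about reporting windows concrete, here is a minimal sketch (with made-up disclosure dates, not real VulnDB or CVE data) showing how the same set of disclosures can tell three different stories depending on whether you bucket by quarter, half-year, or year:

from collections import Counter
from datetime import date

# Hypothetical disclosure dates: steady activity in the first half of the year,
# plus one large disclosure event (think a single fuzzing project dump) in October.
disclosures = [date(2014, m, 15) for m in range(1, 7)] * 10 + [date(2014, 10, 20)] * 300

def bucket(d, scheme):
    if scheme == "quarter":
        return (d.year, "Q%d" % ((d.month - 1) // 3 + 1))
    if scheme == "half":
        return (d.year, "1H" if d.month <= 6 else "2H")
    return (d.year,)

for scheme in ("quarter", "half", "year"):
    counts = Counter(bucket(d, scheme) for d in disclosures)
    print(scheme, dict(sorted(counts.items())))

# quarter: the event shows up as an obvious one-quarter spike
# half:    all of 2H14 looks like a record half-year
# year:    the event disappears into a single annual total

Half-year buckets do smooth quarterly noise, but a single research project like Tapioca can just as easily dominate a half-year or a whole year.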

Jumping down to the “Ten years of exploits: A long-term study of exploitation of vulnerabilities in Microsoft software” section, Microsoft states:

However, despite the increasing number of disclosures, the number of remote code execution (RCE) and elevation of privilege (EOP) vulnerabilities in Microsoft software has declined significantly.

Doing a title search of Risk Based Security’s VulnDB for “microsoft windows local privilege escalation” tells a potentially different story. While 2015 is technically lower than 2011 and 2013, it is significantly higher than 2012 and 2014. I can’t say for certain why these dips occur, but they are very interesting.

[Chart: Microsoft Windows local privilege escalation disclosures, 2011 – 2015, per VulnDB]

Thousands of vulnerabilities are publicly disclosed across the industry every year. The 4,512 vulnerabilities disclosed during the second half of 2014 (2H14) is the largest number of vulnerabilities disclosed in any half-year period since the Common Vulnerabilities and Exposures system was launched in 1999.

This quote from the report explicitly shows serious bias in their source data, and further shows that they do not consider their wording. It would be more accurate to say "The 4,512 vulnerabilities aggregated by MITRE during the second half of 2014…" The simple fact is, a lot more than 4,512 vulnerabilities were disclosed during that time. VulnDB shows 8,836 vulnerabilities aggregated in that same period, which is itself less than the 9,016 vulnerabilities aggregated in the second half of 2015. Microsoft also doesn't mention that the second half of 2014 is when the aforementioned Will Dormann released the results of his 'Tapioca' project, totaling over 20,500 vulnerabilities, only 1,384 of which received CVE IDs. Why? Because CVE basically said "it isn't worth it", and they weren't the only vulnerability database to do so. With all of this in mind, Microsoft's comment about the second half of 2014 becomes a lot more complicated.
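To put rough numbers on that, here is a quick back-of-the-envelope sketch using only the figures quoted above (nothing pulled from either database beyond what is already in this post):

# Figures quoted above
cve_2h14      = 4512    # vulnerabilities aggregated by MITRE/CVE in 2H14
vulndb_2h14   = 8836    # vulnerabilities aggregated by VulnDB in 2H14
vulndb_2h15   = 9016    # vulnerabilities aggregated by VulnDB in 2H15
tapioca_total = 20500   # approximate Tapioca findings
tapioca_cve   = 1384    # Tapioca findings that received CVE IDs

print(f"CVE coverage of VulnDB, 2H14: {cve_2h14 / vulndb_2h14:.1%}")          # ~51.1%
print(f"Tapioca findings with a CVE ID: {tapioca_cve / tapioca_total:.1%}")   # ~6.8%
print(f"2H15 vs 2H14 change per VulnDB: {(vulndb_2h15 - vulndb_2h14) / vulndb_2h14:+.1%}")  # ~+2.0%

So the "record half-year" claim only holds within the CVE view of the world, which captured roughly half of what was aggregated elsewhere and only a sliver of the Tapioca findings.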

The information in this section is compiled from vulnerability disclosure data that is published in the National Vulnerability Database (NVD), the US government’s repository of standards-based vulnerability management data at nvd.nist.gov. The NVD represents all disclosures that have a published CVE (Common Vulnerabilities and Exposures) identifier.

This is a curious statement, since CVE is run by MITRE under a contract from the Department of Homeland Security (DHS), making it a “US government repository” too. More importantly, NVD is essentially a clone of CVE that just wraps additional meta-data around each entry (e.g. CPE, CWE, and CVSS scoring). This also reminds us that they opted to use a limited data set, one that is well known in the Information Security field as being woefully incomplete. So even a company as large as Microsoft, with expected expertise in vulnerabilities, opts to use a sub-par data set which drastically influences statistics.

Figure 23. Remote code executable (RCE) and elevation of privilege (EOP) vulnerability disclosures in Microsoft software known to be exploited before the corresponding security update release or within 30 days afterward, 2006–2015

The explanation for Figure 23 is problematic in several ways. Does it cherry pick RCE and EOP while ignoring context-dependent (aka user-assisted) issues? Or does this represent all Microsoft vulnerabilities? This is important to ask as most web browser exploits are considered to be context-dependent and coveted by the bad guys. This could be Microsoft conveniently leaving out a subset of vulnerabilities that would make the stats look worse. Next, looking at 2015 as an example from their chart, they say 18 vulnerabilities were exploited and 397 were not. Of the 560 Microsoft vulnerabilities aggregated by VulnDB in 2015, 48 have a known public exploit. Rather than check each one to determine the time from disclosure to exploit publication, I’ll ask a more important question. What is the provenance of Microsoft’s exploitation data? That isn’t something CVE or NVD track.

Figure 25 illustrates the number of vulnerability disclosures across the software industry for each half-year period since 2H13

Once again, Microsoft fails to use the correct wording. This is not the number of vulnerability disclosures, this is the number of disclosures aggregated by MITRE/CVE. Here is their chart from the report:

[Chart: industry-wide vulnerability disclosures by half-year, as presented in the Microsoft SIR]

Under the chart they claim:

Vulnerability disclosures across the industry decreased 9.8 percent between 2H15 and 1H16, to just above 3,000.

As mentioned earlier, since Microsoft is using a sub-par data set, I feel it is important to see what this chart would look like using more complete data. More importantly, watch how it invalidates their claim about an industry decrease of 9.8 percent between 2H15 and 1H16, since RBS shows the drop is closer to 18%.

[Chart: vulnerability disclosures by half-year, VulnDB vs. NVD]

I have blogged about vulnerability statistics, focusing on these types of reports, for quite a while now. And yet, every year we see the exact same mistakes made by just about every company publishing statistics on vulnerabilities. Remember, unless they are aggregating vulnerabilities every day, they lack a serious understanding of the data they work with.

March 19, 2017 Update – Carsten Eiram (@carsteneiram) pointed out that the local privilege escalation numbers actually follow an expected pattern given knowledge of researcher activity and trends:

In 2011, Tarjei Mandt while he was at Norman found a metric ton of LPEs in win32k.sys as part of a project.

In 2013, it was Mateusz Jurczyk's turn to also hit win32k.sys by focusing on a bug-class he dubbed "double-fetch" (he's currently starting that project up again to see if, with tweaks, he can find more vulns).

And in 2015, Nils Sommer (reported via Google P0) was hitting win32k.sys again along with a few other drivers and churned out a respectable amount of LPE vulnerabilities.

So 2012 and 2014 represent “standard” years while 2011, 2013, and 2015 had specific high-profile researchers focus on Windows LPE flaws via various fuzzing projects.

So the explanation is the same explanation we almost always see: vulnerability disclosures and statistics are incredibly researcher-driven, based on which product, component, or vulnerability type a researcher or group of researchers decides to focus on.

An Analysis of Google’s Project Zero and Alleged Vendor Bias

[This was originally published on RiskBasedSecurity.com.]


Google announced a new initiative called Project Zero. The basic premise of the project was that Google invests heavily in their own security and had, for quite some time, also been tasking their researchers with part-time work on improving the security of other high-profile products.

Project Zero is Google's investment in a well-staffed team that they hoped would get the ball rolling, as they felt that people "should be able to use the web without fear that a criminal or state-sponsored actor is exploiting software bugs to infect your computer, steal secrets or monitor your communications." Recently, questions have been raised wondering if Google's Project Zero is really unfair towards, and targeting, Microsoft.

From reading several news articles as well as the blog from Microsoft’s Chris Betz, who runs their Product Security Response, you would definitely think that this is the case. Since Project Zero was announced, we at Risk Based Security have been following it closely as we were interested in how it might influence vulnerability research. More specifically, we were curious if the project would help improve the code that many rely on such as third-party libraries and if it might ultimately impact the Bug Bounty economy.

With all of the recent conversations about Project Zero, we decided to publish some of our own analysis, attempting to use data to see if there is a bias towards targeting Microsoft, or any other company. One of the great things about Google and Project Zero is that they are generally pretty transparent with their research and try to clearly communicate their intentions about disclosure as well. But have they really been clear with their disclosure policy?

We know there is a company-wide 60-day intention for security disclosures that Google published back in July 2010. But what about the policy for Project Zero? We aren't aware of a clear policy that has been published for the project or any sort of blog post on the requirements (nothing on their main blog, or the wiki in their tracker). A Wired article from last year mentions that Google staff stated:

“… they’ll alert the company responsible for a fix and give it between 60 and 90 days to issue a patch before publicly revealing the flaw on the Google Project Zero blog. In cases where the bug is being actively exploited by hackers, Google says it will move much faster, pressuring the vulnerable software’s creator to fix the problem or find a workaround in as little as seven days.”

Based on reading the individual tickets, you can see that they have been communicating the 90-day policy since ID 44, opened on Jul 9, 2014. Since ID 89 they have been very consistent about telling all vendors about the 90-day deadline and started automatically including the boilerplate text:

“This bug is subject to a 90 day disclosure deadline. If 90 days elapse without a broadly available patch, then the bug report will automatically become visible to the public.”
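Mechanically, that boilerplate describes a very simple deadline; a minimal sketch of the logic (our own illustration, not Project Zero's actual tooling):

from datetime import date, timedelta

DEADLINE = timedelta(days=90)

def auto_publish_date(reported):
    """Date a report becomes public if no broadly available patch has shipped."""
    return reported + DEADLINE

def disclosed_without_fix(reported, patched=None):
    """True if the report goes public before a patch is broadly available."""
    return patched is None or patched > auto_publish_date(reported)

# A report filed September 30, 2014 would auto-publish on December 29, 2014.
print(auto_publish_date(date(2014, 9, 30)))                         # 2014-12-29
print(disclosed_without_fix(date(2014, 9, 30), date(2015, 1, 13)))  # True

In practice the interesting questions are all in the exceptions (actively exploited bugs get as little as seven days, per the Wired quote above), which is exactly why a formally published policy would help.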

In the initial announcement of Project Zero, there was no mention of a hard line of 90 days for a disclosure timeline, but it does mention they want to get things fixed in a “reasonable time”:

“Once the bug report becomes public (typically once a patch is available), you’ll be able to monitor vendor time-to-fix performance, see any discussion about exploitability, and view historical exploits and crash traces. We also commit to sending bug reports to vendors in as close to real-time as possible, and to working with them to get fixes to users in a reasonable time.“

Chris posted a blog update in November 2014, where he highlighted many bugs that were fixed well within Project Zero's 90-day disclosure deadline, representing a solid security response by the teams responsible for Adobe Flash, Microsoft Office, Internet Explorer, and Microsoft Windows. A more important point to mention is the following:

We calibrate our deadline to balance two important factors: the right of end users to know about a risk versus giving vendors a fair chance to cleanly fix a bug under a reasonable embargo. Given the ratio of deadline hits to misses, we think we’ve calibrated correctly for the current state of the industry but we do hope to tighten the deadline in the future.”

It seems that they could benefit from posting a formal policy at some point in the future to clarify their disclosure process, but regardless, they have been making vendors well aware of their expected timelines. What we can say with certainty is that publishing details about their research allows us to get better insight into their work and their focus, as well as a limited view of their interaction with vendors.

Show Me The Data

Project Zero has a tracker where they show their work, and in some cases they publish very detailed information that lets us see their process. As of January 25, they have assigned 206 IDs in their tracker. One aspect that was not clear is why so many IDs, especially earlier assignments, were restricted, meaning they were not open for viewing by the public. We speculated that these restricted IDs were ones that Google was not sharing for some specific reason, though they could have just been IDs that were abandoned, or ones still being resolved with the affected vendors.

Considering they still have “Issue 1: This is a test” public, it didn’t seem that they were cleaning up older entries that are not relevant. We decided to reach out to Chris Evans who runs the Project Zero team asking for clarification on why so many entries were restricted, and he responded very quickly offering some great insight on the situation. Chris shared that most of the early IDs that are shown as restricted were due to a bunch of spam reports that were received when the project was officially launched.  He mentioned that there are a few other causes of early bugs not being made public.

  • The first few IDs were not filed as well as the project team would have liked, so they re-filed them under new IDs and marked the earlier issues as invalid.
  • A couple more bugs were marked as duplicates because they unfortunately filed the same report twice.
  • There were a few more issues that he shared were valid, but just needed to be made public. In fact, our mail prompted him to open up one (ID #125), which he described as "a very interesting Flash bug that has been fixed several days ago". We thank Chris for opening it up tonight, even though it made us redo our statistics!
  • And finally, as we expected, every ID from 144 onwards is currently less than 90 days old, which aligns with their stated 90-day disclosure timeline.

When looking at the 206 Project Zero IDs, the breakdown of tickets is as follows:

ID Status          # of Vulnerabilities
Total Fixed        83
Total WontFix      4
Total Restricted   73
Total Invalid      26

The "WontFix" designation means that the vendor does not consider the reported issue to be a security problem, or that it does not cross privilege boundaries. In terms of the severity and impact to organizations of the vulnerabilities that have been discovered, we took a quick look at the VulnDB-assigned CVSSv2 score for each ID.

[Chart: VulnDB-assigned CVSSv2 scores for Project Zero vulnerabilities]

For those that don’t like seeing just a CVSSv2 average (as there are currently still a lot of issues with this version, and a new version hopefully around the corner), we pulled more data from VulnDB to help us understand Exploit availability.

Note that the high percentage of Proof of Concept (PoC) is largely due to Google researchers developing them to demonstrate the vulnerability, which typically makes it easier for the vendor to diagnose and understand the problem.

Microsoft

As mentioned, there are theories being voiced that Google is specifically targeting Microsoft and not practicing Coordinated Vulnerability Disclosure, putting customers at risk. A post from ErrataRob reminds us that Microsoft seems to be enjoying a double standard with their views on vulnerability disclosure. Additionally, a response to Betz's article published on the OSVDB blog noted that Microsoft's latest plea for CVD is as much propaganda as sincere. It is important to understand where this perception comes from.

Articles like the one posted by PC World that talk about a "third unpatched Windows zero-day" being released within a month are polarizing in the context of the vulnerability disclosure debate. They also make it sound like this is a straight rivalry, with one party intent on hurting the other. From what we can see by analyzing the data, this doesn't appear to be the case. The disclosures are a byproduct of Google's stated policy of disclosing after 90 days, fix or no fix. The use of "0day" denotes that Google published full details of the vulnerability (after ~90 days) without a vendor-supplied fix available.

Statistic                  Metric
Total 0Day                 19 Vulnerabilities
Average Disclosure Time    59.68 Days

It is critical to remember that not all vulnerabilities are created equal, even within the same organization. While Microsoft may be able to patch a particular product such as Office or Internet Explorer quickly, they may not be able to do the same for Windows depending on the number of platforms and versions affected. The life cycle of a privately disclosed vulnerability varies wildly, as it has to be confirmed, patched, and then heavily tested.

Organizations want, and really need, Microsoft to test those patches adequately before releasing them, or you run into updates that can do more harm than good. This supposed "vulnerability disclosure war", as some are calling it, between two companies also mixes one Google employee's spare-time activity with his day job. No matter how many times someone may say that "their thoughts and tweets do not represent their employers", it can be a very fine line, specifically when you are a researcher working for a company with a strong focus on discovering vulnerabilities in other companies' products. It can lead to questions as to what constitutes the employer's time versus their own, and whether ownership is decided based on what is more convenient. This is especially true since that researcher's day job and hobby are heavily intertwined.

In doing further analysis on the Google Project Zero tracker, we find that they have disclosed 20 vulnerabilities in Microsoft products as of this blog. Of those 20, Microsoft labeled 4 as 'WontFix' (they don't consider them to be a security issue, and Google actually agrees with most), 13 were fixed within 90 days, and 3 were released without a fix due to the 90-day deadline being exceeded (the dreaded 0day!). On the surface, considering ~81% have been handled in a coordinated fashion, it actually seems Microsoft is doing pretty well. Not only have they gone through the process with Google many times already, they are obviously well aware of the 90-day disclosure deadline.
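For reference, the arithmetic behind that figure (a quick sketch; the ~81% appears to come from measuring fixes against the 16 reports that were actual security issues):

fixed, wontfix, zero_day = 13, 4, 3
total = fixed + wontfix + zero_day        # 20 Microsoft entries in the tracker

security_issues = total - wontfix         # 16 confirmed security issues
print(f"Fixed within the deadline: {fixed / security_issues:.1%}")   # 81.2%
print(f"Non-0day outcomes overall: {(fixed + wontfix) / total:.1%}") # 85.0%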

While it is still debatable whether Google should provide extra time for companies when requested, it has become clear that this is not an option in the eyes of Project Zero. Since Microsoft now knows that Google is basically not flexible on that deadline, the onus is solely placed on them moving forward. If Microsoft does not want to have a 0day disclosed by Project Zero, they are going to have to determine how to improve their response timeline and potentially dedicate more resources to a given vulnerability than they did for the three that were not fixed in time. It would be hard for anyone at Microsoft to argue that producing secure products isn't critical, or that they are short on resources to correct known vulnerabilities that may impact their customers. But the real question that begs to be asked is: is Microsoft all alone in this supposed 'war' and being unfairly picked on by Google? Again, let's take a quick look at the data to help us answer the question:

Vendor            # of Vulnerabilities
Apple             39
Adobe             36
Microsoft         20
Linux Kernel      7
LibreSSL          2
glibc             1
PHP               1
Red Hat           1

The data suggests that there is a simple answer: of course Microsoft is not being solely targeted. In fact, just a few days ago, Project Zero hit the 90-day deadline on a trio of Apple Mac OS X issues that went public without a fix. Ars Technica covered it in the same style as other articles covered the "0day" dropping on Microsoft. We found it very interesting that there is no mention in this article that these aren't the first Google Project Zero "0day" dropped against Apple. In fact, between July 4, 2014 and September 30, 2014, Google released eleven vulnerabilities in Apple products without a patch. So Apple and Microsoft have each had "0day" issues due to missing the timeline.

We are also able to see that Project Zero has sent a substantial number of vulnerability reports to the Adobe Flash team, and the team has hit all the deadlines, which is very impressive. That was the initial reason that prompted Brian Martin, our Director of Vulnerability Intelligence, to dig even further than we had previously into all of the public Project Zero vulnerabilities. The analysis focused on the vendor, vendor notification date, disclosure date (when the vulnerability was disclosed, which was by Google for some, and by the vendor for others), CVSSv2 score, and exploit availability.

Conclusion 

From our analysis of the available information, we believe it removes some of the emotional arguments and allows one to make an educated assessment, which appears to support that Google is not singling out Microsoft with their Project Zero efforts. Furthermore, it shows that Apple has received the most attention, with Adobe close behind, both of whom have received considerably more attention than Microsoft. At this point, Microsoft just appears to be the only vendor that sometimes fails to meet the 90-day disclosure deadline and then publicly criticizes Google for their work (or, as some may say, for providing free security work to improve their products and ultimately ensure their customers are better protected).

We sincerely hope that cooler heads will prevail as we all work together to better secure our organizations and protect consumer confidential information. Google's work on Project Zero is impressive, and clearly making a positive impact on the security of high-profile products. We hope that companies will continue to work with Google and understand the importance of fixing issues as soon as possible to reduce the time of exposure customers face. But as with all policies, there are times when some level of flexibility does make sense when the purpose of the project is to improve security.

If you look at companies that Google has recently acquired, such as Dropcam, even their disclosure policy states, “We ask that you give us a reasonable amount of time to respond to your report before making any information public”. Of course, “reasonable amount of time” has been the cornerstone of the coordinated vulnerability disclosure debate for decades and doesn’t appear to be going away anytime soon.

[Update: Since starting a dialogue with Chris Evans, 24 issues have been published in the last 15 hours. Some of the stats quoted in this blog are already considerably different (e.g. Time to disclose). Further, we are now tracking additional information with each entry in order to generate more metrics.]

Microsoft’s latest plea for CVD is as much propaganda as sincere.

[This was originally published on the OSVDB blog.]

Earlier today, Chris Betz, senior director of the Microsoft Security Response Center (MSRC), posted a blog calling for “better coordinated vulnerability disclosure“.

Before I begin a rebuttal of sorts, let me be absolutely clear. The entire OSVDB team is very impressed with Microsoft’s transition over the last decade as far as security response goes. The MSRC has evolved and matured greatly, which is a benefit to both Microsoft and their customers world-wide. This post is not meant to undermine their efforts at large, rather to point out that since day one, propaganda is still a valuable tool for the company. I will preface this with a reminder that this is not a new issue. I have personally blogged about this as far back as 2001, after Scott Culp (Microsoft at the time) wrote a polarizing piece about “information anarchy” that centered around disclosure issues. At some point Microsoft realized this was a bad position to take and that it didn’t endear them to the researchers providing free vulnerability information to them. Despite that, it took almost ten years for Microsoft to drop the term “responsible” disclosure (also biased against researchers) in favor of “coordinated” disclosure. Again, Microsoft has done a phenomenal job advancing their security program, especially the last three to five years. But… it is on the back of a confrontational policy toward researchers.

Reading yesterday's blog, there are bits and pieces that stand out to me for various reasons. It is easy to gloss over many of these if you aren't a masochist who spends most of your waking time buried in vulnerability aggregation and related topics.

In terms of the software industry at large and each player’s responsibility, we believe in Coordinated Vulnerability Disclosure (CVD).

Not sure I have seen “CVD” as a formal initialism until now, which is interesting. After trying to brand “information anarchy” and pushing the “responsible disclosure” term, good to see you embrace a better term.

Ultimately, vulnerability collaboration between researchers and vendors is about limiting the field of opportunity so customers and their data are better protected against cyberattacks.

And this line, early on in the blog, demonstrates you do not live in the real world of vulnerability disclosure. Microsoft has enjoyed their 'ivory tower', so to speak. Many researchers find and disclose vulnerabilities for entirely selfish reasons (e.g. bug bounties), which you basically do not offer. Yes, you have a bounty program, but it is very different from most and does not reward the vast majority of vulnerabilities reported to you. Microsoft has done well in creating a culture of "report vulnerabilities to us for free for the honor of being mentioned in one of our advisories". And I get that! Being listed as a creditee in a Microsoft advisory is advertising in itself for a researcher's talent. However… you are talking about a minority of researchers in the greater picture who chase that honor.

Those in favor of full, public disclosure believe that this method pushes software vendors to fix vulnerabilities more quickly and makes customers develop and take actions to protect themselves. We disagree.

Oh sorry, let me qualify, your black and white tower. This absolutely does work for some vendors, especially those who have a poor history in dealing with vulnerability reports. You may not be one of them for the last 10 years, but you once were. Back in the late ’90s, Microsoft had a reputation for being horrible when dealing with researchers. No vulnerability disclosure policy, no bug bounty (even five years after Netscape had implemented one), and no standard process for receiving and addressing reports. Yes, you have a formal and mature process now, but many of us in the industry remember your beginnings.

It is necessary to fully assess the potential vulnerability, design and evaluate against the broader threat landscape, and issue a “fix” before it is disclosed to the public, including those who would use the vulnerability to orchestrate an attack.

This is a great point. But, let’s read on and offer some context using your own words…

Of the vulnerabilities privately disclosed through coordinated disclosure practices and fixed each year by all software vendors, we have found that almost none are exploited before a “fix” has been provided to customers, and even after a “fix” is made publicly available only a very small amount are ever exploited.

Wait, if only a very small amount of vulnerabilities are exploited after a fix, and ‘almost none’ are exploited before a fix… why do you care if it is coordinated? You essentially invalidate any argument for a researcher coordinating disclosure with you. Why do they care if you clearly state that coordination doesn’t matter, and that the vulnerability will “almost [never]” be exploited? You can’t have this both ways.

CVD philosophy and action is playing out today as one company – Google – has released information about a vulnerability in a Microsoft product, two days before our planned fix on our well known and coordinated Patch Tuesday cadence, despite our request that they avoid doing so.

And this is where you move from propaganda to an outright lie. The issue in question was disclosed on December 29, 2014. That is 15 days, not two days, before your January patch Tuesday. I’d love to hold my breath waiting for MSRC or Betz to explain this minor ’rounding error’ on dates, but I have a feeling I would come out on the losing side. Or is Microsoft simply not aware of public vulnerability disclosures and should perhaps invest in a solution for such vulnerability intelligence? Yes, blatant sales opportunity, but they are desperately begging for it given this statement. =)

[Update. Apparently Microsoft is unhappy over Issue 123 which was auto-published on January 11, as opposed to Issue 118 linked above auto-published on December 29. So they are correct on two days, but curious they aren’t complaining over 118 at the same time when both are local privilege escalation vulnerabilities.]
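For anyone who wants to check the day math, here is a small sketch that computes Patch Tuesday (the second Tuesday of the month, per Microsoft's cadence) and the gap from each auto-publish date (my own helper, nothing official):

from datetime import date, timedelta

def patch_tuesday(year, month):
    """Second Tuesday of the given month (weekday(): Monday=0, Tuesday=1)."""
    first = date(year, month, 1)
    first_tuesday = first + timedelta(days=(1 - first.weekday()) % 7)
    return first_tuesday + timedelta(days=7)

pt = patch_tuesday(2015, 1)
print(pt)                                  # 2015-01-13
print((pt - date(2014, 12, 29)).days)      # 15 -- Issue 118, auto-published December 29
print((pt - date(2015, 1, 11)).days)       # 2  -- Issue 123, auto-published January 11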

One could also argue that this is a local privilege escalation vulnerability, which requires a level of access to exploit that simply does not apply to a majority of Windows users. Betz goes on to say that software is complicated (it is), and that not every vulnerability is equal (also true), but he also glosses over the fact that Google is in the same boat they are. A little over four years ago, the Google security team posted a blog talking about "rebooting" responsible disclosure and said this:

As software engineers, we understand the pain of trying to fix, test and release a product rapidly; this especially applies to widely-deployed and complicated client software. Recognizing this, we put a lot of effort into keeping our release processes agile so that security fixes can be pushed out to users as quickly as possible.

To be fair, Google also did not publish a timeline of any sort with this disclosure. We don't know anything that happened after the September 30, 2014 report to Microsoft. Did you ask for more time, Google? Did Microsoft say it was being patched in January? If so, you look like total assholes, disclosure policy be damned. If they didn't mention January specifically and only asked for more time, maybe it was fair you kept to your schedule. One of the two parties should publish all of the correspondence now. What's the harm? The issue is public! Come on… someone show their cards, prove the other wrong. Back to Microsoft's blog…

What’s right for Google is not always right for customers.

This is absolutely true. But you forgot the important qualifier: what is right for Microsoft is not always right for customers.

For example, look at CVE-2010-3889 (heavily referenced) aka “Microsoft Windows on 32-bit win32k.sys Keyboard Layout Loading Local Privilege Escalation”. This is one of four vulnerabilities used by Stuxnet. Unfortunately, Microsoft has no clear answer if this is even patched, four years later. That CVE identifier doesn’t seem to exist in any Microsoft security advisory. Why not? Did you really let a vulnerability that may have aided an attack on an Iranian nuclear power plant go unpatched? Think of the ethics questions there! Or is this a case of the Microsoft security response process not being as mature as I give them credit, and this is a dupe of CVE-2010-2743? Why does it take a third-party four years to figure this out while writing a blog on a whim?

It is a zero sum game where all parties end up injured.

What does this even mean, other than propaganda? It is rarely, if ever, a case where “all parties” are injured. If a researcher discloses something to you and publishes prematurely, or publishes on their own without contacting you, usually that party is not ‘injured’ in doing so. That is simple fact.

Betz’ blog goes on to quote the Microsoft CVD policy which states:

Microsoft’s Approach to Coordinated Vulnerability Disclosure
Under the principle of Coordinated Vulnerability Disclosure, finders disclose newly discovered vulnerabilities in hardware, software, and services directly to the vendors of the affected product; to a national CERT or other coordinator who will report to the vendor privately; or to a private service that will likewise report to the vendor privately.

Perhaps you should qualify that statement, as US-CERT has a 45 day disclosure policy in most cases. That is half the time Google gave you. Quoting from the US-CERT policy:

Q: Will all vulnerabilities be disclosed within 45 days?
A: No. There may often be circumstances that will cause us to adjust our publication schedule. Threats that are especially serious or for which we have evidence of exploitation will likely cause us to shorten our release schedule. Threats that require “hard” changes (changes to standards, changes to core operating system components) will cause us to extend our publication schedule. We may not publish every vulnerability that is reported to us.

Note that it does not qualify this with "the vendor asks for more time". That is the United States government saying a vendor gets 45 days to patch, with rare exception. Oh wait, Mr. Betz, before you go quoting "changes to core operating system components", I will stop you there. Vulnerabilities in win32k.sys are not new. That 3.1 meg binary (on Windows 7) is the cause of a lot of grief for Windows users on its own. Given that history, you cannot say that changes to that file meet the US-CERT criteria.

Finally, this isn't the first pissing match between Google and Microsoft on vulnerability disclosure. While Microsoft has routinely played the victim card and Google certainly seems more aggressive on their disclosure policy, there is more than one bit of irony if one looks deeper. In random order…

Microsoft disclosed a vulnerability in Google Chrome, but didn’t do proper research. This vulnerability may be in WebKit as one person notes, meaning it could affect other browsers like Apple Safari. If it does, then Apple would get blindsided in this disclosure, and it would not be ‘coordinated’ or ‘responsible’, and would qualify as ‘information anarchy’ as Microsoft once called it. While we don’t know if it was ultimately in WebKit, we do know this vulnerability exists because Google Chrome was trying to work around issues with Microsoft software.

Look at MSVR11-011 and MSVR11-012 from 2011, where Microsoft “coordinated” two vulnerabilities with the FFmpeg team. To be sure, the FFmpeg team is outstanding at responding to and fixing vulnerabilities. However, in the real world, there are thousands of vendors that use FFmpeg as a library in their own products. While it may have been fixed in the base code, it can easily take somewhere between months and a decade for vendors to learn about and upgrade the library in their software. Only in a completely naive world could Microsoft call this “coordinated”.

Even better, let’s go back to the inaugural Microsoft Vulnerability Research (MSVR) advisory, MSVR11-001. This was a “Use-After-Free Object Lifetime Vulnerability in Chrome” that in reality was a vulnerability in WebKit, the underlying rendering library used by Chrome. The problem is that WebKit is used by a lot more than Chrome. So the first advisory from MSVR conveniently targets a Google product, but completely botches the “coordinated” disclosure, going to a single vendor using WebKit code, because the Microsoft researchers apparently didn’t diagnose the problem fully. No big deal right?

Wrong. I am sure Adobe, Samsung, Amazon, Tizen, Symbian, BlackBerry, Midori, and Android web browser users would disagree strongly. Do you really want to compare the number of users you blindsided with this “coordinated” disclosure to the ones you protected? Microsoft was a bigger jackass on this disclosure than Google ever was, plain and simple.

Finally, do I even need to go into the absolute mess that you call the "Advanced Notification Service" (ANS)? In case readers aren't aware, this is not a single program. This is several different programs with various names like MAPP and others. Just three days ago, you, Mr. Betz, announced that ANS was changing. This is after another program was changed drastically, multiple companies were kicked out of the MAPP program, and who knows what else happened. All of this was founded on Microsoft giving advance, and sometimes detailed, vulnerability information to questionable companies that may not be friendly parties.

The entire notion of “coordinated” disclosure went out the window as far as Microsoft goes, when they first implemented these programs. You specifically gave a very limited number of organizations details about vulnerabilities, before other customers had access. That, by definition, is not coordination. That is favoritism in the name of the bottom line, and speaks strongly against any intent you outline in yesterday’s blog post.

While Microsoft has taken great effort to improve their security process, it is disingenuous to call this anything but propaganda.

Advisories != Vulnerabilities, and How It Affects Statistics

[This was originally published on the OSVDB blog.]

I've written about the various problems with generating vulnerability statistics in the past. There are countless factors that contribute to, or skew, vulnerability stats. This is an ongoing problem for many reasons. First, important numbers are thrown around in the media and taken as gospel, creating varying degrees of bias in administrators and owners. Second, these stats are rarely explained to show how they were derived. In short, no one shows their work, potential bias, caveats, or other issues that a responsible security professional should include. A recent article has highlighted this problem again. To better show why vulnerability stats are messy, but important, I will show you how trivial it is to skew numbers simply by using different criteria, along with several pitfalls that must be factored into any set of stats you generate. The fun part is that the words used to describe the differences can be equally nebulous, and they are all valid if properly disclaimed!

I noticed a Tweet from SC Magazine about an article titled "The ghosts of Microsoft: Patch, present and future". The article is by Alex Horan, security strategist at CORE Security, and discusses Microsoft's vulnerabilities this year. Reading down, the first line of the second paragraph immediately struck me as being incorrect.

Based on my count, there were 83 vulnerabilities announced by Microsoft over the past year. This averages out to a little more than six per month, a reasonable number of patches (and reboots) to apply to your systems over the course of a year.

It is difficult to tell if Horan means "vulnerabilities" or "patches", as he appears to use the same word to mean both, when they are quite different. The use of '83' makes it very clear: Horan is referencing Microsoft advisories, not vulnerabilities. This is an important distinction, as a single advisory can contain multiple vulnerabilities.

A cursory look at the data in OSVDB showed there were closer to 170 vulnerabilities verified by Microsoft in 2012. Doing a search for entries with references to "MS12" (used in their advisory designation) returns 160 results. This is how it was easy to determine that the number Horan used was inaccurate, or his wording was. If you generate statistics based on advisories versus independent vulnerabilities, results will vary greatly. To add a third perspective, we must also consider the total number of disclosed vulnerabilities in Microsoft products. This means ones that did not correspond to a Microsoft advisory (e.g. perhaps a KB only), did not receive a CVE designation, or were missed completely by the company. On Twitter, Space Rogue (@spacerog) asked about severity breakdowns over the last few years. Since that would take considerable time to generate, I am going to stay focused on 2012 as it demonstrates the issues. Hopefully this will give him a few numbers though!

If we look at the 2012 Microsoft advisories versus 2012 Microsoft CVE versus 2012 Microsoft total vulnerabilities, and do a percentage breakdown by severity, you can see heavy bias. We will use the following breakdown of CVSS scores to determine severity: 9 – 10 = critical, 7 – 8.9 = important, 4 – 6.9 = moderate, 0 – 3.9 = low.

Base Source             Critical       Important     Moderate      Low
2012 Advisories (83)    35 (42.2%)     46 (55.4%)    2 (2.4%)      0 (0.0%)
2012 CVE (160)          100 (62.5%)    18 (11.3%)    39 (24.4%)    3 (1.8%)
2012 Total (176)        101 (57.4%)    19 (10.8%)    41 (23.3%)    15 (8.5%)
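The severity buckets used in the table above are trivial to reproduce from raw CVSSv2 scores; here is a minimal sketch of the mapping and percentage breakdown (illustrative code with made-up scores, not how the OSVDB numbers were generated):

from collections import Counter

def severity(cvss):
    if cvss >= 9.0:
        return "critical"
    if cvss >= 7.0:
        return "important"
    if cvss >= 4.0:
        return "moderate"
    return "low"

def breakdown(scores):
    counts = Counter(severity(s) for s in scores)
    return {sev: "%d (%.1f%%)" % (counts[sev], 100.0 * counts[sev] / len(scores))
            for sev in ("critical", "important", "moderate", "low")}

# Feed this the CVSS scores for the 83 advisories, the 160 CVE entries, or the
# 176 total entries to reproduce the rows above; these scores are placeholders.
print(breakdown([10.0, 9.3, 7.5, 6.8, 4.0, 2.1]))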

It isn't easy to see the big shifts in totals in a table of numbers, but it is important to establish the numbers involved when displaying any type of chart or visual representation. If we look at those three breakdowns using simple pie charts, the shifts become much more apparent:

The visual jump in critical vulnerabilities from the first to the second two charts is distinct. In addition, notice the jump from the first two charts to the third in regards to the low severity vulnerabilities and that they didn’t even make an appearance on the first chart. This is a simple example of how the “same” vulnerabilities can be represented, based on terminology and the source of data. If you want to get pedantic, there are additional considerations that must be factored into these vulnerabilities.

In no particular order, these are other points that should not only be considered, but disclaimed in any presentation of the data above. While it may seem minor, at least one of these points could further skew vulnerability counts and severity distribution.

  • MS12-080 only contains 1 CVE if you look at the immediate identifiers, but it contains 2 more CVEs in the fine print related to Oracle Outside In, which is used by the products listed in the advisory.
  • MS12-058 has no immediate CVEs at all! If you read the fine print, it actually covers 13 vulnerabilities. Again, these are vulnerabilities in Oracle Outside In, which is used in some Microsoft products.
  • Of the 176 Microsoft vulnerabilities in 2012, as tracked by OSVDB, 10 do not have CVE identifiers assigned.
  • OSVDB 83750 may or may not be a vulnerability, as it is based on a Microsoft KB with uncertain wording. Vague vulnerability disclosures can skew statistics.
  • Most of these CVSS scores are taken from the National Vulnerability Database (NVD). NVD outsources CVSS score generation to junior analysts from a large consulting firm. Just as we occasionally have mistakes in our CVSS scores, so does NVD. Overall, the number of scores that have serious errors are low, but they can still introduce a level of error into statistics.
  • One of the vulnerabilities (OSVDB 88774 / CVE-2012-4792) has no formal Microsoft advisory, because it is a 0-day that was just discovered two days ago. There will almost certainly be a formal Microsoft advisory in January 2013 that covers it. This highlights a big problem with using vendor advisories for any statistic generation. Vendors generally release advisories when their investigation of the issue has completed and a formal solution is made available. Generating statistics or graphics off the same vulnerabilities, but using disclosure versus solution date, will give two different results; a short sketch after this list illustrates the difference.
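Here is that sketch: a toy example of how counting by disclosure date versus solution date shifts the same vulnerabilities between years (the CVE-2012-4792 dates are approximate, per the bullet above; the second entry is purely hypothetical filler):

from collections import Counter
from datetime import date

# (identifier, disclosure_date, solution_date)
vulns = [
    ("CVE-2012-4792", date(2012, 12, 29), date(2013, 1, 14)),  # day and advisory date assumed
    ("CVE-2012-XXXX", date(2012, 6, 12), date(2012, 6, 12)),   # hypothetical placeholder
]

by_disclosure = Counter(d.year for _, d, _ in vulns)
by_solution   = Counter(s.year for _, _, s in vulns)

print("Counted by disclosure year:", dict(by_disclosure))  # {2012: 2}
print("Counted by solution year:  ", dict(by_solution))    # {2012: 1, 2013: 1}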

These are just a few ways that statistics can be manipulated, often by accident, and why presenting as much data and explanation as possible is beneficial to everyone. I certainly hope that SCMagazine and/or CORE will issue a small correction or explanation as to what the “83” number really represents.

Microsoft, Aurora and Something About Forest and Trees?

[This was originally published on the OSVDB blog.]

Perhaps it is the fine tequila this evening, but I really don’t get how our industry can latch on to the recent ‘Aurora’ incident and try to take Microsoft to task about it. The amount of news on this has been overwhelming, and I will try to very roughly summarize:

Now, here is where we get to the whole forest, trees and some analogy about eyesight. Oh, I’ll warn (and surprise) you in advance, I am giving Microsoft the benefit of the doubt here (well, for half the blog post) and throwing this back at journalists and the security community instead. Let’s look at this from a different angle.

The big issue that is newsworthy is that Microsoft knew of this vulnerability in September, and didn’t issue a patch until late January. What is not clear is if Microsoft knew it was being exploited. The wording of the Wired article doesn’t make it clear: “aware months ago of a critical security vulnerability well before hackers exploited it to breach Google, Adobe and other large U.S. companies” and “Microsoft confirmed it learned of the so-called ‘zero-day’ flaw months ago”. Errr, nice wording. Microsoft was aware of the vulnerability (technically) before hackers exploited it, but the article doesn’t specifically say if Microsoft KNEW hackers were exploiting it. Microsoft learned of the “0-day” months ago? No, bad bad bad. This is taking an over-abused term and making it even worse. If a vulnerability is found and reported to the vendor before it is exploited, is it still a 0-day (tree, forest, no one there to hear it falling)?

Short of Microsoft admitting they knew it was being exploited, we can only speculate. So, for fun, let’s give them a pass on that one and assume it was like any other privately disclosed bug. They were working it like any other issue, fixing, patching, regression testing, etc. Good Microsoft!

Bad Microsoft! But, before you jump on the bandwagon, bad journalists! Bad security community!

Why do you care they sat on this one vulnerability for six months? Why is that such a big deal? Am I the only one who missed the articles pointing out that they actually sat on five other code execution bugs for longer? Where was the outpouring of blogs or news articles mentioning that “aurora” was one of six vulnerabilities reported to them during or before September, all in MSIE, all allowing remote code execution (tree, forest, not seeing one for the other)?

CVE | Reported to MS | Disclosed | Time to Patch
CVE-2010-0244 | 2009-07-14 | 2010-01-21 | 6 months, 7 days (191 days)
CVE-2010-0245 | 2009-07-14 | 2010-01-21 | 6 months, 7 days (191 days)
CVE-2010-0246 | 2009-07-16 | 2010-01-21 | 6 months, 5 days (189 days)
CVE-2010-0248 | 2009-08-14 | 2010-01-21 | 5 months, 7 days (160 days)
CVE-2010-0247 | 2009-09-03 | 2010-01-21 | 4 months, 18 days (140 days)
CVE-2010-0249 | 2009-09-?? | 2010-01-14 | 4 months, 11 days (133 days) – approx
CVE-2010-0027 | 2009-11-15 | 2010-01-21 | 2 months, 6 days (67 days)
CVE-2009-4074 | 2009-11-20 | 2010-01-21 | 2 months, 1 day (62 days)
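
For anyone who wants to check the math, here is a minimal sketch of the date arithmetic behind the “Time to Patch” column, using two rows from the table above (the helper name is mine):

```python
# Minimal sketch: verify the day counts in the "Time to Patch" column.
from datetime import date

def days_to_patch(reported, disclosed):
    return (disclosed - reported).days

print(days_to_patch(date(2009, 7, 14), date(2010, 1, 21)))  # 191 (CVE-2010-0244)
print(days_to_patch(date(2009, 9, 3), date(2010, 1, 21)))   # 140 (CVE-2010-0247)
```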

Remind me again, why is the “Aurora” conspiracy noteworthy? If Microsoft knew of six remote code execution bugs, all from the September time-frame or earlier, why is one any more severe than the others? Is it because one was used to compromise hosts, then detected and published in an extremely abnormal fashion? Are we actually trying to hold Microsoft accountable on that single vulnerability when the five others just happened not to be used to compromise Google, Adobe and others?

Going back to the Wired article, they say in the second-to-last paragraph: “On Thursday, meanwhile, Microsoft released a cumulative security update for Internet Explorer that fixes the flaw, as well as seven other security vulnerabilities that would allow an attacker to remotely execute code on a victim’s computer.” Really, Wired? That late in the article, you gloss over “seven other vulnerabilities” that would allow remote code execution? And worse, you don’t point out that Microsoft was informed of five of them BEFORE AURORA?

Seriously, I am the first one to hold Microsoft over the flames for bad practices, but that goes beyond my boundaries. If you are going to take them to task over all this, at least do it right. SIX CODE EXECUTION VULNERABILITIES that they KNEW ABOUT FOR SIX MONTHS. Beating them up over just one is amateur hour in this curmudgeonly world.

Microsoft LifeCam – sucks the life out of me

It wasn’t my choice, but I was handed a Microsoft “LifeCam” today, as the original recipient didn’t want it. Figured I’d try to use it to make a Guinea Pig cam after seeing a nice setup and live streaming a few nights ago.

The software it comes with makes me want to vomit. Insert the CD, let it run. It installs the software and immediately shuts down the machine. No warning, doesn’t let you gracefully close running applications, just bam. The instructions say to run the software now… but oh wait, it didn’t really install. It apparently had to install drivers that require a reboot, then you actually install the software. But the installation process looks identical, with no way to know the second time is actually installing it. Microsoft, it’s been 20 fucking years, and you still can’t produce halfway user-friendly software? Nevermind that it finishes at almost 60 megs, nevermind that the picture quality is complete ass, nevermind that it can’t handle dimly lit rooms like competing cameras can. Forcing me to reboot w/o warning and install your software twice? All the while telling me I should install Windows Live Messenger?! Fuck you.

If I opt not to install Messenger, every time I accidentally hit the overly large button on top of the camera, I get a popup telling me that Messenger is required to use that functionality. If you are going to integrate this into Windows so heavily, the least you could do is not require the reboot.

Hypersensitive Computer Displays

I’ve noticed this many times over the last decade: computerized display systems are often hypersensitive to the point of being silly and absurd. Years ago I noticed downloading files via some clients would show me the speed of the transfer to the Nth place, where N is totally ridiculous. I was getting an 83.92384293842903k download and all but the first three numbers were a blur as they changed too quickly to read. Instead of updating the number every second, it updated as fast as the computer could. What does that do for me exactly?

Along these lines are computerized estimates of time left. Microsoft Windows is well known for showing absurd times to complete a task. One minute it shows 4 minutes to copy that file, the next it shows 83 minutes. I’ve actually seen it show over 6,000,000 hours to finish a large file copy (that ended up finishing in about an hour). Some programs (names fail me right now) also have install progress meters that are totally worthless. As the bar reaches the right and you think it is finished installing, the bar resets and does it again. And again. And again. What is the purpose of the progress bar if there is no “overall % complete” or indication of how many bars we must suffer through?

Jump to today, which triggered this. I’m driving through Atlanta in a PT Cruiser, feeling like a douchebag, when I notice the readout showing me “291 miles until empty”. Yes, the car is so smart it can tell me how many miles I can drive before I have to refill the tank! Wait, it was 285 in the rental parking lot 15 miles ago. I pull in to my hotel some 8 miles later and it shows me 294 miles until empty. So apparently in the state of Georgia, driving the car actually fills the tank, not empties it. Gas buyers rejoice!

It’s patch xxxday!

[This was originally published on the OSVDB blog.]

A while back, Microsoft announced they were moving to release patches on the second Tuesday of each month, lovingly called Patch Tuesday. Soon after, Oracle announced that they too would be moving to scheduled releases of patches, on the Tuesday closest to the 15th day of January, April, July and October. Now, Cisco has announced they are moving to scheduled patches on the fourth Wednesday of March and September of each calendar year.
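
For the curious, here is a minimal sketch of how those schedules translate into concrete dates using Python’s calendar module (the helper names are mine; the year 2008 is an assumption on my part, but it is consistent with the dates listed below):

```python
# Minimal sketch: derive the scheduled patch dates described above. The year
# 2008 is assumed here because it matches the dates listed below.
import calendar

def nth_weekday(year, month, weekday, n):
    """Date of the nth given weekday (calendar.MONDAY..SUNDAY) in a month."""
    days = [d for d in calendar.Calendar().itermonthdates(year, month)
            if d.month == month and d.weekday() == weekday]
    return days[n - 1]

def closest_tuesday_to_15th(year, month):
    """Oracle's schedule: the Tuesday closest to the 15th of the month."""
    tuesdays = [d for d in calendar.Calendar().itermonthdates(year, month)
                if d.month == month and d.weekday() == calendar.TUESDAY]
    return min(tuesdays, key=lambda d: abs(d.day - 15))

print(nth_weekday(2008, 3, calendar.TUESDAY, 2))    # 2008-03-11 (Patch Tuesday)
print(nth_weekday(2008, 3, calendar.WEDNESDAY, 4))  # 2008-03-26 (Cisco)
print(closest_tuesday_to_15th(2008, 4))             # 2008-04-15 (Oracle)
```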

In the attempt to make life easier on administrators and help avoid installing patches every few days, these scheduled releases are now causing organizations to enjoy life between monster patches.

Mar 11 – Microsoft
Mar 26 – Cisco
Apr 8 – Microsoft
Apr 15 – Oracle
May 13 – Microsoft
June 10 – Microsoft
July 8 – Microsoft
July 15 – Oracle
August 12 – Microsoft
September 9 – Microsoft
September 24 – Cisco
October 14 – Microsoft, Oracle
November 11 – Microsoft
December 9 – Microsoft

As you can see, October 14 promises to be a lot of fun for companies running Oracle products on Microsoft systems. While the scheduled dates look safe, I can’t wait until we see the “perfect storm” of vendor patches.