Search Speak for Automaton

Alternate titles for this blog could be “Doodle Transition for Machina” perhaps! For at least a decade I have thought about just such an application and today I have Google Translate for Android. Load, aim, and it will process the text and translate on screen for you. Given the state of technology you would think it would be amazing by now, and it sometimes is.

The success largely depends on the language, and that can also be seen using translate.google.com, where some languages translate fairly cleanly and others are very rough. One language I have to translate frequently is Chinese (simplified), and it is problematic for many things including company names and technical terms. With that in mind, I would expect the Google Translate app to have the same issues and, more specifically, to have them consistently.

Since I am writing this, you know what’s coming…

This is the result of holding the phone up to a mail label from Japan. That’s all! Just moving the phone ever so slightly by tilting it or moving it half an inch closer / farther will make it change the translation. I think it finally got it a bit correct on that last one since the envelope didn’t contain anything living.

Hopefully the translation technology from Google will advance more quickly on Asian languages. Until then, I am just glad I didn’t get any “Sunrise Holy Poop” in that envelope.

Thoughts on 0-days and Risk in 2020

[Stupid WordPress. This was scheduled to publish Nov 23 but didn’t for some reason. Here it is, a bit late…]

On Friday, Maddie Stone from the Google P0 team Tweeted about the 0-day exploits her team tracks. As someone who checks that sheet weekly and tracks vulnerabilities, including ones ‘discovered in the wild’, this is a topic that is squarely in my tiny niche in the industry. Also, big fan of the P0 team!

I replied to her Tweet suggesting it come with a disclaimer that it didn't represent "all" 0-days; rather, that they track high-end 0-days used primarily in "APT" attacks. Ben Hawkes, manager of the team, replied and agreed with that assessment. Before we proceed, let's define 0-day real quick, since the term is used for a variety of vulnerabilities, often incorrectly.

In this context, a 0-day is a vulnerability that was actually found being exploited in the wild before there was public knowledge of it. In Risk Based Security's VulnDB, we track that as "discovered in the wild". Since VulnDB is comprehensive and our goal is to track every vulnerability, regardless of software or severity, we tend to aggregate a lot more than others. As of this post, we have over 78,000 vulnerabilities that aren't found in CVE / NVD, as a point of comparison. In my reply to Maddie I pointed out that we had seen 51 this year compared to their 22.

Next, Allen Householder replied to me with a fun question: how many vulnerabilities did that really represent? Out of the 20,000+ vulnerabilities aggregated in 2020, we have 51 that are flagged as "discovered in the wild". That represents only 0.25% of all vulnerabilities this year. One point I made previously is that Google's team likely doesn't care about a 0-day in the "Adning Advertising Plugin for WordPress", despite it being used to compromise WordPress blogs.
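To make that arithmetic concrete, here is a minimal sketch in Python; the record structure and the "discovered_in_wild" flag are hypothetical stand-ins for illustration, not VulnDB's actual schema.

```python
# Toy illustration of the ratio above. The Vuln record and its
# "discovered_in_wild" flag are hypothetical stand-ins, not VulnDB's schema.
from dataclasses import dataclass

@dataclass
class Vuln:
    vuln_id: int
    discovered_in_wild: bool = False

# Pretend 2020 aggregation: 20,000 entries, 51 flagged as in-the-wild.
vulns_2020 = [Vuln(i) for i in range(20_000)]
for v in vulns_2020[:51]:
    v.discovered_in_wild = True

in_wild = [v for v in vulns_2020 if v.discovered_in_wild]
share = 100 * len(in_wild) / len(vulns_2020)
print(f"{len(in_wild)} of {len(vulns_2020):,} ({share:.2f}%) were 0-days")
# Prints ~0.26% with a flat 20,000; the post's 0.25% reflects the real
# 2020 total, which was a bit over 20,000.
```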

So with that number in mind, it goes back to the narrative that companies need to be scared of 0-days. They absolutely do! But… and this is the big qualifier that needs to come with that fear: perhaps they don't need to be as afraid of 0-days as of the already public vulnerabilities they have missed. With only 51 0-days in 2020, the vast majority of organizations simply aren't likely to be targeted by one. Fully patching all the known vulnerabilities that impact them should be priority one.

More to the point, vulnerabilities that have functional public exploits, allowing anyone to trivially launch a viable attack, are consistently a much bigger risk than the elusive 0-days. That is also a reminder of how often CVSS falls short if your vulnerability intelligence doesn't provide Temporal scoring or exploit availability. Organizations making risk decisions using only the CVSS Base score are missing an important risk attribute.
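To illustrate the point, here is a small sketch of the CVSS v3.1 Temporal score calculation. The multiplier tables are the Temporal values from the v3.1 specification as I understand them, and the 7.8 base score is just an example, not tied to any particular vulnerability.

```python
import math

# CVSS v3.1 Temporal multipliers (Exploit Code Maturity, Remediation
# Level, Report Confidence), per the v3.1 specification.
EXPLOIT_CODE_MATURITY = {"X": 1.0, "H": 1.0, "F": 0.97, "P": 0.94, "U": 0.91}
REMEDIATION_LEVEL     = {"X": 1.0, "U": 1.0, "W": 0.97, "T": 0.96, "O": 0.95}
REPORT_CONFIDENCE     = {"X": 1.0, "C": 1.0, "R": 0.96, "U": 0.92}

def roundup(value: float) -> float:
    """Smallest one-decimal value >= input (simplified CVSS Roundup)."""
    return math.ceil(value * 10 - 1e-9) / 10

def temporal_score(base: float, e: str, rl: str, rc: str) -> float:
    return roundup(base * EXPLOIT_CODE_MATURITY[e]
                        * REMEDIATION_LEVEL[rl]
                        * REPORT_CONFIDENCE[rc])

base = 7.8  # example base score only
# Functional public exploit, official fix, confirmed report:
print(temporal_score(base, "F", "O", "C"))  # 7.2 -- stays high
# Unproven exploit, official fix, confirmed report:
print(temporal_score(base, "U", "O", "C"))  # 6.8 -- noticeably lower
```

The gap looks small on paper, but the direction matters: a functional public exploit keeps the score near the base, which is exactly the signal a base-score-only view throws away.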

I'll end this blog with some arbitrary statistics around 0-days, just for fun! These are based on VulnDB data as of 11/21/2020. Note that metadata, including the 'discovered in the wild' classification, is less complete before 2012.

  • 241,690 vulnerabilities, only 641 are 0-days (0.27%)
  • 14 are in Google products: Chrome (5), V8 (3), Android (6)
  • 146 are in Microsoft products: Windows (63), IE (36)
  • 13 are in Apple products
  • 7 are in Oracle products: Java (4)
  • 62 are in Adobe products: Flash (38), Reader (14)
  • 18 are in security products 😞
  • The oldest is from 1975 in RSTS/E! Yes, for real.
  • The oldest you likely recognize is in Sendmail, from November 1983

Microsoft’s latest plea for CVD is as much propaganda as sincere.

[This was originally published on the OSVDB blog.]

Earlier today, Chris Betz, senior director of the Microsoft Security Response Center (MSRC), posted a blog calling for "better coordinated vulnerability disclosure".

Before I begin a rebuttal of sorts, let me be absolutely clear. The entire OSVDB team is very impressed with Microsoft's transition over the last decade as far as security response goes. The MSRC has evolved and matured greatly, which benefits both Microsoft and their customers world-wide. This post is not meant to undermine their efforts at large, rather to point out that, just as on day one, propaganda is still a valuable tool for the company. I will preface this with a reminder that this is not a new issue. I personally blogged about this as far back as 2001, after Scott Culp (then at Microsoft) wrote a polarizing piece about "information anarchy" that centered on disclosure issues. At some point Microsoft realized this was a bad position to take and that it didn't endear them to the researchers providing them free vulnerability information. Despite that, it took almost ten years for Microsoft to drop the term "responsible" disclosure (also biased against researchers) in favor of "coordinated" disclosure. Again, Microsoft has done a phenomenal job advancing their security program, especially in the last three to five years. But… it has come on the back of a confrontational policy toward researchers.

Reading yesterday's blog, there are bits and pieces that stand out to me for various reasons. It is easy to gloss over many of these unless you are a masochist who spends most of their waking time buried in vulnerability aggregation and related topics.

In terms of the software industry at large and each player’s responsibility, we believe in Coordinated Vulnerability Disclosure (CVD).

Not sure I have seen "CVD" used as a formal initialism until now, which is interesting. After trying to brand "information anarchy" and pushing the term "responsible disclosure", it is good to see you embrace a better one.

Ultimately, vulnerability collaboration between researchers and vendors is about limiting the field of opportunity so customers and their data are better protected against cyberattacks.

And this line, early on in the blog, demonstrates you do not live in the real world of vulnerability disclosure. Microsoft has enjoyed their 'ivory' tower, so to speak. Many researchers find and disclose vulnerabilities for entirely selfish reasons (e.g. bug bounties), which you basically do not offer. Yes, you have a bounty program, but it is very different from most and does not reward the vast majority of vulnerabilities reported to you. Microsoft has done well in creating a culture of "report vulnerabilities to us for free for the honor of being mentioned in one of our advisories". And I get that! Being credited in a Microsoft advisory is its own form of advertising for researcher talent. However… the researchers who chase that honor are a minority in the greater picture.

Those in favor of full, public disclosure believe that this method pushes software vendors to fix vulnerabilities more quickly and makes customers develop and take actions to protect themselves. We disagree.

Oh sorry, let me qualify: your black-and-white tower. This absolutely does work for some vendors, especially those with a poor history of dealing with vulnerability reports. You may not have been one of them for the last 10 years, but you once were. Back in the late '90s, Microsoft had a reputation for being horrible when dealing with researchers: no vulnerability disclosure policy, no bug bounty (even five years after Netscape had implemented one), and no standard process for receiving and addressing reports. Yes, you have a formal and mature process now, but many of us in the industry remember your beginnings.

It is necessary to fully assess the potential vulnerability, design and evaluate against the broader threat landscape, and issue a “fix” before it is disclosed to the public, including those who would use the vulnerability to orchestrate an attack.

This is a great point. But, let’s read on and offer some context using your own words…

Of the vulnerabilities privately disclosed through coordinated disclosure practices and fixed each year by all software vendors, we have found that almost none are exploited before a “fix” has been provided to customers, and even after a “fix” is made publicly available only a very small amount are ever exploited.

Wait, if only a very small amount of vulnerabilities are exploited after a fix, and ‘almost none’ are exploited before a fix… why do you care if it is coordinated? You essentially invalidate any argument for a researcher coordinating disclosure with you. Why do they care if you clearly state that coordination doesn’t matter, and that the vulnerability will “almost [never]” be exploited? You can’t have this both ways.

CVD philosophy and action is playing out today as one company – Google – has released information about a vulnerability in a Microsoft product, two days before our planned fix on our well known and coordinated Patch Tuesday cadence, despite our request that they avoid doing so.

And this is where you move from propaganda to an outright lie. The issue in question was disclosed on December 29, 2014. That is 15 days, not two days, before your January patch Tuesday. I’d love to hold my breath waiting for MSRC or Betz to explain this minor ’rounding error’ on dates, but I have a feeling I would come out on the losing side. Or is Microsoft simply not aware of public vulnerability disclosures and should perhaps invest in a solution for such vulnerability intelligence? Yes, blatant sales opportunity, but they are desperately begging for it given this statement. =)

[Update. Apparently Microsoft is unhappy over Issue 123, which was auto-published on January 11, as opposed to Issue 118 linked above, auto-published on December 29. So they are correct about the two days, but it is curious that they aren't complaining about 118 at the same time, when both are local privilege escalation vulnerabilities.]

One could also argue that this is a local privilege escalation vulnerability, which requires a level of access to exploit that simply does not apply to a majority of Windows users. Betz goes on to say that software is complicated (it is) and that not every vulnerability is equal (also true), but glosses over the fact that Google is in the same boat they are. A little over four years ago, the Google security team posted a blog talking about "rebooting" responsible disclosure and said this:

As software engineers, we understand the pain of trying to fix, test and release a product rapidly; this especially applies to widely-deployed and complicated client software. Recognizing this, we put a lot of effort into keeping our release processes agile so that security fixes can be pushed out to users as quickly as possible.

To be fair, Google also did not publish a timeline of any sort with this disclosure. We don't know anything that happened after the September 30, 2014 report to Microsoft. Did you ask for more time, Google? Did Microsoft say it was being patched in January? If so, you look like total assholes, disclosure policy be damned. If they didn't mention January specifically and only asked for more time, maybe it was fair that you kept to your schedule. One of the two parties should publish all of the correspondence now. What's the harm, the issue is public! Come on… someone show their cards and prove the other wrong. Back to Microsoft's blog…

What’s right for Google is not always right for customers.

This is absolutely true. But you forgot the important qualifier: what is right for Microsoft is not always right for customers.

For example, look at CVE-2010-3889 (heavily referenced), aka "Microsoft Windows on 32-bit win32k.sys Keyboard Layout Loading Local Privilege Escalation". This is one of four vulnerabilities used by Stuxnet. Unfortunately, Microsoft has no clear answer as to whether this is even patched, four years later. That CVE identifier doesn't seem to exist in any Microsoft security advisory. Why not? Did you really let a vulnerability that may have aided an attack on an Iranian nuclear facility go unpatched? Think of the ethics questions there! Or is this a case of the Microsoft security response process not being as mature as I give it credit for, and this is a dupe of CVE-2010-2743? Why does it take a third party four years to figure this out while writing a blog on a whim?

It is a zero sum game where all parties end up injured.

What does this even mean, other than propaganda? It is rarely, if ever, the case that "all parties" are injured. If a researcher discloses something to you and publishes prematurely, or publishes on their own without contacting you, that party usually is not 'injured' in doing so. That is a simple fact.

Betz’ blog goes on to quote the Microsoft CVD policy which states:

Microsoft’s Approach to Coordinated Vulnerability Disclosure
Under the principle of Coordinated Vulnerability Disclosure, finders disclose newly discovered vulnerabilities in hardware, software, and services directly to the vendors of the affected product; to a national CERT or other coordinator who will report to the vendor privately; or to a private service that will likewise report to the vendor privately.

Perhaps you should qualify that statement, as US-CERT has a 45 day disclosure policy in most cases. That is half the time Google gave you. Quoting from the US-CERT policy:

Q: Will all vulnerabilities be disclosed within 45 days?
A: No. There may often be circumstances that will cause us to adjust our publication schedule. Threats that are especially serious or for which we have evidence of exploitation will likely cause us to shorten our release schedule. Threats that require “hard” changes (changes to standards, changes to core operating system components) will cause us to extend our publication schedule. We may not publish every vulnerability that is reported to us.

Note that it does not qualify this with "the vendor asks for more time". That is the United States government saying a vendor gets 45 days to patch, with rare exception. Oh wait, Mr. Betz, before you go quoting "changes to core operating system components", I will stop you there. Vulnerabilities in win32k.sys are not new. That 3.1 MB binary (on Windows 7) has caused a lot of grief for Windows users all by itself. Given that history, you cannot say that changes to that file meet the US-CERT criteria.

Finally, this isn't the first pissing match between Google and Microsoft on vulnerability disclosure. While Microsoft has routinely played the victim card and Google certainly seems more aggressive with their disclosure policy, there is more than one bit of irony if one looks deeper. In random order…

Microsoft disclosed a vulnerability in Google Chrome, but didn’t do proper research. This vulnerability may be in WebKit as one person notes, meaning it could affect other browsers like Apple Safari. If it does, then Apple would get blindsided in this disclosure, and it would not be ‘coordinated’ or ‘responsible’, and would qualify as ‘information anarchy’ as Microsoft once called it. While we don’t know if it was ultimately in WebKit, we do know this vulnerability exists because Google Chrome was trying to work around issues with Microsoft software.

Look at MSVR11-011 and MSVR11-012 from 2011, where Microsoft “coordinated” two vulnerabilities with the FFmpeg team. To be sure, the FFmpeg team is outstanding at responding to and fixing vulnerabilities. However, in the real world, there are thousands of vendors that use FFmpeg as a library in their own products. While it may have been fixed in the base code, it can easily take somewhere between months and a decade for vendors to learn about and upgrade the library in their software. Only in a completely naive world could Microsoft call this “coordinated”.

Even better, let’s go back to the inaugural Microsoft Vulnerability Research (MSVR) advisory, MSVR11-001. This was a “Use-After-Free Object Lifetime Vulnerability in Chrome” that in reality was a vulnerability in WebKit, the underlying rendering library used by Chrome. The problem is that WebKit is used by a lot more than Chrome. So the first advisory from MSVR conveniently targets a Google product, but completely botches the “coordinated” disclosure, going to a single vendor using WebKit code, because the Microsoft researchers apparently didn’t diagnose the problem fully. No big deal right?

Wrong. I am sure Adobe, Samsung, Amazon, Tizen, Symbian, BlackBerry, Midori, and Android web browser users would disagree strongly. Do you really want to compare the number of users you blindsided with this “coordinated” disclosure to the ones you protected? Microsoft was a bigger jackass on this disclosure than Google ever was, plain and simple.

Finally, do I even need to go into the absolute mess that you call the "Advance Notification Service" (ANS)? In case readers aren't aware, this is not a single program. It is several different programs with various names, like MAPP and others. Just three days ago, you, Mr. Betz, announced that ANS was changing. This is after another program was changed drastically, multiple companies were kicked out of the MAPP program, and who knows what else happened. All of which was founded on Microsoft giving advance and sometimes detailed vulnerability information to questionable companies that may not be friendly parties.

The entire notion of “coordinated” disclosure went out the window as far as Microsoft goes, when they first implemented these programs. You specifically gave a very limited number of organizations details about vulnerabilities, before other customers had access. That, by definition, is not coordination. That is favoritism in the name of the bottom line, and speaks strongly against any intent you outline in yesterday’s blog post.

While Microsoft has taken great effort to improve their security process, it is disingenuous to call this anything but propaganda.

Stop Using Google, It’s Dangerous!

[This was originally published on the OSVDB blog.]

Reported Phishing/Vulnerable Site! The web site http://www.google.com has been reported as a vulnerable site that may pose a threat to your web browsing. Vulnerable sites do not prioritize security and don’t care about their users and customers. These sites may pose a risk to you, exploit the trust between you and their site and may cause your computer to perform actions you did not approve.

To carry on the scary wording in the style of others: some web sites are high profile and may seem trustworthy, but you shouldn't trust them at all. They are full of buggy code, don't care about protecting their users (that's you!) and generally suck. Despite using their site as a virtual crutch, you should clearly stay away from them unless it is to send nasty mails or mock them. Again, do not trust Google's web sites or search engine, because they have been known to be vulnerable. What assholes!

On a more serious note, if anyone at Google is reading this, I hope you pass this on to the jackasses that develop Google Toolbar or whatever hook they use to integrate with Firefox. Not only is it worse than malware (every piece of software tries to get me to install it), it uses misleading wording to scare customers away from visiting perfectly safe and innocent web sites (namely this blog). While it caters to morons, it doesn't give users a real opportunity to learn why a site was 'blocked', other than vague wording in the diagnostic page:

My only guess as to why this warning occurs is an incident earlier this year, in which the OSVDB blog fell victim to a zero-day exploit in WordPress. I blogged about the incident at the time to make our readers aware of it and clear up any confusion. I assume that Google's crawl of this blog noted the script code and subsequently declared us an "attack site", even though that is hardly the case.

The discouraging part is the “diagnostic page” says that Google visited ONE page in the last 90 days and 0 of those pages resulted in malicious software being downloaded. Google, if you are going to play Lord of the Browser, visit more than one page before you make that determination. To do anything less is a disservice to your users and a fast way to miss obvious malware. The third question mentions “intermediary” which is technically accurate as far as the script code that was injected in a few blog posts. However, the big red warning says nothing about ‘intermediary’ and explicitly labels us as some kind of malware hosting site with the intent of attacking people. That is libelous to say the least. Under ‘How did this happen’, Google mentions that sometimes third parties can inject such code, but doesn’t take the time to help clear this up. If the previous script injection issue is the cause of this, the fact that the script loaded content from a third party domain (in China no less) should be a good indication that WE did not host the malware. Sure, most users are dumb as a rock, but the few smart cookies that click for details should get just that.. details.
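For what it's worth, a site owner doesn't need to wait for Google's verdict to spot this class of injection. Below is a minimal sketch using only the Python standard library; the post.html path and the allowlist of domains are assumptions you would replace with your own.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

# Hosts we expect to serve content on our own pages; anything else in a
# <script src> or <iframe src> deserves a closer look.
ALLOWED_DOMAINS = {"osvdb.org", "blog.osvdb.org"}  # example allowlist

class InjectionFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.suspicious = []

    def handle_starttag(self, tag, attrs):
        if tag not in ("script", "iframe"):
            return
        src = dict(attrs).get("src")
        if not src:
            return
        host = urlparse(src).hostname or ""
        if host and not any(host == d or host.endswith("." + d)
                            for d in ALLOWED_DOMAINS):
            self.suspicious.append((tag, src))

# post.html: a saved copy of the page to check (hypothetical path).
finder = InjectionFinder()
finder.feed(open("post.html", encoding="utf-8", errors="replace").read())
for tag, src in finder.suspicious:
    print(f"possible injected <{tag}> loading from: {src}")
```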

What Google Toolbar users may see when visiting this blog:

Finally, I opened this blog post by calling Google's search engine a threat, and I was serious. Google has a track record of vulnerabilities far worse than OSVDB does, not only in their popular search engine but in their various other products too. Besides, their mechanism for reporting potentially dangerous sites is a bit dubious, to say the least.

Update: It turns out we had another iframe injection into one of our posts (which has now been removed), and the hunt for how this is happening now begins. That said, while Google's warning that this site is "dangerous" may have been accurate, their mechanism for warning users in a vague manner (as shown in the image linked off 'vague warning') while not warning the site administrator is far from friendly. I can see that Google doesn't care about warning sites of issues before warning the public, a far cry from the 'responsible disclosure' that Google pretends to care about:

This process of notifying a vendor before publicly releasing information is an industry-standard best practice known as responsible disclosure. Responsible disclosure is important to the ecology of the Internet. It allows companies like Google to keep users safe by fixing vulnerabilities and resolving security concerns before they are brought to the attention of the bad guys. We strongly encourage anyone who is interested in researching and reporting security issues to observe the simple courtesies and protocols of responsible disclosure.

Next time OSVDB is informed of a vulnerability that impacts Google products or services, I sure hope it doesn’t slip our mind to contact them. Perhaps the apparent race condition between the vague wording and the not-so-vague wording (below) that users may see constitutes a bug. If they can read this blog, they can see the bug in action and then contact us if they have more questions.

Update 2: Google apparently tried to send mail to our domain: From: Google Search Quality

OSVDB Chosen for Google Summer of Code 2007

[This was originally published on the OSVDB blog.]

For the second year now, OSVDB has been selected to participate in the Google Summer of Code program. It’s pretty neat to be in this program along with other relatively unheard of projects like Debian, FreeBSD, GNU, KDE, NetBSD, OpenSolaris, PHP, PostgreSQL, Python, Samba, Apache, EFF, Fedora and X.org. =)

As always, Google continues to give back to the community in ways most companies will never understand or appreciate.

Google VulnSearch?

[This was originally published on the OSVDB blog.]

Fall behind and someone will always beat you to the punch! Gadi Evron posted an entry over at Securiteam on the topic of using Google's Codesearch to find vulns. Since he and others are writing about this, I don't have to! However, I'll post a few more thoughts before anyone else does, maybe!

First, we have this great ability to (ab)use Google's Codesearch to find vulnerabilities through fast code analysis. Is this a fun but very short-lived fad? Or will we see people use this to disclose vulnerabilities and give credit to their method? Will it lead to a lot of false positives, like we're seeing with remote file inclusion? Several 'researchers' are grep'ing for a single string, finding it, and posting it as a remote file inclusion vulnerability without really analyzing the code or testing their own "proof of concept". Hopefully, researchers will use this new tool to not only find vulnerabilities, but to truly validate their findings before disclosing.

Second, who is going to be the first to create an interface that smoothly links Google Codesearch with a robust static code analyzer? Imagine a web interface where you choose a few key things, like what language and what types of vulnerabilities, and click, click for all the results. The program would then pipe the Codesearch results into the code analyzer and spit out a list of high-probability vulnerabilities. A rough sketch of the idea follows below.

Some of these ideas courtesy of email discussions with Chris Wysopal, Mudge and others.
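As a crude illustration of that second idea, here is a toy filter in Python. It assumes you have already saved candidate PHP snippets (say, from Codesearch results) into a local directory; the directory name and the patterns are illustrative only, and a real static analyzer would do far more than this.

```python
import re
from pathlib import Path

# Lines where include/require mentions raw user input in the same
# statement -- the classic remote file inclusion smell discussed above.
RFI_PATTERN = re.compile(
    r"\b(include|include_once|require|require_once)\b[^;]*"
    r"\$_(GET|POST|REQUEST|COOKIE)\b"
)
# A very rough sign the value is sanitized before use on the same line.
SANITIZED = re.compile(r"\b(basename|realpath|intval|preg_replace)\s*\(")

def scan(directory: str = "codesearch_hits") -> None:  # hypothetical dir
    for path in Path(directory).rglob("*.php"):
        lines = path.read_text(errors="replace").splitlines()
        for lineno, line in enumerate(lines, 1):
            if RFI_PATTERN.search(line) and not SANITIZED.search(line):
                print(f"{path}:{lineno}: possible RFI: {line.strip()}")

if __name__ == "__main__":
    scan()
```

Even something this simple goes one step beyond grepping for a single string, since it at least skips lines that show an obvious sanitization call.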

Google Device Vulnerabilities, EULA and More…

[This was originally published on the OSVDB blog.]

H D Moore recently wrote that he discovered several vulnerabilities in Google Search Appliances. You can find details of these on the Metasploit Vulnerability Page, as well as search OSVDB for the corresponding entries. Normally this wouldn't be worth posting about; however, Moore's comments on the Google EULA and how it impacts vulnerability research are worth noting. From his mail:

I found some fun bugs in the Google Search Appliance and uploaded the results in preparation for a Monday morning release. To get an idea of how many affected systems there are, just Google for inurl:proxystylesheet. Google released a patch on August 16th and I agreed to wait at least 60 days past that before disclosing the bugs.

A warning to anyone who owns one of these appliances – the EULA and confidentiality agreement prohibit any form of security research or publication of results. After I reported the issue, their security team offered to send me a Mini for patch verification, but agreeing to the license terms would prevent me from publishing any information about the product in the future. I got a beach towel and shirt instead 🙂

This also brings up the question of why Google won't publicly release their security advisories. Searching Google for "GA-2005-08-m" finds one reference to someone having problems with the latest patches, but no copies of the advisory. Seems Google is all about organizing and sharing the world's information… unless it's information on their own vulnerabilities? Oh wait, "the Google Search Appliance does not create security issues"!