Assessing the ‘War on Tech’: Huawei vs. U.S.

[I wrote this with Curtis Kang who did a lot of work researching various aspects of this article and provided invaluable help. His research and written contributions made this article possible. It was originally intended to be published on RiskBasedSecurity.com in early 2020 but was passed over so I am publishing it here.]


In 2019, we saw the steadily growing social and policy conflicts between the United States and China reach a boiling point. China has been a major talking point of President Trump’s platform since early in his campaign. However, it wasn’t until last year that we saw active policies enforcing a so-called “war on tech” between the U.S. and major Chinese companies like Huawei and ZTE, and those policies being “sidestepped”. We wanted to examine this from a data perspective, looking at the vulnerabilities in comparable companies on both sides.

To set the stage, it is useful to briefly revisit the U.S. vs. China timeline.

The Trade War

Since taking office in January 2017, President Trump has had a specific interest in China, stating early on that the “U.S. will be on a level playing field”. This led to several rounds of tariffs being imposed against China starting in March 2018, and retaliatory tariffs being imposed against the U.S. Early in 2019, there was conjecture that President Trump might use an executive order to bar some U.S. companies, such as wireless carriers, from purchasing Chinese electronic devices. That executive order was signed on May 15, 2019, citing the National Emergencies Act (50 U.S.C. 1601 et seq.), and would limit or ban purchases of “technology or services designed, developed, manufactured, or supplied, by persons owned by, controlled by, or subject to the jurisdiction or direction of a foreign adversary”.

While the executive order did not list any country or company, it was widely accepted that it was a move against Huawei in particular. The order contained interesting language, saying that the banned technology or services “poses an undue risk of sabotage” and is “an unacceptable risk” to the United States, among other wording. Technology meeting those criteria would be determined by the Secretary of Commerce, in consultation with nine other bodies “and as appropriate, the heads of other executive departments and agencies”.

On May 20, 2019, the Bureau of Industry and Security (BIS) modified the final rule and granted a Temporary General License (TGL) until August 19, 2019 for transactions regarding, among other things, “Continued Operation of Existing Networks and Equipment” and “Cybersecurity Research and Vulnerability Disclosure.” On August 19, 2019, the BIS extended the TGL by 90 days, or until November 19, 2019. Outside the TGL, any request for a license to sell or transfer commodities, software or technology to Huawei is reviewed “under a policy of presumption of denial.” In other words, the BIS provides virtually no avenue for a continued commercial relationship with Huawei after November 19, 2019.

Months later, when asked if China would retaliate, Chinese foreign ministry spokesman Geng Shuang told reporters “stay tuned.” Two weeks after that, China announced tariffs on $75 billion of U.S. products. This was followed in December with China announcing a ban on foreign technology in “all government offices and public institutions” within three years. The ban also prevented companies such as Google, Dropbox, and Facebook from being used within China. With this, the United States and China were in a new type of technology war based on the premise that the adversarial nation was producing equipment that “poses an undue risk of catastrophic effects”.

The Fear of Backdoors

Computer equipment that poses a risk in the context above typically brings to mind software vulnerabilities: issues that, with the right knowledge, would allow one country to compromise assets in the adversary nation’s government, business, or infrastructure. Another common scenario brought up by security professionals and intelligence officials is that of a backdoor: computer code planted by a party that allows them, and only them, covert remote access to the device. Some members of the U.S. intelligence community would prefer these Chinese products not be used in the technological infrastructure, saying their use “would undercut the ability of the U.S. to defend itself.”

This fear, specifically of Huawei routers from China, has been front-and-center since 2011, and a growing concern even before that. In the past, the concerns largely stemmed from each nation compromising the other’s computer networks in government and business. More recently, with the race to implement a 5G network, security issues around that technology have been heavily scrutinized. This war of technology has reminded us of 2010, when Huawei published an open letter to the U.S. government attempting to allay fears and shift public perception after a decade of suspicion. The company went so far as to request a “thorough investigation” to prove that they are “a normal commercial institution and nothing more.” This prompted eight U.S. senators to urge the White House to examine Huawei contracts and the House Intelligence Committee to investigate and publish a report on both Huawei and ZTE.

Ultimately, that report was inconclusive, stating: “despite hours of interviews, extensive and repeated document requests, a review of open-source information, and an open hearing with witnesses from both companies, the Committee remains unsatisfied with the level of cooperation and candor provided by each company.” Even over six years later, in 2019, Adam Segal, director of the Digital and Cyberspace Policy Program at the Council on Foreign Relations, officially stated that no one has found a backdoor in a Huawei product.

This is important to note, given the considerable scrutiny Huawei has received. In addition to their open letter in 2010, Huawei also disclosed their source code to a foreign government, something no U.S. company has done. Despite numerous information security organizations attempting to find and potentially publish an actual backdoor (including the U.S. National Security Agency (NSA) and specially created testing centers in the UK), none have been confirmed. That the NSA, with its significant budget and vested interest in determining whether a company like Huawei ships backdoored systems, has not disclosed one is compelling.

Ignoring Backdoors and Looking at the Data: Is a Ban Warranted?

Given that history and perspective on the growing tech war between the U.S. and China, we at Risk Based Security wanted to look at some concrete numbers around the vulnerabilities in the companies at the center of the issue.

While much of the focus on this topic has been on fear and the threat of backdoors planted by a vendor at the behest of their government, that is not necessarily where we want to direct attention. Using a backdoor, even a well-hidden one, would likely bring unwanted attention by giving more positive attribution to those who compromised the machine. Nation-state level hackers would have their own ways into a wide variety of vendors and devices purely based on ‘natural’ vulnerabilities in the code. They simply do not need the access, and the risk, a backdoor provides. Why provide hints to the enemy that you’ve “cracked the code” when you could hide behind an existing vulnerability?

Setting aside the possibility of backdoors, the question we’re interested in is this: does one of the government-used devices pose more of a risk due to its vulnerabilities? However, we have found that the “war on tech” cannot be reduced to the classic “how many vulnerabilities are there in…” question, or else unspoken bias drastically affects the perceived meaning of the numbers. While there is no way to do a perfect one-to-one comparison of U.S. versus Chinese vendors, there are some we can begin to compare, with disclaimers.

Phones: BlackBerry vs. Huawei / ZTE

For the general public, and based on much of the mainstream media reporting, Huawei is predominantly associated with its mobile phones. As more of our lives move to mobile, it is no surprise that those in power are concerned about the security of their phones and tablets. For the U.S. and Chinese governments, it is widely viewed that BlackBerry and Huawei / ZTE phones, respectively, are dominant. BlackBerry announced a five-year deal for its latest handheld and its AtHoc software with the federal government back in July 2016, specifically the Department of Defense (DoD) Joint Emergency Mass Notification Systems (JEMNS). According to the press release, the DoD chose BlackBerry because of the “secure end-to-end mobility offering .. that [shows the] secure platform is designed to meet their priorities”.

Despite the contract, BlackBerry is not the most widely used phone in the U.S. government. The U.S. Senate officially “ditched” BlackBerry in 2016, but allows staff to continue using specific devices per an official memo. In fact, BlackBerry has stopped making its own handheld devices and has shifted to business software and other solutions like AtHoc, apparently used by 70% of federal employees, including those at DoD, DHS, VA, DoE, DoA, PFPA, FEMA, IRS, and the TSA. For the majority of government employees, the most commonly used phones are now Apple and Samsung products.

With regards to China’s government, specific details about mobile phone adoption are not readily available. Simply looking at Huawei’s market share in China, one might safely assume that their devices are favored by at least some in the Chinese government. While it has long been rumored that Huawei has a very direct and complicated relationship with its government, a view supported by both Vietnamese academic research and U.S. government research, Huawei says its relationship with the government is “no different” than that of any other company in China.

The U.S. government officially uses a mix of BlackBerry, Apple, and Samsung (Android), meaning that there are three major vendors and three major operating systems. For the Chinese government, apparently there is no officially sanctioned device, but it is very likely Huawei (formerly Android, but moving to Harmony OS / Hóngméng in 2020) and ZTE (Android) phones are heavily used. Looking at the last three calendar years, here is a comparison between the vendors to see how many vulnerabilities have been posted:

With these numbers, it may seem like BlackBerry represents more risk. However, since BlackBerry devices share the same vulnerabilities as any other Android device, and BlackBerry discloses vulnerabilities in the applications it ships, that number can run higher. The same can be said for any other Android phone that ships with vulnerable packaged apps and components, so the 1,338 Android vulnerabilities are not a full representation for other devices (e.g. Samsung, Huawei, ZTE). We must also remind readers that comparing open source software such as Android to closed source software such as BlackBerry OS and Apple’s iOS can introduce bias in disclosure numbers. Another aspect to consider is that the number of devices in use may influence how many people are actually performing security research on them.

Ultimately, this means neither the U.S. nor China can justify banning devices based on phone vulnerability history alone. Trying to state that one vendor is more “vulnerable” than the other using currently available vulnerability data alone requires so many disclaimers that the end result loses its potency.

Routers & TelCom: Huawei vs. Cisco et al

The second major aspect of concerns over technology from one country being pervasive in another is that of access. Everyone from the carriers to end users expects the equipment to function seamlessly, giving us access to the internet and mobile service. That service is built on a complex infrastructure of telecommunications (telecoms) hardware and software produced by companies such as Huawei, Cisco, Fujitsu, Nokia, and Ericsson. The telecom hardware includes routers, base transceiver stations, fiber optical networks, satellites, and a lot more. As of 2017, Chinese companies produced the most telecom equipment in the world, about 14% more than the United States.

Looking at these vendors for the last four calendar years, we get another lesson in how significant bias is introduced into vulnerability statistics by disclosure practices. Cisco had 2,227 vulnerabilities in that time. Compared to Huawei with only 813, one might conclude that Cisco’s software is inherently riskier. But compare Cisco with the three other companies. Fujitsu enjoys 79% of the market share by revenue, yet only had 24 vulnerabilities in that time frame. Going off that logic, can we conclude that Fujitsu is the most secure?

Consider that of Fujitsu’s 24 vulnerabilities, only three are in their own products, and one of those is in a keyboard. The other 21 vulnerabilities are in third-party software or hardware (e.g. Intel processors). Cisco, on the other hand, has an incredible number of reported vulnerabilities, but rarely publishes that it is affected by vulnerabilities in, for example, OpenSSL or Intel processors, despite using those technologies in some of its devices.

Both Cisco and Fujitsu maintain contact pages for reporting security vulnerabilities, have formal PSIRT teams to respond to vulnerability reports, and publish security advisories. Despite this, their public disclosure histories are about as opposite as you can find. We know for a fact that both companies use hundreds of third-party libraries in their code, yet neither publishes when third-party vulnerabilities affect their software. Based on our extensive history of tracking vulnerabilities, we are quite certain that Fujitsu products have, or have had, more vulnerabilities than officially disclosed. Any notion that Fujitsu (24) is a one-off can be dismissed when looking at Nokia (11) and Ericsson (8) for the same period. That suggests Cisco and Huawei are the outliers.

We can apply this same scrutiny to Huawei, which has only 813 vulnerabilities despite its large market share and its considerable transparency when it comes to third-party vulnerabilities. In the world of vulnerability research, access to software and equipment is of course essential. Some may argue that Huawei equipment isn’t readily available to many researchers, and that might be true for U.S.-based researchers. But the last decade has produced an incredible number of extremely talented security researchers in China, who would presumably have more access. If China does move to restrict vulnerability disclosure, that will certainly be something to consider going forward; however, that plan is still preliminary and has not been implemented.

Conclusion: Overkill

You need comprehensive, detailed, and actionable data in order to make informed decisions. Following this mantra, we are comfortable in assessing that with the currently available vulnerability data, a hard stance condemning either side is not possible. As much as we would like it to be, comparing vendors’ vulnerability counts cannot be a panacea.

That being said, does this mean that both the U.S. and Chinese governments are banning each other’s products solely for political posturing, or is it actually an informed decision? As we have seen, it may be easy to arrive at a conclusion when looking at the data. But as informed citizens, we all need to be aware of the disclaimers and hidden perspectives that the data may not overtly reveal. The answer is not as simple as “well, this has more vulnerabilities than that”.

Without concrete evidence of backdoors in Huawei products, the concern is understandable, but a total ban is overkill and may have far-reaching unintended consequences. As the “war on tech” has raged on, both the U.S. and China have suffered casualties.

Perlroth, Miller, and the First Remote iPhone Vuln

In what is sure to be my last blog (?!) born out of reading “This Is How They Tell Me The World Ends” by Nicole Perlroth, this article is a quick dive into a single paragraph containing one sentence with an alleged fact about vulnerability history. As a self-described Vulnerability Historian, this is of course of interest to me, especially if the statement is questionable. From page 63 of the book, here is the paragraph for full context, with the relevant bits in bold:

But that’s not how he would be remembered. One month after unleashing his white paper on the world, Charlie [Miller] made an even bigger name for himself with the first remote hack of an iPhone. The conventional wisdom had always been that the iPhone – with its sleek design and closely held code – was more secure than the alternatives. But Charlie blew a hole right through that theory. He demonstrated before an audience of hundreds how easily he could remotely control anyone’s iPhone simply by steering their browser to a malicious website he created.

With that, we’ll examine three components of this claim:

  1. Was the vulnerability remote?
  2. Was the vulnerability in an iPhone?
  3. Was Miller the first?

Before jumping to conclusions on those answers, there’s a twist or two! If you’re already grumbling over me being “wordy” you can scroll down to the end for the cliff notes TL;DR and miss the rabbit hole adventure. And also thank me for not posting each section separately, teasing it out and making you wait two weeks for an answer.

Was it Remote?

Perlroth cites the quoted section above from a 2007 article by John Schwartz titled “iPhone Flaw Lets Hackers Take Over, Security Firm Says”. To make sure we understand the context, here is the relevant passage from that article, with an important bit highlighted:

Once he was there, the site injected a bit of code into the iPhone that then took over the phone. The phone promptly followed instructions to transmit a set of files to the attacking computer that included recent text messages — including one that had been sent to the reporter’s cellphone moments before — as well as telephone contacts and email addresses. “We can get any file we want,” he said. Potentially, he added, the attack could be used to program the phone to make calls, running up large bills or even turning it into a portable bugging device.

For clarity, and to show this was widely reported, we see from Farhad Manjoo of Salon, in his article “Security researchers find a dangerous iPhone flaw”, that the attack vector is stated more clearly:

The hack — first reported by John Schwartz in Monday’s New York Times — can be activated through a malicious Web site, a Wi-Fi access point or a link sent to the phone through e-mail or a text message. After it’s activated, an attacker can make the phone transmit files or passwords, run up wireless services or even record audio and relay it back to the attacker.

The attack vector is so important because it speaks to the first part of the claim, in which Perlroth says it was the “first remote hack”. In the context of vulnerabilities, remote means a vulnerability can be exploited remotely without user interaction from the victim. If the exploit requires the victim to perform an action of any kind, including clicking a link, it is a user-assisted or context-dependent attack. While that is still a serious attack, since we know the bar for clicking a link is low, it is important to make this distinction. Why? Let’s start with risk scoring, using Remote Code Execution (RCE) versus Arbitrary Code Execution (ACE) for reference.

Using the Common Vulnerability Scoring System (CVSS), an industry standard for better or worse, we get four sets of scores to look at. First, understand that many organizations use a three-tier “stoplight” system for general risk severity (i.e. low, medium, high) or a five-tier system that adds ‘informational’ and ‘critical’ ratings. The five-tier system breaks down as 0.0 (informational), 0.1 – 3.9 (low), 4.0 – 6.9 (medium), 7.0 – 8.9 (high), and 9.0 – 10.0 (critical). For organizations that prioritize at this higher level first, focusing on critical before moving to high-risk, the difference between an 8.9 and a 9.0 may mean a lot. So let’s compare an RCE versus an ACE under both CVSS version 2 and version 3, which are the same in spirit but different in implementation:
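Here is a minimal sketch in Python; the vectors shown are the typical “network-exploitable, full impact” baselines for each class of issue, not the official scores of any particular CVE, and the tier() helper simply implements the five-tier system described above:

    # A sketch comparing typical RCE and user-assisted ACE scores under
    # CVSSv2 and CVSSv3. Vectors are illustrative baselines, not official
    # scores for any specific vulnerability.

    def tier(score: float) -> str:
        """Map a CVSS score to the five-tier severity system."""
        if score == 0.0:
            return "informational"
        if score <= 3.9:
            return "low"
        if score <= 6.9:
            return "medium"
        if score <= 8.9:
            return "high"
        return "critical"

    examples = {
        "RCE (CVSSv2)": ("AV:N/AC:L/Au:N/C:C/I:C/A:C", 10.0),
        "RCE (CVSSv3)": ("AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H", 9.8),
        "ACE (CVSSv2)": ("AV:N/AC:M/Au:N/C:C/I:C/A:C", 9.3),
        "ACE (CVSSv3)": ("AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H", 8.8),
    }

    for label, (vector, score) in examples.items():
        print(f"{label:13} {vector:38} {score:>4} -> {tier(score)}")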

As we see, arbitrary code execution under CVSSv3 scores an 8.8, which is only “high” risk, while under CVSSv2 it is “critical”. Compare that to remote code execution, which is “critical” under both scoring systems. So the distinction between remote and user-assisted is important for both severity and factual accuracy. Jumping back to the specifics of the attack:

The hack — first reported by John Schwartz in Monday’s New York Times — can be activated through a malicious Web site, a Wi-Fi access point or a link sent to the phone through e-mail or a text message.

This is clearly an arbitrary code execution situation, as it requires a victim to visit a malicious web page in some manner. That is a distinction Charlie Miller has made himself many times over the years. This is not a remote vulnerability. At this point it would be more accurate to say “first user-assisted code execution vulnerability in an iPhone”. That’s a bit different, huh?

Was the vulnerability in an iPhone?

The simple answer is, of course, “yes”. But it’s more complicated than that, as we’ll see, along with why that matters. When attributing a vulnerability to a given device like an iPhone, we should note whether the vulnerable code is in Apple’s own iPhone code or in a third-party library used by the device. This distinction starts us down a rabbit hole.

First, we’ll reference the vulnerability in question, CVE-2007-3944, which was cited in the iPhone v1.0.1, Mac OS X Security Update 2007-007, and Safari 3 Beta Update 3.0.3 advisories:

Safari

CVE-ID: CVE-2007-3944

Available for: iPhone v1.0

Impact: Viewing a maliciously crafted web page may lead to arbitrary code execution

Description: Heap buffer overflows exist in the Perl Compatible Regular Expressions (PCRE) library used by the JavaScript engine in Safari. By enticing a user to visit a maliciously crafted web page, an attacker may trigger the issue, which may lead to arbitrary code execution. This update addresses the issue by performing additional validation of JavaScript regular expressions. Credit to Charlie Miller and Jake Honoroff of Independent Security Evaluators for reporting these issues.

How do we know it was this vulnerability and not a subsequent one? Perlroth says it came one month after Miller’s paper, “The Legitimate Vulnerability Market”, from May 2007. Miller and Honoroff’s vulnerability was shared with Apple on July 17 and publicly disclosed on July 19. That is close enough to a month, and the next iPhone update was 1.1.1 in September, which did not credit Miller. Also notice that while Perlroth credits only Charlie Miller, the advisory additionally credits a second individual, Jake Honoroff.

We can see that the first two advisories attribute the vulnerability to code in Safari, while the Safari advisory attributes it to WebKit, an open-source web browser engine used by Apple Safari, Google Chrome (at the time), and other browsers. But the advisory text tells us the issue is actually in Perl Compatible Regular Expressions (PCRE), a library used within a library (WebKit), used within Safari, used within the iPhone. At this point it would be more accurate to say “first user-assisted code execution vulnerability in a transitive dependency used by the iPhone”. That’s quite different, huh?
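To picture the nesting just described, here is an informal sketch of the layering based on the advisory’s wording (not an official Apple component diagram):

    iPhone v1.0
      └── Safari (Apple's browser)
            └── WebKit (open-source rendering engine)
                  └── JavaScript engine
                        └── PCRE (third-party regex library, where CVE-2007-3944 lives)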

We need to go further down the rabbit hole, though. Since the vulnerability is in WebKit, which existed before the iPhone and its first security patch, we need to consider whether any prior WebKit vulnerabilities might have impacted the iPhone and simply weren’t reported as such. We know iPhone development began in 2004 and the first release was June 29, 2007. We don’t know what that development was like, specifically how often Apple pulled in upstream WebKit code. In theory that gives us a 3.5-year window, but I think it is safe to say the developers pulled in code more often than that. There are at least two WebKit exploits from 2006; only one, disclosed on November 14, is an ACE. I suspect that was patched well before the iPhone’s release, since it was patched in Mac OS X at that time.

Next we need to consider whether other Safari vulnerabilities might have impacted the iPhone. One jumps out quickly: an ACE in Safari patched on June 12, but it only impacts installs on Windows. Next we have a vague disclosure on June 11, 2007 about “ten flaws” in the SVG parsing engine that weren’t reported to Apple (CVE-2007-3718). These very well could represent vulnerabilities that impacted the iPhone; we simply don’t know. There were two more ACE vulnerabilities reported in Safari with no indication they were fixed, just reported (CVE-2007-3187). These could very well affect the iPhone as well.

Finally, we have to consider whether vulnerabilities in other third-party libraries used in the iPhone affect it. Apple doesn’t publish a list of those libraries, but based on prior disclosures affecting Mac OS X, which could also affect the iPhone, they include expat, troff, libxslt, ICU / ICU4C, libXfont, libxml2, glibc, and some FreeBSD BDF font handling code. That’s a lot of code we don’t know about, which is certainly a concern.

Did Miller’s vulnerability affect the iPhone? Yes, but at this point it would be more accurate to say “first publicly disclosed user-assisted code execution vulnerability in a third-party library used by the iPhone after commercial sales began”. That’s even more specific, huh?

Was Miller the First?

Since the iPhone advisory above covers the first security update for the device, it represents the first batch of vulnerabilities patched after public release. The next thing to look at is the other vulnerabilities patched alongside it; are any of them ACE or RCE? Yes, one of the four other vulnerabilities is also an ACE (CVE-2007-2399). It is described as:

Impact: Visiting a maliciously crafted website may lead to an unexpected application termination or arbitrary code execution

Description: An invalid type conversion when rendering frame sets could lead to memory corruption. Visiting a maliciously crafted web page may lead to an unexpected application termination or arbitrary code execution. Credit to Rhys Kidd of Westnet for reporting this issue.

So there are two ACE vulnerabilities fixed in the same advisory. How did Schwartz at the New York Times know that Miller and Honoroff’s vulnerability was first? Because Miller likely told him so. In the article, Schwartz quotes Lynn Fox from Apple, so they spoke, but I suspect Schwartz did not verify that information and Fox did not volunteer it. From the NYT article:

The researchers, working for Independent Security Evaluators, a company that tests its clients’ computer security by hacking it, said that they could take control of iPhones through a WiFi connection or by tricking users into going to a Web site that contains malicious code. The hack, the first reported, allowed them to tap the wealth of personal information the phones contain.

[..]

A spokeswoman for Apple, Lynn Fox, said, “Apple takes security very seriously and has a great track record of addressing potential vulnerabilities before they can affect users.”

Per that article and other sources, we know that Independent Security Evaluators (ISE) reported the vulnerability to Apple on July 17. Looking in VulnDB, I can see that Kidd reported his find to Apple on June 13, over a month before ISE did, and it is in the third-party library WebKit itself rather than in a transitive dependency of WebKit. So that settles it, right? Not quite.

We know that between these two vulnerabilities, Miller was not first. But we also know that neither was remote code execution. Moving past the iPhone 1.0.1 update, we have to go through each subsequent release to figure out whether any of the fixed vulnerabilities qualify. Fortunately, we only have to go one more version, to 1.1.1, before we have our first candidate. On September 27, 2007, that update fixed a vulnerability in Bluetooth functionality that could be exploited remotely:

Bluetooth

CVE-ID:  CVE-2007-3753

Impact:  An attacker within Bluetooth range may be able to cause an unexpected application termination or arbitrary code execution

Description:  An input validation issue exists in the iPhone’s Bluetooth server. By sending maliciously-crafted Service Discovery Protocol (SDP) packets to an iPhone with Bluetooth enabled, an attacker may trigger the issue, which may lead to unexpected application termination or arbitrary code execution. This update addresses the issue by performing additional validation of SDP packets. Credit to Kevin Mahaffey and John Hering of Flexilis Mobile Security for reporting this issue.

This technically qualifies as the first remote vulnerability in the iPhone! However, notice that it has to be exploited from within Bluetooth range, which severely limits exploitation. In such cases CVSS scores the attack vector as AV:A, meaning adjacent network, dropping the score a good bit. While this does fit the bill, meaning Kevin and John deserve serious kudos, it isn’t remote in the sense most people associate with the term. So let’s keep going to find the first fully remote vulnerability in an iPhone. We pass the releases for 1.1.2, 1.1.3, 2.0, and 2.1 to find the next of interest in 2.2, on November 20, 2008:

ImageIO

CVE-ID:  CVE-2008-2327

Impact:  Viewing a maliciously crafted TIFF image may lead to an unexpected application termination or arbitrary code execution 

Description:  Multiple uninitialized memory access issues exist in libTIFF’s handling of LZW-encoded TIFF images. Viewing a maliciously crafted TIFF image may lead to an unexpected application termination or arbitrary code execution. This update addresses the issue through proper memory initialization and additional validation of TIFF Images.

ImageIO

CVE-ID:  CVE-2008-1586

Impact:  Viewing a maliciously crafted TIFF image may lead to an unexpected device reset

Description:  A memory exhaustion issue exists in the handling of TIFF images. Viewing a maliciously crafted TIFF image may lead to an unexpected device reset. This update addresses the issue by limiting the amount of memory allocated to open a TIFF image. Credit to Sergio ‘shadown’ Alvarez of n.runs AG for reporting this issue.

These two vulnerabilities are interesting because there is potential for a remote attack, but the advisory doesn’t make that clear in its wording, and Apple doesn’t provide CVSS scores. Since an image can be delivered a wide variety of ways, including via SMS, the fact that these occur in the ImageIO subsystem is of note. The Apple Developer documentation backs up this thought:

The Image I/O programming interface framework allows applications to read and write most image file formats. This framework offers high efficiency, color management, and access to image metadata.

A bit light on details, but this suggests that an SMS messaging app, or any other app that remotely receives and processes content, could be an avenue for remote code execution. Based on a chat with a colleague, it would require the victim opening the SMS app at the very least, which is a low bar for exploitation, but he does not think the iPhone SMS app renders images as a preview without clicking into a specific message. A low bar, but still some user interaction. We see the exact same thing for CVE-2008-3623 and CVE-2009-0040 in the iPhone 3.0 update on June 17, 2009. It is interesting to note that we’re now two years after the iPhone’s release and still without a fully remote vulnerability with no limitations or caveats.

CoreGraphics

CVE-ID:  CVE-2008-3623

Impact:  Viewing a maliciously crafted image may lead to an unexpected application termination or arbitrary code execution

ImageIO

CVE-ID:  CVE-2009-0040

Impact:  Processing a maliciously crafted PNG image may lead to an unexpected application termination or arbitrary code execution

This time, one of them is in CoreGraphics, which does not seem as promising as ImageIO based on the documentation. Moving on, we land on the iPhone 3.0.1 update, released July 31, 2009, and see:

CoreTelephony

CVE-ID:  CVE-2009-2204

Impact:  Receiving a maliciously crafted SMS message may lead to an unexpected service interruption or arbitrary code execution

Description:  A memory corruption issue exists in the decoding of SMS messages. Receiving a maliciously crafted SMS message may lead to an unexpected service interruption or arbitrary code execution. This update addresses the issue through improved error handling. Credit to Charlie Miller of Independent Security Evaluators, and Collin Mulliner of Fraunhofer SIT for reporting this issue.

This has all the makings of what we’re after. While the advisory says “arbitrary code execution”, that is a qualifier to “decoding of SMS messages”; receiving the message triggers it, as the payload is processed regardless of whether the message is specifically opened. But notice that the same issue was also found by Collin Mulliner. So who found or reported it to Apple first? That is what ultimately settles this question. When an advisory lists two people with two different affiliations, it typically means mutual discovery, or a “vulnerability collision”.

I reached out to a contact at Apple and asked if they could figure out which of the two sent the email first to settle this debate. Lo and behold, I was told that it was a single mail sent June 18, 2009, with both listed as creditees! That is backed up by a presentation at Black Hat USA 2009 titled “Fuzzing the Phone in your Phone”, given by both individuals.

Conclusion (tl;dr)

We began the blog with a quote from Nicole Perlroth’s book, “This Is How They Tell Me The World Ends”, in which she says “One month after unleashing his white paper on the world, Charlie [Miller] made an even bigger name for himself with the first remote hack of an iPhone.” The question is whether that quote is accurate, understanding she is citing CVE-2007-3944. The answer is: it’s complicated. Here are the facts as best I can tell:

  1. Was it remote? No, the cited vulnerability is a user-assisted issue and cannot be exploited remotely without the victim clicking something.
  2. Was the vulnerability in an iPhone? Kind of. The vulnerability was in the Perl Compatible Regular Expressions (PCRE) library used by the JavaScript engine in Safari, bundled with the iPhone. Yes, it affected the device; no, the vulnerability wasn’t in Apple’s code, let alone the iPhone’s native code base.
  3. Was Miller the first? It’s complicated.
    1. If we go strictly by CVE-2007-3944, then no, Miller was not the first. Rhys Kidd disclosed a user-assisted vulnerability in WebKit, the rendering engine in Safari, over one month earlier. Further, Jake Honoroff co-disclosed the vulnerability Miller found.
    2. If we go by remote without interaction but limited by attacker location, then no; Kevin Mahaffey and John Hering were first with CVE-2007-3753, which must be exploited over Bluetooth.
    3. If we go by the first actually remote vulnerability, CVE-2009-2204, around two years later, then yes, but Miller co-discovered it with Collin Mulliner and both earned that distinction.

In short: no, kind of, no, no, and yes, but. So Perlroth is incorrect in her assertion, and very likely included it after talking to Miller for her book. The problem is that in the context of the 2007 vulnerability, Miller was wrong, and Perlroth et al did not properly fact-check that detail, instead relying on a colleague’s article as gospel. We don’t know if Miller mentioned Honoroff in his discussions with Perlroth, or if her text was written outside the scope of that discussion, but the detail was trivial to find had the claim been fact-checked beyond her colleague’s article, which also omitted it.

Regardless, I believe we have a definitive answer as to that one line in the book. It took eight pages to get to this answer and I apologize for that (not really).

An Analysis of Google’s Project Zero and Alleged Vendor Bias

[This was originally published on RiskBasedSecurity.com.]


In July 2014, Google announced a new initiative called Project Zero. The basic premise of the project was that Google invests heavily in its own security and had, for quite some time, also tasked its researchers with part-time work improving the security of other high-profile products.

Project Zero is Google’s investment in a well-staffed team that they hoped would get the ball rolling, as they felt that people “should be able to use the web without fear that a criminal or state-sponsored actor is exploiting software bugs to infect your computer, steal secrets or monitor your communications.” Recently, questions have been raised wondering whether Google’s Project Zero is really being unfair towards, and targeting, Microsoft.

From reading several news articles as well as the blog from Microsoft’s Chris Betz, who runs their product security response, you would definitely think that this is the case. Since Project Zero was announced, we at Risk Based Security have been following it closely, as we were interested in how it might influence vulnerability research. More specifically, we were curious whether the project would help improve the code that many rely on, such as third-party libraries, and whether it might ultimately impact the bug bounty economy.

With all of the recent conversations about Project Zero, we decided to publish some of our own analysis, attempting to use data to see if there is a bias towards targeting Microsoft, or any other company. One of the great things about Google and Project Zero is that they are generally quite transparent with their research and try to clearly communicate their intentions about disclosure as well. But have they really been clear with their disclosure policy?

We know there is a company-wide 60-day disclosure guideline that Google published back in July 2010. But what about the policy for Project Zero? We aren’t aware of a clear policy published for the project, or any sort of blog post on the requirements (nothing on their main blog, or the wiki in their tracker). A Wired article last year mentions that Google staff stated:

“… they’ll alert the company responsible for a fix and give it between 60 and 90 days to issue a patch before publicly revealing the flaw on the Google Project Zero blog. In cases where the bug is being actively exploited by hackers, Google says it will move much faster, pressuring the vulnerable software’s creator to fix the problem or find a workaround in as little as seven days.”

Based on reading the individual tickets, you can see that they have been communicating the 90-day policy since ID 44, opened on Jul 9, 2014, and since ID 89 they have been very consistent about telling all vendors about the 90-day deadline, automatically including the boilerplate text:

“This bug is subject to a 90 day disclosure deadline. If 90 days elapse without a broadly available patch, then the bug report will automatically become visible to the public.”
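The mechanics of that deadline are simple enough to sketch (a minimal illustration in Python; the function and its name are ours, not Google’s, and the 7-day case reflects the Wired quote above about actively exploited bugs):

    from datetime import date, timedelta

    # Sketch of the deadline logic described above: 90 days from vendor
    # notification, or as little as 7 days if actively exploited.
    def disclosure_deadline(reported: date, actively_exploited: bool = False) -> date:
        window = timedelta(days=7 if actively_exploited else 90)
        return reported + window

    # Example: ID 44 was opened on Jul 9, 2014, so an unpatched bug would
    # have become public around Oct 7, 2014.
    print(disclosure_deadline(date(2014, 7, 9)))  # 2014-10-07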

In the initial announcement of Project Zero, there was no mention of a hard 90-day disclosure timeline, but it does mention wanting to get things fixed in a “reasonable time”:

“Once the bug report becomes public (typically once a patch is available), you’ll be able to monitor vendor time-to-fix performance, see any discussion about exploitability, and view historical exploits and crash traces. We also commit to sending bug reports to vendors in as close to real-time as possible, and to working with them to get fixes to users in a reasonable time.”

Project Zero posted a blog update in November 2014, highlighting many bugs that were fixed well within the 90-day disclosure deadline and noting that this represented a solid security response by the teams responsible for Adobe Flash, Microsoft Office, Internet Explorer, and Microsoft Windows. A more important point to mention is the following:

“We calibrate our deadline to balance two important factors: the right of end users to know about a risk versus giving vendors a fair chance to cleanly fix a bug under a reasonable embargo. Given the ratio of deadline hits to misses, we think we’ve calibrated correctly for the current state of the industry but we do hope to tighten the deadline in the future.”

It seems they could benefit from posting a formal policy at some point to clarify their disclosure process; regardless, they have been making vendors well aware of their expected timelines. What we can say with certainty is that publishing details of their research gives us better insight into their work and focus, as well as a limited view of their interaction with vendors.

Show Me The Data

Project Zero has a tracker in which they show their work, in some cases publishing very detailed information so we can see their process. As of January 25, they have assigned 206 IDs in the tracker. One aspect that was not clear is why so many IDs, especially earlier assignments, were restricted, meaning they were not open for public viewing. We speculated that these were IDs Google was not sharing for some specific reason, that they had been abandoned, or that they were still being resolved with the affected vendors.

Considering they still have “Issue 1: This is a test” public, it didn’t seem that they were cleaning out older entries that are no longer relevant. We decided to reach out to Chris Evans, who runs the Project Zero team, asking for clarification on why so many entries were restricted, and he responded very quickly, offering some great insight. Chris shared that most of the early IDs shown as restricted were due to a batch of spam reports received when the project was officially launched. He mentioned a few other causes of early bugs not being made public:

  • The first few IDs were not filed as well as the project team would have liked, so they filed a new ID and marked the earlier issues as invalid.
  • A couple more bugs were marked as duplicates because the same report was unfortunately filed twice.
  • A few more issues were valid but just needed to be made public. In fact, our mail prompted him to open up one (ID #125), which he described as “a very interesting Flash bug that has been fixed several days ago”. We thank Chris for opening it up tonight, even though it made us redo our statistics!
  • And finally, as we expected, every ID from 144 onwards is still less than 90 days old, which aligns with their stated 90-day disclosure timeline.

When looking at the 206 Project Zero IDs, the breakdown of tickets is as follows:

ID Status          # of Vulnerabilities
Total Fixed        83
Total WontFix      4
Total Restricted   73
Total Invalid      26
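As a quick sanity check, here is a minimal sketch (Python, using only the counts above) of how those statuses break down across the 206 assigned IDs:

    # Breakdown of the 206 Project Zero tracker IDs using the counts above.
    counts = {"Fixed": 83, "WontFix": 4, "Restricted": 73, "Invalid": 26}
    total_ids = 206

    for status, n in counts.items():
        print(f"{status:10} {n:3}  ({n / total_ids:6.1%})")

    # IDs not covered by these four statuses (e.g. still within their
    # 90-day window at the time of writing):
    print(f"{'Other':10} {total_ids - sum(counts.values()):3}")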

The “WontFix” designation means that the vendor does not consider the reported issue to be a security problem, or that it does not cross privilege boundaries. In terms of the severity and organizational impact of the vulnerabilities discovered, we took a quick look at the VulnDB-assigned CVSSv2 score for each ID.

[Chart: VulnDB CVSSv2 score average for the Project Zero vulnerabilities]

For those that don’t like seeing just a CVSSv2 average (there are currently still a lot of issues with this version, and a new version is hopefully around the corner), we pulled more data from VulnDB to help us understand exploit availability.

Note that the high percentage of Proof of Concept (PoC) exploits is largely due to Google researchers developing them to demonstrate the vulnerability, which typically makes it easier for the vendor to diagnose and understand the problem.

Microsoft

As mentioned, there are theories being voiced that Google is specifically targeting Microsoft and not practicing Coordinated Vulnerability Disclosure (CVD), putting customers at risk. A post from ErrataRob reminds us that Microsoft seems to enjoy a double standard in its views on vulnerability disclosure. Additionally, a response to Betz’s article published on the OSVDB blog notes that Microsoft’s latest plea for CVD is as much propaganda as sincerity. It is important to understand where this perception comes from.

Articles like the one from PC World that talk about a “third unpatched Windows zero-day” being released within a month are polarizing in the context of the vulnerability disclosure debate. They also make it sound like this is a straight rivalry, with one party intent on hurting the other. From what we can see by analyzing the data, this doesn’t appear to be the case. The disclosures are a byproduct of Google’s stated policy of disclosing after 90 days, fix or no fix. The use of “0day” here denotes that Google published full details of a vulnerability (after ~90 days) without a vendor-supplied fix available.

Statistic                  Metric
Total 0Day                 19 Vulnerabilities
Average Disclosure Time    59.68 Days

It is critical to remember that not all vulnerabilities are created equal, even within the same organization. While Microsoft may be able to patch a particular product such as Office or Internet Explorer quickly, they may not be able to do the same for Windows, depending on the number of platforms and versions affected. The life cycle of a privately disclosed vulnerability varies wildly, as it has to be confirmed, patched, and then heavily tested.

Organizations want, and really need, Microsoft to test those patches adequately before release, or they run into updates that do more harm than good. This supposed “vulnerability disclosure war”, as some are calling it, between two companies also mixes one Google employee’s spare-time activity with his day job. No matter how many times someone may say that their “thoughts and tweets do not represent their employer”, it can be a very fine line, specifically when you are a researcher working for a company whose strong focus is discovering vulnerabilities in other companies’ products. It can lead to questions as to what constitutes the employer’s time versus the researcher’s own, and whether ownership is decided when it is most convenient. This is especially true since that researcher’s day job and hobby are heavily intertwined.

In doing further analysis on the Google Project Zero tracker, we find that they have disclosed 20 vulnerabilities in Microsoft products as of this blog. Of those 20, Microsoft labeled 4 as ‘WontFix’ (they don’t consider them to be security issues, and Google actually agrees with most), 13 were fixed within 90 days, and 3 were released without a fix because the 90-day deadline was exceeded (the dreaded 0day!). On the surface, considering ~81% were handled in a coordinated fashion, Microsoft actually seems to be doing pretty well. Not only have they gone through the process with Google many times already, they are obviously well aware of the 90-day disclosure deadline.
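That ~81% figure appears to correspond to the 16 reports Microsoft treated as valid security issues; a quick check (our interpretation of the numbers above, not a stated methodology):

    # Quick check of the Microsoft numbers cited above.
    total = 20
    wontfix = 4     # not considered security issues by Microsoft
    fixed = 13      # patched within the 90-day window
    zero_day = 3    # details published without a fix
    assert wontfix + fixed + zero_day == total

    # Counting only the 16 reports treated as valid security issues:
    print(f"{fixed / (total - wontfix):.0%} fixed in time")  # -> 81%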

While it is still debatable whether Google should provide extra time to companies when requested, it has become clear that this is not an option in the eyes of Project Zero. Since Microsoft now knows that Google is basically not flexible on that deadline, the onus is solely on them moving forward. If Microsoft does not want a 0day disclosed by Project Zero, they are going to have to improve their response timeline and potentially dedicate more resources to a given vulnerability than they did for the three that were not fixed in time. It would be hard for anyone at Microsoft to argue that producing secure products isn’t critical, or that they are short on resources to correct known vulnerabilities that may impact their customers. But the real question begs to be asked: is Microsoft all alone in this supposed ‘war’ and being unfairly picked on by Google? Again, let’s take a quick look at the data to help answer the question:

Vendor         # of Vulnerabilities
Apple          39
Adobe          36
Microsoft      20
Linux Kernel   7
LibreSSL       2
glibc          1
PHP            1
Red Hat        1

The data suggests a simple answer: of course Microsoft is not being solely targeted. In fact, just a few days ago, Project Zero hit the 90-day deadline on a trio of Apple Mac OS X issues that went public without a fix. Ars Technica covered it in the same style as other articles covered the “0day” dropping on Microsoft. We found it very interesting that there is no mention in that article that these aren’t the first Google Project Zero “0day” dropped against Apple. In fact, between July 4, 2014 and September 30, 2014, Google released eleven vulnerabilities in Apple products without a patch. So Apple and Microsoft have each had “0day” issues due to missing the timeline.

We are also able to see that Project Zero has sent a substantial number of vulnerability reports to the Adobe Flash team, who have hit all the deadlines, which is very impressive. That was the initial reason that prompted Brian Martin, our Director of Vulnerability Intelligence, to dig even further than we had previously into all of the public Project Zero vulnerabilities. The analysis focused on the vendor, vendor notification date, disclosure date (the vulnerability was disclosed by Google for some and by the vendor for others), CVSSv2 score, and exploit availability.
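For context, here is a minimal sketch of how a metric like the 59.68-day average above would be derived (the date pairs below are hypothetical placeholders; the real inputs come from the Project Zero tracker and VulnDB):

    from datetime import date

    # Hypothetical (notified, disclosed) date pairs; real values would come
    # from the Project Zero tracker and VulnDB.
    reports = [
        (date(2014, 7, 9), date(2014, 9, 15)),
        (date(2014, 8, 1), date(2014, 9, 30)),
        (date(2014, 10, 2), date(2014, 12, 20)),
    ]

    days_to_disclose = [(disclosed - notified).days for notified, disclosed in reports]
    print(f"Average disclosure time: {sum(days_to_disclose) / len(days_to_disclose):.2f} days")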

Conclusion 

Our analysis of the available information removes some of the emotional arguments and lets one make an educated assessment, which appears to support that Google is not singling out Microsoft with its Project Zero efforts. Furthermore, it shows that Apple has received the most attention, with Adobe close behind, both of whom have received considerably more attention than Microsoft. It just appears that, at this point, Microsoft is the only vendor sometimes failing to meet the 90-day disclosure deadline and then publicly criticizing Google for their work (or, as some may say, for providing free security work to improve their products and ultimately ensure their customers are better protected).

We sincerely hope that cooler heads will prevail as we all work together to better secure our organizations and protect consumers’ confidential information. Google’s work on Project Zero is impressive and clearly making a positive impact on the security of high-profile products. We hope companies will continue to work with Google and understand the importance of fixing issues as soon as possible, to reduce the exposure time customers face. But as with all policies, there are times when some level of flexibility makes sense when the purpose of the project is to improve security.

If you look at companies that Google has recently acquired, such as Dropcam, even their disclosure policy states, “We ask that you give us a reasonable amount of time to respond to your report before making any information public”. Of course, “reasonable amount of time” has been the cornerstone of the coordinated vulnerability disclosure debate for decades and doesn’t appear to be going away anytime soon.

[Update: Since starting a dialogue with Chris Evans, 24 issues have been published in the last 15 hours. Some of the stats quoted in this blog are already considerably different (e.g. Time to disclose). Further, we are now tracking additional information with each entry in order to generate more metrics.]

January Set As ‘Month Of Apple Bugs’

[This was originally published on the OSVDB blog.]

January Set As ‘Month Of Apple Bugs’
http://www.informationweek.com/news/showArticle.jhtml;?articleID=196701178

The “Month of Apple Bugs” project, which will be similar to November’s “Month of Kernel Bugs” campaign, will be hosted by the kernel bug poster who goes by the initials “LMH,” and his partner, Kevin Finisterre, a researcher who has posted numerous Mac vulnerabilities and analyses on his own site.

More interesting this time: Landon Fuller has begun using his own blog to release unofficial patches for the MOAB vulnerabilities as they are released.