Assessing the ‘War on Tech’: Huawei vs. U.S.

[I wrote this with Curtis Kang who did a lot of work researching various aspects of this article and provided invaluable help. His research and written contributions made this article possible. It was originally intended to be published on RiskBasedSecurity.com in early 2020 but was passed over so I am publishing it here.]


In 2019, we saw the steadily growing social and policy conflicts between the United States and China reach a boiling point. China has been a major talking point of President Trump’s platform since early in his campaign. However, it wasn’t until last year that we saw active policies enforcing a so-called “war on tech” between the U.S. and major Chinese companies like Huawei and ZTE, as well as attempts to “sidestep” those policies. We wanted to examine this from a data perspective, looking at the vulnerabilities in comparable companies on both sides.

To set the stage, it is useful to briefly revisit the U.S. vs. China timeline.

The Trade War

Since taking office in January 2017, President Trump has had a specific interest in China, stating early on that the “U.S. will be on a level playing field”. This led to several rounds of tariffs imposed against China starting in March 2018, and retaliatory tariffs imposed against the U.S. Early in 2019, there was conjecture that President Trump might use an executive order to limit some U.S. companies, such as wireless carriers, from purchasing Chinese electronic devices. That executive order was signed on May 15, 2019, citing the National Emergencies Act (50 U.S.C. 1601 et seq.), and would limit or ban purchases of “technology or services designed, developed, manufactured, or supplied, by persons owned by, controlled by, or subject to the jurisdiction or direction of a foreign adversary”.

While the executive order did not list any country or company, it was widely accepted that it was a move against Huawei in particular. The order contained interesting language, saying that the banned technology or services “poses an undue risk of sabotage” and is “an unacceptable risk” to the United States, among other wording. Technology meeting those criteria would be determined by the Secretary of Commerce, in consultation with nine other bodies “and as appropriate, the heads of other executive departments and agencies”.

On May 20, 2019, the Bureau of Industry and Security (BIS) modified the final rule and granted a Temporary General License (TGL) until August 19, 2019 for transactions regarding, among other things, “Continued Operation of Existing Networks and Equipment” and “Cybersecurity Research and Vulnerability Disclosure.” On August 19, 2019, the BIS extended the TGL by 90 days, until November 19, 2019. Outside the TGL, any request for a license to sell or transfer commodities, software, or technology to Huawei is reviewed “under a policy of presumption of denial.” In other words, the BIS provides virtually no avenue for a continued commercial relationship with Huawei after November 19, 2019.

Months later, when asked if China would retaliate, Chinese foreign ministry spokesman Geng Shuang told reporters “stay tuned.” Two weeks after that, China announced tariffs on $75 billion of U.S. products. This was followed in December with China announcing a ban on foreign technology in “all government offices and public institutions” within three years. The ban also prevented companies such as Google, Dropbox, and Facebook from being used within China. With this, the United States and China were in a new type of technology war based on the premise that the adversarial nation was producing equipment that “poses an undue risk of catastrophic effects”.

The Fear of Backdoors

Computer equipment that poses a risk in the context above typically brings to mind computer vulnerabilities: issues that, with the right knowledge, would allow one country to compromise assets in the adversary nation’s government, business, or infrastructure. Another common scenario raised by security professionals and intelligence officials is that of a backdoor: computer code planted by a party that allows them, and only them, covert remote access to the device. Some members of the U.S. intelligence community would prefer these Chinese products not be used in the technological infrastructure, saying their use “would undercut the ability of the U.S. to defend itself.”

This fear, specifically of Huawei routers from China, has been front-and-center since 2011, and a growing concern even before that. In the past, the concerns largely stemmed from each nation compromising the other’s computer networks in government and business. More recently, with the race to implement a 5G network, security issues around that technology have been heavily scrutinized. This war of technology has reminded us of 2010, when Huawei published an open letter to the U.S. government attempting to allay fears and shift public perception after a decade of suspicion. The company went so far as to request a “thorough investigation” to prove that they are “a normal commercial institution and nothing more.” This prompted eight U.S. senators to urge the White House to examine Huawei contracts and the House Intelligence Committee to investigate and publish a report on both Huawei and ZTE.

Ultimately, that report was inconclusive and stated the following – “despite hours of interviews, extensive and repeated document requests, a review of open-source information, and an open hearing with witnesses from both companies, the Committee remains unsatisfied with the level of cooperation and candor provided by each company.” Even over six years later, in 2019, Adam Segal, director of the Digital and Cyberspace Policy Program at the Council on Foreign Relations, officially stated that no one has found a backdoor in a Huawei product.

This is important to note, given the considerable scrutiny Huawei has received. In addition to their open letter in 2010, Huawei also disclosed their source code to a foreign government, something that no U.S. company has done. Despite numerous parties attempting to find, and potentially publish, an actual backdoor (including the NSA and purpose-built testing centers in the UK), none has been confirmed. The fact that the U.S. National Security Agency (NSA), with a significant budget and a vested interest in determining whether a company like Huawei is shipping backdoored systems, has not disclosed one is compelling.

Ignoring Backdoors and Looking at the Data: Is a Ban Warranted?

Given that history and perspective on the growing tech war between the U.S. and China, we at Risk Based Security wanted to look at some concrete numbers around the vulnerabilities in the companies at the center of the issue.

While much of the focus on this topic has been on fear and the threat of backdoors planted by a vendor at the behest of their government, that is not necessarily where we want to direct attention. Using a backdoor, even if it is well-hidden, would likely bring unwanted attention by giving more positive attribution to those who compromised the machine. Nation-state level hackers would have their own ways into a wide variety of vendors and devices purely based on ‘natural’ vulnerabilities in the code. They simply do not need the access, and risk, a backdoor provides. Why provide hints to the enemy that you’ve “cracked the code” when you could hide behind an existing vulnerability?

Setting aside the possibility of backdoors, the question we’re interested in is this: does one of the government-used devices pose more of a risk due to its vulnerabilities? However, we have found that the “war on tech” cannot be reduced to the classic “how many vulnerabilities are there in…” question, lest unspoken bias drastically affect the perceived meaning of the numbers. While there is no way to do a perfect one-to-one comparison of U.S. versus Chinese vendors, there are some that we can begin to compare, with disclaimers.

Phones: BlackBerry vs. Huawei / ZTE

For the general public, and based on much of the mainstream media reporting, Huawei is predominantly associated with its mobile phones. As more of our lives move to mobile, it is no surprise that those in power are concerned about the security of their phones and tablets. BlackBerry and Huawei / ZTE phones are widely viewed as the dominant devices in the U.S. and Chinese governments, respectively. BlackBerry announced a five-year deal for its latest handheld and its AtHoc software with the federal government back in July 2016, specifically the Department of Defense (DoD) Joint Emergency Mass Notification Systems (JEMNS). According to the press release, the DoD chose BlackBerry because of the “secure end-to-end mobility offering .. that [shows the] secure platform is designed to meet their priorities”.

Despite the contract, BlackBerry is not the most widely used phone in the U.S. government. The U.S. Senate officially “ditched” BlackBerry in 2016, but allows staff to continue using specific devices per an official memo. In fact, BlackBerry has stopped making its own handheld devices and has shifted to business software and other solutions like AtHoc, apparently used by 70% of federal employees, including the DoD, DHS, VA, DoE, DoA, PFPA, FEMA, IRS, and TSA. For the majority of government employees, the most commonly used phones are now Apple and Samsung products.

With regard to China’s government, specific details about mobile phone adoption are not readily available. Simply looking at Huawei’s market share in China, one might safely assume that their devices are favored by some in the Chinese government. While it has long been rumored that Huawei has a very direct and complicated relationship with its government, which is supported by both Vietnamese academic and U.S. government research, Huawei says its relationship with the government is “no different” than that of any other company in China.

The U.S. government officially uses a mix of BlackBerry, Apple, and Samsung (Android), meaning that there are three major vendors and three major operating systems. For the Chinese government, apparently there is no officially sanctioned device, but it is very likely Huawei (formerly Android, but moving to Harmony OS / Hóngméng in 2020) and ZTE (Android) phones are heavily used. Looking at the last three calendar years, here is a comparison between the vendors to see how many vulnerabilities have been posted:

With these numbers it may seem like BlackBerry represents more risk. However, since BlackBerry devices share the same vulnerabilities as any other Android device, and BlackBerry also discloses vulnerabilities in the applications it ships, its number skews higher. The same can be said for any other Android phone that ships with vulnerable bundled apps and components, so the 1,338 Android vulnerabilities are not a full representation for other devices (e.g. Samsung, Huawei, ZTE). We must also remind readers that comparing open source software such as Android to closed source software such as BlackBerry OS and Apple’s iOS can introduce bias in disclosure numbers. Another aspect to consider is that the number of devices in use may influence how many people are actually performing security research on them.

Ultimately, this means neither the U.S. nor China can justify banning devices based on phone vulnerability history alone. Trying to claim one vendor is more “vulnerable” than the other using currently available vulnerability data alone requires so many disclaimers that the end result loses its potency.

Routers & TelCom: Huawei vs. Cisco et al

The second major aspect of concerns over technology from one country being pervasive in another is that of access. Everyone from the carriers to end users expects the equipment to function seamlessly, giving us access to the internet and mobile service. That service is built on a complex infrastructure of telecommunications (telecoms) hardware and software produced by companies such as Huawei, Cisco, Fujitsu, Nokia, and Ericsson. The telecom hardware includes routers, base transceiver stations, fiber optical networks, satellites, and a lot more. As of 2017, Chinese companies produced the most telecom equipment in the world, about 14% more than the United States.

Looking at these vendors over the last four calendar years, we get another lesson in how significant bias is introduced into vulnerability statistics by disclosure practices. Cisco had 2,227 vulnerabilities in that time. Compared to Huawei with only 813, one might conclude that Cisco’s software is inherently riskier. But compare Cisco with the three other companies. Fujitsu enjoys 79% of the market share by revenue, yet had only 24 vulnerabilities in that time frame. By that logic, can we conclude that Fujitsu is the most secure?

Consider that of Fujitsu’s 24 vulnerabilities, only three are in their own products, and one of those is in a keyboard. The other 21 vulnerabilities are in third-party software or hardware (e.g. Intel processors). Cisco, on the other hand, has an incredible number of vulnerabilities reported, but rarely publishes that it is affected by vulnerabilities in, for example, OpenSSL or Intel processors, despite using those technologies in some of its devices.
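To make that disclosure bias concrete, consider how differently the same dataset reads once you separate a vendor’s own code from inherited third-party issues. A minimal sketch, assuming records are tagged with that origin; the rows here are illustrative placeholders, not real VulnDB data:

```python
from collections import Counter

# Hypothetical records: (vendor, origin) where origin is "own" for a
# flaw in the vendor's code and "third-party" for an inherited issue
# (e.g. OpenSSL, Intel). Values are illustrative, not real VulnDB data.
records = [
    ("Fujitsu", "own"), ("Fujitsu", "third-party"),
    ("Cisco", "own"), ("Cisco", "own"),
    # ... thousands more rows in a real dataset
]

total = Counter(vendor for vendor, _ in records)
native = Counter(vendor for vendor, origin in records if origin == "own")

for vendor in sorted(total):
    print(f"{vendor}: {total[vendor]} total, {native[vendor]} in own code")
```

Depending on which of those two columns you compare, the “most secure” vendor can flip entirely, which is the whole problem with raw counts.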

Both Cisco and Fujitsu maintain contact pages for reporting security vulnerabilities, have a formal PSIRT team to respond to vulnerability reports, and publish security advisories. Despite this, their public disclosure histories are about as opposite as you can find. We know for a fact that both companies use hundreds of third-party libraries in their code, yet neither publishes when third-party vulnerabilities affect their software. Based on our extensive history of tracking vulnerabilities, we are quite certain that Fujitsu products have, or have had, more vulnerabilities than officially disclosed. Any notion that Fujitsu (24) is a one-off situation can be dismissed by looking at Nokia (11) and Ericsson (8) over the same period. That suggests Cisco and Huawei are outliers.

We can apply this same scrutiny to Huawei, with only 813 vulnerabilities despite its large market share and its considerable transparency when it comes to third-party vulnerabilities. In the world of vulnerability research, access to software and equipment is essential, of course. Some may argue that Huawei equipment isn’t readily available to many researchers, and that might be true for U.S.-based researchers. But the last decade has shown an incredible number of extremely talented security researchers in China, who would presumably have more access. If one were to argue that China is looking to restrict vulnerability disclosure, that will certainly be something to consider moving forward. However, that plan is still preliminary and has not been implemented.

Conclusion: Overkill

You need comprehensive, detailed, and actionable data in order to make informed decisions. Following this mantra, we are comfortable assessing that, with the currently available vulnerability data, a hard stance condemning either side is not possible. As much as we would like it to be, a comparison of vulnerabilities between vendors cannot be a panacea.

That being said, does this mean that both the U.S. and Chinese governments are banning each other’s products solely for political posturing, or is it actually an informed decision? As we can see, it may be easy to arrive at a conclusion when looking at the data. But as informed citizens we all need to be aware of the disclaimers and hidden perspectives that the data may not overtly reveal. The answer is not so simple as “well, this has more vulnerabilities than that”.

Without concrete evidence of backdoors in Huawei products, the concern is definitely valid, but a total ban is overkill and may have far-reaching unintended consequences. As the “war on tech” has raged on, both the U.S. and China have suffered casualties.

Commentary on Trend Micro’s Linux Threat Report 2021

On August 23, 2021, Trend Micro released a report titled “Linux Threat Report 2021 1H” by Magno Logan and Pawan Kinger. The report is based on Trend Micro’s Smart Protection Network (SPN), which they call “the data lake for all detections across all Trend Micro’s products”. Basically, every security product they make that detects vulnerabilities and reports back to Trend Micro can be, and is, used in research like this, among other things. They further qualify that the report is based on additional “data collected from honeypots, sensors, anonymized telemetry, and other backend services” and represents “the real-world prevalence of malware and vulnerability exploitation across enterprises” regardless of size or vertical.

Reports that talk about the most exploited vulnerabilities are compelling. They offer a short list of vulnerabilities that organizations can make absolutely sure they have patched, eliminating that risk. Unfortunately, many of these reports have problems. I have written about some before, including the 2015 Verizon DBIR, the 2016 Verizon DBIR, and Radware’s Top Web Exploits of 2020. I wish I had more time, as I have seen other reports on exploit prevalence with similar issues. In this case, Trend Micro’s report falls into at least one of the same traps as these prior reports.

The first issue that pops out is the wording in the report that introduces a major point of confusion. In section two, titled “The Linux threat landscape: What are the top Linux threats?”, under the second heading, “Vulnerabilities in Linux systems”, we get more details qualifying where the data came from to generate this part of the report:

“… we dissected IPS (Intrusion Prevention System) hits from Trend Micro Cloud One – Workload Security and sifted through over 50 million events, ignored false positives, eliminated test data, and layered data with available threat intel to draw some conclusions.”

Unfortunately, the next sentence immediately introduces some doubt, and we don’t know how much doubt, because they don’t quantify their margin of error:

“It should be noted that there can be a degree of error here due to the nature of the data and internet activity.”

If the margin of error is 1% in a dataset that large, it’s not a big deal. If it is 10%, that can be problematic. If it is 50%, then the report shouldn’t have been written at all. Apparently, we are left to guess what that margin of error is.

Now, for the section of the report that initially got my attention, we get to the top 15 vulnerabilities. I can’t finish that sentence because there is confusion:

If a list of vulnerabilities includes the top 15 that are “actively exploited” or “have a known proof of concept”, how do you even? Over 4,500 vulnerabilities in 2021 H1 have a public proof-of-concept or functional exploit. The next sentence clearly repeats the exact same thing. I can’t figure out how to explain that second part unless they are attempting to say “actively exploited and a public proof of concept” to distinguish from exploitation that is happening where the exploit is not actually published. That seems like a pretty big oversight given the nature of this section of the report. Further, it doesn’t qualify whether the report is based on attempted exploitation that matches a signature or successful exploitation. After the table of vulnerabilities the report says “Table 1 shows the top vulnerabilities by volume of triggers,” which strongly suggests it is counting exploit attempts. But that just leads to more questions, like “if you see an attempt for that vulnerability but against a Windows server, does it count?”

It gets even murkier looking at the table of the 15 vulnerabilities where one of them is listed as “N/A” for severity. That warrants digging into their list more closely and comparing the vulnerability information with that in VulnDB.

There are several observations to be made for this list:

  • CVE-2017-9805 is listed as ‘High’ severity, suggesting they pulled at least some vulnerability data from the National Vulnerability Database (NVD). They score the vulnerability 8.1 (High), matching NVD, while VulnDB and CERT VU score it 10.0. Looking at the original disclosure, there are no obvious qualifications that justify an Access Complexity: High (AC:H) rating.
  • Of the 430 vulnerabilities involving WordPress, core or plugins, that allow for remote code execution, why did only one (CVE-2020-25213) make the list, and why that one? Given the amount of scanning for vulnerable WordPress installations, I would expect more to be on the list. Hell, even the venerable CVE-2013-4338, given there are other 2013 CVEs on the list.
  • The Atlassian Jira vulnerability is very curious given that it is a remote information disclosure issue that does not disclose sensitive information, such as a password, that would lead to further privilege escalation. Based on the logs of attrition.org over the last three months, there has been a single request for /secure/QueryComponent!Default.jspa. There have been five requests for /secure/QueryComponentRendererValue!Default.jspa (CVE-2020-36289), another information disclosure issue. There are also hundreds of information disclosure vulnerabilities that yield credentials which can be used to authenticate to an application and gain privileges. I would expect to see any one of those on the list before CVE-2020-14179.
  • Eclipse Jetty (CVE-2017-7657) is very curious to see on this list for several reasons. First, it is a four-year-old vulnerability that does not result in code execution. Second, there is a caveat for exploitation, as explained in the Eclipse bug ticket: if Jetty “was deployed behind an intermediary that imposed some authorization and that intermediary allowed arbitrarily large chunks to be passed on unchanged, then this flaw could be used to bypass the authorization imposed by the intermediary”. To see an HTTP request smuggling issue that widely exploited, over the thousands of other vulnerabilities that allow for a more serious impact in software found running on Linux, is baffling. This strongly suggests the detection rule matching that vulnerability is overly broad and triggers on exploit attempts for different issues.
  • The nginx vulnerability is listed as N/A severity, which is curious. Looking at the associated NVD entry (CVE-2013-4547) we see it has a CVSSv2 score but no CVSSv3 score, because it is a 2013 vulnerability and NVD did not retroactively score all older vulnerabilities. This, along with CVE-2017-9805, tells us that Trend Micro imported the scores from NVD but didn’t adjust for this one by using the CVSSv2 score or developing their own CVSSv3 score. It seems odd to blindly use the CVSSv3 scores and ship incomplete data when it is a simple correction to make; a rough sketch of that correction follows this list.
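For what it’s worth, the fallback suggested in that last bullet is trivial. A rough sketch against the public NVD 2.0 API; the response layout follows NVD’s published schema, though the helper name and the lack of error handling are mine:

```python
import requests

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def base_score(cve_id):
    """Prefer a CVSSv3 base score, falling back to CVSSv2 when NVD
    never retroactively scored the entry (error handling omitted)."""
    data = requests.get(NVD_API, params={"cveId": cve_id}, timeout=30).json()
    metrics = data["vulnerabilities"][0]["cve"].get("metrics", {})
    for key, version in (("cvssMetricV31", "3.1"),
                         ("cvssMetricV30", "3.0"),
                         ("cvssMetricV2", "2.0")):
        if key in metrics:
            return metrics[key][0]["cvssData"]["baseScore"], version
    return None, None

# CVE-2013-4547 (nginx) lacked a CVSSv3 score, so this falls back to
# the v2 score instead of reporting "N/A".
print(base_score("CVE-2013-4547"))
```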

Looking back to bullet #3, it’s interesting to compare the hits seen on our web server and then note that we also observed 10,659 requests for /wp-login.php in that same period. This, I think, illustrates a serious flaw in the methodology of this report. Most of the scanning we see for vulnerable WordPress instances first looks for the presence of the software before attempting to exploit anything. Rather than throw hundreds of payloads for various flaws in the core software and vulnerable themes or plugins, it is more efficient to check if the software is present first: scan for the software to build a list of sites running WordPress before launching more significant attacks that may attract more attention.
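As a point of reference, this kind of tallying takes only a few lines against a standard combined-format access log. A minimal sketch, where the log filename is a placeholder and the probe list comes from the paths discussed above:

```python
from collections import Counter

# Paths singled out above; the log filename is a placeholder.
probes = (
    "/wp-login.php",
    "/secure/QueryComponent!Default.jspa",
    "/secure/QueryComponentRendererValue!Default.jspa",
)

hits = Counter()
with open("access.log") as log:
    for line in log:
        parts = line.split('"')
        if len(parts) < 2:
            continue                      # malformed line, skip
        request = parts[1].split()        # e.g. GET /wp-login.php HTTP/1.1
        if len(request) >= 2:
            hits[request[1]] += 1

for path in probes:
    print(f"{path}: {hits[path]}")
```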

As always, a real test of the veracity of this data would be for another firm that does large-scale monitoring of attacks to publish its own data, limited to the same approximate criteria as Trend Micro. That might explain bullet #4 at the very least.

Reflections on “CVE Approach for Cloud Vulnerabilities”

At Black Hat Briefings USA this week, Ami Luttwak and Shir Tamari called for a “CVE” style approach to documenting vulnerabilities that affect cloud offerings (note: I have not seen the talk). As one of two people (the other being Jake Kouns) who may have the longest history in this specific space, I wanted to give some history and very high-level reflection on the challenges and benefits of this endeavor. First, for perspective, CVE covers at least 56 pure cloud / site-specific vulnerabilities, the first going back as far as 2000. It covers at least another 461 that are hosted blockchain contracts. At times, some on the CVE (Editorial) Board have advocated for CVE to expand to cover cloud vulnerabilities while others have argued against it. At least one advocate said such issues should get CVE IDs, while others, both for and against coverage, said that if cloud were covered, those issues should get their own ID scheme. MITRE has not made a clear policy change on the matter in years, despite increasing coverage, primarily of hosted blockchain contracts.

Next, the idea of a “CVE for Cloud” has been done! Well, it was attempted at least. Back in the day, the Open Security Foundation was desperately trying to maintain the Open Sourced Vulnerability Database (OSVDB) and DataLossDB. Neither project had enough volunteers, and community support simply wasn’t there. Despite that, in 2010 we spun up a project called Cloutage (archive.org link), which aimed to catalog cloud vulnerabilities as well as instances of cloud service outages. The idea was that this would tell companies which providers actually had the alleged 99.999% uptime, which regions of a provider were more prone to issues, and more. We had been collecting incidents categorized as “hack”, “outage”, or “vuln” long before that with such a project in mind, but that is when we finally made them publicly available in an organized fashion. So my reflections come with real-world experience; the challenges we faced, along with the pitfalls of such an endeavor, are still largely present. (For convenience, I will use the term ‘cloud’ to refer to service providers that are subscription offerings, not on-premises hardware / solutions.)

First, let’s begin with one significant change from the mid-2000s to today. Over ten years ago, a vulnerability in a cloud provider was typically an issue the provider had to fix, out of the customer’s control. While that is still prevalent today, we now see another class of cloud vulnerabilities that better warrants a “CVE for Cloud” approach: cases where a vulnerability is present in a cloud offering and requires the customer to mitigate it themselves. These are often described as misconfigurations, like the pervasive open S3 buckets that can be accessed without credentials, disclosing a world of information or hosting malicious software [1] [2] [3]. There is a fundamental difference between a security issue that the customer must mitigate and one where they must wait for the cloud provider to fix it.
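For the open-bucket example, the mitigation sits entirely with the customer. A minimal boto3 sketch, where the bucket name is a placeholder, that applies AWS’s S3 “Block Public Access” settings to a single bucket:

```python
import boto3

s3 = boto3.client("s3")

# Turn on all four S3 "Block Public Access" settings for one bucket.
# "example-bucket" is a placeholder; credentials come from the usual
# boto3 configuration chain.
s3.put_public_access_block(
    Bucket="example-bucket",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```

No provider patch is involved; the “fix” is a customer-side API call, which is exactly what separates this class of issue from the “wait for the provider” class.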

Historically, when the vast majority of security issues fell into the “wait for the provider to fix” category, we cataloged those vulnerabilities but questioned the value. Sure, it had value in that a company could evaluate cloud providers to see which had more vulnerabilities and, sometimes, how long they took to fix. But even that was crippled, because companies would only announce issues after they were fixed. That meant we knew a vulnerability had been present for some undetermined time and then patched. But cloud providers weren’t including how long customers had been exposed, and most providers simply were not disclosing at all. So the database covered a fraction of the actual vulnerabilities, typically with no known exposure window, and customers were powerless to do anything about them.

Another issue we ran into was that on the rare occasions a cloud provider disclosed an issue, the disclosure was extremely vague. More frustrating was language implying that some, but not all, customers were impacted. They didn’t include regions or IP ranges or anything else to identify who was affected, so customers couldn’t even confirm they were in the clear without opening a support ticket and hoping for an honest answer. That was frustrating and not very helpful. How will security teams cope if they find out there is a public vulnerability in a cloud offering and no mitigating control? Seems like a recipe for stressful days. Of course, it’s no different than now, except that today ignorance is bliss.

CVSS is the de facto risk scoring framework for vulnerabilities. It was designed around vulnerabilities that appear in CVE, which are predominantly in on-premises software. Would CVSS scale to work for cloud vulnerabilities? CVSSv2 seems like it would be as suitable for cloud as for on-premises, given its simpler scoring elements. CVSSv3 seems like it could be more problematic and fail to take advantage of some of the changes over v2. First, the ‘physical’ attack vector would effectively be eliminated. Second, would Authorization Scope be neutered? Defined initially as the “ability for a vulnerability in one software component to impact resources beyond its means, or privileges”, on the surface it seems like every vulnerability would have to be evaluated more closely for scope change. Every vulnerability potentially causes the cloud provider, a different company than the one using the service, to get involved in some way. The cloud provider’s support and security teams spending time because $CompanyA’s instance got popped technically “impacts resources beyond its means”. I don’t think that is what the CVSS SIG had in mind, though. That said, given the shared environments of many cloud offerings, sandbox escapes and lateral movement potentially impact completely different organizations. Does a simple “scope changed” modifier to a score adequately convey the potential risk?
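To make the scope question concrete, the published CVSSv3.1 base score equations can be worked directly. A minimal sketch (my own transcription of the spec’s formulas, not an official library) scoring the same network-exploitable vector with Scope Unchanged versus Scope Changed:

```python
def roundup(x):
    # CVSS v3.1 "Roundup": smallest one-decimal value >= x, done with
    # integer math per the spec to avoid floating-point surprises.
    i = round(x * 100000)
    return i / 100000 if i % 10000 == 0 else (i // 10000 + 1) / 10

def base_score(scope_changed, av=0.85, ac=0.77, pr=0.62, ui=0.85,
               c=0.56, i=0.56, a=0.56):
    # Defaults encode AV:N/AC:L/PR:L/UI:N/C:H/I:H/A:H.
    if scope_changed:
        pr = 0.68                      # PR:L weighs more when scope changes
    iss = 1 - (1 - c) * (1 - i) * (1 - a)
    impact = (7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15
              if scope_changed else 6.42 * iss)
    exploitability = 8.22 * av * ac * pr * ui
    if impact <= 0:
        return 0.0
    total = impact + exploitability
    return roundup(min(1.08 * total if scope_changed else total, 10))

print(base_score(scope_changed=False))  # 8.8 (S:U)
print(base_score(scope_changed=True))   # 9.9 (S:C)
```

The same flaw moves from 8.8 to 9.9 solely on the scope flag, and that one-point bump is the entirety of the nuance CVSSv3 offers for the multi-tenant situation described above.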

Finally, who’s going to run this “CVE for Cloud”? OSVDB and subsequent similar projects give a damning history: there is a lot of vocal support for community-run vulnerability resources, but those vocal folks very rarely contribute the actual time required. If it is run by one person or a small group of dedicated volunteers, how long before they burn out or commercialize it? There’s a reason most of the public VDBs have gone away, or gone commercial to varying degrees (e.g. OSVDB, Secunia, BID). The only thing worse than such a project not getting off the ground is one that does, becomes an essential part of security programs, and then goes away.

I look forward to seeing the talk by Luttwak and Tamari, if it is made public. I’m curious whether they have come up with solutions to the problems that have plagued vulnerability databases for over two decades.

[Update: Josh Bressers points out that the Cloud Security Alliance (CSA) wants to address this problem too.]

Perlroth and the History of Microsoft Vulns

While reading “This Is How They Tell Me The World Ends“, early in the book I ran across a single line that made me do a double-take. I made a note to revisit it after a complete read, since it was so early in the book. For those familiar with my blogs, I tend to write about vulnerability statistics, and this one fits the bill. This blog is a bit different in that a single line provoked it, but re-reading the section for clarity still took me down other rabbit holes. Ultimately, this is a good example of how one sentence can have many interpretations depending on how you read it, what statistics you use, and the deeper context the sentence is embedded in.

Below are some additional lines that offer the full context of the line in question:

The first shift in the wind was Bill Gates’s memo. In 2002, after a series of escalating attacks on Microsoft’s software and customers, Gates declared that security would become Microsoft’s top priority. (P35)

On January 15, 2002, just as iDefense was getting going, Gates fired off the cybersecurity equivalent of the “shot heard round the world.” From that point on, Gates said, security would be the company’s “highest priority”. (P37)

What the security community wrote off as a stunt became an economic force. Microsoft froze new products and dredged up existing ones, ripping its software apart and training nearly ten thousand developers to build it back up again with security principles at the core. For the first time, procedures were put in place to embrace the hacking community. Microsoft set up a customer service line for hackers, tracked each caller and even logged their psychological quirks, noting which hackers needed to be handled with kid gloves, which had rock-star status, and which were just trolls. It instituted a regular system for rolling out software patches, releasing them on the second Tuesday of every month – “Patch Tuesday” – and offered customers free security tools.

And while plenty of zero-day bugs were still discovered, the frequency and severity of Microsoft bugs started to dry up. (P38)

For those not familiar with the memo, titled “Trustworthy computing”, it can be read in full here. The question that came to mind was: did the frequency and/or severity of Microsoft bugs go down? Before we answer, remember that this is fairly broad, since it encompasses all Microsoft bugs, not just, say, Windows or Internet Explorer. It is also important to note that Perlroth says they started to dry up, but not for how long. On the back of the Gates memo, it would be expected that some researchers might change their attitude toward disclosure if they could sell the exploits for a higher payout. Finally, all of what follows is somewhat moot because Perlroth’s statement is made on the back of a known unknown. That is, we know there are zero-day bugs discovered, but by nature they are only zero-days if not publicly known.

Perlroth offers two more lines that, I believe, essentially tip her hand, demonstrating that her comments were made in hindsight based on extrapolation, not fact. First, she qualifies that she joined the security beat eight years after this memo. Second, she says:

The ripple effect of Gates’s [sic] memo could be seen far from Redmond, in underground dark web forums and in hotel rooms at the big security conferences.

The dark web barely existed in 2002. Given that Tor was released in September of that year, the first hints of dark web sites would only have been appearing. In fact, Gates’ memo was published eight months before Tor was released. It’s hard to imagine that there were already established, well-known forums to trade or sell vulnerabilities that would see a noticeable change at that point. With all of that in mind, I think the rest of this rabbit hole is academic at best, but it illustrates why we must be careful when describing vulnerabilities in such a manner.

The Stats

All Microsoft Vulns, 2001 – 2005, per VulnDB

There was a significant drop in volume from 2002 to 2003, so it is easy to make this assessment from a very limited picture. But by 2004 the volume was back up quite a bit. Given what I outlined above about her tenure on the security beat, along with the questionable statements about the dark web and statements made on unknown factors, the question here is how she arrived at this conclusion. Further, the severity did not drop from 2002 to 2004 either.

The stats above are from VulnDB, with the advantage of hindsight and a comprehensive collection of disclosures from that period. If someone made such a conclusion based on disclosures at the time, it likely would have been based on CVE. Looking at only disclosures with a CVE ID, the picture does not change for disclosure trends or severity.

Microsoft Vulns w/ CVE ID, 2001 – 2005, per VulnDB
Microsoft Windows Vulns, 2001 – 2005, per VulnDB
Microsoft Internet Explorer (MSIE), 2001 – 2005, per VulnDB

We see a dip in disclosures from 2002 to 2003 for both Windows and MSIE, but both rebound to varying degrees in 2004. Then Windows shoots up higher in 2005 while MSIE drops, which could just have been the browser war with Firefox and Opera heating up. That leads us to one more passage from page 38.

Finally, did the bugs dry up, or did their perceived value go up so that people were less likely to disclose, or to sell at lower prices? For a book that dives deep into the value of 0-days, I figured this would be the hot take. Oh wait, it is: right after saying the frequency/severity dried up, Perlroth says:

Then, in the shadows, a growing number of defense contractors, intelligence analysts, and cybercriminals started doling out higher rewards to hackers who promised to keep their bug discoveries secret. In these subterranean circles, people started assigning a far higher value to Microsoft zero-day exploits than what iDefense was paying. 

So the fun part is to go back to the charts and speculate. If the premise is that the Gates memo caused bugs to dry up because they were perceived as more valuable, as outlined shortly after by Perlroth, why did the disclosures rebound in 2004? Did Microsoft suddenly stop caring about security a year later? Was 2003 just an abnormal, albeit coincidental, year for disclosures? Were there other factors at play?

There are a lot of questions that neither Perlroth nor the vulnerability statistics answer.

RSA Hack Thoughts

I read the article “The Full Story of the Stunning RSA Hack Can Finally Be Told” by Andy Greenberg in Wired and several things stood out to me. So this is my commentary on the article and events that are covered.


“It opened my eyes to supply chain attacks,” says Mikko Hypponen, chief research officer at F-Secure, who worked with Hirvonen on the company’s analysis of the RSA breach.

While the RSA hack was certainly novel in one way, going after the cryptographic seeds for the two-factor authentication fobs, the bigger concept was certainly not new. Even in the late ’80s and early ’90s, the same concept played out many times. While vulnerabilities were prevalent and breaking into most machines was fairly easy, there were high-value targets that proved challenging. To reach them, some hackers would first target the vendors of the operating systems and break in there. The primary targets were the bug databases, where customers reported issues, as well as the operating system’s source code. These two things could give them a huge advantage in compromising additional systems. Seeing this same strategy play out twenty years later should not have been that new to anyone who had been around during that timeframe.

“After 10 years of rampant state-sponsored hacking and supply chain hijacks, the RSA breach can now be seen as the harbinger of our current era of digital insecurity – and a lesson about how a determined adversary can undermine the things we trust most.”

After two decades of every type of organization in just about every country getting hacked, defaced, and having data stolen, how exactly is this a ‘harbinger’? Governments, militaries, and security companies were all popped for decades, but this one is a harbinger of what, exactly? More organizations getting hacked?

A staffer in Australia had received an email with the subject line “2011 Recruitment plan” and an Excel spreadsheet attached to it. He’d opened it. Inside the file was a script that exploited a zero-day vulnerability – a secret, unpatched security flaw – in Adobe Flash, planting a common piece of malicious software called Poison Ivy on the victim’s machine.

This paragraph sums up the “Advanced Persistent Threat” (APT) that hacked RSA. Other than using a zero-day vulnerability in Flash, one of five that year, nothing really stood out about this from the attacker’s side.

A hacker wouldn’t have even been able to exploit the Flash vulnerability if the victim had been running a more recent version of Windows or Microsoft Office, or if he’d had limited access to install programs on his PC – as most security administrators for corporate and government networks recommend, Hirvonen says.

Correct. Microsoft acknowledged shortly after details were published that if RSA had been running a newer version of Microsoft Office, it would have come with protections that likely would have seriously limited the attack and/or added additional hurdles before the attackers could pivot into the network. At every point of this story it is important to remember that this is a security company. They were paid huge money to give advice like “patch often” and “upgrade to the latest version” while not doing it themselves.

In fact, several RSA executives came to believe that at least two groups of hackers were in their network simultaneously – one highly skilled group exploiting the other’s access, perhaps, with or without their knowledge.

I wonder if anyone suggested the piggyback could have been the NSA’s Tailored Access Operations (TAO) group? We know riding on others’ access while watching nation-state adversary hackers is a modus operandi of theirs.

On that Australian employee’s PC, someone had used a tool that pulled credentials out of the machine’s memory and then reused those usernames and passwords to log into other machines on the network. They’d then scraped those computers’ memories for more usernames and passwords—finding some that belonged to more privileged administrators. The hackers eventually got to a server containing hundreds of users’ credentials. Today that credential-stealing hopscotching technique is common. But in 2011 the analysts were surprised to see how the hackers fanned out across the network.

Which analysts were surprised? This was standard operating procedure for hackers in the late ’80s and early ’90s. This is exactly what the hacking group I was in did. The only difference is that back then you were likely to find trusted relationships and common passwords between vastly different networks (e.g. an .edu machine and a .gov or .mil).

RSA executives told me that the part of their network responsible for manufacturing the SecurID hardware tokens was protected by an “air gap”—a total disconnection of computers from any machine that touches the internet. But in fact, Leetham says, one server on RSA’s internet-connected network was linked, through a firewall that allowed no other connections, to the seed warehouse on the manufacturing side.

To be clear, RSA executives did not understand what “air-gapped” means, or were lying about it. I feel this is an important take-away.

Breaches as extensive as the one carried out against RSA are often discovered months after the fact, when the intruders are long gone or lying dormant. But Duane says that the 2011 incident was different: Within days, the investigators had essentially caught up to the intruders and were watching them in action.

“I basically shut off RSA’s business,” he says. “I crippled the company in order to stop any potential further release of data.”

One person in legal suggested they didn’t actually need to tell their customers, Sam Curry remembers.

The RSA staffers began putting in nearly 20-hour workdays, driven by the chilling knowledge that the breach they were tracking was still unfolding.

This seems like a lot of fluffing of RSA over this hack, but ultimately this was the same company that didn’t patch its Windows boxes and didn’t air-gap the seeds like its executives claimed. Among all of these positive mentions of $person doing the right thing, we always get “that one guy we will not name” proposing absolutely shitty ideas or having a bad take. I understand they won’t throw anyone under the bus, but this is an important dichotomy.

“Recently, our security systems identified an extremely sophisticated cyberattack in progress,” (RSA notification)

Which part was sophisticated? Exploiting unpatched machines, pivoting, or stealing data? When seemingly every attack is a “highly sophisticated cyber attack“, is it really highly sophisticated?

In fact, by the time Castignola had landed in Massachusetts, both the NSA and the FBI had been called to help the company’s investigation, as had defense contractor Northrop Grumman and incident response firm Mandiant. (By chance, employees of Mandiant had already been on-site prior to the breach, installing security sensor equipment on RSA’s network.)

How’d that Mandiant sensor equipment work out for RSA, I wonder? It doesn’t seem to have helped prevent or detect the intrusion at any point, based on the story told.

Multiple executives insisted that they did find hidden listening devices—though some were so old that their batteries were dead. It was never clear if those bugs had any relation to the breach.

Uh, this isn’t burying the lede, but it is burying a big one. I have so many questions, and I don’t recall there being answers to this specific bit. How were there so many listening devices in RSA executive offices? Had they never done a single bug sweep? Was each of the devices found investigated? Did they do a sweep of all offices after finding them? If not, why not?

“Well it didn’t take long for whoever cracked RSA to find a lock to fit that key,” Cringely wrote. “What if every RSA token has been compromised, everywhere?” Two days later, Reuters revealed the name of the hacked military contractor: Lockheed Martin, a company that represented a cornucopia of ultra-secret plans for weapons and intelligence technologies. In the days that followed, defense contractors Northrop Grumman and L-3 were also named in news reports.

Today, with 10 years of hindsight, Coviello and other former RSA executives tell a story that starkly contradicts accounts from the time: Most of the former RSA staff who spoke to me claim that it was never proven that SecurID had any role in the Lockheed breach. Coviello, Curry, Castignola, and Duane all argued that it was never confirmed that the intruders inside RSA’s systems had successfully stolen the full list of seed values in an uncorrupted, unencrypted form, nor the customer list mapped to those seeds necessary to exploit them. “I don’t think that Lockheed’s attack was related to us at all,” Coviello states flatly.

Compare that with a Lockheed source with knowledge of the company’s incident response, who reaffirmed to WIRED the company’s original claims: “We stand by our forensic investigation findings,” the source says. And with NSA director General Keith Alexander, who, in a briefing to the Senate Armed Services Committee a year after the RSA breach, said that the RSA hack “led to at least one US defense contractor being victimized by actors wielding counterfeit credentials,” and that the Department of Defense had been forced to replace every RSA token it used.

Can we figure out who is trying to re-write history here? It sure seems like RSA is, despite several other organizations saying otherwise. That would explain why so many were willing to go on the record for this article.

Perlroth, How the World Ends, and Errata

This will be my fourth and very likely final blog on Nicole Perlroth’s book, “This Is How They Tell Me The World Ends”, as far as the subject matter goes. I may write a couple more that are centered around vulnerability history, based on something included in the book, but more along the lines of “setting the record straight” on a broader misconception in the industry that certainly isn’t exclusive to this book. I say ‘may’ because it will depend on my research into a couple of topics.


As I have mentioned in prior blogs, I enjoyed this book. I feel it was very well researched and it offered information about the world of vulnerabilities that was new to me, which I appreciated. I recommend this book if you are interested in the topic of zero-day vulnerabilities and the markets around them as it is comprehensive. Finally, I really appreciate that Perlroth included extensive notes at the end that offer a variety of formal and informal citations for further reading and justification for many comments made.

I offer this opinion once again because this blog will be a bit more negative, focusing on parts of the book I took exception with. If I am correct about any of the following criticisms, it is just as much a reflection on her editors as it is on Perlroth, so this is not leveled at her specifically. I understand errors are made; we all make them. That said, the process of writing a book should have such content go through at least three sets of eyes (if not more), so I think it is fair to level this criticism at everyone involved. While I may use Perlroth’s name below, consider it to mean “Perlroth et al” in the context above.


Errata

p6: “After three years of covering nonstop Chinese espionage, a big part of me was reassured to see that our own hacking capabilities far exceeded the misspelled phishing emails Chinese hackers were using to break into American networks.” This line, so early in the book, made me groan and do a double-take, as it seems to unfairly lump an incredible variety of Chinese threat actors into a single category. While I have no doubt this characterization is true for some, I think it is not true in the bigger picture. Further, it implies that the U.S. doesn’t misspell anything in the phishing mails our hackers send to foreign targets.

p7: “The [NSA] appeared to have acquired a vast library of invisible backdoors into almost every major app, social media platform, server, router, firewall, antivirus software, iPhone, Android phone, BlackBerry phone, laptop, desktop, and operating system.” Just a page after the prior quote, this one raised my skepticism. Perlroth seems to conflate zero-day exploit with backdoor, despite them being very different things. This may be a bit nitpicky, especially since the Wikipedia definition blurs the lines, but given that the topic of the book is all about vulnerabilities and exploits, I think it is important to point out. Coming up in InfoSec, a vulnerability could get you access to a resource, and a backdoor could as well. The difference was that one was accidental and the other intentional, but both came from the vendor. Even if the NSA pressured a vendor to include a backdoor, which they have, it is still a vendor-shipped flaw in the code with intent to subvert the security of the system. Perhaps this terminology is all but lost, like the classic hacker vs cracker vs … debate.

p7: “Zero-days are the most critical tool in a hacker’s arsenal. Discovering one is like discovering the secret password to the world’s data.” There’s a lot to unpack here. First, zero-days are not the most critical tool in the vast majority of hackers’ arsenals. As Perlroth covers, phishing attacks that do not necessarily rely on a vulnerability, or that use known but unpatched ones, are quite effective. Second, the “secret password to the world’s data” is hyperbole, since any one zero-day will get you access to a fraction of a single percent of the world’s data. This description makes it sound like just one, any one, grants a level of access and power it simply does not.

p8: “A series of seven zero-day exploits in Microsoft Windows and Siemens’ industrial software allowed American and Israeli spies to sabotage Iran’s nuclear program.” For a book on zero-day exploits to incorrectly state how many zero-day exploits were used in Stuxnet is discouraging. More so given that Perlroth later cites Kim Zetter’s definitive book on the topic with glowing praise, yet still gets this bit wrong. As previously reported and referenced on Wikipedia, Stuxnet used four zero-day exploits. [1] [2] [3]

p8: “Depending where the vulnerability is discovered, a zero-day exploit can grant the ability to invisibly spy on iPhone users the world over, dismantle the safety controls at a chemical plant, or send a spacecraft hurtling to earth [sic]. In one of the more glaring examples, a programming mistake, a single missing hyphen, sent the Mariner 1 – the first American spacecraft to attempt an exploration of Venus – off-course, forcing NASA to destroy its $150 million spacecraft 294 seconds after launch, or risk it crashing into a North Atlantic shipping lane or worse, a heavily populated city.” While there have been rumors and urban legends around hacking satellites, the vast majority of which have been debunked, using Mariner 1 as an example of what can go wrong due to a vulnerability, without caveat, is unfair. That spacecraft had a bug that has never been shown to be exploitable. This is essentially the same as the countless “vulnerability reports” for applications that do nothing more than demonstrate a stability issue leading to a crash, not something that can realistically be exploited by a bad actor. This example is frustrating because later in the book Perlroth provides many examples that are just as compelling and actually happened as a result of vulnerabilities.

p63: “In the hacking community, Charlie’s paper was alternately celebrated and condemned. Some cast him as an unethical researcher who, by selling his zero-day to the government and waiting so long to come forward with it, had put millions of Linux users at risk. Some pushed to have his cybersecurity license stripped.” I can’t imagine what this is supposed to mean since there is no such thing as a “cybersecurity license.” Even if this was to mean some certification, that is very different than a license.

p123: “Once the worm was on that first Natanz computer, a second Microsoft Windows zero-exploit kicked in – though technically, this second exploit wasn’t a zero-day at all.” This isn’t ideal for explaining this topic to non-technical readers. Introducing a new term, presumably by mistake, then immediately contradicting it in the same sentence is confusing.

p222: “Jobert would send discs flying out of Michiel’s hard drive from two hundred yards away.” I debated if this belonged in the hyperbole blog or this one and settled for here. There is simply no analogy to be had and even as an exaggeration this makes no sense.

p257: “Ekoparty was still dwarfed by Def Con, Black Hat, and RSA, but what it lacked in numbers and glitz, it made up for in raw creative talent. Absent were the booth babes and snake-oil salesmen that had overrun the big hacking conferences in the States.” Perhaps a bit nitpicky here, but of the three conferences listed, only one is a “hacking conference”. That conference does not have booth babes and essentially only merchandise vendors, so no more snake-oil salesmen than any other conference, including Ekoparty I would wager. Further, note that Black Hat has been held on three continents for many years now.

p263: “When I got to my room, the door was ajar .. Everything was just how I had left it, except the safe that had held my laptop. It was wide open. My computer was still inside, but in a different position .. I wondered if this was some kind of warning shot. I took a sober look at the laptop. It was a loaner. I’d left my real computer at home and stuck to pen and paper at the conference. There’d been nothing on the laptop when I’d left; I wondered what was on it now. I wrapped it in an empty garbage bag, took the elevator back down to the lobby, and threw it in the trash.” Personally, I find this brief part of Perlroth’s visit to Ekoparty in Buenos Aires mind-boggling. She describes the conference as having the “best exploits on the market”, representatives from large companies looking to recruit, and countless attendees looking to sell exploits, all in a chapter titled “Cyber Gauchos“. With all of that, and the topic of the book she was researching, why would you ever throw away that laptop? Keep it, and take it to someone capable of determining if it was backdoored and how. If lucky, figure out where it was accessed from in the subsequent weeks to perhaps get an idea of who was behind it. That would have been a fascinating story by itself and a great addition to this chapter. Instead? A laptop with what might have been high-end, unique malware was just thrown in the trash.

p332: “The only trace that it had been used was a second, complementary NSA exploit, code-named DoublePulsar, that was often used to implant EternalBlue into machines.” I think this is backwards as DoublePulsar is the implant (backdoor) and EternalBlue the remote vulnerability (CVE-2017-0144) that can be exploited to implant it.


It’s Complicated

There is one more piece of errata that is complicated to unpack. This is due to just two lines containing quite a few bits of information, while the associated citations in the Notes section are missing or problematic. The relevant passage is from pages 6-7 in chapter 1; pardon the image, as WordPress.com apparently doesn’t let you highlight sentences, only blocks:

The notes for chapter 1 provide citations for some of the content, in this order: the Mariner 1 incident, Menn’s article on the NSA’s interception of Yahoo data, Fehri’s article on the Times delaying an NSA wire-tapping story, the Snowden / Vargas-Cooper bit about the same delay, and a Perlroth story leak covered by Smith. Comparing the cited references to the book paragraph quoted above, it breaks down as:

  • First line is not cited but covered by many easy-to-find articles including this one by Reuters in 2013.
  • Second line is problematic, as Perlroth writes that the CIA infiltrated factory floors at “leading encryption chip makers” to backdoor them, but does not offer a citation. Given that it follows a voluntary backdoor in RSA, it is a separate series of events. The wording also does not match the well-known Crypto AG saga. Given the severity of such incidents, it seems like this would come with a reference.
  • Third line is cited as coming from Joe Menn’s article “Exclusive: Yahoo Secretly Scanned Customer Emails for U.S. Intelligence“. The first issue is that the cited article about Yahoo & Google only mentions Google twice, both times to say the company denied doing any searches. The second, and more serious, issue is that the article title itself specifically counters the narrative that Perlroth offers. Yahoo scanning customer emails on behalf of U.S. intelligence agencies is very different from them “hacking their way into the internal servers before the data was encrypted”.
  • Fourth line is cited in the notes.

If four lines in a book are that problematic, especially in chapter one, it can be difficult to digest the rest of the material. It may cause the reader to constantly question if what they are reading is accurate and well-founded.


Parting Gift

The following quote is in the book, but one where Perlroth quoted someone she spoke with. I offer this up as a parting gift because of just how absurd it is. I wish I could say it is out of context, and it might be, but if so, whatever context was lost did not make it into the book.

“That’s why the Europeans are so good at writing exploits, after babies, European parents get like a year to hack.” — Charlie Miller

Yeah…

Perlroth, Terminology, and Hyperbole

I finished reading “This Is How They Tell Me The World Ends” by Nicole Perlroth a few weeks ago but haven’t had time until now to write this blog, and likely another, on specific aspects of the book. I have already written two blogs on topics the book covers, but both were written before I finished it.


Overall, the book was an enjoyable read. It is clear that Perlroth covers the topic of zero-day exploits and the exploit market very well, based on a lot of research and interviews with key players. The book exposed some things that were new to me, so I enjoyed several chapters very much. It also gave me a sizable list of items for further research, including several ideas for FOIA requests. Finally, I think the epilogue was especially well done and would serve as a great ~20-page primer on the topic and where the world is going in the realm of exploits and hacking campaigns. If you are interested in the topic, I do recommend this book.


That said, this blog is about one issue I have with the content. Starting in the prologue and continuing throughout several chapters of the book, Perlroth uses language that is arguably one step past hyperbole, crossing the definition of “intensifier” and falling squarely into “extreme exaggeration“. This has been a problem in Information Security for over twenty-five years, with one of our worst offenders being “Cyber Pearl Harbor“, which is also used in this book. While such terms are dramatic and hook a reader, they are counter-productive because they describe concepts that are not nearly as serious or damaging as the terms imply.

Equating two unrelated terms to explain one concept to an audience not familiar with it is common enough, and we all do it. But consider the definition of an analogy, which is “a comparison of two otherwise unlike things based on resemblance of a particular aspect“. The key, I believe, is “resemblance of a particular aspect”, which can be interpreted quite differently. If I compare a rocket to an automobile to make a comparison about travel because they both can move and transport people, does that count? Sure, but it sucks as an analogy and doesn’t make the point very well. When that gets taken to an extreme, you have a logical fallacy known as a false analogy.

To me, that is where analogies or descriptions like “a Cyber Pearl Harbor” fall. Until a computer intrusion can routinely sink ships, destroy aircraft, kill over 2,300, and wound over 1,100 people in just over an hour, I don’t think that is an appropriate term to use. If such an event happens once, perhaps calling it “the Cyber Pearl Harbor” would be acceptable. Further, what part of the attack on Pearl Harbor resembles a computer attack? Until that can be answered, journalists and security professionals should endeavor to use more grounded analogies that can explain a concept without embellishing or incorrectly comparing something in the virtual computer domain to a kinetic real-world item or event. While Perlroth’s first use of this term was quoting “security experts”, she had the opportunity to temper that with a caveat or explanation, but did not.

Even calling an exploit a “weapon” begins to push that boundary, as most people think of a kinetic weapon like a knife or gun that has wounded or killed millions in the last 100 years. With that, here is a sampling of the analogies and terminology Perlroth used throughout her book to illustrate the problem. What is perhaps most unfortunate is that the book is well-written and did not need any of this to be interesting. To me, it was actually a detraction and did not add to the topic.

  • xvi: Russian hackers made a blood sport of hacking anyone…
  • xvi: For five long years, they shelled Ukrainians with thousands of cyberattacks a day…
  • xviii: The very same Russian hackers that had been laying trapdoors and virtual explosives
  • xxi: .. is what happened when the NSA’s most powerful cyberweapons got into our adversary’s hands. So in March 2019 I went to Ukraine to survey the ruins for myself.
  • xxvi: If Snowden leaked the PowerPoint bullet points, the Shadow Brokers handed our enemies the actual bullets: the code
  • p8: In the process, “zero-day exploits” became the blood diamonds of the security trade.
  • p257: They were here to recruit, perhaps, or broker the latest and greatest in Argentine spy code.
  • p294: Russian hackers had been shelling Ukraine’s computer networks with cyberattacks, and the timing was ominous.
  • p295: And like those attacks, the KillDisk had a ticking time bomb.
  • p324: But nation-states could just as easily bolt digital bombs and data wipers onto the tools, detonate data, and take America’s government agencies, corporations, and critical infrastructure offline.
  • p334: Across the world, people started ripping their computers out of the wall.
  • p348: Nobody had even bothered to tell the mayor that the virus hitting his city had been traveling on a digital missile built by the nation’s premier intelligence agency.
  • p349: One assailant locked up its systems with ransomware; another detonated EternalBlue to steal data.
  • p381: It was Nakasone who played a critical role in leading Nitro Zeus, the U.S. operation to plant land mines in Iran’s grid.
  • p383: They – the hackers, the officials, the Ukrainians, the voices in the wilderness – had always warned me that a cyber-enabled cataclysmic boom would take us down.

One thing to note is that on rare occasion, Perlroth did temper such wording. One example can be found on page 49 where she says “Again, these weren’t weapons. They were gaping security holes that could be exploited to break into hardware and software, and the American taxpayer was being asked to bankroll the entire supply chain.” Unfortunately, this comes after several lines in the bullet points above and many more like it.

Just as she uses exaggerated terms for exploits and digital attacks, Perlroth does the same when describing hackers. While describing a complex world of zero-day exploits, the brokering of them, and the impact they can cause, she falls back on tired clichés to describe the people using these exploits. Here are a few examples:

  • xix: .. simply beyond that of any four-hundred-pound hacker working from his bed.
  • p22: .. he did not resemble the emaciated hackers and former intelligence types glued to their computer screens
  • p23: .. a little colorful for men who wore black T-shirts and preferred to work in windowless dungeons.
  • p23: .. their diet subsisted of sandwiches and Red Bull.
  • p28: Vendors didn’t want to deal with basement dwellers
  • p28: … pimply thirteen-year-olds in their parents’ basements
  • p28: … ponytailed coders from the web’s underbelly
  • p30: Hackers who barely made it out of their basements would get hammered…

If I used hyperbolic clichés to describe Nicole Perlroth, a New York Times reporter, I wonder how many journalists I would offend?

The Rundown: CVE IDs & RESERVED Status

During the process of assigning a CVE ID, there is a time period between the assignment and the disclosure, and again between the disclosure and the entry becoming available on MITRE’s CVE site or in NIST’s National Vulnerability Database (NVD). During this period, the ID will be shown as RESERVED.

First, it is important to note that when an ID is part of a CVE Numbering Authority (CNA) pool of IDs to potentially be assigned, it is shown in RESERVED status. If an ID is not assigned that year, it is then supposed to be moved to REJECT status the following year per CVE rules. Bit odd that they say the reason for the rejection “will most often be stated“; most often and not always? If a CNA other than MITRE assigns an ID and the researcher and/or vendor later publicly discloses the vulnerability, it may still show as RESERVED. This happens when the CNA fails to notify MITRE, despite that being stipulated in the CNA rules. It can also happen if the CNA notifies MITRE but it slips through the cracks. Pretty simple, right?
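
If you want to check an ID’s status yourself, the sketch below is one way to do it. To be clear, this is a minimal sketch of my own, not an official tool: it assumes MITRE’s cvename.cgi page embeds status markers like “** RESERVED **” in the entry text, which has long been the convention in their data files and pages, but that convention could change.

```python
#!/usr/bin/env python3
# Minimal sketch: check whether a CVE ID shows as RESERVED (or REJECT) on
# MITRE's site. Assumes the cvename.cgi page embeds "** RESERVED **" style
# markers in the entry text, matching the convention in MITRE's data files.
import sys
import urllib.request

def cve_status(cve_id):
    url = "https://cve.mitre.org/cgi-bin/cvename.cgi?name=" + cve_id
    with urllib.request.urlopen(url) as resp:
        page = resp.read().decode("utf-8", errors="replace")
    for marker in ("** RESERVED **", "** REJECT **", "** DISPUTED **"):
        if marker in page:
            return marker.strip("* ")
    return "PUBLIC"

if __name__ == "__main__":
    cve = sys.argv[1] if len(sys.argv) > 1 else "CVE-2000-1253"
    print(cve, "->", cve_status(cve))
```

Point it at CVE-2000-1253, which comes up again below, and it should still print RESERVED.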

If MITRE assigns the ID to a researcher, it is a lot more likely to stay in RESERVED status after disclosure, because the researcher who publicly discloses the vulnerability doesn’t always notify MITRE. You may ask why MITRE doesn’t open the CVE with details themselves if it is public, and that is a great question! The simple answer is, MITRE does not really monitor public sources for disclosures any longer. Back in the day, they monitored Bugtraq and NTBugtraq and encouraged researchers to disclose directly to those mail lists. During that time, they also said they monitored four sources for new vulnerability information but notably did not include either mail list, instead listing four different summary publications. I think we can chalk that up to an error in documentation.

For those not familiar with MITRE’s coverage for CVE, consider that they no longer publish three lists of interest. As late as March 6, 2016, MITRE maintained lists of what they considered:

  • “Full Coverage Sources” – “For nearly all issues disclosed by the source that could be associated with a CVE entry, there will be an associated CVE entry, regardless of the criticality of the issue. Although a source is named as Full Coverage, we purposely use the phrasing “nearly all issues disclosed” to allow the flexibility to potentially postpone coverage of minor issues.”
  • “Partial Coverage Sources” – “The source will be actively monitored but issues will be processed and associated with CVE entries based on a variety of editorial judgments.”
  • “Must-Have Products” – “All products listed are considered to be “must have.” This means that we will ensure that a CVE-ID is issued for any public disclosure for the product provided that the following to provisions are met…”

By the end of 2016, that page maintained the same URL but changed content to become what would be their CNA coverage page. By early 2017, the old URL redirected to a new one about requesting a CVE ID and CNA coverage, which is roughly the same as currently available. This is an important shift in how CVE operates as MITRE basically threw in the towel trying to actively monitor disclosures and moved to relying almost entirely on CNAs and researchers coming to them.

The part that is truly baffling to me is that this taxpayer-funded project, costing us millions a year, thought that monitoring 48 sources for “full” coverage, 45 sources for “partial” coverage, and guaranteeing 45 products was ever adequate to begin with, yet somehow found it a burden at that point. They also disclaimed that they “actively [monitor] many sources beyond this list. These sources include things like blogs from vulnerability researchers, conference proceedings, and media outlets.” Despite that claim and coverage, MITRE was already missing thousands of vulnerability disclosures a year, including ones from sources on their list.

What should worry consumers of CVE is that other vulnerability databases monitor a lot more sources than that, for a lot less money. Any claim that it is more complicated than that, or that the issue lies with their processes, means there is either an incredible amount of red tape or horribly outdated technical processes. If another database can monitor literally several thousand sources a week for a fraction of the price, it speaks to MITRE not evolving over the years. Whew, glad that wraps it up!

Sorry, one last thing. Like entries in REJECT status, we can’t trust entries in RESERVED status either. Based on the above and how MITRE operates, we know there are bound to be quite a few vulnerabilities where a researcher requested an ID, published details, and did not notify MITRE. MITRE’s backwards choice not to monitor sources for disclosures means a disclosure may sit in RESERVED status for some time. How long? I went poking around a bit for fun and found this one. At the time of this blog, CVE-2000-1253 is still in RESERVED status (archive).

The issue? That was disclosed in 2015, and likely earlier. The actual vulnerability details were public as far back as 2003, maybe earlier. The good news? If you aren’t worried about remote root on a medical device, no need to be worried about this one.

The Rundown: CVE IDs & REJECT Status

For analysts and practitioners who digest CVE regularly, you will likely be familiar with CVEs that are in REJECT status. If you are new to CVE or not familiar with some of the more gritty details, a CVE assignment may be rejected for various reasons. When that happens, it will receive a capitalized REJECT status:

The REJECT text links to a page that offers more information but, as of April 2021, actually links to the correct page with the wrong anchor. I’ll link to the correct anchor for your reference, which gives us several reasons an ID might be rejected, including “it being a duplicate CVE Record, it being withdrawn by the original requester, it being assigned incorrectly, or some other administrative reason.”

At the time of this blog, there are almost 9,500 IDs that have been rejected. A significant portion of those come from MITRE being more proactive and enforcing that CVE Numbering Authorities (CNAs) reject unused IDs from prior years, along with a general increase in the total CVEs assigned per year.
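
If you want to reproduce a count like that, the sketch below is one rough way. It assumes the layout of MITRE’s allitems.csv download (the CVE ID in the first column and the description, which starts with “** REJECT **” for rejected entries, in the third); that file is a convenience export rather than a formal API, so treat the column positions as an assumption.

```python
#!/usr/bin/env python3
# Rough sketch: count REJECT entries in MITRE's downloadable CVE list
# (allitems.csv from cve.mitre.org/data/downloads/). Assumes the ID is in
# column 1 and the description, starting with "** REJECT **" for rejected
# entries, is in column 3.
import csv
from collections import Counter

rejects_by_year = Counter()
with open("allitems.csv", encoding="latin-1") as f:
    for row in csv.reader(f):
        # Skip the comment/header lines at the top of the file.
        if len(row) < 3 or not row[0].startswith("CVE-"):
            continue
        if row[2].startswith("** REJECT **"):
            year = row[0].split("-")[1]
            rejects_by_year[year] += 1

print("total rejected:", sum(rejects_by_year.values()))
for year, count in sorted(rejects_by_year.items()):
    print(year, count)
```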

The process of rejecting, and the presence of REJECT entries, is straightforward.

That’s right, if I am taking the time to blog about a topic seemingly so easy, there’s probably more to it. In this case, I wanted to point out a couple of examples of CVE IDs that are in REJECT status but highlight issues. The first is a simple one that underscores that the process of CNAs rejecting CVE IDs may have a problem, or that MITRE has an issue in the way they described the rejected ID. We’ll take CVE-2018-1226 (archive), which was rejected because “The CNA or individual who requested this candidate did not associate it with any vulnerability during 2018. Notes: none.” That is easy enough, right? The problem is that it was rejected by March 19, 2018. Not even a quarter of the way through 2018 and it was rejected because it was not associated with a vulnerability that year? That seems problematic. I’m sure there is a good explanation for this, but the description sure doesn’t provide it.

You may think that pointing that out is pedantic, and you are right. However, there is an important reason we need to be pedantic and expect accurate descriptions from CVE, even for a rejected entry. What if the REJECT message was factually incorrect? What if that CVE ID represented a valid vulnerability that impacted your organization? If you rely on CVE/NVD, you would have a blind spot as a result of errors in their process. Looking at two older rejected CVE IDs as an example, CVE-2015-0788 (archive) and CVE-2015-0789 (archive), we see that both are in REJECT status because they were not associated with a vulnerability in 2015.

Looking closer, we can see that the assigning CNA was Micro Focus International. As such, they should be the single source of truth and provenance for any vulnerability information associated with those CVEs. MITRE would be secondary and should not necessarily be trusted if there is a dispute. In this case, there is a dispute, in the form of NetIQ Identity Manager release notes listing both CVEs as fixed issues in version 4.5 Service Pack 2.

NetIQ was founded in 1995, acquired by Attachmate in 2006, and then acquired by Micro Focus in 2014. With this document we see the conflict: Micro Focus says the IDs were assigned, represented legitimate vulnerabilities, and were fixed, while CVE says they are rejected.

The takeaway here is that even when a CVE is listed as REJECTed: trust, but verify.

The Rundown: CVE IDs, Meanings, & Assumptions

For almost two decades, CVE has been considered an industry standard for vulnerability tracking. A CVE ID can be affiliated with many vulnerabilities, in a format like CVE-2014-54321. Note my choice of ID, from 2014 with a consecutive set of numbers. That is because I specifically chose a ‘sample’ CVE that was set aside as an example of the CVE ID Syntax Change in 2014. This change occurred when it was determined that 9,999 IDs for a single year was not going to be sufficient. Technical guidance on this is available, as well as more basic information and the announcement about the change. Starting out with this hopefully demonstrates that there may be more to an ID than meets the eye.

Fundamentally, the ID is simple; you have the CVE prefix, followed by a year identifier and a numeric identifier. In the CVE used above, that would represent ID 54321 with a 2014 year identifier. Fairly simple! But you are reading an entire blog on these IDs by me, so here is the spoiler: it isn’t so simple, unfortunately. I want to give a rundown of what a CVE ID really is and set the record straight. Why? Because I don’t think MITRE has done a good job with that and, worse, actively works against what could be a clear and simple policy. We’ll use CVE-YEAR-12345 as a representative example throughout, so it is clear which part of an ID we’re talking about.
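
To make the post-2014 syntax concrete, here is a small sketch of how an ID breaks down: the CVE prefix, four digits of year, then a sequence number of four or more digits with no fixed upper length. The regex is my own reading of the published syntax guidance, not anything official.

```python
import re

# CVE ID syntax after the 2014 change: CVE-YYYY-NNNN..., where the sequence
# number is at least four digits and has no fixed maximum length.
CVE_RE = re.compile(r"^CVE-(\d{4})-(\d{4,})$")

def parse_cve(cve_id):
    match = CVE_RE.match(cve_id)
    if not match:
        raise ValueError("not a valid CVE ID: " + cve_id)
    year, sequence = match.groups()
    return int(year), int(sequence)

print(parse_cve("CVE-2014-54321"))  # (2014, 54321) -- the 'sample' ID above
print(parse_cve("CVE-1999-0145"))   # (1999, 145)   -- an old four-digit ID
```

Just remember, as argued below, that the first number is an identifier, not a reliable statement of when the vulnerability was disclosed.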


When CVE was started in 1999, assignments were made based on a public disclosure. However, from the beginning, the YEAR portion was not guaranteed to represent the year of disclosure. This was because MITRE’s policy was to assign an ID for a pre-1999 vulnerability using a CVE-1999 ID. We can see this with CVE-1999-0145, which was assigned for the infamous Sendmail WIZ command, allowing remote root access. This feature was publicly disclosed as a vulnerability on November 26, 1983, as best I have determined (the Sendmail changelog). While it was a known vulnerability and used before that, it was privately shared. If there is a public reference to this vulnerability before that date, leave a comment please!

The takeaway is that a vulnerability from 1983 has a CVE-1999 identifier. So from the very first year, MITRE set a clear precedent that the YEAR portion of an ID does not represent the year of discovery or disclosure. You may think this only happened for vulnerabilities prior to 1999, but that isn’t the case. In the big picture, meaning the 22 years CVE has been running, an ID typically does represent the disclosure year. However, per one of CVE’s founders, “because of CVE reservation, sometimes it aligned with year of discovery“. That is entirely logical and expected, as a CVE ID could be used to track a vulnerability internally at a company before it was disclosed. For example, BigVendor could use the CVE ID not only for their internal teams, such as communicating between security and engineering, but when discussing a vulnerability with the researcher. If a researcher reported several vulnerabilities, using an ID to refer to one of them was much easier than the file/function/vector.

For the early CVE Numbering Authorities (CNA), companies that were authorized to assign a CVE without going through MITRE, this was a common side effect of assigning. If a researcher discovered a vulnerability on December 25 and immediately reported it to the vendor, it might be given, for example, a CVE-2020 ID. When the vendor fixed the vulnerability and the disclosure was coordinated, that might happen in 2021. The founder of CVE I spoke to told me there “weren’t any hard and fast rules for CNAs” even at the start. So one CNA might assign upon learning of the vulnerability while another might assign on public disclosure.

Not convinced for some reason? Let’s check the CVE FAQ about “year portion of a CVE ID”!

What is the significance and meaning of the YEAR portion of a CVE ID
CVE IDs have the format CVE-YYYY-NNNNN. The YYYY portion is the year that the CVE ID was assigned OR the year the vulnerability was made public (if before the CVE ID was assigned).

The year portion is not used to indicate when the vulnerability was discovered, but only when it was made public or assigned.

Examples:

A vulnerability is discovered in 2016, and a CVE ID is requested for that vulnerability in 2016. The CVE ID would be of the form “CVE-2016-NNNN”.
A vulnerability is discovered in 2015 and made public in 2016. If the CVE ID is requested in 2016, the CVE ID would be of the form “CVE-2016-NNNNN”.

All clear, no doubts, case closed!

That seemingly clear policy conflicts with, or may introduce confusion in, other places. Looking at MITRE’s page on CVE Identifiers, we see that “The process of creating a CVE Record begins with the discovery of a potential cybersecurity vulnerability.” My emphasis on ‘discovery’, as that would mean the ID reflects when the vulnerability was discovered, not necessarily even when it was reported to the vendor. There are many cases where a researcher finds a vulnerability but waits days, weeks, months, or even years before reporting it to the vendor, for various reasons. In practice, the ID is assigned based on when the vendor learns of the vulnerability in cases of coordinated disclosure with a CNA; otherwise, the bulk of CVEs are assigned based on the disclosure year.

It gets messier. At the beginning of each year, each CNA will get a pool of CVE IDs assigned. The size of the pool varies by CNA and is roughly based on the prior year of assignments. A CNA that disclosed 10 vulnerabilities in the prior year is likely to get 10 – 15 IDs the subsequent year. Per section 5.1.4 of the CNA rules, any IDs that are not assigned to an issue in a calendar year should be REJECTed: “Those CVE IDs that were unused would be rejected.” But then it stipulates that “CVE IDs for previous calendar years can always be requested if necessary.” So per current rules, a CNA can request a new ID from a prior year despite having REJECTed IDs that were previously included in its pool. That means it is entirely optional, up to each CNA, how they assign.

[Update: Note that the pool of IDs a CNA gets one year may not be the same the next. Not only with regard to the size of the pool, but the first ID may be in an entirely different range, e.g. 2019-1000 vs 2020-8000.]

The takeaway from all this is that we now have many reasons why a CVE ID’s YEAR component does not necessarily tie to when the vulnerability was disclosed. The more important takeaway? If you are generating statistics based on the YEAR component, you are doing it wrong. Any statistics you generate are immediately inaccurate and cannot be trusted. So please don’t do it!
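
To see how quickly YEAR-based statistics fall apart, consider a toy example. The sketch below tallies a few records two ways: by the ID’s YEAR component and by the actual disclosure year. CVE-1999-0145 is the one real data point (public in 1983); the other IDs are hypothetical, invented purely for illustration.

```python
from collections import Counter

# Toy records of (CVE ID, actual disclosure year). CVE-1999-0145 is real
# (the Sendmail WIZ command, public in 1983); the rest are hypothetical
# examples of IDs assigned in one year but disclosed in another.
records = [
    ("CVE-1999-0145", 1983),
    ("CVE-2020-99991", 2021),  # hypothetical: assigned Dec 2020, disclosed 2021
    ("CVE-2020-99992", 2020),  # hypothetical: assigned and disclosed in 2020
    ("CVE-2021-99993", 2021),  # hypothetical
]

by_id_year = Counter(cve.split("-")[1] for cve, _ in records)
by_disclosure = Counter(str(year) for _, year in records)

print("by ID year:    ", dict(by_id_year))     # what naive statistics count
print("by disclosure: ", dict(by_disclosure))  # what actually happened
```

Even with four records, the two tallies disagree; across tens of thousands of real IDs, the error only compounds.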


Finally, a brief overview of the numeric string used after the YEAR. Going back to our example, CVE-YEAR-12345, it is easy to start making assumptions about 12345. The most prevalent assumption, and a completely incorrect one, is that IDs are issued in sequential order. This is not true! As covered above, CNAs are given pools of IDs at the beginning of each year. Oracle and IBM each assign over 700 vulnerabilities a year, so the pools of IDs they receive are substantial. There are over 160 participating CNAs currently, and if each only received 100 IDs, that is over 16,000 IDs set aside before the year even gets going.

In 2021, the effect of this can be seen very clearly. Halfway through April, we’re already seeing public IDs in the 30k range. For example, CVE-2021-30030 is open and represents a vulnerability first disclosed on March 28th. According to VulnDB, only 7,074 vulnerabilities have been disclosed so far this year. From that alone we can clearly see that CVE IDs are not assigned in order.