Assessing the ‘War on Tech’: Huawei vs. U.S.

[I wrote this with Curtis Kang who did a lot of work researching various aspects of this article and provided invaluable help. His research and written contributions made this article possible. It was originally intended to be published on RiskBasedSecurity.com in early 2020 but was passed over so I am publishing it here.]


In 2019, we saw the steadily growing social and policy conflicts between the United States and China reach a boiling point. China has been a major talking point of President Trump’s platform since early in his campaign. However, it wasn’t until last year that we saw active policies enforcing a so-called “war on tech” between the U.S. and major Chinese companies like Huawei and ZTE, and then saw those policies being “sidestepped”. We wanted to examine this from a data perspective, looking at the vulnerabilities in comparable companies on both sides.

To set the stage, it is useful to briefly revisit the U.S. vs. China timeline.

The Trade War

Since taking office in January 2017, President Trump has had a specific interest in China, stating early on that the “U.S. will be on a level playing field”. This led to several rounds of tariffs imposed against China starting in March 2018, and retaliatory tariffs imposed against the U.S. Early in 2019, there was conjecture that President Trump might use an executive order to limit some U.S. companies, such as wireless carriers, from purchasing Chinese electronic devices. That executive order was signed on May 15, 2019, citing the National Emergencies Act (50 U.S.C. 1601 et seq.), and would limit or ban purchases of “technology or services designed, developed, manufactured, or supplied, by persons owned by, controlled by, or subject to the jurisdiction or direction of a foreign adversary”.

While the executive order did not list any country or company, it was widely accepted that it was a move against Huawei in particular. The order contained interesting language, saying that the banned technology or services “poses an undue risk of sabotage” and is “an unacceptable risk” to the United States, among other wording. Technology meeting those criteria would be determined by the Secretary of Commerce, in consultation with nine other bodies “and as appropriate, the heads of other executive departments and agencies”.

On May 20, 2019, the Bureau of Industry and Security (BIS) modified the final rule and granted a Temporary General License (TGL) until August 19, 2019 for transactions regarding, among other things, “Continued Operation of Existing Networks and Equipment” and “Cybersecurity Research and Vulnerability Disclosure.” On August 19, 2019, the BIS extended the TGL by 90 days, until November 19, 2019. Outside the TGL, any request for a license to sell or transfer commodities, software, or technology to Huawei is reviewed “under a policy of presumption of denial.” In other words, the BIS provides virtually no avenue for a continued commercial relationship with Huawei after November 19, 2019.

Months later, when asked if China would retaliate, Chinese foreign ministry spokesman Geng Shuang told reporters “stay tuned.” Two weeks after that, China announced tariffs on $75 billion of U.S. products. This was followed in December with China announcing a ban on foreign technology in “all government offices and public institutions” within three years. The ban also prevented companies such as Google, Dropbox, and Facebook from being used within China. With this, the United States and China were in a new type of technology war based on the premise that the adversarial nation was producing equipment that “poses an undue risk of catastrophic effects”.

The Fear of Backdoors

Computer equipment that poses a risk in the context above typically brings to mind computer vulnerabilities: issues that, with the right knowledge, would allow one country to compromise assets in the adversary nation’s government, business, or infrastructure. Another common scenario raised by security professionals and intelligence officials is that of a backdoor: computer code planted by a party that allows them, and only them, covert remote access to the device. Some members of the U.S. intelligence community would prefer these Chinese products not be used in the technological infrastructure, saying their use “would undercut the ability of the U.S. to defend itself.”

This fear, specifically of Huawei routers from China, has been front-and-center since 2011, and a growing concern even before that. In the past, the concerns largely stemmed from each nation compromising the other’s computer networks in government and business. More recently, with the race to implement a 5G network, security issues around that technology have been heavily scrutinized. This war of technology has reminded us of 2010, when Huawei published an open letter to the U.S. government attempting to allay fears and shift public perception after a decade of suspicion. The company went so far as to request a “thorough investigation” to prove that they are “a normal commercial institution and nothing more.” This prompted eight U.S. senators to urge the White House to examine Huawei contracts and the House Intelligence Committee to investigate and publish a report on both Huawei and ZTE.

Ultimately, that report was inconclusive and stated the following – “despite hours of interviews, extensive and repeated document requests, a review of open-source information, and an open hearing with witnesses from both companies, the Committee remains unsatisfied with the level of cooperation and candor provided by each company.” Even over six years later, in 2019, Adam Segal, director of the Digital and Cyberspace Policy Program at the Council on Foreign Relations, officially stated that no one has found a backdoor in a Huawei product.

This is important to note, given the considerable scrutiny Huawei has received. In addition to publishing their open letter in 2010, Huawei also disclosed their source code to a foreign government, something no U.S. company has done. Despite numerous parties attempting to find, and potentially publish, evidence of an actual backdoor (including the NSA and testing centers in the UK created specifically for that purpose), none has been confirmed. The fact that the U.S. National Security Agency (NSA), with its significant budget and vested interest in determining whether a company like Huawei is shipping backdoored systems, has not disclosed one is compelling.

Ignoring Backdoors and Looking at the Data: Is a Ban Warranted?

Given that history and perspective on the growing tech war between the U.S. and China, we at Risk Based Security wanted to look at some concrete numbers around the vulnerabilities in the companies at the center of the issue.

While much of the focus on this topic has been on fear and the threat of backdoors planted by a vendor at the behest of their government, that is not necessarily where we want to direct attention. Using a backdoor, even if it is well-hidden, would likely bring unwanted attention by giving more positive attribution to those who compromised the machine. Nation-state level hackers would have their own ways into a wide variety of vendors and devices purely based on ‘natural’ vulnerabilities in the code. They simply do not need the access, and risk, a backdoor provides. Why provide hints to the enemy that you’ve “cracked the code” when you could hide behind an existing vulnerability?

Setting aside the possibility of backdoors, the question we’re interested in is this: does one of the government-used devices pose more of a risk due to its vulnerabilities? However, we have found that the “war on tech” cannot be reduced to the classic “how many vulnerabilities are there in…” question, lest unspoken bias drastically affect the perceived meaning of the numbers. While there is no way to do a perfect one-to-one comparison of U.S. versus Chinese vendors, there are some we can begin to compare, with disclaimers.

Phones: BlackBerry vs. Huawei / ZTE

For the general public, and based on much of the mainstream media reporting, Huawei is predominantly associated with its mobile phones. As more of our lives move to mobile, it is no surprise that those in power are concerned about the security of their phones and tablets. BlackBerry and Huawei / ZTE phones are widely viewed as dominant in the U.S. and Chinese governments, respectively. BlackBerry announced a five-year deal for their latest handheld and their AtHoc software with the federal government back in July 2016, specifically the Department of Defense (DoD) Joint Emergency Mass Notification Systems (JEMNS). According to the press release, the DoD chose BlackBerry because of the “secure end-to-end mobility offering .. that [shows the] secure platform is designed to meet their priorities”.

Despite the contract, BlackBerry is not the most widely used phone in the U.S. government. The U.S. Senate officially “ditched” BlackBerry in 2016, but an official memo allows staff to continue using specific devices. In fact, BlackBerry itself has stopped making handheld devices and has shifted to business software and other solutions like AtHoc, apparently used by 70% of federal employees, including the DoD, DHS, VA, DoE, DoA, PFPA, FEMA, IRS, and TSA. For a majority of government employees, the most commonly used phones are now Apple and Samsung products.

With regards to China’s government, specific details about mobile phone adoption are not readily available. Simply looking at Huawei’s market share in China, one might safely assume that their devices are favored by some in the Chinese government. While it has long been rumored that Huawei has a very direct and complicated relationship with their government, which is supported by both Vietnamese academic and U.S. government research, Huawei says their relationship with the government is “no different” than that of any other company in China.

The U.S. government officially uses a mix of BlackBerry, Apple, and Samsung (Android), meaning that there are three major vendors and three major operating systems. For the Chinese government, apparently there is no officially sanctioned device, but it is very likely Huawei (formerly Android, but moving to Harmony OS / Hóngméng in 2020) and ZTE (Android) phones are heavily used. Looking at the last three calendar years, here is a comparison between the vendors to see how many vulnerabilities have been posted:

With these numbers it may seem like BlackBerry represents more risk. However, if BlackBerry shares the same vulnerabilities as any other Android device and also discloses vulnerabilities in the applications it ships, its count can run higher than that of a vendor that does neither. The same can be said for any other Android phone that ships with vulnerable bundled apps and components, so the 1,338 Android vulnerabilities are not a full representation of other devices either (e.g. Samsung, Huawei, ZTE). We also have to remind readers that comparing open source software such as Android to closed source software such as BlackBerry OS and Apple’s iOS can introduce bias in disclosure numbers. Another aspect to consider is that the number of devices in use may influence how many people are actually performing security research on them.
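
To make the counting problem concrete, here is a toy sketch (using hypothetical counts, not VulnDB data) of how a shared upstream code base and bundled-app disclosures change a naive per-vendor tally:

```python
# Toy illustration (hypothetical counts, not real data) of why naive
# per-vendor tallies mislead when vendors share an upstream code base.
android_core = {f"ANDROID-{i}" for i in range(1338)}      # shared upstream issues
blackberry_apps = {f"BB-APP-{i}" for i in range(120)}      # vendor-shipped app issues
samsung_apps = {f"SAMSUNG-APP-{i}" for i in range(200)}    # vendor-shipped app issues

# A vendor that discloses bundled-app issues *and* inherits Android's issues
# looks "worse" than one that only counts what it publishes itself.
blackberry_total = len(android_core | blackberry_apps)
samsung_total = len(android_core | samsung_apps)

print(f"Android core alone:        {len(android_core)}")
print(f"BlackBerry (core + apps):  {blackberry_total}")
print(f"Samsung (core + apps):     {samsung_total}")
# The totals differ because of what each vendor ships and discloses,
# not because one code base is inherently more or less secure.
```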

Ultimately, this means neither the U.S. nor China can justify banning devices based on phone vulnerability history alone. Trying to state that one vendor is more “vulnerable” than the other using currently available vulnerability data alone requires so many disclaimers that the end result loses its potency.

Routers & Telecom: Huawei vs. Cisco et al

The second major concern over technology from one country being pervasive in another is access. Everyone from carriers to end users expects the equipment to function seamlessly, giving us access to the internet and mobile service. That service is built on a complex infrastructure of telecommunications (telecom) hardware and software produced by companies such as Huawei, Cisco, Fujitsu, Nokia, and Ericsson. The telecom hardware includes routers, base transceiver stations, fiber optic networks, satellites, and a lot more. As of 2017, Chinese companies produced the most telecom equipment in the world, about 14% more than the United States.

Looking at these vendors for the last four calendar years, we get another lesson in how significant bias is introduced into vulnerability statistics by disclosure practices. Cisco had 2,227 vulnerabilities in that time. Compared to Huawei with only 813, one might conclude that Cisco’s software is inherently riskier. But compare Cisco with the three other companies. Fujitsu enjoys 79% of the market share by revenue, yet had only 24 vulnerabilities in that time frame. Going by that logic, can we conclude that Fujitsu is the most secure?

Consider that of Fujitsu’s 24 vulnerabilities, only three are in their own products, and one of those is in a keyboard. The other 21 vulnerabilities are in third-party software or hardware (e.g. Intel processors). Cisco, on the other hand, has an incredible number of vulnerabilities reported, but rarely publishes that it is affected by vulnerabilities in OpenSSL or Intel, for example, despite using those technologies in some of its devices.

Both Cisco and Fujitsu maintain contact pages for reporting security vulnerabilities, have a formal PSIRT to respond to vulnerability reports, and publish security advisories. Despite this, their public disclosure histories are about as opposite as you can find in many ways. We know for a fact that both companies use hundreds of third-party libraries in their code, yet neither publishes consistently when third-party vulnerabilities affect their software. Based on our extensive history of tracking vulnerabilities, we are quite certain that Fujitsu products have, or have had, more vulnerabilities than they have officially disclosed. Any notion that Fujitsu (24) is a one-off situation can be dismissed when looking at Nokia (11) and Ericsson (8) for the same period. That suggests Cisco and Huawei are the outliers.

We can apply this same scrutiny to Huawei, with only 813 vulnerabilities despite their large market share, and their considerable transparency when it comes to third-party vulnerabilities. In the world of vulnerability research, access to software and equipment is essential, of course. Some may argue that Huawei equipment isn’t readily available to many researchers, and that might be true for U.S.-based researchers. But the last decade has shown an incredible number of extremely talented security researchers in China, who would presumably have more access. If one were to argue that China is looking to restrict vulnerability disclosure, that certainly will be something to consider moving forward. However, that plan is still preliminary and has not been implemented.

Conclusion: Overkill

You need comprehensive, detailed, and actionable data in order to make informed decisions. Following this mantra, we are comfortable assessing that, with the currently available vulnerability data, a hard stance condemning either side is not possible. As much as we would like it to be, a comparison of vulnerabilities across vendors cannot be a panacea.

That being said, does this mean that the U.S. and Chinese governments are banning each other’s products solely for political posturing, or is it actually an informed decision? As we can see, it may be easy to arrive at a conclusion when looking at the data. But as informed citizens, we all need to be aware of the disclaimers and hidden perspectives that the data may not overtly reveal. The answer is not as simple as “well, this has more vulnerabilities than that”.

The concern is certainly valid, but without concrete evidence of backdoors in Huawei products, a total ban is overkill and may have far-reaching unintended consequences. As the “war on tech” has raged on, both the U.S. and China have suffered casualties.

Commentary on Trend Micro’s Linux Threat Report 2021

On August 23, 2021, Trend Micro released a report titled “Linux Threat Report 2021 1H” by Magno Logan and Pawan Kinger. The report is based on Trend Micro’s Smart Protection Network (SPN), which they call “the data lake for all detections across all Trend Micro’s products”. Basically, every security product they make that detects vulnerabilities and reports back to Trend Micro can be, and is, used in research like this, among other things. They further qualify that the report is based on additional “collected data from honeypots, sensors, anonymized telemetry, and other backend services” and represents “the real-world prevalence of malware and vulnerability exploitation across enterprises” regardless of size or vertical.

Reports that talk about the most exploited vulnerabilities are compelling. They offer a short list of vulnerabilities that organizations can make absolutely sure they have patched, so those issues pose no further risk. Unfortunately, many of these reports have problems. I have written about some before, including the 2015 Verizon DBIR, the 2016 Verizon DBIR, and Radware’s Top Web Exploits of 2020. I wish I had more time, as I have seen other reports on exploit prevalence with similar issues. In this case, Trend Micro’s report falls into at least one of the same traps as those prior reports.

The first issue that pops out is the wording in the report that introduces a major point of confusion. In section two, titled “The Linux threat landscape: What are the top Linux threats?”, under the second heading titled “Vulnerabilities in Linux systems”, we get more details qualifying where the data came from to generate this part of the report:

“… we dissected IPS (Intrusion Prevention System) hits from Trend Micro Cloud One – Workload Security and sifted through over 50 million events, ignored false positives, eliminated test data, and layered data with available threat intel to draw some conclusions.”

Unfortunately, the next sentence immediately introduces some doubt, and we don’t know how much doubt there is because they don’t qualify their margin of error:

“It should be noted that there can be a degree of error here due to the nature of the data and internet activity.”

If the margin of error is 1% in a dataset that large, it is not a big deal. If it is 10%, that can be problematic. If it is 50%, then the report shouldn’t have been written at all. Apparently, we are left to guess where that margin of error falls.

Now, for the section of the report that initially got my attention, we get to the top 15 vulnerabilities. I can’t finish that sentence because there is confusion:

If a list of vulnerabilities includes the top 15 that are “actively exploited” or “have a known proof of concept”, how do you even? Over 4,500 vulnerabilities disclosed in 2021 H1 have a public proof-of-concept or functional exploit. The next sentence clearly repeats the exact same thing. I can’t figure out how to explain that second part unless they are attempting to say “actively exploited and a public proof of concept” to distinguish from exploitation happening where the exploit is not actually published. That seems like a pretty big oversight given the nature of this section of the report. Further, it doesn’t qualify whether the report is based on attempted exploitation that matches a signature or on successful exploitation. After the table of vulnerabilities, the report says “Table 1 shows the top vulnerabilities by volume of triggers,” which strongly suggests it is looking for exploit attempts. But that just leads to more questions, like “if you see an attempt for that vulnerability but against a Windows server, does it count?”

It gets even murkier looking at the table of the 15 vulnerabilities where one of them is listed as “N/A” for severity. That warrants digging into their list more closely and comparing the vulnerability information with that in VulnDB.

There are several observations to be made for this list:

  • CVE-2017-9805 is listed as ‘High’ severity, suggesting they pulled at least some vulnerability data from the National Vulnerability Database. They score the vulnerability 8.1 (High) while VulnDB and CERT VU score it 10.0. Looking at the original disclosure, there are no obvious qualifications that seem to justify an Attack Complexity High (AC:H) rating.
  • Of the 430 vulnerabilities involving WordPress, whether base software or plugins, that allow for remote code execution, why did only one make the list (CVE-2020-25213), and why that one? Given the amount of scanning for vulnerable WordPress installations, I would expect more to be on the list. Hell, even the venerable CVE-2013-4338, given there are other 2013 CVEs on the list.
  • The Atlassian Jira vulnerability is very curious given that it is a remote information disclosure issue and does not disclose sensitive information, such as a password, that would result in further privilege escalation. Based on the logs of attrition.org over the last three months, there has been a single request for /secure/QueryComponent!Default.jspa. There have been five requests for /secure/QueryComponentRendererValue!Default.jspa (CVE-2020-36289), which is another information disclosure issue. There are also hundreds of information disclosure vulnerabilities that yield credentials which can be used to authenticate to an application and gain privileges. I would expect to see any one of those on the list before CVE-2020-14179.
  • Eclipse Jetty (CVE-2017-7657) is very curious to see on this list for several reasons. First, it is a four-year-old vulnerability that does not result in code execution. Second, there is a caveat for exploitation, as explained in the Eclipse bug ticket: if Jetty “was deployed behind an intermediary that imposed some authorization and that intermediary allowed arbitrarily large chunks to be passed on unchanged, then this flaw could be used to bypass the authorization imposed by the intermediary”. To see an HTTP request smuggling issue be that widely exploited, over the thousands of other vulnerabilities with a more serious impact in software found running on Linux, is baffling. This strongly suggests the detection rule matching that vulnerability is overly broad and triggers on exploit attempts for different issues.
  • The nginx vulnerability is listed as N/A, which is curious. Looking at the associated NVD entry (CVE-2013-4547), we see it has a CVSSv2 score but not a CVSSv3 score. That is due to it being a 2013 vulnerability and NVD not retroactively scoring all vulnerabilities. This, along with CVE-2017-9805, tells us that Trend Micro imported the scores from NVD but didn’t adjust for this one by falling back to the CVSSv2 score or developing their own CVSSv3 score. It seems weird to blindly use the CVSSv3 scores and end up with incomplete data when it is a simple correction to make; a minimal sketch of that correction follows this list.
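
For what that “simple correction” could look like, here is a rough sketch against the NVD 2.0 API that prefers a CVSSv3.x score and falls back to CVSSv2 when none exists. The field names reflect my reading of the NVD 2.0 API schema and should be verified before relying on them.

```python
# Minimal sketch: pull a CVE from the NVD 2.0 API and fall back to the
# CVSSv2 score when no CVSSv3 score exists. Field names are assumptions
# based on the published NVD 2.0 API schema.
import requests

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def best_score(cve_id: str):
    """Return (score, version), preferring CVSSv3.x and falling back to CVSSv2."""
    data = requests.get(NVD_API, params={"cveId": cve_id}, timeout=30).json()
    metrics = data["vulnerabilities"][0]["cve"].get("metrics", {})
    for key, version in (("cvssMetricV31", "3.1"), ("cvssMetricV30", "3.0"),
                         ("cvssMetricV2", "2.0")):
        if key in metrics:
            return metrics[key][0]["cvssData"]["baseScore"], version
    return None, None  # no score at all -- report that, rather than "N/A"

# e.g. CVE-2013-4547 (nginx) should fall back to its CVSSv2 score
print(best_score("CVE-2013-4547"))
```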

Looking back to bullet #3, it’s interesting to compare the hits seen on our web server for those Jira endpoints and then note that we also observed 10,659 requests for /wp-login.php in that same period. This, I think, illustrates a serious flaw in the methodology of this report. Most of the scanning we see for vulnerable WordPress instances first looks for the presence of the software before attempting to exploit anything. Rather than throw hundreds of payloads for various flaws in the core software and vulnerable themes or plugins, it’s more efficient to check whether the software is present first: scan for the software to build a list of sites running WordPress before launching more significant attacks that may attract more attention.
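
The comparison above amounts to counting requests for a handful of paths in a web server log. Here is a hedged sketch of that tallying; the log location and the exact paths are placeholders, not the actual attrition.org configuration.

```python
# Count requests for a few paths of interest in a combined-format access log.
# Log path and paths of interest are placeholders for illustration only.
from collections import Counter

PATHS = (
    "/wp-login.php",
    "/secure/QueryComponent!Default.jspa",
    "/secure/QueryComponentRendererValue!Default.jspa",
)

counts = Counter()
with open("/var/log/apache2/access.log") as log:
    for line in log:
        parts = line.split('"')
        if len(parts) < 2:
            continue
        request = parts[1]              # e.g. 'GET /wp-login.php HTTP/1.1'
        fields = request.split()
        if len(fields) >= 2:
            path = fields[1].split("?")[0]   # strip any query string
            if path in PATHS:
                counts[path] += 1

for path in PATHS:
    print(f"{counts[path]:>8}  {path}")
```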

As always, a real test of the veracity of this data would be for another firm that does large-scale monitoring of attacks to publish their own data, limited to approximately the same criteria as Trend Micro. That might explain bullet #4 at the very least.

Niki7a vs Magen – Charity Drive

Like the epic Rumble in the Jungle, after a quick offer to throw money to charity on behalf of their suffering, other generous souls jumped in to do the same. The result? Niki7a and Magen are now locked in a legendary contest to see who can wear their DEF CON 29 bracelet the longest. Whoever wears the bracelet the longest gets to pick the charity we donate to. Magen has opted to have donations go to SailFuture, “a comprehensive child welfare agency, and private school based in St. Petersburg, Florida” whose mission is “to innovate and develop effective programs for high-risk youth, particularly youth in foster care.” At the time of this blog, Niki7a is still deciding.

Image courtesy @drwolfff

This started out with Niki7a’s innocent Tweet:

Magen qualified that there are challenge coin rules to keep each other honest:

With all of the people kicking in to donate, I will maintain a spreadsheet to track pledges and totals!

Reflections on “CVE Approach for Cloud Vulnerabilities”

At Black Hat Briefings USA this week, Ami Luttwak and Shir Tamari called for a “CVE” style approach to documenting vulnerabilities that affect cloud offerings (note: I have not seen the talk). As one of two people (the other being Jake Kouns) who may have the longest history in this specific space, I wanted to give some history and a very high-level reflection on the challenges and benefits of this endeavor. First, for perspective, CVE covers at least 56 pure cloud / site-specific vulnerabilities, the first going back as far as 2000. It covers at least another 461 that are hosted blockchain contracts. At times, some of the CVE (Editorial) Board has advocated for CVE to expand to cover cloud vulnerabilities while others have argued against it. At least one member who advocated for coverage said such issues should get CVE IDs, while others, both for and against the idea, said that if cloud were covered, those issues should get their own ID scheme. MITRE has not made a clear policy change on that in years, despite increasing coverage, primarily of hosted blockchain contracts.

Next, the idea of a “CVE for Cloud” has been done! Well, it was attempted at least. Back in the day, the Open Security Foundation was desperately trying to maintain the Open Sourced Vulnerability Database (OSVDB) and DatalossDB. Neither project had enough volunteers, and community support simply wasn’t there. Despite that, in 2010 we spun up a project called Cloutage (archive.org link), which originally aimed to catalog instances of cloud service outages as well. The idea was that this would tell companies which providers actually had the alleged 99.999% uptime, which regions of a provider were more prone to issues, and more. We had been collecting incidents categorized as “hack”, “outage”, or “vuln” long before that with the intent of such a project, but that is when we finally made them publicly available in an organized fashion. So my reflections come with real-world experience; the challenges we faced, along with the pitfalls of such an endeavor, are still largely present. (For convenience, I will use the term ‘cloud’ to refer to service providers that are subscription offerings, not on-premises hardware / solutions.)

First, let’s begin with one significant change from the mid-2000s to today. Over ten years ago, a vulnerability in a cloud provider’s offering was typically an issue the provider had to fix and was out of the customer’s control. While that is still prevalent today, we now see another class of cloud vulnerabilities that better warrants a “CVE for Cloud” approach: cases where a vulnerability is present in a cloud offering and requires the customer to mitigate it themselves. These will often be described as misconfigurations, like the pervasive open S3 buckets that can be accessed without credentials, disclosing a world of information or hosting malicious software [1] [2] [3]. There is a fundamental difference between a security issue that requires the customer to mitigate and one where they must wait for the cloud provider to fix it.
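
For the customer-mitigated class, the check itself is often trivial. Here is a hedged sketch of the open-bucket case described above: an anonymous request against a bucket’s listing endpoint. The bucket name is a placeholder, and the response behavior should be confirmed against current S3 documentation.

```python
# Sketch of a customer-side check for a publicly listable S3 bucket.
# A 200 response containing XML keys generally indicates anonymous listing
# is allowed; a 403 indicates it is not. Bucket name is hypothetical.
import requests

def bucket_publicly_listable(bucket: str) -> bool:
    url = f"https://{bucket}.s3.amazonaws.com/?list-type=2"
    resp = requests.get(url, timeout=15)
    return resp.status_code == 200 and "<ListBucketResult" in resp.text

print(bucket_publicly_listable("example-company-backups"))  # placeholder bucket
```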

Historically, when a vast majority of the security issues fell into the “wait for the provider to fix” category, we cataloged those vulnerabilities but questioned the value. Sure, it had value in that a company could evaluate cloud providers to see which had more vulnerabilities and, sometimes, how long fixes took. But even that was crippled by the fact that providers would only announce issues after they were fixed. That meant we knew a vulnerability had been present for some undetermined time and then patched. But cloud providers weren’t including how long customers were exposed, and most providers simply were not disclosing at all. So the database captured only a fraction of the actual vulnerabilities, typically with no known exposure window, and customers were powerless to do anything about it.

Another issue we ran into was that on the rare occasions a cloud provider disclosed an issue, the disclosure was extremely vague. More frustrating is when they used language implying that some, but not all, customers were impacted. They didn’t include regions or IP ranges or anything else to identify who was affected, so a customer couldn’t even confirm whether they were without opening a support ticket and hoping for an honest answer. That was frustrating and not very helpful. How will security teams cope if they find out there is a public vulnerability in a cloud offering and no mitigating control? Seems like a recipe for stressful days. Of course, it’s no different than the situation now, except that today ignorance is bliss.

CVSS is the de facto risk scoring framework for vulnerabilities. It was designed around vulnerabilities that appear in CVE, which is predominantly on-premises software. Would CVSS scale to work for cloud vulnerabilities? CVSSv2 seems like it would be as suitable for cloud as for on-premises software, given its simpler scoring elements. CVSSv3 seems like it could be more problematic and would not take advantage of some of the changes over v2. First, the ‘physical’ attack vector value would effectively be eliminated. Second, would Authorization Scope be neutered? Defined initially as the “ability for a vulnerability in one software component to impact resources beyond its means, or privileges”, on the surface it seems like every vulnerability would have to be evaluated more closely for scope change. Every vulnerability potentially causes the cloud provider, a different company than the one using the service, to get involved in some way. The cloud provider’s support and security teams spending time because $CompanyA’s instance got popped technically “impacts resources beyond its means”. I don’t think that is what the CVSS SIG had in mind though. That said, given the shared environments of many cloud offerings, sandbox escapes and lateral movement potentially impact completely different organizations. Does a simple “scope changed” modifier to a score adequately convey the potential risk?
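
To put a number on how little the Scope metric moves, here is a small worked example using the third-party “cvss” Python package (an assumption on my part; any CVSSv3.1 calculator should give the same base scores). The two vectors are identical except for Scope:

```python
# Compare CVSSv3.1 base scores for identical metrics with Scope Unchanged
# vs. Scope Changed, using the third-party "cvss" package (assumed available).
from cvss import CVSS3

unchanged = CVSS3("CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H")
changed   = CVSS3("CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:C/C:H/I:H/A:H")

# scores() returns (base, temporal, environmental)
print("Scope Unchanged:", unchanged.scores()[0])  # expect roughly 8.8
print("Scope Changed:  ", changed.scores()[0])    # expect roughly 9.9
# A tenant escape in a shared cloud environment and a privilege bleed on a
# single host can both land on "S:C"; the roughly one-point bump says nothing
# about how many *other organizations* might be exposed.
```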

Finally, who’s going to run this “CVE for Cloud”? OSVDB and subsequent similar projects give a damning history showing there is a lot of vocal support for community-run vulnerability resources, but those vocal folks very rarely contribute the actual time required. If it is run by one person or a small group of dedicated volunteers, how long before they burn out or commercialize it? There’s a reason most of the public VDBs have gone away or gone commercial to varying degrees (e.g. OSVDB, Secunia, BID). The only thing worse than such a project not getting off the ground is one that does, becomes an essential part of security programs, and then goes away.

I look forward to seeing the talk by Luttwak and Tamari, if it is made public. I’m curious whether they have come up with solutions to the problems that have plagued vulnerability databases for over two decades.

[Update: Josh Bressers points out that the Cloud Security Alliance (CSA) wants to address this problem too.]