Assessing the ‘War on Tech’: Huawei vs. U.S.

[I wrote this with Curtis Kang who did a lot of work researching various aspects of this article and provided invaluable help. His research and written contributions made this article possible. It was originally intended to be published on RiskBasedSecurity.com in early 2020 but was passed over so I am publishing it here.]


In 2019, we saw the steadily growing social and policy conflicts between the United States and China reach a boiling point. China has been a major talking point of President Trump’s platform since early in his campaign. However, it wasn’t until last year that we saw active policies enforcing a so-called “war on tech” between the U.S. and major Chinese companies like Huawei and ZTE, and those policies being “sidestepped”. We wanted to examine this from a data perspective, looking at the vulnerabilities in comparable companies on both sides.

To set the stage, it is useful to briefly revisit the U.S. vs. China timeline.

The Trade War

Since taking office in January 2017, President Trump has had a specific interest in China, stating early on that the “U.S. will be on a level playing field”. This led to several rounds of tariffs being imposed against China starting in March 2018, and retaliatory tariffs being imposed against the U.S. Early in 2019, there was conjecture that President Trump might use an executive order to limit some U.S. companies, such as wireless carriers, from purchasing Chinese electronic devices. That executive order was signed on May 15, 2019, citing the National Emergencies Act (50 U.S.C. 1601 et seq.), and would limit or ban purchases of “technology or services designed, developed, manufactured, or supplied, by persons owned by, controlled by, or subject to the jurisdiction or direction of a foreign adversary”.

While the executive order did not list any country or company, it was widely accepted that it was a move against Huawei in particular. The order contained interesting language, saying that the banned technology or services “poses an undue risk of sabotage” and is “an unacceptable risk” to the United States, among other wording. Technology meeting those criteria would be determined by the Secretary of Commerce, in consultation with nine other bodies “and as appropriate, the heads of other executive departments and agencies”.

On May 20, 2019, the Bureau of Industry and Security (BIS) modified the final rule and granted a Temporary General License (TGL) until August 19, 2019 for transactions regarding, among other things, “Continued Operation of Existing Networks and Equipment” and “Cybersecurity Research and Vulnerability Disclosure.” On August 19, 2019, the BIS extended the TGL by 90 days, until November 19, 2019. Outside the TGL, any request for a license to sell or transfer commodities, software, or technology to Huawei is reviewed “under a policy of presumption of denial.” In other words, the BIS provides virtually no avenue for a continued commercial relationship with Huawei after November 19, 2019.

Months later, when asked if China would retaliate, Chinese foreign ministry spokesman Geng Shuang told reporters “stay tuned.” Two weeks after that, China announced tariffs on $75 billion of U.S. products. This was followed in December with China announcing a ban on foreign technology in “all government offices and public institutions” within three years. The ban also prevented companies such as Google, Dropbox, and Facebook from being used within China. With this, the United States and China were in a new type of technology war based on the premise that the adversarial nation was producing equipment that “poses an undue risk of catastrophic effects”.

The Fear of Backdoors

Computer equipment that poses a risk in the context above typically brings to mind computer vulnerabilities: issues that, with the right knowledge, would allow one country to use software vulnerabilities to compromise assets in the adversary nation’s government, business, or infrastructure. Another common scenario brought up by security professionals and intelligence officials is that of a backdoor: computer code planted by a party that allows them, and only them, covert remote access to the device. Some members of the U.S. intelligence community would prefer these Chinese products not be used in the technological infrastructure, saying their use “would undercut the ability of the U.S. to defend itself.”

This fear, specifically of Huawei routers from China, has been front-and-center since 2011, and a growing concern even before that. In the past, the concerns largely stemmed from each nation compromising the other’s computer networks in government and business. More recently, with the race to implement a 5G network, security issues around that technology have been heavily scrutinized. This war of technology has reminded us of 2010, when Huawei published an open letter to the U.S. government attempting to allay fears and shift public perception after a decade of suspicion. The company went so far as to request a “thorough investigation” to prove that they are “a normal commercial institution and nothing more.” This prompted eight U.S. senators to urge the White House to examine Huawei contracts and the House Intelligence Committee to investigate and publish a report on both Huawei and ZTE.

Ultimately, that report was inconclusive and stated the following – “despite hours of interviews, extensive and repeated document requests, a review of open-source information, and an open hearing with witnesses from both companies, the Committee remains unsatisfied with the level of cooperation and candor provided by each company.” Even over six years later, in 2019, Adam Segal, director of the Digital and Cyberspace Policy Program at the Council on Foreign Relations, officially stated that no one has found a backdoor in a Huawei product.

This is important to note, given the considerable scrutiny Huawei has received. In addition to their open letter in 2010, Huawei also disclosed their source code to a foreign government, something that no U.S. company has done. Despite numerous information security companies and agencies attempting to find, and potentially publish, evidence of an actual backdoor (including the NSA and purpose-built testing centers in the UK), none has been confirmed. The fact that the U.S. National Security Agency (NSA), with a significant budget and a vested interest in determining whether a company like Huawei ships backdoored systems, has not disclosed one is compelling.

Ignoring Backdoors and Looking at the Data: Is a Ban Warranted?

Given that history and perspective on the growing tech war between the U.S. and China, we at Risk Based Security wanted to look at some concrete numbers around the vulnerabilities in the companies at the center of the issue.

While much of the focus on this topic has been on fear and the threat of backdoors planted by a vendor at the behest of their government, that is not necessarily where we want to direct attention. Using a backdoor, even if it is well-hidden, would likely bring unwanted attention by giving more positive attribution to those who compromised the machine. Nation-state level hackers would have their own ways into a wide variety of vendors and devices purely based on ‘natural’ vulnerabilities in the code. They simply do not need the access, and risk, a backdoor provides. Why provide hints to the enemy that you’ve “cracked the code” when you could hide behind an existing vulnerability?

Setting aside the possibility of backdoors, the question we’re interested in is this: does one of the government-used devices pose more of a risk due to its vulnerabilities? However, we have found that the “war on tech” cannot be reduced to the classic “how many vulnerabilities are there in…” question, lest unspoken bias drastically affect the perceived meaning of the numbers. While there is no way to do a perfect one-to-one comparison of U.S. versus Chinese vendors, there may be some that we can begin to compare, with disclaimers.

Phones: BlackBerry vs. Huawei / ZTE

For the general public, and based on much of the mainstream media reporting, Huawei is predominantly associated with its mobile phones. As more of our lives move to mobile, it is no surprise that those in power are concerned about the security of their phones and tablets. For the U.S. and Chinese governments, it is widely viewed that BlackBerry and Huawei / ZTE phones, respectively, are dominant. BlackBerry announced a five-year deal with the federal government for their latest handheld and their AtHoc software back in July 2016, specifically with the Department of Defense (DoD) Joint Emergency Mass Notification Systems (JEMNS). According to the press release, the DoD chose BlackBerry because of the “secure end-to-end mobility offering … that [shows the] secure platform is designed to meet their priorities”.

Despite the contract, BlackBerry is not the most widely used phone in the U.S. government. The U.S. Senate officially “ditched” BlackBerry in 2016, but allows staff to continue using specific devices per an official memo. In fact, BlackBerry has stopped making its own handheld devices and has shifted to business software and other solutions like AtHoc, apparently used by 70% of federal employees, including the DoD, DHS, VA, DoE, DoA, PFPA, FEMA, IRS, and TSA. For a majority of government employees, the most commonly used phones are now Apple and Samsung products.

With regards to China’s government, specific details about mobile phone adoption are not readily available. Simply looking at Huawei’s market share in China, one might safely assume that their devices are favored by some in the Chinese government. While it has long been rumored that Huawei has a very direct and complicated relationship with their government, which is supported by both Vietnamese academic and U.S. government research, Huawei says their relationship with the government is “no different” than that of any other company in China.

The U.S. government officially uses a mix of BlackBerry, Apple, and Samsung (Android), meaning that there are three major vendors and three major operating systems. For the Chinese government, apparently there is no officially sanctioned device, but it is very likely Huawei (formerly Android, but moving to Harmony OS / Hóngméng in 2020) and ZTE (Android) phones are heavily used. Looking at the last three calendar years, here is a comparison between the vendors to see how many vulnerabilities have been posted:

With these numbers it may seem like BlackBerry represents more risk. However, BlackBerry devices share many of the same vulnerabilities as any other Android device, and BlackBerry discloses vulnerabilities in the applications they ship, which can inflate their count. The same can be said for any other Android phone that ships with vulnerable packaged apps and components, so the 1,338 Android vulnerabilities are not fully representative of any one device (e.g. Samsung, Huawei, ZTE) either. We also have to remind readers that comparing open source software such as Android to closed source software such as BlackBerry OS and Apple’s iOS can introduce bias in disclosure numbers. Another aspect to consider is that the number of devices in use may influence how many people are actually performing security research on them.

Ultimately, this means neither the U.S. nor China can justify banning devices based on phone vulnerability history alone. Trying to state that one vendor is more “vulnerable” than the other using currently available vulnerability data requires so many disclaimers that the end result loses its potency.

Routers & TelCom: Huawei vs. Cisco et al

The second major aspect of concerns over technology from one country being pervasive in another is that of access. Everyone from the carriers to end users expects the equipment to function seamlessly, giving us access to the internet and mobile service. That service is built on a complex infrastructure of telecommunications (telecoms) hardware and software produced by companies such as Huawei, Cisco, Fujitsu, Nokia, and Ericsson. The telecom hardware includes routers, base transceiver stations, fiber optical networks, satellites, and a lot more. As of 2017, Chinese companies produced the most telecom equipment in the world, about 14% more than the United States.

Looking at these vendors for the last four calendar years, we get another lesson in how significant bias is introduced into vulnerability statistics by disclosure practices. Cisco had 2,227 vulnerabilities in that time. Compared to Huawei with only 813, one might conclude that Cisco’s software is inherently riskier. But compare Cisco with the three other companies. Fujitsu enjoys 79% of the market share by revenue, yet had only 24 vulnerabilities in that time frame. By that logic, can we conclude that Fujitsu is the most secure?

Consider that of Fujitsu’s 24 vulnerabilities, only three are in their own products, and one of those is in a keyboard. The other 21 vulnerabilities are in third-party software or hardware (e.g. Intel processors). Cisco, on the other hand, has an incredible number of vulnerabilities reported, but rarely publishes that it is affected by vulnerabilities in OpenSSL or Intel products, for example, despite using those technologies in some of its devices.

Both Cisco and Fujitsu maintain contact pages for reporting security vulnerabilities, have formal PSIRT teams to respond to vulnerability reports, and publish security advisories. Despite this, their public disclosure histories are about as opposite as you can find. We know for a fact that both companies use hundreds of third-party libraries in their code, yet neither consistently publishes when third-party vulnerabilities affect their software. Based on our extensive history of tracking vulnerabilities, we are quite certain that Fujitsu products have, or have had, more vulnerabilities than they have officially disclosed. Any notion that Fujitsu (24) is a one-off situation can be dismissed by looking at Nokia (11) and Ericsson (8) for the same period. That suggests Cisco and Huawei are the outliers.

We can apply this same scrutiny to Huawei, with only 813 vulnerabilities despite their large market share and their considerable transparency when it comes to third-party vulnerabilities. In the world of vulnerability research, access to software and equipment is essential, of course. Some may argue that Huawei equipment isn’t readily available to many researchers, and that might be true for U.S.-based researchers. But the last decade has shown an incredible number of extremely talented security researchers in China, who would presumably have more access. If one were to argue that China is looking to restrict vulnerability disclosure, that will certainly be something to consider moving forward. However, that plan is still preliminary and has not been implemented.

Conclusion: Overkill

You need comprehensive, detailed, and actionable data in order to make informed decisions. Following this mantra, we are comfortable in assessing that, with the currently available vulnerability data, a hard stance condemning either side is not possible. As much as we would like it to be, a comparison of vendor vulnerability counts cannot be a panacea.

That being said, does this mean that the U.S. and Chinese governments are banning each other’s products solely for political posturing, or is it actually an informed decision? It may be easy to arrive at a conclusion when first looking at the data, but as informed citizens we all need to be aware of the disclaimers and hidden perspectives that the data may not overtly reveal. The answer is not so simple as “well, this has more vulnerabilities than that”.

Even without concrete evidence of backdoors in Huawei products the concern is valid, but a total ban is overkill and may have far-reaching unintended consequences. As the “war on tech” has raged on, both the U.S. and China have suffered casualties.

Commentary on Trend Micro’s Linux Threat Report 2021

On August 23, 2021, Trend Micro released a report titled “Linux Threat Report 2021 1H” by Magno Logan and Pawan Kinger. The report is based on Trend Micro’s Smart Protection Network (SPN), which they call “the data lake for all detections across all Trend Micro’s products”. Basically, every security product they make that detects vulnerabilities and reports back to Trend Micro can be, and is, used in research like this, among other things. They further qualify that the report is based on additional “collected data from honeypots, sensors, anonymized telemetry, and other backend services” and represents “the real-world prevalence of malware and vulnerability exploitation across enterprises” regardless of size or vertical.

Reports that talk about the most exploited vulnerabilities are compelling. They offer a short list of vulnerabilities that organizations can make absolutely sure they have patched, removing that risk. Unfortunately, many of these reports have problems. I have written about some before, including the 2015 Verizon DBIR, the 2016 Verizon DBIR, and Radware’s Top Web Exploits of 2020. I wish I had more time, as I have seen other reports on exploit prevalence with similar issues. In this case, Trend Micro’s report falls into at least one of the same traps as those prior reports.

The first issue that pops out is wording in the report that introduces a major point of confusion. In section two, titled “The Linux threat landscape: What are the top Linux threats?”, under the second heading, “Vulnerabilities in Linux systems”, we get more details qualifying where the data came from to generate this part of the report:

“… we dissected IPS (Intrusion Prevention System) hits from Trend Micro Cloud One – Workload Security and sifted through over 50 million events, ignored false positives, eliminated test data, and layered data with available threat intel to draw some conclusions.”

Unfortunately, the next sentence immediately introduces some doubt, and we don’t know how much doubt because they don’t quantify their margin of error:

“It should be noted that there can be a degree of error here due to the nature of the data and internet activity.”

If the margin of error is 1% in a dataset that large (roughly 500,000 of the 50 million events), it’s not a big deal. If it is 10%, that can be problematic. If it is 50%, then the report shouldn’t have been written at all. Apparently, we are left to guess where that margin of error lies.

Now, for the section of the report that initially got my attention, we get to the top 15 vulnerabilities. I can’t finish that sentence because there is confusion:

If a list of vulnerabilities includes the top 15 that are “actively exploited” or “have a known proof of concept”, how do you even? Over 4,500 vulnerabilities in 2021 H1 have a public proof-of-concept or functional exploit. The next sentence repeats exactly the same thing. I can’t figure out how to explain that second part unless they are attempting to say “actively exploited and a public proof of concept” to distinguish from exploitation that is happening where the exploit is not actually published. That seems like a pretty big oversight given the nature of this section of the report. Further, it doesn’t qualify whether the report is based on attempted exploitation that matches a signature, or successful exploitation. After the table of vulnerabilities the report says “Table 1 shows the top vulnerabilities by volume of triggers,” which strongly suggests it is counting exploit attempts. But that just leads to more questions like “if you see an attempt for that vulnerability but against a Windows server, does it count?”

It gets even murkier looking at the table of the 15 vulnerabilities where one of them is listed as “N/A” for severity. That warrants digging into their list more closely and comparing the vulnerability information with that in VulnDB.

There are several observations to be made for this list:

  • CVE-2017-9805 is listed as ‘High’ severity, suggesting they pulled at least some vulnerability data from the National Vulnerability Database (NVD). They score the vulnerability 8.1 (High) while VulnDB and CERT VU score it 10.0. Looking at the original disclosure, there are no obvious qualifications that seem to justify an Access Complexity High (AC:H) rating.
  • Of the 430 vulnerabilities involving WordPress (base or plugins) that allow for remote code execution, why did only one make the list (CVE-2020-25213), and why that one? Given the amount of scanning for vulnerable WordPress installations, I would expect more to be on the list. Hell, even the venerable CVE-2013-4338, given there are other 2013 CVEs on the list.
  • The Atlassian Jira vulnerability is very curious given that it is a remote information disclosure issue that does not disclose sensitive information, such as a password, that would enable further privilege escalation. Based on the logs of attrition.org over the last three months, there has been a single request for /secure/QueryComponent!Default.jspa. There have been five requests for /secure/QueryComponentRendererValue!Default.jspa (CVE-2020-36289), which is another information disclosure issue. There are also hundreds of information disclosure vulnerabilities that yield credentials which can be used to authenticate to an application and gain privileges. I would expect to see any one of those on the list before CVE-2020-14179.
  • Eclipse Jetty (CVE-2017-7657) is very curious to see on this list for several reasons. First, it is a four-year-old vulnerability that does not result in code execution. Second, there is a caveat for exploitation, as explained in the Eclipse bug ticket: if Jetty “was deployed behind an intermediary that imposed some authorization and that intermediary allowed arbitrarily large chunks to be passed on unchanged, then this flaw could be used to bypass the authorization imposed by the intermediary”. To see an HTTP request smuggling issue this widely exploited, over the thousands of other vulnerabilities that allow for a more serious impact in software found running on Linux, is baffling. This strongly suggests the detection rule matching that vulnerability is overly broad and triggers on exploit attempts for different issues.
  • The nginx vulnerability is listed with an “N/A” severity, which is curious. Looking at the associated NVD entry (CVE-2013-4547), we see it has a CVSSv2 score but not a CVSSv3 score. That is because it is a 2013 vulnerability and NVD has not retroactively scored older vulnerabilities. This, along with CVE-2017-9805, tells us that Trend Micro imported scores from NVD but didn’t adjust for this one by using the CVSSv2 score or developing their own CVSSv3 score. It seems weird to blindly use the CVSSv3 scores and ship incomplete data when it is a simple correction to make.

Looking back to bullet #3, it’s interesting to compare the hits seen on our web server, and then note that we also observed 10,659 requests for /wp-login.php in that same period. This, I think, illustrates a serious flaw in the methodology of this report. Most of the scanning we see for vulnerable WordPress instances first looks for the presence of the software before attempting to exploit anything. Rather than throw hundreds of payloads for various flaws in the core software and vulnerable themes or plugins, it’s more efficient to check whether the software is present at all: scan for the software to build a list of sites running WordPress before launching more significant attacks that may attract more attention.
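As a rough illustration of the kind of log review described above, here is a minimal sketch in Python; the log file name and the set of paths are hypothetical stand-ins, and the log is assumed to be in the common combined format:

# Count requests for specific paths in a combined-format access log.
from collections import Counter

PATHS_OF_INTEREST = {
    "/wp-login.php",                                     # WordPress presence check
    "/secure/QueryComponent!Default.jspa",               # CVE-2020-14179
    "/secure/QueryComponentRendererValue!Default.jspa",  # CVE-2020-36289
}

counts = Counter()
with open("access.log", encoding="utf-8", errors="replace") as log:
    for line in log:
        try:
            # The request is the first quoted field: "METHOD /path HTTP/x.x"
            method, path, protocol = line.split('"')[1].split()
        except (IndexError, ValueError):
            continue  # skip malformed lines
        counts[path.split("?")[0]] += 1  # strip any query string

for path in sorted(PATHS_OF_INTEREST):
    print(f"{counts[path]:>7}  {path}")

Run against our logs, a tally like this is what shows thousands of probes for /wp-login.php against a handful for the Jira endpoints.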

As always, a real test of the veracity of this data would be for another firm that does large-scale monitoring of attacks to publish its own data, limited to the same approximate criteria as Trend Micro. That might explain bullet #4 at the very least.

Niki7a vs Magen – Charity Drive

Like the epic Rumble in the Jungle, after a quick offer to throw money at charity on behalf of their suffering, other generous souls jumped in to do the same. The result? Niki7a and Magen are now locked in a legendary contest to see who can wear their DEF CON 29 bracelet the longest; the winner gets to pick the charity we donate to. Magen has opted to have donations go to SailFuture, “a comprehensive child welfare agency, and private school based in St. Petersburg, Florida” whose mission is “to innovate and develop effective programs for high-risk youth, particularly youth in foster care.” At the time of this blog, Niki7a is still deciding.

Image courtesy @drwolfff

This started out with Niki7a’s innocent Tweet:

Magen qualified that there are challenge coin rules to keep each other honest:

With all of the people kicking in to donate, I will maintain a spreadsheet to track pledges and totals!

Reflections on “CVE Approach for Cloud Vulnerabilities”

At Black Hat Briefings USA this week, Ami Luttwak and Shir Tamari called for a “CVE” style approach to documenting vulnerabilities that affect cloud offerings (note: I have not seen the talk). As one of two people (the other being Jake Kouns) who may have the longest history in this specific space, I wanted to give some history and very high-level reflection on the challenges and benefits of this endeavor. First, for perspective, CVE covers at least 56 pure cloud / site-specific vulnerabilities, the first going back as far as 2000. It covers at least another 461 that are hosted blockchain contracts. At times, some on the CVE (Editorial) Board have advocated for CVE to expand to cover cloud vulnerabilities while others have argued against it. At least one advocate said cloud vulnerabilities should get CVE IDs, while others, both for and against coverage, argued that if cloud were covered it should get its own ID scheme. MITRE has not made a clear policy change on that in years, despite increasing coverage, primarily of hosted blockchain contracts.

Next, the idea of a “CVE for Cloud” has been done! Well, it was attempted at least. Back in the day, the Open Security Foundation was desperately trying to maintain the Open Sourced Vulnerability Database (OSVDB) and DatalossDB. Neither project had enough volunteers, and community support simply wasn’t there. Despite that, in 2010 we spun up a project called Cloutage (archive.org link) that aimed to catalog instances of cloud service outages as well as security incidents and vulnerabilities. The idea was this would tell companies which providers actually had the alleged 99.999% uptime, which regions of a provider were more prone to issues, and more. We had been collecting incidents categorized as “hack”, “outage”, or “vuln” long before that with the intent of such a project, but that is when we finally made them publicly available in an organized fashion. So my reflections come with real-world experience; the challenges we faced, along with the pitfalls of such an endeavor, are still largely present. (For convenience, I will use the term ‘cloud’ to refer to service providers that are subscription offerings and not on-premises hardware / solutions.)

First, let’s begin with one significant change from the mid-2000s to today. Over ten years ago, a vulnerability in a cloud provider was typically an issue the provider had to fix, out of the customer’s control. While that is still prevalent today, we now see another class of cloud vulnerabilities that better warrants a “CVE for Cloud” approach: cases where a vulnerability is present in a cloud offering and requires the customer to mitigate it themselves. These are often described as misconfigurations, like the pervasive open S3 buckets that can be accessed without credentials, disclosing a world of information or hosting malicious software [1] [2] [3]. There is a fundamental difference between a security issue that requires the customer to mitigate and one where they must wait for the cloud provider to fix it.
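To make the open bucket example concrete, here is a minimal sketch, assuming the third-party requests package and a hypothetical bucket name; it checks whether a bucket allows unauthenticated listing, the kind of issue only the customer can fix:

# Check whether an S3 bucket allows anonymous listing (the classic
# "open bucket" misconfiguration). The bucket name is hypothetical.
import requests

def bucket_is_open(bucket: str) -> bool:
    # Unauthenticated ListObjectsV2 request; a 200 response containing an
    # XML listing means anyone can enumerate the bucket's contents.
    resp = requests.get(
        f"https://{bucket}.s3.amazonaws.com/?list-type=2", timeout=10
    )
    return resp.status_code == 200 and "<ListBucketResult" in resp.text

print(bucket_is_open("example-bucket"))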

Historically, when the vast majority of security issues fell into the “wait for the provider to fix” category, we cataloged those vulnerabilities but questioned the value. Sure, it had value in that a company could evaluate cloud providers to see which had more vulnerabilities and, sometimes, how long fixes took. But even that was crippled by the fact that companies would only announce issues after they were fixed. That meant we knew a vulnerability had been present for some undetermined time, and then patched. Cloud providers weren’t including how long customers were exposed, and most providers simply were not disclosing at all. So the database was a fraction of the actual vulnerabilities, typically with no known exposure window, and customers were powerless to do anything about it.

Another issue we ran into was that on the rare occasions a cloud provider disclosed an issue, it was extremely vague. More frustrating was when they used language implying that some, but not all, customers were impacted. They didn’t include regions or IP ranges or anything else to identify who was affected, so a customer couldn’t even confirm they were unaffected without opening a support ticket and hoping for an honest answer. That was frustrating and not very helpful. How will security teams cope if they find out there is a public vulnerability in a cloud offering and no mitigating control? Seems like a recipe for stressful days. Of course, it’s no different than now, except that today ignorance is bliss.

CVSS is the de facto risk scoring framework for vulnerabilities. It was designed around vulnerabilities that appear in CVE, which is predominantly on-premises software. Would CVSS scale to work for cloud vulnerabilities? CVSSv2 seems like it would be as suitable for cloud as for on-premises software, given its simpler scoring elements. CVSSv3 seems like it could be more problematic and not take advantage of some of its changes over v2. First, the ‘physical’ attack vector would effectively be eliminated. Second, would Scope be neutered? Defined initially as the “ability for a vulnerability in one software component to impact resources beyond its means, or privileges”, on the surface it seems like every vulnerability would have to be evaluated more closely for scope change. Every vulnerability potentially causes the cloud provider, a different company than the one using the service, to get involved in some way. The cloud provider’s support and security teams spending time because $CompanyA’s instance got popped technically “impacts resources beyond its means”. I don’t think that is what the CVSS SIG had in mind, though. That said, given the shared environments of many cloud offerings, sandbox escapes and lateral movement potentially impact completely different organizations. Does a simple “scope changed” modifier to a score adequately convey the potential risk?
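As a rough illustration of how much that one metric moves a score, here is a minimal sketch using the open-source cvss Python package (assumed installed via pip install cvss); the vectors are a generic reflected-XSS-style example, not a score for any particular cloud issue:

# Identical metrics except Scope: Unchanged (S:U) versus Changed (S:C).
from cvss import CVSS3

unchanged = CVSS3("CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:L/I:L/A:N")
changed = CVSS3("CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:C/C:L/I:L/A:N")

print(unchanged.scores()[0])  # 5.4
print(changed.scores()[0])    # 6.1

A single binary modifier is doing a lot of work there, which is exactly the concern in shared cloud environments.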

Finally, who’s going to run this “CVE for Cloud”? OSVDB and subsequent similar projects provide a damning history showing there is a lot of vocal support for community-run vulnerability resources, but they very rarely see the actual time required from those vocal folks. If it is run by one person or a small group of dedicated volunteers, how long before they burn out or commercialize it? There’s a reason most of the public VDBs have gone away, or gone commercial to varying degrees (e.g. OSVDB, Secunia, BID). The only thing worse than such a project not getting off the ground is one that does, becomes an essential part of security programs, and then goes away.

I look forward to seeing the talk by Luttwak and Tamari, if it is made public. I’m curious whether they have come up with solutions to the problems that have plagued vulnerability databases for over two decades.

[Update: Josh Bressers points out that the Cloud Security Alliance (CSA) wants to address this problem too.]

July 2021 Reviews

[A summary of my movie and TV reviews from last month, posted to Attrition.org, mixed in with other reviews.]


The Tomorrow War (2021)
Medium: Movie (Amazon)
Rating: 0.5/5 don’t wait until tomorrow to ignore this trash
Reference(s): IMDB Listing || Trailer
This movie was going swimmingly while everything was just regular life in the present day. The second the “tomorrow” part started, it went to shit. From that point on it was a series of overdone clichés from hundreds of sci-fi movies before it, mixed with clichés from hundreds of war movies. I think the blame is on the writers, who wrote a series of scenes that relied on us forgetting about logic, continuity, and time travel. The comic relief? Not funny. The heart-warming moments? Not warming. This movie is trash.


The Watch (2020)
Medium: TV (BBC)
Rating: 4.4/5 worth the .. irreverent policing
Reference(s): IMDB Listing || Trailer
This is not a show about superheroes. In fact, they are barely heroes at all, but they find their way. The series is based on a world and characters created by Terry Pratchett, whom I haven’t read. It was a little rough to get into not knowing anything about it, and the characters were odd at first, but then I appreciated them as over-the-top quirky weirdos who find their way together as they “police” the city. The casting is superb and each actor really embraces their role, bringing something special to their character. Season one offers a fun story that is wrapped up neatly at the end, of course with a teaser that it will return.


Debris S1 (2021)
Medium: TV (Peacock)
Rating: 2.5/5 potentially good show lost in all the pieces
Reference(s): IMDB Listing || Trailer
Two episodes in, the show was interesting. I immediately formed ideas of what I thought would come later in the season, but in one case they did it in the first episode. It set a great pace and suggested a promising show with some creativity. I could suspend disbelief on the whole alien technology thing but couldn’t when it came to the “protocols” used to investigate alien debris. Then, about halfway through the series, it began to spiral out of control with sub-plots and ideas that simply weren’t compelling. Add to that an attempt to create some kind of sexual tension between the two main characters that just wasn’t there, and it’s easy to understand why the show was canceled after one season.


Black Widow (2021)
Medium: Movie (Disney+)
Rating: 1 / 5 i’d rather watch the peacock spider
Reference(s): IMDB Listing || Trailer
Marvel whipped out its script; the script they use to write movie scripts for MCU films. Formulaic, predictable, and by now boring. Black Widow had potential for a more interesting back story and tie-in to the current timeline, and Marvel fell way short on that, I think. Comedic character? Check. Over-engineered fight scenes that aren’t clever? Check. Basically, instead of making the character more interesting and getting us more invested, we’re left with a movie that didn’t really matter if it came out or not. Personally, I find the Black Widow character entirely more interesting after the first movie she appeared in, and a lot less so after this one. Finally, I hope the camera crew didn’t injure their backs bending over to get all the shots highlighting the women’s asses.


Hitman’s Wife’s Bodyguard (2021)
Medium: Movie
Rating: 2.5 / 5 The word ‘fuck’ is 1/3rd of the plot
Reference(s): IMDB Listing || Trailer
If you saw the first movie, you know exactly what to expect from this one. Over-the-top gun fights, Ryan Reynolds being himself as you try to divorce your mind from thinking “Deadpool”, and apparently an attempt to break the record for use of the word “fuck”. As with many sequels, they try to make it more compelling by adding some big-name actors. In this case, Morgan Freeman, who didn’t really add much to the movie. So this is good mindless entertainment while you work on your puzzle or cross-stitch or sort your Legos.

Perlroth, Miller, and the First Remote iPhone Vuln

In what is sure to be my last blog (?!) born out of reading “This Is How They Tell Me The World Ends” by Nicole Perlroth, this article is a quick dive into a single paragraph containing one sentence with an alleged fact pertaining to vulnerability history. As a self-described Vulnerability Historian, this is of course of interest, especially if the statement is questionable. From page 63 of the book, here is the paragraph for full context, with the relevant bits in bold:

But that’s not how he would be remembered. One month after unleashing his white paper on the world, Charlie [Miller] made an even bigger name for himself with the first remote hack of an iPhone. The conventional wisdom had always been that the iPhone – with its sleek design and closely held code – was more secure than the alternatives. But Charlie blew a hole right through that theory. He demonstrated before an audience of hundreds how easily he could remotely control anyone’s iPhone simply by steering their browser to a malicious website he created.

With that, we’ll examine three components of this claim:

  1. Was the vulnerability remote?
  2. Was the vulnerability in an iPhone?
  3. Was Miller the first?

Before jumping to conclusions on those answers, there’s a twist or two! If you’re already grumbling over me being “wordy”, you can scroll down to the end for the CliffsNotes TL;DR and skip the rabbit hole adventure. And also thank me for not posting each section separately, teasing it out and making you wait two weeks for an answer.

Was it Remote?

Perlroth cites the quoted section above from a 2007 article by John Schwartz titled “iPhone Flaw Lets Hackers Take Over, Security Firm Says”. To make sure we understand the context, here is the relevant portion of that article, with an important bit highlighted:

Once he was there, the site injected a bit of code into the iPhone that then took over the phone. The phone promptly followed instructions to transmit a set of files to the attacking computer that included recent text messages — including one that had been sent to the reporter’s cellphone moments before — as well as telephone contacts and email addresses. “We can get any file we want,” he said. Potentially, he added, the attack could be used to program the phone to make calls, running up large bills or even turning it into a portable bugging device.

For clarity, and to show this was widely reported, we see from Farhad Manjoo of Salon in his article “Security researchers find a dangerous iPhone flaw” that the attack vector is stated more clearly:

The hack — first reported by John Schwartz in Monday’s New York Times — can be activated through a malicious Web site, a Wi-Fi access point or a link sent to the phone through e-mail or a text message. After it’s activated, an attacker can make the phone transmit files or passwords, run up wireless services or even record audio and relay it back to the attacker.

The reason the attack vector is so important is that it speaks to the first part of the claim, in which Perlroth says it was the “first remote hack”. In the context of vulnerabilities, remote means that a vulnerability can be exploited over a network without user interaction from the victim. If the exploit requires the victim to perform an action of any kind, including clicking a link, it is a user-assisted or context-dependent attack. While that is still a serious attack, since we know the bar for clicking a link is low, it is important to make this distinction. Why? Let’s start with risk scoring, using Remote Code Execution (RCE) and Arbitrary Code Execution (ACE) for reference.

Using the Common Vulnerability Scoring System (CVSS), an industry standard for better or worse, we get four sets of scores to look at. First, understand that many organizations use a three-tier “stoplight” system for general risk severity (i.e. low, medium, high) or a five-tier system that adds ‘informational’ and ‘critical’ ratings. The five-tier system breaks down as 0.0 (informational), 0.1 – 3.9 (low), 4.0 – 6.9 (medium), 7.0 – 8.9 (high), and 9.0 – 10.0 (critical). For organizations that prioritize at this higher level first, focusing on critical before moving to high-risk, the difference between an 8.9 and a 9.0 may mean a lot. So let’s compare an RCE versus an ACE using both CVSS version 2 and 3, which are the same in spirit but different in implementation:
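For a concrete version of that comparison, here is a minimal sketch, assuming the open-source cvss Python package (pip install cvss); these are representative full-impact vectors, not scores for any specific vulnerability:

# CVSSv3: remote (UI:N) versus user-assisted (UI:R) code execution.
from cvss import CVSS2, CVSS3

rce_v3 = CVSS3("CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H")
ace_v3 = CVSS3("CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H")

# CVSSv2 has no user-interaction metric; user-assisted issues were
# commonly scored with Access Complexity Medium (AC:M) instead.
rce_v2 = CVSS2("AV:N/AC:L/Au:N/C:C/I:C/A:C")
ace_v2 = CVSS2("AV:N/AC:M/Au:N/C:C/I:C/A:C")

print(rce_v3.scores()[0], ace_v3.scores()[0])  # 9.8 vs 8.8
print(rce_v2.scores()[0], ace_v2.scores()[0])  # 10.0 vs 9.3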

As we see, arbitrary code execution under CVSSv3 is scored as 8.8 which is only “high” risk while under CVSSv2 it is “critical”. Compare that to remote code execution which is “critical” under both scoring systems. So the distinction between remote and user-assisted is important in both severity and factual accuracy. Jumping back to specifics of the attack:

The hack — first reported by John Schwartz in Monday’s New York Times — can be activated through a malicious Web site, a Wi-Fi access point or a link sent to the phone through e-mail or a text message.

This is clearly an arbitrary code execution situation, as it requires a victim to visit a malicious web page in some manner. That is a distinction Charlie Miller has made himself many times over the years. This is not a remote vulnerability. At this point it would be more accurate to say “first user-assisted code execution vulnerability in an iPhone”. That’s a bit different, huh?

Was the vulnerability in an iPhone?

The simple answer is, of course, “yes”. But it’s more complicated than that, as we’ll see, along with why it matters. When attributing a vulnerability to a given device like an iPhone, we should note whether the vulnerable code is in Apple’s own iPhone code or in a third-party library used by the device. This distinction starts us down a rabbit hole.

First, we’ll reference the vulnerability in question which is CVE-2007-3944, and it was cited in the iPhone v1.0.1, macOS Security Update 2007-007, and Safari 3 Beta Update 3.0.3 updates:

Safari

CVE-ID: CVE-2007-3944

Available for: iPhone v1.0

Impact: Viewing a maliciously crafted web page may lead to arbitrary code execution

Description: Heap buffer overflows exist in the Perl Compatible Regular Expressions (PCRE) library used by the JavaScript engine in Safari. By enticing a user to visit a maliciously crafted web page, an attacker may trigger the issue, which may lead to arbitrary code execution. This update addresses the issue by performing additional validation of JavaScript regular expressions. Credit to Charlie Miller and Jake Honoroff of Independent Security Evaluators for reporting these issues.

How do we know it was this vulnerability and not a subsequent one? Perlroth says it came one month after Miller’s paper, “The Legitimate Vulnerability Market”, from May 2007. Miller and Honoroff’s vulnerability was shared with Apple on July 17 and publicly disclosed on July 19. Close enough to a month, and the next iPhone update was 1.1.1 in September, which did not credit Miller. You can also notice that while Perlroth credits Charlie Miller, the vulnerability was additionally credited to a second individual, Jake Honoroff.

We can see that the first two advisories attribute the vulnerability to code in Safari, while the Safari advisory attributes it to WebKit, an open-source web browser engine used by Apple Safari and other browsers (and later forked for Google Chrome’s Blink engine). But the advisory tells us the issue is actually in the Perl Compatible Regular Expressions (PCRE) library, which is a library used within a library (WebKit) used within Safari used within the iPhone. At this point it would be more accurate to say “first user-assisted code execution vulnerability in a transitive dependency used by the iPhone”. That’s quite different, huh?

We need to go further down the rabbit hole though. Since the vulnerability is in WebKit, which existed before the iPhone and its first security patch, we need to consider whether any prior WebKit vulnerabilities might have impacted the iPhone and simply weren’t reported as such. We know iPhone development began in 2004 and the first release was June 29, 2007. We don’t know what that development was like, specifically how often Apple pulled in upstream WebKit code. In theory that gives us a 3.5-year window, but I think it is safe to say the developers would pull in code more often. There are at least two WebKit vulnerabilities from 2006; only one, disclosed on November 14, allows arbitrary code execution. I’d suspect that was patched well before the iPhone release since it was patched in macOS at that time.

Next we need to consider whether other Safari vulnerabilities might have impacted the iPhone. One vulnerability jumps out quickly, an ACE in Safari patched on June 12, but it only impacts installs on Windows. Next we have a vague disclosure on June 11, 2007 about “ten flaws” in the SVG parsing engine that weren’t reported to Apple (CVE-2007-3718). These very well could represent vulnerabilities that impacted the iPhone; we simply don’t know. There were two more ACE vulnerabilities reported in Safari with no indication they were fixed, just reported (CVE-2007-3187). These could very well affect the iPhone as well.

Finally, we have to consider if vulnerabilities in other third-party libraries used in the iPhone affect it. Apple doesn’t publish a list of those libraries but based on prior disclosures that affect macOS, which could also affect the iPhone, those include expat, troff, Libxslt, ICU / ICU4C, libXfont, libxml2, glibc, and some FreeBSD BDF font handling code. That’s a lot of code we don’t know about that is certainly a concern.

Did Miller’s vulnerability affect the iPhone? Yes, but at this point it would be more accurate to say “first publicly disclosed user-assisted code execution vulnerability in a third-party library used by the iPhone after commercial sales began”. That’s even more specific, huh?

Was Miller the First?

Since the iPhone advisory above covers the first security update for the device, it represents the first batch of vulnerabilities patched after public release. The next thing we need to look at is the other vulnerabilities patched: are any of them ACE or RCE? Yes, one of the four other vulnerabilities is an ACE as well (CVE-2007-2399). It is described as:

Impact: Visiting a maliciously crafted website may lead to an unexpected application termination or arbitrary code execution

Description: An invalid type conversion when rendering frame sets could lead to memory corruption. Visiting a maliciously crafted web page may lead to an unexpected application termination or arbitrary code execution. Credit to Rhys Kidd of Westnet for reporting this issue.

So there are two ACE vulnerabilities fixed in the same advisory. How did Schwartz at the New York Times know that Miller and Honoroff’s vulnerability was first? Because Miller likely told him so. In the article Schwartz quotes Lynn Fox from Apple, so they talked, but I suspect that Schwartz did not verify that information and Fox did not volunteer it. From the NYT article:

The researchers, working for Independent Security Evaluators, a company that tests its clients’ computer security by hacking it, said that they could take control of iPhones through a WiFi connection or by tricking users into going to a Web site that contains malicious code. The hack, the first reported, allowed them to tap the wealth of personal information the phones contain.

[..]

A spokeswoman for Apple, Lynn Fox, said, “Apple takes security very seriously and has a great track record of addressing potential vulnerabilities before they can affect users.”

Per that article and other sources, we know that Independent Security Evaluators (ISE) reported the vulnerability to Apple on July 17. Looking in VulnDB, I can see that Kidd reported his find to Apple on June 13, over a month before ISE did, and it is in the third-party library WebKit rather than a transitive dependency of WebKit. So that settles it, right? Not quite.

We know that between these two vulnerabilities, Miller was not first. But we also know that neither was remote code execution. Moving past the iPhone 1.0.1 update, we have to go through each subsequent release to figure out whether any of the fixed vulnerabilities qualify. Fortunately, we only have to go one more version, to 1.1.1, before we have our first candidate. On September 27, 2007, that update fixed a vulnerability in Bluetooth functionality that can be exploited remotely:

Bluetooth

CVE-ID:  CVE-2007-3753

Impact:  An attacker within Bluetooth range may be able to cause an unexpected application termination or arbitrary code execution

Description:  An input validation issue exists in the iPhone’s Bluetooth server. By sending maliciously-crafted Service Discovery Protocol (SDP) packets to an iPhone with Bluetooth enabled, an attacker may trigger the issue, which may lead to unexpected application termination or arbitrary code execution. This update addresses the issue by performing additional validation of SDP packets. Credit to Kevin Mahaffey and John Hering of Flexilis Mobile Security for reporting this issue.

This technically qualifies as the first remote vulnerability in the iPhone! However, notice that it has to be exploited from within Bluetooth range, which severely limits exploitation. In such cases the CVSS Attack Vector would be scored AV:A, meaning adjacent network, dropping the score a good bit. While this does fit the bill, meaning Kevin and John deserve serious kudos, it isn’t remote in the context most people associate with the term.
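To put a rough number on that drop, here is a minimal sketch, again assuming the cvss package; the vectors are illustrative full-impact examples, not an official score for CVE-2007-3753:

# Identical metrics except Attack Vector: Network (AV:N) vs Adjacent (AV:A).
from cvss import CVSS3

network = CVSS3("CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H")
adjacent = CVSS3("CVSS:3.1/AV:A/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H")

print(network.scores()[0])   # 9.8 (critical)
print(adjacent.scores()[0])  # 8.8 (high)

So let’s keep going to see the first fully remote vulnerability in an iPhone. We pass the releases for 1.1.2, 1.1.3, 2.0, and 2.1 to find the next of interest in 2.2, on November 20, 2008: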

ImageIO

CVE-ID:  CVE-2008-2327

Impact:  Viewing a maliciously crafted TIFF image may lead to an unexpected application termination or arbitrary code execution 

Description:  Multiple uninitialized memory access issues exist in libTIFF’s handling of LZW-encoded TIFF images. Viewing a maliciously crafted TIFF image may lead to an unexpected application termination or arbitrary code execution. This update addresses the issue through proper memory initialization and additional validation of TIFF Images.

ImageIO

CVE-ID:  CVE-2008-1586

Impact:  Viewing a maliciously crafted TIFF image may lead to an unexpected device reset

Description:  A memory exhaustion issue exists in the handling of TIFF images. Viewing a maliciously crafted TIFF image may lead to an unexpected device reset. This update addresses the issue by limiting the amount of memory allocated to open a TIFF image. Credit to Sergio ‘shadown’ Alvarez of n.runs AG for reporting this issue.

These two vulnerabilities are interesting because there is potential for a remote attack, but the advisory’s wording doesn’t make it clear and doesn’t provide CVSS scores. Since an image can be delivered in a wide variety of ways, including via SMS, the fact that these occur in the ImageIO subsystem is of note. The Apple developer documentation backs up this thought:

The Image I/O programming interface framework allows applications to read and write most image file formats. This framework offers high efficiency, color management, and access to image metadata.

A bit light on details, but this suggests that any app that remotely receives and processes content, e.g. an SMS messaging app, could be an avenue for remote code execution. Based on a chat with a colleague, it would require the victim opening the SMS app at the very least, which is a low bar for exploitation, but he does not think the iPhone SMS app renders images as a preview without clicking into a specific message. Low bar, but still requiring some user interaction. We see the exact same thing for CVE-2008-3623 and CVE-2009-0040 in the iPhone 3.0 update on June 17, 2009. It is interesting to note that we’re now two years after the iPhone’s release, still with no fully remote vulnerability free of limitations or caveats.

CoreGraphics

CVE-ID:  CVE-2008-3623

Impact:  Viewing a maliciously crafted image may lead to an unexpected application termination or arbitrary code execution

ImageIO

CVE-ID:  CVE-2009-0040

Impact:  Processing a maliciously crafted PNG image may lead to an unexpected application termination or arbitrary code execution

This time, one of them is in CoreGraphics, which does not seem as promising as ImageIO based on the documentation. Moving on, we land on the iPhone 3.0.1 update, released July 31, 2009, and see:

CoreTelephony

CVE-ID:  CVE-2009-2204

Impact:  Receiving a maliciously crafted SMS message may lead to an unexpected service interruption or arbitrary code execution

Description:  A memory corruption issue exists in the decoding of SMS messages. Receiving a maliciously crafted SMS message may lead to an unexpected service interruption or arbitrary code execution. This update addresses the issue through improved error handling. Credit to Charlie Miller of Independent Security Evaluators, and Collin Mulliner of Fraunhofer SIT for reporting this issue.

This has all the makings of what we’re after. While the advisory says “arbitrary code execution”, that is qualified by “decoding of SMS messages”. Receiving the message triggers it, as the payload is processed regardless of whether the message is specifically opened. But notice that the same issue was also found by Collin Mulliner. So who found or reported it to Apple first? That is what ultimately settles this question. Since it lists two people with two different affiliations, that typically means mutual discovery, or a “vulnerability collision”.

I reached out to a contact at Apple and asked if they could figure out which of the two sent the email first to settle this debate. Lo and behold, I was told that it was a single mail sent June 18, 2009, and both were listed as creditees! That is backed up by a presentation at Black Hat USA 2009 titled “Fuzzing the Phone in your Phone”, given by both individuals.

Conclusion (tl;dr)

We began the blog with a quote from Nicole Perlroth’s book, “This Is How They Tell Me The World Ends”, in which she says “One month after unleashing his white paper on the world, Charlie [Miller] made an even bigger name for himself with the first remote hack of an iPhone.” The question is whether that quote is accurate, understanding she is citing CVE-2007-3944. The answer is: it’s complicated. Here are the facts as best I can tell:

  1. Was it remote? No, the cited vulnerability is a user-assisted issue and cannot be exploited remotely without the victim clicking something.
  2. Was the vulnerability in an iPhone? Kind of. The vulnerability was in the Perl Compatible Regular Expressions (PCRE) library used by the JavaScript engine in Safari, bundled with the iPhone. Yes, it affected the device; no, the vulnerability wasn’t in Apple’s code, let alone the iPhone’s native code base.
  3. Was Miller the first? It’s complicated. 
    1. If we go strictly by CVE-2007-3944, then no, Miller was not the first. Rhys Kidd disclosed a user-assisted vulnerability in WebKit, the rendering engine in Safari, over one month earlier. Further, Jake Honoroff co-disclosed the vulnerability Miller found.
    2. If we go by remote without interaction but limited in attacker location, then no, Kevin Mahaffey and John Hering are the first with CVE-2007-3753 that must be exploited over Bluetooth.
    3. If we go by the actual first remote vulnerability, CVE-2009-2204, around two years later, then yes, but Miller co-discovered it with Collin Mulliner and both earned that distinction.

In short: no, kind of, no, no, yes but. So Perlroth is incorrect in her assertion, and very likely included it after talking to Miller for her book. The problem is that in the context of the 2007 vulnerability, Miller was wrong, and Perlroth et al did not properly fact-check that detail, instead relying on a colleague’s article as gospel. We don’t know if Miller mentioned Honoroff in his discussions with Perlroth, or if her text was written outside the scope of those discussions, but that detail was trivial to find if the claim had been fact-checked beyond her colleague’s article, which also omitted it.

Regardless, I believe we have a definitive answer as to that one line in the book. It took eight pages to get to this answer and I apologize for that (not really).

Perlroth and the History of Microsoft Vulns

While reading “This Is How They Tell Me The World Ends”, early in the book I ran across a single line that made me do a double-take. I took a note to revisit it after a complete read, since it appeared so early in the book. For those familiar with my blogs, I tend to write about vulnerability statistics, and this one fits the bill. This blog is a bit different in that a single line provoked it, but re-reading the section for clarity still took me down other rabbit holes. Ultimately, this is a good example of how one sentence can have many interpretations depending on how you read it, what statistics you use, and the deeper context the sentence is embedded in.

Below are some additional lines that offer the full context of the line in question:

The first shift in the wind was Bill Gates’s memo. In 2002, after a series of escalating attacks on Microsoft’s software and customers, Gates declared that security would become Microsoft’s top priority. (P35)

On January 15, 2002, just as iDefense was getting going, Gates fired off the cybersecurity equivalent of the “shot heard round the world.” From that point on, Gates said, security would be the company’s “highest priority”. (P37)

What the security community wrote off as a stunt became an economic force. Microsoft froze new products and dredged up existing ones, ripping its software apart and training nearly ten thousand developers to build it back up again with security principles at the core. For the first time, procedures were put in place to embrace the hacking community. Microsoft set up a customer service line for hackers, tracked each caller and even logged their psychological quirks, noting which hackers needed to be handled with kid gloves, which had rock-star status, and which were just trolls. It instituted a regular system for rolling out software patches, releasing them on the second Tuesday of every month – “Patch Tuesday” – and offered customers free security tools.

And while plenty of zero-day bugs were still discovered, the frequency and severity of Microsoft bugs started to dry up. (P38)

For those not familiar with the memo, titled “Trustworthy computing”, it can be read in full here. The question that came to mind was: did the frequency and/or severity of Microsoft bugs go down? Before we answer, remember that this is fairly broad since it encompasses all Microsoft bugs, not just Windows or Internet Explorer for example. It is also important to note that Perlroth says they started to dry up, but not for how long. On the back of the Gates memo, it would be expected that some researchers might change their attitude toward disclosure if they could sell the exploits for a higher payout. Finally, all of what follows is kind of moot because Perlroth’s statement is made on the back of a known unknown. That is, we know zero-day bugs are discovered, but by nature, they are only zero-days if not publicly known.

Perlroth says two more lines that essentially tip her hand, I believe, demonstrating that her comments were made in hindsight based on extrapolation, not fact. First, she qualifies that she joined the security beat eight years after this memo. Second, she says:

The ripple effect of Gates’s [sic] memo could be seen far from Redmond, in underground dark web forums and in hotel rooms at the big security conferences.

The dark web barely existed in 2002. Given that Tor was released in September of that year, the first hint of dark web sites would only have been emerging; in fact, Gates’ memo was published eight months before Tor’s release. It’s hard to imagine there were already established, well-known forums to trade or sell vulnerabilities that would show a noticeable change at that point. With all of that in mind, I think the rest of this rabbit hole is academic at best, but it illustrates why we must be careful when describing vulnerabilities in such a manner.

The Stats

All Microsoft Vulns, 2001 – 2005, per VulnDB

There was a significant drop in volume from 2002 to 2003, so it is easy to make this assessment from a very limited picture. But by 2004 the numbers were back up quite a bit. Given what I outlined above about her tenure on the security beat, along with her questionable statements about the dark web and her reliance on unknown factors, the question here is how she arrived at this conclusion. Further, the severity did not drop from 2002 to 2004 either.

The stats above are from VulnDB, with the advantage of hindsight and a comprehensive collection of disclosures from that period. If someone drew such a conclusion from disclosures at the time, it likely would have been based on CVE. Looking only at disclosures with a CVE ID does not change the disclosure trends or the severity picture.

Microsoft Vulns w/ CVE ID, 2001 – 2005, per VulnDB
Microsoft Windows Vulns, 2001 – 2005, per VulnDB
Microsoft Internet Explorer (MSIE), 2001 – 2005, per VulnDB

We see a dip in disclosures from 2002 to 2003 for both Windows and MSIE, but both rebound to varying degrees in 2004. Then Windows shoots up higher in 2005 while MSIE drops, which could just have been the browser war with Firefox and Opera heating up. That brings us to one more passage from page 38.

Finally, did the bugs dry up, or did their perceived value go up, making people less likely to disclose them or to sell at lower prices? For a book that dives deep into the value of 0days, I figured this would be the hot take. Oh wait, it is; right after saying the frequency/severity dried up, Perlroth says:

Then, in the shadows, a growing number of defense contractors, intelligence analysts, and cybercriminals started doling out higher rewards to hackers who promised to keep their bug discoveries secret. In these subterranean circles, people started assigning a far higher value to Microsoft zero-day exploits than what iDefense was paying. 

So the fun part is to go back to the charts and speculate. If the premise is that the Gates memo caused bugs to dry up because they were perceived as more valuable, as outlined shortly after by Perlroth, why did disclosures rebound in 2004? Did Microsoft suddenly stop caring about security a year later? Was 2003 just an abnormal, albeit coincidental, year for disclosures? Were there other factors at play?

There are a lot of questions that neither Perlroth nor the vulnerability statistics answer.
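
For anyone who wants to poke at these numbers themselves, here is a minimal sketch of reproducing this kind of per-year count. It uses the public NVD JSON feeds as a stand-in, since VulnDB is not freely queryable this way; the feed file names, the flattening of CPE data, and the use of average CVSSv2 base scores as a severity proxy are all assumptions on my part, and NVD’s coverage of that era is thinner than VulnDB’s, so expect lower absolute numbers but, hopefully, similar trends.

```python
import json
from collections import defaultdict
from statistics import mean

def vendor_counts(feed_paths, vendor="microsoft"):
    """Count CVE entries affecting `vendor` by published year, using the
    average CVSSv2 base score as a crude severity signal."""
    counts = defaultdict(int)
    scores = defaultdict(list)
    for path in feed_paths:
        with open(path, encoding="utf-8") as f:
            items = json.load(f)["CVE_Items"]
        for item in items:
            year = item["publishedDate"][:4]
            # Flatten top-level CPE matches; nested child nodes are skipped
            # for brevity, so counts will be slightly conservative.
            uris = [
                match.get("cpe23Uri", "")
                for node in item.get("configurations", {}).get("nodes", [])
                for match in node.get("cpe_match", [])
            ]
            if any(f":{vendor}:" in uri for uri in uris):
                counts[year] += 1
                score = (item.get("impact", {})
                             .get("baseMetricV2", {})
                             .get("cvssV2", {})
                             .get("baseScore"))
                if score is not None:
                    scores[year].append(score)
    return counts, {y: round(mean(s), 2) for y, s in scores.items()}

# e.g. the legacy per-year feeds nvdcve-1.1-2002.json .. nvdcve-1.1-2005.json
# (the 2002 feed also bundles pre-2002 entries)
counts, severity = vendor_counts(
    [f"nvdcve-1.1-{y}.json" for y in range(2002, 2006)]
)
for year in sorted(counts):
    print(year, counts[year], severity.get(year))
```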

RSA Hack Thoughts

I read the article “The Full Story of the Stunning RSA Hack Can Finally Be Told” by Andy Greenberg in Wired, and several things stood out to me. So this is my commentary on the article and the events it covers.


“It opened my eyes to supply chain attacks,” says Mikko Hypponen, chief research officer at F-Secure, who worked with Hirvonen on the company’s analysis of the RSA breach.

While the RSA hack was certainly novel in one way, going after the cryptographic seeds for the two-factor authentication fobs, the bigger concept was certainly not new. Even in the late ’80s and early ’90s, the same concept played out many times. While vulnerabilities were prevalent and breaking into most machines was fairly easy, there were high-value targets that proved challenging. To reach those, some hackers would target the vendors of the operating systems and break in there first. The primary targets were the bug databases, where customers reported issues, as well as the source code of the operating system. These two things could give them a huge advantage in compromising additional systems. Seeing this same strategy play out twenty years later should not have been that new to anyone who had been around during that timeframe.

“After 10 years of rampant state-sponsored hacking and supply chain hijacks, the RSA breach can now be seen as the harbinger of our current era of digital insecurity – and a lesson about how a determined adversary can undermine the things we trust most.”

After two decades of every type of organization in just about every country getting hacked, defaced, and having data stolen, how exactly is this a ‘harbinger’? Governments, militaries, and security companies were all popped for decades, but this one is a harbinger of what, exactly? More organizations getting hacked?

A staffer in Australia had received an email with the subject line “2011 Recruitment plan” and an Excel spreadsheet attached to it. He’d opened it. Inside the file was a script that exploited a zero-day vulnerability – a secret, unpatched security flaw – in Adobe Flash, planting a common piece of malicious software called Poison Ivy on the victim’s machine.

This paragraph sums up the “Advanced Persistent Threat” (APT) that hacked RSA. Other than using a zero-day vulnerability in Flash, one of five that year, nothing really stood out about this from the attacker’s side.

A hacker wouldn’t have even been able to exploit the Flash vulnerability if the victim had been running a more recent version of Windows or Microsoft Office, or if he’d had limited access to install programs on his PC – as most security administrators for corporate and government networks recommend, Hirvonen says.

Correct. Microsoft acknowledged shortly after details were published that if RSA had been running a newer version of Microsoft Office, it would have come with protections that likely would have seriously limited the attack and/or added hurdles to the attackers’ pivot into the network. At every point of this story it is important to remember that this is a security company. They were paid huge money to give advice like “patch often” and “upgrade to the latest version” while not doing it themselves.

In fact, several RSA executives came to believe that at least two groups of hackers were in their network simultaneously – one highly skilled group exploiting the other’s access, perhaps, with or without their knowledge.

I wonder if anyone suggested the piggybacking group could have been TAO at the NSA? We know that is a modus operandi of theirs when watching nation-state adversary hackers.

On that Australian employee’s PC, someone had used a tool that pulled credentials out of the machine’s memory and then reused those usernames and passwords to log into other machines on the network. They’d then scraped those computers’ memories for more usernames and passwords—finding some that belonged to more privileged administrators. The hackers eventually got to a server containing hundreds of users’ credentials. Today that credential-stealing hopscotching technique is common. But in 2011 the analysts were surprised to see how the hackers fanned out across the network.

Which analysts were surprised? This was standard operating procedure for hackers in the late ’80s and early ’90s. This is exactly what the hacking group I was in did. The only difference is that back then you were likely to find trusted relationships and common passwords between vastly different networks (e.g. an .edu machine and a .gov or .mil).

RSA executives told me that the part of their network responsible for manufacturing the SecurID hardware tokens was protected by an “air gap”—a total disconnection of computers from any machine that touches the internet. But in fact, Leetham says, one server on RSA’s internet-connected network was linked, through a firewall that allowed no other connections, to the seed warehouse on the manufacturing side.

To be clear: RSA executives either did not understand what “air-gapped” means, or were lying about it. I feel this is an important takeaway.

Breaches as extensive as the one carried out against RSA are often discovered months after the fact, when the intruders are long gone or lying dormant. But Duane says that the 2011 incident was different: Within days, the investigators had essentially caught up to the intruders and were watching them in action.

“I basically shut off RSA’s business,” he says. “I crippled the company in order to stop any potential further release of data.”

One person in legal suggested they didn’t actually need to tell their customers, Sam Curry remembers.

The RSA staffers began putting in nearly 20-hour workdays, driven by the chilling knowledge that the breach they were tracking was still unfolding.

This seems like a lot of fluffing RSA over this hack, but ultimately this is the same company that didn’t patch their Windows boxes and didn’t air-gap the seeds like the execs claimed. Among all of these positive mentions of $person doing the right thing, we always get “that one guy we will not name” proposing absolutely shitty ideas or having a bad take. I understand they won’t throw anyone under the bus, but this is an important dichotomy.

“Recently, our security systems identified an extremely sophisticated cyberattack in progress,” (RSA notification)

Which part was sophisticated? Exploiting unpatched machines, pivoting, or stealing data? When seemingly every attack is a “highly sophisticated cyber attack“, is it really highly sophisticated?

In fact, by the time Castignola had landed in Massachusetts, both the NSA and the FBI had been called to help the company’s investigation, as had defense contractor Northrop Grumman and incident response firm Mandiant. (By chance, employees of Mandiant had already been on-site prior to the breach, installing security sensor equipment on RSA’s network.)

How’d that Mandiant software work out for RSA, I wonder? It doesn’t seem to have helped prevent or detect the intrusion at any point, based on the story told.

Multiple executives insisted that they did find hidden listening devices—though some were so old that their batteries were dead. It was never clear if those bugs had any relation to the breach.

Uh, this may not be burying the lede, but it is burying a big one. I have so many questions, and I don’t recall there being answers to this specific bit. How were there so many listening devices in RSA executive offices? Had they never done a single bug sweep? Was each of the devices found investigated? Did they sweep all offices after finding them? If not, why not?

“Well it didn’t take long for whoever cracked RSA to find a lock to fit that key,” Cringely wrote. “What if every RSA token has been compromised, everywhere?” Two days later, Reuters revealed the name of the hacked military contractor: Lockheed Martin, a company that represented a cornucopia of ultra-secret plans for weapons and intelligence technologies. In the days that followed, defense contractors Northrop Grumman and L-3 were also named in news reports.

Today, with 10 years of hindsight, Coviello and other former RSA executives tell a story that starkly contradicts accounts from the time: Most of the former RSA staff who spoke to me claim that it was never proven that SecurID had any role in the Lockheed breach. Coviello, Curry, Castignola, and Duane all argued that it was never confirmed that the intruders inside RSA’s systems had successfully stolen the full list of seed values in an uncorrupted, unencrypted form, nor the customer list mapped to those seeds necessary to exploit them. “I don’t think that Lockheed’s attack was related to us at all,” Coviello states flatly.

vs. A Lockheed source with knowledge of the company’s incident response reaffirmed to WIRED the company’s original claims. “We stand by our forensic investigation findings,” the source says.

vs. In a briefing to the Senate Armed Services Committee a year after the RSA breach, NSA’s director, General Keith Alexander, said that the RSA hack “led to at least one US defense contractor being victimized by actors wielding counterfeit credentials,” and that the Department of Defense had been forced to replace every RSA token it used.

Can we figure out who is trying to re-write history here? It sure seems like RSA, despite several other organizations saying otherwise. That would explain why so many were willing to go on the record for this article.

Squirrel Tech Support

Last year in October, I did a release of fox squirrels for Greenwood Wildlife Rehabilitation after they had been rehabilitated. These squirrels couldn’t go back exactly where they came from because the owner of the property wasn’t available to give permission, which is required by Colorado Parks and Wildlife regulations. A wonderful lady who was on the release candidate list offered to take them, so I drove them to their new home. Since it was an October release and winter was close, each of the two batches got their own nest box to start out. Before I put them up, I noticed that there were already a lot of other nest boxes on the property. Come to find out, she had been offering to take squirrels for many years.

I warned her about the danger of overcrowding: when the squirrel density is too high, it can cause serious issues and lead to death for some squirrels. As they compete for food sources and are forced to spread out, they can move to yards or areas that are not as suitable for them. Forcing a squirrel out of its home has a high probability of leading to its death, as it finds itself in a new area without food, shelter, or known escape routes. I ended up putting the boxes up because, that late in the season, we had no other viable release sites. Since she was supplementing their food with sunflower seeds, bird seed, and corn, it was a far better option than anything else.

Jump to today, when I got a call from her asking if I could help. Apparently one of her old nest boxes, which she thinks may be 20 years old, fell out of its tree this morning. She said no one from Greenwood or anywhere else she called could come help her put the box back up. So I found myself driving out to Arvada to see if I could get it done quickly before hauling ass down south for an early afternoon appointment. I spent an hour, most of it trying to figure out a way to get the box back up in the tree and stable. This was tricky because the support board for the nest box had rotted out, leading to the fall, and it wasn’t usable. I had to run to the local hardware store for a hammer and some eye hooks, but ultimately it just wasn’t happening.

Rotted original nest box.

I left but told her to call me later that afternoon while I tried to think of a solution. Shortly after I left, I got a call from her saying that Greenwood could spare one of the big nest boxes, identical to the one that fell. She left immediately to get it which meant over an hour on the road. I had planned on returning tomorrow to work on it but the idea of squirrels not having their nest overnight didn’t sit well with me. Unprotected and sleeping in a tree is very risky; predators and even the wind are threats. After my appointment down south I drove back to Arvada with my own ladder and drill which I knew would be needed for the new box.

New box built by a Boy Scout troop and donated to Greenwood. (Box is upside down)

I don’t know much about relocating squirrels from one nest to another. Since they had enjoyed a protective nest box, I wasn’t sure how they would take to it being replaced by a similar one. The boxes obviously look different, and no doubt smell different to the squirrels as well. So I removed some of the bedding from the old nest box and put it in the new one.

Bedding in the old box included leaves, twigs, dirt, and an almost unrecognizable piece of fleece that was put in originally.

My hope was that moving the bedding over would help the squirrels understand this was their new home. The next challenge was where to put the box. Whoever had put the old one up had a much taller ladder than hers or mine, so there was no way to get it back up that high. The angle of the tree meant most of the trunk space was not suitable: the surface was uneven, the box would hang at an angle, or branches were in the way.

The new box with old bedding moved over, along with two new pieces of fleece since we didn’t know how many squirrels lived in it.

Ultimately, we ended up removing an old bird house that had never been used since installation and putting the new box in its place. It wasn’t quite as high as I would have liked, but it was higher than some other nest boxes that have been put up. One side offered easy access to the box from the tree trunk; the other wasn’t perfect, but a squirrel could definitely go from trunk to that entrance too. I left her place at 7pm with a strong hope that the squirrels who watched us do all that understood what had happened. She told me she’d watch tonight to try to see if any went in, and would watch in the morning, like she always did, to see if squirrels emerged. I left a huge handful of sunflower seeds on top of the box and even more below at the foot of the trunk to help them while they adjust.

Today was the first day in my new career as Squirrel Tech Support apparently.

The new nest box hanging in the tree.

[Update: This morning she texted to let me know that squirrels were eating the seeds and one went into the new nest box. She says “Looking good for them!!!”]

June 2021 Reviews

[A summary of my movie and TV reviews from last month, posted to Attrition.org, mixed in with other reviews.]


Stowaway (2021)
Medium: Movie (Netflix)
Rating: 3 / 5 Shot for the stars, fell short
Reference(s): IMDB Listing || Trailer
This movie had a lot going for it: a tiny cast (four people), a decent budget, and a human drama as the engine. It stars Anna Kendrick, Daniel Dae Kim, Toni Collette, and the standout Shamier Anderson. Perhaps it is that the other three characters are reserved scientists and astronauts, making Anderson’s performance more vibrant by contrast. Somehow, in every scene he invoked a world of emotion that made me feel it when he was happy or sad. The movie was on track to be pretty amazing until the last third, where it went downhill, falling into several traps that near-future sci-fi movies do. But even ignoring that aspect, it went out into left field and broke down completely, with characters whose actions just stopped making sense. It’s good for a quiet movie to fall asleep to, and I hope to see Anderson in more movies. [Update: After writing this, before publishing, I saw I get my wish!]


Infinite (2021)
Medium: Movie (Paramount+)
Rating: 1 / 5 My contempt for the man who recommended this is infinite
Reference(s): IMDB Listing || Trailer
Just after starting the movie, during the opening voice-over, I wrote down that this movie “spells out this is ‘Wanted‘ meets ‘The Old Guard‘”. Damn, I nailed it. It even had some fun, unique guns and bullets, just like Wanted! Mark Wahlberg demonstrates that he has just given up on movies, in my opinion. Rather than over-acting like he did in some movies earlier in his career, he has now resigned himself to being this flat character with no acting range, poorly manufactured emotion, and 100% reliance on his body. Like every action movie for too many years, the ending was ridiculous. Suspend disbelief for reincarnation? Sure! A motorcycle jumping off a cliff and landing on the wing of a plane flying below? Nope. The only redeeming part of this movie was Chiwetel Ejiofor having fun as the villain.


Lupin, Part 2 (2021)
Medium: TV (Netflix)
Rating: 3.5 / 5 n’est-il pas censé être un voleur?
Reference(s): IMDB Listing || Trailer
When we last left off, Lupin was barely a tale about a thief who pulled heists. After the second season, it still isn’t one. Instead, it’s basically a cat-and-mouse game with the bad guy(s), first in an attempt to get his kidnapped son back, then to expose the bad guy for doing bad things. The last episode is supposed to be the climax where all is revealed, and it falls way short. Not a single part of that episode had originality; instead, it was pieces cobbled together from other movies, along with several missed opportunities to end it better. The upside is that this season does put a nice bow around the story arc, giving a fairly clean ending despite a credits teaser saying it would return. I hope in part 3 we finally get to see him do something remotely interesting as far as being a thief.


Synchronic (2019)
Medium: Movie (Netflix)
Rating: 4 / 5 Pretty fun and trippy
Reference(s): IMDB Listing || Trailer
Points to this movie for some originality and a neat premise. The time is now(ish), and a new designer drug has the ability to make you experience wild things. Instead of police, the movie is seen and experienced through the eyes of two paramedics, which is a great and appropriate change. Anthony Mackie stars as one of the paramedics and does a good job in this movie, unlike some others around this time and since (he appears to star in 38% of all movies lately). A ways into the movie we learn that this drug, Synchronic, isn’t just warping perception in your mind; it allows you to briefly travel back in time. The catch? It only works for a very short period of time, and how far back you go depends on where you are when you take the drug. It’s a good watch.