[A summary of my movie and TV reviews from last month, posted to Attrition.org, mixed in with other reviews.]
The Tomorrow War (2021) Medium: Movie (Amazon) Rating: 0.5/5 don’t wait until tomorrow to ignore this trash Reference(s): IMDB Listing || Trailer This movie was going swimmingly when everything was just regular life in today’s time. The second the “tomorrow” part started it went to shit. From that point on it was a series of overdone clichés from hundreds of Sci-Fi movies before it mixed in with clichés from hundreds of war movies. I think the blame is on the writers who wrote a series of scenes that relied on us forgetting about logic, continuity, and time travel. The comic relief? Not funny. The heart-warming moments? Not warming. This movie is trash.
The Watch (2020) Medium: TV (BBC) Rating: 4.4/5 worth the .. irreverent policing Reference(s): IMDB Listing || Trailer This is not a show about superheroes. In fact, barely heroes at all but they find their way. The series is based on a world and characters created by Terry Pratchett, who I haven’t read. The series was a little rough to get into not knowing anything about it and the characters were odd at first, then I appreciated them as over-the-top quirky weirdos that find their way together as they “police” the city. The casting is superior and each actor really embraces their role, bringing something special to their character. Season one offers a fun story that is wrapped up neatly at the end, of course with a teaser that it will return.
Debris S1 (2021) Medium: TV (Peacock) Rating: 2.5/5 potentially good show lost in all the pieces Reference(s): IMDB Listing || Trailer Two episodes in and the show was interesting. I immediately formed ideas of what I thought would come later in the season but in one case they did it in the first episode. It set a great pace and suggested a promising show with some creativity. I could suspend disbelief on the whole alien technology thing but couldn’t when it came to the “protocols” used to investigate alien debris. Then, about halfway through the series, it began to spiral out of control with sub-plots and ideas that simply weren’t compelling. Add to that an attempt to bring some kind of sexual tension between the two main characters that just wasn’t there, and it’s easy to understand why the show was canceled after one season.
Black Widow (2021) Medium: Movie (Disney+) Rating: 1 / 5 i’d rather watch the peacock spider Reference(s): IMDB Listing || Trailer Marvel whipped out its script; the script they use to write movie scripts for MCU films. Formulaic, predictable, and by now boring. Black Widow had potential for a more interesting back story and tie-in to the current timeline, and I think Marvel fell way short on that. Comedic character? Check. Over-engineered fight scenes that aren’t clever? Check. Basically, instead of making the character more interesting and getting us more invested, we’re left with a movie that didn’t really matter if it came out. Personally, I find the Black Widow character entirely more interesting after the first movie she appeared in and a lot less so after this one. Finally, I hope the camera crew didn’t injure their backs bending over to get all the shots highlighting the women’s asses.
Hitman’s Wife’s Bodyguard (2021) Medium: Movie Rating: 2.5 / 5 The word ‘fuck’ is 1/3rd of the plot Reference(s): IMDB Listing || Trailer If you saw the first movie, you know exactly what to expect with this one. Over-the-top gun fights, Ryan Reynolds being himself as you try to divorce your mind from thinking “Deadpool”, and apparently an attempt to break the record for the use of the word “fuck”. As with many sequels, they try to make it more compelling by adding some big-name actors. In this case, Morgan Freeman, who didn’t really add much to the movie. So this is good mindless entertainment while you work on your puzzle or cross-stitch or sort your Legos.
In what is sure to be my last blog (?!) born out of reading “This Is How They Tell Me The World Ends” by Nicole Perlroth, this article is basically a quick dive into a single paragraph that contains one sentence with an alleged fact pertaining to vulnerability history. As a self-described Vulnerability Historian, this is of course of interest especially if the statement is questionable. From page 63 of the book, here is the paragraph for full context and the relevant bits in bold:
But that’s not how he would be remembered. One month after unleashing his white paper on the world, Charlie [Miller] made an even bigger name for himself with the first remote hack of an iPhone. The conventional wisdom had always been that the iPhone – with its sleek design and closely held code – was more secure than the alternatives. But Charlie blew a hole right through that theory. He demonstrated before an audience of hundreds how easily he could remotely control anyone’s iPhone simply by steering their browser to a malicious website he created.
With that, we’ll examine three components of this claim:
Was the vulnerability remote?
Was the vulnerability in an iPhone?
Was Miller the first?
Before jumping to conclusions on those answers, there’s a twist or two! If you’re already grumbling over me being “wordy” you can scroll down to the end for the cliff notes TL;DR and miss the rabbit hole adventure. And also thank me for not posting each section separately, teasing it out and making you wait two weeks for an answer.
Once he was there, the site injected a bit of code into the iPhone that then took over the phone. The phone promptly followed instructions to transmit a set of files to the attacking computer that included recent text messages — including one that had been sent to the reporter’s cellphone moments before — as well as telephone contacts and email addresses. “We can get any file we want,” he said. Potentially, he added, the attack could be used to program the phone to make calls, running up large bills or even turning it into a portable bugging device.
The hack — first reported by John Schwartz in Monday’s New York Times — can be activated through a malicious Web site, a Wi-Fi access point or a link sent to the phone through e-mail or a text message. After it’s activated, an attacker can make the phone transmit files or passwords, run up wireless services or even record audio and relay it back to the attacker.
The reason the attack vector is so important is that it speaks to the first part of the claim, in which Perlroth says it was the “first remote hack”. In the context of vulnerabilities, remote means that a vulnerability can be exploited over the network without user interaction from the victim. If the exploit requires the victim to perform an action of any kind, including clicking a link, it is a user-assisted or context-dependent attack. While that is a serious attack, since we know the bar for clicking a link is low, it is still important to make this distinction. Why? Let’s start with risk scoring, using Remote Code Execution (RCE) and Arbitrary Code Execution (ACE) as reference points.
Using the Common Vulnerability Scoring System (CVSS), an industry standard for better or worse, we get four sets of scores to look at. First, understand that many organizations use a three-tier “stoplight” system for general risk severity (i.e. low, medium, high) or a five-tier system that adds an ‘informational’ and ‘critical’ rating. The five-tier system breaks down as 0.0 (informational), 0.1 – 3.9 (low), 4.0 – 6.9 (medium), 7.0 – 8.9 (high), 9.0 – 10.0 (critical). For organizations that prioritize at this higher level first, focusing on critical before moving to high-risk, the difference between an 8.9 and 9.0 may mean a lot. So let’s compare an RCE versus an ACE looking at both CVSS version 2 and 3, which are the same in spirit but different in implementation:
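The v3 side of that comparison can be reproduced with a minimal sketch of the CVSSv3.1 base-score arithmetic (the v2 formula differs). The metric weights and round-up rule are taken from the FIRST CVSSv3.1 specification; the helper function names are my own illustration:

```python
# CVSSv3.1 base metric weights for an unchanged-scope vector,
# taken from the FIRST CVSSv3.1 specification.
AV  = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}   # Attack Vector
AC  = {"L": 0.77, "H": 0.44}                          # Attack Complexity
PR  = {"N": 0.85, "L": 0.62, "H": 0.27}               # Privileges Required
UI  = {"N": 0.85, "R": 0.62}                          # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}                # C/I/A impact

def roundup(x: float) -> float:
    """The spec's 'round up to one decimal' rule (avoids float drift)."""
    i = round(x * 100000)
    return i / 100000 if i % 10000 == 0 else (i // 10000 + 1) / 10

def base_score(av: str, ac: str, pr: str, ui: str, c: str, i: str, a: str) -> float:
    """CVSSv3.1 base score for a scope-unchanged vector."""
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    return 0.0 if impact <= 0 else roundup(min(impact + exploitability, 10))

def severity(score: float) -> str:
    """The five-tier banding described above."""
    for floor, label in ((9.0, "critical"), (7.0, "high"),
                         (4.0, "medium"), (0.1, "low")):
        if score >= floor:
            return label
    return "informational"

# Remote code execution: network, no user interaction (AV:N/AC:L/PR:N/UI:N/C:H/I:H/A:H)
rce = base_score("N", "L", "N", "N", "H", "H", "H")
# User-assisted (arbitrary) code execution: same impact, but the victim must act (UI:R)
ace = base_score("N", "L", "N", "R", "H", "H", "H")
print(rce, severity(rce))   # 9.8 critical
print(ace, severity(ace))   # 8.8 high
```

The only difference between the two vectors is the User Interaction metric, yet that alone drops the score from 9.8 (critical) to 8.8 (high) in the five-tier system. An adjacent-network (AV:A) vector with no interaction also lands at 8.8 under this formula.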
As we see, arbitrary code execution under CVSSv3 is scored as 8.8, which is only “high” risk, while under CVSSv2 it is “critical”. Compare that to remote code execution, which is “critical” under both scoring systems. So the distinction between remote and user-assisted is important in both severity and factual accuracy. Jumping back to specifics of the attack:
The hack — first reported by John Schwartz in Monday’s New York Times — can be activated through a malicious Web site, a Wi-Fi access point or a link sent to the phone through e-mail or a text message.
This is clearly an arbitrary code execution situation as it requires a victim to visit a malicious web page in some manner. That distinction is one that Charlie Miller has made himself many times over the years. This is not a remote vulnerability. At this point it would be more accurate to say “first user-assisted code execution vulnerability in an iPhone”. That’s a bit different, huh?
Was the vulnerability in an iPhone?
The simple answer is of course, “yes”. But it’s more complicated than that which we’ll see, as well as why that is important. When attributing a vulnerability to a given device like an iPhone we should note if the vulnerable code is in Apple’s own iPhone code or a third-party library used by the device. This distinction starts us down a rabbit hole.
Impact: Viewing a maliciously crafted web page may lead to arbitrary code execution
How do we know it was this vulnerability and not a subsequent one? Perlroth says it came one month after Miller’s paper, “The Legitimate Vulnerability Market” from May, 2007. Miller and Honoroff’s vulnerability was shared with Apple on July 17 and publicly disclosed on July 19. Close enough to a month, and the next iPhone update was 1.1.1 in September, which did not credit Miller. You can also notice that while Perlroth credits Charlie Miller, the vulnerability was additionally credited to a second individual, Jake Honoroff.
We can see that the first two advisories attribute the vulnerability to the code in Safari, while the Safari advisory attributes the vulnerability to WebKit, an open-source web browser engine used by Apple Safari, Google Chrome (at the time), and other browsers. But the advisory tells us the issue is actually in Perl Compatible Regular Expressions (PCRE), which is a library used within a library (WebKit) used within Safari used within the iPhone. At this point it would be more accurate to say “first user-assisted code execution vulnerability in a transitive dependency used by the iPhone”. That’s quite different, huh?
We need to go further down the rabbit hole though. Since the vulnerability is in WebKit, which existed before the iPhone and the first security patch, we need to consider if any prior WebKit vulnerabilities might have impacted the iPhone and simply weren’t reported as such. We know iPhone development began in 2004 and the first release was June 29, 2007. We don’t know what that development was like, specifically how often they pulled in upstream WebKit code. In theory that gives us a 3.5 year window, but I think it is safe to say the developers would pull in code more often. There are at least two WebKit exploits from 2006; only one, disclosed later in the year on November 14, is an ACE. I’d suspect that was patched well before the iPhone release since it was patched in macOS at that time.
Next we need to consider if other Safari vulnerabilities might have impacted the iPhone. One vulnerability jumps out quickly, an ACE in Safari patched on June 12, but it only impacts installs on Windows. Next we have a vague disclosure on June 11, 2007 about “ten flaws” in the SVG parsing engine that weren’t reported to Apple (CVE-2007-3718). These very well could represent vulnerabilities that impacted the iPhone; we simply don’t know. There were two more ACE vulnerabilities reported in Safari with no indication they were fixed, just reported (CVE-2007-3187). These could very well affect the iPhone as well.
Finally, we have to consider if vulnerabilities in other third-party libraries used in the iPhone affect it. Apple doesn’t publish a list of those libraries but based on prior disclosures that affect macOS, which could also affect the iPhone, those include expat, troff, Libxslt, ICU / ICU4C, libXfont, libxml2, glibc, and some FreeBSD BDF font handling code. That’s a lot of code we don’t know about that is certainly a concern.
Did Miller’s vulnerability in question affect the iPhone? Yes, but, at this point it would be more accurate to say “first publicly disclosed user-assisted code execution vulnerability in a third-party library used by the iPhone after commercial sales began“. That’s even more specific huh?
Was Miller the First?
Since the iPhone advisory above covers the first security update for the device, that advisory represents the first batch of vulnerabilities patched after public release. The next thing we need to look at are the other vulnerabilities patched; are any of them ACE or RCE? Yes, one of the four other vulnerabilities is an ACE as well (CVE-2007-2399). It is described as:
Impact: Visiting a maliciously crafted website may lead to an unexpected application termination or arbitrary code execution
Description: An invalid type conversion when rendering frame sets could lead to memory corruption. Visiting a maliciously crafted web page may lead to an unexpected application termination or arbitrary code execution. Credit to Rhys Kidd of Westnet for reporting this issue.
So there are two ACE vulnerabilities fixed in the same advisory. How did Schwartz at the New York Times know that Miller and Honoroff’s vulnerability was first? Because Miller likely told him so. In the article Schwartz quotes Lynn Fox from Apple so they talked but I suspect that Schwartz did not verify that information and Fox did not volunteer it. From the NYT article:
The researchers, working for Independent Security Evaluators, a company that tests its clients’ computer security by hacking it, said that they could take control of iPhones through a WiFi connection or by tricking users into going to a Web site that contains malicious code. The hack, the first reported, allowed them to tap the wealth of personal information the phones contain.
A spokeswoman for Apple, Lynn Fox, said, “Apple takes security very seriously and has a great track record of addressing potential vulnerabilities before they can affect users.”
Per that article and other sources, we know that Independent Security Evaluators (ISE) reported the vulnerability to Apple on July 17. Looking in VulnDB I can see that Kidd reported his find to Apple on June 13, over a month before ISE did, and it is in the third-party library WebKit rather than a transitive dependency of WebKit. So that settles it, right? Not quite.
We know that between these two vulnerabilities, Miller was not first. But we also know that neither was remote code execution either. Moving past the iPhone 1.0.1 update, we have to go through each subsequent release to figure out if any of the fixed vulnerabilities qualify. Fortunately, we only have to go one more version to 1.1.1 before we have our first candidate. On September 27, 2007, the update fixed a vulnerability in Bluetooth functionality that can be exploited remotely:
Impact: An attacker within Bluetooth range may be able to cause an unexpected application termination or arbitrary code execution
Description: An input validation issue exists in the iPhone’s Bluetooth server. By sending maliciously-crafted Service Discovery Protocol (SDP) packets to an iPhone with Bluetooth enabled, an attacker may trigger the issue, which may lead to unexpected application termination or arbitrary code execution. This update addresses the issue by performing additional validation of SDP packets. Credit to Kevin Mahaffey and John Hering of Flexilis Mobile Security for reporting this issue.
This technically qualifies as the first remote vulnerability in the iPhone! However, notice that it has to be exploited from within Bluetooth range, which severely limits exploitation. In such cases CVSS would be scored as AV:A, meaning adjacent network, dropping the score a good bit. While this does fit the bill, meaning Kevin and John deserve serious kudos, it isn’t remote in the sense most people associate with the term. So let’s keep going to see the first fully remote vulnerability in an iPhone. We pass the releases for 1.1.2, 1.1.3, 2.0, and 2.1 to find the next of interest in 2.2 on November 20, 2008:
Impact: Viewing a maliciously crafted TIFF image may lead to an unexpected application termination or arbitrary code execution
Description: Multiple uninitialized memory access issues exist in libTIFF’s handling of LZW-encoded TIFF images. Viewing a maliciously crafted TIFF image may lead to an unexpected application termination or arbitrary code execution. This update addresses the issue through proper memory initialization and additional validation of TIFF Images.
Impact: Viewing a maliciously crafted TIFF image may lead to an unexpected device reset
Description: A memory exhaustion issue exists in the handling of TIFF images. Viewing a maliciously crafted TIFF image may lead to an unexpected device reset. This update addresses the issue by limiting the amount of memory allocated to open a TIFF image. Credit to Sergio ‘shadown’ Alvarez of n.runs AG for reporting this issue.
These two vulnerabilities are interesting because there is a potential for a remote attack here, but the advisory doesn’t make it clear in wording and they don’t provide CVSS scores. Since an image can be delivered a wide variety of ways, including via SMS, the fact that these occur in the ImageIO subsystem is of note. The Apple Developer documentation backs up this thought:
The Image I/O programming interface framework allows applications to read and write most image file formats. This framework offers high efficiency, color management, and access to image metadata.
A bit light on details, but this suggests that an SMS messaging app, or any other app that remotely receives and processes content, could be an avenue for remote code execution. Based on a chat with a colleague, it would require the victim opening the SMS app at the very least, which is a low bar for exploitation, but he does not think the iPhone SMS app renders the images as a preview without clicking into a specific message. Low bar, but still requires some user interaction. We see the exact same thing for CVE-2008-3623 and CVE-2009-0040 in the iPhone 3.0 update on June 17, 2009. It is interesting to note that we’re now two years after the iPhone’s release and still without a fully remote vulnerability with no limitations or caveats.
Impact: Viewing a maliciously crafted image may lead to an unexpected application termination or arbitrary code execution
Impact: Processing a maliciously crafted PNG image may lead to an unexpected application termination or arbitrary code execution
Impact: Receiving a maliciously crafted SMS message may lead to an unexpected service interruption or arbitrary code execution
Description: A memory corruption issue exists in the decoding of SMS messages. Receiving a maliciously crafted SMS message may lead to an unexpected service interruption or arbitrary code execution. This update addresses the issue through improved error handling. Credit to Charlie Miller of Independent Security Evaluators, and Collin Mulliner of Fraunhofer SIT for reporting this issue.
This has all the makings of what we’re after. While the advisory says “arbitrary code execution” that is a qualifier to “decoding of SMS messages”. Receiving the message triggers it as the payload is processed regardless of loading the message specifically. But notice that the same issue was also found by Collin Mulliner. So who found or reported it to Apple first? That is what ultimately settles this question. Since it lists two people with two different affiliations, that typically means mutual discovery or a “vulnerability collision”.
I reached out to a contact at Apple and asked if they could figure out which of the two sent the email first to settle this debate. Lo and behold, I was told that it was a single mail sent June 18, 2009 and both were listed as creditees! That is backed up by a presentation at Black Hat USA 2009 titled “Fuzzing the Phone in your Phone” given by both individuals.
We began the blog with a quote from Nicole Perlroth’s book, “This Is How They Tell Me The World Ends”, in which she says “One month after unleashing his white paper on the world, Charlie [Miller] made an even bigger name for himself with the first remote hack of an iPhone.” The question is if that quote is accurate, understanding she is citing CVE-2007-3944. The answer is, it’s complicated. Here’s the facts as best I can tell:
Was it remote? No, the cited vulnerability is a user-assisted issue and cannot be carried out remotely without the victim clicking something.
Was Miller the first? It’s complicated.
If we go strictly by CVE-2007-3944, then no, Miller was not the first. Rhys Kidd disclosed a user-assisted vulnerability in WebKit, the rendering engine in Safari, over one month earlier. Further, Jake Honoroff co-disclosed the vulnerability Miller found.
If we go by remote without interaction but limited in attacker location, then no, Kevin Mahaffey and John Hering are the first with CVE-2007-3753 that must be exploited over Bluetooth.
If we go by the actual first remote vulnerability, CVE-2009-2204 around two years later, then yes but Miller co-discovered it with Collin Mulliner and both earned that distinction.
In short: no, kind of, no, no, yes but. So Perlroth is incorrect in her assertion and very likely included it after talking to Miller for her book. The problem is that in the context of the 2007 vulnerability, Miller was wrong and Perlroth et al. did not properly fact-check that detail, instead relying on a co-worker’s article as gospel. We don’t know if Miller mentioned Honoroff in his discussions with Perlroth or if her text was written outside the scope of her discussion with Miller, but that detail was trivial to find if the claim had been fact-checked beyond her colleague’s article that also omitted it.
Regardless, I believe we have a definitive answer as to that one line in the book. It took eight pages to get to this answer and I apologize for that (not really).
While reading “This Is How They Tell Me The World Ends“, early in the book I ran across a single line that made me double-take. I took a note to revisit it after a complete read since it was so early in the book. For those familiar with my blogs, I tend to write about vulnerability statistics and this one fits the bill. This blog is a bit different in that a single line provoked it, but re-reading the section for clarity still takes me down other rabbit holes. Ultimately, this is a good example of how one sentence can have a lot of interpretations depending on how you read it, what statistics you use, and the deeper context that the sentence is embedded in.
Below are some additional lines that offer the full context of the line in question:
The first shift in the wind was Bill Gates’s memo. In 2002, after a series of escalating attacks on Microsoft’s software and customers, Gates declared that security would become Microsoft’s top priority. (P35)
On January 15, 2002, just as iDefense was getting going, Gates fired off the cybersecurity equivalent of the “shot heard round the world.” From that point on, Gates said, security would be the company’s “highest priority”. (P37)
What the security community wrote off as a stunt became an economic force. Microsoft froze new products and dredged up existing ones, ripping its software apart and training nearly ten thousand developers to build it back up again with security principles at the core. For the first time, procedures were put in place to embrace the hacking community. Microsoft set up a customer service line for hackers, tracked each caller and even logged their psychological quirks, noting which hackers needed to be handled with kid gloves, which had rock-star status, and which were just trolls. It instituted a regular system for rolling out software patches, releasing them on the second Tuesday of every month – “Patch Tuesday” – and offered customers free security tools.
And while plenty of zero-day bugs were still discovered, the frequency and severity of Microsoft bugs started to dry up. (P38)
For those not familiar with the memo, titled “Trustworthy computing”, it can be read in full here. The question that came to mind was, did the frequency and/or severity of Microsoft bugs go down? Before we answer, remember that this is fairly broad since it encompasses all Microsoft bugs, not specific to Windows or Internet Explorer for example. It is also important to note that Perlroth says they started to dry up, but not for how long. On the back of the Gates memo it would be expected that some researchers may change their attitude toward disclosure if they could sell the exploits for a higher payout. Finally, all of what follows is kind of moot because Perlroth’s statement is made on the back of a known unknown. That is, we know there are zero-day bugs discovered, but by nature, they are only zero-days if not publicly known.
Perlroth writes two more lines that essentially tip her hand, I believe, demonstrating that her comments were made in hindsight based on extrapolation, not fact. First, she qualifies that she joined the security beat eight years after this memo. Second, she says:
The ripple effect of Gates’s [sic] memo could be seen far from Redmond, in underground dark web forums and in hotel rooms at the big security conferences.
The dark web barely existed in 2002. Given that Tor was released in September of that year, the first hints of dark web sites would only have been appearing; in fact, Gates’ memo was published eight months before Tor was released. It’s hard to imagine that there were already established, well-known forums to trade or sell vulnerabilities that would have seen a noticeable change at that point. With all of that in mind, I think that the rest of this rabbit hole is academic at best but illustrates why we must be careful when describing vulnerabilities in such a manner.
There was a significant drop in volume from 2002 to 2003, so it is easy to make this assessment from a very limited picture. But by 2004 it was back up quite a bit. Given what I outlined above about her tenure on the security beat, along with questionable statements around the dark web and statements based on unknown factors, the question here is how she arrived at this conclusion. Further, the severity did not drop from 2002 to 2004 either.
The stats above are from VulnDB, with the advantage of hindsight and a comprehensive collection of disclosures from that period. If someone made such a conclusion based on disclosures, it likely would have been based on CVE. Looking at only disclosures with a CVE ID, the picture does not change for either disclosure trends or severity.
We see a dip in disclosures from 2002 to 2003 for both Windows and MSIE, but both rebound to varying degrees in 2004. Then Windows shoots up higher in 2005 while MSIE drops in 2005, which could just have been the browser war with Firefox and Opera heating up. That leads us to one more section from page 38:
Finally, did the bugs dry up, or did their perceived value go higher, so people were less likely to disclose or sell for lower prices? For a book that dives deep into the value of 0days I figured this would be the hot take. Oh wait, it is; right after saying the frequency/severity dried up, Perlroth says:
Then, in the shadows, a growing number of defense contractors, intelligence analysts, and cybercriminals started doling out higher rewards to hackers who promised to keep their bug discoveries secret. In these subterranean circles, people started assigning a far higher value to Microsoft zero-day exploits than what iDefense was paying.
So the fun part is go back to the charts and speculate. If the premise is that the Gates memo caused bugs to dry up because they were perceived more valuable, as outlined shortly after by Perlroth, why did the disclosures rebound in 2004? Did Microsoft suddenly stop caring about security a year later? Was 2003 just an abnormal, albeit coincidental, year for disclosures? Were there other factors at play?
There are a lot of questions that neither Perlroth nor the vulnerability statistics answer.
I read the article “The Full Story of the Stunning RSA Hack Can Finally Be Told” by Andy Greenberg in Wired and several things stood out to me. So this is my commentary on the article and events that are covered.
“It opened my eyes to supply chain attacks,” says Mikko Hypponen, chief research officer at F-Secure, who worked with Hirvonen on the company’s analysis of the RSA breach.
While the RSA hack was certainly novel in one way, going after the cryptographic seeds to the two-factor authentication fobs, the bigger concept was certainly not new. Even in the late 80’s and early 90’s, the same concept played out many times. While vulnerabilities were prevalent and breaking into most machines was fairly easy, there were high-value targets that proved challenging. To achieve that, some hackers would target the vendors of the operating systems and break in there first. The primary targets were the bug databases where customers reported issues as well as the source code of the operating system. These two things could give them a huge advantage in compromising additional systems. Seeing this same strategy play out twenty years later should not have been that new to anyone that had been around during that timeframe.
“After 10 years of rampant state-sponsored hacking and supply chain hijacks, the RSA breach can now be seen as the harbinger of our current era of digital insecurity – and a lesson about how a determined adversary can undermine the things we trust most.”
After two decades of every type of organization in just about every country getting hacked, defaced, and data stolen, how exactly is this a ‘harbinger’? Governments, military, and security companies all popped for decades, but this one is a harbinger to what exactly? More organizations getting hacked?
A staffer in Australia had received an email with the subject line “2011 Recruitment plan” and an Excel spreadsheet attached to it. He’d opened it. Inside the file was a script that exploited a zero-day vulnerability – a secret, unpatched security flaw – in Adobe Flash, planting a common piece of malicious software called Poison Ivy on the victim’s machine.
This paragraph sums up the “Advanced Persistent Threat” (APT) that hacked RSA. Other than using a zero-day vulnerability in Flash, one of five that year, nothing really stood out about this from the attacker’s side.
A hacker wouldn’t have even been able to exploit the Flash vulnerability if the victim had been running a more recent version of Windows or Microsoft Office, or if he’d had limited access to install programs on his PC – as most security administrators for corporate and government networks recommend, Hirvonen says.
Correct. Microsoft acknowledged shortly after details were published that if RSA had been running a newer version of Microsoft Office, it would have come with protections that likely would have seriously limited the attack and/or added additional hurdles before the attackers could pivot into the network. At every point of this story it is important to remember that this is a security company. They received huge money to give advice like “patch often” and “upgrade to the latest version” while not doing it themselves.
In fact, several RSA executives came to believe that at least two groups of hackers were in their network simultaneously – one highly skilled group exploiting the other’s access, perhaps, with or without their knowledge.
I wonder if anyone suggested the piggyback could have been the TAO group at the NSA? We know that is a modus operandi of theirs while watching nation-state adversary hackers.
On that Australian employee’s PC, someone had used a tool that pulled credentials out of the machine’s memory and then reused those usernames and passwords to log into other machines on the network. They’d then scraped those computers’ memories for more usernames and passwords—finding some that belonged to more privileged administrators. The hackers eventually got to a server containing hundreds of users’ credentials. Today that credential-stealing hopscotching technique is common. But in 2011 the analysts were surprised to see how the hackers fanned out across the network.
Which analysts were surprised? This was standard operating procedure for hackers in the late ’80s and early ’90s. This is exactly what the hacking group I was in did. The only difference is that back then you were likely to find trusted relationships and common passwords between vastly different networks (e.g. an .edu machine and a .gov or .mil).
RSA executives told me that the part of their network responsible for manufacturing the SecurID hardware tokens was protected by an “air gap”—a total disconnection of computers from any machine that touches the internet. But in fact, Leetham says, one server on RSA’s internet-connected network was linked, through a firewall that allowed no other connections, to the seed warehouse on the manufacturing side.
To be clear, RSA executives did not understand what “air-gapped” means, or were lying about it. I feel this is an important take-away.
Breaches as extensive as the one carried out against RSA are often discovered months after the fact, when the intruders are long gone or lying dormant. But Duane says that the 2011 incident was different: Within days, the investigators had essentially caught up to the intruders and were watching them in action.
“I basically shut off RSA’s business,” he says. “I crippled the company in order to stop any potential further release of data.”
One person in legal suggested they didn’t actually need to tell their customers, Sam Curry remembers.
The RSA staffers began putting in nearly 20-hour workdays, driven by the chilling knowledge that the breach they were tracking was still unfolding.
This seems like a lot of fluffing RSA over this hack, but ultimately this was the same company that didn’t patch their Windows boxes and didn’t air-gap the seeds like execs claimed. Among all of these positive mentions for $person doing the right thing, we always get “that one guy we will not name” proposing absolutely shitty ideas or having a bad take. I understand they won’t throw anyone under the bus, but this is an important dichotomy.
“Recently, our security systems identified an extremely sophisticated cyberattack in progress,” (RSA notification)
Which part was sophisticated? Exploiting unpatched machines, pivoting, or stealing data? When seemingly every attack is a “highly sophisticated cyber attack”, is it really highly sophisticated?
In fact, by the time Castignola had landed in Massachusetts, both the NSA and the FBI had been called to help the company’s investigation, as had defense contractor Northrop Grumman and incident response firm Mandiant. (By chance, employees of Mandiant had already been on-site prior to the breach, installing security sensor equipment on RSA’s network.)
How’d that Mandiant software work out for RSA, I wonder? It doesn’t seem to have helped prevent or detect the intrusion at any point, based on the story told.
Multiple executives insisted that they did find hidden listening devices—though some were so old that their batteries were dead. It was never clear if those bugs had any relation to the breach.
Uh, this isn’t burying the lede, but it is burying a big one. I have so many questions and I don’t recall there being answers to this specific bit. How were there so many listening devices in RSA executive offices? Had they never done a single bug sweep? Were each of the devices found investigated? Did they do a sweep of all offices after finding them? If not, why not?
“Well it didn’t take long for whoever cracked RSA to find a lock to fit that key,” Cringely wrote. “What if every RSA token has been compromised, everywhere?” Two days later, Reuters revealed the name of the hacked military contractor: Lockheed Martin, a company that represented a cornucopia of ultra-secret plans for weapons and intelligence technologies. In the days that followed, defense contractors Northrop Grumman and L-3 were also named in news reports.
Today, with 10 years of hindsight, Coviello and other former RSA executives tell a story that starkly contradicts accounts from the time: Most of the former RSA staff who spoke to me claim that it was never proven that SecurID had any role in the Lockheed breach. Coviello, Curry, Castignola, and Duane all argued that it was never confirmed that the intruders inside RSA’s systems had successfully stolen the full list of seed values in an uncorrupted, unencrypted form, nor the customer list mapped to those seeds necessary to exploit them. “I don’t think that Lockheed’s attack was related to us at all,” Coviello states flatly.
vs:
A Lockheed source with knowledge of the company’s incident response reaffirmed to WIRED the company’s original claims. “We stand by our forensic investigation findings,” the source says.
vs:
In a briefing to the Senate Armed Services Committee a year after the RSA breach, NSA’s director, General Keith Alexander, said that the RSA hack “led to at least one US defense contractor being victimized by actors wielding counterfeit credentials,” and that the Department of Defense had been forced to replace every RSA token it used.
Can we figure out who is trying to re-write history here? It sure seems like RSA is, despite several other organizations saying otherwise. That would explain why so many were willing to go on the record for this article.
Last October, I did a release of fox squirrels for Greenwood Wildlife Rehabilitation after they had been rehabilitated. These squirrels couldn’t go back exactly where they came from because the owner of the property wasn’t available to give permission, which is required by Colorado Parks and Wildlife regulations. A wonderful lady who was on the release candidate list offered to take them, so I drove them to their new home. Since it was an October release and winter was close, each of the two batches got their own nest box to start out. Before I put the boxes up, I noticed that there were a lot of other nest boxes already up. Come to find out, she had been offering to take squirrels for many years.
I warned her about the danger of overcrowding: when squirrel density is too high, it can cause serious issues and lead to death for some squirrels. As they compete for food sources and are forced to spread out, they can move to yards or areas that are not as suitable for them. Forcing a squirrel out of its home has a high probability of leading it to its death, as it finds itself in a new area without food, shelter, or known escape routes. I ended up putting the boxes up because, that late in the season, we had no other viable release sites. Since she was supplementing their food with sunflower seeds, bird seed, and corn, it was a much better option than anything else.
Jump to today, when I get a call from her asking if I could help. Apparently one of her old nest boxes, which she thinks may be 20 years old, fell out of the tree this morning. She said no one from Greenwood or anywhere else she called could come help her put the box back up. So I found myself driving out to Arvada to see if I could get it done quickly before hauling ass down south for an early afternoon appointment. I spent an hour there, most of it trying to figure out a way to get the box back up in the tree and stable. This was tricky because the support board for the nest box had rotted out, leading to the fall, and it wasn’t usable. I had to run to the local hardware store for a hammer and some eye hooks, but ultimately it just wasn’t happening.
I left but told her to call me later that afternoon while I tried to think of a solution. Shortly after I left, I got a call from her saying that Greenwood could spare one of the big nest boxes, identical to the one that fell. She left immediately to get it which meant over an hour on the road. I had planned on returning tomorrow to work on it but the idea of squirrels not having their nest overnight didn’t sit well with me. Unprotected and sleeping in a tree is very risky; predators and even the wind are threats. After my appointment down south I drove back to Arvada with my own ladder and drill which I knew would be needed for the new box.
I don’t know much about relocating squirrels from one nest to another. Since they had enjoyed a protective nest box, I wasn’t sure whether that mattered when it was being replaced by a similar one. The boxes obviously look different and no doubt smell different to the squirrels as well. So I removed some of the bedding from the old nest box and put it in the new one.
My hope was that moving the bedding over would help the squirrels understand this was their new home. The next challenge came in the form of where to put the box. Whoever had put the old one up had a much taller ladder than hers or mine, so there was no way to get it back up that high. The angle of the tree meant most of the trunk space was not suitable: the surface was uneven, the box would sit at an angle, or branches were in the way.
Ultimately, we ended up removing an old bird house that had never been used since installation and putting the new box in its place. It wasn’t quite as high as I would have liked, but it was higher than some other nest boxes that have been put up. One side offered easy access to the box from the tree trunk; the other side wasn’t perfect, but a squirrel could definitely go from the trunk to that entrance too. I left her place at 7pm with a strong hope that the squirrels who watched us do all that understood what had happened. She told me she’d watch tonight to try to see if any went in, and would watch in the morning, like she always does, to see if squirrels emerged. I left a huge handful of sunflower seeds on top of the box and even more below at the foot of the trunk to help them while they adjust.
Today was the first day in my new career as Squirrel Tech Support apparently.
[Update: This morning she texted to let me know that squirrels were eating the seeds and one went into the new nest box. She says “Looking good for them!!!”]