Privasec’s Ridiculous Claim of a “World Record” in Vulnerability Disclosure

On May 9, 2019, Privasec published an odd press release with a URL slug of “privasec-queensland-telstra-acquisition” but a title of “Privasec Red’s Consultant Breaks World Record By Disclosing Most Number Of Open-Source CVEs.” This claim is simply wrong. To believe it requires either a complete misunderstanding of the vulnerability disclosure landscape or an intent to deceive. Neither is a good look for a security company.

The Claim

The claim is that Sajeeb Asim Lohani (a.k.a. sml555, a.k.a. ProDigySML) has disclosed 120 vulnerabilities and that this is a world record. It is fairly trivial to debunk. I say fairly trivial because it requires a good vulnerability dataset that tracks creditee information. Since CVE / NVD do not do that, I am curious how Privasec came to their conclusion. SecurityFocus’ BID and IBM X-Force are public databases that track creditees, but neither allows for a way to readily poll for that statistic. Even scraping that data, mangling it, and building a local searchable dataset should quickly show that 120 is probably not the record. [Update: IBM XFD shows 60 total]. So Privasec’s first mistake is not disclosing how they determined their claim.
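
For what it’s worth, the “mangle it and make it searchable” step is the easy part. Here is a minimal sketch of the counting, assuming the scraped data already sits in a local CSV with one row per vulnerability and a creditee column; the file name, column name, and alias map are my own illustration, not anyone’s real export:

```python
import csv
from collections import Counter

# Map the handles to a single canonical creditee. The aliases come from the
# press release itself; the CSV layout below is purely illustrative.
ALIASES = {"sml555": "Sajeeb Lohani", "ProDigySML": "Sajeeb Lohani"}

def canonical(name: str) -> str:
    """Collapse known aliases into one creditee name."""
    name = name.strip()
    return ALIASES.get(name, name)

counts = Counter()
with open("scraped_vulns.csv", newline="", encoding="utf-8") as fh:
    for row in csv.DictReader(fh):  # expects a "creditee" column
        counts[canonical(row["creditee"])] += 1

# Print the top creditees by number of credited vulnerabilities.
for name, total in counts.most_common(10):
    print(f"{total:5d}  {name}")
```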

The Debunking

Using VulnDB, which also tracks creditees and makes it easy to search them along with statistics about each researcher, I don’t even see 120 vulnerabilities credited to Lohani. This is after combining three separate creditees, Lohani, sml555, and ProDigySML, which are all the same person, into a single creditee. That yielded 78 vulnerabilities:

Why 78 versus the claimed 120, regardless of whether either is the most? There are several possibilities here, and they may be mutually inclusive. The easiest explanation is that there are over 40 disclosures by Lohani that have not been aggregated by VulnDB. Given the historical data and thousands of sources monitored, that would be a bit suspect. Since he “was nominated for AISA Rookie of the Year in 2017“, this isn’t likely an issue of the disclosures being historical and the data being incomplete.

Another possibility is that Privasec is trying to hide behind a single word in this press release. Note that it says he “has broken the world record by privately disclosing 120 Open-Source CVEs.” The problem with trying to use this as an out is: how do they know how many vulnerabilities other researchers have privately disclosed? Besides, they also make a point to say “Open-Source CVEs”, which presumably means “public” CVEs. That is on top of the PR headline not qualifying their claim at all.

One last possibility is that there are over 40 more of his vulnerabilities with a CVE, but all in RESERVED status. If that were the case, you’d expect them to have contacted MITRE to get them published; after all, they do say “open-source”. Additionally, they likely don’t have knowledge of the RESERVED entries that are actually public, which number in the thousands.

The Counter

If not Lohani, who has the most vulnerabilities to their name? Probably Mateusz Jurczyk (j00ru), but I would have to do some more data massaging to verify it. He (1,717) and Gynvael Coldwind (1,143) both come to mind for an incredible number of vulnerabilities, many disclosed together. Another name from a ways back is r0t (811), who rode the web application wave with many XSS and file inclusion vulnerabilities. Compare any of those to Lohani and his claimed 120 “world record” and you can see it is quite absurd. Hell, Jurczyk has more Microsoft Windows vulnerabilities with a CVE assignment than Lohani has vulnerabilities in total. It’s clear Privasec didn’t do their homework, or simply didn’t care to.

The Offer

Am I wrong? Possibly. I outlined several reasons why the numbers might be off on either side. So I have an offer for Lohani and Privasec: prove me wrong. It’s quite simple too, since you have the data behind the 120 figure. Share a list of Lohani’s vulnerabilities with me. A simple list of the CVE IDs is all I need; I will do the heavy lifting to verify whether that number is accurate. You’re still wrong about the “world record” either way; that is proven above. But I would love to see the list of 120 you claim regardless.

The Charity Challenge for Banshee

Unfortunately for them, the fax machine was invented in 1843. Banshee admitted defeat, so Durian it is! But I wanted to give some encouragement and started a charity pledge drive. Of course, me being me, I created a tracking sheet for this and as of this blog, there is already $1,945 in pledges to help support Love and Justice in the Streets.

Since Durian isn’t in season, Banshee is going to consume it at the next DEF CON in front of witnesses to make it official. Until then, I’d love to see more pledges! Send me a tweet and tag @banasidhe in it!

Image: exotic tropical fruit durian, Malaysia

Forbes: Lazy Vulnerability Reporting & A Bit of Bias

It may have been almost two decades ago that I joked with colleagues that many Information Security news articles could just be done via Mad Libs. We later joked that breach notifications often appeared to be done via Mad Libs, using the same phrases with different organization names and numbers of affected customers. Over the last few years, it seems Forbes has gotten lazy in their reporting on computer vulnerabilities.

First, a bit of background by querying Risk Based Security’s VulnDB, which I work on. While we track news articles on vulnerabilities, it is important to note that it is done on a best-effort basis. We try to capture higher-profile articles in the bigger publications within InfoSec and those outside the proverbial “echo chamber”, which includes Forbes, New York Times, Washington Post, Fox, etc. So by no means is this comprehensive, but it is important to understand the methodology, which relies on Google Alerts based on “CVE” strings. This started several years ago, maybe around 2015 give or take. Articles included before that were added as they came across social media, were referenced in modern disclosures, or surfaced in some other current manner, despite their publication dates.

The first Forbes article we have associated goes back to June 17, 2001, covering a vulnerability in a GE Healthcare device. Up to 2010, almost every Forbes article we have covers a GE device, along with one about Oracle and one about the Linux kernel. That alone is kind of interesting. From 2010 to 2020 we have Forbes articles covering a wide variety of vendors including Google, Onity, GE, Apple, Magento, PLX, and more. They also included articles covering big disclosures that spanned multiple vendors of DVR systems, SIM cards, microprocessors, and more. Last year, in 2020, Forbes produced a steady stream of articles for all the big vendors including Cisco, Microsoft, Apple, Google, Intel, Citrix, Zoom, and more.

This year though, it seems like Forbes got lazy. Perhaps it is burnout from writing what is essentially the same article? You might think that, but no, because writing essentially the same article is exactly what they started doing. Coverage is heavily based around Google Chrome and components in it, as disclosed via Google Chrome’s blog. Of the 48 vulnerabilities cataloged by VulnDB in 2021 that have an associated Forbes article, only 12 are in non-Chrome products. What’s the gist of their coverage? Here are three examples; see if you notice the similarities.

You may see the common phrase, “2 Billion Chrome Users”. Don’t worry, in a recent article that got increased to 2.6 billion! If it isn’t in the headline, you can find the phrase in almost every article talking about Chrome vulnerabilities. I get that these articles are repetitive, because there are only so many ways you can say Google fixed vulnerabilities in their browser.

That said, what’s more interesting to me is that in all that time they appear to have published a single similar article about Mozilla Firefox vulnerabilities, while continuing to encourage users to ditch Chrome. If I didn’t know better, I might think Forbes has chosen a side in the browser wars.

An 83 Word Excuse Instead of a 1 Character Fix (NCSC.nl)

The National Cyber Security Center of the Netherlands (NCSC.nl) has a curious take on sharing security information. On October 25, 2021, I contacted them to inform them of a simple typo in one of their advisories. I send emails or Tweets like this several times a week to researchers, vendors, and news outlets, as CVE typos are prevalent. The issue is that a mistyped CVE ID can cause a lot of headaches for organizations that take vulnerability intelligence seriously. Imagine an alert about a new CVE affecting enterprise software running your most critical assets, and all you can find is a news article saying the issue is critical. The vendor advisory doesn’t reference it and there is almost nothing to be found on Google or social media. What do you do? Before you spin up the entire team and tell them to stay late planning for emergency remediation, you need to know what you are dealing with.
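
Catching these typos is mostly mechanical. Here is a minimal sketch of the kind of check I have in mind, assuming you keep a local list of known CVE IDs; the file names are placeholders, and a typo that happens to collide with another valid ID would still need a human eye:

```python
import re
from pathlib import Path

CVE_RE = re.compile(r"CVE-\d{4}-\d{4,}")

# Hypothetical list of known-assigned CVE IDs, one per line, e.g. built from a
# downloaded copy of the CVE list. The file name is a placeholder.
known_ids = set(Path("known_cves.txt").read_text().split())

def suspicious_ids(text: str) -> list[str]:
    """Return CVE IDs referenced in the text that are not in the known list."""
    return [cve for cve in CVE_RE.findall(text) if cve not in known_ids]

# Hypothetical local copy of the advisory text to check.
advisory = Path("ncsc-2021-0840.txt").read_text()
for cve in suspicious_ids(advisory):
    print(f"Possible typo or unpublished ID: {cve}")
```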

Most of the time, the Tweets and emails get a quick reply acknowledging it was a typo and they fix it when possible. Every so often I get no reply and the typo stays there, likely forever. That typically happens on sites that appear to be automated aggregation of content for the sole purpose of getting clicks to generate ad revenue. They have no contact information, no social media, and no author bylines. Otherwise, generally such notifications are well received.

In the case of NCSC.nl, I figured I would get a prompt reply and a quick fix. I got the prompt reply, but not the fix. First, note that they provide limited advisory services notifying their stakeholders of vulnerabilities, along with a page describing what the advisories are. They also have a PDF with a longer explanation of what a security advisory is. Per Google Translate, the advisories’ “… aim is to describe what the vulnerability is and what could possibly happen if it is exploited.” Simple and straightforward. As most security professionals know, accuracy in an advisory is important. A typo in a CVE could point to the wrong vulnerability, which might be in the wrong software completely, or the right software and the wrong vulnerability. I contacted their info@ address to let them know about the typo:

https://advisories.ncsc.nl/advisory?id=NCSC-2021-0840

[..] CVE-2021-3715 , CVE-2021-38160 , CVE-2021-4049 [Link]

That should be CVE-2021-40490 at the end.

Brian

The prompt reply I received the next morning was rather baffling. They ‘investigated’ the issue, confirmed I was correct, and wrote a 62 word reply over six lines instead of just fixing the single character that was missing.

Thank you for your e-mail. Hereby we confirm that we have received your
email and investigated the issue. We would like to thank you for your
friendly remark. However, we have decided not to update the
advisory as the CVE number is written correctly in other places in the
advisory.
Feel free to contact us again if there are any questions left.

I naturally questioned them on this odd reply and refusal to fix an inaccurate CVE identifier:

Yes, I have questions.

Why wouldn’t you correct a simple typo? More specifically, for a CVE ID that can cause confusion for security practitioners trying to ensure they have accurate vulnerability intelligence. Anyone reading your advisory may go down a proverbial rabbit hole trying to figure out what CVE-2021-4049 represents and waste considerable time.

Consider that that typo caused our team to respond trying to figure out what that ID represents. Fortunately, we have amazing vulnerability intelligence and it was fairly easy to deduce what happened.

Your apathy in this matter is staggering.

I hoped that an explanation, with a bit of shaming, might prompt them to just fix the single missing character. Nope…

Thank you for your e-mail. We appreciate your concerns. When the advisory
needs to be updated the typo will be corrected.

OK, but the advisory literally needs to be updated to fix the typo. This recursive excuse is just absurd. 21 word reply this time instead of a one character fix. They appreciate my concerns, but not enough to fix ONE CHARACTER.

It’s hard to have faith in Information Security when a national security center doesn’t understand the importance of accuracy and integrity. I hope organizations in the Netherlands are not relying on the NCSC.

The Charity Challenge for The Only Mike E.

Well, maybe not the only one, but this one promises to be fun! It started with such a harmless Tweet, essentially Mike digging his own mouth-hole grave. From there it really spiraled out of control and it just keeps getting better.

This evening I put my Amazon skills where his mouth is and sent items, along with the offers to donate to charity on his behalf. Here’s how it works:

  • I send him a food(ish) item.
  • He has to consume it in some fashion. Restrictions or requirements may apply.
  • He posts a picture or video of consumption.
  • We believe he won’t hoodwink us.
  • I donate to the charity of his choice!

If he completes all of the challenges below, to their fullest potential, he will raise $1,700 for charity. That said, we are both hoping that others will offer to donate a few bucks to see these challenges go down. Even better, if anyone will offer to match even one of the challenges, we could all do a lot of good here. At Mike’s expense, so really doing two types of good. If you’d like to pledge, send a tweet to @attritionorg with which challenge(s) you would like to sponsor and the charity pledge amount. Those who pledge will be added at the end of this blog!

Challenge #1: Blair’s Ultra Death Sauce! This is rated between 800,000 and 1,100,000 SHU depending on the site you read. Either way, this will bring some spice into his life. It may also cause him not to do #UnicornChef for a week as his taste buds regrow. This sauce is reportedly the hottest sauce that they have made, hotter than their “Mega Death” sauce. What’s in it? Red Habanero Pods, Cayenne Chiles, White Vinegar, Natural Pepper Flavor (contains Soybean Oil), Tomato Puree, Pirri-Pirri Chiles, Pumpkin Seed, Lime Juice, Salt, and Spices. Look at that… tomatoes, lime juice, pumpkin seed, salt, and spices. All ingredients he has probably cooked with, so this one is an easy one. The bounty: $100 for eating a cracker or similar with a healthy dash of the sauce or $200 for shaking several dashes of sauce directly into his mouth-hole. An extra $50 for using it to prank his wife on video.

Challenge #2: Mrs. Butterworth’s Sugar Free Syrup! Note that I was generous and got him the sugar-free version of this. As a type-1 diabetic, I only want him to hate his mouth, not his pancreas. Anyway, this challenge is simple! Anyone familiar with the movie Super Troopers knows how this goes. Here’s a video clip reminder. Just tilt the head back, open the mouth-hole, and chug it down like a syrup pro. The bounty: $100 for chugging a quarter of the bottle. $200 for chugging half the bottle. $500 for chugging the entire thing in the same approximate time as we see in the movie.

Challenge #3: Fatworks, Pasture Raised Leaf Lard, a.k.a. “Baker’s Lard”, Artisanally Rendered, for Traditional & Gourmet Baking, Sautéing, and Frying. Bonus… this is keto and paleo friendly! This was Apacid’s idea along with the syrup. He’s quite sick in the head. I think one large spoonful stuffed into his mouth-hole to “chew” on is a sufficient challenge, then he can use the rest for cooking. No extra flavoring, no additives, just a nice chunk of pure lard. The bounty: $100 for a large spoonful. Bonus, he can redeem this bounty up to three times if done on the same day!

Challenge #4: Not believing these were good enough, Mike then made the mistake of proposing his own challenge, to which I quickly agreed. “I’ll eat a bowl of dog kibble in maple syrup for your animal shelters out there.” If he completes all of the second challenge he’ll have to use his own maple syrup for this one. He gets to pick his favorite dog kibble and syrup, and eat at his leisure. Of course, if he drowns it in enough syrup he won’t taste the kibble but that is up to him. The bounty: $250 for the entire bowl, to the animal charity of his choice. He has chosen the New Hampshire SPCA for this one.


Pledgers:

  • John Rex – $100 for each of the first four challenges. [ref]

Assessing the ‘War on Tech’: Huawei vs. U.S.

[I wrote this with Curtis Kang who did a lot of work researching various aspects of this article and provided invaluable help. His research and written contributions made this article possible. It was originally intended to be published on RiskBasedSecurity.com in early 2020 but was passed over so I am publishing it here.]


In 2019, we saw the steadily-growing social and policy conflicts between the United States and China reach a boiling point. China has been a major talking-point of President Trump’s platform since early in his campaign. However, it wasn’t until last year that we saw active policies enforcing a so-called “war on tech” between the U.S. and major Chinese companies like Huawei and ZTE, and those policies being “sidestepped”. We wanted to examine this from a data perspective, looking at the vulnerabilities in similar companies from both sides.

To set the stage, it is useful to briefly revisit the U.S. vs CN timeline.

The Trade War

Since taking office in January 2017, President Trump has had a specific interest in China, stating early-on that the “U.S. will be on a level playing field”. This led to several rounds of tariffs being imposed against China starting in March 2018, and retaliatory tariffs being imposed against the U.S. Early in 2019, there was conjecture that President Trump may use an executive order to limit some U.S. companies such as wireless carriers from purchasing Chinese electronic devices. That executive order was signed on May 15, 2019, citing the National Emergencies Act (50 U.S.C. 1601 et seq.) that would limit or ban purchases of “technology or services designed, developed, manufactured, or supplied, by persons owned by, controlled by, or subject to the jurisdiction or direction of a foreign adversary”.

While the executive order did not list any country or company, it was widely accepted that it was a move against Huawei in particular. The order contained interesting language, saying that the banned technology or services “poses an undue risk of sabotage” and is “an unacceptable risk” to the United States, among other wording. Technology meeting those criteria would be determined by the Secretary of Commerce, in consultation with nine other bodies “and as appropriate, the heads of other executive departments and agencies”.

On May 20, 2019, the BIS modified the final rule and granted a Temporary General License (TGL) until August 19, 2019 for transactions regarding, among other things, “Continued Operation of Existing Networks and Equipment” and “Cybersecurity Research and Vulnerability Disclosure.” On August 19, 2019, the BIS extended the TGL by 90 days, or until November 19, 2019. Outside the TGL, any request for a license to sell or transfer commodities, software or technology to Huawei is reviewed “under a policy of presumption of denial.” In other words, the BIS provides virtually no avenue for a continued commercial relationship with Huawei after November 19, 2019.

Months later, when asked if China would retaliate, Chinese foreign ministry spokesman Geng Shuang told reporters “stay tuned.” Two weeks after that, China announced tariffs on $75 billion of U.S. products. This was followed in December with China announcing a ban on foreign technology in “all government offices and public institutions” within three years. The ban also prevented companies such as Google, Dropbox, and Facebook from being used within China. With this, the United States and China were in a new type of technology war based on the premise that the adversarial nation was producing equipment that “poses an undue risk of catastrophic effects”.

The Fear of Backdoors

Computer equipment that poses a risk in the context above typically brings to mind computer vulnerabilities. Issues that, with the right knowledge, would allow one country to use software vulnerabilities to compromise assets in the adversary nation’s government, business, or infrastructure. Another common scenario brought up by security professionals and intelligence officials is that of a backdoor; computer code planted by a party that allows them, and only them, covert remote access to the device. Some members of the U.S. intelligence community would prefer these Chinese products not be used in the technological infrastructure, saying it “would undercut the ability of the U.S. to defend itself.”

This fear, specifically of Huawei routers from China, has been front-and-center since 2011, and a growing concern even before that. In the past, the concerns largely stemmed from each nation compromising the other’s computer networks in government and business. More recently, with the race to implement a 5G network, security issues around that technology have been heavily scrutinized. This war of technology has reminded us of 2010, when Huawei published an open letter to the U.S. government attempting to allay fears and shift public perception after a decade of suspicion. The company went so far as to request a “thorough investigation” to prove that they are “a normal commercial institution and nothing more.” This prompted eight U.S. senators to urge the White House to examine Huawei contracts and the House Intelligence Committee to investigate and publish a report on both Huawei and ZTE.

Ultimately, that report was inconclusive and stated the following – “despite hours of interviews, extensive and repeated document requests, a review of open-source information, and an open hearing with witnesses from both companies, the Committee remains unsatisfied with the level of cooperation and candor provided by each company.” Even over six years later, in 2019, Adam Segal, director of the Digital and Cyberspace Policy Program at the Council on Foreign Relations, officially stated that no one has found a backdoor in a Huawei product.

This is important to note, given the considerable scrutiny Huawei has received. In addition to their open letter in 2010, Huawei also disclosed their source code to a foreign government, something that no U.S. company has done. Despite the numerous information security organizations attempting to find and potentially publish findings of an actual backdoor (including the NSA and specifically created testing centers in the UK), none have been confirmed. That the U.S. National Security Agency (NSA), with its significant budget and vested interest in determining whether a company like Huawei is shipping backdoored systems, has not disclosed one is compelling.

Ignoring Backdoors and Looking at the Data: Is a Ban Warranted?

Given that history and perspective on the growing tech war between the U.S. and China, we at Risk Based Security wanted to look at some concrete numbers around the vulnerabilities in the companies at the center of the issue.

While much of the focus on this topic has been on fear and the threat of backdoors planted by a vendor at the behest of their government, that is not necessarily where we want to direct attention. Using a backdoor, even if it is well-hidden, would likely bring unwanted attention by giving more positive attribution to those who compromised the machine. Nation-state level hackers would have their own ways into a wide variety of vendors and devices purely based on ‘natural’ vulnerabilities in the code. They simply do not need the access, and risk, a backdoor provides. Why provide hints to the enemy that you’ve “cracked the code” when you could hide behind an existing vulnerability?

Setting aside the possibility of backdoors, the question we’re interested in is this: does one of the government-used devices pose more of a risk due to its vulnerabilities? However, we have found that the “war on tech” cannot be simplified into the classic “how many vulnerabilities are there in…” question, or else unspoken bias drastically affects the perceived meaning of the numbers. While there is no way to do a perfect one-to-one comparison of U.S. versus Chinese vendors, there may be some that we can begin to compare, with disclaimers.

Phones: BlackBerry vs. Huawei / ZTE

For the general public, and based on much of the mainstream media reporting, Huawei is predominantly associated with its mobile phones. As more of our lives move to mobile, it is no surprise that those in power are concerned about the security of their phones and tablets. For the U.S. and Chinese governments, it is widely viewed that BlackBerry and Huawei / ZTE phones, respectively, are dominant. BlackBerry announced a five-year deal for their latest handheld and their AtHoc software with the federal government back in July 2016, specifically the Department of Defense (DoD) Joint Emergency Mass Notification Systems (JEMNS). According to the press release, the DoD chose BlackBerry because of the “secure end-to-end mobility offering .. that [shows the] secure platform is designed to meet their priorities”.

Despite the contract, BlackBerry is not the most widely used phone in the U.S. government. The U.S. Senate officially “ditched” BlackBerry in 2016, but allows them to continue to use specific devices per an official memo. In fact, BlackBerry themselves have stopped making their own handheld devices and have shifted to business software and other solutions like AtHoc, apparently used by 70% of federal employees including DoD, DHS, VA, DoE, DoA, PFPA, FEMA, IRS, and the TSA. For a majority of government employees, the most commonly used phones are now Apple and Samsung products.

With regards to China’s government, specific details about mobile phone adoption are not readily available. By simply looking at Huawei’s market share in China, one might safely assume that their devices are favored by some in the Chinese government. While it has long been rumored that Huawei has a very direct and complicated relationship with their government, which is supported by both Vietnamese academic and U.S. government research, Huawei says their relationship with the government is “no different” than any other company in China.

The U.S. government officially uses a mix of BlackBerry, Apple, and Samsung (Android), meaning that there are three major vendors and three major operating systems. For the Chinese government, apparently there is no officially sanctioned device, but it is very likely Huawei (formerly Android, but moving to Harmony OS / Hóngméng in 2020) and ZTE (Android) phones are heavily used. Looking at the last three calendar years, here is a comparison between the vendors to see how many vulnerabilities have been posted:

With these numbers it may seem like BlackBerry represents more risk. However, if BlackBerry shares the same vulnerabilities as any other Android device, and they disclose vulnerabilities in the applications they ship, that number can run higher. The same can be said for any other Android phone that ships with packaged vulnerable apps and components, so the 1,338 Android vulnerabilities are not a full representation for other devices (e.g. Samsung, Huawei, ZTE). We then have to remind readers that comparing open source software such as Android to closed source such as BlackBerry OS and Apple can introduce bias in disclosure numbers. Another aspect to consider is that the number of devices in use may influence how many people are actually performing security research on them.

Ultimately, this means neither the U.S. nor China can justify banning devices based on phone vulnerability history alone. Trying to state one vendor is more “vulnerable” than the other using currently available vulnerability data alone requires so many disclaimers that the end result loses its potency.

Routers & TelCom: Huawei vs. Cisco et al

The second major aspect of concerns over technology from one country being pervasive in another is that of access. Everyone from the carriers to end users expects the equipment to function seamlessly, giving us access to the internet and mobile service. That service is built on a complex infrastructure of telecommunications (telecoms) hardware and software produced by companies such as Huawei, Cisco, Fujitsu, Nokia, and Ericsson. The telecom hardware includes routers, base transceiver stations, fiber optical networks, satellites, and a lot more. As of 2017, Chinese companies produced the most telecom equipment in the world, about 14% more than the United States.

Looking at these vendors for the last four calendar years, we get another lesson in how significant bias is introduced into vulnerability statistics by disclosure practices. Cisco had 2,227 vulnerabilities in that time. Compared to Huawei with only 813, one might conclude that Cisco’s software is inherently riskier. But compare Cisco with the three other companies. Fujitsu enjoys 79% of the market share by revenue, yet had only 24 vulnerabilities in that time frame. Going by that logic, can we conclude that Fujitsu is the most secure?

Consider that of Fujitsu’s 24 vulnerabilities, only three are in their own products, and one of those is in a keyboard. The other 21 vulnerabilities are in third-party software or hardware (e.g. Intel processors). Cisco, on the other hand, has an incredible number of vulnerabilities reported, but they rarely publish that they are affected by vulnerabilities in OpenSSL or Intel, for example, despite using those technologies in some of their devices.

Both Cisco and Fujitsu maintain contact pages for reporting security vulnerabilities, have a formal PSIRT to respond to vulnerability reports, and publish security advisories. Despite this, their public disclosure histories are about as opposite as you can find in many ways. We know for a fact both companies use hundreds of third-party libraries in their code, yet neither publishes when third-party vulnerabilities affect their software. Based on our extensive history of tracking vulnerabilities, we are quite certain that Fujitsu products have, or have had, more vulnerabilities than they have officially disclosed. Any notion that Fujitsu (24) is a one-off situation can be dismissed when looking at Nokia (11) and Ericsson (8) for the same period. That suggests Cisco and Huawei are the outliers.

We can apply this same scrutiny to Huawei, which has only 813 vulnerabilities despite its large market share and its considerable transparency when it comes to third-party vulnerabilities. In the world of vulnerability research, access to software and equipment is essential, of course. Some may argue that Huawei equipment isn’t readily available to many researchers, and that might be true for U.S.-based researchers. But the last decade has shown an incredible number of extremely talented security researchers in China, who would presumably have more access. If one were to argue that China is looking to restrict vulnerability disclosure, that certainly will be something to consider moving forward. However, that plan is still preliminary and has not been implemented.

Conclusion: Overkill

You need comprehensive, detailed, and actionable data in order to make informed decisions. Following this mantra, we are comfortable in assessing that with the currently available vulnerability data, a hard stance condemning either side is not possible. As much as we would like it to be, a comparison of vulnerabilities between vendors cannot be a panacea.

That being said, does this mean that both the U.S. and Chinese governments are banning each other’s products solely for political posturing, or is it actually an informed decision? As we can see, it may be easy to arrive at a conclusion when looking at the data. But as informed citizens we all need to be aware of the disclaimers and hidden perspectives that the data may not overtly reveal. The answer is not as simple as “well, this has more vulnerabilities than that”.

The concern is valid even without concrete evidence of backdoors in Huawei products, but a total ban is overkill and may have far-reaching unintended consequences. As the “war on tech” has raged on, both the U.S. and China have suffered casualties.

Commentary on Trend Micro’s Linux Threat Report 2021

On August 23, 2021, Trend Micro released a report titled “Linux Threat Report 2021 1H” by Magno Logan and Pawan Kinger. The report is based on Trend Micro’s Smart Protection Network (SPN), which they call “the data lake for all detections across all Trend Micro’s products“. Basically, every security product they make that detects vulnerabilities and reports them back to Trend Micro can be, and is, used in research like this, among other things. They further qualify that the report also draws on “collected data from honeypots, sensors, anonymized telemetry, and other backend services” and represents “the real-world prevalence of malware and vulnerability exploitation across enterprises” regardless of size or vertical.

Reports that talk about the most exploited vulnerabilities are compelling. They offer a short list of vulnerabilities that organizations can make absolutely sure they have patched so those issues pose no risk. Unfortunately, many of these reports have problems. I have written about some before, including the 2015 Verizon DBIR, the 2016 Verizon DBIR, and Radware’s Top Web Exploits of 2020. I wish I had more time, as I have seen other reports on exploit prevalence with similar issues. In this case, Trend Micro’s report falls into at least one of the same traps as those prior reports.

The first issue that pops out is the wording in the report that introduces a major point of confusion. In section two, titled “The Linux threat landscape: What are the top Linux threats?“, under the second heading titled “Vulnerabilities in Linux systems“, we get more details qualifying where the data came from to generate this part of the report:

“… we dissected IPS (Intrusion Prevention System) hits from Trend Micro Cloud One – Workload Security and sifted through over 50 million events, ignored false positives, eliminated test data, and layered data with available threat intel to draw some conclusions.”

Unfortunately, the next sentence immediately introduces some doubt, and we don’t know how much doubt because they don’t qualify their margin of error:

“It should be noted that there can be a degree of error here due to the nature of the data and internet activity.”

If the margin of error is 1% in a dataset that large, it is not a big deal. If it is 10%, that can be problematic. If it is 50%, then the report shouldn’t have been written at all. Apparently, we get to guess what that margin of error is.
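
To put those guesses in perspective, here is a quick back-of-the-envelope calculation against the “over 50 million events” figure; the error rates are my illustrative picks, not anything Trend Micro states:

```python
# Rough sense of scale: how many of the ~50 million events would be
# questionable at a few hypothetical error rates.
EVENTS = 50_000_000

for error_rate in (0.01, 0.10, 0.50):
    bad = int(EVENTS * error_rate)
    print(f"{error_rate:.0%} error -> ~{bad:,} questionable events")

# Output:
# 1% error -> ~500,000 questionable events
# 10% error -> ~5,000,000 questionable events
# 50% error -> ~25,000,000 questionable events
```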

Now, for the section of the report that initially got my attention, we get to the top 15 vulnerabilities. I can’t finish that sentence because there is confusion:

If a list of vulnerabilities includes the top 15 that are “actively exploited” or “have a known proof of concept”, how do you even? Over 4,500 vulnerabilities disclosed in the first half of 2021 have a public proof-of-concept or functional exploit. The next sentence clearly repeats the exact same thing. I can’t figure out how to explain that second part unless they are attempting to say “actively exploited and have a public proof of concept” to distinguish from exploitation that happens where the exploit has not actually been published. That seems like a pretty big oversight given the nature of this section of the report. Further, it doesn’t qualify whether the report is based on attempted exploitation that matches a signature or on successful exploitation. After the table of vulnerabilities the report says “Table 1 shows the top vulnerabilities by volume of triggers,” which strongly suggests it is looking at exploit attempts. But that just leads to more questions like, “if you see an attempt for that vulnerability but against a Windows server, does it count?”

It gets even murkier looking at the table of the 15 vulnerabilities where one of them is listed as “N/A” for severity. That warrants digging into their list more closely and comparing the vulnerability information with that in VulnDB.

There are several observations to be made for this list:

  • CVE-2017-9805 is listed as ‘High’ severity, suggesting they pulled at least some vulnerability data from the National Vulnerability Database. NVD scores the vulnerability 8.1 (High) while VulnDB and CERT VU score it 10.0. Looking at the original disclosure, there are no obvious qualifications that seem to justify an Access Complexity: High (AC:H) rating.
  • Of the 430 vulnerabilities involving WordPress, either the base software or plugins, that allow for remote code execution, why did only one make the list (CVE-2020-25213), and why that one? Given the amount of scanning for vulnerable WordPress installations, I would expect more to be on the list. Hell, even the venerable CVE-2013-4338, given there are other 2013 CVEs on the list.
  • The Atlassian Jira vulnerability is very curious given that it is a remote information disclosure issue and does not disclose sensitive information, such as a password, that would result in further privilege escalation. Based on the logs of attrition.org over the last three months, there has been a single request for /secure/QueryComponent!Default.jspa. There have been five requests for /secure/QueryComponentRendererValue!Default.jspa (CVE-2020-36289), which is another information disclosure issue. There are also hundreds of information disclosure vulnerabilities that yield credentials which can be used to authenticate to an application to gain privileges. I would expect to see any one of those on the list before CVE-2020-14179.
  • Eclipse Jetty (CVE-2017-7657) is very curious to see on this list for several reasons. First, it is a four-year-old vulnerability that does not result in code execution. Second, there is a caveat for exploitation, as explained in the Eclipse bug ticket: “was deployed behind an intermediary that imposed some authorization and that intermediary allowed arbitrarily large chunks to be passed on unchanged, then this flaw could be used to bypass the authorization imposed by the intermediary“. To see an HTTP request smuggling issue be that widely exploited, over the thousands of other vulnerabilities that allow for a more serious impact in software found running on Linux, is baffling. This strongly suggests the detection rule matching that vulnerability is overly broad and triggers on exploit attempts for different issues.
  • The nginx vulnerability is listed as N/A, which is curious. Looking at the associated NVD entry (CVE-2013-4547), we see it has a CVSSv2 score but not a CVSSv3 score. That is due to it being a 2013 vulnerability and NVD not retroactively scoring all vulnerabilities. This, along with CVE-2017-9805, tells us that Trend Micro imported the scores from NVD but didn’t adjust for this one by using the CVSSv2 score or developing their own CVSSv3 score. It seems weird to blindly use the CVSSv3 scores and have incomplete data when it is a simple correction to make (a minimal fallback is sketched after this list).
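
For illustration only, here is roughly what that correction could look like when importing scores from NVD data. This is a sketch against a locally downloaded NVD JSON file; the file name is a placeholder and the field names follow NVD’s JSON 2.0 format as I recall it, so they may need adjusting:

```python
import json

def base_score(cve_entry: dict):
    """Prefer a CVSSv3.x base score, falling back to CVSSv2 instead of 'N/A'."""
    metrics = cve_entry.get("metrics", {})
    for key in ("cvssMetricV31", "cvssMetricV30", "cvssMetricV2"):
        if metrics.get(key):
            return metrics[key][0]["cvssData"]["baseScore"], key
    return None, None

# Hypothetical local file holding NVD records for the CVEs in the report's table.
with open("nvd_subset.json", encoding="utf-8") as fh:
    for item in json.load(fh).get("vulnerabilities", []):
        cve = item["cve"]
        score, source = base_score(cve)
        label = f"{score} (from {source})" if score is not None else "no score at all"
        print(f"{cve['id']}: {label}")
```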

Looking back at bullet #3, it’s interesting to compare the hits seen on our web server and note that we also observed 10,659 requests for /wp-login.php in that same period. This, I think, illustrates a serious flaw in the methodology of this report. Most of the scanning we see for vulnerable WordPress instances first looks for the presence of the software before attempting to exploit anything. Rather than throw hundreds of payloads for various flaws in the core software and vulnerable themes or plugins, it’s more efficient to check if the software is present first: scan for the software to build a list of sites running WordPress before launching more significant attacks that may attract more attention.
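
Checking your own logs for that kind of reconnaissance is straightforward. Here is a minimal sketch, assuming a standard combined-format access log; the log path and the exact probe paths counted are my assumptions:

```python
from collections import Counter

# Paths commonly probed when fingerprinting WordPress before exploitation.
PROBES = ("/wp-login.php", "/xmlrpc.php", "/wp-json/")

hits = Counter()
with open("/var/log/apache2/access.log", encoding="utf-8", errors="replace") as fh:
    for line in fh:
        # Combined log format: the first quoted field is 'METHOD /path HTTP/x.x'.
        try:
            request = line.split('"')[1]
            path = request.split()[1]
        except IndexError:
            continue
        for probe in PROBES:
            if path.startswith(probe):
                hits[probe] += 1

for probe, count in hits.most_common():
    print(f"{count:6d}  {probe}")
```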

As always, a real test of the veracity of this data would be for another firm that does large-scale monitoring of attacks to publish their own data, limited to the same approximate criteria as Trend Micro. That might explain bullet #4 at the very least.

Niki7a vs Magen – Charity Drive

Like the epic rumble in the jungle, after a quick offer to throw money to charity on behalf of their suffering, other generous souls jumped in to do the same. The result? Niki7a and Magen are now locked in a legendary contest to see who can wear their DEF CON 29 bracelet the longest. Whoever wears the bracelet the longest gets to pick the charity we donate to. Magen has opted to have donations go to SailFuture, “a comprehensive child welfare agency, and private school based in St. Petersburg, Florida” whose mission is “to innovate and develop effective programs for high-risk youth, particularly youth in foster care.” At the time of this blog, Niki7a is still deciding.

Image courtesy @drwolfff

This started out with Niki7a’s innocent Tweet:

Magen qualified that there are challenge coin rules to keep each other honest:

With all of the people kicking in to donate, I will maintain a spreadsheet to track pledges and totals!

Reflections on “CVE Approach for Cloud Vulnerabilities”

At Black Hat Briefings USA this week, Ami Luttwak and Shir Tamari called for a “CVE” style approach to documenting vulnerabilities that affect cloud offerings (note: I have not seen the talk). As one of two people (the other being Jake Kouns) who may have the longest history in this specific space, I wanted to give some history and very high-level reflection on the challenges and benefits of this endeavor. First, for perspective, CVE covers at least 56 pure cloud / site-specific vulnerabilities, the first going back as far as 2000. They cover at least another 461 that are hosted blockchain contracts. At times, some on the CVE (Editorial) Board have advocated for CVE to expand to cover cloud vulnerabilities while others have argued against it. At least one who advocated for coverage said cloud vulnerabilities should get CVE IDs, while others, both for and against the idea, said that if cloud were covered, those issues should get their own ID scheme. MITRE has not made a clear policy change on that in years, despite increasing coverage, primarily of hosted blockchain contracts.

Next, the idea of “CVE for Cloud” has been done! Well, it was attempted at least. Back in the day, the Open Security Foundation was desperately trying to maintain the Open Sourced Vulnerability Database (OSVDB) and DatalossDB. Neither project had enough volunteers and community support simply wasn’t there. Despite that, we spun up a project called Cloutage (archive.org link) in 2010, which originally aimed to catalog instances of cloud service outages. The idea was that this would tell companies which providers actually had the alleged 99.999% uptime, which regions of a provider were more prone to issues, and more. We had been collecting incidents categorized as “hack”, “outage”, or “vuln” long before that with the intent of such a project, but that is when we finally made them publicly available in an organized fashion. So my reflections come with real-world experience; the challenges we faced, along with the pitfalls of such an endeavor, are still largely present. (For convenience, I will use the term ‘cloud’ to refer to service providers with subscription offerings, not on-premises hardware / solutions.)

First, let’s begin with one significant change from the mid 2000s to today. Over ten years ago, a vulnerability in a cloud provider was typically an issue the provider had to fix and was out of the customer’s control. While that is still prevalent today, we see another class of cloud vulnerabilities that better warrants a “CVE for Cloud” approach: cases where a vulnerability is present in a cloud offering and requires the customer to mitigate it themselves. These will often be described as misconfigurations, like the pervasive open S3 buckets that can be accessed without credentials, disclosing a world of information or hosting malicious software [1] [2] [3]. There is a fundamental difference between a security issue that requires the customer to mitigate and one where they must wait for the cloud provider to fix it.

Historically, when the vast majority of security issues fell into the “wait for the provider to fix” category, we cataloged those vulnerabilities but questioned the value. Sure, it had value in that a company could evaluate cloud providers to see which had more vulnerabilities and, sometimes, how long they took to fix. But even that was crippled because companies would only announce issues after they were fixed. That meant we knew a vulnerability had been present for some undetermined time and had then been patched. But cloud providers weren’t including how long customers were exposed, and most providers simply were not disclosing at all. So the database was a fraction of the actual vulnerabilities, typically with no known exposure window, and customers were powerless to do anything about it.

Another issue we ran into was that the rare times a cloud provider disclosed an issue, it was extremely vague. More frustrating is when they used language implying that some, but not all, customers were impacted. They didn’t include regions, IP ranges, or anything else to identify who was affected, so a customer couldn’t even confirm they were in the clear without opening a support ticket and hoping to get an honest answer. That was frustrating and not very helpful. How will security teams cope if they find out there is a public vulnerability in a cloud offering and no mitigating control? Seems like a recipe for stressful days. Of course, it’s no different than now, except that today ignorance is bliss.

CVSS is the de facto risk scoring framework for vulnerabilities. It was designed around vulnerabilities that appear in CVE, which is predominantly on-premises software. Would CVSS scale to work for cloud vulnerabilities? CVSSv2 seems like it would be as suitable for cloud as for on-premises software, given its simpler scoring elements. CVSSv3 seems like it could be more problematic and not take advantage of some of the changes over v2. First, the ‘physical’ attack vector would effectively be eliminated. Second, would Authorization Scope be neutered? Defined initially as the “ability for a vulnerability in one software component to impact resources beyond its means, or privileges“, on the surface it seems like every vulnerability would have to be evaluated more closely for scope change. Every vulnerability potentially causes the cloud provider, a different company than the one using the service, to get involved in some way. The cloud provider’s support and security teams spending time because $CompanyA’s instance got popped technically “impacts resources beyond its means”. I don’t think that is what the CVSS SIG had in mind though. That said, given the shared environments of many cloud offerings, sandbox escapes and lateral movement potentially impact completely different organizations. Does a simple “scope changed” modifier to a score adequately convey the potential risk?
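
As a concrete illustration of how much weight that one metric carries, consider two otherwise identical CVSSv3.1 vectors; the cloud framing is mine, and the scores are what the standard calculator produces for those vectors:

```python
# Two hypothetical cloud scenarios, identical except for the Scope metric.
scenarios = {
    # A low-privileged tenant fully compromises their own instance only.
    "tenant compromises own instance (S:U)":
        ("CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H", 8.8),
    # The same flaw lets the tenant escape and affect other tenants / the platform.
    "tenant escapes to other tenants (S:C)":
        ("CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:C/C:H/I:H/A:H", 9.9),
}

for label, (vector, score) in scenarios.items():
    print(f"{score:>4}  {label}\n      {vector}")
```

A single metric flip moves the score from 8.8 to 9.9, yet says nothing about how many other tenants, or which ones, are actually exposed.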

Finally, who’s going to run this “CVE for Cloud”? OSVDB and subsequent similar projects provide a damning history: there is a lot of vocal support for community-run vulnerability resources, but the actual time required very rarely materializes from those vocal folks. If it is run by one person or a small group of dedicated volunteers, how long before they burn out or commercialize it? There’s a reason most of the public VDBs have gone away, or gone commercial to varying degrees (e.g. OSVDB, Secunia, BID). The only thing worse than such a project not getting off the ground is one that does, becomes an essential part of security programs, and then goes away.

I look forward to seeing the talk by Luttwak and Tamari, if it is made public. I’m curious if they have come up with solutions to the problems that have plagued vulnerability databases for over two decades.

[Update: Josh Bressers points out that the Cloud Security Alliance (CSA) wants to address this problem too.]

July 2021 Reviews (The Tomorrow War, The Watch, Debris S1, Black Widow, Hitman’s Wife’s Bodyguard)

[A summary of my movie and TV reviews from last month, posted to Attrition.org, mixed in with other reviews.]


The Tomorrow War (2021)
Medium: Movie (Amazon)
Rating: 0.5/5 don’t wait until tomorrow to ignore this trash
Reference(s): IMDB Listing || Trailer
This movie was going swimmingly when everything was just regular life in today’s time. The second the “tomorrow” part started it went to shit. From that point on it was a series of overdone clichés from hundreds of Sci-Fi movies before it, mixed in with clichés from hundreds of war movies. I think the blame is on the writers, who wrote a series of scenes that relied on us forgetting about logic, continuity, and time travel. The comic relief? Not funny. The heart-warming moments? Not warming. This movie is trash.


The Watch (2020)
Medium: TV (BBC)
Rating: 4.4/5 worth the .. irreverent policing
Reference(s): IMDB Listing || Trailer
This is not a show about superheroes. In fact, they are barely heroes at all, but they find their way. The series is based on a world and characters created by Terry Pratchett, whom I haven’t read. The series was a little rough to get into not knowing anything about it, and the characters were odd at first; then I appreciated them as over-the-top quirky weirdos who find their way together as they “police” the city. The casting is superior and each actor really embraces their role, bringing something special to their character. Season one offers a fun story that is wrapped up neatly at the end, of course with a teaser that it will return.


Debris S1 (2021)
Medium: TV (Peacock)
Rating: 2.5/5 potentially good show lost in all the pieces
Reference(s): IMDB Listing || Trailer
Two episodes in, the show was interesting. I immediately formed ideas of what I thought would come later in the season, but in one case they did it in the first episode. It set a great pace and suggested a promising show with some creativity. I could suspend disbelief on the whole alien technology thing but couldn’t when it came to the “protocols” used to investigate alien debris. Then, about halfway through the series, it began to spiral out of control with sub-plots and ideas that simply weren’t compelling. Add to that an attempt to bring some kind of sexual tension between the two main characters that just wasn’t there, and it’s easy to understand why the show was canceled after one season.


Black Widow (2021)
Medium: Movie (Disney+)
Rating: 1 / 5 i’d rather watch the peacock spider
Reference(s): IMDB Listing || Trailer
Marvel whipped out its script; the script they use to write movie scripts for MCU films. Formulaic, predictable, and by now boring. Black Widow had potential for a more interesting backstory and tie-in to the current timeline, and Marvel fell way short on that, I think. Comedic character? Check. Over-engineered fight scenes that aren’t clever? Check. Basically, instead of making the character more interesting and getting us more invested, we’re left with a movie that didn’t really matter whether it came out. Personally, I find the Black Widow character entirely more interesting after the first movie she appeared in and a lot less so after this one. Finally, I hope the camera crew didn’t injure their backs bending over to get all the shots highlighting the women’s asses.


Hitman’s Wife’s Bodyguard (2021)
Medium: Movie
Rating: 2.5 / 5 The word ‘fuck’ is 1/3rd of the plot
Reference(s): IMDB Listing || Trailer
If you saw the first movie, you know exactly what to expect from this one. Over-the-top gun fights, Ryan Reynolds being himself as you try to divorce your mind from thinking “Deadpool”, and apparently an attempt to break the record for use of the word “fuck”. As with many sequels, they try to make it more compelling by adding some big-name actors. In this case, Morgan Freeman, who didn’t really add much to the movie. So this is good mindless entertainment while you work on your puzzle or cross-stitch or sort your Legos.