Comic: “Damn movies…”

I’m not an artist, but I occasionally have ideas. Earlier this year I contracted ‘Quickcartoon’ on Fiverr to turn one of them into a comic strip. I’m pretty sure I first had this idea years ago, and I am reminded of it from time to time, often after watching a movie that involves the C.I.A.

Woman: Where do you work?
Man: State dept.

Woman: Oooh, you are a CIA agent?!
Man: (Damn movies…)

[Man mopping at the U.S. Department of State]

Note: I hold the copyright to this, but it is free to use for non-commercial purposes.

More authorities, more CVEs; Oh, and more commentary.

On November 10, TechBeacon published a great article by Rob Lemos titled “More authorities, more CVEs: What it means for app sec teams” in which I was quoted, along with several other people.

Like many articles of this nature, those who provide input will often talk for as long as half an hour and ultimately get a couple of lines quoted. We do it to provide background and context on the topic, as well as to have an open discussion on vulnerability trends. That means there are ‘outtake’ opinions and facts, as well as potential reactions to other parts of the article that did not include our input. So this blog just covers some of my random observations to complement the article.

Until 2016, more than 80% of software security issues assigned a CVE identifier belonged to only 10 classes, or weaknesses, as classified by their Common Weakness Enumeration (CWE) category. But in 2019, the top 10 weaknesses only accounted for 59% of reported vulnerabilities.

The Common Weakness Enumeration (CWE) index is interesting to me, and I wonder if it has grown so big that it degrades its own value. Consider that there are now 891 CWE identifiers as of the August 20 release of version 4.2 of the framework. Per the article, only 10 of them account for 59% of vulnerabilities, which will no doubt include XSS, SQLi, and CSRF as examples. That makes me question the value of abstracting so much, as it means that hundreds of those CWEs will represent a handful of vulnerabilities at most.

Digging into the 2,298 page PDF documenting version 4.2, you can jump toward the end of the CWE list and see that several entries have been created but have no “Observed Examples”. In fact, searching for that phrase only yields 397 hits. Does that mean that out of 891 CWE IDs representing weaknesses, MITRE has only come up with 397 that match known vulnerabilities? I certainly expect otherwise and hope this is just a documentation shortcoming, as I feel that every CWE ID should be linked to a concrete real-world example.
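
If you would rather not trust a text search of the PDF, the same count can be pulled from MITRE’s XML download. Here is a minimal sketch in Python, assuming the cwec_v4.2.xml file from cwe.mitre.org and assuming the schema still uses Weakness and Observed_Examples element names (verify them against the actual schema before relying on this):

    import xml.etree.ElementTree as ET

    # Rough count of CWE weakness entries that carry at least one
    # "Observed Example". Assumes the cwec_v4.2.xml download from
    # cwe.mitre.org; element names are from memory, so verify them.
    tree = ET.parse("cwec_v4.2.xml")

    def local(tag):
        # Strip the XML namespace so we can match on the bare element name.
        return tag.rsplit("}", 1)[-1]

    total = with_examples = 0
    for elem in tree.getroot().iter():
        if local(elem.tag) != "Weakness":
            continue
        total += 1
        if any(local(child.tag) == "Observed_Examples" for child in elem.iter()):
            with_examples += 1

    print(f"{total} weaknesses, {with_examples} with Observed Examples")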

I’d love to see a simple breakdown of the top 100 CWEs along with how many vulnerabilities are associated with each (via NVD, since MITRE doesn’t actually apply CWE to entries) and what percentage of the overall vulnerabilities that represents. It might be very telling as to just how useful CWE is and whether the project is being pushed too heavily from an academic standpoint. Before you judge that comment, let me know how useful this CWE report from MITRE is, and make sure you load it in Chrome.
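
In the meantime, anyone can approximate that breakdown from NVD’s yearly JSON feeds. A rough sketch, assuming the JSON 1.1 feed layout and a downloaded nvdcve-1.1-2019.json file (both assumptions on my part, not something from the article):

    import json
    from collections import Counter

    # Tally how many NVD entries map to each CWE for one year, and what
    # share of the year's entries that represents. Assumes the JSON 1.1
    # feed layout (e.g. nvdcve-1.1-2019.json from nvd.nist.gov).
    with open("nvdcve-1.1-2019.json") as f:
        items = json.load(f)["CVE_Items"]

    counts = Counter()
    for item in items:
        for ptype in item["cve"]["problemtype"]["problemtype_data"]:
            for desc in ptype["description"]:
                counts[desc["value"]] += 1  # e.g. "CWE-79" or "NVD-CWE-noinfo"

    total = len(items)
    for cwe, n in counts.most_common(10):
        print(f"{cwe}: {n} entries ({n / total:.1%} of {total})")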


It’s an open question whether the addition of coding repositories will lead to another expansion in the number of vulnerabilities.

I don’t think that is an open question at all. I think the number of vulnerabilities will go up as a result of more coding repositories becoming CNAs. But that isn’t really the issue here. Instead, the real questions should center around what quality of CVE entries they will provide, whether they will adhere to CNA standards, and whether MITRE will enforce CNA policy on them.

Based on history, the short answers to those questions are: quality will go down, no, and no. Until MITRE provides a breakdown of how many IDs were published by each CNA, it is difficult to know for sure. Speaking of, why hasn’t MITRE published such statistics? Rhetorical question, apologies.
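
You can approximate such a breakdown yourself, though. A hedged sketch, assuming NVD’s JSON 1.1 feed layout and using its ASSIGNER field as a stand-in for the publishing CNA (an imperfect proxy at best):

    import json
    from collections import Counter

    # Rough per-assigner count using the ASSIGNER field in NVD's JSON 1.1
    # feeds. This only approximates "IDs published by each CNA"; the field
    # is an email address and not every CNA shows up cleanly.
    with open("nvdcve-1.1-2020.json") as f:
        items = json.load(f)["CVE_Items"]

    assigners = Counter(
        item["cve"]["CVE_data_meta"].get("ASSIGNER", "unknown") for item in items
    )
    for assigner, n in assigners.most_common(15):
        print(f"{assigner}: {n}")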


Open source vulnerabilities: Do you know what’s in your software?

I, along with many others, can’t stress this enough! Please make sure you understand what third-party software your developers are using. This affects your organization from both a vulnerability standpoint and a legal accountability standpoint. Using a third-party library against its license could open you up to some hardships.

The only reason I quoted this section is because I just read an article in the latest Wired that mentions Bootstrap is thought to be used on nearly 20% of all web sites across the Internet. That is incredible.


Patch Tuesday is becoming a bottleneck

There is a lot more that can be said on this topic. It reminds me of a 2015 blog I wrote, which actually reaches back to 2007, when this problem was predicted long before it became the nightmare IT teams now experience once a month. It’s only going to get worse as more vendors jump on this patch schedule, and the only ones who will suffer are their paying customers. But hey, let’s let them keep using the term “responsible” disclosure too.


But exploitability alone doesn’t solve the problem—three quarters of the 17,300 vulnerabilities identified ranked CVSS exploitability rating of 8.0 or higher.

I’m not sure a more perfect example of why CVSS has become worthless exists. On its own, and especially using only the Base score, is it really helpful when so many vulnerabilities rate as ‘High Risk’? This is also a good reminder of another blog I have been meaning to write for a while that outlines the distribution of CVSSv2 versus CVSSv3 scores and how it impacts scoring. With a couple of charts you will get a nice visual of just how poorly thought out some of the framework was. Of course, this has already been evaluated by others years back as well.
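
Until that blog materializes, here is a rough sketch of the kind of comparison I mean, again assuming NVD’s JSON 1.1 feed layout; it simply bins the v2 and v3 base scores for one year so you can see how differently they cluster:

    import json
    from collections import Counter

    # Compare how CVSSv2 and CVSSv3 base scores distribute for one year of
    # NVD data. Not every entry carries both metrics, so the two columns
    # will not sum to the same total.
    with open("nvdcve-1.1-2019.json") as f:
        items = json.load(f)["CVE_Items"]

    v2, v3 = Counter(), Counter()
    for item in items:
        impact = item.get("impact", {})
        if "baseMetricV2" in impact:
            v2[int(impact["baseMetricV2"]["cvssV2"]["baseScore"])] += 1
        if "baseMetricV3" in impact:
            v3[int(impact["baseMetricV3"]["cvssV3"]["baseScore"])] += 1

    for score in range(11):
        print(f"{score:>2}: v2={v2[score]:>5}  v3={v3[score]:>5}")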


Finally, because I don’t hold the copyright to the picture used in the TechBeacon article header, I offer my version:

Picture of me by D2d in November, 2018 at Tomayo, in Denver, CO.

Vulnerability Counts Are a Moving Target

At the end of each year, we see articles covering how many vulnerabilities were disclosed the prior year. Because the articles are written about the same time of year, it gives a fairly good initial comparison from year to year; at least, on the surface. This is the foundation of statements such as “Security vulnerabilities in critical infrastructure up 600%”. My company, Risk Based Security, even includes that general type of observation in our vulnerability reports, with caveats. These sensationalized counts and figures are also often used to make claims that one product is more or less secure than another, when the vulnerability counts cannot typically be used for such claims as they are built on severely incomplete data. In reality, we must remember that such numbers are only a snapshot in time and serve as a quick comparison between years, not much more.

Before we get to the “moving target” topic, we need to cover a bit of background on how all this happens.

First, consider that even with a large team doing vulnerability aggregation, there is a limit to the number of sources that can be monitored. While a team might monitor over 4,000 sources on a daily to weekly basis, we know there are a lot more out there. As new researchers create their blogs, older vendors finally create advisory pages, and new vendors pop up, the number of new sources grows at an incredible rate. Additionally, consider that there are over a million results for “site:github.com changelog.md” (not to mention variations like “release_notes” or “changelog.txt” and similar) that could potentially host a trove of vague vulnerability mentions. Even more daunting, back in 2010 GitHub was hosting 1 million repositories and now they are over 100 million. That means there are an overwhelming number of bug trackers, pull requests, and over a billion commits on a single site. Any company that claims to monitor all of that, or “millions” of sources? Do your due diligence and be ready to walk away from them.

Second, due to available resources, vulnerability aggregation teams have to prioritize their activity. This is usually done by vendor, product, and time frame, where more widely deployed vendors and products get the most attention. With “time frame”, emphasis is placed on the more recent vulnerabilities as they are most likely to be relevant to organizations. Moving past that, a vulnerability intelligence (VI) provider must be working with clients to learn what resources they use in their organization, as it allows them to prioritize and ensure that they are covering exactly what is deployed first and foremost. After all that, as time permits, they have to come up with new ways to expand source coverage without compromising quality or speed.

With that in mind, consider a vendor that finally publishes a changelog or makes their bug tracker open to everyone. While a VI team would love to go through such resources as far back as possible, they have to limit themselves to vulnerabilities for the current year, and then some amount of time farther back in case clients are using older versions (especially for third-party dependencies). Time permitting, the team can then go back even further to dig for older and more obscure vulnerabilities. While these may or may not immediately benefit clients based on the software they are running, they do contribute directly to the vulnerability history of a given product or vendor. This is invaluable in determining the “cost of ownership” for a product and is vital to making a decision between multiple vendors offering the same type of solution. With all of that data, it is trivial for a VI provider to give a quick and easy-to-understand picture of that cost.

Even with very limited time to dig that far back into sources, the impact can still be seen clearly. In January of 2013, Risk Based Security’s VulnDB team had aggregated 8,822 vulnerabilities for the 2012 calendar year, while CVE covered only 4,765 of them (54%). Compared to the prior year (7,911 in 2011), we could say that disclosures increased around 12%. The next question we must ask is whether those numbers aged well and hold true today.

Looking at VulnDB now, there were 10,856 vulnerabilities disclosed in 2012. So in the past eight years, the team has managed to find an additional 2,034 vulnerabilities disclosed that year. That means comparing 2012’s updated count of 10,856 with the older 7,911 count for 2011, the increase was closer to 37%. But wait, we can no longer use the 7,911 count for 2011 either, since that too is a moving target! Ultimately, as previously stated, these disclosure numbers are only good as a snapshot in time. Depending on when you perform the count, you may find wildly varying results that could heavily bias any conclusions you try to make. Do the people writing the statistics you read and cite disclaim that?
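
Spelled out with the numbers above, the arithmetic looks like this:

    # The "moving target" math, using the counts from the paragraph above.
    count_2011 = 7_911
    jan_2013_count_2012 = 8_822    # 2012 total as known in January 2013
    current_count_2012 = 10_856    # 2012 total as known today

    initial_growth = (jan_2013_count_2012 - count_2011) / count_2011  # ~11.5%
    revised_growth = (current_count_2012 - count_2011) / count_2011   # ~37.2%
    print(f"{initial_growth:.1%} then, {revised_growth:.1%} now")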

In January of 2013, I started taking counts every month of how many vulnerabilities VulnDB had aggregated for the 2012 calendar year. Almost eight years later, this blog and chart show just how much that number can change. As with all vulnerability statistics, please make sure you fully understand what they really mean and disclaim as needed!

Viewing 2012 disclosures; chart shows 7k / 11k as min / max values.

With this visual we can see that in the years after 2012, vulnerability aggregation continued to rise considerably. Over time that growth tapers off as the team simply didn’t have time to keep digging that far back into changelogs and bug trackers looking for more vulnerabilities that were less beneficial to organizations.

The tl;dr takeaways:

  • The number of vulnerabilities disclosed in a given year is static, but the VI teams that aggregate the information won’t find them all that year.
  • Vulnerabilities by year, as reported, will slowly climb over time as additional aggregation work is performed.
  • While that newly aggregated data may be “old” as far as what software is being used within an organization, it still contributes to better metadata and product/vendor evaluation (“cost of ownership”).

Photos: Presidential Race Called / Denver Celebration

I woke up this morning to the presidential race being called in favor of Joe Biden and started watching live feeds from around the country. After checking the Denver Channel 7 news feed from the helicopter I saw there were quite a few people at the capitol celebrating. Since I am fairly close I decided to walk down and experience it in person. The following pictures were taken on a Samsung Galaxy 8, so pardon the low quality. =)

First, it was a beautiful day that reached 76 degrees with nice fall air. The news chopper as well as Denver Police Air 1 were circling above, along with two small drones and a few birds. While watching the feed earlier, it was interesting to see that a dozen Trump supporters were present, sectioned off with police tape. There were three lines of police separating them from the hundreds of Biden/Harris supporters. By the time I got there, the police were already leaving the area despite a handful of Trump supporters remaining.

The police had blocked off around ten blocks to prevent traffic on three sides of the capitol as well as the Civic Center. This is obviously good for safety and worth it in the big picture, even if an inconvenience to local traffic. The empty roads in the middle of a busy city always fascinate me.

A few of the lingering Trump supporters stuck around having a discussion about politics, which was encouraging to see. One supporter stood in the middle of those celebrating with a Trump Train flag as a cape and was joined shortly after by someone wearing all black and camouflage. Biden supporters asked him what his purpose was in demonstrating like that, and he would not answer. I later noticed the flag-caped supporter getting into a car with a Purple Heart license plate, confirming my hunch that he was ex-military.

The Biden/Harris supporters that showed up were truly happy and celebrating the morning’s news. At one point a truck playing music for the crowd spurred a hundred people to do the electric slide.

Even with a few lingering Trump supporters, the crowd ignored them and just enjoyed themselves. After several hours at the capitol the group started marching into downtown and I headed home.

Regardless of your politics, I hope you appreciate that our country desperately needs to heal.

How Many Trees Are You Celebrating @arborday?

The Arbor Day Foundation is a 501(c)(3) nonprofit organization founded in 1972 that seeks to “inspire people to plant, nurture, and celebrate trees“.

I received a “Colorado Tree Survey” from them today, part of a never-ending stream of snail-mail spam that I have written about before. This time, the thing that caught my attention was the weight of the envelope.

To say that it was heavier than other junk mail would be an understatement. In fact, it was well over 5x heavier than most of my junk mail including the ones that send small pads of paper and mailing labels.

That’s right, it came in at over three pounds. Arbor presumably sends these out to at least hundreds of thousands of people in Colorado, so it’s fair to say that took quite a few trees to produce, not to mention the carbon emissions required to produce and distribute them. This seems at odds with their general mission, especially in the age of email.

Given that 23.4% of their income is not spent on program expenses, it is discouraging to say the least. To put it into better perspective, over six million dollars went to administrative expenses, and their CEO Matt Harris made $336,445 in 2019. Considering that $961,603 of their income came from government grants that year, it is really frustrating to see nonprofit entities spend money on such salaries and waste money killing trees to conduct surveys about celebrating trees.

If I print this blog out and mail it back in place of the survey, I wonder if they would appreciate the irony.