I’m not an artist but I occasionally have ideas. I contracted ‘Quickcartoon‘ on Fiverr to put one of my ideas from early this year into a comic strip. Pretty sure I had this idea years ago and from time to time remember it, often after watching a movie that involves the C.I.A.
Woman: Where do you work? Man: State dept. — Woman: Oooh, you are a CIA agent?! Man: (Damn movies…) — [Man mopping at the U.S. Department of State]
Note: I hold the copyright to this, but it is free to use for non-commercial purposes.
On November 10, TechBeacon published a great article by Rob Lemos titled “More authorities, more CVEs: What it means for app sec teams” in which I was quoted, along with several other people.
Like many articles of this nature, those who provide input will often talk for as long as half an hour and ultimately get a couple of lines quoted. We do it to provide background and context on the topic, as well as to have an open discussion on vulnerability trends. That means there are 'outtake' opinions and facts, as well as our potential reactions to other parts of the article that did not include our input. So this blog just covers some of my random observations to complement the article.
Until 2016, more than 80% of software security issues assigned a CVE identifier belonged to only 10 classes, or weaknesses, as classified by their Common Weakness Enumeration (CWE) category. But in 2019, the top 10 weaknesses only accounted for 59% of reported vulnerabilities.
The Common Weakness Enumeration (CWE) index is interesting to me, and I wonder if it has gotten so big that it degrades its own value. Consider that there are now 891 CWE identifiers as of August 20 in version 4.2 of the framework. Per the article, only 10 of them account for 59% of vulnerabilities, which will no doubt include XSS, SQLi, and CSRF as examples. That makes me question the value of abstracting so much, as it means that hundreds of those CWEs will represent a handful of vulnerabilities at most.
Digging into the 2,298 page PDF documenting version 4.2, you can jump toward the end of the CWE list and see that several have been created but have no "Observed Examples". In fact, searching for that phrase only yields 397 hits. Does that mean that out of 891 CWE IDs representing weaknesses, MITRE has only come up with 397 that match known vulnerabilities? I certainly expect otherwise and hope this is just a documentation shortcoming, as I feel that every CWE ID should be linked to a concrete real-world example.
I’d love to see a simple breakdown of the top 100 CWEs along with how many vulnerabilities are associated with them (via NVD, since MITRE doesn’t actually apply CWE to entries) and what percentage of the overall vulnerabilities that represents. It might be very telling just how useful CWE is and if the project is being pushed too heavily from an academic standpoint. Before you judge that comment, let me know how useful this CWE report from MITRE is, and make sure you load it in Chrome.
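Out of curiosity, such a breakdown could be roughed out from NVD's published JSON feeds. A minimal sketch, assuming the legacy NVD 1.1 feed layout (e.g. a hypothetical `nvdcve-1.1-2019.json`; the exact key paths may differ):

```python
# Rough sketch of the CWE breakdown I'd like to see, assuming the legacy
# NVD 1.1 JSON feed layout; key paths and file names are assumptions.
from collections import Counter

def cwe_breakdown(feed):
    """Tally CWE assignments across a loaded NVD feed dict."""
    counts = Counter()
    total = 0
    for item in feed.get("CVE_Items", []):
        total += 1
        for pt in item["cve"]["problemtype"]["problemtype_data"]:
            for desc in pt["description"]:
                counts[desc["value"]] += 1  # e.g. "CWE-79"
    return counts, total

# Usage (hypothetical file name):
# import json
# counts, total = cwe_breakdown(json.load(open("nvdcve-1.1-2019.json")))
# for cwe, n in counts.most_common(100):
#     print(f"{cwe}: {n} ({n / total:.1%} of entries)")
```

Running that over a few years of feeds would quickly show how many of the 891 CWE IDs ever actually appear against real entries.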
It’s an open question whether the addition of coding repositories will lead to another expansion in the number of vulnerabilities.
I don’t think that is an open question at all. I think the number of vulnerabilities will go up as a result of more coding repositories becoming a CNA. But that isn’t really the issue here. Instead, the real questions should be centered around what quality of CVE entries they will provide, if they will adhere to CNA standards, and if MITRE will enforce CNA policy on them.
Based on history, the short answers to those questions are: quality will go down, no, and no. Until MITRE provides a breakdown of how many IDs were published by each CNA, it is difficult to know for sure. Speaking of, why hasn’t MITRE published such statistics? Rhetorical question, apologies.
Open source vulnerabilities: Do you know what’s in your software?
I, along with many others, can’t stress this enough! Please make sure you understand what third-party software your developers are using. This affects your organizations from both a vulnerability standpoint, as well as legal accountability. Using a third-party library against its license could open you up to some hardships.
The only reason I quoted this section is because I just read an article in the latest Wired that mentions Bootstrap is thought to be used on nearly 20% of all web sites across the Internet. That is incredible.
Patch Tuesday is becoming a bottleneck
There is a lot more that can be said on this topic. It reminds me of a 2015 blog I wrote that actually goes back farther to 2007 where this problem was predicted long before the nightmare IT teams experience once a month. It’s only going to get worse as more vendors jump on this patch schedule and the only ones who will suffer are their paying customers. But hey, let’s let them keep using the term “responsible” disclosure too.
But exploitability alone doesn’t solve the problem—three quarters of the 17,300 vulnerabilities identified had a CVSS exploitability rating of 8.0 or higher.
I’m not sure a more perfect example of why CVSS has become worthless exists. On its own, especially using the Base score only, is it really helpful that so many vulnerabilities are ‘High Risk‘? This is also a good reminder of another blog I have been meaning to write for a while that outlines the distribution of CVSSv2 versus CVSSv3 and how it impacts scoring. With a couple charts you will get a nice visual of just how poorly thought out some of the framework was. Of course, this has already been evaluated by others years back as well.
Finally, because I don’t hold the copyright to the picture used in the TechBeacon article header, I offer my version:
At the end of each year, we see articles covering how many vulnerabilities were disclosed the prior year. Because the articles are written about the same time of year, it gives a fairly good initial comparison from year to year; at least, on the surface. This is the foundation of statements such as “Security vulnerabilities in critical infrastructure up 600%”. My company, Risk Based Security, even includes that general type of observation in our vulnerability reports, with caveats. These sensationalized counts and figures are also often used to make claims that one product is more or less secure than another, when the vulnerability counts cannot typically be used for such claims as they are built on severely incomplete data. In reality, we must remember that such numbers are only a snapshot in time and serve as a quick comparison between years, not much more.
Before we get to the “moving target” topic, we need to cover a bit of background on how all this happens.
First, consider that even with a large team doing vulnerability aggregation, there is a limit to the number of sources that can be monitored. While a team might monitor over 4,000 sources on a daily to weekly basis, we know there are a lot more out there. As new researchers create their blogs, older vendors finally create advisory pages, and new vendors pop up, the new sources are growing at an incredible rate. Additionally, consider that there are over a million results for “site:github.com changelog.md” (not to mention variations like “release_notes” or “changelog.txt” and similar) that could potentially host a trove of vague vulnerability mentions. Even more daunting, back in 2010 GitHub was hosting 1 million repositories and now they are over 100 million. That means there are an overwhelming number of bug trackers, pull requests, and over a billion commits on a single site. Any company that claims to monitor all of that, or “millions” of sources? Do your due diligence and be ready to walk away from them.
Second, due to available resources, vulnerability aggregation teams have to prioritize their activity. This is usually done by vendor, product, and the time frame where higher deployment vendors and products get the most attention. With “time frame”, emphasis is placed on the more recent vulnerabilities as they are most likely to be relevant to organizations. Moving past that, a vulnerability intelligence (VI) provider must be working with clients to learn what resources they use in their organization, as it allows them to prioritize and ensure that they are covering exactly what is deployed first and foremost. After all that, as time permits, they have to come up with new ways to expand source coverage without compromising quality or speed.
With that in mind, consider a vendor that finally publishes a changelog or makes their bug tracker open for everyone. While a VI team should love to go through such resources as far back as possible, they have to limit themselves to vulnerabilities for the current year, and then some amount of time farther back in case clients are using older versions (especially for third-party dependencies). Time permitting, the team should then go back even further to dig for older and more obscure vulnerabilities. While these may or may not immediately benefit clients based on the software they are running, it does contribute directly to the vulnerability history of a given product or vendor. This is invaluable in determining the “cost of ownership” for a product and is vital to making a decision between multiple vendors offering the same type of solutions. With all of that data, it is trivial for a VI provider to provide a quick and easy-to-understand picture of that cost.
Even with very limited time to dig that far back into sources, the impact can still be seen clearly. In January of 2013, Risk Based Security’s VulnDB team had aggregated 8,822 vulnerabilities for the 2012 calendar year, and CVE covered only 4,765 of them (54%). Compared to the prior year (7,911 in 2011), we could say that disclosures increased around 10%. The next question we must ask is if those numbers aged well and hold true today.
Looking at VulnDB now, there were 10,856 vulnerabilities disclosed in 2012. So in the past eight years, the team has managed to find an additional 2,034 vulnerabilities disclosed that year. That means comparing 2012’s updated 10,856 count with the older 7,911 count for 2011, the percent increase was actually closer to 37%. But wait, we can no longer use the 7,911 count for 2011 either, since that too is a moving target! Ultimately, as previously stated, these disclosure numbers are only good as a snapshot in time. Depending on when you perform the count, you may find wildly varying results that could heavily bias any conclusions you try to draw. Do the people writing the statistics you read and cite disclaim that?
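The "moving target" effect is easy to see with a little arithmetic on the counts above. A quick sketch, using the VulnDB figures as cited:

```python
# Year-over-year growth for 2012, computed against the same 2011 baseline,
# comparing the January 2013 snapshot to today's count (figures cited above).
def pct_increase(old, new):
    return (new - old) / old * 100

count_2011 = 7911           # 2011 total as known in January 2013
count_2012_snapshot = 8822  # 2012 total, January 2013 snapshot
count_2012_current = 10856  # 2012 total, eight years of aggregation later

print(f"Snapshot: {pct_increase(count_2011, count_2012_snapshot):.0f}%")  # ~12%
print(f"Current:  {pct_increase(count_2011, count_2012_current):.0f}%")   # ~37%
```

The same year's "growth" triples depending on when you take the measurement, which is the entire point.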
In January of 2013, I started taking counts every month for how many vulnerabilities VulnDB aggregated for the 2012 calendar year. Almost eight years later, and this blog and chart shows just how much that number can change. As with all vulnerability statistics, please make sure you fully understand what they really mean and disclaim as needed!
With this visual we can see that in the years after 2012, vulnerability aggregation continued to rise considerably. Over time that growth tapers off as the team simply didn’t have time to keep digging that far back into changelogs and bug trackers looking for more vulnerabilities that were less beneficial to organizations.
The tl;dr takeaways:
The number of vulnerabilities disclosed in a given year is static, but the VI teams that aggregate the information won’t find them all that year.
Vulnerabilities by year, as reported, will slowly climb over time as additional aggregation work is performed.
While that newly aggregated data may be “old” as far as what software is being used within an organization, it still contributes to better metadata and product/vendor evaluation (“cost of ownership”).
I woke up this morning to the presidential race being called in favor of Joe Biden and started watching live feeds from around the country. After checking the Denver Channel 7 news feed from the helicopter I saw there were quite a few people at the capitol celebrating. Since I am fairly close I decided to walk down and experience it in person. The following pictures were taken on a Samsung Galaxy 8, so pardon the low quality. =)
First, it was a beautiful day that reached 76 degrees with the nice fall air. The news chopper as well as Denver Police Air 1 were circling above, as well as two small drones and a few birds. While watching the feed earlier, it was interesting that a dozen Trump supporters were present, sectioned off with police tape. There were three lines of police separating them from the hundreds of Biden/Harris supporters. By the time I got there the police were already leaving the area despite a handful of Trump supporters remaining.
The police had blocked off around ten blocks to prevent traffic on three sides of the capitol as well as the Civic Center. This is obviously good for safety and worth it in the big picture, even if an inconvenience to local traffic. The empty roads in the middle of a busy city always fascinate me.
A few of the lingering Trump supporters stuck around having a discussion about politics, which was encouraging to see. One supporter stood in the middle of those celebrating with a Trump Train flag as a cape and was joined shortly after by someone wearing all black and camouflage. Biden supporters asked him what his purpose was in demonstrating like that and he would not answer. I later noticed the flag-caped supporter getting into a car with a Purple Heart license plate, supporting my hunch that he was ex-military.
The Biden/Harris supporters that showed up were truly happy and celebrating the morning’s news. At one point a truck playing music for the crowd spurred a hundred people to do the electric slide.
Even with a few lingering Trump supporters, the crowd ignored them and just enjoyed themselves. After several hours at the capitol the group started marching into downtown and I headed home.
Regardless of your politics, I hope you appreciate that our country desperately needs to heal.
The Arbor Day Foundation is a 501c(3) nonprofit organization founded in 1972 that seeks to “inspire people to plant, nurture, and celebrate trees“.
I received a “Colorado Tree Survey” from them today, part of what is a never-ending stream of snail-mail spam that I have written about before. For this envelope, the thing that caught my attention was the weight of the envelope.
To say that it was heavier than other junk mail would be an understatement. In fact, it was well over 5x heavier than most of my junk mail including the ones that send small pads of paper and mailing labels.
That’s right, it came in at over three pounds. Arbor presumably sends these out to at least hundreds of thousands of people in Colorado, so it’s fair to say it took quite a few trees to produce, not to mention the carbon emissions required to produce and distribute them. This seems to be at odds with their general mission, especially in the age of email.
Given that 23.4% of their income is not spent on program expenses it is discouraging to say the least. To put it into better perspective, over six million dollars went to administrative expenses and their CEO Matt Harris made $336,445 in 2019. Considering that $961,603 of their income came from government grants that year it is really frustrating to see nonprofit entities spend money on such salaries and waste money on killing trees to conduct surveys about celebrating trees.
If I print this blog out and mail it back in place of the survey, I wonder if they would appreciate the irony.
After removing many half-eaten Styrofoam peanuts, I found the contents of this box sent by Punkis. Yes, that Punkis; the old, old, decrepit Attrition.org staff member. Bromancing the stone with Modify all those years, he left the traffic of the greater Los Angeles region for the warm wonderful climate of a state within spitting distance of Canada. I invited him to our Discord server, and he sent me this box. I need to invite him to more things, including the 7th annual aluminum foil eating contest hosted by Lyger.
The first thing that stood out to me was that book. What a truly wonderful book! It seems really familiar, like I might have read it in the past? Then I realized… THAT WAS MY BOOK. That asshole had it for 20 years and finally returned it. I knew he reads slow but wow…
A few highlights from the box. First, a bottle from Punkis’ stash of Horny Goat Weed, a “dietary supplement” for “libido support”. Apparently he stocked up for the pandemic and had too much stock. The bottle was “sealed for your protection”, a lesson he never learned even after his 18th kid.
The “Squirrel in Underpants” air freshener is a nice touch. He sent it to me after it hung in his truck for two years never making it smell better. I appreciate the unwrapped gifts like this.
The third item is actually not from his pandemic planning stash. He apparently bought 2,000 bottles of this “I Just Shit in the Woods” hand sanitizer years ago, never knowing it would be a life saver. Like many things, he overestimated how much he’d need by a wee bit.
Finally, he sent me one of his kidney stones but had the courtesy of making a nice little card saying it was this magical fossilized coral that was alive 350 million years ago. Hell, it’s probably a relative one generation removed.
I haven’t seen Punkis in a decade and more, but he certainly hasn’t lost his touch on sending the love. ❤
For those who know me, they are well aware that I have a slight ‘spreadsheet’ problem. More specifically, Google Sheets since they are collaborative and sync across devices. Not the point! I tend to make Sheets for more and more things and track many data points in my life around my health. I also have my fair share of less useful sheets and my friends are quick to volunteer me to make a new one.
In other cases, sometimes a Sheet full of data is the only way to solve a friendly disagreement. A while back someone asked me what the “oddest” thing I have tracked was. I think the title of this blog post spoiled the answer to that.
What might be more amusing is the conversation that sparked the creation of that sheet, and what disagreement it resolved (the ratio of nudity between men/women in Altered Carbon). ‘J’ is Jericho and ‘F’ is a friend in the dialogue below, which has been edited slightly for readability:
F: I finished altered carbon last night, binged in one sitting
J: I started it again. There was floating penis in the first 10 seconds of the show?
F: I saw two [penises]. Total. I stand by my complaint.
J: You are judging based on the # of distinct penises, not the frequency of seeing said penises. How many distinct sets of boobs were there?
F: 87654. And frontal fur patch at least 3-4. Total imbalance. The P:B/B (Penis to Bush/Boob) ratio sucked.
J: Need a spreadsheet with incidents by episode, duration, comments like “at a distance, floating in water”, for objective analysis.
F: I use an obviously visible for 2+ seconds rule
J: I wouldn’t. There are times where it could flash for a second but be VERY prominent and count.
F: Obscure millisecond drive by penis hardly counts when they do close ups on boobs. And half the Clothes were stupidly see through. We need to discuss metrics. Asses should be weighed at half value. They are gender neutral.
J: Once you have the base data, you can figure out your formulas.
F: I stand firmly behind the P:B/B metric.
J: But need the base data first!
F: I think we have advanced to next level data nerding 🙂
[..]
J: Curious how you missed the blatant penis in S01E02
F: That was the one in the crypt right?
J: Penis 1 was in the Bancroft vault, penis 2 was in the Jack Me Off sex club.
F: Oh I forgot it? Or missed it
J: See! Boob bias!
F: Or it was unremarkable
J: It was prominent, gratuitous cock.
F: Come ON! I’ll go inspect.
J: Time stamp is in sheet.
F: A time stamp, in the sheet. Anything less would be not up to your standards
Ultimately, the data proves that she was right, the P:B/B ratio was heavily skewed toward female nudity, and I was right that her perception of the ratio was slightly off.
It’s been a while since I wrote up a ‘Box of Shit‘ but felt it was time after receiving one from Jamie (@w1bble). He sent it from that far away place trying to find an exit or something; U.K. politics are so weird, not like the U.S. They talk funny too.
Speaking of weird, this guy and his box!
Wyld Stallyns indeed. Also “wyld” is the apparent support for Ticketmaster and hey wait, they have InfoSec style stickers?! Apparently so, and several of them. Although, I think the “MEH” one would go over well to many of their captive-audience customers who frown upon certain fees!
The box also included another sheet of InfoSec stickers along with a sheet of white stickers, shown here in great contrast for clarity. Some day I may learn photoshop and how to actually provide contrast; until then you can download the image and figure it out yourself. After moving the stickers out of the way, the box began to reveal itself!
And it revealed… more stickers. But since I am somewhat of a sticker peddler that tries to put them in the hands of people who appreciate them, I dig it. Certainly some good ones in this batch, many I had not seen before.
After throwing stickers in the air like I found new wealth I dug into the box more. The w1bble already knows I dig those “love hearts” candy after my last trip to his fruity country. Think along the lines of sweet tarts… that fizz a bit. Also included, and part of just about every box in history, is vendor swag. A suave monkey, puzzle I will send off to have done for me, “dodgy bluetooth trackers” as he calls them, mints for me to freshen up, and a smaller nicer box with more shit in it. The two circuit board thingies were “beer tokens @ 44CON, they got soldered onsite and were more interesting ..“.
Inside the little box is the true treasure! A sterling silver squirrel keychain that clearly represents the Eurasian red squirrel that are rumored to exist in the U.K. On my last visit there I saw none and was told I would have to travel far outside London to find them. Of course, I knew that to be a trap and would not venture out into the wild lands where weird Brits might abduct me and lecture me about how good their fish and chips are (Narrator: They aren’t good).
That “Scottish Consulate’s Unicorn Dust Hangover Remedy” will sure come in handy. I am so thankful for Scotland’s plentiful unicorns that provide this unlike those fantasy U.K. red squirrels you only hear about in legends.
Thanks w1bble for this very generous and well-crafted Box of Shit! I rate it 8/10. You lost a point for not including a red squirrel. And the last point is only given for the perfect box, which is like a U.K. red squirrel and doesn’t exist.
Indulge me briefly please. I have a simple proposal for you that is good for everyone and I have taken the liberty to “show my work”. Read on!
You serve really tasty sub sandwiches and they have become my favorite, displacing Subway and a local sandwich shop. I get the regular #5 Super Sub but do it my way, not yours. That includes adding deli mustard and mayo. Using your nutrition calculator, I see that the sandwich jumps from 830 to 1070 calories when I add mayonnaise, meaning the mayo alone is 240 calories. In reality it is probably a lot more than that given how liberally it is used.
According to Wikipedia you have 1,592 locations. From one of your news articles we know that one location can sell over 1,000 sandwiches in a day. While I know not every store sells that many a day, and I know that not every customer gets mayo, indulge me in a hypothetical.
If a store sells half that amount (500) and half of those orders include mayo (250), that means you sell 398,000 mayo sandwiches a day (1,592 * 250). We know from the calculator that the mayo is 240 calories, so multiply by that number of sandwiches and we get 95,520,000 calories in mayo alone each day. That is pretty incredible.
That made me wonder why you don’t offer an alternative to the full mayo, either light or fat free. Let’s take a look at a leading brand and compare the calories in 1 tablespoon. Kraft Real Mayo is 90 calories, Kraft Light Mayo is 35 calories, and Kraft Fat Free is only 10 calories. Dropping from full to light cuts out 62% (55 calories) and dropping to fat free cuts out 89% (80 calories). Since we only know your sandwich gains 240 calories, and not the brand or how much per tablespoon, we have to use those percentages to figure out the cut from 240.
Now we know that cutting 62% from the 240 leaves 91 calories on a sandwich with light mayo, and the bigger 89% cut leaves only 26 calories on the sandwich. Jumping back to that 95,520,000 calories in mayo a day, we have a good contrast. Switching to light mayo would cut that down to 36,297,600 calories a day and switching to fat free would cut that down to 10,507,200 calories a day.
While you probably aren’t open 365 days a year, or, if so, not full hours, forgive my shortcut of just calling it a year when we look at the bigger picture. As is, you are potentially dishing out 34,864,800,000 calories every year. That is almost 35 billion.
If Jersey Mike’s offered low-fat mayo it would potentially drop to 13,248,624,000 calories which saves over 21 billion a year. Moving to fat-free would cut it down to a measly 3,835,128,000 calories saving over 31 billion a year. The kicker? If you charged $0.25 for upgrading to less calories that would be $99,500 a day or $36,317,500 a year in profit, assuming you can get fat free for the same price as regular.
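For anyone who wants to check my math, the whole back-of-the-envelope calculation fits in a few lines. A sketch using the rounded 62% / 89% cuts from above:

```python
# Back-of-the-envelope mayo math from above, using the rounded 62% / 89% cuts.
LOCATIONS = 1592
MAYO_SUBS_PER_STORE = 250                       # 500 subs/day, half with mayo
SUBS_PER_DAY = LOCATIONS * MAYO_SUBS_PER_STORE  # 398,000

FULL_CAL = 240
LIGHT_CAL = FULL_CAL * (1 - 0.62)               # ~91 calories
FAT_FREE_CAL = FULL_CAL * (1 - 0.89)            # ~26 calories

def yearly_calories(cal_per_sub):
    return SUBS_PER_DAY * cal_per_sub * 365

print(f"Full:     {yearly_calories(FULL_CAL):,.0f}")       # 34,864,800,000
print(f"Light:    {yearly_calories(LIGHT_CAL):,.0f}")      # 13,248,624,000
print(f"Fat free: {yearly_calories(FAT_FREE_CAL):,.0f}")   # 3,835,128,000
print(f"Upgrade fees: ${0.25 * SUBS_PER_DAY * 365:,.0f}")  # $36,317,500
```

Every figure in the pitch falls out of those half-dozen assumptions, so feel free to swap in your own numbers.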
Finally, for the critics who might cry that fat-free mayo doesn’t taste as good as the full, you are absolutely right! But on a sub sandwich with a dozen ingredients and the mayonnaise mixed in with deli mustard? I doubt one in a thousand people could taste the difference. For those who disagree? Fine, see above. They could offer both.
Jersey Mike’s could offer low-fat mayo and remove 21.6 billion calories from customer diets or offer fat-free mayo and remove 31 billion calories from customer diets each year while making over 36 million dollars in the process. How about it Mike… this is a win-win.
I just read “The History of Common Vulnerabilities and Exposures (CVE)” by Ary Widdes from Tripwire and found it to be a great summary of the 20+ years of the program. I say that as an outspoken CVE and MITRE critic even! I do have a couple of objections however, with the conclusion, and then a fun bounty!
Widdes concludes the history by saying:
A lot has changed in the 21 years since the CVE List’s inception – both in terms of technology and vulnerabilities. Without the CVE List, it’s possible that security professionals would still be using multiple tools from multiple vendors just to ensure complete coverage. It’s also possible that someone else would have created a service similar to the CVE List. Either way, from idea to whitepaper to database, the CVE List has become a core part of vulnerability and patch management.
There’s a lot to unpack here so I will take it one sentence at a time, starting with the second.
“Without the CVE List, it’s possible that security professionals would still be using multiple tools from multiple vendors just to ensure complete coverage.”
No, there is no “possible” here. That is a simple reality with an important caveat. The reality is that teams of all types still use multiple tools from multiple vendors to do their job. The caveat, and more to the point of that sentence, is that CVE doesn’t offer “complete coverage” and many of the vulnerability scanners only cover a third of the issues in CVE for various reasons. Even using a combination of firewalls, vulnerability scanners, intrusion detection/prevention, audits, and a slew of other tools, organizations are likely seeing half of what CVE has to offer at best. Widdes’ conclusion here gives undue credit to CVE and the state of vulnerability coverage it offers.
It’s also possible that someone else would have created a service similar to the CVE List.
This is where the vulnerability historian in me wants to rage a bit. This statement is unequivocally false for the simple reason that vulnerability databases existed before CVE, both free (e.g. X-Force) and commercial (e.g. RSI), in 1997 alone. The first vulnerability database was created in 1973, specific to Multics, but also when there weren’t that many other systems to catalog bugs or vulnerabilities in. In 1983 we saw the Mt Xinu Bug List and in 1985 Matt Bishop’s List of UNIX Holes, both of which covered more than a single platform. If we consider a vulnerability database implemented via product, we had ISS, SATAN, Ballista, and Nessus between 1995 and the creation of CVE in 1999. Many hackers-turned-security-professionals may fondly remember Fyodor’s Exploit World (1996 – 1998) from both aspects of their lives. Those same folks probably also remember Packet Storm (1998), which is still running today.
Either way, from idea to whitepaper to database, the CVE List has become a core part of vulnerability and patch management.
This, unfortunately, is true. I say unfortunately because of my long-standing criticisms of CVE over the past decade, but won’t go into here.
If there is anyone at MITRE open to outright bribery, including all-you-can-eat sushi dinners, I will pay a bounty to get my hands on that list of 8,400 submissions! While I know there are likely a lot of duplicates, the vulnerability historian in me would love to audit that data to see if MITRE decided to skip any that would be considered vulnerabilities by today’s standards, or where someone else back then had more knowledge of a vulnerability than was submitted. That data is over twenty years old and was solicited, processed, and partially published with U.S. taxpayer-funded money. There’s no reason not to make it public. =)
 The Repent Security Inc. (RSI) database existed in 1997 but may not have been offered as a commercial product until 1998.