The people on the mountain are mostly good about social distancing on the lift rides (two people on a four-person chair), but not so much in line. You get a stark reminder of this when it is 10 degrees and you can see everyone’s breath.
While not much of the terrain is open, the runs that are available are in good shape.
At the bottom of the mountain there is more care about distancing with extra ski racks, someone to help guide people, and cordoned off areas near the ticket windows.
At the lifts, the operators are doing nothing to enforce distancing. They are also ignoring people not following the mask mandate; people wearing a gaiter over the mouth only, for example, go unchallenged. The lift crews are also largely ignoring people in line and talking to each other. It’s rare to get a friendly greeting.
With only three runs open on upper Peak 9, they are more crowded than prior years. There has been little to no enforcement of slow zones or the no jumping section of Cashier run.
It isn’t just the limited runs; it is a lot more crowded this week than in prior years. The new reservation system sounds like a good idea for limiting the number of people on the mountain, but apparently Vail Resorts isn’t publishing just what that limit is. So instead of being on a run with a couple of other people during the first week of December, I find myself on a run with 25 or more. This does not bode well for the rest of the season. As it stands, it seems like the reservation system is purely for show. The picture above shows eight people on the last leg of Cashier, with another 20 behind me. But, at least one bluebird day this week!
The revamped web site is riddled with bugs, so many and so severe as to hinder functionality. Want to get a buddy ticket? Good luck, since you have to sign in to view prices, despite already being signed in. EpicMix statistics? Gone completely. EpicMix app? While there appear to be some good new features, they come at the cost of removing some statistics too. Want to chat with someone about it? Enjoy the 463-minute wait.
Many years ago, at early DEF CONs before 2000, I became a critic of Hacker Jeopardy after some of the questions had wrong answers. The host had written the questions and answers but got some wrong. The next year I offered to sanity check them before the game and did so, finding a few errors shortly before the game started. I think this happened a year after that but my memory is fuzzy as to how many years I helped. At some point I offered to help write questions well in advance of the next DEF CON and began scribbling ideas in a notebook. I found that notebook recently!
Below are the topics and questions I proposed, in order of difficulty. I have not included a few questions that would have been acceptable to most attendees back then, but shouldn’t have been in hindsight. One of the questions revolved around ‘open secrets’ of two individuals in the scene, one being John Draper and the other continuing to be an open secret to this day.
After a recent DEF CON where every Hacker Jeopardy team missed which port Telnet is on, I wonder how teams would do with these. Some may be subjective, but they had more widely-known backstories at the time.
This charlatan is best known for her delusions of grandeur, Erik Bloodaxe reading her mail, the FBI harassing her, and more. [Carolyn Meinel]
PGP is a lost concept to this charlatan. [Winn Schwartau]
This well-trained monkey/charlatan hacked a bank once. [Ira Winkler]
This charlatan can help you learn the SECRETS of hacking a public library or BBS. [Knightmare]
This charlatan is master of using ‘grep’ for his IDS at NASA! [Dan Ridge or ‘B-grep’ or ‘wizkid’]
Which DEF CON
This first answer on my list strikes me as wrong. My own memory today says only ~200 showed up to DEF CON 2, but now I wonder if it was really ~400, which would explain an answer of ~300 showing to DEF CON 1. But conventional wisdom and our poor memories often cite the first one as only having ~100 there. Anyone have a more definitive memory?
Only 300 people showed to this DEF CON 
Which two hackers were thrown out of the Aladdin at which DEF CON? [Pete Shipley / Voyager @ 5]
The Sahara was serving minors Heineken beer at which DEF CON? 
We’re here to help…
We are hackers who will be glad to narc you for teenpron.gif! [EHAP or Ethical Hackers Against Porn]
We are hackers who will be glad to get you legal counsel like the other 0 we have helped. [HDF or Hackers Defense Foundation]
Spending a quarter million to prove what everyone knew by building “DeepCrack” is the only thing we’ve done in years. [EFF]
We’ll be glad to repost your advisories six months after you do! [CERT]
Pay us thousands, and our 17 year veterans will babble … err teach you to hack Japanese banks. [se7en]
What does RSA stand for? [Rivest, Shamir, Adleman]
Win95 SSH supports what flavors of encryption? [Idea, 3DES, Blowfish]
Name one ITAR loophole [printing or missile]
What crypto engine is unix crypt() based on? [Enigma]
Everything under the Sun
Sun was derived from what flavor of Unix, while Solaris hails from which? [BSD vs SysV]
What is the default debugger installed with Solaris? [adb]
How many returns does it take to overflow AND exploit a vulnerable binary on the SPARC architecture? 
(I only had notes for 4 questions, nothing written out)
The idea for this was Unix commands that were also commonly joked about euphemisms for sexual activity. There were many, many more back in the day but I only ended up with three questions in my notes for some reason.
Foreplay as ‘stinky pinky’ [finger]
This function might lead to child processes [fork()]
These two commands make 69 [head + tail]
Ultimately, I don’t believe these were ever used. As I recall, the host and question writer for Hacker Jeopardy at the time said ‘yes’ to collaborating on questions in advance of the next convention, but did not follow through at all, so the idea died off.
So this idea for a comic goes way back to 2000/2001, and fortunately I had a friend who sketched it out. The file date is from August 2001, but I don’t know for sure that is when it was created. Idea by me, art by H. Poteat.
The rise of social media has been interesting, to say the least. Many on Twitter have found it gives them a type of power, as they can voice their complaints directly to a company that has wronged them. Everything from bad customer service, bad prices, and minor inconveniences to perceived slights that likely never happened as described.
This ability has given rise to social media teams at these companies, often extensions of the customer support teams that traditionally handle phone- and email-based contact. Since the complaint isn’t a direct communication between the offended and the offender, companies have figured out that it behooves them to control the narrative as much as possible. Since the original Tweet(s) have outlined a bad experience, they must try to head off any additional commentary, be it from the offended, from friends of theirs sharing their own negative experiences, or even from random users who see a RT or search for those sharing negative stories.
Without fail, the offending company will reply and immediately ask you to take it to direct messages (DMs) to control the narrative. They show they are quick and eager to resolve your issue! After that they only need to provide a base level of customer service and hope that satisfies you. It is interesting to note that they will do this by asking you to send your name and specific information to assist you, even if your complaint isn’t specific to you. Don’t let them do this.
If your complaint is generic and not specific to your account or personal details, don’t go to DMs with them. Have the conversation publicly so everyone can see it and those searching down the road can find it. If you do take it to DMs and they don’t resolve it? Take them to task, again. Keep doing it until they make things right or ignore you like OptumRX did with me. Apparently you can only call them out for dreadful customer service and a web portal written by seven year-olds so many times before they give up trying to get you to go to DMs and away from the public eye.
I’m not an artist but I occasionally have ideas. I contracted ‘Quickcartoon‘ on Fiverr to put one of my ideas from early this year into a comic strip. Pretty sure I had this idea years ago and from time to time remember it, often after watching a movie that involves the C.I.A.
Woman: Where do you work? Man: State dept. — Woman: Oooh, you are a CIA agent?! Man: (Damn movies…) — [Man mopping at the U.S. Department of State]
Note: This is copyright to me but free to use for non-commercial purposes.
On November 10, TechBeacon published a great article by Rob Lemos titled “More authorities, more CVEs: What it means for app sec teams” in which I was quoted, along with several other people.
Like many articles of this nature, those who provide input will often talk for as long as half an hour and ultimately get a couple of lines quoted. We do it to provide background and context on the topic, as well as to have an open discussion on vulnerability trends. That means there are ‘outtake’ opinions and facts, as well as our potential reactions to other parts of the article that did not include our input. So this blog just covers some of my random observations to complement the article.
Until 2016, more than 80% of software security issues assigned a CVE identifier belonged to only 10 classes, or weaknesses, as classified by their Common Weakness Enumeration (CWE) category. But in 2019, the top 10 weaknesses only accounted for 59% of reported vulnerabilities.
The Common Weakness Enumeration (CWE) index is interesting to me, and I wonder if it has gotten so big that it degrades its own value. Consider that there are now 891 CWE identifiers as of August 20 in version 4.2 of the framework. Per the article, only 10 of them account for 59% of vulnerabilities, which will no doubt include XSS, SQLi, and CSRF as examples. That makes me question the value of abstracting so much, as it means that hundreds of those CWEs will represent a handful of vulnerabilities at most.
Digging into the 2,298-page PDF documenting version 4.2, you can jump toward the end of the CWE list and see that several have been created but have no “Observed Examples”. In fact, searching for that phrase only yields 397 hits. Does that mean that out of 891 CWE IDs representing weaknesses, MITRE has only come up with 397 that match known vulnerabilities? I certainly expect otherwise and hope this is just a documentation shortcoming, as I feel that every CWE ID should be linked to a concrete real-world example.
I’d love to see a simple breakdown of the top 100 CWEs along with how many vulnerabilities are associated with them (via NVD, since MITRE doesn’t actually apply CWE to entries) and what percentage of the overall vulnerabilities that represents. It might be very telling just how useful CWE is and whether the project is being pushed too heavily from an academic standpoint. Before you judge that comment, let me know how useful this CWE report from MITRE is, and make sure you load it in Chrome.
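The breakdown I’m describing is a small aggregation job once you have per-CVE CWE assignments in hand. Here is a minimal sketch in Python, using made-up sample data rather than a real pull from the NVD feeds:

```python
from collections import Counter

# Illustrative stand-in for CWE assignments extracted from NVD entries;
# a real run would parse the NVD data feeds to build this mapping.
cve_to_cwes = {
    "CVE-2019-0001": ["CWE-79"],
    "CVE-2019-0002": ["CWE-89"],
    "CVE-2019-0003": ["CWE-79"],
    "CVE-2019-0004": ["CWE-352"],
}

# Tally how many vulnerabilities map to each CWE
counts = Counter(cwe for cwes in cve_to_cwes.values() for cwe in cwes)
total = sum(counts.values())

# Top-N CWEs with each one's share of all assignments
for cwe, n in counts.most_common(10):
    print(f"{cwe}: {n} ({n / total:.0%})")
```

Run against the full NVD corpus, the same dozen lines would show exactly how top-heavy the CWE distribution is, and how many of the 891 IDs barely appear at all.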
It’s an open question whether the addition of coding repositories will lead to another expansion in the number of vulnerabilities.
I don’t think that is an open question at all. I think the number of vulnerabilities will go up as a result of more coding repositories becoming a CNA. But that isn’t really the issue here. Instead, the real questions should be centered around what quality of CVE entries they will provide, if they will adhere to CNA standards, and if MITRE will enforce CNA policy on them.
Based on history, the short answers to those questions are: quality will go down, no, and no. Until MITRE provides a breakdown of how many IDs were published by each CNA, it is difficult to know. Speaking of which, why hasn’t MITRE published such statistics? Rhetorical question, apologies.
Open source vulnerabilities: Do you know what’s in your software?
I, along with many others, can’t stress this enough! Please make sure you understand what third-party software your developers are using. This affects your organization from both a vulnerability standpoint and a legal-accountability standpoint. Using a third-party library against its license could open you up to some hardships.
The only reason I quoted this section is because I just read an article in the latest Wired that mentions Bootstrap is thought to be used on nearly 20% of all web sites across the Internet. That is incredible.
Patch Tuesday is becoming a bottleneck
There is a lot more that can be said on this topic. It reminds me of a 2015 blog I wrote that actually goes back to 2007, when this problem was predicted long before it became the monthly nightmare IT teams now experience. It’s only going to get worse as more vendors jump on this patch schedule, and the only ones who will suffer are their paying customers. But hey, let’s let them keep using the term “responsible” disclosure too.
But exploitability alone doesn’t solve the problem—three quarters of the 17,300 vulnerabilities identified ranked CVSS exploitability rating of 8.0 or higher.
I’m not sure a more perfect example exists of why CVSS has become worthless. On its own, and especially using the Base score only, is it really helpful that so many vulnerabilities are ‘High Risk‘? This is also a good reminder of another blog I have been meaning to write for a while that outlines the score distribution of CVSSv2 versus CVSSv3 and how it impacts scoring. With a couple of charts you will get a nice visual of just how poorly thought out some of the framework was. Of course, this has already been evaluated by others years back as well.
Finally, because I don’t hold the copyright to the picture used in the TechBeacon article header, I offer my version:
At the end of each year, we see articles covering how many vulnerabilities were disclosed the prior year. Because the articles are written about the same time of year, it gives a fairly good initial comparison from year to year; at least, on the surface. This is the foundation of statements such as “Security vulnerabilities in critical infrastructure up 600%”. My company, Risk Based Security, even includes that general type of observation in our vulnerability reports, with caveats. These sensationalized counts and figures are also often used to make claims that one product is more or less secure than another, when the vulnerability counts cannot typically be used for such claims as they are built on severely incomplete data. In reality, we must remember that such numbers are only a snapshot in time and serve as a quick comparison between years, not much more.
Before we get to the “moving target” topic, we need to cover a bit of background on how all this happens.
First, consider that even with a large team doing vulnerability aggregation, there is a limit to the number of sources that can be monitored. While a team might monitor over 4,000 sources on a daily to weekly basis, we know there are a lot more out there. As new researchers create their blogs, older vendors finally create advisory pages, and new vendors pop up, the number of sources grows at an incredible rate. Additionally, consider that there are over a million results for “site:github.com changelog.md” (not to mention variations like “release_notes” or “changelog.txt” and similar) that could potentially host a trove of vague vulnerability mentions. Even more daunting, back in 2010 GitHub was hosting 1 million repositories and now they are over 100 million. That means there are an overwhelming number of bug trackers, pull requests, and over a billion commits on a single site. Any company that claims to monitor all of that, or “millions” of sources? Do your due diligence and be ready to walk away from them.
Second, due to available resources, vulnerability aggregation teams have to prioritize their activity. This is usually done by vendor, product, and the time frame where higher deployment vendors and products get the most attention. With “time frame”, emphasis is placed on the more recent vulnerabilities as they are most likely to be relevant to organizations. Moving past that, a vulnerability intelligence (VI) provider must be working with clients to learn what resources they use in their organization, as it allows them to prioritize and ensure that they are covering exactly what is deployed first and foremost. After all that, as time permits, they have to come up with new ways to expand source coverage without compromising quality or speed.
With that in mind, consider a vendor that finally publishes a changelog or makes their bug tracker open for everyone. While a VI team should love to go through such resources as far back as possible, they have to limit themselves to vulnerabilities for the current year, and then some amount of time farther back in case clients are using older versions (especially for third-party dependencies). Time permitting, the team should then go back even further to dig for older and more obscure vulnerabilities. While these may or may not immediately benefit clients based on the software they are running, it does contribute directly to the vulnerability history of a given product or vendor. This is invaluable in determining the “cost of ownership” for a product and is vital to making a decision between multiple vendors offering the same type of solutions. With all of that data, it is trivial for a VI provider to provide a quick and easy-to-understand picture of that cost.
Even with very limited time to dig that far back into sources, the impact can still be seen clearly. In January of 2013, Risk Based Security’s VulnDB team had aggregated 8,822 vulnerabilities for the 2012 calendar year, and CVE covered only 4,765 of them (54%). Compared to the prior year (7,911 in 2011), we could say that disclosures increased around 10%. The next question we must ask is if those numbers aged well and hold true today.
Looking at VulnDB now, there were 10,856 vulnerabilities disclosed in 2012. So in the past eight years, the team has managed to find an additional 2,034 vulnerabilities disclosed that year. That means that comparing 2012’s updated 10,856 count with the older 7,911 count for 2011, the percent increase was closer to 37%. But wait, we can no longer use the 7,911 count for 2011 either, since that too is a moving target! Ultimately, as previously stated, these disclosure numbers are only good as a snapshot in time. Depending on when you perform the count, you may find wildly varying results that could heavily bias any conclusions you try to draw. Do the people writing the statistics you read and cite disclaim that?
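For anyone who wants to check the arithmetic, the baseline matters: a percent increase is measured against the earlier year’s count. A quick sketch (the helper function is mine, not anything from VulnDB):

```python
def pct_increase(old: int, new: int) -> float:
    """Percent increase from an earlier count to a later one."""
    return (new - old) / old * 100

# 2012 vs 2011, using the counts as they stood in January 2013
print(round(pct_increase(7911, 8822), 1))   # 11.5

# 2012 vs 2011 again, after 2012's count grew to 10,856
print(round(pct_increase(7911, 10856), 1))  # 37.2
```

And of course, once the 2011 count moves too, both of those numbers move with it.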
In January of 2013, I started taking counts every month of how many vulnerabilities VulnDB aggregated for the 2012 calendar year. Almost eight years later, this blog and chart show just how much that number can change. As with all vulnerability statistics, please make sure you fully understand what they really mean and disclaim as needed!
With this visual we can see that in the years after 2012, vulnerability aggregation continued to rise considerably. Over time that growth tapers off as the team simply didn’t have time to keep digging that far back into changelogs and bug trackers looking for more vulnerabilities that were less beneficial to organizations.
The tl;dr takeaways:
The number of vulnerabilities disclosed in a given year is static, but the VI teams that aggregate the information won’t find them all that year.
Vulnerabilities by year, as reported, will slowly climb over time as additional aggregation work is performed.
While that newly aggregated data may be “old” as far as what software is being used within an organization, it still contributes to better metadata and product/vendor evaluation (“cost of ownership”).
I woke up this morning to the presidential race being called in favor of Joe Biden and started watching live feeds from around the country. After checking the Denver Channel 7 news feed from the helicopter I saw there were quite a few people at the capitol celebrating. Since I am fairly close I decided to walk down and experience it in person. The following pictures were taken on a Samsung Galaxy 8, so pardon the low quality. =)
First, it was a beautiful day, reaching 76 degrees with the nice fall air. The news chopper as well as Denver Police Air 1 were circling above, along with two small drones and a few birds. While watching the feed earlier, it was interesting that a dozen Trump supporters were present, sectioned off with police tape. There were three lines of police separating them from the hundreds of Biden/Harris supporters. By the time I got there the police were already leaving the area despite a handful of Trump supporters remaining.
The police had blocked off around ten blocks to prevent traffic on three sides of the capitol as well as the Civic Center. This is obviously good for safety and worth it in the big picture, even if an inconvenience to local traffic. The empty roads in the middle of a busy city always fascinate me.
A few of the lingering Trump supporters stuck around having a discussion about politics, which was encouraging to see. One supporter stood in the middle of those celebrating with a Trump Train flag as a cape and was joined shortly after by someone wearing all black and camouflage. Biden supporters asked him what his purpose was in demonstrating like that and he would not answer. I later noticed the flag-caped supporter getting into a car with a Purple Heart license plate, confirming my hunch he was ex-military.
The Biden/Harris supporters that showed up were truly happy and celebrating the morning’s news. At one point a truck playing music for the crowd spurred a hundred people to do the electric slide.
Even with a few lingering Trump supporters, the crowd ignored them and just enjoyed themselves. After several hours at the capitol the group started marching into downtown and I headed home.
Regardless of your politics, I hope you appreciate that our country desperately needs to heal.
The Arbor Day Foundation is a 501(c)(3) nonprofit organization founded in 1972 that seeks to “inspire people to plant, nurture, and celebrate trees“.
I received a “Colorado Tree Survey” from them today, part of what is a never-ending stream of snail-mail spam that I have written about before. For this envelope, the thing that caught my attention was the weight of the envelope.
To say that it was heavier than other junk mail would be an understatement. In fact, it was well over 5x heavier than most of my junk mail including the ones that send small pads of paper and mailing labels.
That’s right, it came in at over three pounds. Presuming Arbor sends these out to at least hundreds of thousands of people in Colorado, it’s fair to say that took quite a few trees to produce, not to mention the carbon emissions required to produce and distribute them. This seems to be at odds with their general mission, especially in the age of email.
Given that 23.4% of their income is not spent on program expenses, it is discouraging to say the least. To put it into better perspective, over six million dollars went to administrative expenses, and their CEO Matt Harris made $336,445 in 2019. Considering that $961,603 of their income came from government grants that year, it is really frustrating to see nonprofit entities spend money on such salaries and waste money killing trees to conduct surveys about celebrating trees.
If I print this blog out and mail it back in place of the survey, I wonder if they would appreciate the irony.
After removing many half-eaten Styrofoam peanuts, I found the contents of this box sent by Punkis. Yes, that Punkis; the old, old, decrepit Attrition.org staff member. Bromancing the stone with Modify all those years, he left the traffic of the greater Los Angeles region for the warm, wonderful climate of a state within spitting distance of Canada. I invited him to our Discord server; he sent me this box. I need to invite him to more things, including the 7th annual aluminum foil eating contest hosted by Lyger.
The first thing that stood out to me was that book. What a truly wonderful book! It seems really familiar, like I might have read it in the past? Then I realized… THAT WAS MY BOOK. That asshole had it for 20 years and finally returned it. I knew he reads slow but wow…
A few highlights from the box. First, a bottle from Punkis’ stash of Horny Goat Weed, a “dietary supplement” for “libido support“. Apparently he stocked up for the pandemic and had too much stock. The bottle is “sealed for your protection”, a lesson he never learned even after his 18th kid.
The “Squirrel in Underpants” air freshener is a nice touch. He sent it to me after it hung in his truck for two years never making it smell better. I appreciate the unwrapped gifts like this.
The third item is actually not from his pandemic planning stash. He apparently bought 2,000 bottles of this “I Just Shit in the Woods” hand sanitizer years ago, never knowing it would be a life saver. Like many things, he overestimated how much he’d need by a wee bit.
Finally, he sent me one of his kidney stones but had the courtesy of making a nice little card saying it was this magical fossilized coral that was alive 350 million years ago. Hell, it’s probably a relative one generation removed.
I haven’t seen Punkis in a decade and more, but he certainly hasn’t lost his touch on sending the love. ❤