Log4Shell: Redefining Painful Disclosure

Log4Shell is yet another example of why we simply don’t get security right, and it strongly suggests there is little hope for change. There are plenty of blogs and articles that do a great analysis of the vulnerability from the exploitation and impact angle. There are far fewer that examine why the bug exists in the first place, and why it took so long to notice. I’ll briefly touch on that and then look at Log4Shell from a bigger picture, in the context of vendor responses and how they show all the chinks in the armor; enough to demonstrate that we don’t really have a set of armor at all.

If you are familiar with Log4Shell, you may want to skip the next two paragraphs, but part of the abridged summary spells out some CVE assignment confusion and explains how this blog will refer to the various issues.

What is Log4Shell? (Abridged)

On November 30, 2021, a vulnerability was made public in the Apache Log4j library that affected version 2.x but not 1.x. This is largely because, despite the same name, Log4j 2.x represents a complete rewrite of the Log4j 1.x codebase. For ease, I will refer to the library as Log4j in this blog but specifically mean the 2.x version.

The gist of the vulnerability is that when the Log4j library receives user-supplied input, it not only logs the information but can also perform lookup actions on it. This can result in the execution of arbitrary code. Since logging is frequently implemented in a way that receives user-supplied input, this can represent a significant issue and allow for a trivial compromise of the system. In large enterprises, these logs can be automatically forwarded and consolidated on logging hosts that make administrative work easier. So that simple remote code execution might impact a different system than the one the input was originally sent to.
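To make the lookup mechanics concrete, here is an illustrative sketch (my own, not from any advisory) of a naive scanner for lookup-style payloads in user-supplied strings. The loose pattern is deliberate: attackers obfuscated the trigger with nested lookups such as `${${lower:j}ndi:…}`, which is one reason filtering input was never a reliable substitute for patching.

```python
import re

# Loose, illustrative pattern: "${" followed by j, n, d, i (possibly separated
# by nested-lookup noise) and then a colon. Real-world obfuscation can still
# evade this; it is a demonstration, not a defense.
JNDI_PATTERN = re.compile(
    r"\$\{.{0,30}?j.{0,30}?n.{0,30}?d.{0,30}?i.{0,30}?:",
    re.IGNORECASE,
)

def looks_like_lookup_payload(value: str) -> bool:
    """Return True if a user-supplied string resembles a Log4j lookup trigger."""
    return bool(JNDI_PATTERN.search(value))
```

A plain `${jndi:` substring check would miss the nested-lookup variants above, which is exactly how early web application firewall rules were bypassed.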

The vulnerability was originally assigned CVE-2021-44228 but the initial fix was found to be incomplete. This led to a second CVE assignment, CVE-2021-45046, that immediately started being confused with an entirely separate issue affecting Log4j 1.x only, and only if the attacker had privileged access to begin with (CVE-2021-4104). Red Hat assigned CVE-2021-4125 to represent the original vulnerability but as applied to their specific products, which is in violation of CVE Numbering Authority (CNA) rules. To add more confusion, the second attempt to fix the original vulnerability resulted in a distinct vulnerability being discovered that resulted in a denial of service and was assigned CVE-2021-45105. Finally, yet another flaw was discovered but it too required privileged local access, despite being called “remote” by many, and it was assigned CVE-2021-44832.

This blog will generally discuss the original vulnerability (2021-44228) unless stated otherwise, and will refer to it by its nickname “Log4Shell”. Note that while the vulnerability was technically made public on November 30 and the vendor patched it on December 6, it was not widely covered until December 10th.

Log4j History (Abridged)

First, let’s look at a slightly different history of high-risk log4j vulnerabilities, including this one. This will become important in subsequent sections. Note that the severity scores represented are CVSSv2. 

There are three more Log4j vulnerabilities, all without a CVE ID, but scored below 6.9 CVSSv2. On top of CVSS scores, it is also important to note that the current Log4j vulnerability (2021-44228) has been scored CVSSv2 10.0 / CVSSv3 10.0 by NVD. This, along with many other scores, is not correct, as they fall into traps that all stem from not following the CVSS specifications. As VulnDB notes, based on RBS Chief Research Officer Carsten Eiram’s analysis:

The CVSS score for this entry has been updated to reflect adherence to the CVSS scoring guidelines. On the surface this appears to be a CVSSv2 10.0 / CVSSv3 10.0 issue, as reflected in NVD and briefly by RBS. In some scenarios this is an accurate score, in others it is not. However, scoring must be done based on Log4j and the context it runs in, not speculation on possible use-case scenarios. The base score reflects that, and CVSS environmental scores allow an organization to re-cast the score based on their specific environment. So if e.g. a web server is passing user-controlled content to the logs which are then forwarded to a logging host, that represents a scope change and would get scored MS:C. Since Log4j is a library, it would have to be configured / implemented in such a way as to allow user-controlled strings which warrants Access Complexity Medium (AC:M) for CVSSv2, leading to a 9.3 score. In this case, AC:M is used for just that and does not represent user interaction required, which is often what AC:M designates. As always for libraries, they can only be scored for that *one* product, the library itself. All the products bundling the library will be affected in a myriad of ways that result in different scores, and organizations will have to tweak those scores depending on their individual configurations. A re-based score may be derived for different business units or applications even, depending on the organization’s security workflow.

Carsten Eiram, Chief Research Officer, Risk Based Security

This same scoring issue can be seen with NVD and CVE-2019-17571 (v2 7.5 / v3 9.8) as well as NVD’s score for CVE-2017-5645 (v2 7.5 / v3 9.8). The discrepancy between the v2 and v3 scores immediately stands out, since their v2 score assumes the library runs with lower privileges (C:P/I:P/A:P) while their v3 score reflects that it may run with higher privileges (C:C/I:C/A:C). NVD’s 10.0 / 10.0 scoring of the Log4j issue does not follow the specifications, and the v3 score is scored (scope changed) with specific scenarios in mind that are not default.
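The 9.3 versus 10.0 distinction falls directly out of the CVSSv2 base equations. A minimal sketch using the spec’s published metric weights, assuming AV:N/Au:N and complete impact (C:C/I:C/A:C), shows the effect of Access Complexity Low versus Medium:

```python
# Illustrative CVSSv2 base-score calculation using the spec's published
# metric weights, showing why AC:M yields 9.3 where AC:L yields 10.0.
def cvss2_base(av: float, ac: float, au: float, c: float, i: float, a: float) -> float:
    impact = 10.41 * (1 - (1 - c) * (1 - i) * (1 - a))
    exploitability = 20 * av * ac * au
    f_impact = 0.0 if impact == 0 else 1.176
    return round((0.6 * impact + 0.4 * exploitability - 1.5) * f_impact, 1)

# Spec weights: Access Vector Network, Authentication None, Complete impact,
# Access Complexity Low / Medium.
AV_N, AU_N, COMPLETE = 1.0, 0.704, 0.66
AC_L, AC_M = 0.71, 0.61
```

With AV:N/AC:L/Au:N/C:C/I:C/A:C this evaluates to 10.0; swapping in AC:M gives 9.3. Only the Access Complexity weight changes; everything else in the vector is identical.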

Regardless of these scores, 2021-44228, 2019-17571, and 2017-5645 should all be treated as equally serious until an organization determines how each impacts them. In reality? That isn’t the case, and it doesn’t make sense, other than the fact that news, researchers, and vendors all cried out that this one is “critical” while the previous vulnerabilities were largely ignored. This is very similar to the Struts-Shock vulnerability, which made the news primarily because it was used to breach Equifax, while other Apache Struts vulnerabilities didn’t enjoy any attention. Perhaps if the other 21 remote code execution vulnerabilities in Struts had names, they too would be better known?

Vendor Disclosure Disappointment

This Log4j vulnerability stands apart from prior issues in the library, even ones with the same severity. We see vendors publish advisories calling it a “global event” and “industry-wide“, using language that seems to convey that being vulnerable “couldn’t be helped, it affects everyone!” Sure? While that may be true, it raises the question of why we see them publish an advisory for Log4Shell but not prior vulnerabilities. In other cases, a vendor may definitively (and ironically) say that it “affects all service providers using the Log4J Java library”, which is untrue, while also saying it “affects certain applications and services” of theirs. This is in addition to a staggering number of vendors publishing advisories saying they are not impacted by the vulnerability, something we have never seen at this volume. Of course, I have to mention the vendors that are “quickly working to assess any potential impact to our clients and the company” … three weeks later.

[1/8/2022 Next two paragraphs updated, thanks to L.P. for the pointer on origin of “LogJam” confusion.]

Despite not being the researchers to discover this vulnerability, LunaSec was one of the first to write an easy-to-digest blog explaining the issue. This is valuable to many administrators that need to quickly understand the risk. However, LunaSec decided to name the vulnerability despite not finding it. They chose “logjam” for the name, oblivious to the fact that Logjam was the name given to the “Diffie–Hellman Key Exchange DHE_EXPORT Cipher Handling Downgrade MitM Weakness” vulnerability back in 2015. That vulnerability didn’t fly under the radar by any means; consider that VulnDB has over 1,000 references for it, meaning many hundreds of vendors wrote advisories and patched it in their software. LunaSec apparently had no knowledge of this vulnerability and didn’t do a simple search for the name before using it. This is amateur hour, extremely counter-productive, and introduces serious confusion around the vulnerability.

Vendors that don’t publish advisories frequently are also more prone to make simple mistakes that introduce confusion. Take Pandora FMS or Kiteworks for example, who conflated Log4Shell with Logjam, no doubt because they saw LunaSec call it that. The two issues are extremely different: one is in Apache’s Log4J library, and the other is in the Diffie–Hellman Key Exchange and requires a privileged network position to intercept traffic. But this mistake isn’t just the domain of those new to security advisories. Micro Focus, which has decades of experience sharing vulnerability information (typically in changelogs instead of formal advisories), made the same mistake. You might chalk this up to a simple error, but remember that Micro Focus wrote at least five advisories referencing LogJam. [1] [2] [3] [4] [5]

For those using CVE IDs to track this vulnerability it can get confusing. Log4Shell has three associated CVE IDs: the original (2021-44228), the incomplete fix (2021-45046), and a duplicate assignment from a CNA (2021-4125). Then we have waves of incorrect CVE IDs being used, typically transposed number typos. This may cause someone to think there is yet another Log4J vulnerability though, or cause them to miss an advisory as they are searching for the correct CVEs. We can see some examples from researchers (2021-4438), bloggers and universities (2021-44248), companies (2021-44248), security news (2021-44882), vendors (2021-45056), security companies (2021-44226) and likely more variations. There are more incorrect CVE IDs attributed to this vulnerability than correct ones.
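Catching these typos programmatically is not hard. A hypothetical sketch (the helper names, ID list, and threshold are mine, not from any standard tooling) flags IDs within one edit, including an adjacent transposition, of a known Log4Shell CVE:

```python
# Hypothetical helper: flag CVE IDs that are one edit away from a known
# Log4Shell ID, i.e. probable typos rather than distinct vulnerabilities.
KNOWN_LOG4SHELL_IDS = {"CVE-2021-44228", "CVE-2021-45046", "CVE-2021-4125"}

def osa_distance(a: str, b: str) -> int:
    """Optimal string alignment distance: edits plus adjacent transpositions."""
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[-1][-1]

def probable_typo(cve_id: str) -> bool:
    """True for IDs suspiciously close to, but not matching, a known ID."""
    if cve_id in KNOWN_LOG4SHELL_IDS:
        return False
    return any(osa_distance(cve_id, known) <= 1 for known in KNOWN_LOG4SHELL_IDS)
```

A threshold of one edit keeps genuinely distinct IDs like CVE-2021-45105 from being flagged, though it will also miss some of the sloppier variations above.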

In many cases, it is the first time we see an advisory from hundreds of companies, despite them “caring deeply” and “taking your security seriously“. Yes, they responded quickly to this vulnerability, but why not the rest? Why aren’t there advisories on any of the vulnerabilities specific to your code? Why no advisories about other third-party library vulnerabilities? It only proves they didn’t care about all the prior vulnerabilities, which simply didn’t receive the same level of public attention. They don’t care enough to let you know of the risk of dozens or hundreds of other vulnerabilities, but somehow they “deeply care“.

Dozens of vendors are publishing advisories that are only available to paying customers. When a vendor publishes behind a paywall it makes it difficult for many teams to access that information. While Alice and Bob from the security team may have access, Christine, Dave, Eve, and Frank may not. In other cases the IT/networking team may have access to the account in order to download new firmware, but the security team does not. Those restricted advisories also stay out of reach of third parties that aggregate that information, many of which are retained to provide vulnerability intelligence to the mutual customer. These companies don’t want to check hundreds of vendors for security updates every day; they want someone else to. 

This also applies to vendors that say they will contact customers directly who require updates, or that publish an anemic advisory saying you need to contact customer service or “contact our support department for product-specific questions“. In many cases those notifications are not going to the appropriate people. Some accounts were set up by different divisions or by managers who handle vendor relationships. In other cases there is employee turnover, so vendors end up contacting people who have left the company.

There are vendors that understand the issue and respond accordingly, but then fail to communicate it well to customers. For example, L-Soft’s advisory says that one version of their software uses Log4j 2.15.0 “and is therefore not vulnerable to CVE-2021-44228 (Log4Shell) but may still be vulnerable to CVE-2021-45046 and CVE-2021-45105“. This is incorrect, as 2.15.0 is vulnerable to Log4Shell in many instances. If it doesn’t affect them due to configuration, then the advisory should specifically say that.

How and where vendors choose to disclose can also prove problematic. Modern large organizations may literally use products and services from thousands of vendors, so it is easy to understand why they don’t want to monitor them all for security updates. In addition to monitoring vendor security advisories, knowledge bases, wikis, support forums, and email distributions, some vendors opt to disclose elsewhere. In the case of Log4Shell, we’re seeing that happen on Twitter and Reddit.

In other cases we are seeing “advisories” being shared on GitHub in an image (PNG), which doesn’t let you cut & paste the text. When the article isn’t in your native language, it’s basically worthless since you can’t trivially translate it. It reminds me of vendors that publish changelogs or advisories in PDF format, but have disabled the copy & paste functionality in it. 

The researcher SwitHack has done an incredible job collecting vendor advisories related to Log4Shell, but in doing so, has created a similar situation where disclosures happen in comments to their Gist. This collection of references also highlights the problem of “linkrot”: when a link goes 404 and the content has moved or been removed completely. It’s been less than one month and half a dozen references are already 404. Some vendors publish their advisory in a place that almost guarantees it will move or vanish in time.

For those who rely on software versions to track vulnerabilities, that is typically the best approach. In some cases, vendors work against your efforts, as when they say “The following product IS affected: AppDev Pack versions 1.0.5 – 1.0.11” and then “As of this writing the latest release of AppDev Pack, version 1.0.11, updates the referenced version of log4j2 to 2.15.0.” In this case, not only do you have to download the same version of the software as of a given date, you are still not protected weeks later, because it only updates Log4J to 2.15.0 instead of 2.17.0, which resolves all of the known issues.
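For teams tracking by version, the mainline check itself is trivial; a minimal sketch, assuming plain `major.minor.patch` strings and treating 2.17.0 (the release the text above treats as resolving the known issues) as the threshold:

```python
# Minimal sketch: is a bundled mainline Log4j 2.x version at or past 2.17.0?
# This deliberately ignores the 2.12.x / 2.3.x backport branches Apache
# published for older Java runtimes, which received their own fixed releases;
# a real check needs a per-branch table.
def is_patched(version: str) -> bool:
    return tuple(int(part) for part in version.split(".")) >= (2, 17, 0)
```

The tuple comparison is what makes “2.15.0 is not enough” mechanical rather than a judgment call, which is exactly the determination the AppDev Pack advisory forces every reader to make by hand.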

There is another type of vendor that likes conveying information in tables. For some data that is great, but for vulnerability information it can often be more difficult to process. Solace decided to share their matrices as images that are basically unreadable as-is; you have to copy the image URL and load it in a new tab to read them, since they aren’t even clickable links. HCL Technologies gives us one table to decipher, a word I use with reason. This handy table will let you figure out if you are impacted:

When it comes to clarity in vulnerability disclosure it can be tricky in a case like Log4Shell, given it has three CVE IDs and additional vulnerabilities were discovered within days of the original disclosure. However, when a vendor references three CVE IDs for Log4J vulnerabilities and then decides to also include a 2018 vulnerability in Log4Net, which is not affected by the Log4Shell issue, it can introduce confusion. Specific to Log4J, some of the initial confusion came in the form of version 1.x versus 2.x. While they share the same name, Log4J, version 2.x was a complete rewrite of 1.x and doesn’t share much beyond the name. Given that several vulnerabilities do not affect 1.x while one vulnerability affects 1.x and not 2.x, it’s messy. So a vendor responding to the Log4Shell vulnerability (in 2.x) proudly saying something like “We’re not vulnerable, we use 1.x!” and then not mentioning if they are impacted by CVE-2021-4104 (in 1.x) is frustrating.

If all of this seems discouraging and you better appreciate some frustrations around vulnerability intelligence, consider this: not even the vendor, Apache, gets it right. Three weeks after Log4Shell was disclosed, the Apache blog aggregating information about their own products is missing information (Daffodil, HBase, ManifoldCF, etc.) and incorrect about some (Geode, Jena, etc.). If the vendor can’t aggregate accurate and timely information on their own product portfolio, what hope do the rest of us have?

Keeping Up

As you can see from the ‘Disclosure Disappointments’ above, trying to stay current with the Log4Shell vulnerability is a nightmare. VulnDB is currently cataloging over 1,100 vendor advisories in addition to vendor-specific solution references, exploits, and a wide variety of additional references. In less than 30 days, VulnDB 275958 (Log4Shell) has more references than Spectre, Meltdown, SWEET32, SLOTH, FREAK, Shellshock, Heartbleed, and Logjam. The only vulnerability with more references right now is POODLE, but that is likely to change in the coming weeks as more vendors finally publish their advisories.

Every day the VulnDB team continues to scour vendor web sites, Twitter, GitHub, and other sources looking for more information. When a new advisory is found we have to determine if any information needs to be cataloged and update the entry accordingly. Consider adding one reference or one vendor/product/version combination or updating one field as a single data point, and imagine what that looks like. As of this blog, that is 9,749 points of data which is staggering for one vulnerability, especially for one published less than a month ago. Here’s what that looks like:

You can see when the news sank in and vendors really started responding to it. For the next half year or more, that entry will keep getting updated. For those who think that managing a vulnerability database is “just collecting public information“, you are right. But summiting Mount Everest is also just “walking up that hill“.

What can you do? The customers have the power here. If your vendor releases a poorly worded advisory, or this is their very first advisory, take them to task. You must hold them accountable and be ready to switch vendors, which is painful, I know, so that your money can teach them a lesson. Demand not only better security practices from your vendors, but better communication about vulnerabilities that impact you. Unfortunately, that threat of switching vendors is typically an empty one, and vendors know it.

Based on everything above, enumerating many of the failures around disclosing a single vulnerability, it’s hard to imagine our industry has much hope. How can organizations really expect to maintain security in the face of vendors that say they care, but act otherwise? Vendors and the information security sector are expected to provide a level of defense that will repel attackers. Unfortunately, we’re our own worst enemy and the staggering number of data breaches are evidence of that. But hey, at least we put on a good show.

The Duality of Expertise: Microsoft

[This was originally published on the OSVDB blog.]

The notion of expertise in any field is fascinating. It crosses so many aspects of humans and our perception. For example, two people in the same discipline, each with the highest honors academia can grant, can still have very different expertise within that field. Society and science have advanced so that we don’t just have “science” experts; medical doctors can specialize to extreme degrees. Within Information Security, we see the same: there are experts in penetration testing, malware analysis, reverse engineering, security administration, and more.

In the context of a software company, especially one that does not specifically specialize in security (and, it is trivial to argue, was late to the security game), you cannot shoehorn them into any specific discipline or expertise. We can all agree there is an incredible level of expertise across a variety of disciplines within Microsoft. So when Microsoft releases yet another report that speaks to vulnerability disclosures, the only thing I can think of is duality; the report puts forth some expertise they are uniquely qualified to speak on, mixed with a topic that pre-dates Microsoft and that they aren’t as qualified to speak on.

A Tweet from Carsten Eiram pointed me to the latest report, and brought up the obvious fact that it seemed to be way off when it comes to vulnerability disclosures.


The “MS SIR” he refers to is the Microsoft Security Intelligence Report, Volume 21 which covers “January through June, 2016” (direct PDF download).

It’s always amusing to me that you get legal disclaimers in such analysis papers before you even get a summary of the paper:


Basically, the takeaway is that they don’t stand behind their data. Honestly, the fact that I am blogging about this suggests that is a good move and that they should not. The next fascinating thing is that it was written by 33 authors and 14 contributors. Since you don’t know which of them worked on the vulnerability section, it means we get to hold them all accountable. Either they can break it down by author and section, or they all signed off on the entire paper. Sorry, the joys of academic papers!

After the legal disclaimers, you start to get the analysis disclaimers, which are more telling to me. Companies love to blindly throw legal disclaimers on anything and everything (e.g. I bet you still get legal disclaimers in the footer of emails you receive, even though they carry no merit). When a company starts to explain its results via disclaimers while not actually including the methodology, anyone reading the report should be concerned. From their “About this report” section:

This volume of the Microsoft Security Intelligence Report focuses on the first and second quarters of 2016, with trend data for the last several quarters presented on a quarterly basis. Because vulnerability disclosures can be highly inconsistent from quarter to quarter and often occur disproportionately at certain times of the year, statistics about vulnerability disclosures are presented on a half-yearly basis.

This is a fairly specific statement that speaks as if it is fact that vulnerability trends vary by quarter (they do!), but it ignores the fact that they can also vary by half-year or year. We have seen that impact not only a year, but the comparison to every year prior (e.g. Will Dormann in 2014 and his Tapioca project). Choosing a ‘quarter’ or ‘half-year’ does not demonstrate experience in aggregating vulnerabilities; both are rather arbitrary and short time-frames. Focusing on a quarter can easily ignore some of the biases that impact vulnerability aggregation, as outlined in Steve Christey’s and my talk titled “Buying Into the Bias: Why Vulnerability Statistics Suck” (PPT).

Jumping down to the “Ten years of exploits: A long-term study of exploitation of vulnerabilities in Microsoft software” section, Microsoft states:

However, despite the increasing number of disclosures, the number of remote code execution (RCE) and elevation of privilege (EOP) vulnerabilities in Microsoft software has declined

Doing a title search of Risk Based Security’s VulnDB for “microsoft windows local privilege escalation” tells a potentially different story. While 2015 is technically lower than 2011 and 2013, it is significantly higher than 2012 and 2014. I can’t say for certain why these dips occur, but they are very interesting.


Thousands of vulnerabilities are publicly disclosed across the industry every year. The 4,512 vulnerabilities disclosed during the second half of 2014 (2H14) is the largest number of vulnerabilities disclosed in any half-year period since the Common Vulnerabilities and Exposures system was launched in 1999.

This quote from the report explicitly shows serious bias in their source data, and further shows that they do not consider their wording. It would be more accurate to say “The 4,512 vulnerabilities aggregated by MITRE during the second half of 2014…” The simple fact is, a lot more than 4,512 vulnerabilities were disclosed during that time. VulnDB shows 8,836 vulnerabilities aggregated in that same period, which is itself less than the 9,016 vulnerabilities aggregated in the second half of 2015. Microsoft also doesn’t disclaim that the second half of 2014 is when the aforementioned Will Dormann released the results of his ‘Tapioca’ project, totaling over 20,500 vulnerabilities, only 1,384 of which received CVE IDs. Why? Because CVE basically said “it isn’t worth it”, and they weren’t the only vulnerability database to do so. With all of this in mind, Microsoft’s comment about the second half of 2014 becomes a lot more complicated.

The information in this section is compiled from vulnerability disclosure data that is published in the National Vulnerability Database (NVD), the US government’s repository of standards-based vulnerability management data at nvd.nist.gov. The NVD represents all disclosures that have a published CVE (Common Vulnerabilities and Exposures) identifier.

This is a curious statement, since CVE is run by MITRE under a contract from the Department of Homeland Security (DHS), making it a “US government repository” too. More importantly, NVD is essentially a clone of CVE that just wraps additional meta-data around each entry (e.g. CPE, CWE, and CVSS scoring). This also reminds us that they opted to use a limited data set, one that is well known in the Information Security field as being woefully incomplete. So even a company as large as Microsoft, with expected expertise in vulnerabilities, opts to use a sub-par data set which drastically influences statistics.

Figure 23. Remote code executable (RCE) and elevation of privilege (EOP) vulnerability disclosures in Microsoft software known to be exploited before the corresponding security update release or within 30 days afterward, 2006–2015

The explanation for Figure 23 is problematic in several ways. Does it cherry pick RCE and EOP while ignoring context-dependent (aka user-assisted) issues? Or does this represent all Microsoft vulnerabilities? This is important to ask as most web browser exploits are considered to be context-dependent and coveted by the bad guys. This could be Microsoft conveniently leaving out a subset of vulnerabilities that would make the stats look worse. Next, looking at 2015 as an example from their chart, they say 18 vulnerabilities were exploited and 397 were not. Of the 560 Microsoft vulnerabilities aggregated by VulnDB in 2015, 48 have a known public exploit. Rather than check each one to determine the time from disclosure to exploit publication, I’ll ask a more important question. What is the provenance of Microsoft’s exploitation data? That isn’t something CVE or NVD track.

Figure 25 illustrates the number of vulnerability disclosures across the software industry for each half-year period since 2H13

Once again, Microsoft fails to use the correct wording. This is not the number of vulnerability disclosures; it is the number of disclosures aggregated by MITRE/CVE. Here is their chart from the report:


Under the chart they claim:

Vulnerability disclosures across the industry decreased 9.8 percent between 2H15 and 1H16, to just above 3,000.

As mentioned earlier, since Microsoft is using a sub-par data set, I feel it is important to see what this chart would look like using more complete data. More importantly, watch how it invalidates their claim about an industry decrease of 9.8 percent between 2H15 and 1H16, since RBS shows the drop is closer to 18%.


I have blogged about vulnerability statistics, focusing on these types of reports, for quite a while now. And yet, every year we see the exact same mistakes made by just about every company publishing statistics on vulnerabilities. Remember, unless they are aggregating vulnerabilities every day, they are losing a serious understanding of the data they work with.

March 19, 2017 Update – Carsten Eiram (@carsteneiram) pointed out that the pattern of local privilege escalation numbers actually follow an expected pattern with knowledge of researcher activity and trends:

In 2011, Tarjei Mandt while he was at Norman found a metric ton of LPEs in win32k.sys as part of a project.

In 2013, it was Mateusz Jurczyk’s turn to also hit win32k.sys by focusing on a bug-class he dubbed “double-fetch” (he’s currently starting that project up again to see if he with tweaks can find more vulns).

And in 2015, Nils Sommer (reported via Google P0) was hitting win32k.sys again along with a few other drivers and churned out a respectable amount of LPE vulnerabilities.

So 2012 and 2014 represent “standard” years while 2011, 2013, and 2015 had specific high-profile researchers focus on Windows LPE flaws via various fuzzing projects.

So the explanation is the same explanation we almost always see: vulnerability disclosures and statistics are so incredibly researcher driven based on which product / component / vulnerability type a researcher or group of researchers decides to focus on.

2007 Black Hat / DEF CON

Tuesday, July 31st, 2007 – Black Hat – Day 1

Flight was uneventful. McCarran has a new car rental complex a ways from the airport. Leaving the complex dumps you directly on the strip, how convenient! I imagine someone on the tourism board is happy with themselves.

Rented from Hertz as usual. While I did receive a mostly free upgrade from compact to full-size with GPS thrown in, the car came with on-board warnings about tire and oil pressure. The GPS apparently saved a co-worker who had to ferry another 10 miles off strip, but it failed miserably in helping us find a real bank branch we needed (it likes to report any ATM as a bank location).

This year we stayed at the Platinum Hotel & Spa which had obscenely good prices considering the hotel. Finished in late 2006, the rooms are actually individually owned and rented out like time shares from what I understand. The larger regular rooms are 1200 square feet and have a full size fridge, three seat bar, washer/dryer, balcony with four seat table, king+ size bed, huge jacuzzi tub, large shower and more. The exercise room and pool were more than adequate and the complimentary valet parking a nice touch, even though I prefer having free self-park if given the choice. Since the hotel is off strip the bar doesn’t get much traffic which is a shame, as the bartender Christie is really cute and very personable.

The Black Hat speaker party was fairly empty but nice of the organizers.

Wednesday, August 1st, 2007 – Black Hat – Day 2

Wednesday was the first of the long days many of us come to expect from con. Despite being up at an early hour I still ended up missing (skipping) many talks. Seems like each year the talks get more bland, more repetitious or offered at DEF CON too.

I participated in Hacker Court again, this year focusing on the legality of border searches as pertains to electronics and digital material. As with previous years, we had a great lineup of professionals participating. Kevin Bankston (EFF lawyer), Jennifer Grannick (Executive Director, Center for Internet and Society), Richard Salgado (former DOJ lawyer), Jon Klein (forensics expert) and Jesse Kornblum (forensics expert) under the guidance of Carole Fennelly put on a mock trial that explores legal issues surrounding computers, security and the law that haven’t been tested in the courts. For those interested in computer law, this presentation is fascinating on many levels. While it can be long for a conference presentation (pushing a bit past two hours), it is interlaced with humor and popular media references to keep the crowd happy. Despite the experts and unique presentation, Black Hat keeps pushing us more and more out of the way each year. Last minute cuts on the time slot, last minute room changes, virtually no press/advertising even though it is often the only talk and always concurrent with free food and an open bar. People still don’t realize they can get the free food/booze, walk 25 feet and sit down to enjoy it in the presentation room. All said and done, it ends up being a long involved process for preparing and organizing for very few people watching.

That evening I visited the party put on by Cisco at Pure. Had a relaxed evening with good food, free booze and a chance to chat with some of the Cisco folks that I have business dealings with.

We missed the SPI Dynamics party at Tao, unfortunately. While Spudlet and I wanted to check out the club, and I wanted a chance to talk with SPI more regarding their product, they had invited some 300+ people to a party that could only accommodate 150 or so given the space they had reserved. Sorry SPI, if I want to wait in line for a Vegas club, I can do that any night.

Following suit, the OWASP party at the Shadow Bar in Caesars was overbooked and had a long line. Given the small size of the bar, they really should have known to find more room. Perhaps we can joke about OWASP being vulnerable to an overflow. I regret missing this, as I had wanted to discuss various aspects of OWASP as they relate to OSVDB.

Fortunately, the VIM informal sit-down went well. Folks from CVE, Secunia and OSVDB had a great discussion about issues pertaining to vulnerability databases. This was the first time we had met some of the guys from Secunia, so it was nice getting their perspective, as their database is commercial and approaches things differently.

Thursday, August 2nd, 2007 – Black Hat – Day 3

Thursday started with a simple breakfast with Lyger and BK. I stole half the boiled egg off the chef salad BK ordered and failed to finish. We headed back to the convention to listen to a turbo talk titled “Social Network Site Data Mining” by Stephen Patton (CISSP!). This talk should have been titled “Look ma, free entry into con!”, as it was barely worthy of being called “remedial web surfing of social sites”. The fact that this guy actually investigates anything is scary, and I couldn’t even bring myself to point out all of the flaws in his talk after the first one I brought up.

Rick, DK, Lyger and I decided to grab lunch at Spago. Besides knowing it was an upscale restaurant by Wolfgang Puck, I had only heard it was snooty and joked about in the song “Car Phone” by Sheeler & Sheeler / Dr. Demento. The front of the restaurant opened up to the Forum Shops attached to Caesars Palace, and before we reached the host I joked about not being pretty enough to sit out there. A minute later the host suggested we could get immediate seating in the back, implying we were either too ugly for the front or all of those tables had magically acquired reservations ten minutes after they opened. We laughed and sat at the back of the pretty social bus, three of us enjoying shrimp po’boy sandwiches, Lyger enjoying two Bud Lights (how else does he maintain his girlish figure?!)

Shortly after, we returned to the convention and caught “Revolutionizing the Field of Grey-box Attack Surface Testing with Evolutionary Fuzzing” by Jared DeMott (President, VDA Labs). His talk and new fuzzing tool were definitely interesting, but the slides with the background of Jesus carrying a cross (and no joke/explanation) were a bit weird. We headed to the next talk quietly reflecting on our lord and savior (or joking about fuzzing Jesus). Next up was “Unforgivable Vulnerabilities” by Steven Christey of CVE/MITRE. At the conclusion of his great talk, I wondered if anyone else saw the hypocrisy of Litchfield/NGSS calling for VAAL, which is heavily dependent on publishing vulnerability information, something NGSS simply does not do. At the end, I also wondered: if we call vulnerabilities in regular software “unforgivable”, what do we call vulnerabilities in security software? The best answer anyone came up with was “criminal”, to which I agreed.

We spent a considerable amount of the afternoon talking with vendors, specifically Cenzic and SPI Dynamics, as many folks at work are no longer happy with Watchfire’s AppScan. Both tools have promise, and we will be testing them in the coming weeks. The fact that an extremely well-respected web application hacker type works for one of them is reassuring. Hopefully one of them works out.

Later that evening we hit the Hooters Hotel restaurant for some hot wings and the yearly OSVDB mangler dinner. Hot wings, booze and boobs, what a way to celebrate open source projects! Next up was the Microsoft party at Pure, this time on the terrace. While it does have a really swell view of the strip, it’s Las Vegas in the dead of summer and hot as hell. Miserably so. Worse still when the music is set to “club” loud rather than “geek social” volume. Still generous of Microsoft, I just wish they would treat it a bit more like an informal Blue Hat.

Friday, August 3rd, 2007 – DEF CON – Day 1

First, the badges issued for DEF CON 15 are neat, but the guy who designed them didn’t really test them. After programming them to scroll custom text (including “osvdb”, “I ❤ Satan” and “ATM” depending on who was with me), just walking around caused the badges to get bumped and lose the programming. Like previous years, they also ran out of badges within the first four hours of Friday (after giving out some 6,800, apparently?!)

First meal of the day: Pink Taco at the Hard Rock. Always exceptionally good food and a neat atmosphere. This year we had a pitcher of margaritas with a high-quality tequila that cost ~60 bucks. For a few bucks more it could have been Patron, but I didn’t do the ordering. Amusing that a bottle of Patron is 700 bucks and another bottle goes for 1,100+ (usually about 250 in stores). I know it’s Vegas, but that kind of markup is just stupid. Spending money for the sake of showing off, nothing more.

Spent the afternoon walking around the convention, meeting and chatting with various folks new and old. As in years before, the vendor area just screams “sell out”, with mostly “mainstream” geek stuff and ridiculously overpriced retro/legacy hardware.

Seemingly competing with the Wall of Sheep, another group had a similar display on a different wall, but with logins to various HTTPS sites. One of them yelled out that they would show how it was done later that night, suggesting a pretty efficient MITM utility. Slick stuff.

Opting for a quick dinner before the nightly parties, we stopped by the restaurant at the Platinum hotel. Since it had a good lunch menu with conservative prices (for a Vegas hotel), we figured the dinner menu would be good. Whether it was, we sure won’t know! If you are going to charge 40 to 50 dollars a plate, let me give you some advice. First, don’t drench the papers and electronics I set at the corner of the table in water. High-class restaurants use wait staff who know to hold a cloth under the pitcher when pouring to avoid that. Second, if you are going to charge more than Roy’s or Nobu, you had better have an exciting menu that moves beyond “steak”, and you had better come with reviews to back that price. After seeing the menu, we apologized and left.

Despite invites to the iDefense party at Body English and another party in a hotel suite by iSIGHT, it was a no-brainer to head over to the Bellagio to join Dave Aitel and Immunity at Caramel. This small lounge is beautiful, well staffed and an ideal location for a social gathering. While we chatted, the staff came around with trays of finger foods that put some restaurants to shame. All in all, this was hands down the classiest and nicest get-together at either convention, and it reminds me why such parties were thrown to begin with.

Saturday, August 4th, 2007 – DEF CON – Day 2

The last day of the convention week for me sounds more like an orgy of food than anything else (even more so than previous days), but I assure you the gaps are nothing more than bad memory and little of note!

Began the day with Pink Taco again, this time with Rick and a lot of business talk. Next was a lunch (no, I didn’t eat) with thewronghands and Konstantinos to talk about all things abnormal.

More time passed and I met Steven Christey, Tornio, Jake, Sullo, str0ke and Lyger for a good discussion about vulnerability databases.

More time passed, some booze was consumed, and I broke off for a private dinner at PF Changs. After some great discussion and humor, most of us headed back to the Riviera to hang out at the 303 and Ninja parties, which were side by side. Each offered good music, booze and great people. Props to Caesar for the Ninja party and to the various 303 thugs, especially Pyro, for theirs. 2 or 3am rolled around and off we went, for some sleep before a day of travel.

Sunday, August 5th, 2007 – Travel Musings

Traveling via plane is getting progressively worse, it seems. More delays, more crowds, more full flights, more canceled flights and little sign of it getting better. Spudlet and I decided to leave early to try to get out on stand-by. She had to work early Monday; I had to travel again most of the day. We checked in about a minute apart and both headed to the first flight. Neither of us made it on, so we shifted to the next available flight. I managed to get on that one and was told she was “two or three” names down the list. After sitting down I watched eight people board the plane, but no Spudlet. I have since found out that your frequent flyer mileage total helps dictate how fast you get a stand-by seat, not when you got on the list.

We push back from the gate, hit the runway and get notified of a one-hour delay due to weather in DEN, even though Lyger had just taken off from DEN shortly before. The crew manages to serve most of the plane water, but not me.

After an hour on the runway, we head back to the gate after being notified of an additional 45-minute delay, and are given that much time off the plane to grab snacks. Meanwhile, Spudlet was passed up not only for the plane I was on but for one more flight, and then got a seat on the fourth flight available. Despite boarding her plane as we were returning to the gate, she hadn’t caught wind of the delay until my text messages started rolling in. I get back on my plane just as the gate crew goes wild and stops anyone else from boarding. I quickly learn that my flight crew was “illegal” (they had worked too many hours that day) and had to be replaced. Once the new crew was on board, they let the rest of the people back on the plane, and we end up leaving the gate short eight people.

Right after leaving the gate, my iPod locks up. I’m sitting next to an unhealthily big woman who forces me off one armrest. I can’t see the movie, since the screen dropped down right above my head. Overall, a pretty bad flight.

All said and done, I land one hour after Spudlet, one hour after my luggage and one hour before my original flight. So getting to the airport at 11:30a instead of 6:30p bought me one hour, and I still walked in the door after midnight.

I mention my luggage above because my suitcase obviously did not travel on the same flight I did. This is a sore point for me, because I am fucking tired of United telling me I cannot get on a stand-by list because “you checked your bag, it has to travel on the same flight you do”. I’ve been told that three separate times, all by United gate agents, all quoting some mythical bullshit “regulation” that they selectively enforce. I still need to find out if that is a United or FAA regulation. If the latter, I’m definitely filing a complaint.

Con Summary

The Good:

  • Steve Christey w/ CVE/MITRE
  • The Hacker Court crew
  • Miles walked
  • 303/Ninja
  • Pigeons
  • Friends and coworkers

The Bad:

  • Overly crowded conventions and gatherings
  • Clubs and the desire to ‘be’, what people endure to ‘live the life’
  • Parties overlapping, forcing decisions
  • Hertz rental GPS and bank confusion
  • DEF CON badges resetting
  • Airlines