CVE and the matter of “unique” ID numbers

Common Vulnerability Enumeration, now known as Common Vulnerabilities and Exposures (CVE), is a vulnerability database (ignore their silly claim to be a ‘dictionary’) that the information security industry relies on heavily, unfortunately. Per MITRE’s CVE page, “CVE® is a list of entries—each containing an identification number, a description, and at least one public reference—for publicly known cybersecurity vulnerabilities.” Great. Digging a bit deeper into their ‘About’ page, we get a better definition:

Use of CVE Entries, which are assigned by CVE Numbering Authorities (CNAs) from around the world, ensures confidence among parties when used to discuss or share information about a unique software or firmware vulnerability, provides a baseline for tool evaluation, and enables data exchange for cybersecurity automation.

Please take note that a CVE Entry, or ID number, “ensures confidence” when discussing or sharing information about a vulnerability. Basically, it is supposed to be a unique ID to ensure that confidence. Despite that, any of my dozen loyal Twitter followers will see me constantly pinging researchers, vendors, and the media, pointing out that they are using the wrong CVE number to reference a vulnerability. Oftentimes it is a case of not copying and pasting, but rather typing the ID out manually. It is also why in the vulnerability database (VDB) world, we strongly emphasize that copy/paste is the best way to prevent transcription errors on an ID that is supposed to be unique.

Sure, it seems pedantic to a degree, but imagine if your doctor decided to manually transcribe a diagnosis code after your visit and you got a call saying you were diagnosed with something completely different. In the vulnerability world, it means you might be vulnerable to something and have no idea; if you are, you aren’t sure if there is a solution. Maybe a bit of a dramatic analogy? But it holds water, has a bit of recent history, and is the kind that helps administrators better understand the underlying issue.

Instead of sending out a series of Tweets on the latest example, I decided to write a blog to show how these little typos can snowball quickly. Any mature VDB will have a variety of processes to catch wind of a CVE ID that they haven’t seen before. It can be as simple as a live search on Twitter for ‘CVE’ (super noisy) or more technical means. If you run across an unknown CVE, you Google it to start; it’s that simple. Today’s example was CVE-2019-0895, which appeared to be a “new windows zero-day”. Exciting in the world of VDBs!

Let me go ahead and spoil things, to make this easier. These articles call it “2019-0895”, but in reality, they mean “2019-0859”. A simple transposition of numbers, which has been all too common in prior cases. Based on a ten-second review, it appears that Fossbytes was the first to transpose the numbers (Unverified @fossbytes14 on Twitter?). A day later, extremely similar articles appeared on Prodefense (no Twitter and a broken Contact form?) and “In Depth IT News / SecNews”, which has some serious rendering issues in Chrome. The day after that, Tech Rights referenced it via a weird embedded link in an unrelated article [1], and Tux Machines posted about it with key quotes cribbed from other articles, the Fossbytes article in this case.
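As an aside for fellow VDB folks: adjacent-digit transpositions like this one are trivial to flag programmatically. A minimal sketch (the function name is my own, not from any library):

```python
def is_adjacent_transposition(a: str, b: str) -> bool:
    """True if a and b differ only by one swap of two adjacent characters."""
    if len(a) != len(b) or a == b:
        return False
    # Find every position where the two IDs disagree.
    diffs = [i for i in range(len(a)) if a[i] != b[i]]
    # Exactly two neighboring positions, with the characters swapped.
    return (len(diffs) == 2
            and diffs[1] == diffs[0] + 1
            and a[diffs[0]] == b[diffs[1]]
            and a[diffs[1]] == b[diffs[0]])

print(is_adjacent_transposition("CVE-2019-0895", "CVE-2019-0859"))  # True
```

A check like this against known-good IDs won’t catch every typo, but it flags exactly the class of error seen here.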

In each case, it is clear that the offending typo came from Fossbytes. The “In Depth IT News” site even links to a source that has the correct CVE ID in the URL. It is absolutely clear that most of these sites are using automated crap to aggregate content and have no real desire to share accurate news. Each one of them is evidence of the damage caused by a single transposition error from Fossbytes, a “leading source of technology news with a focus on Linux distro releases” … that decided it was important to write about this critical Windows zero-day? A critical zero-day that was actually ten days old at the time of their article.

OK, hopefully we’re all on the same page here. My Twitter feed is a small graveyard of similar examples from the past few years. Each and every time, the “news” organizations that spread these bad IDs introduce confusion and questions into the equation, and are the antithesis of a “news” site. Finally, I would like to go on the record about one more bit regarding CVE, which will come as no surprise. On the CVE ‘About’ page, it says CVE is:

Industry-endorsed via the CVE Numbering Authorities, CVE Board, and numerous products and services that include CVE

As a former ten-year veteran of the CVE Board, I do not endorse CVE.

[0] Note: If any of my links show a fixed version of the CVE, good! You can see the originals on
[1] This should really be a separate blog post, but it would mostly be cursing around a simple concept; this is the problem with content/link aggregation sites… which are a plague on the Internet. In 2019, they aren’t trying to help; they are desperate attempts to make a few bucks. Disagree? Then they would have caught this error when they did a quick tech edit pass on the article. But they didn’t, because it is all automated and centered around ‘SEO’ (search engine optimization) so the article appears in Google results and you click and see the ads they are serving. I bet if anyone dug deep on such sites, the amount of questionable traffic or malware they delivered might be enlightening. Go back to where this is linked from and notice the URL of the article (/2019/04/18/libreoffice-6-2-3/) and how far you have to scroll to get to the bottom of the page, past all the “content”.


Microsoft, CVE, MITRE, ETERNALBLUE, Headache…

2019-02-14 Update: Thanks to Chris Mills @ MSRC (@TheChrisAM), who has been working behind the scenes since this blog was published to bring clarity to these assignments! MSRC is still potentially touching up some additional documentation to make it easier to see these associations, but here is the definitive answer from him:

CVE-2017-0143 ShadowBrokers : EternalSynergy (Blog)
CVE-2017-0145 ShadowBrokers : EternalRomance (Blog)
CVE-2017-0144 ShadowBrokers : EternalBlue (Blog)
CVE-2017-0146 ShadowBrokers : EternalChampion (Blog)
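For anyone scripting against this, the confirmed associations boil down to a simple lookup table (my own sketch for illustration, not an official MSRC artifact):

```python
# ShadowBrokers codename -> CVE ID, per the MSRC answer quoted above.
EQUATION_GROUP_CVES = {
    "EternalSynergy": "CVE-2017-0143",
    "EternalBlue": "CVE-2017-0144",
    "EternalRomance": "CVE-2017-0145",
    "EternalChampion": "CVE-2017-0146",
}

print(EQUATION_GROUP_CVES["EternalBlue"])  # CVE-2017-0144
```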

Note that only the EternalChampion blog does not reference the associated CVE, but he is working on getting that updated. I have also recommended that MSRC update MS17-010 to use the codenames in that advisory as well. Apparently editing the actual bulletins takes a bit more work, but he’s on it! I can’t thank Chris enough for running with this and helping bring clarity to these assignments.

There was initially a lot of confusion over the Equation Group disclosure: which were legitimate vulnerabilities, which were new, which were known, which were patched, and ultimately how they would be referred to other than by their leaked nicknames. That is the purpose of the Common Vulnerabilities and Exposures project (originally Common Vulnerability Enumeration): to give a unique ID to a specific issue so that you can reference a vulnerability without question. A year and a half later? We’re still wondering, apparently.

I contacted Microsoft Security Response Center (MSRC) on August 6, 2017 asking for clarification on the CVE assignment for one of the Equation Group vulnerabilities codenamed ETERNALBLUE, because their own resources contradicted each other. From my mail:

Per an older blog [1], the vulnerability known as ‘EternalBlue’ is assigned CVE-2017-0145. From the blog:

However, in this unique case, the ransomware perpetrators used
publicly available exploit code for the patched SMB “EternalBlue”
vulnerability, CVE-2017-0145, which can be triggered by sending a
specially crafted packet to a targeted SMBv1 server.

A newer blog [2] now lists it as CVE-2017-0144, which I believe to be incorrect. From the blog:

The new ransomware can also spread using an exploit for the Server
Message Block (SMB) vulnerability CVE-2017-0144 (also known as
EternalBlue), which was fixed in security update MS17-010 and was
also exploited by WannaCrypt to spread to out-of-date machines.

Can you confirm the correct assignment for ‘EternanBlue’ [sic], and due to the second blog, the assignment for ‘EternalRomance’, and update your blog(s) accordingly?

All this time later? MSRC never answered my mail, and never fixed one of the two blogs. CVE’s description of each does not mention the nickname in either entry. So the assigning CVE Numbering Authority (CNA), Microsoft, and the core CVE project (MITRE) still don’t answer this question. To date, the Microsoft advisories for those two CVE IDs still don’t mention the nickname. To add more confusion? Try using Google to find it, and you get a third CVE ID it might be (screenshot below). Although, that one result doesn’t actually have ‘EternalBlue’ in it, making us wonder why it is the sole result. The blog that MSRC originally published to add some clarity to the Equation Group disclosure still only references MS17-010 (via a now-dead link). Looking at the new location for MS17-010 doesn’t find the nickname in the advisory either.

To this day, I am still fairly sure ETERNALBLUE is CVE-2017-0145 and attribute it as such, but it sure would be nice if MSRC would clean up and clarify this mess.

Further, I have had to chase down two more errant CVE assignments by MSRC in recent months, which was fairly painful. After getting the runaround on both, being told to go ask Microsoft Support via a forum (despite MSRC being the definitive source for this information), not getting a reply, opening a new ticket with MSRC, and reminding them that I was still waiting… those two finally got resolved after a month or more. I really don’t like casting shade on MSRC as, over the years, in total, they have been wonderful to deal with. However, the last couple of years have seen a serious decline in handling this type of incident, which should be ‘Vulnerability 101’, and a serious uptick in their resistance to clarify assignments when asked. Finally, if you are wondering why MITRE doesn’t provide some kind of oversight to this? Well, they basically never have, despite repeated requests for just that. Their only oversight is a ‘CNA Report Card’ that is more about statistics of assignments and such, and does not deal with the quality of assignments, incidents of confusion like this, or anything else that would be helpful to the community.

The only upside to all of this? I got to [sic] my own typo from the quoted email.

The Attrition DC26 Badge Challenge Post Mortem

This year, which was my final trip to DEF CON, I made up one last round of Attrition DEF CON badges. In prior years they were typically engraved luggage tags a bit more specific to the year:

Since #BadgeLife has become a big thing, especially this year as far as I can tell, I decided to go a bit lower rent on the badge material but ‘up the game’ on the content. I did a ‘cipher challenge’, which of course was never meant to be a real challenge. I’m not nearly smart enough for that shit. I literally came up with it in less than a day, didn’t vet it with anyone, and just moved to mock up a badge and print it. Because I am so pro! I also figured anyone who knows me would know not to trust me on anything ‘cipher’ or ‘challenge’, especially ‘cipher challenge’. Unfortunately, and I do feel bad, a handful of badge-holders went down this rabbit hole.

This write-up is for them, to explain just how fast this was put together, and the lessons I learned as well. The CliffsNotes details, as I originally intended:

  1. (hobbit) -> “never trust us”
  2. -> “except this time”
  3. location hint (flamingo hotel) -> “Phoenicopteriformes”
  4. refined location – wildlife habitat long/lat -> 36.11662720392657 / -115.17115294683322
  5. 08/11/2018 @ 3:04am (UTC) Epoch Unix Time -> “1533956647”
  6. Klingon “take proof you were there” -> “pa’ SoH’a’ tob tlhap”
  7. random letters/numbers -> (unsolvable/gibberish)
  8. show Jericho proof (latin) -> ostende inamabilis sciurus
  9. winner winner chicken dinner -> (icons)

Seems pretty straightforward! Unfortunately, a few of these didn’t work out so well, as I found out in surprising ways. Here are the hiccups I didn’t expect.

  • (1) There are multiple Cirth character sets. Pretty minor, but it led to a couple of people saying the translation was off. Worse? The one character that was off fed into another hint and made it more believable. I should have read through the Wikipedia article to notice that, but growing up as a skilled writer in ‘Tolkien Runic’ (Cirth), I didn’t think about it.
  • (2) Always trust the first hint, never the second!
  • (5) So… Epoch Unix Time is an absolute. You don’t adjust for timezones, because the time is in Coordinated Universal Time (UTC). The Wikipedia entry for UTC confirms it “is not adjusted for daylight saving time”. So my intention of it being on Saturday morning at 3:04am was correct. I didn’t account for everyone adjusting for time zones, some adjusting for Las Vegas’ time zone (Pacific), or some trying to second-guess it and using my time zone (Mountain). On this point I am vindicated: anyone loitering around flamingos at the Flamingo between ~ 8p – 10p local time was not following the cipher. Yes, I still feel bad they showed up thinking there was a prize/reward there.
  • (6) I really should have known better here, since Google Translate fails to translate simple text from one language to another and back again. I fell into this trap using the first Klingon translator that Google offered and did a simple one-way translation. Unfortunately, that same site changed “take proof you were there” drastically into something involving a cat. I like cats, everyone knows this, so the clue still had some crazy merit. Fortunately for me, one of the badge-holders knows a lot more about Klingon than the online translators do, and gave me a deserved verbal berating over the horrible translation. This led me back to that translator, where I pasted “pa’ SoH’a’ tob tlhap” back into it and got, you guessed it… “you take a cat room”. This was a solid break in the intended chain, and a deal breaker for solving the badge. Oops.
  • (7) This line had a simple intention, but may have been the weirdest in the long run. A bunch of random numbers and letters, with no intended meaning, to be an ultimate ‘gotcha’. So no one could say they solved it, or if they did, I could challenge them on that line. I left this up to the wonderful badge artist, Anushika, who typed in a random string while designing it. Between that and the chosen font, there was even question over one or two characters. Either way, I thought it served a purpose. One nice lady from Australia (she is nice, despite her DMs irrationally suggesting I not call her that) spent a lot of time on this, maybe more than anyone else. At one point she messaged “Threw it through successive shifts. And the answer it gave me was successive shifts.” This was after I reminded her of previous comments, that “i’m not really bright. hashed, encrypted, encoded… i get so confused”. No false modesty or deception; math is a religion, and I don’t believe. Ergo, crypto is a foreign language to me for the most part. So that random line had some merit in the math world maybe? Put it through successive shifts, and the answer is more successive shifts. That certainly sounds like I was really brilliant in a troll cipher, when I was the farthest thing from it. She kind of spooked me when she told me that and I thought “oh shit, this line has meaning?!” Kind of disappointed that a ‘troll cipher’ isn’t a real thing with a Wikipedia entry!
  • (8) Translation woes again. As someone who took a year of Latin in high school, seriously, and knows about the headache of online translators… not sure how I got burned twice in one badge. I translated “show squirrel proof” since I knew it wouldn’t handle “jericho”, and got “ostende inamabilis sciurus“. This is where it gets really weird. Someone messaged while in Vegas that the translation was off, and I went to check again, using Google Translate again. Click that link and you will see the problem. The translation changed between making the badge, and someone translating it after receiving the badge, which was around 30 – 40 days. So now it became “inamabilis sciurus ostendit probationem“. This caused a problem because the first translation now reverses as “show squirrel” which is lacking a crucial word. The updated translation, when reversed, comes back as “squirrel proof shows“, which is a bit closer to the intent. Ugh. For fun, since we had to pick ‘Latin’ nicknames in my Latin class, I chose Sylvester. #JerichoTrivia
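For anyone who wants to verify the epoch hint themselves, a quick Python sketch; converting Unix time to UTC requires no timezone math at all:

```python
from datetime import datetime, timezone

# The badge's epoch value; Unix time is defined in UTC, so no
# adjustment for Las Vegas (Pacific) or anywhere else is needed.
stamp = 1533956647
utc = datetime.fromtimestamp(stamp, tz=timezone.utc)
print(utc.isoformat())  # 2018-08-11T03:04:07+00:00
```

Anyone shifting that result into a local time zone was adding a step the cipher never asked for.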

So there you go, badge-holders and adventure-seekers! I sincerely apologize for any hardship you went through, to a degree, because that first line really is gospel when it comes to me, attrition, and anything remotely close to a challenge. Years prior, I wanted to do a luggage tag badge like those pictured above, but with holes cut out in a Goonies sort of way, along with instructions to stand in the middle of Las Vegas Blvd to line up three landmarks to figure out where the party was. After this badge challenge? Probably for the best I didn’t, or I bet I would have gotten a few people run over. On the upside, you got to spend time with flamingos, largely more bearable than the average DEF CON attendee.

Jericho in Vegas Next Week… (for real)


Given my occasional good-natured trolling on Twitter, and since many have asked me the last few weeks, I want to set the record straight. I will be in Las Vegas next week, for real. I arrive tomorrow evening and leave the following Sunday. This is the first time at BH/DC in several years for me.

Between Monday and Wednesday I will be doing the corporate thing around Mandalay and adjacent to the Black Hat event. I am not actually attending the conference, thus ‘adjacent’. Each day already has several meetings lined up so I won’t be readily available for parts of the day. When not in a meeting, happy to meet up with anyone looking to better understand the nuances of the vulnerability intelligence landscape. On Tuesday evening I will be at the Guidepoint Party at the Aureole in Mandalay Bay for several hours. Wednesday night I hope to crash the BSidesLV pool party and enjoy the cool 94 degree temperatures Vegas has to offer at night.

Between Thursday and Sunday I will be doing the hallway thing at DEF CON primarily. On Thursday at 3:30p I will be on the DC101 panel, apparently because I am old, to dish out horror stories about our industry to those attending. On Friday and Saturday I will no doubt be around Skytalks on and off to harass and support that track. Otherwise, you can likely find me roaming around Caesars and Flamingo checking out villages and side events.

I have a Twitter client on my phone but it doesn’t have any alerts, so that won’t be a reliable way to reach me. I hope to check Twitter every so often, but my lizard brain isn’t really wired to check it. If I do camp down at a spot in a hallway or bar, I hope to remember to Tweet my location in case anyone wants to discuss wildlife rehabilitation or vulnerability databases or anything else interesting really. As for spotting me, I will be one of ~ 100 wearing the DC26 Attrition badge, and a T-shirt that has an animal on it. As many have said, I too am really bad at remembering names while fairly good at remembering faces. Worse, when I do remember, I have to figure out whether you prefer to go by real name or handle at which events. Please don’t be offended and please re-introduce yourself! It may take me a minute to remember our history; my brain is a tad broken these days.

Finally, this will be my last year attending DEF CON. I attended DEF CON 2 back in 1994 at the Sahara, so this will be my 25th anniversary. I see a lot of value in DEF CON and continue to volunteer reviewing talks on the CFP panel to help shape the conference and try to make the content the best possible. Next year I will stay on with CFP in a more limited role, but still offer my input for certain types of talks. That said, as many say before and after ‘hacker summer camp’, the week is emotionally and physically draining, and many of us often come back with ‘con flu’ or some other kind of crud. The last time I attended, I went a full week not seeing some friends that were in Las Vegas, because the meta-convention is just so big and spread out. I hope that doesn’t happen this year, but it is one discouraging aspect of a week in Vegas.

While DEF CON doesn’t work so well for me personally, I see a lot of potential in it, especially with the huge rise of villages. More and more people I talk to say that the villages are the first part of the conference that attracts them, more so than the main lineup of talks. Villages are a modern evolution of the old “birds of a feather” sessions at conferences back in the day, before ‘hallway con’ was even a thing. A group of people that share a particular interest and want to focus on a given topic have the ability to do it. Even better, that often comes with elaborate and painstakingly designed networks and challenges to test your skills and learn more. In addition to villages are the side events for runners, shooters, coffee-drinkers, and more. I encourage everyone, especially newcomers, to embrace these side events and villages. DEF CON will be what you make of it, and there is more opportunity now than ever before to make the best of it.

DC26 Attrition Badge Round-up

This is the first DEF CON I am attending after a long break. For kicks I decided to make up a run of DC26 Attrition badges like prior years and conferences. Depending on who you ask, the badge is a decoration only, or it gets you into fabulous parties and amazing events. Anyone with a badge is encouraged to embellish.

Since the July 5 announcement of the badge, I increasingly focused on using them to raise money for charity. That, in turn, prompted several people to ask for details of the badges and the money raised. This blog will hopefully answer those questions and maybe inspire others to help out when they can. If you aren’t interested in the quick story, scroll down to the inspiration section please.

First, a link-heavy summary. On July 7, I did the first charity challenge looking to raise money for the ACLU, GLBT Community Center of Colorado (The Center), and Planned Parenthood. I also started giving out a handful of personal challenges to random people expressing interest in a badge, with fun results.

On July 10, I did a second charity drive bigger than the first. I also offered one badge up as part of an art challenge for the best original art featuring Lazlo. Deathjaw17 won that with this epic piece:

In addition to the art, I did a few other trades including for this slick challenge coin as well as a few other DC26 badges. At this point some of the winners of badges started posting pics, including with chickens, with epic beasts, and with bubbly! The Lazlo badge also got a tour of Philly and a sweet visit to the CompSci building in War Games. One badge went out and led to a fun picture and backstory of a ‘dojo squirrel’. During this process, I got an unexpected care package from Kentaro, which he sent before I sent him a badge, and @Otterannihilation received a badge and sent back an amazing gift as a thanks. Meanwhile, pictures of badges kept coming:

    Inspiration and the Opposite

By this point, after two big charity drives, and several subsequent one-off drives, it was clear to me that raising money for charities was a great option. Badges were in demand and a lot of great people were willing to throw in money to help great causes. This also led to some other great opportunities that aren’t donations to charity, but amazing ways to help out. The level of inspiration and good-will in our industry is always refreshing, one of the few things that keep some of us from losing all hope. More on that later.

The opposite of inspiration came in two forms. First, while the badges with lanyards cost $298.60, the postage to mail them out to x people cost $448.12, meaning the entire effort cost $746.72. This was due to the lanyards, which meant the badges couldn’t go as envelopes; they had to go as packages. Each one cost $3.50 domestic, $10 Canadian, and between $13.75 and $14.25 to mail international. This resulted in one fun trip to the post office that took around 30 minutes and produced a generous receipt.

The second came in the form of being questioned and challenged about my badges repeatedly, and being accused of “strongly [reinforcing] exclusive cliques within infosec”. After assuring someone this was not “a dark stunt satirizing infosec exclusionism and signaling”, giving information on the charity contributions at the time, and reminding everyone that “the charity-driven badges are open to *anyone*. i have sent badges last week, and will send some this week, to people I don’t know and have had little to no interaction with”, I still faced questions about whether I was reinforcing the exclusive cliques in infosec. I’ll say this definitively: I am not reinforcing cliques at all. This is trivial to see if you remember the definition of a ‘clique’, and consider that I don’t know half the people getting a badge other than through a brief Twitter interaction.

OK, back to the inspiration. At the suggestion of Noah, with his input, two badges were given out to people who volunteered to provide InfoSec training for free. First, Jim Manico volunteered to give one of his well-known and appreciated AppSec classes in December, on his birthday, for free, with a focus on recruiting women, LGBTQ, and/or PoC for the class. Additionally, Bones volunteered to design and give an infrastructure/cloud security pentesting course. I also suckered her into slipping in a not-so-subtle requirement.

An even bigger inspiration, and one that shocked me, was the community stepping up to donate to charity for a badge. Once I saw the generosity, I ran with it and focused on using a majority of the badges to continue raising money for charities I support, and ones that the donors support. The charities that received donations in return for badges included the ACLU, Cavy Care, Center for Genocide Research and Education, Colorado Animal Rescue, Electronic Frontier Foundation, Greenwood Wildlife Rehabilitation Center, Hawaiian Humane Society, Kids in Need Foundation, Planned Parenthood, Retriever Rescue of Colorado, SaveABunny, Special Operations Warrior Foundation, Sprout Therapeutic Riding and Education Center, The Wild Animal Sanctuary, and Women in Security and Privacy (WISP). In total, 69 donations from 67 heroes between 2018-07-06 and 2018-07-28 raised $8,453.47. I’m still happily shocked at this outcome.

I also want to thank Heidi for chatting and educating me about Women in Security and Privacy (WISP) and their initiative to help more women get to DEF CON. Over a week of chatting, it started out as “this is my first DEF CON and it is rough financially” to her being one of the recipients of the WISP grants. Even better, one of the people that donated and won a badge said to give it to someone else. I suggested Heidi and they said that was a good choice! So on top of getting help to DEF CON, she got a badge, and I threw in some stickers to round out the fun.

Finally… are you sad you didn’t get a badge? Depressed that you didn’t get a chance to donate to charity to win one? Fortunately for you, there is one last chance! Jives reached out and we’re partnering for a big charity auction, with a couple of days left! You can bid to win a DerbyCon ticket, a DC26 Attrition badge, and a custom box of shit! Bid now, bid often, win this sucker!

DEF CON 26 CFP Basic Statistics and Observations

This is the second blog in a series about DEF CON 26 CFP. The first:

A Look Into the DEF CON CFP Review Board (we’re actually really boring people)

First, this post is not sanctioned by DEF CON in any way. I am a member of the CFP team who decided to keep some rudimentary statistics on the submissions this year, as I did last year. I did this to give the team a feel for just how many submissions we got and how many talks we accepted, and primarily to track the way we voted. This greatly assists the powers that be (the amazing Nikita) in more quickly determining which talks are well-received. Due to time constraints, I was not able to track as much metadata, so this blog will be shorter than last year’s.

First, a few bits of information:

  • DEF CON 26 CFP opened on January 16, 2018
  • DEF CON 26 CFP closed on May 01, 2018
  • Two talks were submitted after the closing date and were considered for various reasons
  • We received 551 submissions (up from 536 last year)
  • Four of the submissions were withdrawn by the submitters by the end of CFP
  • Black Hat received around 1,000 submissions this year, for comparison

A recurring theme in these blogs and our Tweets throughout the CFP process is strong encouragement to submit early. While we did get a share of submissions in January and February, you can still see the huge spike we experienced in April (a majority the day before CFP closed) and May (on the day it closed). The two weeks between the end of CFP and the time when acceptance/rejection letters are sent out become stressful, as we’re under deadline to review talks, try to get last-minute feedback when we can, and make final decisions.

Of the 551 submissions, 107 were accepted (19.4%). There were 388 unique male submitters, 39 unique female submitters, and 14 anonymous submissions (note: we only catalog based on the gender, if known, of the primary speaker). Of those 14 anonymous submissions, 3 were trivially identified because the submitter didn’t scrub their submission properly or submitted work that had been presented before and was caught with a quick Google or Bing search.

Of the 551 submissions, 173 (31.40%) said they would release a new tool. 77 (13.97%) said they would release an exploit, up from 56 (10.53%) last year. Of all the submissions, 216 (39.20%) were also submitted to Black Hat and 51 (9.26%) said that speaking at DEF CON was contingent upon Black Hat accepting their talk. Only 73 (13.25%) submissions were also submitted to BSidesLV. Of the 551 submissions, 122 of the speakers had presented before at DEF CON, and an additional 28 had presented before at a DC Village or Workshop.
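For anyone who wants to sanity-check the percentages above (or reproduce them next year), the arithmetic is just counts over the 551 total:

```python
TOTAL = 551  # submissions received

counts = {
    "accepted": 107,
    "new tool": 173,
    "exploit": 77,
    "also submitted to Black Hat": 216,
    "contingent on Black Hat": 51,
    "also submitted to BSidesLV": 73,
}

# Percentage of total submissions for each category.
for label, n in counts.items():
    print(f"{label}: {n}/{TOTAL} = {100 * n / TOTAL:.2f}%")
```

(The accepted figure prints as 19.42%, which rounds to the 19.4% quoted above.)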

Unfortunately, time did not permit me to properly track ‘red’ vs ‘blue’ vs ‘black’ submissions, nor categorize the talks. That said, 11 talks were about ‘Artificial Intelligence’ and/or ‘Machine Learning’, even if some of them didn’t quite seem to know what those terms really mean. Ten submissions were on the topic of, or heavily related to, blockchain. Eight submissions came with an ultra-creative title that included “for fun and profit”, four included “all your $blah belong to us”, two used “pwned” in the title, and fortunately for our sanity, none wanted to make $blah great again.

That’s it! I realize this is a bit more brief than last year, but the time requirement of reviewing all of the submissions is crazy. Finding extra time to maintain the sheet is rough, and generating even more statistics or tracking additional metadata just can’t happen sometimes. Fortunately for me, this year Highwiz stepped up and did an incredible amount of work filling in data, especially while I was lost in the mountains for a few days.

A Look Into the DEF CON CFP Review Board (we’re actually really boring people)

Written by Highwiz with contributions and editing from Jericho

Being on the DEF CON CFP Review Board can be as exciting as {something}; as frustrating as {something}; as thought provoking as {something}; and as enriching as {something}. It’s like mad libs, I hope you’ve filled in this section with something good.

Each year, myself and somewhere between 16 and 20 other reviewers take on the responsibility of selecting the best possible talks for DEF CON.

Oh, I should also apologize in advance as you read this first entry in the CFP Blog series. I apologize because I am not known for my brevity. In the “written word” and especially when it comes to something I’m passionate about, I tend to be wordy AF. [See, like that sentence: Could have just said “Hope you enjoy”, but nope – not me…].

I do genuinely hope that someone finds these blog postings helpful and that it will allow submitters (or potential submitters) some insight into the way we work so as to better prepare their submissions in the future.

In its original form, this post was about as dry as some of the white papers included in several submissions. Speaking of, white papers help tremendously when we’re reviewing your submissions, and if you include one, you’re already ahead of the pack. Sadly, however, while white papers do indeed help your chances during the CFP, they make for really shitty blog posts.

While we’re on this wild tangent of things that are related to the CFP Board but not actually part of the CFP process itself, let’s talk about the term “CFP”. Above, I mentioned white papers; while the term CFP originally did mean “Call For Papers”, it doesn’t anymore. Most people don’t submit papers. When you think about the term CFP, you should really think of it as Call For Presentations. I know I’m not the first person to say that and I definitely won’t be the last, but still, it bears saying.

Alright, back to the topic at hand…

This year, the DEF CON Call for Presentations (CFP) Review board was made up of 16 “General Reviewers”, six “Special Reviewers”, and two members of the DEF CON staff.

The DC CFP process is not “blind”, meaning reviewers can see each other’s votes, and we see who submitted each talk unless the submitter specifically opts to stay anonymous (and properly scrubs their submission). There are merits to both open review and blind review, but we’ve found that an open review significantly helps our process, as there is a lot of good discussion about each individual submission. One reviewer may spend considerable time digging into the topic, researching prior disclosures or talks along the same lines, or offer their personal in-depth knowledge, which typically helps several others better understand the topic and state of research.

If you submitted a talk to DEF CON this year, then all of the General Reviewers most likely reviewed and discussed your talk. While these reviewers tend to agree on many talks, there are also submissions that cause arguments and intense, heated discussions. Most of the review board members have a very extensive vocabulary and seem to enjoy finding new and creative ways to use the word “fuck” in a sentence (both in the positive and negative). Though, while the topic of vocabulary is at hand, let me say this to my fellow review board members: y’all motherfuckers need to find a new word besides “pedestrian“. I’ll leave it at that.

As reviewers, every year we’re left wondering why certain people have chosen to submit to DEF CON and whether or not they actually understand what type of conference it is. A prevailing sentiment on many submissions is “This is not a DEF CON talk”. While the content may be of significant quality, the question we often ask ourselves is “is this talk right for DEF CON?”. Sometimes the answer is that while it would be good at a developer conference, RSA, or Black Hat, it simply wouldn’t be right on a main stage at DEF CON. DEF CON is, or at least it strives to be, a hacker con first and foremost.

TL;DR : This is DEF CON, please bring your “A” Game.

The Time Commitment

Oftentimes people ask to be on the CFP Review Board because it is an honor and privilege to be among the group that selects the presentations for DEF CON… It’s also a giant time suck, which people sometimes fail to realize (or believe us when we tell them).

Now for the more formalized explanation of that so my “editor” doesn’t get pissed:

It’s been stated before, but being on the DEF CON CFP Review Board is an enormous time commitment. In the first few months, the average time a reviewer spends on talks is ten to twenty hours a week, depending on the volume of talks received. In the last two weeks, when everyone is rushing to submit before CFP closes, the time required rises to forty or more hours a week. The DEF CON CFP Review Board, like many other CFP Review Boards, is an entirely volunteer activity that many times becomes a second job. This is one of the big reasons we encourage people to submit earlier, and not wait until the last minute. Total time spent for a General Reviewer is probably in the range of 280 working hours.

The rule of the board for a General Reviewer is to do as many talks as you feel you are able to, but hit at least 70% of the talks. In practice, and as far as the other general reviewers are concerned, you should be getting as close as you can to 100% of the talks. If the other reviewers feel that you’re not pulling your weight (so to speak) they will call you out. We’re like the Fremen in that sense, crysknife and all. In less nerdy terms, no one wants to get shanked in the exercise yard because they didn’t review enough talks.

The topic of the exercise yard leads us into our next area, the prison guards… I mean, the DEF CON CFP Review Board staff.

The DEF CON CFP Review Board Staff

Nikita and Alex are the foundation of the Review Process. They post the talks, interact with the submitters, deal with the reviewers when we’re cranky and obstinate (we can really be bitches sometimes), reshape the feedback given by the reviewers and transmutate those turds into flowers and candy before the submitters view it. They are the fecal alchemists and without them, the process would not work.

Similarly, there is the non-official review board staff member in the form of Jericho who tracks our submissions, votes, and other information. He categorizes the talks for us while providing amazing feedback and insight into anything vulnerability disclosure related. Like Nikita and Alex, Jericho is an integral part of making the DEF CON CFP Review Board function and prosper.

The fourth person (another unofficial one) who deserves a great amount of credit for making sure that people keep up with their reviewing is our own special CFP Vocal Antagonizer in the form of Roamer. If a review board member is slacking they can be certain that Roamer will “gently” remind them that they need to review talks. This is an important role as we want as many of the review board to provide feedback and vote on a talk as possible. This ensures more reviewers see it, and provide commentary based on their diverse background. In other words, Roamer is like a shot caller; if you don’t sack up and do the tasks assigned to you, you’re going to wake up with a horse head in your bed.  

Both Jericho and Roamer are inspiring examples of what it means to truly care about the hacker and DEF CON communities. On a personal note, it’s also pretty cool that I get to call Nikita, Jericho, and Roamer, these amazing people, my friends. I say that because after all these years, they still talk to me, even though I can be a bit dramatic.

While we’re on the topic of dramatic people, let’s talk about our special reviewers. I’m just kidding, where drama is concerned all of them pale in comparison to yours truly.    

Special Reviewers

Our special reviewers are subject matter experts who specifically comment and give their feedback on talks in their “wheelhouse”. There are many talks where the “general reviewers” simply don’t feel fully qualified enough to make the necessary judgement of a “yes” or “no” vote. Sure, they are familiar with a topic to some degree, but just don’t spend their lives immersed in that corner of security.

Everyone in InfoSec “knows” about pen-testing and social engineering for example. However, unless that is their primary tradecraft and they have been doing it for a decade or more, they may not be keeping up with the latest tools and techniques. In such cases, the general reviewers will typically “defer” to the subject matter experts. The input provided by the Special Reviewers this year has been invaluable in helping shape what DEF CON 26 will be.


The DEF CON CFP Review Board has a unique style in how they (we) review talks in contrast to many other CFP Review Boards. There is oftentimes a lot of discussion that goes on about individual talks that plays a key part in the process. The reviewers do not live in a vacuum when reviewing the individual talks; rather, they are encouraged to communicate with one another openly on the system so as to provide a higher quality of talk selection. Sometimes the discussions may turn heated, but at the end of the day it does improve the final selection. “Heated” is a really nice term. It’s a really nice term because when we say it, you may think we might mean like a “hot summer day” when in fact we mean the fires of Mordor, or whatever is causing a burning sensation in the nether regions.

That being said, on the Review Board it’s very important to be open to new ideas and perspectives, which such discussions strongly facilitate. I don’t think the DC CFP review board would work nearly as well under any other type of system. Conversely, what works for “us” may not necessarily work as well for other CFP Review Boards.

How do I get on the CFP Review Board?

First, are you really sure you want to? Do you really have the time? The numbers we posted before about the time commitment weren’t an attempt to oversell things (in fact they are probably conservative estimates). As a review board member you will be dedicating that much time to reviewing talks over a three to five month period, with the final weeks being absolutely brutal. And if you don’t? You’ll find yourself being called out or greenlit by a shot caller. And then the best case is that you may not be asked back the following year. Remember, you are helping to shape the tone, feel, and content of DEF CON, the longest-running hacker convention, now attended by over 25,000 people. That is an incredible responsibility and you are helping ensure that attendees get value from the talks they attend.

Still want to do it though? OK. Talk to some CFP Review Board members at DEF CON 26. That’s it… just do that. Judge for yourself based on how they describe it, the good and the bad. If any of them describe a breezy stroll through a nice park with flowers and chipmunks, walk away. They aren’t telling you the whole story.

Why don’t you have a CFP Review Board Panel at DEF CON?

First, it would be super boring. Invariably the attendees are going to ask us a lot of questions that we can’t answer about specific submissions. While we may “vague” tweet or generally answer a question, we can’t and won’t provide specifics on submitted talks beyond what Nikita and Alex have provided as official feedback, and then only to the person that submitted the talk. So the panel would consist of a lot of jokes, high-level “CFP tips”, and not much more value. If you really want to “know” more about the CFP, just find out where some of us hang out at DEF CON.

Before we end this first entry in this series of three or four posts, I would like to take the opportunity to thank you for reading along thus far. Jericho and I worked on this entry, but he shouldn’t be held responsible for my tangents, side notes, and improper use of some punctuation.

Credit Roll

First and foremost, we really need to thank those people around us (friends, family, significant others) who deal with us during the three-to-five-month-a-year process of reviewing talks. They truly are the unsung heroes. They know we can’t go into specifics, but they’re there to listen to us bitch and moan about “that talk”. They understand us during this endeavor when we forgo plans to hang out with them or we’re not in bed until three hours past normal time. Without their support, we could never accomplish the task laid out in front of us.

General Reviewers

Jericho Roamer HighWiz Shaggy
bcrypt Vyrus Zoz Claviger
Suggy Wiseacre Secbarbie PWCrack
KingTuna Medic Dead Addict ZFasel

Special Reviewers

Andrea Matwyshyn w0nk Malware Unicorn
Snow Kodor Grifter


Nikita Alex

DEF CON Founder

The Dark Tangent


We’d also like to give a big shout out to the Workshops Review Board. While they are a separate entity from the CFP Review Board, their contributions to DEF CON are just as important.

Tottenkoph Munin Sethalump DaKahuna
CyberSulu Kodor SinderzNAshes Wiseacre
HighWiz

In part two of the series we will be covering the statistics, because that’s the type of thing that makes some of us (but especially Jericho) super wet.

With part three will come our thoughts, and comments on the Submission Form and the Questions we ask.

Part four will be some lessons we’ve learned along the way as well as ideas for improving things in the future.

One last thing, Jericho is totally the Jimmy McNulty of the CFP Review Board.

Continue reading the second blog in this series, “DEF CON 26 CFP Basic Statistics and Observations“.

Before you publish your end-of-year vulnerability statistics…

TL;DR – The CVE dataset does not allow you to determine how many vulnerabilities were disclosed in 2017.

I’ll try to keep this fairly short and to the point, but who am I kidding? Every year for a decade or more, we see the same thing over and over: companies that do not track or aggregate vulnerabilities decide to do their own review and analysis of disclosures for the prior year. Invariably, most do it based on the publicly available CVE/NVD data, and they do it without understanding what the dataset really represents. I know, it seems simple on the surface, but the CVE dataset is not easily understood. Even if you understand the individual contents of the export, you may not understand how it was created, what shortcomings there are, what is missing, and what statistical traps you face in digesting the data. Just doing the basic parsing and automated ‘analysis’ of that data via your tool of choice (be it grep or something fancier) means very little unless you can disclaim and properly explain your results. Either way, please follow the advice below before you publish your ‘vulnerability stats for 2017’!

So let’s start with the basics of CVE data analysis. Begin by grabbing the latest CVE dump, a gzipped CSV file that represents MITRE’s CVE dataset. Note that this is different from the exports NVD offers; welcome to the first hurdle. While the base vulnerability data is 100% equivalent between the two, NVD does additional analysis and creates metadata that is useful to many organizations. NVD provides CVSS scoring and CPE data, for example. The relationship between CVE and NVD is interesting if you observe it over time; it used to be a clear ‘MITRE publishes, a day later NVD publishes’ relationship. For the last year or two, NVD will sometimes open up a CVE ID before MITRE does for various reasons. This also gave way to Bill Ladd observing and writing about how the Chinese National Vulnerability Database (CNNVD) is actually opening up CVE IDs faster than both NVD and MITRE. Consider that for a minute and understand that the relationship between these three entities is not straightforward. Then consider the relationships between many other entities in the bigger picture, and it gets even more convoluted.

See? You start by grabbing a data dump, a paragraph later you have the start of disclaimers and oddities as pertains to the larger CVE ecosystem. Next, decompress the CVE dump so you have a CSV file to work with. Now, before you eagerly start to parse this data, stop for a moment. Did you do this same analysis last year? If so, great! Do you understand what has changed in the last 18 months with regards to CVE and more specifically MITRE? If you can’t quickly and readily answer that question definitively, the kind of changes that are the first in almost 19 years for the program, reconsider if you should be commenting on this data. In case you missed it, Steve Ragan published an article about MITRE / CVE’s shortcomings in September of 2016. The article pointed out that MITRE was severely deficient in vulnerability coverage, as it has been for a decade. Unlike other articles, or my repeated blogs, Ragan’s article along with additional pressure from the industry prompted the House Energy and Commerce Committee to write a letter to MITRE asking for answers on March 30, 2017. When a certain board member brought it up on the CVE Board list, and directly told MITRE that their response should be made public, MITRE did not respond to that mail in a meaningful manner and ultimately never shared their response to Congress with the CVE Board. It is important for you to understand that MITRE operates CVE as they wish and that any notion of oversight or ‘Board’ input is only as it is convenient to them. The board has little to no real influence over many aspects of MITRE’s operation of CVE other than when they set an official vote on a given policy. Additionally, if you point out how such a vote that impacts the industry is not adopted by certain entities such as CNAs, many years down the road? They don’t want to hear about that either. It’s up to the CNAs to actually care, and fortunately some of them care very much. Oh, you know what a CNA is, and why they matter, right? Good!
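To make the decompress-and-parse step concrete, here is a minimal sketch of reading the CSV and bucketing entries by status. The column layout (Name, Status, Description, …) and the “** RESERVED **” / “** REJECT **” description markers are assumptions based on how the MITRE export has historically looked; verify both against the actual header and preamble of the dump you download before trusting any counts.

```python
import csv
import io
from collections import Counter

# A tiny inline sample standing in for the decompressed allitems.csv export.
# The real file is gzipped and has preamble lines before the column header;
# the layout below is an assumption to illustrate the parsing, not gospel.
SAMPLE = """\
Name,Status,Description,References,Phase,Votes,Comments
CVE-2017-0001,Entry,"A vulnerability in an example product ...",MISC:http://example.com,,,
CVE-2017-0002,Candidate,"** RESERVED ** This candidate has been reserved ...",,,,
CVE-2017-0003,Entry,"** REJECT ** Duplicate of CVE-2017-0001 ...",,,,
"""

def status_counts(csv_text):
    """Count entries by rough status, treating the RESERVED / REJECT
    markers in the description as their own buckets."""
    counts = Counter()
    for row in csv.DictReader(io.StringIO(csv_text)):
        desc = row["Description"]
        if desc.startswith("** RESERVED **"):
            counts["reserved"] += 1
        elif desc.startswith("** REJECT **"):
            counts["rejected"] += 1
        else:
            counts["entry"] += 1
    return counts

print(status_counts(SAMPLE))
```

Even this toy version shows why naive row counts mislead: a large share of rows are RESERVED or REJECTed and should not be counted as disclosures.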

OK, so you have your data dump… you better understand the state of CVE and that it is so deficient that Congress is on MITRE’s case. Now, as experienced vulnerability professionals, you know what this means! The rubber-band effect, where MITRE responds quickly and disproportionately to Congress breathing down their neck, and their response impacts the entire CVE ecosystem… and not necessarily in a good way. So welcome to the second half of 2017! Because it took roughly a year for the Congressional oversight and subsequent fallout to strongly influence MITRE. What was their response? It certainly wasn’t to use their abundant taxpayer-funded money to directly improve their own processes. That isn’t how MITRE works, as far as I have seen in my career. Instead, MITRE decided to use their resources to better create / enhance what they call a “federated” CNA system.

First, spend a minute looking at the ‘federated’ term in relation to CVE, then look at the use of that term in the recently edited CNA Rules. Notice how the use of ‘federated’ in their context appears to have grown exponentially? Now check the definition of ‘federated’ [The Free Dictionary, Merriam Webster]. While sufficiently vague, there is a common theme among these definitions. In so many words: “enlist others to do the work for you“. That is, quite simply, what the CNA model is. That is how the CNA model has been meant to work from day one, but it has become the saving grace and the crutch of MITRE as well as the broader CVE ecosystem in the last few months. On the surface this seems like a good plan, as more organizations and even independent researchers can do their own assignments. On the downside, if they don’t follow the CNA rules, assignments can get messy and not as helpful to organizations that rely on CVE data. One thing that you may conclude is that any increase in CVE assignments this year may be due, in part, to the increase of CNAs. Of course, it may be interesting to you that at least two of these CNAs have not made a single assignment, and did not disclose any vulnerabilities in prior years either. It is curious why they would be tapped to become a CNA.

OK, so you have your data dump… you know of one potential reason that there may be an increase in vulnerabilities this year over last, but you also know that it doesn’t necessarily mean there were actually more disclosures. You only know that there are more CVE IDs being assigned than prior years. Next, you have to consider the simple numbers game when it comes to vulnerability statistics. All CVE IDs are created equal, right? Of course not. MITRE has rules for abstracting when it comes to disclosures. Certain criteria will mean a single ID can cover multiple distinct vulnerabilities, and other VDBs may do it differently. It is easy to argue the merit of both approaches, so I don’t believe one is necessarily right or wrong. Instead, different abstraction rules tend to help different types of users. That said, you will typically see MITRE assign a single CVE ID to a group of vulnerabilities where a) it is the same product and b) it is the same type of vulnerability (e.g. XSS). You can see an example in CVE-2017-16881, which covers XSS vulnerabilities in six different Java files. That is how they typically abstract. Search around for a couple minutes and you will find where they break from that abstraction rule. This may be due to the requesting party filling out separate requests and MITRE not adhering to their own rules, such as CVE-2017-15568, CVE-2017-15569, CVE-2017-15570, and CVE-2017-15571. Then you have to consider that while MITRE will largely assign a single ID to multiple scripts vulnerable to one class (e.g. CSRF, SQLi, XSS), their CNAs do not always follow these rules. You can see examples of this with IBM (CVE-2017-1632, CVE-2017-1549) and Cisco (CVE-2017-12356, CVE-2017-12358) who consistently assign in such a manner. If you think these are outliers that have minimal impact on the overall statistics you generate, reconsider that. 
In keeping with their abstraction policy, IBM issued two advisories [#1, #2] covering a total of nine CVE IDs for unspecified XSS issues. If MITRE had assigned per their usual abstraction rules, that would have been a single ID.

OK, so you have your data dump… and now you are aware that parsing that dump means very little. MITRE doesn’t follow their own abstraction rules and their CNAs largely follow different rules. So many hundreds, likely a thousand or more of the IDs you are about to parse, don’t mean the same thing when it comes to the number of distinct vulnerabilities. That is around 10% of the total public CVE IDs issued for 2017! OK, forgetting about that for a minute, now you need to consider what the first part of a CVE ID means. CVE-2017-1234 means what exactly? You might think that 2017 is the year the vulnerability was disclosed, and the 1234 is the unique identifier for that year. Perhaps. Or does 2017 mean the year the vulnerability was found and an ID requested? The answer is yes, to both, sometimes. This is another aspect where, historically, MITRE made an effort to assign based on when the vulnerability was discovered and/or disclosed to a vendor, not when it was published. Under the old guard, that was an important aspect of CVE, as that standard meant more reliable statistics. Under the new guard, basically in the last two years, that standard has disappeared. Not only do they assign a 2017 ID for a vulnerability discovered and disclosed to a vendor in 2016 but published in 2017; they also assign a 2017 ID for a vulnerability discovered and disclosed in 2017. Worse? They are also now assigning 2017 IDs to issues discovered and disclosed in previous years. If you need examples, here are MITRE-assigned (as opposed to CNAs that do the same sometimes) 2017 CVE IDs for vulnerabilities disclosed prior to this year: 2016, 2015, 2014, 2013, 2011, 2010, 2008, 2004, and 2002. Notice the missing years? Some of the CNAs cover those gaps! Note that there are over 200 cases like this, and that is important when you start your stats.
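Tallying IDs by their year prefix is trivial to script, which is exactly why it is such a trap: the prefix no longer maps cleanly to the disclosure year. A hedged sketch (the sample IDs are just a handful drawn from the examples above):

```python
import re
from collections import Counter

def year_prefix_counts(cve_ids):
    """Tally CVE IDs by their year prefix. Note: as discussed above, the
    prefix does not reliably tell you when the issue was disclosed."""
    pat = re.compile(r"^CVE-(\d{4})-\d{4,}$")
    counts = Counter()
    for cid in cve_ids:
        m = pat.match(cid)
        if m:
            counts[m.group(1)] += 1
    return counts

ids = ["CVE-2017-16881", "CVE-2017-15568", "CVE-2016-0001", "CVE-2017-1632"]
print(year_prefix_counts(ids))
```

Any “vulnerabilities per year” chart built this way is really a chart of ID prefixes per year, and the two have diverged badly.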
And we won’t even get into the problem of duplicate CVE assignments that haven’t been rejected, like the first two assignments here (both are invalid assignments and that CNA should know better).

OK, so you have your data dump… you’re ready! Let loose the scripts and analysis! While you do that, I’ll save you some time and math. As of December 24, 2017, there are 18,251 CVE identifiers. 7,436 of them are in RESERVED status, and 133 are REJECTed. As mentioned above, 238 of them have a 2017 ID but were actually disclosed prior to 2017. So a quick bit of math means 18,251 – 7,436 – 133 – 238 = 10,444 entries with 2017 CVE IDs that were disclosed in 2017. This is an important number that will be a bit larger if you parse with Jan 1, 2018 data. This should be your starting point when you look to compare aggregated disclosures, as captured by CVE, to prior years. Based on all of the above, you also now have a considerable list of disclaimers that must be included and explained along with whatever statistics you generate. Because MITRE also stopped using (1) consistent (2) formatting to (3) designate (4) distinct (5) vulnerabilities in a CVE ID, you have no way to parse this data to actually count how many vulnerabilities are present. Finally, know that Risk Based Security’s VulnDB tracked 7,815 distinct vulnerabilities in 2017 that do not have CVE coverage.
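Restating that subtraction as a snippet, so the numbers are easy to re-run against a newer snapshot (the counts are the article's December 24, 2017 figures):

```python
# The article's December 24, 2017 snapshot of 2017-prefixed CVE identifiers.
total = 18_251      # all CVE identifiers in the snapshot
reserved = 7_436    # entries still in RESERVED status
rejected = 133      # REJECTed entries
pre_2017 = 238      # 2017 IDs covering issues actually disclosed before 2017

disclosed_2017 = total - reserved - rejected - pre_2017
print(disclosed_2017)  # 10444
```

Swap in the counts from your own parse of a January 1, 2018 (or later) dump and the baseline figure will be somewhat larger.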

Cliff notes? The CVE dataset does not allow you to determine how many vulnerabilities were disclosed in 2017. Hopefully this information helps with your article on vulnerability statistics!

John Thomas Draper: Setting the Record Straight re: Blue Box

The tl;dr cliffnotes: John Draper did not invent the Blue Box.

In April of 2015, several years after Phil Lapsley published “Exploding the Phone” giving a detailed history of the early days of phreaking, I wrote a blog largely based on that book to clear up long-standing rumors and mistakes in a variety of publications. John Draper, despite his reputation, had not been the first to discover the whistle in Cap’n Crunch cereal boxes in the late 1960s. Recently, an article by Kevin Collier stated that Draper “invented the ‘Little Blue Box,’ an electronic device to better imitate the signal. In 1971, Draper showed his design to two fans, Jobs and Wozniak, who, with Draper’s blessing, began selling an improved version.”

Other articles and publications have varying takes on this, some more neutral and accurate, some even more outlandish. For example, recent articles covering John Draper’s sexual misconduct mention his history and why he is well-known. Ars Technica says that he “helped popularize the ‘Little Blue Box'” and the BBC says he “went on to create a ‘blue box’ that generated other tones“. In the case of Ars Technica, that is certainly accurate historically. In the case of BBC, the wording may be taken by some to mean that he created it, as in he was the first to do so. Another example of wording that implies Draper was the first can be seen in a Computer World article from 2011, which says he “then built the phone phreaking tool called blue box that made free calls by making phone calls appear to be toll-free 800-number calls.” Interestingly, to me at least, Wikipedia gives a general history of the device, but does not definitively say who invented it.

Perhaps worse, books about computer crime and security get it wrong, and worse than wrong. In “Cybercrime: Investigating High-Technology Computer Crime” by Robert Moore, he clearly states the blue box was “invented by John Draper“. Perhaps the worst example I have seen is in the book “Mobile Malware Attacks and Defense” by Ken Dunham in which he attributes not only the blue box to Draper, but also all of “telephone hacking” when he built it.

Like my blog two years ago, I turn back to ‘Exploding the Phone‘ by Phil Lapsley, a book that I cannot speak highly enough about. Through his extensive and exhaustive research, along with years of interviews, his history of phreaking is comprehensive and fascinating. By using a few key bits from the book, we can quickly see the real history and origin of the blue box. It also makes it crystal clear that John Draper did not invent the blue box. Like the whistle, he did it years later after friends told or showed him the basics.

Page 51, the start of a chapter titled “Blue Box”, tells the story of a then 18-year-old named Ralph Barclay who read the November 1960 Bell System Technical Journal, which contained an article titled “Signaling Systems for Control of Telephone Switching”. After reading the article, Barclay figured out that it had all of the information required to avoid using a pay phone to make a call, and that it could be done “directly”. By page 56, Lapsley describes how Barclay built his first box over a weekend, in an “unpainted metal enclosure about four inches on a side and perhaps two inches deep.” Barclay realized fairly quickly that he needed the box to do more, and as described on page 57, he built a new box because he “needed multifrequency“. “His new device was housed in a metal box, twelve by seven by three inches, that happened to be painted a lovely shade of blue. Barclay did not know it at the time, but the color of his device’s enclosure would eventually become synonymous with the device itself. The blue box had just been born.” This was in 1960 or 1961 and represents the origin of the blue box.

On page 87, Lapsley tells the story of Louis MacKenzie, who also spotted the vulnerability based on the 1960 Bell Systems article. MacKenzie went to AT&T and offered to tell them how to fix the ‘blue box’ vulnerability, for a price. When AT&T declined, “MacKenzie’s attorney appeared on the CBS evening news, waving around a blue box and talking about the giant flaw in the telephone system.” By that point, advertisements for blue boxes could be found in some magazines, including the January 1964 issue of Popular Electronics. Thanks to online archives, old issues of Popular Electronics are available, including the January 1964 issue! On page 115, we can see the advertisement:

Further along in the history of phreaking, Lapsley covers John Draper’s story related to the blue box. On page 151 it sets the time frame: “Now it was 1969 and he was John Thomas Draper, a twenty-six-year-old civilian.” Page 154 tells the story of when Draper was asked by friends who had already been ‘blue boxing’ by using an electronic organ, to build them a box.

Teresi and Fettgather wanted to know if Draper could build them a multifrequency generator – an MFer, a blue box, a portable electronic gadget that would produce the same pairs of tones they were making with Fettgather’s electronic organ. Draper said he could.

He returned home in a state of shock. “I had to build a blue box,” Draper recalls. And that night he did. It was a crude first effort that was difficult to use. It had seven switches: one for 2,600 Hz and six to generate the tones that made up multifrequency digits.

Draper’s first blue box was built in 1969, around eight years after Barclay had built his first, unpainted box and then the second one that was actually “a lovely shade of blue“, giving the phreaking tool its iconic name.


To further set the record straight, Lapsley tells the story (p220 – 221) of Steve Wozniak, who “had his [blue box] design worked out” and “was particularly proud of a clever trick he used to keep the power consumption down so the battery would last longer” in 1972. After Wozniak had built his own blue box and refined it, he and Jobs then met John Draper for the first time. While the three traded “blue boxing techniques and circuit designs”, Draper did not show them how to do it, did not show them their first box, or introduce them to the concept.

A View Into DEF CON 25 CFP…

First, this post is not sanctioned by DEF CON in any way. I am a member of the CFP team who decided to keep some rudimentary statistics on the submissions this year. I did this to give the team a feel for just how many submissions we got, how many talks we accepted, and primarily to track the way we voted. This greatly assists the powers that be (the amazing Nikita) to more quickly determine which talks are well-received. The longer I kept up the spreadsheet, the more ideas I had for things to track. Other team members said “you should track…”, and I typically did. So this blog is to give some insight into the entire CFP process, with a solid slant on statistics about the submissions.

First, a few basics:

  • DEF CON 25 CFP opened on February 01, 2017
  • DEF CON 25 CFP closed on May 01, 2017
  • 17 talks were submitted after the closing date and were considered for various reasons
  • We received 536 submissions
  • Three of the submissions were retracted by the end of CFP
  • For comparison, BlackHat received 1,007 submissions this year

Next, who are we? There were technically 31 DC CFP reviewers this year, and you can read their fun profiles now (mouse over stuff here and there, call it an Easter egg)! Ten of them are considered ‘specialty reviewers’, who typically review talks on a very specific topic such as ‘social engineering’ or ‘legal’. These are generally topics where the submissions are either too numerous and potentially murky to figure out if they are worth accepting (social engineering), or a topic that most of InfoSec aren’t really experts on, even when some of us are the #1 armchair lawyer in InfoSec. The specialty reviewers are usually expected to review only their topic, while a few are open to reviewing multiple topics. That means there are 21 reviewers who are expected to review ‘as many talks as you can’, understanding that we may DEFER on a given submission if we feel it is out of our wheelhouse, and remembering that this is extremely time-consuming and we all have day jobs. Some of us have night jobs, and some of us have social lives (not me).

Every year we come up short on reviewers who are truly qualified to give solid feedback on a given topic. This year DC CFP put out a call for more volunteers and we hit a bit of gold, getting several new reviewers who are quality and put in a crazy amount of time. Next year? We know there are topics we need help on, so if you are sharp, kind of special(ty), or the top of your game in a popular field… come join us. I can’t stress how important this is. Instead of just working on a talk or doing a thing, you have the ability to help influence the presentations given at a conference with some 20,000+ attendees. That is a lot of power, a lot of influence, and the potential to do a lot of good. Personally, that is why I still sacrifice the incredible time I do.

Shout outs! The only way to start this paragraph is to call out Nikita for handling almost all CFP submission-related emails. Incoming submissions, replies saying “you didn’t follow directions”, second attempts, replies saying “no really you ‘brilliant hacker’, you didn’t read our guidelines”, posting them to the CFP platform, watching for the CFP team to say “I have questions” and us largely forgetting to flag it back to her, her following up with the submitter, repeating several times in some cases, posting their replies, looking for the CFP team to ask more questions… hopefully you get the picture. The amount of work she fields in a three-month span, just related to CFP, is insane. I say that as someone who has worked more than 80 hours a week in this industry for the last twenty years. Oh, did I mention that she also voted on 60% of the talks? Meanwhile, five ‘full’ reviewers voted on fewer talks than she did.

A plea! If you didn’t see the numerous Tweets and requests to get your talks in early, I cannot emphasize enough how much it benefits you, more than us. When a talk comes in during the first few weeks, it gives us plenty of time to not only review and ask questions, but to give feedback in the way of suggestions. In some cases, one of the team will break away from the board and work with the submitter to improve their submission. This year, I did that once with someone whose original two submissions garnered a single yes vote. After working with them and giving feedback on how to combine the talks and home in on the areas of interest, the re-submission received 12 yes votes and zero no votes. In an ideal world, that would happen for every submission, but a significant number of talks are submitted in the last two days.

Meaningless numbers! Because our industry loves to work with statistics that they don’t fully understand or have little meaning without serious caveat and disclaimer (PPT), let me throw out a few. For the 536 submissions we received, the CFP team voted yes 1,223 times, no 3,555 times, maybe 186 times, deferred 945 times, and abstained 54 times. Again, we defer if we feel that a topic is not one we can fairly judge based on our expertise and rely on the rest of the team to review. We abstain when there is a potential conflict of interest: if we work with the submitter, we contributed to the submission, or have a negative personal past with the submitter.
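As a sanity check on those tallies (my own arithmetic, not official CFP output), the totals work out like this:

```python
# Vote tallies quoted in the post; sums and averages are my arithmetic.
votes = {"yes": 1223, "no": 3555, "maybe": 186, "defer": 945, "abstain": 54}
submissions = 536

total_ballots = sum(votes.values())         # 5963 ballots cast in all
avg_per_talk = total_ballots / submissions  # ~11.1 reviewer votes per talk
no_to_yes = votes["no"] / votes["yes"]      # roughly 2.9 no votes per yes
```

Which is to say each submission drew about eleven reviewer votes, and no votes outnumbered yes votes nearly three to one.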

Meaningful numbers! We requested feedback from the submitter 125 times and changed our votes 61 times. Answering our questions, being willing to accept our feedback, and working with us to build a better presentation benefits everyone. As Nikita tweeted, more than 60 of the accepted talks were from first-time DEF CON speakers. Given there were ~110 accepted talks (and 422 rejected), that is quite a lot. It is encouraging to see this many new speakers given some of the past submissions from egotistical industry veterans who felt they deserved a speaking slot on the back of a weak submission, simply because of “do you know who I am?!”

More meaningful numbers! Of the 536 submissions, 185 (34.77%) said they would release a new tool. Only 56 (10.53%) of those submissions said they would release a new exploit, and some of those claims were questionable. It is common for people submitting to DEF CON to also submit to BlackHat and/or BSidesLV. This year, 218 (40.98%) of those submissions were also submitted to BlackHat and 65 (12.22%) of them were also submitted to BSidesLV. For various reasons, often around the ability to get to Las Vegas, some submitting to BlackHat will submit to DEF CON but say that acceptance at DEF CON is contingent upon acceptance at BlackHat. This year, 36 (6.77%) talks were submitted to us with that caveat. In a somewhat arbitrary categorization, overall I felt that 200 (37.31%) of the talks were ‘red’ (offensive), 88 (16.41%) were ‘blue’ (defensive), and 38 (7.09%) were ‘black’. By ‘black’, I mean that the topic had little merit or benefit for red-teaming and was really in the realm of criminal activity.
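For anyone checking the math, the percentages reproduce with a one-line helper, though they appear to be computed against slightly different bases: the red/blue/black split divides by all 536 submissions, while the tool/exploit/cross-submission figures only reproduce against a base of about 532, presumably excluding retracted or withdrawn talks (my reading, not stated in the post):

```python
def pct(count, total=536):
    """Share of submissions as a percentage, two decimals like the post."""
    return round(100 * count / total, 2)

# Figures that reproduce against all 536 submissions:
red = pct(200)                   # 37.31 — 'red' (offensive) talks
# Figures that only reproduce against a smaller base of ~532:
new_tool = pct(185, total=532)   # 34.77 — talks promising a new tool
```

Nothing sinister there, just a reminder that percentages without a stated denominator are slippery.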

Even more meaningful numbers! Some of the most basic stats that can be generated for your ocular pleasure. First, these are arbitrary categories that were developed as we received submissions. Nothing formal and some talks were hard to classify:

From there, I broke it down further by some topics that aren’t necessarily specific to the red or blue domain. Again, kind of arbitrary and based on seeing the submissions as they came in and note that one talk may have been flagged as more than one topic:

When building a schedule over four days and across five tracks, while considering if it is better to suggest a talk for a village or alternative venue (e.g. Skytalks), Nikita has to play Tetris of sorts based on the accepted talks, the requested time, and the schedule. This is what she had to work with:

One of the more popular questions this year, given the increased awareness and public discussion around diversity in InfoSec, is the gender breakdown for submissions:

Finally, a general picture of the submissions by month. Recall what it looked like for the April breakdown above and you once again get a good idea why we would like more submissions earlier in the process:

Lastly, a quick note on a common perception of InfoSec conferences and talks in general. Given the drastic rise in the number of conferences popping up, there is a saturation that demands more submissions to fill the schedules. That means that veteran speakers can typically shop their talks around or be selective in where they submit based on the venue they find appealing. It also means more new speakers are submitting, which results in a wide range of topics and quality of submissions. That led me to argue the point in this Tweet and remind people that a conference can only work with what is submitted. Personally, I feel that the overall quality of submissions to DEF CON (and a couple other conferences I review for) has gone down this year and last. That means that DEF CON ended up accepting some talks that I personally did not care for.

Bottom line? If you are researching a cool topic, submit a talk on it. Have a unique perspective or done more digging on something? Share your work. Never submitted before? Submit early and let us work with you if you need it. If a security conference is lacking, it is due to the community as much as anything else.