The Attrition DC26 Badge Challenge Post Mortem

This year, which was my final trip to DEF CON, I made up one last round of Attrition DEF CON badges. In prior years they were typically engraved luggage tags, each a bit more specific to that year.

Since #BadgeLife has become a big thing, especially this year as far as I can tell, I decided to go a bit lower rent on the badge material but ‘up the game’ on the content. I did a ‘cipher challenge’, which of course was never meant to be a real challenge. I’m not nearly smart enough for that shit. I literally came up with it in less than a day, didn’t vet it with anyone, and just moved to mock up a badge and print it. Because I am so pro! I also figured anyone who knows me would know not to trust me on anything ‘cipher’ or ‘challenge’, especially ‘cipher challenge’. Unfortunately, and I do feel bad, a handful of badge-holders went down this rabbit hole.

This write-up is for them, to explain just how fast this was put together, and the lessons I learned as well. The CliffsNotes details, as I originally intended them:

  1. https://en.wikipedia.org/wiki/Cirth (hobbit) -> “never trust us”
  2. https://en.wikipedia.org/wiki/Wingdings -> “except this time”
  3. location hint (flamingo hotel) -> “Phoenicopteriformes”
  4. refined location – wildlife habitat lat/long -> 36.11662720392657 / -115.17115294683322
  5. 08/11/2018 @ 3:04am (UTC) Epoch Unix Time -> “1533956647”
  6. Klingon “take proof you were there” -> “pa’ SoH’a’ tob tlhap”
  7. random letters/numbers -> (unsolvable/gibberish)
  8. show Jericho proof (Latin) -> ostende inamabilis sciurus
  9. winner winner chicken dinner -> (icons)

Seems pretty straightforward! Unfortunately, as I found out, a few of these didn’t work out so well, and in surprising ways. Here are the hiccups I didn’t expect.

  • (1) There are multiple Cirth character sets. Pretty minor, but it led to a couple of people saying the translation was off. Worse? The one character that was off fed into another hint and made it more believable. I should have read through the Wikipedia article to notice that, but having grown up as a skilled writer of ‘Tolkien Runic’ (Cirth), I didn’t think about it.
  • (2) Always trust the first hint, never the second!
  • (5) So… Epoch Unix Time is an absolute. You don’t adjust for time zones, because the time is in Coordinated Universal Time (UTC). The Wikipedia entry for UTC confirms it “is not adjusted for daylight saving time”. So my intention of it being Saturday morning at 3:04am was correct. What I didn’t account for was people adjusting for time zones anyway: some converted to Las Vegas’ time zone (Pacific), and others second-guessed it and used my time zone (Mountain). At this point I am vindicated; anyone loitering around flamingos at the Flamingo between ~ 8p – 10p local time was not following the cipher (see the first sketch after this list). Yes, I still feel bad they showed up thinking there was a prize/reward there.
  • (6) I really should have known better here, since Google Translate often mangles simple text when you translate it from one language to another and then back again. I fell into this trap by using the first Klingon translator that Google offered and doing a simple one-way translation. Unfortunately, that same site changed “take proof you were there” drastically, into something involving a cat. I like cats, everyone knows this, so the clue still had some crazy merit. Fortunately for me, one of the badge-holders knows a lot more about Klingon than the online translators do, and gave me a well-deserved verbal berating over the horrible translation. This led me back to that translator, where I pasted “pa’ SoH’a’ tob tlhap” back into it and got, you guessed it… “you take a cat room”. This was a solid break in the intended chain, and a deal breaker for solving the badge. Oops.
  • (7) This line had a simple intention, but it may have been the weirdest in the long run. A bunch of random numbers and letters, with no intended meaning, meant to be the ultimate ‘gotcha’. That way no one could say they solved it, or if they did, I could challenge them on that line. I left this up to the wonderful badge artist, Anushika, who typed in a random string while designing it. Between that and the chosen font, there was even a question over one or two characters. Either way, I thought it served a purpose. One nice lady from Australia (she is nice, despite her DMs irrationally suggesting I not call her that) spent a lot of time on this, maybe more than anyone else. At one point she messaged “Threw it through successive shifts. And the answer it gave me was successive shifts.” (see the second sketch after this list). This was after I reminded her in previous comments that “i’m not really bright. hashed, encrypted, encoded… i get so confused”. No false modesty or deception; math is a religion, and I don’t believe. Ergo, crypto is a foreign language to me for the most part. So maybe that random line had some merit in the math world? Put it through successive shifts, and the answer is more successive shifts. That certainly makes it sound like I was really brilliant in crafting a troll cipher, when I was the farthest thing from it. She kind of spooked me when she told me that and I thought “oh shit, this line has meaning?!” Kind of disappointed that a ‘troll cipher’ isn’t a real thing with a Wikipedia entry!
  • (8) Translation woes again. As someone who took a year of Latin in high school, seriously, and knows about the headache of online translators… I’m not sure how I got burned twice in one badge. I translated “show squirrel proof”, since I knew it wouldn’t handle “jericho”, and got “ostende inamabilis sciurus”. This is where it gets really weird. Someone messaged while in Vegas that the translation was off, and I went to check again, using Google Translate again. Run it through yourself and you will see the problem. The translation changed between making the badge and someone translating it after receiving the badge, a span of around 30 – 40 days. So now it became “inamabilis sciurus ostendit probationem”. This caused a problem because the first translation now reverses as “show squirrel”, which is missing a crucial word. The updated translation, when reversed, comes back as “squirrel proof shows”, which is a bit closer to the intent. Ugh. For fun, since we had to pick ‘Latin’ nicknames in my Latin class, I chose Sylvester. #JerichoTrivia
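
For anyone who wants to sanity-check the epoch value, here is a minimal Python sketch (mine, written for this post-mortem, not part of the challenge) that converts 1533956647 straight to UTC and then, for contrast, applies the Pacific and Mountain offsets that tripped people up. The fixed offsets are an assumption for simplicity; in August, daylight saving time is in effect, so PDT is UTC-7 and MDT is UTC-6.

```python
from datetime import datetime, timezone, timedelta

EPOCH = 1533956647  # the value printed on the badge

# Unix epoch time is defined against UTC; no time zone or DST math applies.
utc = datetime.fromtimestamp(EPOCH, tz=timezone.utc)
print(utc.isoformat())  # 2018-08-11T03:04:07+00:00 -> Saturday, ~3:04am UTC

# What some badge-holders did instead: shift to a local zone first.
pacific = utc.astimezone(timezone(timedelta(hours=-7)))   # PDT (Las Vegas)
mountain = utc.astimezone(timezone(timedelta(hours=-6)))  # MDT (my time zone)
print(pacific.isoformat())   # 2018-08-10T20:04:07-07:00 -> Friday, ~8pm in Vegas
print(mountain.isoformat())  # 2018-08-10T21:04:07-06:00 -> Friday, ~9pm Mountain
```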
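While I’m at it, here is roughly what “successive shifts” means in practice: a minimal sketch (my own guess at the approach, not her actual tooling) that runs a string through all 25 Caesar shifts and prints each candidate. The input string below is made up for illustration, not the actual badge line; run the real thing through it and, as intended, you just get 25 more piles of gibberish.

```python
def caesar_shifts(text: str):
    """Yield every Caesar shift of the alphabetic characters in text."""
    for shift in range(1, 26):
        shifted = []
        for ch in text:
            if ch.isalpha():
                base = ord('A') if ch.isupper() else ord('a')
                shifted.append(chr((ord(ch) - base + shift) % 26 + base))
            else:
                shifted.append(ch)  # digits and punctuation pass through unchanged
        yield shift, ''.join(shifted)

# Hypothetical gibberish, not the actual string from the badge.
for shift, candidate in caesar_shifts("x7qk m3zr v9tb"):
    print(f"shift {shift:2d}: {candidate}")
```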

So there you go, badge-holders and adventure-seekers! I sincerely apologize, to a degree, for any hardship you went through, because that first line really is gospel when it comes to me, attrition, and anything remotely close to a challenge. Years prior, I wanted to do a luggage tag badge like the ones from earlier years, but cut holes in it in a Goonies sort of way, along with instructions to stand in the middle of Las Vegas Blvd to line up three landmarks and figure out where the party was. After this badge challenge? Probably for the best I didn’t, or I bet I would have gotten a few people run over. On the upside, you got to spend time with flamingos, who are largely more bearable than the average DEF CON attendee.


DEF CON 26 CFP Basic Statistics and Observations

This is the second blog in a series about DEF CON 26 CFP. The first:

A Look Into the DEF CON CFP Review Board (we’re actually really boring people)


First, this post is not sanctioned by DEF CON in any way. I am a member of the CFP team who decided to keep some rudimentary statistics on the submissions this year, as I did last year. I did this to give the team a feel for just how many submissions we got, how many talks we accepted, and primarily to track the way we voted. This greatly assists the powers that be (the amazing Nikita) to more quickly determine which talks are well-received. Due to time constraints, I was not able to track as much metadata, so this blog will be shorter than last year’s.

First, a few bits of information:

  • DEF CON 26 CFP opened on January 16, 2018
  • DEF CON 26 CFP closed on May 01, 2018
  • Two talks were submitted after closing date and were considered for various reasons
  • We received 551 submissions (up from 536 last year)
  • Four of the submissions were withdrawn by the submitters by the end of CFP
  • BlackHat received around 1,000 submissions this year for comparison

A recurring theme in these blogs and our Tweets throughout the CFP process is strong encouragement to submit early. While we did get a share of submissions in January and February, you can still see the huge spike we experienced in April (a majority arriving the day before CFP closed) and May (on the day it closed). The two weeks between the end of CFP and the time when acceptance/rejection letters are sent out become stressful, as we’re under deadline to review talks, try to get last-minute feedback when we can, and make final decisions.

Of the 551 submissions, 107 were accepted (19.4%). There were 388 unique male submitters, 39 unique female submitters, and 14 anonymous submissions (note: we only catalog based on the gender, if known, of the primary speaker). Of those 14 anonymous submissions, 3 were trivially identified because the submitter didn’t scrub their submission properly or submitted work that had been presented before and was caught with a quick Google or Bing search.
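
As an aside, “didn’t scrub their submission properly” usually just means leftover document metadata. Here is a minimal sketch of the idea (my own illustration, not part of our review tooling): a .docx file is a zip archive, and the author’s name frequently sits in docProps/core.xml, so a few lines of Python will surface it. The file name below is hypothetical.

```python
import re
import zipfile

def docx_author(path: str):
    """Return the dc:creator (author) value from a .docx file, if present."""
    with zipfile.ZipFile(path) as zf:
        core = zf.read("docProps/core.xml").decode("utf-8", errors="replace")
    match = re.search(r"<dc:creator>(.*?)</dc:creator>", core, re.DOTALL)
    return match.group(1) if match else None

# Hypothetical file name; any Word-format attachment works the same way.
print(docx_author("anonymous_submission.docx"))
```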

Of the 551 submissions, 173 (31.40%) said they would release a new tool. 77 (13.97%) said they would release an exploit, up from 56 (10.53%) last year. Of all the submissions, 216 (39.20%) were also submitted to Black Hat and 51 (9.26%) said that speaking at DEF CON was contingent upon Black Hat accepting their talk. Only 73 (13.25%) submissions were also submitted to BSidesLV. Of the 551 submissions, 122 of the speakers had presented before at DEF CON, and an additional 28 had presented before at a DC Village or Workshop.

Unfortunately, time did not permit me to properly track ‘red’ vs ‘blue’ vs ‘black’ submissions, nor categorize the talks. That said, 11 talks were about ‘Artificial Intelligence’ and/or ‘Machine Learning’, even if some of the submitters didn’t quite seem to know what those terms really mean. Ten submissions were on the topic of, or heavily related to, blockchain. Eight submissions came with ultra-creative titles that included “for fun and profit”, four included “all your $blah belong to us”, two used “pwned” in the title, and fortunately for our sanity, none wanted to make $blah great again.


That’s it! I realize this is a bit more brief than last year’s, but the time requirement of reviewing all of the submissions is crazy. Finding extra time to maintain the sheet is rough, and generating even more statistics or tracking additional metadata just can’t happen sometimes. Fortunately for me, this year Highwiz stepped up and did an incredible amount of work filling in data, especially while I was lost in the mountains for a few days.

A Look Into the DEF CON CFP Review Board (we’re actually really boring people)

Written by Highwiz with contributions and editing from Jericho

Being on the DEF CON CFP Review Board can be as exciting as {something}; as frustrating as {something}; as thought provoking as {something}; and as enriching as {something}. It’s like Mad Libs; I hope you’ve filled in this section with something good.

Each year, somewhere between 16 and 20 other reviewers and I take on the responsibility of selecting the best possible talks for DEF CON.

Oh, I should also apologize in advance as you read this first entry in the CFP Blog series. I apologize because I am not known for my brevity. In the “written word” and especially when it comes to something I’m passionate about, I tend to be wordy AF. [See, like that sentence: Could have just said “Hope you enjoy”, but nope – not me…].

I do genuinely hope that someone finds these blog postings helpful and that it will allow submitters (or potential submitters) some insight into the way we work so as to better prepare their submissions in the future.

In its original form, this post was about as dry as some of the white papers that were included in several submissions. Speaking of which, white papers help tremendously when we’re reviewing your submissions, and if you include one, you’re already ahead of the pack. Sadly, however, while white papers do indeed help your chances during the CFP, they make for really shitty blog posts.

While we’re on this wild tangent of things that are related to the CFP Board but not actually part of the CFP process itself, let’s talk about the term “CFP”. Above, I mentioned white papers; while the term CFP originally did mean “Call For Papers”, it doesn’t anymore. Most people don’t submit papers. When you think about the term CFP, you should really think of it as Call For Presentations. I know I’m not the first person to say that and I definitely won’t be the last, but still, it bears saying.

Alright, back to the topic at hand…

This year, the DEF CON Call for Presentations (CFP) Review Board was made up of 16 “General Reviewers”, six “Special Reviewers”, and two members of the DEF CON staff.

The DC CFP process is not “blind”, meaning reviewers can see each other’s votes, and we can see who submitted each talk unless the submitter specifically opts to stay anonymous (and properly scrubs their submission). There are merits to both open review and blind review, but we’ve found that an open review significantly helps our process, as there is a lot of good discussion about each individual submission. One reviewer may spend considerable time digging into the topic, researching prior disclosures or talks along the same lines, or offer their personal in-depth knowledge, which typically helps several others better understand the topic and state of research.

If you submitted a talk to DEF CON this year, then all of the General Reviewers most likely reviewed and discussed your talk. While these reviewers tend to agree on many talks, there are also submissions that cause arguments and intense, heated discussions. Most of the review board members have a very extensive vocabulary and seem to enjoy finding new and creative ways to use the word “fuck” in a sentence (both in the positive and negative). Though, while the topic of vocabulary is at hand, let me say this to my fellow review board members: y’all motherfuckers need to find a new word besides “pedestrian”. I’ll leave it at that.

As reviewers, every year we’re often left wondering why certain people have chosen to submit to DEF CON and whether or not they actually understand what type of conference it is. A prevailing sentiment on many submissions is “This is not a DEF CON talk”. While the content may be of significant quality, the question we often ask ourselves is “is this talk right for DEF CON?”. Sometimes the answer is that while it would be good at a developer conference, RSA, or BlackHat, it simply wouldn’t be right on a main stage at DEF CON. DEF CON is, or at least it strives to be, a hacker con first and foremost.

TL;DR : This is DEF CON, please bring your “A” Game.

The Time Commitment

Oftentimes people ask to be on the CFP Review Board because it is an honor and privilege to be among the group that selects the presentations for DEF CON… It’s also a giant time suck, which people sometimes fail to realize (or believe us when we tell them).

Now for the more formalized explanation of that so my “editor” doesn’t get pissed:

It’s been stated before, but being on the DEF CON CFP Review Board is an enormous time commitment. In the first few months, the average time a reviewer spends on talks is ten to twenty hours a week, depending on the volume of talks received. In the last two weeks, when everyone is rushing to submit before CFP closes, the time required rises to forty or more hours a week. The DEF CON CFP Review Board, like many other CFP Review Boards, is an entirely volunteer activity that many times becomes a second job. This is one of the big reasons we encourage people to submit earlier, and not wait until the last minute. Total time spent for a General Reviewer is probably in the range of 280 working hours.

The rule of the board for a General Reviewer is to do as many talks as you feel you are able to, but hit at least 70% of the talks. In practice, and as far as the other general reviewers are concerned, you should be getting as close as you can to 100% of the talks. If the other reviewers feel that you’re not pulling your weight (so to speak) they will call you out. We’re like the Fremen in that sense, crysknife and all. In less nerdy terms, no one wants to get shanked in the exercise yard because they didn’t review enough talks.

The topic of the exercise yard leads us into our next area, the prison guards… I mean, the DEF CON CFP Review Board staff.

The DEF CON CFP Review Board Staff

Nikita and Alex are the foundation of the Review Process. They post the talks, interact with the submitters, deal with the reviewers when we’re cranky and obstinate (we can really be bitches sometimes), reshape the feedback given by the reviewers and transmutate those turds into flowers and candy before the submitters view it. They are the fecal alchemists and without them, the process would not work.

Similarly, there is the non-official review board staff member in the form of Jericho who tracks our submissions, votes, and other information. He categorizes the talks for us while providing amazing feedback and insight into anything vulnerability disclosure related. Like Nikita and Alex, Jericho is an integral part of making the DEF CON CFP Review Board function and prosper.

The fourth person (another unofficial one) who deserves a great amount of credit for making sure that people keep up with their reviewing is our own special CFP Vocal Antagonizer in the form of Roamer. If a review board member is slacking, they can be certain that Roamer will “gently” remind them that they need to review talks. This is an important role, as we want as many of the review board members as possible to provide feedback and vote on each talk. This ensures more reviewers see it and provide commentary based on their diverse backgrounds. In other words, Roamer is like a shot caller; if you don’t sack up and do the tasks assigned to you, you’re going to wake up with a horse head in your bed.

Both Jericho and Roamer are inspiring examples of what it means to truly care about the hacker and DEF CON communities. On a personal note, it’s also pretty cool that I get to call Nikita, Jericho, and Roamer, these amazing people, my friends. I say that because after all these years, they still talk to me, even though I can be a bit dramatic.

While we’re on the topic of dramatic people, let’s talk about our special reviewers. I’m just kidding, where drama is concerned all of them pale in comparison to yours truly.    

Special Reviewers

Our special reviewers are subject matter experts who specifically comment and give their feedback on talks in their “wheelhouse”. There are many talks where the “general reviewers” simply don’t feel fully qualified enough to make the necessary judgement of a “yes” or “no” vote. Sure, they are familiar with a topic to some degree, but just don’t spend their lives immersed in that corner of security.

Everyone in InfoSec “knows” about pen-testing and social engineering for example. However, unless that is their primary tradecraft and they have been doing it for a decade or more, they may not be keeping up with the latest tools and techniques. In such cases, the general reviewers will typically “defer” to the subject matter experts. The input provided by the Special Reviewers this year has been invaluable in helping shape what DEF CON 26 will be.

Discussions

The DEF CON CFP Review Board has a unique style in how they (we) review talks in contrast to many other CFP Review Boards. There is oftentimes a lot of discussion that goes on about individual talks, and it plays a key part in the process. The reviewers do not live in a vacuum when reviewing the individual talks; rather, they are encouraged to communicate with one another openly on the system so as to provide a higher quality of talk selection. Sometimes the discussions may turn heated, but at the end of the day it does improve the final selection. “Heated” is a really nice term. It’s a really nice term because when we say it, you may think we mean something like a “hot summer day”, when in fact we mean the fires of Mordor, or whatever is causing a burning sensation in the nether regions.

That being said, on the Review Board it’s very important to be open to new ideas and perspectives, which such discussions strongly facilitate. I don’t think the DC CFP Review Board would work nearly as well under any other type of system. Conversely, what works for “us” may not necessarily work as well for other CFP Review Boards.

How do I get on the CFP Review Board?

First, are you really sure you want to? Do you really have the time? The numbers we posted before about the time commitment weren’t an attempt to oversell things (in fact they are probably conservative estimates). As a review board member you will be dedicating that much time to reviewing talks over a three to five month period, with the final weeks being absolutely brutal. And if you don’t? You’ll find yourself being called out or greenlit by a shot caller. And the best outcome there is that you may not be asked back the following year. Remember, you are helping to shape the tone, feel, and content of DEF CON, the longest-running hacker convention, now attended by over 25,000 people. That is an incredible responsibility, and you are helping ensure that attendees get value from the talks they attend.

Still want to do it though? OK. Talk to some CFP Review Board members at DEF CON 26. That’s it… just do that. Judge for yourself based on how they describe it, the good and the bad. If any of them describe a breezy stroll through a nice park with flowers and chipmunks, walk away. They aren’t telling you the whole story.

Why don’t you have a CFP Review Board Panel at DEF CON?

First, it would be super boring. Invariably the attendees are going to ask us a lot of questions that we can’t answer about specific submissions. While we may “vague” tweet or generally answer a question, we can’t and won’t provide specifics on submitted talks beyond what Nikita and Alex have provided as official feedback, and then only to the person that submitted the talk. So the panel would consist of a lot of jokes, high-level “CFP tips”, and not much more value. If you really want to “know” more about the CFP, just find out where some of us hang out at DEF CON.

Before we end this first entry in this series of three or four posts, I would like to take the opportunity to thank you for reading along thus far. Jericho and I worked on this entry, but he shouldn’t be held responsible for my tangents, side notes, and improper use of some punctuation.

Credit Roll

First and foremost, we really need to thank those people around us (friends, family, significant others) who deal with us during the three-to-five-month-a-year process of reviewing talks. They truly are the unsung heroes. They know we can’t go into specifics, but they’re there to listen to us bitch and moan about “that talk”. They understand when we forgo plans to hang out with them during this endeavor, or when we’re not in bed until three hours past normal time. Without their support, we could never accomplish the task laid out in front of us.

General Reviewers

Jericho Roamer HighWiz Shaggy
bcrypt Vyrus Zoz Claviger
Suggy Wiseacre Secbarbie PWCrack
KingTuna Medic Dead Addict ZFasel

Special Reviewers

Andrea Matwyshyn w0nk Malware Unicorn
Snow Kodor Grifter

DEF CON Staff

Nikita Alex

DEF CON Founder

The Dark Tangent

Shoutouts

We’d also like to give a big shout out to the Workshops Review Board. While they are a separate entity from the CFP Review Board, their contributions to DEF CON are just as important.

Tottenkoph Munin Sethalump DaKahuna
CyberSulu Kodor SinderzNAshes Wiseacre
HighWiz

In part two of the series we will be covering the statistics, because that’s the type of thing that makes some of us (but especially Jericho) super wet.

With part three will come our thoughts and comments on the submission form and the questions we ask.

Part four will be some lessons we’ve learned along the way as well as ideas for improving things in the future.

One last thing, Jericho is totally the Jimmy McNulty of the CFP Review Board.


Continue reading the second blog in this series, “DEF CON 26 CFP Basic Statistics and Observations“.

A View Into DEF CON 25 CFP…

First, this post is not sanctioned by DEF CON in any way. I am a member of the CFP team who decided to keep some rudimentary statistics on the submissions this year. I did this to give the team a feel for just how many submissions we got, how many talks we accepted, and primarily to track the way we voted. This greatly assists the powers that be (the amazing Nikita) to more quickly determine which talks are well-received. Every day that I kept up on the spreadsheet, I had more ideas on what to track. Other team members said “you should track…”, and I typically did. So this blog is to give some insight into the entire CFP process, with a solid slant on statistics about the submissions.

First, a few basics:

  • DEF CON 25 CFP opened on February 01, 2017
  • DEF CON 25 CFP closed on May 01, 2017
  • 17 talks were submitted after closing date and were considered for various reasons
  • We received 536 submissions
  • Three of the submissions were retracted by the end of CFP
  • BlackHat received 1,007 submissions this year for comparison

Next, who are we? There were technically 31 DC CFP reviewers this year, and you can read their fun profiles now (mouse over stuff here and there, call it an Easter egg)! Ten of them are considered ‘specialty reviewers’, who typically review talks on a very specific topic such as ‘social engineering’ or ‘legal’. These are generally topics where the submissions are either too numerous and potentially murky to figure out if they are worth accepting (social engineering), or a topic that most of InfoSec aren’t really experts on, even when some of us are the #1 armchair lawyer in InfoSec. The specialty reviewers are usually expected to review only their topic, while a few are open to reviewing multiple topics. That means there are 21 reviewers who are expected to review ‘as many talks as you can’, understanding that we may DEFER on a given submission if we feel it is out of our wheelhouse, and remembering that this is extremely time-consuming and we all have day jobs. Some of us have night jobs, and some of us have social lives (not me).

Every year we come up short on reviewers who are truly qualified to give solid feedback on a given topic. This year DC CFP put out a call for more volunteers and we hit a bit of gold, getting several new reviewers who do quality work and put in a crazy amount of time. Next year? We know there are topics we need help on, so if you are sharp, kind of special(ty), or at the top of your game in a popular field… come join us. I can’t stress how important this is. Instead of just working on a talk or doing a thing, you have the ability to help influence the presentations given at a conference with some 20,000+ attendees. That is a lot of power, a lot of influence, and the potential to do a lot of good. Personally, that is why I still sacrifice the incredible time I do.

Shout outs! The only way to start this paragraph is to call out Nikita for handling almost all CFP submission related emails. Incoming submissions, replies saying “you didn’t follow directions”, second attempts, replies saying “no really, you ‘brilliant hacker’, you didn’t read our guidelines”, posting them to the CFP platform, watching for the CFP team to say “I have questions” and us largely forgetting to flag it back to her, her following up with the submitter, repeating several times in some cases, posting their replies, looking for the CFP team to ask more questions… hopefully you get the picture. The amount of work she fields in a three-month span, just related to CFP, is insane. I say that as someone who has worked more than 80 hours a week in this industry for the last twenty years. Oh, did I mention that she also voted on 60% of the talks? Meanwhile, five ‘full’ reviewers voted on fewer talks than she did.

A plea! If you didn’t see the numerous Tweets and requests to get your talks in early, I cannot emphasize enough how much it benefits you, more than us. When a talk comes in during the first few weeks, it gives us plenty of time to not only review and ask questions, but to give feedback in the way of suggestions. In some cases, one of the team will break away from the board and work with the submitter to improve their submission. This year, I did that once with someone whose original two submissions garnered a single yes vote. After working with them and giving feedback on how to combine the talks and home in on the areas of interest, the re-submission received 12 yes votes and zero no votes. In an ideal world, that would happen for every submission, but a significant number of talks are submitted in the last two days.

Meaningless numbers! Because our industry loves to work with statistics that they don’t fully understand or have little meaning without serious caveat and disclaimer (PPT), let me throw out a few. For the 536 submissions we received, the CFP team voted yes 1,223 times, no 3,555 times, maybe 186 times, deferred 945 times, and abstained 54 times. Again, we defer if we feel that a topic is not one we can fairly judge based on our expertise and rely on the rest of the team to review. We abstain when there is a potential conflict of interest: if we work with the submitter, we contributed to the submission, or have a negative personal past with the submitter.

Meaningful numbers! We requested feedback from the submitter 125 times and changed our votes 61 times. Working with us to answer our questions, being willing to accept our feedback, and working with us to build a better presentation benefits everyone. As Nikita tweeted, more than 60 of the accepted talks were from first-time DEF CON speakers. Given there were ~ 110 accepted talks (and 422 rejected), that is quite a lot. It is encouraging to see this many new speakers, given some of the past submissions from egotistical industry veterans who felt they deserved a speaking slot on the back of a weak submission, simply because of “do you know who I am?!”

More meaningful numbers! Of the 536 submissions, 185 (34.77%) said they would release a new tool. Only 56 (10.53%) of those submissions said they would release a new exploit, and some of those claims were questionable. It is common for people submitting to DEF CON to also submit to BlackHat and/or BSidesLV. This year, 218 (40.98%) of those submissions were also submitted to BlackHat and 65 (12.22%) of them were also submitted to BSidesLV. For various reasons, often around the ability to get to Las Vegas, some people submitting to BlackHat will submit to DEF CON but say that speaking at DEF CON is contingent upon acceptance at BlackHat. This year, 36 (6.77%) talks were submitted to us with that caveat. In a somewhat arbitrary categorization, overall I felt that 200 (37.31%) of the talks were ‘red’ (offensive), 88 (16.41%) were ‘blue’ (defensive), and 38 (7.09%) were ‘black’. By ‘black’, I mean that the topic really had little merit or benefit for red-teaming and was really in the realm of criminals.

Even more meaningful numbers! Some of the most basic stats that can be generated for your ocular pleasure. First, these are arbitrary categories that were developed as we received submissions. Nothing formal and some talks were hard to classify:

From there, I broke it down further by some topics that aren’t necessarily specific to the red or blue domain. Again, this is kind of arbitrary and based on seeing the submissions as they came in; note that one talk may have been flagged as more than one topic:

When building a schedule over four days and across five tracks, while considering if it is better to suggest a talk for a village or alternative venue (e.g. Skytalks), Nikita has to play Tetris of sorts based on the accepted talks, the requested time, and the schedule. This is what she had to work with:

One of the more popular questions this year, after increased awareness and public discussion around diversity in InfoSec, is the gender breakdown for submissions:

Finally, a general picture of the submissions by month. Recall what it looked like for the April breakdown above and you once again get a good idea why we would like more submissions earlier in the process:

Finally, a quick note on a common perception of InfoSec conferences and talks in general. Given the drastic rise in the number of conferences popping up, there is a saturation that demands more submissions to fill the schedules. That means veteran speakers can typically shop their talks around or be selective in where they submit based on the venue they find appealing. It also means more new speakers are submitting, which results in a wide range of topics and quality of submissions. That led me to argue against a particular Tweet and remind people that a conference can only work with what is submitted. Personally, I feel that the overall quality of submissions to DEF CON (and a couple other conferences I review for) has gone down this year and last. That means DEF CON ended up accepting some talks that I personally did not care for.

Bottom line? If you are researching a cool topic, submit a talk on it. Have a unique perspective or done more digging on something? Share your work. Never submitted before? Submit early and let us work with you if you need it. If a security conference is lacking, it is due to the community as much as anything else.

Stalking me in Las Vegas…


I fly out to Las Vegas tomorrow for the trifecta of summer security conventions held in oppressive heat: BlackHat Briefings, BSides Las Vegas, and DEF CON 21. If you want to catch up to talk about attrition.org, OSVDB, or anything vulnerability related, look for the disgruntled person likely wearing a squirrel-themed shirt. If you would like to stalk me down to catch up, chat about anything, or shank me, this friendly guide will assist you:

Wednesday

I will be at BSides in the morning to catch a few talks, mingle, and generally harass the BSides staff. Because they didn’t have enough going on putting together the entire convention. In the early afternoon I will make my way to BlackHat to register, visit a few vendor booths, and then give a presentation at 3:30 with Steve Christey in the Palace 1 room. The talk is called “Buying Into the Bias: Why Vulnerability Statistics Suck”. Hopefully we will demonstrate how vulnerability statistics have sucked throughout the years, ways to improve them, and more. After the talk and any Q&A, I hope to stick around for the Pwnie awards and BarCon, before heading to either the Adobe or Tenable party.

Thursday

I will be at BSides almost all day and hope to catch a variety of talks that sound interesting. Later in the evening I will be in various places for a dinner meeting, and then may swing by the Microsoft party to harass their security people before ending up back at BSides for the after party.

Friday

I will be at Defcon and Skytalks, likely lingering around the Skytalks room if nothing else is going on. At 8P, the Defcon Documentary is showing, as well as Hacker Pyramid / Hacker Jeopardy.

Saturday

I will be at Defcon and Skytalks until 3P, when I will present the Defcon Recognize Awards with Russ in Track 3. Come see who the charlatan of the year is, among other categories! That evening at 8P is the screening of the new movie “Reality Hackers” in Track 2. After that, probably doing vile things at the 303 party.

Sunday

If you haven’t found me by this time, you failed.

Building a better InfoSec conference…

There is an abundance of information security conferences out there. While the industry is drowning in these conferences, a lot of them produce more noise than value. Increasingly, people are realizing that even a moderate security conference is a profit center. We need fewer conferences that are more topical and offer more value, whatever the price. Beyond the sheer number of conferences, most of them are doing the same exact thing. There is a serious lack of creativity and forward-thinking. Only in the last few years have a couple of conferences dedicated entire tracks to defensive security.

I have been attending security conferences for almost 20 years now. Based on my experience, as well as being on several CFP review teams, there are many aspects I want to see in the future.

  • More talks or entire tracks dedicated to sociology and the human sciences as they relate to the security world. We see this from time to time, usually in passing regarding security awareness or phishing. Attacker profiling is a stronger use, but most talks are over-simplified and don’t cover new ground.
  • Talks on law and policy are more frequent lately, but they don’t seem to do much good. In the recent DEF CON 21 CFP review, we received many talks that focused on law and/or policy. One trend emerged among all of them: no practical information on how the average person can truly make a difference. Sure, write your congress critters, stay informed, and all the usual advice. That hasn’t worked in the past. What else do you have?
  • Heckling should be encouraged. Several years ago, DEF CON changed to where questions or comments were not allowed during talks. In the years prior, if a speaker said something that was not factual, you could quickly call them on it. It gave the audience a chance to see the error with minimal interruption. Now, questions are done after the talk, in a separate room, away from the audience. If a speaker says something inaccurate, the audience leaves thinking it was factual. This is a disservice to the attendees. Speakers must be kept honest.
  • Continuing that theme, all talks should have a mandatory 5 minute Q&A session at the least. It is rare that a speaker is so decisive and thorough as to leave no questions. If an audience member wants to debate a point or call them on bullshit, they get an opportunity to do just that.
  • More lightning talks, with a twist! Having 3 presentations in an hour gives more researchers a chance to share their progress and ideas. It gives a brief platform for them to find others that may want to help, or get ideas for moving forward. The twist? A gong. If a talk is bad or going nowhere, don’t even give them their 15 or 20 minutes. Gong them off the stage and let the next lightning talk start.
  • Most conferences solicit talks (the CFP), have a review team decide which are worthwhile, and create a schedule. It would be nice to see conferences follow this process to weed out the crap, but then put all the good talks up for a community vote. Use that feedback to determine what the masses want to see and then build a schedule off the higher-voted talks.
  • Speakers should not only explain why they are presenting, they should justify why they are the ones giving the talk. Not a general resume with 20 years of security experience either. What specifically have they done that warrants them giving this talk? Pen-testers with a few years of experience should rarely give a talk on pen-testing or social engineering, unless they truly have groundbreaking material. Speakers should also be required to make their slides available shortly after the convention, and the slides should properly reference and footnote prior work, source images, and give credit to what influenced them.
  • Conferences should solicit feedback from the audience, and give it to the speakers so that they may improve their talks in the future.

These are but a few ideas for improving conferences. Have your own ideas? Leave a comment!