Android & Granular Permissions

If you own an Android-based phone, you are no doubt at least passingly familiar with the permission system that governs applications and what they can do. Every time you install an application, the device asks you to accept a list of permissions that it says are required for the app to run. If you want the app, you must accept the permissions, no matter what they are.

In theory, users can simply decline an app that requires excessive permissions and find an alternative. After all, there are over 1 million apps available, right? In practice, many users won’t even read the permissions, while others may casually dismiss them because they are clearly stated, and any app in the Google Play store has to be legitimate!

The problem is that even the simplest and most legitimate apps may request a variety of permissions that are not needed to make the program run:

[Screenshots: the permission lists requested by the T-Mobile MyAccount and Avast Mobile Security & Antivirus apps]

A classic example of an application requesting permissions that aren’t required is the T-Mobile MyAccount app. The app is designed to give a user information about their T-Mobile cellular account, and nothing else. This should require nothing more than permission to send and receive network data from T-Mobile’s servers. Instead, the app has traditionally requested excessive extra permissions. Worse, the latest version wants even more, including “System tools” permissions that give the app greater control over the phone. As T-Mobile is my provider and I don’t want to call them to find out account information, I have to accept their overly broad permissions. There is no alternative application in this case.

The second example is Avast Mobile Security & Antivirus, which expects the keys to the kingdom. There is a bit of irony in a security app wanting enough permissions to completely own your phone, the same threat it claims to protect you from.

The Alternative

The obvious solution to this problem is to make permissions granular. This would allow a user to deny a specific permission while allowing others. If denying a specific permission causes the application to stop functioning, the user could simply enable it again.

How hard would this be for Google to implement in Android? Trivial. This is readily apparent in that rooted phones already allow it. Android users have requested this feature from Google via Ticket 3778. If you are an Android user and want to see this implemented, load the ticket and ‘star it’ (click the star on the upper left) to indicate your interest. If Google opts not to implement that one, there is a similar feature request (Ticket 6266) that would let an app declare a set of optional permissions it wants but does not require in order to function.
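To make the contrast concrete, here is a minimal sketch of the kind of manifest that sits behind the all-or-nothing install prompt. The package name and permission list are hypothetical, loosely modeled on the account-app example above; under the current model, a user must accept every `<uses-permission>` entry or abort the install, while a granular model could present each entry as an individual toggle.

```xml
<!-- Hypothetical AndroidManifest.xml excerpt for an account-info app. -->
<!-- Only INTERNET is actually needed to talk to the carrier's servers; -->
<!-- the remaining entries are the kind of excess grants described above. -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          package="com.example.myaccount">
    <uses-permission android:name="android.permission.INTERNET" />
    <!-- Excessive for a simple account viewer: -->
    <uses-permission android:name="android.permission.READ_PHONE_STATE" />
    <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
    <uses-permission android:name="android.permission.READ_CONTACTS" />
</manifest>
```

With granular permissions, denying READ_CONTACTS here would leave the app’s one legitimate function (talking to the carrier over the network) untouched.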

Until we get granular permissions, the concept of security in the context of applications will be a lost cause.

Good intentions, needs fix-ups though

[This was originally published on the OSVDB blog.]

Tonight, shortly before retiring from a long day of vulnerability import, I caught a tweet mentioning a web site about reporting vulnerabilities. Created on 15-aug-2013 per whois, the footer shows it was written by Fraser Scott, aka @zeroXten on Twitter.

I love focused web sites that are informative, and make a point in their simplicity. Of course, most of these sites are humorous or parody, or simply making fun of the common Internet user.

This time, the web site is directly related to what we do. I want to be very clear here; I like the goal of this site. I like the simplistic approach to helping the reader decide which path is best for them. I want to see this site become the top result when searching for “how do I disclose a vulnerability?” This commentary is only meant to help the author improve the site. Please, take this advice to heart, and don’t hesitate to ask if you would like additional feedback. [Update: After starting this blog last night, before publishing this morning, he already reached out. Awesome.]

Under the ‘What’ category, there are three general disclosure options:


First, you are missing a fourth option of ‘limited disclosure’. Researchers can announce they have found a vulnerability in given software, state the implications, and be done with it. Public reports of code execution in some software will encourage the vendor to prioritize the fix, as customers may begin putting pressure on them. Adding a video showing the code execution reinforces the severity. It often doesn’t help a VDB like ours, because such a disclosure typically doesn’t have enough actionable information. However, it is one way a researcher can disclose, and still protect themselves.

Second, “responsible”? No. The term was possibly coined by Steve Christey and further used by Russ Cooper; it was polarized by Cooper as well as Scott Culp at Microsoft (“Information Anarchy”, really?) in a (successful) effort to brand researchers as “irresponsible” if they don’t conform to vendor disclosure demands. The more widely recognized term, fair to both sides, is “coordinated” disclosure. Culp’s term forgets that vendors can be irresponsible if they don’t prioritize critical vulnerabilities while customers are known to be vulnerable with public exploit code floating about. Since then, Microsoft and many other companies have adopted “coordinated” to refer to the disclosure process.

Under the ‘Who’ category, there are more things to consider:


These days, it is rare to see domains following RFC-compliant addresses. That is a practice mostly lost to the old days. Telling readers to try the “Contact us” tab/link that invariably shows up on web pages is better. Oh wait, you do that. However, that comes well after the big header reading TECHNICAL SUPPORT, which may throw people off.

As a quick side note: “how to notifying them of security issues”. This is one of many spelling or grammar errors. Please run the text through a basic grammar checker.

Under the ‘How’ category:


This is excellent advice, except for the bit about using Tor, since there are serious questions about its security/anonymity. If researchers are worried, they should look at a variety of options, including using a coffee shop’s wireless, hotel wireless, etc.


This is also a great point, but more importantly, make sure your mail is polite and NOT THREATENING. Don’t threaten to disclose on your own timeline. See how the vendor handles the vulnerability report without any indication that you plan to disclose it. Give them the benefit of the doubt. If you get hints they are stalling at some point, then gently suggest it may be in the best interest of their customers to disclose. Remind them that vulnerabilities are rarely discovered by a single person and that they can’t assume you are the only one who has found it. You are just the only one who apparently decided to help the vendor.


Post to Full-Disclosure, sure, or consider other options that may be more beneficial to you. Bugtraq has a history of stronger moderation; they tend to weed out crap. Send it directly to vulnerability databases and let them publish it anonymously. VDBs like Secunia generally validate all vulnerabilities before posting them to their database. That may help you down the road if your intentions are called into question. Post to the OSS-security mail list if the vulnerability is in open-source software, so you get the community involved. On that list, getting a CVE identifier and having others verify or sanity-check your findings draws more positive attention to the technical issues instead of the politics of disclosure.


Using a bug bounty system is a great idea, as it generally keeps the new researcher from dealing with disclosure politics. Let people experienced with the process, who have an established relationship and history with the vendor, handle it. However, don’t steer newcomers to ZDI immediately. In fact, don’t name them specifically unless you have a vested interest in helping them, and if so, state it. Instead, break it down into vendor bug bounty programs and third-party programs. Provide a link to Bugcrowd’s excellent list of current bounty programs.


The fine print of course. Under CITATIONS, I love that you reference the Errata legal threats page, but this should go much higher on the page. Make sure new disclosers know the potential mess they are getting into. We know people don’t read the fine print. This could also be a good lead-in to using a third-party bounty or vulnerability handling service.


It’s great that you make this easy to share with everyone and their dog, but please consider getting a bit more feedback before publishing a site like this. It appears you did this in less than a day, when an extra 24 hours would have made for a stronger offering. You are clearly eager to make it better. You have already reached out to me, and likely Steve Christey if not others. As I said, with some edits and fix-ups, this will be a great resource.

Fine, let’s continue this “debate”. (was re: Active Defense)

David Willson wants to continue this debate, but keeps saying the same crap via Twitter, or tries to call me out for the things I called him out for (e.g., not reading the other’s blog). So, my turn…

Background for those just joining:

  1. I wrote a serious blog with a tongue-in-cheek title on “strike back” and “active defense”: Putting an end to ‘strike back’ / ‘active defense’ debate…
  2. David Willson replies to my blog by writing his own: Really; Put an end to the strike back active defense debate? Not even close!
  3. I find serious problems with his brief reply, so I write a tequila-fueled rebuttal with naughty words that he doesn’t like, shooting down many of his points: To the guy calling himself “David Willson”, you don’t get it (was re: Active Defense)

Additionally, let me make a few points very clear:

  • Willson has a different definition for “active defense” than most in the industry. I specifically distinguished “hack back” (HB) from “active defense” (AD), citing AD as reconnaissance of the attacking system after being attacked. Willson chose to ignore that and frame my blog in the context of his definition.
  • Willson says that he has “offered a solution” with a definition and blog on the process. True that he has defined it in his eyes. Also true that he has further muddied the waters by mixing the more commonly accepted AD definition with HB as well.
  • He has claimed I am jumping on some form of bandwagon about HB or AD, despite me linking to material I wrote 13 years ago speaking out against the ideas. Meanwhile, he says he has been championing the activities in his definition (including HB) for two years. He also tried to chide me by saying AD is a “new term”, and emphasizing it with “yes new term”, despite the term being over 11 years old with books written on it.

These are pretty simple statements, and factually accurate. Willson implying I am new is laughable and speaks to his tenure in this industry. His claim that AD is new also speaks to his relative maturity as an InfoSec professional. Since he has championed AD and HB, per his definition, for the last two years, let’s examine what he has to say on the topic by going through his numerous blogs. It will become very clear he is one of the advocates riding the hype and the latest wave of attention AD/HB has garnered. What is not clear is whether he understands what he is saying, the complications, and the implications. You be the judge.


We’ll start with his definition of “Active Defense”:

“Active Defense” is incident response on steroids. Here is my definition: It is a method for companies who find themselves persistently attacked to collect the intelligence needed to evaluate the attacks, develop courses of action or options, and then enable the leadership to make well-informed decisions to move forward in an effort to protect the company.

This is not a definition of AD at all. This is a general definition of incident response. Next he continues by elaborating on the definition and begins to qualify it:

On a spectrum the options could be anywhere from do nothing or the other extreme of hack back to either find the attackers or disrupt or deny the server(s) being used to launch the attacks. The intelligence collected will allow company leadership to make decisions at pre-determined checkpoints based on risk, liability and legal issues. The initial decision whether to simply proceed with incident response versus Active Defense is based on determining whether the attack is a one-time incident or persistent, and how much money is being lost since. Active Defense will require the company to bring in a team of experts to accomplish the various tasks: intel collection, malware analysis, tool/technique development, evaluating legal, risk and liability issues, and therefore the cost involved must be weighed against the damage to the company or loss due to the attacks.

Here he clearly defines AD as having the HB component. Once again, let’s remember, hacking someone else is not defending your network. It is going on the offensive against theirs. Most of the rest of his expanded definition still boils down to incident response, sans the “risk and liability issues” in the context of hacking back.


The next area of interest is Willson’s ideas on attribution. He boldly claims that attribution isn’t necessary and that if a computer attacks him, then the owner of that system is just as culpable even if they didn’t launch the attacks. Let’s look at what he says in his 2013-05-10 blog:

Most would admit the greatest challenge with cyber crime is determining who the attacker is, e.g. Attribution. One of the great claims by those who believe “Active Defense” is illegal and immoral is that attribution is extremely difficult and if you can’t determine attribution then you may be, “attacking an innocent victim.”

As a side note to the above comment, I have said in previous blogs, if someone has been compromised and their server is being used to attack my company, that person is NOT innocent.

There are many ways to attack this notion. First, it is a logical fallacy. Follow his logic here: he says that “some companies .. have done all they can” in defending their systems. So if one of those companies that has done everything it can gets compromised, and its systems are used to attack Willson, it is now “NOT innocent”? Which is it? You say they are not responsible if they did due diligence in security, then a paragraph later say that anyone who gets hacked and used as an attack platform is not innocent.

To further justify his notion of why HB is good, Willson makes the argument that he is doing the victim a favor:

A victim like me, yes, but innocent, no. If I have to disrupt his server to protect my company then so be it. Chances are that server owner does not want the other hundreds or thousands of companies who are victims of his server attacks to know that he is the patsy attacking them due to his crappy security.

Wow, what a noble guy! Once again, even if you knock that compromised host offline, have you defended your network? No. Remember the old game of whack-a-mole? Yeah…


Okay, so why is attribution not that important? Certainly, being able to identify your attacker makes life much easier for you and your company. Even if you can’t identify the attacker, being able to identify who owns the server being used to attack you makes life simpler. You can simply call the owner of the company whose server has been compromised and is attacking your network and work together to block the hacker. If, for some reason, the owner of the compromised server will not work with you then you can proceed as if he is the hacker.

Remember playing that game, and no matter how much you tried to stop that damned mole, he always came back? Those moles have something in common with the dreaded APTs. I’ll give you a hint David; what does the “P” stand for?

In the blog quoted above, you also speak to another problem. Taking that course of action, contacting the other company and getting them to respond, is an expensive prospect (time-wise). You will also run into companies that may want to leave the system online to study the attacker themselves, systems that have no real admin, end-user systems on broadband, open access points (e.g. coffee shops), libraries, and more. Oh, that’s right, you will just hack back and shut down the server at the library too. Are you sure you are doing more good than they do by providing that access in the first place? I’d argue against that.

The rest of that blog spirals downhill even faster:

Consider the 2006 movie “Firewall” with Harrison Ford. His wife and daughter were kidnapped and the kidnappers, using this leverage, forced him to hack into a bank he was hired to protect and steal millions of dollars for them. Now, granted, I like Harrison Ford, but, if he is stealing my money he’s not an innocent bystander. He is a victim, but, if it is me or him, choices must be made.

And using your completely absurd analogy against you, “if it is my wife and daughter or Willson’s server, choices must be made” and he did the same thing you would have. #derp

Defense Is Not Offense

I really can’t emphasize this enough, and it is critical in this debate, given that the commonly accepted term “Active Defense” is a contradiction unto itself. In the security world, there is a pretty fine line between defense (blue hat) and offense (white/black hat). Defending your network means patching, firewalls, IDS, IPS, hardening, and many other technologies. Offense is all about breaking into the target, either under contract or as criminal activity. You don’t “defend” your network by hacking someone else. Don’t believe me? Try it and see if it stops all of the attacks. Spoiler alert: it won’t. From Willson’s 2013-01-24 blog:

For instance, if the attack is a one-time attack and is over, then you DO NOT have a right to retaliate. Similar to when someone robs your house. If they are gone you have no right to pursue the burglar on your own. On the other hand, if you have been attacked repeatedly and are sure it continues or will happen again you have a right to defend yourself.

And without attribution, which you don’t think is necessary, how do you determine that? You cannot. Or, you are lumping all attacks into this logic, setting yourself up for the obvious justification in response to a loaded statement. Either way, not good.

Next: “Persistent attacks may be bleeding hundreds of thousands of dollars from companies, and in that situation, they should be within their rights to respond, says Willson.” Yes, they should. “If your company is losing 50 to 100 thousand dollars a week and you have done everything else you believe possible, to include called or considered calling law enforcement, to no avail, self-defense should be an option.”

Yet your previous real-world analogy of someone breaking into your house doesn’t hold up here. You are now equating “self defense” with chasing the burglar, which you outright condemn three paragraphs earlier.

Also many articles lately have claimed that “attribution” is impossible. Stop it. If it was impossible no one would ever be arrested and prosecuted for hacking. It is difficult, but not impossible.

You are correct here: attribution is not impossible. However, the point I keep making with many people is that it doesn’t scale. The number of people attacking any given network at any point in time makes attribution impractical if you attempt it for every attacker. Further, your point about some attackers being arrested and prosecuted should be expanded on. As a lawyer, you know as well as anyone the amount of time and resources that go into a single case that leads to arrest and prosecution. You are talking about multiple law enforcement officers, sometimes working for months at a time, to find that one person. They have resources that most companies do not, which assist them in attribution. Don’t suggest to your readers, by citing law enforcement’s ability to catch a bad guy every once in a while, that attribution scales and is doable by every company out there. That is simply not the case.

He Said What?

Next, we’ll look at a concrete example showing that Willson either doesn’t understand how any of this works, or doesn’t take the time to consider his wording (while calling out others for their fear-mongering). From his 2013-05-01 blog:

One of the first decisions is whether, based on the information available and/or gathered, the attack is a one-time occurrence or an ongoing intrusion/breach. If it is determined to be a one-time occurrence the decision is easy, initiate an incident response plan, clean up, patch holes, and provide notifications required by law. If the attack appears to be ongoing some of the follow-up on decisions may include: what end-state the company is seeking (find the hacker and prosecute, block the attack, get data back, etc.)

Did you catch that bit? Willson really believes that you can “get data back”. An attacker breaks into your network, copies the information from your servers, and puts it on another server. First, you still have a copy of the data. Second, you don’t know how many copies are out there. Third, you don’t know where all the copies are. How exactly do you “get the data back”? You don’t. I really hope you aren’t using this as a selling point for Titan Info Security Group’s services.

He Said What? (Part 2)

In a 2012-12-14 blog, Willson states:

Active defense will actually improve security for those who consider it.

Seriously? You are actually saying that “active defense” (which by your definition is a mix of standard incident response with a dose of hack back in some cases) will improve security? You reminded me today that the HB component is only 1% of active defense. By your definition and statement, the other 99%, which we’ve been doing all along, will improve security. Are you really trying to leverage “keep doing what we’ve been doing” as justification for the other 1%? Remember, in this same blog you start out by saying:

Lately I’ve seen many articles about “active defense” and “hack back.” This is good because current defenses aren’t working and being in a constant state of defensive mode is not a lot of fun. Something needs to be done.

By that opening, you are saying that the additional 1%, the HB component, will ‘improve security’. That is patently absurd. Breaking into other systems does not improve security, sir.


It is clear you are trying to sell something. Your blogs on the topic of active defense do not advance the debate. You actually hinder it, as you use a different definition of “active defense” than many do. For most of your writing, you do not distinguish between active defense in the sense of reconnaissance and active defense in the sense of hack back, instead using one term that can mean either or both. You have not put forth a plan as you claim, instead offering the vague notion that hack back is justified and legal. You never spell out exactly how it is legal, relying instead on emotional response to justify it. Big difference in a court of law, as you certainly know.

This blog consists of reviewing half a dozen of your blog posts on the topic. In those, I point out a wide variety of contradictions, as well as how you clearly do not back your claims about the legality of it. I also point out your flawed notions about attribution and the concept of digital data. I know you said “you tried” and that you are done with me. That’s fine. Ignore me because I use naughty words and “don’t look professional”. I am not selling anything, so I don’t need to maintain any appearance other than a voice of reason in a murky, FUD-filled topic primarily led by people seeking to profit from it. You keep blogging, and I will keep pointing out how you are not qualified to provide consulting services, in my opinion. Ultimately, I think your customers will figure that out. I just hope none find out the hard way, with you providing legal defense services instead of security services.

To the guy calling himself “David Willson”, you don’t get it (was re: Active Defense)

Yesterday, I published a blog titled “Putting an end to ‘strike back’ / ‘active defense’ debate…”. While the title of the blog was tongue-in-cheek, the content certainly was not. Of course I don’t expect the debate to suddenly end over a single blog, but I did bring up a great point about the idea of ‘strike back’. I know it is great because the only blog response I got completely ignored it. I assume that is because he simply couldn’t debate the merit of it.

Now, I don’t think I know this guy who calls himself “David Willson”. But everyone knows I am bad with names and faces, and with InfoSec plebeians who show no indication they have tenure or a clue. Don’t get me wrong, I love InfoSec newbies who are eager and open-minded, willing to learn and evolve; I take the time to answer any question they put to me. They are a different breed than the others who mysteriously appear one year with a resume boasting many years of experience. No clue if David Willson is such a beast, but reading that blog sure makes it seem that way.

I don’t like ego. I like a person who can not only admit, but fully appreciate, when they are wrong. Admitting it shows character and a desire to improve. Don’t take this paragraph as me showing ego; it really isn’t about that. Willson is blogging under Titan Info Security Group (@Titaninfosec), whose motto is “Information is the Key to Your Business”. Information, you say? Perhaps it was just a completely shitty writing style, David, or perhaps you just haven’t been around the InfoSec block. It doesn’t matter if you like me, dislike me, respect me, or want to fist me. Fact is, I have been here a long, long time. If you and your company’s intelligence don’t know who I am, or my relevance in the security world, then your intelligence is suitable for the Girl Scouts at best. Starting your blog with “a guy calling himself Jericho” is laughable. Not only is my real name all over the place, if you Google “jericho attrition”, the second hit gives you my real name, which has been published in media outlets for years in conjunction with my handle. You couldn’t link to my blog, my web site, my Wikipedia entry, or my Twitter feed? I’m sorry, do you claim you knew all this? Consider it when you write these shitty blogs for your over-billed customers. #protip


Now, on to the meat of the blog and the rebuttal! It’s late, I really want to have fun with this, but the bed is more inviting than you. So, you get the more brief and blunt treatment David.

chastises those who advocate Active Defense.

Read the blog again. I chastise those that use the term incorrectly. I specifically point out that ping, traceroute, and nmap are not “active defense”. I further point out that the entire term “active defense” is contradictory and absurd. Offense != defense. “The best defense is a good offense” is better left to movies.

He equates it to strike back and hack back.

No, I have entirely different sections of that blog dealing with true strike back versus active defense. Is this the “intelligence” you charge your customers for?

I have to say, I agree with two of his points; many companies are now trying to capitalize on this new term, yes new term…

Oh so close, David! You were doing well agreeing with me on that part. New term? No. New to newbs in the industry? Yes. You kind of outed yourself here. You can’t claim this term is new when there is a book on it written in 2002. #newb

I disagree with is his characterization of Active Defense. I wish people would stop equating it to hack back.

HEY FUCKHEAD. I did not say that. Read the fucking blog again, please. I specifically defined active defense as this nebulous area of remote reconnaissance, not strike back. I went out of my way to make it very clear there was a solid distinction between the two. The entire point is that companies are using the nebulous “active defense” (which is stupid to begin with) and then further blurring it into “strike back”, a la Kurtz and his CrowdStrike crap.

Hack back is the last 1% of Active Defense. See my definition here:

Great, you define Active Defense! Oh wait, that explains why you are blindly lashing out at what you don’t understand. From your definition:

“Active Defense” is incident response on steroids. Here is my definition: It is a method for companies who find themselves persistently attacked to collect the intelligence needed to evaluate the attacks, develop courses of action or options, and then enable the leadership to make well-informed decisions to move forward in an effort to protect the company. On a spectrum the options could be anywhere from do nothing or the other extreme of hack back to either find the attackers or disrupt or deny the server(s) being used to launch the attacks.

So a whole bunch of management bullshit, then you clearly say “active defense” includes “strike back”. Uh… you just said you wished people would stop equating it to hack back, and you fucking say exactly that in your definition. Seriously, get the fuck out of our industry. Only ignorant sales weasels can’t track their lies between two paragraphs.

Also, the fact that many people who write in opposition to Active Defense made broad statements about how it is illegal without defining Active Defense and detailing what they believe to be illegal or why.

I clearly stated what I consider active defense to be. Read my blog again, again. I said strike back, the activity of “hacking the person hacking you”, is illegal. News flash: it is illegal in most countries. It is illegal in the United States, where I reside, and in most of the countries the U.S. still has some form of ties to, economic or otherwise. As an ex-military weenie, you should really grok this.

First of all, if you’re not an attorney stop saying it is illegal because the legality of Active Defense is not black and white.

Tell that to anyone convicted under 18 USC § 1030, please. It is black and white enough to routinely convict people, even ones operating in areas many consider gray. While I am not a lawyer, I am fairly well read on the law. More so than most in our industry, and enough to be invited to be an honorary professor for a cybercrime seminar for a semester, to challenge the students on the notion of law. Yes, a respected university thought me suitable to challenge their students and staff on the law they teach, meaning I am the first to find the wiggle room and gray areas. What, you didn’t know this? (was re: intelligence offering)

Jericho’s assertions strike me as hypocritical by jumping on the bandwagon of the Active Defense flurry, making broad assertions and offering NO solutions.

First, I am not hypocritical just because you didn’t actually read my blog. Second, I am not part of the flurry, as I argued against this shit back in ~2000. Third, what the fuck solutions do you offer? By your definition, a vague HALULULUGGUHGUGHUGHGUH ESCALATE until you get to the strike back phase, which still doesn’t address the simple fact that it STILL WON’T STOP YOU FROM GETTING ATTACKED. Jesus fuck, get out of my industry already. Off my lawn and all that. If you don’t understand WHY I am so vehement about this, then you doubly need to get the fuck out of here. In a light-hearted rant against morons who blindly quote Sun Tzu, Steve Tornio (@steve_tornio) and I point out that trying to know your attacker is futile. Worse, even if you figure out who one attacker is and stop them, you are still dealing with the other eleventy-billion. Are you really under some demented and perverse notion that a single attacker is the only threat to you or your customers? That big-bad-APT you are fighting tooth-and-nail against may just be the decoy while the real attacker is skullfucking your network blind. Sorry to be the bearer of that bad news, and judging by your blog, I certainly am.

If defense is so easy then provide the solution, a solution that hasn’t been tried and one that will work and not subverted by hackers within a few months.

Really? Again with the whole “not reading what I wrote” bit. I didn’t say it was easy. I was arguing for a minimum threshold on defense; that companies who do not focus on defense and put resources there have no business trying to hack back. In fact, my entire ONE LINE ARGUMENT against all this boils down to that. If you failed Defense 101, then you have no business dabbling in Offense 101. Intelligence business huh?

You need a team of experts who know what they are doing, to include one or more attorneys who know what he/she is doing, but more than just an attorney you believe you can explain the technology to.

And this is where it gets good. You see, I am bad with names and faces, like I mentioned above (since you likely didn’t read it). But in all honesty, I do remember you. I respect(ed) the hell out of you for your presentation at BSides Denver 2010. And you should remember me, “that guy who calls himself Jericho”, as I was leading the mob against your naive but fun presentation titled “When Does Electronic Espionage Become an ‘Act of War’ and What Options Do Nations Have to Defend Their Networks?” Further, I was on the CFP review team for BSides Denver 2013 where I was adamant about having you back, because of your 2010 presentation despite the cute notions that simply weren’t real world. Why? Because you stood up to the heat, you debated it, and you did a good job of doing so from your losing side. The entire CFP review team was looking forward to your talk specifically, hoping it would re-create the passion and energy from years before.

So, what happened during BSides Denver 2013? Two things changed. First, you moved from the military to Titan Info Security Group. Second, you made people walk out of your talk, and it ended in zero debate. What’s the matter, David, Titan got your balls all of a sudden? You went from a damn fine individual and debater that I respected, to … this. Part of me says this is cute. The other part of me says this is pathetic. I went to bat for you, saying you would be an outstanding speaker based on your last presentation. I was wrong. I failed BSides Denver attendees as a CFP reviewer, and I still kick myself over that. I don’t blame you, I really don’t. I blame myself for not seeing what you truly are.

You really don’t remember me? You really don’t remember talking to me during the 2010 conference, and again a couple months ago? That’s fine, I don’t blame you for not remembering me. I am not that interesting offline usually. I’m not in the general intelligence business, just the vulnerability intelligence business. I can get away with that. What’s your excuse?

This takes years of experience to understand the technology, apply the law and foresee the results or consequences.

OK expert, please tell us exactly when pure “strike back” is legal, in what contexts. You have this shit figured out obviously, so write a blog that summarizes it please. If you don’t, then you are full of shit and I am calling you on it. I want you to blog about it because one of three things will happen. One, you will write an incredibly insightful blog that clears up all this “legal confusion” over the concept of strike back, and I will apologize to you and learn a lot from it. Two, you will write an incredibly fun blog that clears up nothing, that many people will mock and deride at best. Three, you will not blog, and in doing so quietly admit that I am right. So, put up or shut up pretend-lawyer. Oh, and cite the fucking law, not your mystical snow globe.

Ask your lawyer if he/she would be willing to put their law license on the line and provide advice in cyber security, hack back, the CFAA, ECPA, trace back, open-source collection, etc.

Great, thanks! Let’s have you read this Wikipedia entry before we continue. Please re-read your definition of “active defense” which is arguably criminally negligent. You really want to put your license on the line after that crap? Not only do you completely miss the irony of your definition of “active DEFENSE“, you completely fail to see the legal implications of what you put forth, 1% or not.

I’m not going to quote the last paragraph of your blog; I’ll let my readers re-read it before they continue. You and your company appear to be the same scum I called out, attempting to mix “active defense” with “strike back”, for what appears to be your profit margin. The irony of you arguing this with me has multiple juicy layers of depth.

I mean come on, look at your fucking company’s logo. Ones and zeroes coming out of the planet, with that pathetic slogan? Nothing about your blog post screams “intelligence”. Nothing about it reminds me of the guy who calls himself David in 2010. It does remind me of the sell-out David who appeared in 2013 and bored a room of ~ 120. If you want to attack me and my points, feel free. I love a good debate, and I love challenging the industry to think beyond the current norms. Unfortunately, you failed to do that in a big way. You clearly didn’t read my blog, didn’t consider it before you fired off your own rebuttal, and didn’t consider that rebuttals are a FUCKING HOBBY OF MINE. At least play to my weak standards, or do better than my previous offerings. If not, you aren’t even advancing casual insults or banter, and for that, you should eat a bowl of dicks.

So, to throw the proverbial gauntlet down:

Based on David Willson’s reply to my blog, I personally think that Titan Info Security Group is not qualified to provide any security or legal consulting to anyone. Well, maybe to Paw’s Fishing Shack that just got that new-fangled Wi-Fi thing. I bet Paw wants some of that fancy threat intelligence, and he can trade you for it in fresh worms or stale candy. About all you are worth in my opinion. #getoffmylawn

– Some guy who calls himself “Jericho”

p.s. You tag your blog with “computer”, really? I guess I should follow suit.

Putting an end to the ‘strike back’ / ‘active defense’ debate…

The concept of “hack/strike back”, under any of its names, is decades old. Every year or three it surfaces again and makes news. Almost every time, it is a result of a new company claiming they do it to some degree. This extends to the related idea of “active defense”, which is equally absurd; not only because it is used as a cop-out fallback when a company is challenged on the notion of “hack back”, but because the term is misleading at best.

The entire debate over “strike back” can be put to an end with one sentence; a simple realization that anyone in the industry should have reached already. After this thought, I will expand on it just in case there are equivocations on terminology or the ideas behind this.

Ending the Debate In One Easy Line

If a company can’t do defense correctly, why do you think they can do offense right?

That simple, that logical. Sure, some of these companies may claim no one can defend against 0-day and so-called APTs. On the surface that sounds valid, but responding to that by attacking others with your own 0-day, presumably in the same manner and methodology as the adversary you scapegoat, does not make sense.

More importantly, while hacking into a system is generally considered easy by knowledgeable attackers, the issue of attribution is far from it. Entire debates can be had on the merit of attribution, and they have been. Ultimately, the argument that attribution can happen falls apart when carried far enough.

  • If you can easily and positively attribute, they shouldn’t have breached your defenses. You have no business attacking them when you were negligent on defense 101.
  • If you only think you can positively attribute, you cannot; you are out of your element.
  • Even if you can miraculously attribute the human at the keyboard, regardless of how many hops back, you cannot positively attribute who hired them to hack you.
  • If you attribute the person, and not the motive, by hacking back, you violated the law just as they did.

All of this makes strike back seriously problematic at best. Ultimately, the concept of “strike back” is a cop out. The attacked can lash out at whoever they thought attacked, with no burden of proof. Last I checked, we as a society like the concept of “burden of proof”. Or apparently, at least when it suits us.

The Misleading “Active Defense”

The concept of “active defense” is equally old. Back in the day it simply meant that if you were attacked from an IP address, you or a device would perform a certain level of active reconnaissance. First, note that recon is not ‘defense’. Port scanning, pinging, or tracerouting the remote system that attacked you does not help you defend your network; it is the first stage of an active response. Strictly based on the terminology, “active defense” should mean activity such as changing a configuration or creating real-time decoys to increase the cost of attack. Even today’s news, covering an entire talk on the legal risks of “active defense”, does not even define the term.

Anyone in the world of “active defense” should know this. If not, they are not qualified for the position they are in, or they are intentionally riding the wave of fear, uncertainty, and doubt (FUD) spearheaded by the media, following the lead of those very same individuals. The last year of news on the topic leads me to believe these companies are using the blurry line of “active defense” to suggest they do more, which in turn sells their services.

Dull old concepts are still dull; resist the urge to buy into the bullshit.

Buying Into the Bias: Why Vulnerability Statistics Suck [Abstract]

[This was originally published on the OSVDB blog.]

Last week, Steve Christey and I gave a presentation at Black Hat Briefings 2013 in Las Vegas about vulnerability statistics. We submitted a brief whitepaper on the topic, reproduced below, to accompany the slides that are now available.

Buying Into the Bias: Why Vulnerability Statistics Suck
By Steve Christey (MITRE) and Brian Martin (Open Security Foundation)
July 11, 2013

Academic researchers, journalists, security vendors, software vendors, and professional analysts often analyze vulnerability statistics using large repositories of vulnerability data, such as “Common Vulnerabilities and Exposures” (CVE), the Open Sourced Vulnerability Database (OSVDB), and other sources of aggregated vulnerability information. These statistics are claimed to demonstrate trends in vulnerability disclosure, such as the number or type of vulnerabilities, or their relative severity. Worse, they are typically misused to compare competing products to assess which one offers the best security.

Most of these statistical analyses demonstrate a serious fault in methodology, or are pure speculation in the long run. They use the easily-available, but drastically misunderstood data to craft irrelevant questions based on wild assumptions, while never figuring out (or even asking the sources about) the limitations of the data. This leads to a wide variety of bias that typically goes unchallenged, ultimately forming statistics that make headlines and, far worse, are used to justify security budgets and spending.

As maintainers of two well-known vulnerability information repositories, we’re sick of hearing about research that is quickly determined to be sloppy after it’s been released and gained public attention. In almost every case, the research casts aside any logical approach to generating the statistics. They frequently do not release their methodology, and they rarely disclaim the serious pitfalls in their conclusions. This stems from their serious lack of understanding about the data source they use, and how it operates. In short, vulnerability databases (VDBs) are very different and very fickle creatures. They are constantly evolving and see the world of vulnerabilities through very different glasses.

This paper and its associated presentation introduce a framework in which vulnerability statistics can be judged and improved. The better we get at talking about the issues, the better the chances of truly improving how vulnerability statistics are generated and interpreted.

Bias, We All Have It

Bias is inherent in everything humans do. Even the most rigorous and well-documented process can be affected by levels of bias that we simply do not understand are working against us. This is part of human nature. As with all things, bias is present in the creation of the VDBs, how the databases are populated with vulnerability data, and the subsequent analysis of that data. Not all bias is bad; for example, VDBs have a bias to avoid providing inaccurate information whenever possible, and each VDB effectively has a customer base whose needs directly drive what content is published.

Bias comes in many forms that we see as strongly influencing vulnerability statistics, via a number of actors involved in the process. It is important to remember that VDBs catalog the public disclosure of security vulnerabilities by a wide variety of people with vastly different skills and motivations. The disclosure process varies from person to person and introduces bias for sure, but even before the disclosure occurs, bias has already entered the picture.

Consider the general sequence of events that lead to a vulnerability being cataloged in a VDB.

  1. A researcher chooses a piece of software to examine.
  2. Each researcher operates with a different skill set and focus, using tools or techniques with varying strengths and weaknesses; these differences can impact which vulnerabilities are capable of being discovered.
  3. During the process, the researcher will find at least one vulnerability, often more.
  4. The researcher may or may not opt for vendor involvement in verifying or fixing the issue.
  5. At some point, the researcher may choose to disclose the vulnerability. That disclosure will not be in a common format, may suffer from language barriers, may not be technically accurate, may leave out critical details that impact the severity of the vulnerability (e.g. administrator authentication required), may be a duplicate of prior research, or introduce a number of other problems.
  6. Many VDBs attempt to catalog all public disclosures of information. This is a “best effort” activity, as there are simply too many sources for any one VDB to monitor, and accuracy problems can increase the expense of analyzing a single disclosure.
  7. If the VDB maintainers see the disclosure mentioned above, they will add it to the database if it meets their criteria, which is not always public. If the VDB does not see it, they will not add it. If the VDB disagrees with the disclosure (i.e. believes it to be inaccurate), they may not add it.

By this point, there are a number of criteria that may prevent the disclosure from ever making it into a VDB. Without using the word, the above steps have introduced several types of bias that impact the process. These biases carry forward into any subsequent examination of the database in any manner.
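The filtering effect of those steps can be sketched numerically. This is a toy model with invented pass-through rates (none of these figures come from a real VDB); it only shows how several individually modest filters compound:

```python
# Illustrative sketch with hypothetical numbers: each stage of the
# disclosure pipeline filters out some vulnerabilities before they
# ever reach a vulnerability database.

def simulate_pipeline(found=1000):
    """Apply hypothetical pass-through rates for each stage."""
    stages = [
        ("researcher opts to disclose", 0.80),
        ("disclosure is usable (format/language/accuracy)", 0.90),
        ("VDB monitors the source", 0.85),
        ("disclosure meets VDB inclusion criteria", 0.95),
    ]
    remaining = found
    for name, rate in stages:
        remaining = int(remaining * rate)
        print(f"{name}: {remaining} remain")
    return remaining

simulate_pipeline(1000)
```

Even with generous rates at every stage, a large fraction of discovered vulnerabilities never become an entry in the database, and every downstream statistic silently inherits that loss.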

Types of Bias

Specific to the vulnerability disclosure aggregation process that VDBs go through every day, there are four primary types of bias that enter the picture. Note that while each of these can be seen in researchers, vendors, and VDBs, some are more common to one than the others. There are other types of bias that could also apply, but they are beyond the scope of this paper.

Selection bias covers what gets selected for study. In the case of disclosure, this refers to the researcher’s bias in selecting software and the methodology used to test the software for vulnerabilities; for example, a researcher might only investigate software written in a specific language and only look for a handful of the most common vulnerability types. In the case of VDBs, this involves how the VDB discovers and handles vulnerability disclosures from researchers and vendors. Perhaps the largest influence on selection bias is that many VDBs monitor a limited source of disclosures. It is not necessary to argue what “limited” means. Suffice it to say, no VDB is remotely complete on monitoring every source of vulnerability data that is public on the net. Lack of resources – primarily the time of those working on the database – causes a VDB to prioritize sources of information. With an increasing number of regional or country-based CERT groups disclosing vulnerabilities in their native tongue, VDBs have a harder time processing the information. Each vulnerability that is disclosed but does not end up in the VDB, ultimately factors into statistics such as “there were X vulnerabilities disclosed last year”.

Publication bias governs what portion of the research gets published. This ranges from “none”, to sparse information, to incredible technical detail about every finding. Somewhere between selection and publication bias, the researcher will determine how much time they are spending on this particular product, what vulnerabilities they are interested in, and more. All of this folds into what gets published. VDBs may discover a researcher’s disclosure, but then decide not to publish the vulnerability due to other criteria.

Abstraction bias is a term that we crafted to explain the process that VDBs use to assign identifiers to vulnerabilities. Depending on the purpose and stated goal of the VDB, the same 10 vulnerabilities may be given a single identifier by one database, and 10 identifiers by a different one. This level of abstraction is an absolutely critical factor when analyzing the data to generate vulnerability statistics. This is also the most prevalent source of problems for analysis, as researchers rarely understand the concept of abstraction, why it varies, and how to overcome it as an obstacle in generating meaningful statistics. Researchers will use whichever abstraction is most appropriate or convenient for them; after all, there are many different consumers for a researcher advisory, not just VDBs. Abstraction bias is also frequently seen in vendors, and occasionally researchers, in the way they disclose one vulnerability multiple times, as it affects different software that bundles another vendor’s software.
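To illustrate the counting effect (with invented data; real VDB abstraction policies are considerably more nuanced), consider ten cross-site scripting flaws in different files of one product, counted under two hypothetical abstraction policies:

```python
# Hypothetical disclosures: the field names and grouping policies are
# illustrative only, not how any real VDB assigns identifiers.
disclosures = [
    {"product": "ExampleApp", "flaw_type": "XSS", "file": f"page{i}.php"}
    for i in range(10)
]

def count_per_flaw(items):
    # Fine-grained abstraction: one identifier per affected file.
    return len({(d["product"], d["flaw_type"], d["file"]) for d in items})

def count_per_advisory(items):
    # Coarse abstraction: one identifier per product/flaw-type pair.
    return len({(d["product"], d["flaw_type"]) for d in items})

print(count_per_flaw(disclosures))      # fine-grained abstraction
print(count_per_advisory(disclosures))  # coarse abstraction
```

A naive comparison of the two databases would report a tenfold difference in “vulnerabilities” for the exact same set of disclosures.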

Measurement bias refers to potential errors in how a vulnerability is analyzed, verified, and catalogued. For example, with researchers, this bias might be in the form of failing to verify that a potential issue is actually a vulnerability, or in over-estimating the severity of the issue compared to how consumers might prioritize the issue. With vendors, measurement bias may affect how the vendor prioritizes an issue to be fixed, or in under-estimating the severity of the issue. With VDBs, measurement bias may also occur if analysts do not appropriately reflect the severity of the issue, or if inaccuracies are introduced while studying incomplete vulnerability disclosures, such as missing a version of the product that is affected by the vulnerability. It could be argued that abstraction bias is a certain type of measurement bias (since it involves using inconsistent “units of measurement”), but for the purposes of understanding vulnerability statistics, abstraction bias deserves special attention.

Measurement bias, as it affects statistics, is arguably the domain of VDBs, since most statistics are calculated using an underlying VDB instead of the original disclosures. As the primary sources of vulnerability data aggregation, several factors come into play when performing database updates.

Why Bias Matters, in Detail

These forms of bias can work together to create interesting spikes in vulnerability disclosure trends. To the VDB worker, they are typically apparent and sometimes amusing. To an outsider just using a data set to generate statistics, they can be a serious pitfall.

In August 2008, a single researcher using rudimentary yet effective methods for finding symlink vulnerabilities single-handedly caused the most significant spike in symlink vulnerability disclosures of the past 10 years. Starting in 2012 and continuing up to the publication of this paper, a pair of researchers have significantly impacted the number of disclosures in a single product. Not only has this caused a huge spike for the vulnerability count related to the product, it has led to them being ranked as two of the top vulnerability disclosers since January 2012. Later this year, we expect there to be articles written regarding the number of supervisory control and data acquisition (SCADA) vulnerabilities disclosed from 2012 to 2013. Those articles will be based purely on vulnerability counts as determined from VDBs, likely with no mention of why the numbers are skewed. One prominent researcher who published many SCADA flaws has changed his personal disclosure policy. Instead of publicly disclosing details, he now keeps them private as part of a competitive advantage of his new business.

Another popular place for vulnerability statistics to break down is related to vulnerability severity. Researchers and journalists like to mention the raw number of vulnerabilities in two products and try to compare their relative security. They frequently overlook the severity of the vulnerabilities and may not note that while one product had twice as many disclosures, a significant percentage of them were low severity. Further, they do not understand how the industry-standard CVSSv2 scoring system works, or the bias that can creep in when using it to score vulnerabilities. Since a vague disclosure with little actionable detail will frequently be scored for the worst possible impact, that practice also drastically skews the severity ratings.
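A toy comparison (invented products and CVSS-like scores, not real data) shows how raw counts and severity-aware counts can tell opposite stories:

```python
# Hypothetical severity scores on a CVSS-like 0-10 scale; both the
# products and the numbers are invented for illustration.
product_a = [9.8, 9.1, 8.5]                  # 3 disclosures, high severity
product_b = [3.1, 2.6, 4.0, 3.3, 2.2, 3.7]  # 6 disclosures, low severity

def raw_count(scores):
    return len(scores)

def high_severity_count(scores, threshold=7.0):
    # Count only disclosures at or above a "high severity" cutoff.
    return sum(1 for s in scores if s >= threshold)

# Raw counts make product B look twice as "insecure"...
print(raw_count(product_a), raw_count(product_b))
# ...while a severity-aware count tells the opposite story.
print(high_severity_count(product_a), high_severity_count(product_b))
```

The raw-count headline (“product B had twice as many vulnerabilities”) and the severity-aware reading (“product A had all of the high-severity ones”) come from the same data set.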


The forms of bias and how they may impact vulnerability statistics outlined in this paper are just the beginning. For each party involved, for each type of bias, there are many considerations that must be made. Accurate and meaningful vulnerability statistics are not impossible; they are just very difficult to accurately generate and disclaim.

Our 2013 BlackHat Briefings USA talk hopes to explore many of these points, outline the types of bias, and show concrete examples of misleading statistics. In addition, we will show how you can easily spot questionable statistics, and give some tips on generating and disclaiming good statistics.