For those who haven’t seen, the Electronic Frontier Foundation (EFF) has created several lock-screen / wallpaper images related to protecting your rights. I wanted to use the first one on my Galaxy S8 Active, but the image interferes with seeing the clock, date, and notification icons. So I moved the text of the image down just enough that it fits comfortably while not obscuring any information. Screenshot below, along with a link to my version of the image, which you can download.
[This was originally published on the OSVDB blog.]
[Note: This blog had been sitting as a 99% completed draft since early September. I lost track of time and forgot to finish it off then. Since this is still a relevant topic, I am publishing now despite it not being quite as timely in the context of the articles cited in it.]
An article by Kim Zetter in Wired, titled “When Security Experts Gather to Talk Consensus, Chaos Ensues“, talks about a recent meeting that tried to address the decades-old problem of vulnerability disclosure.
The article covers a recent meeting organized by the National Telecommunications and Information Administration (NTIA), a division of the US Commerce Department (DoC). The meeting itself is not the main reason I am blogging, though most would assume I would speak up on this topic. I’ll give you an odd insight into this from my perspective, then get to the real point. The person who organized it was virtually introduced to me on July 31, 2015. He replied the same day, with a great introduction that showed intent to make this a worthwhile effort:
The US Dept of Commerce is trying to help in the old dance between security researchers and system/SW vendors. We think that folks on most sides want a better relationship, with more trust, more efficiency, and better results. Our goal is to bring the voices together in a “multistakeholder process” to find common ground in an open participant-led forum.
I’ve been trying to reach out to many people in this area, on all sides. I know that you’re both veterans of many discussions like this, and I’d like to chat with you about lessons learned, how things may (or may not) have changed, and generally get your insight into how to make this not suck.
That level of understanding from a U.S. government employee is rewarding and encouraging. But, in the interest of fairness and pointing out the obvious, the first thing I asked:
In the past couple of weeks, I have heard mention of the Dept. of Commerce becoming involved in vulnerability disclosure. One thing that is not clear to myself, and many others, is what your intended role or “jurisdiction” (for lack of better words) is?
That is an important question that the DoC must consider, and understand how to respond to.
That question should still be on everyone’s minds, as it has not been answered in a factual manner. Good intentions only get you so far. Having another government player in this mess, one who has no power, no jurisdiction, and only “the best of intentions”, can only muddy the waters at best. The notes of the ~6-hour meeting are now online. I haven’t read them, and don’t plan to. This is pure academic masturbation that has escaped from academia, and missed the age-old point that so many refuse to admit: “When the $government outlaws chinchillas, only the outlaws will have chinchillas.” Seriously, a stupidly simple concept that is basically as much the basis for our society as anything else you hold dear.
I’m jumping past that, because a vulnerability tourist said something else that needs addressing. If you haven’t seen me use that term, ‘vulnerability tourist’ refers to someone who dabbles in the bigger world of vulnerability aggregation, disclosure, and the higher-level ideas surrounding it. This doesn’t speak to finding vulns (although it helps), disclosing them (although it helps), or aggregating them for long-term analysis (although it helps). It speaks to someone who thinks they are doing good by speaking as some form of expert, but is actually harming the cause due to a lack of real knowledge on the topic. When someone with next to no experience in those realms speaks on behalf of the industry, it becomes difficult to take them seriously… but unfortunately, journalists do. Especially when it comes to hot topics or the dreaded “0day” vulnerabilities, which happen every day… while the media cherry-picks the occasional one. As usual in our industry, give such a person a bit of rope and you are likely to find them hanging a few days or weeks later.
Focusing on a single aspect of the Wired article:
Members of the audience snickered, for example, when a representative from the auto industry pleaded that researchers should consider “safety” when testing for vulnerabilities. At a bar after the event, some attendees said automakers are the ones who don’t seem concerned about consumer safety when they sell cars that haven’t been pen-tested for vulnerabilities or when it takes them five years to fix a known vulnerability.
And when Corman sought community support for new companies entering the bug bounty arena, some attendees responded with derision. He noted that after United Airlines launched its bug bounty program this year—the first for the airline industry—it suffered backlash from the security community instead of support.
Just Google “volkswagen emissions software” and you will see why the public cannot trust auto manufacturers. That has nothing to do with vulnerability research, and at least one manufacturer manipulated its software to defraud the public in ways that may hurt the world environment more than we can comprehend. If that isn’t enough for you, consider that the same company spent two years doing everything in its power to hide a vulnerability in its software that may have allowed criminals to more easily steal consumers’ cars.
So first, ask yourself why anyone from the security arena is so gung-ho in supporting the auto industry. Sure, it would benefit our industry, and more importantly benefit the average consumer. Effecting that change would be incredible! But given the almost three decades of disclosure debate centered on questionable dealings between researchers and vendors, it is not a fight you can win fast. And more importantly, it is not one you capitulate to for your own gain.
Moving past that, the quote from Corman speaks to me personally. When United announced their bug bounty program, it received a lot of media attention. And this is critical to note. A company not known for a bug bounty program, in an industry not known for it, offering perks that were new to the industry (i.e. United, airlines, and frequent flier miles). Sure, that is interesting! Unfortunately, none of the journalists that covered that new bounty program read their terms, or if they did, couldn’t comprehend why it was a horrible program that put researchers at great risk. To anyone mildly familiar with bounty programs, it screamed “run, not walk, away…”. I was one of the more vocal in criticizing the program on social media. When a new bounty opens up, there are hundreds of (largely) neophyte researchers that flock to it, trying to find the lowest hanging fruit, to get the quick and easy bounties. If you have run a vulnerability reporting program, even one without bounties, you likely have witnessed this (I have). I was also very quick to start reaching out to my security contacts, trying to find back-channel contacts to United to give them feedback on their offering.
United’s original bounty program was not just a little misleading, but entirely irresponsible. It basically said, in layman’s terms, “if you find a vuln in our site, report it and you may get up to 1 million airline miles!” It also said you cannot test ANY united.com site, you cannot test our airplanes (on the back of the Chris Roberts / United / FBI drama), our mobile application, or anything else remotely worth testing. The program had a long list of what you could NOT test, one that excluded every single target a malicious hacker would go after. Worse? It did not offer a test/dev network, a test airplane, or any other ‘safe’ target. In fact, it even excluded the “beta” United site! Uh… what were bounty-seekers supposed to do here? If United thinks that bug bounty seekers read past the “mail bugs to” and “this is the potential bounty” sections, they need to reconsider their bounty program. The original bounty excluded:
“Code injection on live systems”
Bugs on customer-facing websites such as:
Yet, despite that exclusionary list, they did NOT include what was ALLOWED to be tested. That, in modern security terms, is called a honeypot. Don’t even begin to talk about “intent”, because in a court of law, with some 17-year-old facing CFAA charges, intent doesn’t enter the picture until way too late. The original United program was set up so that they could trivially file a case with the FBI and go after anyone attacking any of their Internet-addressable systems, their mobile apps, or their airplanes. This came shortly after the messy public drama of a white hat hacker telling the media he had tested the airplane systems, and could have done “bad things”, in so many words.
My efforts to reach out to United via back-channels worked. One of their security engineers who is part of the bounty program was happy to open a dialogue with me. See, this is where we get to the bits that are the most important. We traded a few mails, in which I outlined my issues with the bounty program and gave them extensive feedback on how to better word it, so that researchers could not only trust the program, but help them with their efforts. The engineer replied quickly saying they would review my feedback, and I never heard back. That is the norm for me when I reach out and try to help a company. So now, when I go to write this blog, of course I look to see if United revised their bounty program! Let’s look at the current United bug bounty offering:
Bugs that are eligible for submission:
Bugs on United-operated, customer-facing websites such as:
Bugs on the United app
Bugs in third-party programs loaded by united.com or its other online properties
Wow… simply WOW. That is a full 180 from the original program! Not only do they allow testing of the sites they excluded before, they opened up more sites to the program. They better defined what was allowed, in both technical attacks and targets, and are considerably more lenient about testing. I still disagree a bit on a few things that are “not allowed”, but completely understand their reasons for doing so. They have to balance the safety of their systems and customers with the possibility that a vulnerability exists. And they are wise to consider that a vulnerability may exist.
So after all that, let’s jump back to the quote which drew my ire.
And when [Josh] Corman sought community support for new companies entering the bug bounty arena, some attendees responded with derision. He noted that after United Airlines launched its bug bounty program this year—the first for the airline industry—it suffered backlash from the security community instead of support.
This is why a sound bite in a media article doesn’t work. It doesn’t help vendors, and it doesn’t help researchers. It only helps someone who has largely operated outside the vulnerability world for a long time. Remember, “security researcher” is an overly vague term that has many meanings. Nothing about that coverage suggests Corman has experience in any of the disciplines that matter in this debate. As a result, his lack of experience shows clearly here. Perhaps it was his transition from being a “DevOps” expert for a couple years, to being a “Rugged Software” expert, to becoming a “vulnerability disclosure” expert shortly after?
First, United’s original bug bounty offering was horrible in every way. It had basically zero merit, and only served to put researchers at risk. Corman’s notion that it is ‘bad’ to scare companies away from such an example is irresponsible, and contradicts his stated purpose with the I Am The Cavalry initiative. While Corman’s intentions are good, the delivery simply wasn’t helpful to our industry, the automobile industry, or the airline industry. Remember, “the road to hell is paved with good intentions”.
[This was originally published on the OSVDB blog.]
This post is in response to the drama of the last few days, in which Mary Ann Davidson posted an inflammatory blog about security researchers who send Oracle vulnerabilities while violating the End User License Agreement (EULA… that thing you click without reading for every piece of software you install). The post was promptly deleted by Oracle, which then said it was not the corporate line, while journalists of course felt obligated to cover the drama. You can read up on the background elsewhere, because it has absolutely no bearing on reality, which this very brief blog covers.
This is such an absurdly simple concept to grasp, yet the CISO of Oracle (among others) is oblivious to it. Part of me wants to write a scathing 8-page “someone is wrong on the Internet” blog. The other part of me says sleep is more valuable than dealing with these mouth-breathing idiots, of which Davidson is one. Sleep will win, so here are the cold, hard facts and reality of the situation. Anything else should be debated at some obscure academic conference, but we know Oracle pays big money to argue it to politicians. Think about that.
Reality #1: Let’s start with an age-old saying… “when chinchillas are outlawed, only outlaws will have chinchillas.” Fundamentally, the simple fact that cannot be argued by any rational, logical human is that laws apply to law-abiding citizens. Those who break the law (i.e. criminal, malefactor, evildoer, transgressor, culprit, felon, crook, hoodlum, gangster, whatever…) do not follow laws. Those who ignore criminal law likely do not give two fucks about civil law, which a EULA violation would fall under.
Reality #2: Researchers get access to crappy Oracle software in the process of performing their job duties. A penetration test or audit may give them temporary access, and they may find a vulnerability. If the client doesn’t mandate they keep it in-house, the researcher may opt to share it with the vendor, doing the right thing. Where exactly does the EULA fit in here? It was agreed to by the customer, not the third-party researcher. Even if there is a provision in the EULA for such a case, if the company doesn’t warn the researcher of said provision, how can they be held liable?
Reality #3: Tying back into #1 here, what are the real consequences? This is civil law, not criminal. When it comes to criminal law, which is a lot more clear, the U.S. doesn’t have solid extradition case-law backing them. We tend to think “run to Argentina!” when it comes to evading U.S. law. In reality, you can possibly just run to the U.K. instead. Ignore the consequences, that is not relevant when it comes to the law in this context. If you focus on “oh but the death penalty was involved”, y
[This was originally published on the OSVDB blog.]
[2014-05-09 Update: We’d like to thank both McAfee and S21sec for promptly reaching out to work with us and to inform us that they are both investigating the incident, and taking steps to ensure that future access and data use complies with our license.]
Every day we get requests for an account on OSVDB, and every day we have to turn more and more people away. In many cases the intended use is clearly commercial, so we tell them they can license our data via our commercial partner Risk Based Security. While we were a fully open project for many years, the volunteer model we wanted didn’t work out. People wanted our data, but largely did not want to give their time or resources. A few years back we restricted exports and limited the API due to ongoing abuse from a variety of organizations. Our current model is designed to be free for individual, non-commercial use. Anything else requires a license and paying for the access and data usage. This is the only way we can keep the project going and continue to provide superior vulnerability intelligence.
As more and more organizations rely on automated scraping of our data in violation of our license, we have been forced to restrict some of the information we provide. As the systematic abuse rises, one of our only options is to further restrict the information while trying to strike a balance between helping the end user and crippling commercial (ab)use. We spend about half an hour a week looking at our logs to identify abusive behavior and block offenders from accessing the database, to help curb those using our data without a proper license. In most cases we simply identify and block them, and move on. In other cases, it is a stark reminder of just how far security companies will go to take our information. Today brought us two different cases which illustrate what we’re facing, and why their unethical actions ultimately hurt the community as we further restrict access to our information.
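That weekly log review largely boils down to spotting IPs walking our entry IDs in sequence. A minimal sketch of that kind of check, assuming a combined-format access log and made-up thresholds (this is illustrative, not our actual tooling):

```python
import re
from collections import defaultdict

# Combined log format line, e.g.:
# 1.2.3.4 - - [07/May/2014:09:37:06 -0500] "GET /show/osvdb/106231 HTTP/1.1" 200 ...
LINE_RE = re.compile(r'^(\S+) .*"GET /show/osvdb/(\d+) HTTP')

def flag_scrapers(log_lines, min_requests=100):
    """Return IPs whose requests walk entry IDs in near-sequential order."""
    hits = defaultdict(list)
    for line in log_lines:
        m = LINE_RE.match(line)
        if m:
            ip, entry_id = m.group(1), int(m.group(2))
            hits[ip].append(entry_id)

    scrapers = []
    for ip, ids in hits.items():
        if len(ids) < min_requests:
            continue
        # Mostly-ascending IDs with small gaps look like systematic enumeration,
        # regardless of what user-agent the client claims to be.
        steps = [b - a for a, b in zip(ids, ids[1:])]
        if steps and sum(1 for s in steps if 0 < s <= 5) / len(steps) > 0.9:
            scrapers.append(ip)
    return scrapers
```

A forged user-agent does nothing to hide this pattern, which is why the scrapers below were so easy to spot.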
This is not new in the VDB world. Secunia has recently restricted almost all unauthenticated free access to their database while SecurityFocus’ BID database continues to have a fraction of the information they make available to paying customers. Quite simply, the price of aggregating and normalizing this data is high.
In the first case, we received a routine request for an account from a commercial security company, S21sec, that wanted to use our data to augment their services:
From: Marcos xxxxxx (email@example.com)
To: moderators osvdb.org
Date: Thu, 16 May 2013 11:26:28 +0200
Subject: [OSVDB Mods] Request for account on OSVDB.org
I’m working on e-Crime and Malware Research for S21Sec (www.s21sec.com), a lead IT Security company from Spain. I would like to obtain an API key to use in research of phishing cases we need to investigate phishing and compromised sites. We want to use tools like “cms-explorer” and create our own internal tools.
Tlf: +34 902 222 521
http://www.s21sec.com , blog.s21sec.com
As with most requests like this, they received a form letter reply indicating that our commercial partner would be in touch to figure out licensing:
From: Brian Martin (brian opensecurityfoundation.org)
To: Marcos xxxxxx (firstname.lastname@example.org)
Cc: RBS Sales (sales riskbasedsecurity.com)
Date: Thu, 16 May 2013 15:26:04 -0500 (CDT)
Subject: Re: [OSVDB Mods] Request for account on OSVDB.org
The use you describe is considered commercial by the Open Security Foundation.
We have partnered with Risk Based Security (in the CC) to handle
commercial licensing. In addition to this, RBS provides a separate portal
with more robust features, including an expansive watch list capability,
as well as a considerably more powerful API and database export options.
The OSVDB API is very limited in the number of calls due to a wide variety
of abuse over the years, and also why the free exports are no longer
available. RBS also offers additional analysis of vulnerabilities
including more detailed technical notes on conditions for exploitation and
OSF / OSVDB
He came back pretty quickly saying that he had no budget for this, and didn’t even wait to get a price quote or discuss options:
From: Marcos xxxxxx (email@example.com)
Date: Mon, May 20, 2013 at 10:55 AM
Subject: Re: [OSVDB Mods] Request for account on OSVDB.org
To: Brian Martin (brian opensecurityfoundation.org)
Cc: RBS Sales (sales riskbasedsecurity.com)
Thanks for the answer, but I have no budget to get the license.
We figured that was the end of it really. Instead, jump to today when we noticed someone scraping our data and trying to hide their tracks to a limited degree. Standard enumeration of our entries, but they were forging the user-agent:
22.214.171.124 – – [07/May/2014:09:37:06 -0500] “GET /show/osvdb/106231 HTTP/1.1” 200 20415 “-” “Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:26.0) Gecko/20100101 Firefox/26.0”
126.96.36.199 – – [07/May/2014:09:37:06 -0500] “GET /show/osvdb/106232 HTTP/1.1” 200 20489 “-” “Mozilla/5.0 (Windows NT 6.1; WOW64; Trident/7.0; rv:11.0) like Gecko”
188.8.131.52 – – [07/May/2014:09:37:07 -0500] “GET /show/osvdb/106233 HTTP/1.1” 200 20409 “-” “Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)”
184.108.40.206 – – [07/May/2014:09:37:08 -0500] “GET /show/osvdb/106235 HTTP/1.1” 200 20463 “-” “Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/32.0.1700.76 Safari/537.36”
Visiting that IP told us who it was:
So after requesting data, and hearing that it would require a commercial license, they figured they would just scrape the data and use it without paying: 3,600 accesses between 09:18:30 and 09:43:19.
The second case, substantially more offensive, is that of security giant McAfee. They approached us last year about obtaining a commercial feed to our data, which culminated in a one-hour phone call with someone who ran an internal VDB there. On the call, we discussed our methodology and our data set. While we had superior numbers to any other solution, they were hung up on the fact that we weren’t fully automated. The fact that we did a lot of our process manually struck them as odd. In addition, we employed fewer people than they did to aggregate and maintain the data. McAfee couldn’t wrap their heads around this, saying there was “no way” we could maintain the data we do. We offered them a free 30-day trial to utilize our entire data set, and to come back to us if they still thought it was lacking.
They didn’t even give it a try. Instead they walked away thinking our solution must be inferior. Jump to today…
220.127.116.11 – – [04/May/2014:07:22:14 -0500] “GET /90703 HTTP/1.1” 200 6042 “-” “Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.131 Safari/537.36”
18.104.22.168 – – [04/May/2014:07:22:16 -0500] “GET /90704 HTTP/1.1” 200 6040 “-” “Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.131 Safari/537.36”
22.214.171.124 – – [04/May/2014:07:22:18 -0500] “GET /90705 HTTP/1.1” 200 6039 “-” “Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.131 Safari/537.36”
126.96.36.199 – – [04/May/2014:07:22:20 -0500] “GET /90706 HTTP/1.1” 200 6052 “-” “Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.131 Safari/537.36”
They made 2,219 requests between 06:25:24 on May 4 and 21:18:26 on May 6. Excuse us, but you clearly didn’t want to try our service back then. If you would like to give it a shot now, we kindly ask you to contact RBS so that you can do it using our API, customer portal, and/or exports as intended.
Overall, it is entirely frustrating and disappointing to see security companies, who sell their services based on reputation and integrity and claim to have ethics, completely disregard those ethics in favor of saving a buck.
Many years ago I realized that the End User License Agreements (EULAs) we are forced to endure for web sites and software had gotten out of hand. A lot of good points have been made in the past about them, and about how rarely they are read. I had written notes for an article, but wanted to add something that I had never seen before: what do all those EULAs look like if they are printed? I made a list of all the software on my computer at the time, and a handful of web sites. Ultimately, I never got around to doing it, but I mentioned it in discussions with various people.
Recently, Andrea Matwyshyn and I discussed it as well. Some months later, she and some of her students did exactly what I wanted. Pictured below are printouts of some of the many EULAs you have “read”, or at least agreed to. That is a whole lot of legalese you are bound to. Be scared.
[This was originally posted on the OSVDB blog.]
Our sponsor Risk Based Security (RBS) posted an interesting blog this morning about Research In Motion (RIM), creator of the BlackBerry device. The behavior outlined in that blog, and in the original blog by Frank Rieger, is shocking to say the least. Beyond the vulnerability outlined, potentially sending credentials in cleartext, it raises the question of legality. Quickly skimming the BlackBerry enterprise end-user license agreement (EULA), there doesn’t appear to be any warning that the credentials are transmitted back to RIM, or that RIM will authenticate to your mail server.
If the EULA does not contain explicit wording that outlines this behavior, it calls into question the legality of RIM’s actions. Regardless of their intention, whether trying to claim that it is covered in the EULA or making it easier to use their device, this activity is inexcusable. Without permission, unauthorized possession of authentication credentials is a violation of 18 U.S.C. § 1030(a)(2)(C), and potentially other sections depending on the purpose of the computer. Since the server doing this resides in Canada, RIM may be subject to Canadian law, and their activity appears to violate Section 342.1(d). Given the U.S. government’s adoption of BlackBerry devices, if RIM is authenticating to U.S. government servers during this process, this could get really messy.
Any time a user performs an action that would result in sharing that type of information, with any third party, the device or application should give explicit warning and require the user to not only opt-in, but confirm their choice. No exceptions.
Yesterday, Andrew Auernheimer (aka Weev), was sentenced for his 2012-08-16 indictment on one count of “fraud and related activity in connection with computers” (18 U.S.C. § 1030) and one count of “conspiracy to commit offense or to defraud” (18 U.S.C. § 371). This was the result of Auernheimer’s activities in 2010, where he manipulated a URL on an AT&T web site, and discovered an information enumeration vulnerability.
While a lot has been written in the last 24 hours on this topic, mostly via 140-character Tweets, most stories aren’t covering the full range of issues surrounding this case. Some stories cover the harsh sentencing, while older stories cover the simplistic nature of the vulnerability found. What I find lacking are stories that put it all together in context, to explain how absurd this is. There are three high-level components to this story.
Enumeration vulnerabilities come in a wide variety of formats. Via the web, they are often very simple and straightforward. A web site serves up content specific to you, customer #1234. Poorly designed web applications will identify you as customer #1234 to the application using a variable that is passed via the URL you send to the server. For example, something like:

https://www.example.com/account?customer_id=1234
You can clearly see your customer number in the URL. What happens if you change 1234 to 1235 and submit it to the server? In this case, you go to jail for 41 months. No exaggeration, no bullshit. That is a basic example of an information enumeration vulnerability, due to extremely poor coding practices and absolutely no security review of the application.
The frequency of such vulnerabilities is disturbing. But not as disturbing as the multi-million dollar companies that are entrusted to protect hundreds of thousands of customers’ data. If you are browsing the web or using your banking application and notice the above, and casually change 1234 to 1235, who is the real bad guy here? You, or the corporation that decided not to employ the most fundamental security measures of the last thirty years?
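To make the failure concrete, here is a minimal sketch of the broken lookup versus the decades-old fix: an authorization check tying the requested record to the authenticated session. All names and data here are invented for illustration; this is not AT&T’s code.

```python
# Toy customer database and session store (illustrative only).
RECORDS = {1234: "alice@example.com", 1235: "bob@example.com"}
SESSIONS = {"tok-alice": 1234}  # session token -> customer it belongs to

def lookup_vulnerable(customer_id):
    # Trusts the ID taken straight from the URL: change 1234 to 1235
    # and you are reading someone else's record. This is the enumeration bug.
    return RECORDS.get(customer_id)

def lookup_fixed(session_token, customer_id):
    # Authorization check: the requested record must belong to the
    # authenticated session, otherwise refuse to serve it.
    if SESSIONS.get(session_token) != customer_id:
        return None
    return RECORDS.get(customer_id)
```

The fix is a single comparison; the absence of that comparison is what made harvesting 114,000 addresses a matter of incrementing a number.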
This aspect of the story is perhaps the biggest disconnect for most readers. Instead of being exposed to the fundamentals, and the history of vulnerability discovery and how it influences disclosure, they get wrapped up in the media’s portrayal of Auernheimer. Yes, “weev” is a controversial character. He is an admitted Internet troll, an asshole of sorts, and a character of questionable repute. However, that doesn’t matter, at all. If being an asshole were a crime, all 18 people in the U.S. who weren’t would be left to read this.
So what did Auernheimer really do? He figured out an enumeration vulnerability in AT&T’s web site that let him enumerate the entire iPad user database. This constituted some 114,000 iPad 3G users. What information did the AT&T site give up, that Auernheimer got access to? Email addresses. No full names, no physical addresses, no phone numbers, no credit information, no passwords. In case you weren’t aware, you can purchase 50 million email addresses on a single ISP for a whole $500.
Why the big deal? This is where it gets a bit murky, at least to an outsider. When a researcher finds a vulnerability in a product, service, or web site, they have several avenues for disclosure. First, they can sit on the information and simply not disclose it. This doesn’t protect anyone, because the idea that no one else will find it is absurd, and has been proven wrong many times over. Second, they can disclose it in a ‘responsible’ (poor term, commonly used) or ‘coordinated’ (better term, use it) manner, in which they work with the vendor to disclose it only when the vendor is ready, and the issue has been fixed. Third, they can disclose it without informing the vendor, or they can disclose it after informing the vendor but not waiting for a fix. Each of these scenarios happens every week, a hundred times over.
The average citizen, including jurors and judges, does not understand the history or intricacies of vulnerability disclosure. There are vendors and service providers with a long history of not caring about vulnerabilities. That is, until it affects them in the public eye. A serious issue can exist for five, ten, or sometimes seventeen years without being fixed. When the right light hits the ordeal, usually via a negative high-profile media article, the company suddenly takes an interest. If Auernheimer had reported this to AT&T directly and waited for a fix, there is a good chance it would have gone unfixed for months, possibly years. Every day a ‘coordinated’ disclosure drags on runs the risk of someone with bad intentions finding the same issue.
Rather than go to AT&T and risk months of back-and-forth and/or waiting, Auernheimer opted to go to a media outlet. Why? Media pressure is one of the strongest motivations for a company to fix a vulnerability. One could argue that since the vulnerability was not very serious (again, just email addresses being disclosed), that going to a journalist instead of the company was not a big deal. Regardless of Auernheimer’s potential intentions regarding the embarrassment to AT&T, he took a route that would likely have the most success in getting the issue fixed.
For his “crime”, Auernheimer was sentenced to 41 months in prison, 3 years of probation, and ordered to pay $73,000 in restitution. Again, for showing how anyone could harvest a list of 114,000 email addresses. SC Magazine quickly wrote an article detailing 8 criminals who used computers in the commission of their crimes but received less prison time. I understand that courts are behind the times on computers, their use, abuse, and how to punish crimes related to them. I expect to see some discrepancy between sentencing in such cases. What I fail to understand is how a court can offer up such a sentence compared to other crimes that are certainly more destructive and more heinous. Consider the following crimes and sentences, all handed down very recently:
- Molesting 2 children can get you 14 months. [Source]
- Child abuse can get you 32 months. [Source]
- Manslaughter can get you as little as 42 months, just 1 month more than email addresses. [Source]
- Possession of child pornography is good for 48 months, just 7 months more than email addresses. [Source]
- Involuntary manslaughter, 50 month maximum per victim. [Source]
Perhaps the biggest comparison has been Auernheimer to the two Steubenville (Ohio) rapists, who were sentenced for a total of three crimes and collectively received less time. Trent Mays was convicted of raping a teenage girl, and ordered to spend “at least one year in an Ohio Department of Youth Services facility or until they are 21 years old”. Since Mays was also convicted of having pictures of a minor in “nudity-oriented material”, he received 1 additional year. Ma’Lik Richmond, also convicted of raping a teenage girl, received one year in the Youth Services facility. Two rapes, and essentially one count of child pornography, and collectively they got 36 months, compared to the 41 Auernheimer received. More disgusting is what is being called the “rape culture”, where news outlets such as CNN were apologetic to the rapists, decrying the sentencing and claiming their “lives were over”. Perhaps if Auernheimer’s lawyers had argued that he only “raped the AT&T system”, he would have received a year.
The Lesser of Two Weevs
Once again, forget about Auernheimer’s predilection for trolling or seeking to annoy people. That is entirely irrelevant to the case. He found a minor vulnerability on AT&T’s web site, he told a journalist who wrote an article about it, and AT&T fixed it. No one suffered real damage from his activity. He did not seek to profit from his activity. More interesting is that AT&T specifically wants bugs reported to them, which Auernheimer did about the same time as he notified the journalist. While he did not follow their desired process, both sides made their intentions clear; they want bugs fixed. In this case, they diverged in the method for effecting that change.
If Auernheimer had tried to profit from his activity, I understand how the court would seek to punish him. If he had sent emails to all 114,000 people defaming AT&T, or caused them to receive excessive emails, I would expect a harsher punishment. But given that a bank loan manager was recently sentenced to six months in prison for computer fraud in an attempt to increase her own lines of credit by more than $200,000, you have to wonder what other factors are at play here. Companies are frequently dealing with vulnerabilities, some disclosed directly to them, some exploited by bad guys, some reported via the media first. Why is the AT&T case so special?
It will be interesting to see how other crimes are dealt with in comparison. For example, the same day Auernheimer gets sentenced to prison, other anonymous researchers share their recent work that involved illegally accessing 420,000 systems on the Internet. In the United States, that constitutes 420,000 felonies.
That said, I for one am grateful that Auernheimer reported the vulnerability, both to the media and to AT&T directly. Given my personal history of dealing with vendors in vulnerability disclosure, I don’t blame him or any other researcher who opts not to work with a vendor. It is often a time-consuming and painful process that typically challenges your faith that a company cares about security and their customers. In this case, we got the lesser of two Weevs; the one that wasn’t intent on pissing off as many people as possible. The one who didn’t opt to use the information for profit, that didn’t sell the list to criminals, that didn’t actively try to compromise AT&T systems. And for that, he will receive over three years in federal prison. Think about it.
For BruCON 4 (2012), and for THOTCON 0x04 (2013), Josh Corman and I presented on Cyberwar. While the topic has been beaten to death, our talk focused on two aspects. First, a solid debunking of the rhetoric and hype that has dominated the topic for years. Second, building up a new set of ideas that seem to be lost on the cyberwar ‘experts’, that effectively broadens the topic and should make everyone reconsider what they think they know about it. In short, Cyberwar has been here for a long time, and it isn’t what we were expecting.
[This is a rebuttal/rant in which I ‘reply’ to various parts of a news article, originally published on attrition.org. This version has been updated for style.]
Computer crime: Changing the public’s perception
12 Oct 2000
You remember Jonathan James? He made national news a couple of weeks ago. You know, he’s that nice 16-year-old young man convicted of hacking into computers at the Pentagon, NASA, BellSouth, the Miami-Dade school system and many other places. That’s pretty funny. Right?
Can you imagine that some nasty judge put him in jail? Young Jonathan put it so well when he said, “I don’t think they should be putting a kid in jail because he proved they don’t have very good security.”
Fortunately, poor misunderstood Jonathan didn’t delete files or infect any computers with viruses while he was engaged in his youthful mischief. As his father put it, “All he did was go look at top secret government information.”
Hey, you know what they say — values come from the home. I can see where Jonathan learned his.
Wrong or not, the point is valid. What morons running these systems are allowing even sensitive, let alone secret, let alone TOP SECRET information to remain on publicly connected machines? Why aren’t the admins and managers who are violating US law and military regulations being put in jail too? Why do they continue to draw a taxpayer-funded income when they are violating US law, just like this hacker did? Yes, the hacker broke the law. Yes, he may deserve to be in jail. Yes, the people that put the sensitive information there in the first place should be sharing the cell with young Jonathan.
His father described his son as contrite. I guess that the obscene gesture he made at the courthouse to a photographer was yet another minor aberration.
Jonathan was lucky I wasn’t the judge.
Computer crime isn’t a joke. This attitude that he did them a favor by showing them that their security was bad is warped — absolutely and completely warped.
I suppose that Daddy James would be the first one thanking the burglar for breaking into his poorly secured home if the burglar only looked at his most private and personal possessions, but didn’t take anything.
We’re at a point where computers are an essential part of our society’s infrastructure. Any crime that touches the infrastructure of our society is by definition a significant crime.
Ok, so apply this same standard to all the people involved that made this crime possible. Apply the same standards to Jonathan, to the admin of the system who did not secure it, and to the managers and powers that be who determined the information should be online at all. It is a nice luxury for shortsighted and malicious journalists to use double standards here. But wait.. this isn’t a journalist writing.. read on.
The “ILOVEYOU” virus a few months ago is yet another example of the types of problems that can come from computer crime. “ILOVEYOU” disrupted businesses, governments, and people worldwide. We cannot permit these sorts of things to happen.
“ILOVEYOU” demonstrates that every computer has the capability of being a weapon of mass disruption, even destruction. As we become even more dependent on computers, hackers will have even more opportunities to cause mass disruption or destruction.
Oh, this isn’t overly dramatic, no… It is amusing to see that you don’t point out that these ‘weapons of mass destruction’ were ALL Windows systems. Why don’t you hold the creators of Windows even marginally responsible? Oh, can’t do that, gotta blame those evil hacker types. Great scapegoat and all.
“Wasn’t it cool when I turned off the air traffic control system?” “Wasn’t it great when I turned off all the respirators in the hospital from home?” I assure you that it’s just a matter of time before the things hackers do become even more outrageous and dangerous.
Hey, why not? As young Jonathan put it, “All the girls thought it was cool.” If you’re a male over about age 14, what more reason do you need to do something really stupid?
The problem with security, whether it’s hi-tech computer security or physical security is that “perfect” is an impossible goal. The goal is reasonable security.
Really? Seems to me people have proven computer systems can get pretty damn close to ‘perfect’. The problem is that the end user is naïve and scared of computers. They demand point and drool interfaces that require an IQ two points above a lemming. Because of this, security is sacrificed for the masses.
Everybody can and should implement three basic security concepts. You should start by controlling physical and logical access to sensitive information. Your methods could include passwords and encryption.
Wow. You just condemned the right person and didn’t even know it. Where were the good passwords and encryption on the sensitive files Jonathan accessed? Oops.
Next, you should require individual accountability for sensitive information and identify those with access. Finally, you need to have audit trails that show who accessed what information. Your audit trail should be able to answer the basic who, what, where, when, why, and how questions.
Wait, you condemned Jonathan for these break-ins, calling him a computer criminal who deserved jail time. Here you flat out say that the admins of the machines he hacked should be accountable. Why don’t you mention this in your misguided and opinionated rant above?
All too often, we see computer crime as not that big a deal. While the Computer Abuse Act of 1984 imposes a $250,000 fine or a five-year prison sentence, or both, for each offense, it just doesn’t often work that way.
Much like the people who are convicted of murder or rape only serving four years in prison? But wait, that’s ok, just burn the hackers.
While I don’t have any formal study to cite, experience has taught me that computer crime is generally not sternly punished.
No formal study to cite? There is an abundance of computer crime statistics out there. Statistics on computer intrusions are easy to find (CERT, Attrition, etc.). Information on hacker cases and convictions is available (DOJ). Why can’t you cite a study to back your claims?
We need to have a basic change in attitude about computer crime. What we must do is use harsh punishment along with reasonable security as deterrents. We have to deliver the message that hacking and other computer crimes are so difficult to prevent and the dangers that come from them are so great that our society simply won’t tolerate them.
Computer crime is not difficult to prevent as a general rule. There are thousands of networks out there that have suffered no external intrusion to date. What, are thousands of competent admins all just lucky?
What Jonathan did wasn’t a childish prank. Saying that there were no horrible consequences from what he did is like justifying drunk driving by saying, “But I got home and I didn’t have an accident.”
If I’d been the judge in a world with perfect laws, Jonathan wouldn’t get out of jail until he was 21 and would never, never, never earn a living in any job involving computers or programming. That’s punishment. That’s a message to others.
Mark Grossman is a shareholder and chairs the Computer and E-Commerce Law Group of Becker & Poliakoff, P.A. His website is http://www.EcomputerLaw.com and his e-mail address is firstname.lastname@example.org. Research assistant is Andrew Chulock.
Ahh, the true motivation. Convict them all.. because I am a lawyer and get paid to do it. I hear sirens, better run Mark.