On the origins of the term ‘Hacktivism’…

This blog is not about debating the definition of Hacktivism; I will leave that to the academics and self-described hacktivists. This article aims to clear up confusion about the origin of the term, and to point out that Wikipedia’s handling of factual information is sketchy. Further, it will show that the Cult of the Dead Cow (cDc) happily went along with the notion that they coined the term, when they did not. Even when it was clear that their own dates and stories didn’t line up, that didn’t dissuade them from keeping up appearances.

The Wikipedia entry on Hacktivism currently states that the term was coined by cDc:

The term was coined in 1996 by a Cult of the Dead Cow member known as “Omega”.[2] However, similar to its root word hack, hacktivism is an ambiguous term (computer hacking is tied to several meanings).

Today, the page offers no other reference for the source of this term. If you look back at prior versions of the page, that wasn’t always the case. On May 17, 2013 we see:

The term itself was coined by techno-culture writer Jason Sack in a piece about media artist Shu Lea Cheang published in InfoNation in 1995.

This line was added by ‘Orb Weaver’ on July 23, 2009 with this edit. It was deleted by ‘Pkinnaird’ on May 20, 2013 with this edit. The notes for the edit say:

(Removed references to destructive activities since they are well described in cyberterrorism article. Clarified that the word ‘hacktivism’ is contentious and removed most discussion of hacktivists as cyberterrorists since that is a separate notion.)

This looks like an innocent edit, removing a long list of ‘hacktivism’ incidents and replacing it with a few short examples. However, in doing so, it effectively killed any reference to a prior source of the word. In short, this edit is very irresponsible. I would cite you the purpose of Wikipedia and something along the lines of “factual”, but curiously enough, being factual is not part of the mission statement. While you may quickly assume that “develop educational content” means factual content, that is simply not the case. Look at the battle in the US over schools teaching evolution versus creationism. No matter which you believe in, the other side can safely claim to be “developing educational content” as a valid argument.

The line about Omega of Cult of the Dead Cow was added on November 22, 2011 with this edit and a change message of “Term coined in 1994 by “Omega” of the Cult of the Dead Cow Hacker collective.” At the bottom of the page, the first reference is “Hacktivism and How It Got Here“, a Wired piece by Michelle Delio from July 14, 2004. Note that Delio is not known for quality journalism and was let go from Wired due to serious issues surrounding her sketchy sources and fabrications. From Delio’s article:

But no one called technology-enabled political activism “hacktivism” until 1998, when cDc members Omega, Reid Fleming and Ruffin were chatting online and were, Ruffin said, “bouncing some wacky ideas around about hacking and political liberation, mostly in the context of working with Chinese hackers post-Tiananmen Square.”
“The next morning Omega sent an e-mail to the cDc listserv and included for the first time the word hacktivism in the post,” Ruffin said. “Like most cDc inventions, it was used seriously and ironically at the same time — and when I saw it my head almost exploded.”

Interesting that Delio, citing cDc member Oxblood Ruffin in her 2004 article, says the term was coined by cDc in 1998, yet Wikipedia said 1994. In a different interview, with Elinor Mills in 2012, Ruffin was quoted as saying it originated in 1996. The Wikipedia page has cited this source for most of the page’s history, but has at various points given the year as 1994, 1996, and 1998. In most cases, Ruffin’s story is the same, that the term originated in an email between cDc members, but he has apparently never provided a copy of this email to journalists or made it public. It is clear that Ruffin is not a reliable source on this and is likely doing it to subvert the media, a stated objective of cDc.

An Earlier Origin

As mentioned above, Wikipedia once attributed the term differently:

The term itself was coined by techno-culture writer Jason Sack in a piece about media artist Shu Lea Cheang published in InfoNation in 1995.

A couple of years ago I tried to reach out to Jason Sack to confirm this. My early attempts at reaching him did not work; the one email address I found was no longer in use. Last year, Space Rogue reached out via a different email address and got a response. We both asked Sack if he could dig up the original article and send a copy. Since he only had a copy in print, it took a while to find it, scan it, and send it to us. But he did. As suspected, and as the original sourcing in Wikipedia says, he used the term ‘hacktivist’ in 1995 under the pen name ‘Jason Logan’, a year or three before cDc supposedly coined it. Courtesy of Jason, the cover of the InfoNation magazine along with scans of the article are available as a more definitive reference (click thumbnails below for full size). As the author of this blog, I cannot update Wikipedia to correct the errors in it due to a conflict of interest. Someone else out there will have to do it.

infonation-nov-1995-00  infonation-nov-1995-01  infonation-nov-1995-02  infonation-nov-1995-03  infonation-nov-1995-04

From the article:

Fresh Kill is described by Cheang herself as a work of eco-cybernoia. An environment in which the inability to access the media of change causes the uprising of low-fi activism and hacker mentality, or “hacktivism” if you will.

I could do this all day… (Poor vuln stats from @GFISoftware)

[This was originally published on the OSVDB blog.]

Despite the talk given at BlackHat 2013 by Steve Christey and me, companies continue to produce pedestrian and inaccurate statistics. This batch comes from Cristian Florian at GFI Software and offers little more than confusing and misleading statistics. Florian falls into many of the traps and pitfalls outlined previously.

These are compiled from data from the National Vulnerability Database (NVD).

There’s your first problem, using a drastically inferior data set than is available. The next bit really invalidates the rest of the article:

On average, 13 new vulnerabilities per day were reported in 2013, for a total of 4,794 security vulnerabilities: the highest number in the last five years.

This is laughable. OSVDB cataloged 10,472 disclosed vulnerabilities for 2013 (an average of 28 a day), meaning these statistics were generated with less than half of the known vulnerabilities. 2013 was our third year breaking 10,000 vulnerabilities, while the other databases have at most a single such year (2006), if any at all. Seriously, what is the point of generating statistics when you knowingly use a data set missing so much? Given that 2012 was another ’10k’ year, the statement about 2013 being the highest number in the last five years is also wrong.

Around one-third of these vulnerabilities were classified ‘high severity’, meaning that an exploit for these vulnerabilities would have a high impact on the attacked systems.

By whom? Who generated these CVSS scores exactly, and why isn’t that disclaimed in the article? Why no mention of the ‘CVSS 10’ scoring problem, where VDBs must default to a 10.0 score for a completely unspecified issue? With a serious number of vulnerabilities either scored by vendors with a history of incorrect scoring, or scored ‘10’ by VDBs forced to do so for unspecified issues, these numbers are completely meaningless and skewed.

The vulnerabilities were discovered in software provided by 760 different vendors, but the top 10 vendors were found to have 50% of the vulnerabilities:

I would imagine Oracle is accurate on this table, as we cataloged 570 vulnerabilities from them in 2013. However, the rest of the table is inaccurate because #2 is wrong. You say Cisco with 373, I say ffmpeg with 490. You say #10 is HP with 112 and I counter that WebKit had 139 (which in turn adds to Apple and Google among others). You do factor in that whole “software library” thing, right? For example, what products incorporate ffmpeg that have their own vulnerabilities? These are contenders for taking the #1 and #2 spots on the table.

Most Targeted Operating Systems in 2013

As we frequently see, there is no mention of severity here. Comparing the 363 Microsoft vulnerabilities in 2013 to the 161 Linux kernel issues, impact and severity are important to look at. Privilege escalation and code execution are typical for Microsoft, while authenticated local denial of service accounts for 22% of the Linux issues (and only 1% of Microsoft’s).

In 2013 web browsers continued to justle – as in previous years – for first place on the list of third-party applications with the most security vulnerabilities. If Mozilla Firefox had the most security vulnerabilities reported last year and in 2009, Google Chrome had the “honor” in 2010 and 2011, it is now the turn of Microsoft Internet Explorer to lead with 128 vulnerabilities, 117 of them ‘critical’.

We already know your numbers are horribly wrong, as you don’t factor in WebKit vulnerabilities that affect multiple browsers. Further, what is with the sorting of this table putting MSIE up top despite it not being reported with the most vulnerabilities?

Sticking to just the browsers, Google Chrome had 297 reported vulnerabilities in 2013 and that does not count additional WebKit issues that very likely affect it. Next is Mozilla and then Microsoft IE with Safari at the lowest (again, ignoring the WebKit issue).

OSVDB: How bad is the scraping problem?

[This was originally published on the OSVDB blog.]

Via Twitter, blogs, or talking with our people, you may have heard us mention the ‘scraping’ problem we have. In short, individuals and companies are using automated methods to harvest (or ‘scrape’) our data. They do it via a wide variety of methods, but most boil down to a couple of techniques involving a stupid number of requests made to our web server.

This is bad for everyone, including you. First, it grinds our poor server to a standstill at times, even after several upgrades to larger hosting plans with more resources. Second, it violates our license, as many of the people scraping our data are using it in a commercial capacity without returning anything to the project. Third, it forces us to remove functionality that you liked and may have been using in an acceptable manner. Over the years we’ve had to limit the API, restrict the information / tools you see unauthenticated (e.g. RSS feed, ‘browse’, ‘advanced search’), and implement additional protections to stop the scraping.
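One of the simplest protections of the kind described above is rejecting requests that arrive with no User-Agent header, a common sign of a naive scraper. Here is a minimal, hypothetical sketch as WSGI middleware; OSVDB’s actual stack and filtering rules are not described in this post, so treat the names and the 403 response as illustrative assumptions only.

```python
# Hypothetical sketch: WSGI middleware that rejects requests lacking a
# User-Agent header. Not OSVDB's actual implementation.

def block_empty_user_agent(app):
    """Wrap a WSGI app; return 403 when no User-Agent is sent."""
    def middleware(environ, start_response):
        ua = environ.get("HTTP_USER_AGENT", "").strip()
        if not ua:
            # No User-Agent at all: likely an automated scraper.
            start_response("403 Forbidden",
                           [("Content-Type", "text/plain")])
            return [b"Requests without a User-Agent are not permitted.\n"]
        # Normal request: pass through to the wrapped application.
        return app(environ, start_response)
    return middleware
```

In practice a rule like this is trivially bypassed by setting any User-Agent string, which is why it would only be one layer among several (rate limits, authentication, and services like CloudFlare).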

So just how bad is it? We enabled some CloudFlare protection mechanisms a few weeks back and then looked at the logs.

  • The attacks against OSVDB.org were so numerous, the logs being generated by CloudFlare were too big to be managed by their customer dashboard application. They quickly fixed that problem, which is great. Apparently they hadn’t run into this before, even for the HUGE sites getting DDoS’d. Think about it.
  • We were hit by requests with no user agent (a sign of someone scraping us via automated means) 1,060,599 times in a matter of days…
  • We got hit by 1,843,180 SQL injection attack attempts, trying to dump our entire database in a matter of weeks…
  • We got hit by ‘generic’ web app attacks only 688,803 times in a matter of weeks…
  • In the two-hour period of us chatting about the new protection mechanisms and looking at logs, we had an additional ~ 130,000 requests with no user-agent.

To put that in perspective, DatalossDB was hit only 218 times in the same time period by requests with no user agent. We want to be open and want to help everyone with security information. But we also need for them to play by the rules.

An Open Letter to Ashley Carman, @SCMagazine, and @SkyboxSecurity

[This was originally published on the OSVDB blog.]

[Sent to Ashley directly via email. Posting for the rest of the world as yet another example of how vulnerability statistics are typically done poorly. In this case, a company that does not aggregate vulnerabilities themselves, and has no particular expertise in vulnerability metrics, weighs in on 2013 “statistics”. They obviously did not attend the talk Steve Christey and I gave at BlackHat last year, titled “Buying Into the Bias: Why Vulnerability Statistics Suck”. If we do this talk again, we have a fresh example to use courtesy of Skybox.]

[Update: SkyboxSecurity has quickly written a second blog in response to this one, clarifying a lot of their methodology. No word from Carman or SC Magazine. Not surprised; they have a dismal history as far as printing corrections, retractions, or even addressing criticism.]


Ashley,

In your recent article “Microsoft leads vendors with most critical vulnerabilities”, you cite research that is factually incorrect, and I fully expect a retraction to be printed. In fact, the list of errata in this article is considerably longer than the article itself. Some of this may seem to be semantics to you, but I assure you that in our industry they are anything but. Read down, where I show that their research is *entirely wrong* and Microsoft is not ‘number one’ here.

1. If Skybox is only comparing vendors based on their database, as maps to CVE identifiers, then their database for this purpose is nothing but a copy of CVE. It is important to note this because aggregating vulnerability information is considerably more demanding than aggregating a few databases that do that work for you.

2. You say “More than half of the company’s 414 vulnerabilities were critical.” First, you do not disclaim that this number is limited to 2013 until your last paragraph. Second, Microsoft had 490 disclosed vulnerabilities in 2013 according to OSVDB.org, apparently not one of the “20” sources Skybox checked. And we don’t claim to have all of the disclosed vulnerabilities.

3. You cite “critical vulnerability” and refer to Microsoft’s definition of that as “one that allows code execution without user interaction.” Yet Skybox did not define ‘critical’. This is amateur hour in the world of vulnerabilities. For example, if Microsoft’s definition were taken at face value, then code execution in a sandbox would still qualify, while being considerably less severe than code execution without one. If you go for what I believe is the ‘spirit’ of the research, vulnerabilities with a CVSS score of 10.0 (network, no user interaction, no authentication, full code execution impacting confidentiality / integrity / availability completely), then Microsoft had 10 vulnerabilities. Yes, only 10. If you add the ‘user interaction’ component, giving a CVSS score of 9.3, they had 176. That is closer to the ‘216’ Skybox is claiming. So again, how can you cite their research when they don’t define what ‘critical’ is exactly? As we frequently see, companies like to throw around vulnerability statistics but give no way to reproduce their findings.

4. You say, “The lab’s findings weren’t particularly surprising, considering the vendors’ market shares. Microsoft, for instance, is the largest company and its products are the most widely used.” This is completely subjective and arbitrary. While Microsoft captures the desktop OS market share, they do not capture the browser share, for example. Further, like all of the vendors in this study, they use third-party code from other people. I point this line out because when you consider that another vendor/software is really ‘number one’, it makes this line seem to be the basis of an anecdotal fallacy.

5. You finish by largely parroting Skybox, “Skybox analyzed more than 20 sources of data to determine the number of vulnerabilities that occurred in 2013. The lab found that about 700 critical vulnerabilities occurred in 2013, and more than 500 of them were from four vendors.” We’ve covered the ‘critical’ fallacy already, as they never define what that means. I mentioned the “CVE” angle above. Now, I question why you didn’t challenge them further on this. As a security writer, the notion that “20” sources has any meaning in that context should be suspect. Did they simply look to 20 other vulnerability databases (that do all the initial data aggregation) and then aggregate them? Did they look at 20 unique sources of vulnerability information themselves (e.g. the MS / Adobe / Oracle advisory pages)? This matters greatly. Why? OSVDB.org monitors over 1,500 sources for vulnerability information. Monitoring CVE, BID, Secunia, and X-Force (other large vulnerability databases) is considered to be 4 of those sources. So what does 20 mean exactly? To me, it means they are amateurs at best.

6. Jumping to the Skybox blog, “Oracle had the highest total number of vulnerabilities at 568, but only 18 percent of their total vulnerabilities were deemed critical.” This is nothing short of a big red warning flag to anyone familiar with vulnerabilities. This line alone should have made you steer clear from their ‘research’ and demanded you challenge them. It is well known that Oracle does not follow the CVSS standards when scoring a majority of their vulnerabilities. It has been shown time and time again that what they scored is not grounded in reality when compared to the researcher report that is eventually released. Every aspect of a CVSS score is frequently botched. Microsoft and Adobe do not have that reputation; they are known for generally providing accurate scoring. Since that scoring is the quickest way to determine criticality, it is important to note here.

7. Now for what you are likely waiting for. If not Microsoft, who? Before I answer that, let me qualify my statements since no one else at this table did. Based on vulnerabilities initially disclosed in 2013, that have a CVSS score of 10.0 (meaning full remote code execution without user interaction), we get this:

Oracle: 48
Adobe: 29
Microsoft: 10

Two vendors place higher than Microsoft based on this. Now, if we also consider “context-dependent code execution”, meaning that user interaction is required but it leads to full code execution (e.g. clicking a malicious PDF/DOC/GIF), which we base on a 9.3 CVSS score (CVSS2#AV:N/AC:M/Au:N/C:C/I:C/A:C), along with full remote code execution (CVSS2#AV:N/AC:L/Au:N/C:C/I:C/A:C), we get the following:

Microsoft: 176
Adobe: 132
Oracle: 122

I know, Microsoft is back on top. But wait…

ffmpeg: 326
libav: 286

Do you like apples?
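For readers who want to reproduce counts like the ones above from their own data, the vector matching I describe can be sketched in a few lines. This is an illustrative sketch only, not OSVDB’s internal tooling; the function names are my own, and the classification covers just the two vectors discussed here.

```python
# Illustrative sketch: classify a CVSS v2 base vector as 'critical'
# (full remote code execution, no user interaction, CVSS 10.0) or
# 'context-dependent' (user interaction required, CVSS 9.3), matching
# the two vectors discussed in the letter above.

def parse_cvss2(vector):
    """Turn 'AV:N/AC:L/Au:N/C:C/I:C/A:C' into a dict of metrics."""
    return dict(part.split(":") for part in vector.split("/"))

# CVSS 10.0: network, low complexity, no auth, complete C/I/A impact.
CRITICAL = {"AV": "N", "AC": "L", "Au": "N", "C": "C", "I": "C", "A": "C"}
# CVSS 9.3: same, but medium complexity (user interaction required).
CONTEXT_DEPENDENT = {"AV": "N", "AC": "M", "Au": "N",
                     "C": "C", "I": "C", "A": "C"}

def classify(vector):
    metrics = parse_cvss2(vector)
    if metrics == CRITICAL:
        return "critical"
    if metrics == CONTEXT_DEPENDENT:
        return "context-dependent"
    return "other"
```

Running every 2013 entry for a vendor through a filter like this, rather than accepting an undefined ‘critical’ label, is exactly the kind of reproducibility the Skybox research lacks.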

Brian Martin
OSF / OSVDB.org

CNN, the TSA, and the ‘Theatre’ of Terrorism

News flash from CNN a few minutes ago:

Terrorists may try to hide explosives in toothpaste or cosmetics tubes, U.S. warns airlines flying into Russia.

A law enforcement source said the warning is based on new information and added that there is no known threat to the United States.

Wait a minute! For ten years now, Americans have had to limit toothpaste and other toiletries in their carry-on bags. Why exactly did we have to do that? If there is “no known threat to the United States” today, then why isn’t this silly restriction lifted? The original cause of this restriction was, I would bet, a hypothetical scenario from a consultant or academic, not known cases of this technique being used. Even now, if we banned all toiletries including toothpaste, it would be trivial to sneak a significant amount of gel onto a plane.

Further, are they really saying that terrorists would be flying into Russia via the U.S.? Come on, geopolitics 101 says that is absurd when there is more than a fair share of terrorists already living in proximity to Russia who would not require air travel.

Not only do we live under silly policies that enforce the illusion of anti-terrorism, but we are constantly reminded of how absurd they are. Yet, we still can’t manage to get rid of them and use tactics that have a long track record of actually working.