Perlroth & The First (Zero-Day) Broker

I am currently reading “This Is How They Tell Me The World Ends” by Nicole Perlroth, only on page 60 in Chapter 5, so a long way to go before completing the 471-page tome. I hit chapter 4, titled “The First Broker”, and it was of particular interest to me, prompting this (second) blog on the book. A broker is defined as “a person who buys and sells goods or assets for others”, so I was never a vulnerability broker by that definition, and I am not trying to claim to be the actual first broker of zero-days in that context at all. Instead, I would like to share a couple of my own stories that are adjacent to the topic. This is all to the best of my recollection, but my memory isn’t the best due to being a diabetic and not having it under control for several years. If anyone involved in any of these stories has a different memory, please feel free to comment or reach out directly and I will update this blog accordingly.


First, for a few years in the mid-90s I was someone who ‘brokered’ deals in the sense of trading zero-day vulnerabilities. I belonged to multiple hacking groups, some as an actual member and some as an honorary member, and in several of them one of my roles was not writing the zero-days, because I simply wasn’t a coder and did not have that skill. Instead, it was to barter and try to gain access to specific zero-days a group or member wanted, and my currency was other zero-days we had. While I couldn’t code, my social network of hackers was sizable.

Some of what I was authorized to trade for was toward a goal like obtaining “any remote zero-day in $target operating system”, while in other cases it was “trade anything and everything we have for $specific-zero-day”. I acted as a go-between for the groups I was in and a liaison to the general hacker scene. Many knew me to have a well-rounded vulnerability collection, and we already traded more pedestrian exploits, some of which weren’t public but were definitely more circulated in such groups.

Back then it was just hackers and groups, not companies, so we didn’t have “duffel bags stuffed full of half a million dollars in cash to buy zero-day bugs” (p.49). Instead we had other zero-day bugs, which were just as valuable between groups and acted as the ideal currency. Just as Perlroth describes in her book relating the story of “Jimmy Sabien” (p.43), not his real name, the vulnerabilities had serious value back then too. Some were very closely guarded, to the point of not being shared with the holder’s own group. For example, Sally may have shared 99% of her exploits and zero-days with her group but held one back because it was so valuable. That one she would use sparingly herself, so as not to burn it, or authorize it to be traded only for a vulnerability of equal value. In those rare cases I would know just enough about the vulnerability to try to arrange a trade on her behalf, sometimes never seeing the vulnerability myself.

There were rumors at the time that some hackers had sold vulnerabilities to specific agencies in European governments. There were also rumors that some were trading zero-day exploits to a European law enforcement agency as a proffer, or as part of a plea, to avoid being charged for hacking activity. But those were just rumors at that point. To me, that was the precursor to the more financially driven zero-day market.


Later in the 90s, I was one of the two founders of a startup called Repent Security Inc. (RSI or RepSec). We were three people and started out trying to be a penetration testing shop. This was still early in the world of commercial penetration testing, and we were going up against companies that either had an established business reputation, like a couple of the ‘Big 5’ at the time, or were pioneers in the game, like the Wheel Group. We also created software for securely streaming logs over an encrypted tunnel, so if a system was popped, you had the logs on a remote host with timestamps, including your shell histories (which didn’t have timestamps natively). That software was partially outsourced to a renowned “InfoSec luminary” who had it developed by one of his interns on a compromised .edu machine and later essentially stole the software after RSI imploded. But that story is for another day, because it isn’t part of the zero-day world; it’s part of the Charlatan and Errata world.

One thing RSI had of real value was the vulnerability database I had been maintaining since 1993. It was first maintained for the hacker group I was part of (TNo), where it was originated by other members. When I took over maintaining it I worked on further organizing it, adding several points of metadata, and expanding it. After that group drifted apart I kept maintaining it while a member of w00w00 and an honorary member of ADM, where I brokered some trades. I did not maintain the databases for either of those groups, which were separate from mine, but I was privy to some of their exploits and shared some of what I had. Members from both groups would frequently ask me to check my database for exploits specific to an operating system or service they were targeting, as this was before Google, and Yahoo! didn’t aggregate much in the big picture. Even though a majority of vulnerabilities were posted to Bugtraq, you couldn’t just skim it quickly to determine what was there that you could use for your purpose. Someone who had them all sorted in a database with metadata was fairly valuable. To this day, many friends and colleagues still ask me to do vulnerability lookups, now with VulnDB.

Throughout my hacker days I maintained that database, and then continued to as I transitioned into a career doing penetration testing. Like Perlroth documents in her book about the early days of iDefense and the outfit that “Sabien” worked for, we all scoured Bugtraq for our information primarily. I had the benefit of several circles of hackers and hackers-turned-legit that still traded vulnerability intelligence (vuln intel); essentially the grey market back when the currency was still vuln intel, not those duffels of cash. By that point, the database that RSI had was unparalleled in the commercial world. It was initially created before, and maintained during, Fyodor’s Exploit World and Ken Williams’ Packetstorm. The RSI database came before the ISS X-Force database, before BID, before NIST’s ICAT Metabase, and before MITRE’s CVE. That said, it was heavy on exploit code but light on proper descriptions and solutions, so it was geared toward penetration testing and compromising machines rather than mature vulnerability intelligence.

As RSI struggled to get penetration testing gigs and opted to work on the “Secure Remote Streaming” (SRS) product, we took a trip to Atlanta to talk to ISS about selling a copy of our database to their relatively new X-Force penetration testing team (I forget who we met there, but I would love to remember!). That deal did not happen, and we soon found ourselves in talks with George Kurtz at Ernst & Young, one of the ‘Big 5’. While most or all of the ‘Big 5’ had penetration testing teams, their reputation wasn’t the best at the time, primarily because their testers were frequently traditional auditors turned penetration testers rather than ‘real’ testers: people who came up through the hacking ranks.

It is also important to remind everyone that back then these companies “did not hire hackers”. They literally printed it in advertisements as a selling point that they did not hire and would not consort with so-called black hats. This was almost always an outright lie. Either the company knew the background of their team and lied, or they did not know the background and conveniently overlooked that their employees had zero experience around that skillset on their resumes, yet magically were badass testers. Years of companies claiming this also led to what we see now, where many security professionals from that time still refuse to admit they used to hack illegally, even 25 years later.

Anyway, back to George and E&Y. It made sense that a shop like that would want to get their hands on RSI’s database. If their testers were primarily from the auditor / bean-counter side of things, they would not have had their own solid database. Even if they had hackers, it didn’t mean those hackers came with the same vuln intel we had. As best I recall, the negotiations went back and forth for a couple weeks and we settled on a one-time sale of the RSI database for $75,000, with the option to revisit selling ‘updates’ to it as we continued to maintain it. I believe this would have become the first commercial vulnerability intelligence feed, in early 1999. Then, disaster.

The FBI raided the offices of RSI, which was my apartment. At the time that was a death sentence to a penetration tester’s career. Regardless of guilt, the optics were those of black hat / criminal hacking, and finding someone to trust you to break into their systems was not going to happen. RSI dissolved and I found myself struggling to find work of any kind. So I reached back out to George about the deal we had on the table, the one we were close to signing, and said I was fine with the price; let’s do it. Suddenly, Kurtz had a change of heart.

His change of heart wasn’t about doing the deal; it was about the price. Instead of $75,000, he came back and said we could do the deal for $25,000, just a third of what we had agreed to. He knew I was in a tight spot and needed the money, and he took full advantage of that. This is someone who had a reputation of being a friend to hackers, someone who had bridged the gap between the business world and hackers to put together a reputable team at E&Y. He even had his name on a book about penetration testing, co-authored with names other hackers recognized. He was also very explicit that he knew I had no real power at that point, and he refused to budge on his one-third offer.

So when he had a chance to honor the deal we originally worked on, a chance to be a friend to a hacker, at no expense of his own? He opted to screw me. Since I was out of options and my limited savings were dwindling I had to accept the offer. That takes me full circle, via a meandering path I know, to likely making one of the largest vulnerability sales at the time. While it wasn’t a single exploit, a $25k deal that was originally set to be $75k is pretty impressive for the time. If RSI had made it, odds are we would have become a software (SRS) and vulnerability intelligence shop rather than a penetration testing shop.

We were already doing many aspects of what Perlroth describes about the early days of iDefense and “Sabien’s” shop, albeit with far fewer people than they had: we were aggregating information from Bugtraq and other sources, writing exploits for some of the vulnerabilities, and then beginning to try to sell that information. I guess it isn’t a big surprise that I eventually ended up in the vulnerability intelligence business.

Your yearly reminder to post to Full-Disclosure, not Bugtraq

[This was originally published on the OSVDB blog.]

[10/29/2020 Update: As of February 24, SecurityFocus has stopped moderating posts to the Bugtraq mail list without explanation or warning. This is apparently related to Broadcom acquiring Symantec, the owner of SecurityFocus.]

This has been a long-recognized and proven thing, but every year we run into more glaring examples. SecurityFocus, who runs the BID database that is part of Symantec’s DeepSight offering, routinely uses submissions to the Bugtraq mail list to seed their commercial database, sometimes days before approving the post. This means subscribers who use Bugtraq as one of many sources of ‘real-time’ vulnerability intelligence routinely get the short end of the stick. Full-Disclosure, managed by Fyodor and team, does not have that commercial interest in the content of posts to the list, and their average turnaround time in approving posts seems considerably better. So please, for the industry’s sake, post to Full-Disclosure and stop supporting Bugtraq.

Today’s example: a new CVE popped up in various places, and Google showed the first hit to be the BID database.

EMC only posts their advisories to the Bugtraq list, so we checked there first, since that would be the provenance. There were EMC advisories visible, but not the one with CVE-2017-4985.

Checking again today, the post had finally appeared on the list: SecurityFocus delayed it by three days while it was already in their database.

Missing Perspective on the Closure of the Full-Disclosure Mail List

[This was originally published on the OSVDB blog.]

This morning I woke to the news that the Full-Disclosure mail list was closing its doors. Assuming this is not a hoax (we are dangerously close to April 1st) and not spoofed mail that somehow got through, there seems to be perspective missing on the importance of this event. Via Facebook posts and Twitter I see casual disappointment, complaints that the list had a low signal-to-noise ratio, and comments that many had stopped reading it a while back. I don’t begrudge the last comment one bit. The list has certainly had its share of noise, but that is the price we pay as a community and industry for having a better source for vulnerability disclosure. Speaking to mail lists specifically, there were three that facilitated disclosure: Bugtraq, Full-Disclosure, and Open Source Security (OSS). Bugtraq has been around the longest and is really the only alternative to Full-Disclosure (remember that VulnWatch didn’t last and was ultimately low traffic). OSS is a list that caters to open source software and does not traffic in commercial software. A majority of the posts come from open source vendors (e.g. Linux distributions), the software’s maintainers, etc. It is used as much for coordination between vendors and getting a CVE assigned as it is for disclosure.

One of the first things that should be said is a sincere “thank you” to John Cartwright for running the list so long. For those of you who have not moderated a list, especially a high-traffic one, it is no picnic. The amount of spam alone makes list moderation a pain in the ass. Add to that the fake exploits, discussions that devolve into insults, and topics on the fringe of the list’s purpose, and sorting out what should be allowed becomes more difficult than you would think. More importantly, he has done it in a timely manner for so long. Read that again, because the timeliness is absolutely critical here. When vulnerability information goes out, it is important that it goes out to everyone equally. Many mails sent to Bugtraq and Full-Disclosure are also sent to other parties at the same time. For example, every day we get up to a dozen mails to the OSVDB Moderators with new vulnerability information, with those lists and other sources (e.g. Exploit-DB, OffSec, 1337day) in the CC. If you use one or a few of those places as your primary source for vulnerability intelligence, you want that information as fast as anyone else. A mail sent on Friday afternoon may hit just one of them before appearing two days later on the rest, due to the sites being run with varying frequency, work schedules, and dedication. Cartwright’s quick moderation made sure those mails went out quickly, often at all hours of the day and over weekends.

While many vulnerability disclosers will send to multiple sources, you cannot assume that every disclosure will hit every source. Some of these sites specialize in a type of vulnerability (e.g. web-based), while some accept most but ignore a subset (e.g. some of the more academic disclosures). Further, not every discloser sends to all of these sources; many will send to a single mail list (e.g. Bugtraq or FD), or to both. This is where the problem arises. Many of the people still posting to the two big disclosure lists are losing the list that basically guaranteed their work would be posted. Make no mistake, that isn’t the case for both lists.

This goes back to why Full-Disclosure was created in the first place (July 11, 2002). This was days before Symantec announced they were acquiring SecurityFocus (July 17, 2002). That was not a coincidence. While I can’t put a finger on when BugTraq changed for the worse exactly, I can assure you it has. Back in 2003, security researchers were noticing curious delays in their information being posted. One company challenged SecurityFocus/Bugtraq publicly, forcing them to defend themselves.

“The problem with SecurityFocus is not that they moderate the lists, but the fact that they deliberately delay and partially censor the information,” said Thomas Kristensen, CTO of Secunia, based in Copenhagen, Denmark. “Since they were acquired by Symantec they changed their policy regarding BugTraq. Before they used to post everything to everybody at the same time. Now they protect the interests of Symantec, delay information and inform their customers in advance.”

Wong says there is no truth to these accusations. “The early warnings that our DeepSight customers get come from places like BugTraq and events and incidents that we monitor,” Wong said. “We don’t give those alerts [from BugTraq] to our customers any sooner than anyone else gets them.”

Unfortunately for our community, Mr. Wong is absolutely incorrect. I have witnessed this behavior first hand several times over the years, as have others. From a series of mails in 2006:

* mudge (mudge @ uidzero org) [060120 20:04]:
Actually, this advisory is missing some important information. bugtraq engaged in this prior to the “buy out”. Security Focus engaged in this practice as well where there were some advisories that would go out only to the Security Focus paid private list and not be forwarded to the public list to which they were posted.

On Fri, 20 Jan 2006, H D Moore wrote:
FWIW, I have noticed that a few of my own BT posts will not reach my mailbox until they have already been added to the securityfocus.com BID database. It could be my subscriber position in the delivery queue, but it does seem suspicious sometimes. Could just be paranoia, but the list behavior/delivery delays definitely contribute to it.

In each case, moderators of Bugtraq vehemently denied the allegations. In one case, Al Huger (with Symantec at the time) reminded everyone that the combined lists of SecurityFocus were delivering over 7 million mails a day. That alone can cause delivery issues, of course. On the other hand, Symantec surely has the resources to run a set of mail servers that can churn out mail in that volume and ensure prompt delivery. Jump to more recently and you can still see incredible delays that have nothing to do with delivery issues. For example, RBS posted an advisory simultaneously to both Bugtraq and Full-Disclosure. Notice that the mail was posted on Sep 10 for Full-Disclosure and Sep 19 for Bugtraq. A nine-day delay in moderating vulnerability information is not acceptable in today’s landscape of threats and bad actors. Regardless of intent, such delays simply don’t cut it.

In addition to the Bugtraq moderators having such delays, they will sometimes reject a post for trivial reasons such as “using a real IP address” in an example (one time using the vendor’s IP, another time using a public IP I control). They rejected those posts, while frequently allowing “target.com” in disclosures which is a real company.

With the death of Full-Disclosure, Bugtraq is now our primary source of vulnerability disclosure in the scope of mail lists, and the only source for vulnerabilities in commercial software (out of scope for OSS). To those who argue that people “use mail a lot less now”, I suggest you look at the volume of Bugtraq, Full-Disclosure, and OSS; that is a considerable amount of disclosures made through that mechanism. Another mindset is that disclosing vulnerabilities can be done with a Tweet using a hashtag and a link to pastebin or another hosting site. To this I can quickly say that you have never run a VDB (and try finding a full set of your original L0pht or @stake advisories; many have largely vanished). Pastebin dumps are routinely removed. Researcher blogs, even hosted on free services such as WordPress and Blogger, disappear routinely. Worse, vendors that host advisories in their own products will sometimes remove their own historical advisories. The “Tweet + link” method simply does not cut it unless you want vulnerability provenance to vanish in large amounts. It is bad enough that VDBs have to rely on the Internet Archive so often (speaking of, donate to them!), but forcing us to set up a system to mirror all original disclosures is a burden. Last, for those who argue that nothing good is posted to Full-Disclosure, Lucian Constantin points out a couple of good examples to counter that argument in his article on the list closing.

Mail lists, by contrast, provide an open, distributed method for releasing information. These lists are typically mirrored on multiple sites as well as in personal collections of incoming email, which makes them a considerably easier and safer method for vulnerability disclosures going forward. In my eyes, and the eyes of others who truly appreciate what Full-Disclosure has done, the loss of the list is devastating in the short term. Not only will it introduce a small amount of bias in vulnerability aggregation, it will take time to recover from. Even if someone else picks up the torch under the same name, or starts a new list to replace it, it will take time for people to transition to the new list.

To conclude, I would also ask that John Cartwright practice full disclosure himself. Shuttering the list is one thing, but blaming the action on an unnamed person with no real details isn’t what the spirit of the list is about. Give us details in a concise and factual manner, so that the industry can better understand what you are facing and what they may be getting into should they opt to run such a list.

Bogus RFI Reports Getting Out of Hand

[This was originally published on the OSVDB blog.]

I know we’re all getting tired of the Remote File Inclusion (RFI) vulnerabilities being disclosed that end up being debunked, but this one takes the cake so far (yes I’m behind on e-mail).

Fri Jun 16 2006
http://archives.neohapsis.com/archives/bugtraq/2006-06/0321.html
(1) path/action.php, and to files in path/nucleus including (2) media.php, (3) /xmlrpc/server.php, and (4) /xmlrpc/api_metaweblog.inc.php

Sat Jun 17 2006
http://archives.neohapsis.com/archives/bugtraq/2006-06/0447.html
Demonstrated that the vulnerability is bogus.

Mon Oct 30 2006
http://archives.neohapsis.com/archives/bugtraq/2006-10/0486.html
media.php

Mon Oct 30 2006
http://archives.neohapsis.com/archives/bugtraq/2006-10/0501.html
Demonstrated (again) that the vulnerability is bogus.

So not only is it fake, it was also previously disclosed and debunked. I swear, Bugtraq moderators should seriously consider blocking any RFI disclosure from hotmail.com. Would save Vulnerability Databases a lot of time.

[product] (script.php) Remote File Include [exploit|vulnerability]

[This was originally published on the OSVDB blog.]

Somewhere out there is a point-and-click web application that allows neophyte “security researchers” (yes, that is a joke) to quickly whip up their very own Bugtraq or Full-Disclosure post. I’m sure others have noticed this as well? More and more of the disclosures have too much in common, and unfortunately for VDBs, more and more are completely bogus reports. I feel bad for the vendors as much as I feel for those of us trying to track vulnerabilities. Anyway, some of the many things these disclosures have in common:

– Title (example: EasyBannerFree (functions.php) Remote File Include Exploit)
– # Everything is commented as if this is supposed to be a script
– The remote file inclusion is http://shell.txt or SHELLURL.COM
– It has a single line of source code quoted to “validate” the finding (example: rrequire ( $path_to_script.”globals.inc.php”); )
– May have 80 lines of perl code to exploit a single http:// line, because it looks cool
– Contains more greets/thanks than vulnerability information
– If their disclosure is proven false, they never seem to reply to explain themselves

Odds are strong they won’t name the vendor or give enough information to find the software even via extensive searching. Odds are good the post will not contain the version supposedly affected, and will contain typos in the script or variable names. And worst of all, it is a glorified “grep and gripe” disclosure. Meaning, they grep out the ‘require’ line, don’t bother to check any other portion of the code, and assume it is vulnerable. Some will go so far as to say things like “(tested on Version 1.13)” even though the claim is quickly proven false.
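For anyone who hasn’t seen one of these posts, here is a sketch of what the entire “grep and gripe” process amounts to. The file tree and filenames below are made up for illustration; a real post would point at some downloaded PHP package.

```shell
# Fabricated example target; stands in for a downloaded PHP package.
mkdir -p demo
cat > demo/functions.php <<'EOF'
<?php
// $path_to_script is expected to be set by the including script.
require($path_to_script."globals.inc.php");
EOF

# The whole "vulnerability analysis": grep for a require() built from a
# variable. No check of whether the variable can actually be attacker
# controlled (e.g. register_globals), no test against a live install.
grep -rn 'require(\$' demo
```

That single grep hit then gets dressed up as an exploit. Actually verifying it would mean confirming the variable is attacker-controllable on a real install, which is exactly the step these posts skip.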

So, “security researchers” disclosing all these remote file inclusion bugs. Test your finds before you publish, no more grep and gripe crap please.

No Exception for Symantec

[This was originally published on the OSVDB blog.]

Symantec posted a message to Bugtraq earlier this month announcing the availability of a new advisory. The advisory presumably covers a vulnerability or issue in Symantec On-Demand Protection. If you are reading this blog entry a year from now, that is all you may find on it. Yes, even in this day and age, not everything is archived in Google cache or archive.org! In December of 2000, Elias Levy (moderator of Bugtraq at the time) said that such posts were not acceptable because security company web sites had a habit of disappearing, leaving no trace of the information behind. Years later, Symantec bought SecurityFocus (who hosts/moderates the Bugtraq mail list) and we see this rule being ignored, and of course the approved post comes from their owner. Some may argue that Symantec is huge and won’t disappear like those other companies. Many said the same about @stake but shortly after they were purchased, their new owner (Symantec) opted to yank all of the old advisories off the web site making Elias Levy’s concerns reality. As Chris Wysopal said in reply, Symantec needs to post their advisories to the list just like everyone else. While Symantec may stick around, their web site may change or corporate policy may be altered, and that information may not be readily available in the future.

PHP-CHECKER

[This was originally posted to the OSVDB blog.]

Yichen Xie and other Stanford researchers posted to Bugtraq announcing “99 potential security vulnerabilities”, all SQL injections. Five issues/comments/questions come to mind:

1. This sounds impressive, but even by OSVDB’s level of abstraction (significantly higher than other VDBs), this is far from 99 vulnerabilities. Looking at the phpWebThings SQL injections announced, we see:

ERROR: ./forum.php:@main: _GET#g[“direction”]
ERROR: ./forum.php:@main: _POST#g[“direction”]

ERROR: ./forum.php:@main: _GET#g[“sforum”]
ERROR: ./forum.php:@main: _POST#g[“sforum”]

ERROR: ./forum.php:@main: _REQUEST#g[“msg”]
ERROR: ./forum.php:@main: _GET#g[“msg”]

ERROR: ./forum.php:@main: _REQUEST#g[“forum”]
ERROR: ./forum.php:@main: _GET#g[“forum”]

By OSVDB standards, this is a single vulnerability (forum.php Multiple Variable SQL Injection). Even going one more level of abstraction and breaking it out by variable, we see that eight of these vulnerabilities are really four, just using different HTTP request methods. If any VDB were to break out such vulnerabilities, it would be interesting to see how it applied to the hundreds (thousands?) of previously disclosed SQL injections. Do researchers even check the different methods? In some cases, yes, but I have a feeling it is fairly rare.
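To make the abstraction argument concrete, here is a quick sketch that collapses the tool’s per-method findings from the phpWebThings output quoted above. The parsing regex is my own, written to match that output format:

```python
# Collapse per-method findings into per-variable and per-script counts,
# the way a VDB would abstract them. ERROR lines copied from the
# phpWebThings output quoted above.
import re

errors = """\
ERROR: ./forum.php:@main: _GET#g["direction"]
ERROR: ./forum.php:@main: _POST#g["direction"]
ERROR: ./forum.php:@main: _GET#g["sforum"]
ERROR: ./forum.php:@main: _POST#g["sforum"]
ERROR: ./forum.php:@main: _REQUEST#g["msg"]
ERROR: ./forum.php:@main: _GET#g["msg"]
ERROR: ./forum.php:@main: _REQUEST#g["forum"]
ERROR: ./forum.php:@main: _GET#g["forum"]""".splitlines()

pat = re.compile(r'ERROR: (\S+):@\w+: (_\w+)#g\["(\w+)"\]')
findings = [pat.match(line).groups() for line in errors]

per_method = len(findings)                            # what the tool claims
per_variable = len({(f, v) for f, _, v in findings})  # GET/POST/REQUEST collapsed
per_script = len({f for f, _, _ in findings})         # one OSVDB-style entry

print(per_method, per_variable, per_script)  # prints: 8 4 1
```

Eight claimed findings become four distinct variables, or a single entry at OSVDB’s level of abstraction.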

Utopia NewsPro – 8 claimed – 5 actual
e107 – 16 claimed – 4 actual
myBloggie – 16 claimed – 11 actual (1 previously disclosed)
PHP Webthings – 20 claimed – 7 actual
DCP Portal – 39 claimed – 16 actual (5 previously disclosed)

Total – 99 claimed – 33 actual (6 previously disclosed) = 27 new vulns

2. Some of the issues disclosed have already been reported. Of specific interest is the myBloggie login.php username Variable SQL Injection which was originally reported Sep 5, 2005, supposedly fixed, and found to still be vulnerable using a NULL character method. So, does Stanford’s PHP-CHECKER look for such variations, or is this a case of a false positive triggered due to the incomplete fix implemented?

Why does their tool find DCP Portal POST Method calendar.php year Variable (OSVDB 20494) vulnerable to SQL injection, but not POST Method register.php name Variable (OSVDB 20493) vulnerable? Seems like the vendor would have patched all or nothing, so finding one and not the other is suspicious.

3. Has the research team used it against other packages with a history of SQL injection problems to determine if it finds the same ones? Does it no longer find them on later versions, after vendor fix? In short, how robust and how accurate is this tool?

4. The top of the post says:

More detailed information, including proof of concept exploits (vendor notified, and since patched), about the tool can be obtained from the links below.

However, the DCP Portal vulnerabilities it found were disclosed as far back as Oct 1, 2003. Were they not patched correctly? The Stanford team says they tested 6.1.1, the vendor was notified, and the vulnerabilities patched, yet the vendor download page still shows 6.1.1 as current. PHP Webthings “1.4 patched” was tested, but the vendor download page still shows that as the current version, dated 07/05/2004. They tested e107 “v0.7” but didn’t indicate that 0.7 is “in development, available from CVS”, while 0.6172 is the current stable version. The myBloggie vendor page shows 2.1.3 beta as the current version, dated 15 Jun 2005, the same version tested by Yichen Xie et al. Only one of the five programs tested (Utopia NewsPro) has confirmation of a fixed version in a news update (“UNP 1.1.5 has been released to fix a few very minor security issues.“)

So where are the fixed versions for the rest?

5. Is this tool going to be released? If so, to who? If not, why not? This tool in the right hands could potentially eliminate thousands of SQL injections in countless programs in a matter of weeks.

Unresponsive Vendors (and a Bit of Irony)

[This was originally published on the OSVDB blog.]

Late yesterday, Jaime Blasco posted to Bugtraq looking for a security contact at 3Com in a further attempt to disclose a vulnerability in one of their products responsibly. Such posts are not uncommon these days, and they are one of the driving forces behind the OSVDB Vendor Dictionary. For vendors who may be under some delusion that their products contain no vulnerabilities: you should still maintain the security@ alias per RFC 2142. Ideally, we’d like for you to contact us with your preferred security address so our vendor dictionary stays updated and accurate.

The irony of Blasco’s post is that 3Com owns TippingPoint, which runs the Zero Day Initiative (ZDI), set up to purchase 0-day vulnerabilities from researchers. Why do I think that had Blasco mailed ZDI, he would have received a prompt reply?

Security Advisories, Mail Lists, and You

[This was originally published on the OSVDB blog.]

When a security researcher finds a vulnerability, they may choose to release the details in a formal advisory. The difference between a random post to a mail list and an advisory typically involves the level of detail and the amount of information peripheral to the vulnerability: discovery date, vendor communication timeline, patch information, a formal writeup, and technical details. Because advisories are used as marketing material as much as (or more than) vulnerability research/disclosure, some security companies would rather use them to attract attention to their web site.

To do this, they may post a brief message to a mail list announcing the discovery of a new vulnerability and a link to the advisory on their web page. This may seem logical and understandable, but in the long run this does a huge disservice to the security community. What happens when the security company goes out of business or gets purchased by another company? Overnight, all of their advisories and research may disappear. Mail list archives will then contain no useful information and a dead link to a site/advisory no longer there.

This problem (and debate) goes back a ways, most notably in 2000 when Elias Levy (then moderator of Bugtraq) rejected a post from @stake because the vulnerability report did not contain enough information. Thomas Greene covered this incident and dug into the issue. Levy later cited his reason for rejecting the post, which touches on my previous post:

“For very long we have tolerated the marketing copy on vendor advisories because while annoying they were accompanied by useful information. But in this change there is no value added to list subscribers. It’s for this reason that we are not accepting such advisories,”

For those of you who side with companies that post glorified advertisements without technical details, consider the following quote from Levy:

“I’ve asked the list subscribers for their opinions. I’ve received over five-hundred messages so far. While a handful of people liked the notices, the large majority of them, probably around 95 per cent, found the change to be a negative one and want me to hold firm to the policy of not approving them.”

The ultimate irony here is that Levy work[s|ed] for SecurityFocus, who was purchased by Symantec, who also recently purchased @stake and subsequently removed the @stake advisories from the web site a few weeks ago.

Disclosure: Greymatter Remote login/pass Disclosure

[This was originally disclosed on the Bugtraq mail list and touched up slightly for style and mirrored on attrition.org. VulnDB 4081, CVE-2002-0324.]

Software: Greymatter 1.21c and earlier
Vulnerability: Remote administrator login/password exposure
Vendor Status: Notified [0]

I originally saw this posted on Metafilter [1] and linked to a two line description [2]. As with many other attacks, you can google for a specific file and find vulnerable sites all over. I did a quick check and found 4 vulnerable out of the first 10 google reported back. Anyway, since I have very limited experience with Greymatter (GM), and I have reported one security bug to the author before, I typed up some more notes on the bug. This will be fairly easy to catch using whisker/Nikto if people use default installs (which is common). At the time of this post, Nikto [3] has been updated to look for the existence of Greymatter. The big sign of GM being present is /cgi-bin/gm.cgi .. that is the Greymatter login screen, and odds are GM is being run as root. Just getting the password will let you post to the blogger, erase entries, upload files and more. However, there are a lot of CGIs (listed below) associated with the package, and many could be vulnerable to the older attacks.

In the past I notified the author of a bug related to the password being stored in cleartext on the server, so that any local user could read it. This was actually discovered by looking at the access_log of apache. When rebuilding the GM threads/pages, it will include the login name and password in the HREF. A simple grep of “password” through access_logs, or snooping through the GM install directories, will find the administrator login for GM. This prompted me to look at the cause of the HREF, and led me to note that many of the GM files are mode 666 by default. The author acknowledged the vulnerability and indicated he rarely (if ever) supports the package. Many people are moving to Movable Type [4], which imports GM material and is being actively maintained. Movable Type apparently worries about security more as well. For those still using GM, there is user based support/upgrades/patches available [5]. The Greymatter home page can be found at http://noahgrey.com/greysoft/.
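
The grep-through-access_logs approach above can be sketched in a few lines. This is a minimal illustration, not the original method; the log line is a made-up example of what a leaked rebuild request might look like, and the exact query-string format is an assumption:

```python
import re

# Hypothetical Apache access_log entry; a GM rebuild could leak the
# credentials as query-string parameters in the requested HREF.
log_line = (
    '10.0.0.5 - - [08/Dec/2001:04:17:22 -0500] '
    '"GET /cgi-bin/gm.cgi?authorname=admin&authorpassword=s3cret HTTP/1.0" 200 1234'
)

# Pull the authorname/authorpassword pair out of any request line that has one.
leak = re.search(r'authorname=([^&\s"]+)&authorpassword=([^&\s"]+)', log_line)
if leak:
    print("login:", leak.group(1))
    print("password:", leak.group(2))
```

The same pattern can be run across an entire access_log with a loop over its lines, which is all the original “simple grep” amounts to.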

About Greysoft from their page:

Greymatter is the original—and still the world’s most popular—opensource weblogging and journal software. With fully-integrated comments, searching, file uploading and image handling, completely customisable output through dozens of templates and variables, multiple author support, and many other features, Greymatter remains the weblog/journal program of choice for tens of thousands of people around the world.

From the original post about the vulnerability [2]:

How to hack Greymatter driven sites

Just search for a file called “gmrightclick” in google and download a file
called “gmrightclick*.reg” where the stars represent a number. open it and
there you have it: Username and Password for everyone to use.

For those doing pen-testing or looking for the vulnerability, here are a few signs of Greymatter being used:

  • button “powered by greymatter”, links to: http://noahgrey.com/greysoft/
  • text that says “greymatter”
  • default blog string: Posted by @ [Link] [No Comments]
    : Posted by @ [Link] [2 Comments]
  • /cgi-bin/gm.cgi is present and offers login/pass
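
The signs above amount to a simple fingerprint check against a fetched page. A minimal sketch, assuming you already have the HTML body in hand (the snippet below is invented for illustration):

```python
# Fingerprints taken from the list of signs above.
SIGNS = [
    "powered by greymatter",
    "noahgrey.com/greysoft",
    "posted by",
    "/cgi-bin/gm.cgi",
]

def looks_like_greymatter(html):
    """Return True if the page body contains any known GM fingerprint."""
    body = html.lower()
    return any(sign in body for sign in SIGNS)

page = '<a href="http://noahgrey.com/greysoft/">powered by greymatter</a>'
print(looks_like_greymatter(page))  # True
```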

Here are the CGI’s in greymatter install (w/ default perms):

-rw-rw-rw- 1 root fs 304 Dec 8 04:17 gm-authors.cgi
-rw-rw-rw- 1 root fs 23 Sep 21 23:00 gm-banlist.cgi
-rwxr-xr-x 1 root fs 15571 Jan 12 2001 gm-comments.cgi*
-rw-rw-rw- 1 root fs 409 Sep 22 01:50 gm-config.cgi
-rw-rw-rw- 1 root fs 18 Dec 8 04:17 gm-counter.cgi
-rw-rw-rw- 1 root fs 23873 Dec 8 04:17 gm-cplog.cgi
-rw-rw-rw- 1 root fs 750 Dec 8 04:17 gm-entrylist.cgi
-rwxr-xr-x 1 root fs 10211 Jan 12 2001 gm-karma.cgi*
-rw-rw-rw- 1 root fs 157160 Feb 22 2001 gm-library.cgi
-rw-rw-rw- 1 root fs 20353 Sep 22 03:15 gm-templates.cgi
-rwxr-xr-x 1 root fs 9162 Jan 12 2001 gm-upload.cgi*
-rwxr-xr-x 1 root fs 388772 Feb 22 2001 gm.cgi*

The path to “gmrightclick*” can vary widely. This is user defined but often easy to find just by visiting the GM based blog/site. The default directory is (I believe) /archive/. Others you may often see are /archive/logs/ or /photo/archives/ depending on the GM usage.
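
Those common locations can be turned into a short probe list. A minimal sketch (the base URL is a placeholder, and the directory list is just the usual suspects named above):

```python
# Common gmrightclick locations per the note above; the real path is
# user-configured, so these are only the most frequent defaults.
COMMON_DIRS = ["/archive/", "/archive/logs/", "/photo/archives/"]

def candidate_dirs(base_url):
    """Build full URLs for each common archive directory."""
    return [base_url.rstrip("/") + d for d in COMMON_DIRS]

print(candidate_dirs("http://blog.example.com"))
```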

What prompts this vulnerability:

If the administrator uses the “Add Bookmarklets” feature to add a link/photo, it will add a new “gmrightclick” file unless they have set the “clear” function in their configuration. After adding a link, they need to hit the “Clear And Exit” button at the bottom of the page. This will remove all “gmrightclick*.reg” files.

Sites that customize their look/HTML will likely not have an open /archive/ dir. Sites that use the “Master Archive” option will not have a browsable /archive/ directory. This will make it difficult to find the file.

‘gmrightclick’ filename examples:
gmrightclick-150003.reg
gmrightclick-215087.reg
gmrightclick-146133.reg
gmrightclick-558618.reg

I assume the number is pseudo random, or based off PID or something else as an obscurity scheme. This WILL help for sites that customize or use the ‘master archive’ feature, as it will not let the user enter the /archive/ directory and clearly see the .reg files. You could possibly find one by brute force, but the gain is minimal. Further, the file can be deleted without hurting functionality, so it may not even be there despite brute forcing.
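
To make the brute-force math concrete, here is a sketch of the search space, assuming the suffix is a six-digit number as in the samples above (the actual generation scheme is unknown):

```python
def candidate_names(start, count):
    """Yield sequential gmrightclick filenames to probe for."""
    for n in range(start, start + count):
        yield "gmrightclick-%06d.reg" % n

# A full sweep is 10**6 names per directory, which is why the gain
# from brute forcing is minimal.
print(list(candidate_names(150000, 3)))
```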

GM is a unix package, but the ‘bookmarklet’ option is an Internet Explorer feature.

Contents of gmrightclick*reg:

REGEDIT4
[HKEY_CURRENT_USER\Software\Microsoft\Internet Explorer\MenuExt\Post To &Greymatter]

@="javascript:doc=external.menuArguments.document;lt=escape(doc.selection.createRange().text); loglink=escape(doc.location.href);loglinktitle=escape(doc.title); wingm=window.open('http://some.site.here.edu/cgi-bin/greymatter/gm.cgi?jericho=gmbmpost&authorname=ADMINNAME&authorpassword=CLEARTEXTPASSWORD&logtext='+lt+'&loglink='+loglink+'&loglinktitle='+loglinktitle,'gmwindow','scrollbars=yes,width=660,height=460,left=75,top=75,status=yes,resizable=yes');wingm.focus();"
"contexts"=hex:31

Notice the two fields: “authorname” and “authorpassword” above. With this information, you can log in w/ full administrative rights to a GM site.

References:

[0] http://foshdawg.net/forums/viewtopic.php?p=773#773
[1] http://www.metafilter.com/comments.mefi/15039
[2] http://www.dangerousmonkey.com/dangblog/dangarch/00000051.htm
[3] http://www.cirt.net/nikto/
[4] http://www.movabletype.org/
[5] http://foshdawg.net/forums/index.php