Commentary on Radware’s Top Web Exploits of 2020

At the close of each year we see at least one article covering the top vulnerabilities / exploits from the prior year. These are usually written on the back of large detection networks across the Internet that get a comprehensive view of exploitation. It’s a great way to get real intelligence on criminal hacking activity. Unfortunately, we often see a breakdown when it comes to conveying that information in a useful manner. I know there is an argument to be made that the companies releasing such blogs are primarily after PR, sure. But they also have an opportunity to help their clients and the rest of the world by ensuring the blogs contain more useful and actionable information.

For this commentary, I’ll examine Radware’s blog, “The Top Web Service Exploits in 2020” published December 23, 2020 and covered almost verbatim by Security Magazine on January 5, 2021. I don’t have a view into exploit activity itself, but I do have a good view into the vulnerability disclosure landscape that is a cornerstone of this commentary.

We’ll start by setting out a few basic expectations for any such blog. First, each exploit should be tied to a unique vulnerability; if it is an exploit chain, the write-up should clearly delineate each vulnerability in the chain, or explain what the entry represents if it is not a pure vulnerability. Second, it should provide at least one external reference for each vulnerability: a CVE ID, vendor advisory, or a commonly accepted third-party advisory such as one from US-CERT or another similar body. This is what allows the reader to quickly determine whether their organization has patched against the vulnerability. If I have to spend considerable time trying to determine which vulnerability is being described, many organizations may be at a complete loss trying to figure it out.

With that, let’s look at the top 10 exploited vulnerabilities in 2020, according to Radware, and try to figure out some additional information for perspective. I will also be very clear that Radware’s blog is extremely frustrating and not immediately helpful, instead requiring a lot of extra work. The fact that they only attributed three exploits to a CVE ID is a dismal commentary on the CVE ecosystem. This analysis of their analysis will serve as a reminder that comprehensive vulnerability intelligence is the foundation of any good security program.


Service Exploit #1: /ws/v1/cluster/apps/new-application

Based on their description, this appears to match VulnDB 184750 “Apache Hadoop YARN ResourceManager REST API Request Handling Remote Command Execution”. The first thing of interest is that it was disclosed on October 19, 2016 and does not have a CVE assignment over four years later. No wonder many organizations aren’t aware of this vulnerability and have not sought out their own remediation strategy.

Service Exploit #2: /manager/html

This is summarized as “Apache Tomcat Manager Application Upload Authenticated Code Execution” and goes on to describe it as “This module can be used to execute a payload on Apache Tomcat servers that have an exposed “manager” application. The payload is uploaded as a WAR archive containing a JSP application using a POST request against the /manager/html/upload component.”

Despite this description, it does not cleanly map to any vulnerability in VulnDB. The closest matches are CVE-2017-12615 and CVE-2017-12617, which are abstractions for different platforms but fundamentally “Apache Tomcat HTTP PUT Method JSP File Upload Remote Code Execution”. On the surface this is a match with Apache Tomcat, a JSP application, and a POST request to achieve code execution. However, those two CVEs cover a JSP file upload, not a WAR archive, and do not mention the /manager/html/upload component. So we’re left wondering if the exploit described is simply a misconfiguration scenario (i.e. intended functionality not secured) or an actual disclosed vulnerability.

Service Exploit #3: /level/15/exec/-/sh/run/CR

Based on the description, this is a misconfiguration scenario where an administrator sets up a Cisco router with the HTTP admin interface enabled, but without password protection. This allows an attacker to use the legitimate functionality to run arbitrary commands.

Service Exploit #4: /admin/assets/js/views/login.js

Radware says this “resource belongs to Sangoma FreePBX code and it looks like the attackers are trying to detect vulnerable FreePBX servers and exploit one of the known vulnerabilities.” The first issue is that the script doesn’t immediately track to a VulnDB entry based on titles, which typically reflect the script name. However, let’s consider the URL being seen: … login.js. Rather than attempting to exploit “one of the known vulnerabilities”, I would suggest instead that they are trying default credentials. At least back around 2000, the tried-and-true default credentials of admin/admin were all you needed to access the interface.

This one is curious to me because presumably a company that was detecting exploit traffic and could see e.g. POST requests as demonstrated in Service Exploit #2, would also see that the attackers were trying the default credentials. So we’re left with Service Exploit #4 being of little help and only creating confusion over what is being exploited.

Service Exploit #5: /ftptest.cgi?loginuse=&loginpas=

Radware attributes this to “many cheap Wireless IP web cameras use the same genetic code based on the GoAhead code (the tiny, embedded web server).” This tracks cleanly with VulnDB 181032 “Axis Multiple Products axis-cgi/ftptest.cgi Multiple Parameters Remote Command Execution Weakness”. This is actually a fun rabbit hole, as this disclosure originally comes from an audit of an AXIS A1001 Network Door Controller, and exploitation of this issue requires privileged access to the management interface. With that in mind, we’re back to a default credential scenario that may be the actual issue. Back in 2001, defaults for Axis network cameras were covered by CVE-2001-1543.

[Update: Z Balazs points out that this finding is likely due to Persirai botnet activity and links to more information.]

Service Exploit #6: /service/extdirect

This is the first of the three exploits covered for which they include a CVE ID. CVE-2019-7238 maps to VulnDB 198437 “Nexus Repository Manager /service/extdirect Insufficient Access Control Request Handling Remote Code Execution”. But is that really the right ID? If we look at CVE-2020-10204 we are given a very brief summary of “Sonatype Nexus Repository before 3.21.2 allows Remote Code Execution” and a link to the vendor advisory. However, VulnDB 226228 also maps to this and is summarized as “Nexus Repository Manager /service/extdirect Request Handling Remote Command Execution”. We immediately see the /service/extdirect from Radware’s finding in both titles. The vendor’s advisory does not include this endpoint though, but we find it in an exploit published on GitHub that tracks with CVE-2020-10204, and we see it in a different exploit for CVE-2019-7238.

CVE-2019-7238 was fixed in Nexus Repository Manager version 3.15.0 and CVE-2020-10204 was fixed in version 3.21.2. Due to the vague vendor advisories it is difficult to tell if this was a regression situation or something else. But the CVE-2020-10204 vendor advisory gives us the interesting bit in the context of exploitation: “The vulnerability allows for an attacker with an administrative account on NXRM to execute arbitrary code by crafting a malicious request to NXRM.” That is an important distinction! So this is likely CVE-2019-7238 as Radware says, unless there are default credentials which would allow for exploiting CVE-2020-10204 as well.

Looking at the NVD entry for CVE-2020-10204 we also see that they scored this incorrectly for their CVSSv3 score, as ‘Privileges Required’ should be ‘High’, not ‘Low’ as they have it.
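To see how much that one metric matters, here is a quick check using the published CVSS v3.1 base score formula and metric weights. This assumes NVD’s vector is AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H (the vector consistent with their 8.8 score); only the Privileges Required weight changes between the two calculations:

```python
import math

# CVSS v3.1 metric weights from the FIRST specification.
AV_N, AC_L, UI_N = 0.85, 0.77, 0.85   # Attack Vector: Network, Attack Complexity: Low, UI: None
PR_LOW, PR_HIGH = 0.62, 0.27          # Privileges Required weights (Scope: Unchanged)
CIA_HIGH = 0.56                       # Confidentiality/Integrity/Availability impact: High

def roundup(x: float) -> float:
    # CVSS "Roundup": smallest value with one decimal place >= x.
    return math.ceil(x * 10) / 10

def base_score(pr: float) -> float:
    iss = 1 - (1 - CIA_HIGH) ** 3              # Impact Sub-Score
    impact = 6.42 * iss                        # Scope: Unchanged form
    exploitability = 8.22 * AV_N * AC_L * pr * UI_N
    return roundup(min(impact + exploitability, 10))

print(base_score(PR_LOW))   # 8.8 -- NVD's score with Privileges Required: Low
print(base_score(PR_HIGH))  # 7.2 -- the score with Privileges Required: High
```

Correcting Privileges Required from Low to High drops the base score from 8.8 to 7.2, a meaningful difference for anyone triaging by severity.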

Service Exploit #7: /solr/admin/info/system?wt=json

For this one, we get an Apache Bug ID (SOLR-4882) and CVE-2013-6397 as references which is great. That said, it would be very helpful if Radware would link to these resources to make it easier for their readers.

Service Exploit #8: /vendor/phpunit/phpunit/src/Util/PHP/eval-stdin.php

This is the third exploit they match to an ID, CVE-2017-9841 and it was disclosed June 27, 2017. Another good reminder that software with disclosed vulnerabilities and vendor solutions are not being applied, causing many organizations to become low-hanging fruit in the exploit world.

One little nitpick is that the full path they include is likely not how this would manifest on a server. Presumably, everything after “src” would be the endpoint being scanned: /Util/PHP/eval-stdin.php

Service Exploit #9: /hudson

With this, we run into another mess and rabbit hole. Radware summarizes this as “Hudson continuous integration tool – multiple vulnerabilities” and further describes Hudson as “a continuous integration tool written in Java, which runs in a servlet container, such as Apache Tomcat or the GlassFish application server. Over the years the project was replaced by Jenkins. The final release, 3.3.3, was on February 15, 2016. Today Hudson is no longer maintained and was announced as obsolete in February 2017.”

Based on this description, this could be any one of at least 50 vulnerabilities going back to February 2014, one of which does not have a CVE ID. 41 of these are in the Jenkins software mentioned above.

Other Service Exploits

This is a curious conclusion to the “top 10” list, as it states “In addition to the new items that we covered in this list, we have also seen items that we already saw and covered in our previous blog Top 10 Web Service Exploits in 2019 such as /ctrlt/DeviceUpgrade_1, /TP/public/index.php and /nice%20ports%2C/Tri%6Eity.txt%2ebak.”

That isn’t exactly a #10 on this list, rather a catch-all for “other stuff we saw including…”. The first listed tracks with VulnDB 170573 “Huawei HG532 Routers /ctrlt/DeviceUpgrade_1 NewStatusURL Element Remote Command Execution (Satori)”, which is notable as it is used in Satori, a Mirai botnet variant.

The second tracks with VulnDB 194379 “ThinkPHP /public/index.php call_user_func_array() Function vars[1][] Parameter Remote Code Execution”. Note the different exploit path; it can actually be exploited via several endpoints according to analysis of the vulnerability by the Knownsec 404 Team.

The third doesn’t immediately track with an entry in VulnDB. Radware gives us “/nice%20ports%2C/Tri%6Eity.txt%2ebak”, which we can decode to a more friendly “/nice ports,/Trinity.txt.bak”. A quick Google for that request finds a blog from Dragos titled “Threat Hunting With Python Part 2: Detecting Nmap Behavior with Bro HTTP Logs” explaining this request:

The request for “/nice ports,/Trinity.txt.bak” comes from Nmap’s service detection routine testing how a server handles escape characters within a URI. The actual request is “GET /nice%20ports%2C/Tri%6Eity.txt%2ebak HTTP/1.0\r\n\r\n”.
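The decoding is easy to verify yourself; a couple of lines of Python using only the standard library turn the escaped request back into the friendly form:

```python
from urllib.parse import unquote

# The raw probe path Radware observed in scan traffic.
raw = "/nice%20ports%2C/Tri%6Eity.txt%2ebak"

# %20 = space, %2C = comma, %6E = 'n', %2e = '.'
print(unquote(raw))  # /nice ports,/Trinity.txt.bak
```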

So this isn’t an actual exploit, rather, it indicates that attackers are using the Nmap port scanner. This is a good reminder that “exploit scanning” doesn’t always cleanly map to a specific vulnerability.


Detecting exploitation is critical for every organization, whether via on-premises devices or a managed detection service. What is more critical is having comprehensive and timely vulnerability intelligence that can turn what you detect into actionable information. This is how you not only detect, but evaluate and remediate, assuming of course the vulnerability is known to the vendor or a mitigation can be enacted.

Microsoft, CVE, MITRE, ETERNALBLUE, Headache…

2019-02-14 Update: Thanks to Chris Mills of MSRC (@TheChrisAM), who has been working behind the scenes since this blog was published and has brought clarity to these assignments! MSRC is still potentially touching up some additional documentation to make it easier to see these associations, but here is the definitive answer from him:

CVE-2017-0143 ShadowBrokers : EternalSynergy (Blog)
CVE-2017-0145 ShadowBrokers : EternalRomance (Blog)
CVE-2017-0144 ShadowBrokers : EternalBlue (Blog)
CVE-2017-0146 ShadowBrokers : EternalChampion (Blog)

Note that only the EternalChampion blog does not reference the associated CVE, but he is working on getting that updated. I have also recommended that MSRC update MS17-010 to use the codenames in that advisory as well. Apparently editing the actual bulletins takes a bit more work, but he’s on it! I can’t thank Chris enough for running with this and helping bring clarity to these assignments.


There was initially a lot of confusion over the Equation Group disclosure. Which were legitimate vulnerabilities, which were new, which were known, which were patched, and ultimately how they would be referred to other than their leaked nicknames. That is the purpose of the Common Vulnerabilities and Exposures (CVE) project (originally Common Vulnerability Enumeration): to give a unique ID to a specific issue so that you can reference a vulnerability without question. A year and a half later? We’re still wondering apparently.

I contacted Microsoft Security Response Center (MSRC) on August 6, 2017 asking for clarification on the CVE assignment for one of the Equation Group vulnerabilities codenamed ETERNALBLUE, because their own resources contradicted each other. From my mail:

Per an older blog [1], the vulnerability known as ‘EternalBlue’ is assigned CVE-2017-0145. From the blog:

However, in this unique case, the ransomware perpetrators used
publicly available exploit code for the patched SMB “EternalBlue”
vulnerability, CVE-2017-0145, which can be triggered by sending a
specially crafted packet to a targeted SMBv1 server.

A newer blog [2] now lists it as CVE-2017-0144, which I believe to be incorrect. From the blog:

The new ransomware can also spread using an exploit for the Server
Message Block (SMB) vulnerability CVE-2017-0144 (also known as
EternalBlue), which was fixed in security update MS17-010 and was
also exploited by WannaCrypt to spread to out-of-date machines.

Can you confirm the correct assignment for ‘EternanBlue’ [sic], and due to the second blog, the assignment for ‘EternalRomance’, and update your blog(s) accordingly?

All this time later? MSRC never answered my mail, and never fixed one of the two blogs. CVE’s description of each does not mention the nickname in either entry. So the assigning CVE Numbering Authority (Microsoft), or CNA, and the core CVE project (MITRE) still don’t answer this question. To date, the Microsoft advisories for those two CVE IDs still don’t mention the nickname. To add more confusion? Try using Google to find it, and you get a third CVE ID it may be (screenshot below). Although, that one result doesn’t actually have ‘EternalBlue’ in it, making us wonder why it is the sole result. The blog that MSRC originally published to add some clarity to the Equation Group still only references MS17-010 (and a dead link now). Looking at the new location for MS17-010 doesn’t find the nickname in the advisory either.

To this day, I am still fairly sure ETERNALBLUE is CVE-2017-0145 and attribute it as such, but it sure would be nice if MSRC would clean up and clarify this mess.

Further, I have had to chase down two more errant CVE assignments by MSRC in the last months, which was fairly painful. After getting the runaround on both, being told to go ask Microsoft Support via a forum (despite MSRC being the definitive source for this information), not getting a reply, opening a new ticket with MSRC, reminding them that I was still waiting… those two finally got resolved after a month or more. I really don’t like casting shade on MSRC as over the years, in total, they have been wonderful to deal with. However, the last couple of years have seen a serious decline in handling this type of incident, which should be ‘Vulnerability 101’, and a serious uptick in their resistance to clarify assignments when asked. Finally, if you are wondering why MITRE doesn’t provide some kind of oversight to this? Well, they basically never have, despite repeated requests for just that. Their only oversight is a ‘CNA Report Card’ that is more about statistics of assignments and such, and does not deal with the quality of assignments, incidents of confusion like this, or anything else that would be helpful to the community.

The only upside to all of this? I got to [sic] my own typo from the quoted email.

The Lesser of Two Weevs

Yesterday, Andrew Auernheimer (aka Weev), was sentenced for his 2012-08-16 indictment on one count of “fraud and related activity in connection with computers” (18 U.S.C. § 1030) and one count of “conspiracy to commit offense or to defraud” (18 U.S.C. § 371). This was the result of Auernheimer’s activities in 2010, where he manipulated a URL on an AT&T web site, and discovered an information enumeration vulnerability.

While a lot has been written in the last 24 hours on this topic, mostly via 140-character tweets, most stories aren’t covering the full range of issues surrounding this case. Some stories cover the harsh sentencing, while older stories cover the simplistic nature of the vulnerability found. What I find lacking are stories that put it together in context, to explain how absurd this is. There are three high-level components to this story.

The Vulnerability

Enumeration vulnerabilities come in a wide variety of formats. Via the web, they are often very simple and straight-forward. A web site serves up content specific to you, customer #1234. Poorly designed web applications will identify you as customer #1234 to the application using a variable that is passed via the URL you send to the server. For example:

/banking/account.php?date=20130317&account=checking&customer=1234

You can clearly see your customer number in the URL. What happens if you change 1234 to 1235 and submit it to the server? In this case, you go to jail for 41 months. No exaggeration, no bullshit. That is a basic example of an information enumeration vulnerability, due to extremely poor coding practices and absolutely no security review of the application.
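To make the flaw concrete, here is a hypothetical Python sketch (all names and data invented for illustration) of the broken pattern and the authorization check that would prevent it:

```python
# Toy "database" of customer records, keyed by customer number.
ACCOUNTS = {1234: "alice's checking", 1235: "bob's checking"}

def get_account_vulnerable(url_customer_id: int):
    # Trusts the customer number taken straight from the URL:
    # anyone who edits ?customer=1234 to 1235 gets Bob's data.
    return ACCOUNTS.get(url_customer_id)

def get_account_fixed(session_customer_id: int, url_customer_id: int):
    # Authorization check: the requested record must belong to
    # the authenticated user from the server-side session.
    if session_customer_id != url_customer_id:
        raise PermissionError("not your account")
    return ACCOUNTS.get(url_customer_id)

print(get_account_vulnerable(1235))   # bob's checking -- leaked to anyone
print(get_account_fixed(1234, 1234))  # alice's checking -- only her own record
```

The fix is one comparison against server-side session state, which is why the absence of it reads as "absolutely no security review."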

The frequency of such vulnerabilities is disturbing. But not as disturbing as the multi-million dollar companies that are entrusted to protect hundreds of thousands of customers’ data. If you are browsing the web or using your banking application and notice the above, and casually change 1234 to 1235, who is the real bad guy here? You, or the corporation that decided not to employ the most fundamental security measures of the last thirty years?

The Crime

This aspect of the story is perhaps the biggest disconnect for most readers. Instead of being exposed to the fundamentals, and the history of vulnerability discovery and how it influences disclosure, they get wrapped up in the media’s portrayal of Auernheimer. Yes, “weev” is a controversial character. He is an admitted Internet troll, an asshole of sorts, and a character of questionable repute. However, that doesn’t matter, at all. If being an asshole was a crime, all 18 people in the U.S. who weren’t would be left to read this.

So what did Auernheimer really do? He figured out an enumeration vulnerability in AT&T’s web site that let him enumerate the entire iPad user database. This constituted some 114,000 iPad 3G users. What information did the AT&T site give up, that Auernheimer got access to? Email addresses. No full names, no physical addresses, no phone numbers, no credit information, no passwords. In case you weren’t aware, you can purchase 50 million email addresses on a single ISP for a whole $500.

Why the big deal? This is where it gets a bit murky, at least to an outsider. When a researcher finds a vulnerability in a product, service, or web site, they have several avenues for disclosure. First, they can sit on the information and simply not disclose it. This doesn’t protect anyone, because the idea that no one else will find it is absurd, and has been proven wrong many times over. Second, they can disclose it in a ‘responsible’ (poor term, commonly used) or ‘coordinated’ (better term, use it) manner, in which they work with the vendor to disclose it only when the vendor is ready, and the issue has been fixed. Third, they can disclose it without informing the vendor, or they can disclose it after informing the vendor but not waiting for a fix. Each of these scenarios happens every week, a hundred times over.

The average citizen, including jurors and judges, does not understand the history or intricacies of vulnerability disclosure. There are vendors and service providers that have a long history of not caring about vulnerabilities. That is, until it affects them in the public eye. A serious issue can exist for five, ten, or sometimes seventeen years, without being fixed. When the right light hits the ordeal, usually via a negative high-profile media article, the company suddenly takes an interest. If Auernheimer had reported this to AT&T directly and waited for a fix, there is a good chance it would have gone unfixed for months, possibly years. Every day that ‘coordinated’ disclosure happens runs the risk of someone with bad intentions finding the same issue.

Rather than go to AT&T and risk months of back-and-forth and/or waiting, Auernheimer opted to go to a media outlet. Why? Media pressure is one of the strongest motivations for a company to fix a vulnerability. One could argue that since the vulnerability was not very serious (again, just email addresses being disclosed), that going to a journalist instead of the company was not a big deal. Regardless of Auernheimer’s potential intentions regarding the embarrassment to AT&T, he took a route that would likely have the most success in getting the issue fixed.

The Sentencing

For his “crime”, Auernheimer was sentenced to 41 months in prison, 3 years probation, and ordered to pay $73,000 in restitution. Again, for showing how anyone could harvest a list of 114,000 email addresses. SC Magazine quickly wrote an article detailing 8 criminals that used computers in the commission of their crime, but received less prison time. I understand that courts are behind the times on computers, their use, abuse, and how to punish crimes related to them. I expect to see some discrepancy between sentencing in such cases. What I fail to understand is how a court can offer up such a sentence as compared to other crimes, that are certainly more destructive, and more heinous. Consider the following crimes and sentences, all handed down very recently:

  • Molesting 2 children can get you 14 months. [Source]
  • Child abuse can get you 32 months. [Source]
  • Manslaughter can get you as little as 42 months, just 1 month more than email addresses. [Source]
  • Possession of child pornography is good for 48 months, just 7 months more than email addresses. [Source]
  • Involuntary manslaughter, 50 month maximum per victim. [Source]

Perhaps the biggest comparison has been Auernheimer to the two Steubenville (Ohio) rapists who were sentenced for a total of three crimes, and collectively received less time. Trent Mays was convicted of raping a teenage girl, and ordered to spend “at least one year in an Ohio Department of Youth Services facility or until they are 21 years old”. Since Mays was also convicted of having pictures of a minor in “nudity-oriented material”, he received 1 additional year. Ma’Lik Richmond, also convicted of raping a teenage girl, received one year in the Youth Services facility. Two rapes, and essentially one count of child pornography, and collectively they get 36 months, compared to the 41 Auernheimer received. More disgusting is what is being called the “rape culture”, where news outlets such as CNN were apologetic to the rapists, decrying the sentencing and claiming their “lives were over”. Perhaps if Auernheimer’s lawyers argued that he only “raped the AT&T system”, he would have received a year.

The Lesser of Two Weevs

Once again, forget about Auernheimer’s predilection for trolling or seeking to annoy people. That is entirely irrelevant to the case. He found a minor vulnerability on AT&T’s web site, he told a journalist who wrote an article about it, and AT&T fixed it. No one suffered real damage from his activity. He did not seek to profit from his activity. More interesting is that AT&T specifically wants bugs reported to them, which Auernheimer did about the same time as he notified the journalist. While he did not follow their desired process, both sides made their intentions clear; they want bugs fixed. In this case, they diverged in the method for effecting that change.

If Auernheimer had tried to profit from his activity, I understand how the court would seek to punish him. If he sent emails to all 114,000 people defaming AT&T, or caused them to receive excessive emails, I would expect a harsher punishment. But given that a bank loan manager was recently sentenced to six months in prison for computer fraud in an attempt to increase her own lines of credit by more than $200,000, you have to wonder what other factors are at play here. Companies are frequently dealing with vulnerabilities, some disclosed directly to them, some exploited by bad guys, some reported via the media first. Why is the AT&T case so special?

It will be interesting to see how other crimes are dealt with in comparison. For example, the same day Auernheimer gets sentenced to prison, other anonymous researchers share their recent work that involved illegally accessing 420,000 systems on the Internet. In the United States, that constitutes 420,000 felonies.

That said, I for one am grateful that Auernheimer reported the vulnerability, both to the media and AT&T directly. Given my personal history of dealing with vendors in vulnerability disclosure, I don’t blame him or any other researcher who opts not to work with a vendor. It is often a time-consuming and painful process that typically challenges your faith that a company cares about security and their customers. In this case, we got the lesser of two Weevs; the one that wasn’t intent on pissing off as many people as possible. The one who didn’t opt to use the information for profit, that didn’t sell the list to criminals, that didn’t actively try to compromise AT&T systems. And for that, he will receive over three years in federal prison. Think about it.

Who Discovered the Most Vulns?

[This was originally published on the OSVDB blog.]

This is a question OSVDB moderators, CVE staff and countless other VDB maintainers have asked. Today, Gunter Ollmann with IBM X-Force released his research trying to answer this question. Before you read on, I think this research is excellent. The relatively few criticisms I bring up are not the fault of Ollmann’s research and methodology, but the fault of his VDB of choice (and *every* other VDB) not having a complete data set.

Skimming his list, my first thought was that he was missing someone. Doing a quick search of OSVDB, I see that Lostmon Lords (aka ‘lostmon’) has close to 350 vulnerabilities published. How could the top ten list miss someone like this when his #10 only had 147? Read down to Ollmann’s caveat and there is a valid point, but sketchy wording. The data he is using relies on this information being public. As the caveat says though, “because they were disclosed on non-public lists” implies that the only source he or X-Force are using are mail lists such as Bugtraq and Full-disclosure. Back in the day, that was a pretty reliable source for a very high percentage of vulnerability information. In recent years though, a VDB must look at other sources of information to get a better picture. Web sites such as milw0rm get a steady stream of vulnerability information that is frequently not cross-posted to mail lists. In addition, many researchers (including lostmon) mail their discoveries directly to the VDBs and bypass the public mail lists. If researchers mail a few VDBs and not the rest, it creates a situation where the VDBs must start watching each other. This in turn leads to “VDB inbreeding” that Jake and I mentioned at CanSecWest 2005, which is a necessary evil if you want more data on vulnerabilities.

In May of 2008, OSVDB did the same research Ollmann did and we came up with different results. This was based on the data we had available, which is still admittedly very incomplete (we always need data manglers). So who is right? Neither of us. Well, perhaps he is, perhaps we are, but unfortunately we’re both working with incomplete databases. In my opinion, OSVDB has better coverage of vulnerabilities, while X-Force clearly has better consistency in their data and a fraction of the gaps we do.

Last, this data is interesting as is, but would be really fascinating if it were mixed with ‘researcher confidence’ (a big thing of Steve Christey/CVE and myself), in which we track a researcher’s track record for accuracy in disclosure. Someone that disclosed 500 vulnerabilities last year with a 10% error rate should not be above someone who found 475 with a 0% error rate. In addition, as Ollmann’s caveat says, these are pure numbers and do not factor in hundreds of XSS versus remote code execution in operating system default install services. Having a weight system that can be applied to a vulnerability (e.g., XSS = 3, SQLi = 7, remote code exec = 9) and then factored into a researcher’s total could move beyond “who discovered the most” and perhaps start to answer “who found the most respectable vulnerabilities”.
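That scoring idea can be sketched in a few lines. The weights and the error-rate discount below are entirely hypothetical, just to show how accuracy could flip the ranking between a prolific researcher and a careful one:

```python
# Hypothetical severity weights per vulnerability class.
WEIGHTS = {"xss": 3, "sqli": 7, "rce": 9}

def researcher_score(disclosures, error_rate):
    """Sum of class weights, discounted by the researcher's error rate."""
    raw = sum(WEIGHTS[kind] for kind in disclosures)
    return raw * (1 - error_rate)

# 500 findings at a 10% error rate vs. 475 findings at a 0% error rate:
prolific = researcher_score(["xss"] * 500, 0.10)  # roughly 1350
careful = researcher_score(["xss"] * 475, 0.00)   # 1425
print(careful > prolific)  # True -- accuracy outranks raw volume
```

Even this crude discount reorders the two researchers from the pure-count ranking, which is the whole point of tracking confidence.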

10 Infamous Moments In Security Research

[This was originally published on the OSVDB blog.]

10 Infamous Moments In Security Research
InformationWeek – Apr 17, 2006

1. SQL Slammer
2. Windows Plug and Play
3. Cisco IOS heap overflow
4. Windows Metafile
5. Oracle transparent data encryption
6. Oracle PLSQL gateway
7. Apple Mac iChat
8. Internet Explorer createTextRange()
9. Internet Explorer HTA files
10. Sendmail SMTP server software

While many of these are notable events, this list seems very centered around the last couple of years and doesn’t consider the bigger picture. The initial discovery/disclosure of certain vulnerability classes (Overflow, XSS, SQL Injection) seem like they would be big moments. What else should have been on the list?

MusicPlasma for Vulnerabilities

[This was originally published on the OSVDB blog.]

A couple years back, I ran across musicplasma. For those not familiar with the engine, it allows you to type in your favorite music artist/band, and see “related” artists. So I type in “portishead” (mmmm) and see related bands like Tricky, and Sneakerpimps. These are all considered “trip-hop” so the links are expected. Moving a bit farther out, I start to see new bands (at the time) like Zero7, Air, or Hooverphonic (many of which are now on my playlist). So using this graphical representation, it is easy to see related bands and this type of tool is incredible for finding new music.

Shortly after, I started wondering what it would be like to use such an engine on vulnerabilities. What would it do, would it be valuable, would it help anyone? Two years later I still have the same questions, but lean toward the idea that it would be invaluable for vulnerability research, statistical analysis, and trending. Projects like CVE or OSVDB would love such a tool, and we’ve discussed the idea in the past. This most recently came up when Steven Christey (CVE) mailed asking what rules we adhere to for related OSVDB links within our database. As I said to him in e-mail, the cliff-notes answer to whether we have rules that govern this is “no”. I know, bad VDB! Despite that, there is a definite intention and desire for such links, which would be used more strictly and consistently if we had developers to help us integrate our ideas into the actual database and front end. The gist of the related links is to eventually move toward an engine like MusicPlasma for vulnerabilities. Instead of rewriting portions of the mail I wrote, I’ll lazily quote some relevant parts:

Obviously a *great* tool for music given it is hard to find similar bands to the ones you like, given that most music reviews won’t even disclose if the lead singer is male or female, let alone the real style of the music beyond some pretty broad categories like “rock” or “rap”. Anyway, on an abstract level, using something like this to chart vulns and make an interface for users to chart vulnerabilities would be interesting. You visit osvdbplasma, click on PHP-Nuke, then graphically browse the issues but instead of just ‘similar’, you do it by age and severity. The closest to the PHP-Nuke ring would be the remote code execution on latest versions, then you could follow that out to older issues. You could choose a different path for XSS, Path Disclosure and other classes.

Like I said, maybe not so useful but it would look really cool(tm), and would make it more understandable to end users without much security experience (a long term goal of OSVDB).

[..]

Yep, another idea I had a while back, tracking the history of vulns in a set of products. Pick a few that cover a wide range .. Windows, Oracle, PHP-Nuke, John’s Blog. Then look at the vulnerabilities discovered in them, focusing on the types (SQL, PD, XSS, Overflow, etc). See if there are trends in the types discovered, then cross match it with (very rough) dates of when the class of vulnerability was discovered/announced (a task unto itself). Do any of these products get better? Worse? Are there trends on folks discovering the same types as they become ‘popular’ to research? All kinds of neat research to do here.
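The trend analysis described above is straightforward to prototype once the data exists. A minimal sketch, assuming a toy set of records (all entries below are invented, not real OSVDB data), that counts vulnerability classes per product per year:

```python
from collections import defaultdict

# Hypothetical sample records: (product, year disclosed, vulnerability class).
# A real analysis would pull these from a VDB export; these are made up.
vulns = [
    ("PHP-Nuke", 2003, "SQL Injection"),
    ("PHP-Nuke", 2004, "XSS"),
    ("PHP-Nuke", 2004, "SQL Injection"),
    ("Windows", 2003, "Overflow"),
    ("Windows", 2004, "Overflow"),
    ("Oracle", 2004, "SQL Injection"),
]

def class_trends(records):
    """Count vulnerability classes per (product, class) pair, broken out by year."""
    trends = defaultdict(lambda: defaultdict(int))
    for product, year, vclass in records:
        trends[(product, vclass)][year] += 1
    return {key: dict(years) for key, years in trends.items()}

trends = class_trends(vulns)
print(trends[("PHP-Nuke", "SQL Injection")])  # {2003: 1, 2004: 1}
```

From there, plotting the per-year counts against the rough date each class was first announced would show whether a product gets better or worse, and whether researchers pile onto a class once it becomes popular.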

Not surprisingly, Christey replied quickly, saying that he too had thought of this type of model for viewing vulnerabilities, and added his own ideas for reasons and features of such a project. I don’t think he took me seriously when I suggested mugging top DHS officials to fund such a project.

A couple of weeks ago, HexView Security Research brought this to life in the first generation of such an engine. Check out their vulnerability maps. Written in Java, they tie products and platforms to vulnerabilities, showing how they are related. Currently, mousing over a vulnerability only offers a title and no additional information, but this is the first step! It’s very cool to see other companies and researchers looking into modeling this type of information.
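At its core, an engine like this is just a bipartite graph between products and vulnerability entries. A minimal sketch of that data model (the entries below are invented stand-ins for real database records):

```python
from collections import defaultdict

# Invented example edges; a real engine would load these from a VDB.
edges = [
    ("PHP-Nuke", "OSVDB-A: remote code execution"),
    ("PHP-Nuke", "OSVDB-B: XSS in search module"),
    ("Windows",  "OSVDB-C: overflow in RPC"),
]

product_to_vulns = defaultdict(set)
vuln_to_products = defaultdict(set)
for product, vuln in edges:
    product_to_vulns[product].add(vuln)   # browse outward from a product...
    vuln_to_products[vuln].add(product)   # ...or inward from a vulnerability

print(sorted(product_to_vulns["PHP-Nuke"]))
```

A renderer would then lay this graph out radially, with severity or age controlling the distance from the center ring, much like the earlier PHP-Nuke example.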

Anyway, all of this goes back to a long running gripe OSVDB has about the industry and VDBs specifically, and that is lack of evolution. These types of projects would be incredibly fun to work on, and potentially offer great insight into vulnerabilities, research and product history. On the off chance someone reading this knows about rendering such data or has time/expertise, contact us! We’d love to abuse your knowledge and get you involved in making this project happen.

Vulnerabilities becoming more mainstream?

[This was originally published on the OSVDB blog.]

Before 2005, it was fairly rare to see a news article specifically covering a vulnerability. One would usually pop up only if a vuln was used in a mass compromise, was the basis of a propagating worm, or affected large vendors such as Microsoft and Oracle. This year, however, it seems more and more news is being written about vulnerabilities. Some of this may be explained by vendors being considered more mainstream (Mozilla & Apple), while the rest may be attention paid to the underlying technology that drives more mainstream applications or high-profile lists. Two examples of this can be seen in Mailman [OSVDB 13671, Article] and CPAINT [OSVDB 18746, Article].

More recently:
OSVDB 19255: Firefox flaw found: Remote exploit possible
OSVDB 19227: New Cisco flaw could pose threat to Net
OSVDB 19089: Microsoft Investigates New IE Hole
OSVDB 18956: Reports: Long Registry Names Could Hide Malware

Additionally, it is getting to be routine to see articles covering monthly patch cycles:
Microsoft patches IE, Word, Windows
Microsoft to release six patches, some ‘critical,’ next week
Major Oracle Patch Covers Enterprise Products, Database Server
Apple unloads dozens of fixes for OS X

To stay even more current, we also see articles covering ‘0-day’ vulnerabilities still in various stages of the disclosure cycle.

HTTP Request Smuggling

[This was originally published on the OSVDB blog.]

Last month, Watchfire released a new paper describing “HTTP Request Smuggling” attacks. Since the release of this paper, many products have been found prone to such attacks. Some of these include SunONE Web Server, Oracle Application Server Web Server, IBM WebSphere, BEA WebLogic, Tomcat, Microsoft Internet Information Server, DeleGate Proxy, Sun Java System Web Proxy Server, Squid and Apache. This may qualify as the most recent class of vulnerability discovered and could prove interesting over the next few months as vendors scramble to diagnose their products.
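The desynchronization these attacks rely on can be illustrated with the double Content-Length trick described in the Watchfire paper. A minimal sketch (the host, path, and payload below are invented): when a request with a duplicated Content-Length header passes through two devices that honor different copies, they disagree about where the first request ends, and the trailing bytes become a “smuggled” second request to whichever device reads further.

```python
# Hypothetical request with conflicting Content-Length headers. 19 is the
# exact length of the trailing bytes, so a parser honoring the second header
# consumes the smuggled request as the first request's "body".
smuggled = (
    "POST /page.asp HTTP/1.1\r\n"
    "Host: target.example\r\n"
    "Content-Length: 0\r\n"
    "Content-Length: 19\r\n"
    "\r\n"
    "GET /x HTTP/1.1\r\n\r\n"
)

def body_length(raw, pick_first=True):
    """Return the Content-Length a parser would honor; duplicated headers
    mean two parsers can legitimately return different answers."""
    headers = raw.split("\r\n\r\n", 1)[0].split("\r\n")
    lengths = [int(h.split(":")[1]) for h in headers
               if h.lower().startswith("content-length")]
    return lengths[0] if pick_first else lengths[-1]

print(body_length(smuggled, pick_first=True))   # 0  -> sees an empty body
print(body_length(smuggled, pick_first=False))  # 19 -> swallows the 2nd request
```

A proxy that sees an empty body forwards the trailing GET as a separate request, while a back-end that reads 19 bytes treats it as data; cache poisoning and filter bypass follow from that disagreement.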

Reverse Engineering Microsoft Patches in 20 Minutes

[This was originally published on the OSVDB blog.]

Halvar posted to the DailyDave mailing list today with a brief Flash-based demonstration of some of his reverse engineering tools. The presentation shows how one can reverse engineer a Microsoft patch using binary diff analysis and figure out exactly what the vulnerability is, down to the function.

What will this technology and method mean when hundreds (thousands?) of people can reverse engineer a patch that fast and offer full vulnerability details within minutes of its release? That type of information would be incredibly valuable to some people, probably for more nefarious purposes. It would also be incredible for the security community and vulnerability databases, who often have a difficult time separating issues due to lack of details.
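As a crude illustration of the first step in this process (nothing close to Halvar’s actual graph-based tooling, and the bytes below are invented), a naive byte-level comparison of an unpatched and patched binary narrows down where the vendor changed code, which is the starting point for mapping the change back to a function:

```python
def changed_offsets(old: bytes, new: bytes):
    """Offsets where two same-length files differ; real patch diffing must
    also handle insertions, relocations, and compiler noise."""
    return [i for i, (a, b) in enumerate(zip(old, new)) if a != b]

# Invented byte strings standing in for an unpatched and patched binary.
unpatched = b"\x55\x8b\xec\x83\xec\x40\x56\x57"
patched   = b"\x55\x8b\xec\x83\xec\x44\x56\x57"  # one byte changed

print(changed_offsets(unpatched, patched))  # [5]
```

Real tools work at the function-graph level precisely because recompilation shifts bytes around, but the goal is the same: isolate the handful of changed routines and inspect them for the fixed flaw.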

Even more interesting, would this show a more concise history of vulnerabilities in a given vendor’s product that demonstrates the same programs, routines and even functions are found vulnerable repeatedly? Would this help companies identify who should be singled out for additional “secure coding” workshops?

post: http://archives.neohapsis.com/archives/dailydave/2005-q2/0377.html
demo: http://www.sabre-security.com/products/flash_bindiff_png.html

Second-Order Symlink Vulnerabilities

[This was originally published on the OSVDB blog.]

http://archives.neohapsis.com/archives/fulldisclosure/2005-06/0060.html

While symlink vulnerabilities are not new, Steven Christey from CVE points out a recent trend in “second-order symlink” vulnerabilities. Based on the recent examples published, there is a strong chance many applications have been vulnerable to such attacks in the past.
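The pattern is worth sketching because it is easy to miss in review: stage one creates its output file safely, but stage two later re-opens the same predictable path by name, and it is that second open that follows an attacker-planted symlink. A minimal sketch in Python (paths and filenames invented), using O_NOFOLLOW on the re-open as one mitigation:

```python
import os
import tempfile

workdir = tempfile.mkdtemp()
report = os.path.join(workdir, "report.txt")   # predictable output path
victim = os.path.join(workdir, "victim.txt")   # file the attacker wants clobbered

with open(victim, "w") as f:
    f.write("precious data\n")
with open(report, "w") as f:                   # stage one: safe creation
    f.write("results\n")

# The attacker's window between stages: swap the output file for a symlink.
os.remove(report)
os.symlink(victim, report)

def reopen_nofollow(path):
    """Stage two: re-open by name, refusing to traverse a planted symlink."""
    try:
        fd = os.open(path, os.O_WRONLY | os.O_NOFOLLOW)
        os.close(fd)
        return "opened"
    except OSError:  # ELOOP on Linux when the final path component is a symlink
        return "refused symlink"

print(reopen_nofollow(report))
```

A second stage that simply calls `open(report, "w")` would happily write through the symlink and clobber the victim file; that is the “second-order” part, since the unsafe operation happens well after the safe creation.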