Random Comments on the Symantec Internet Threat Report 2005

[Originally posted to the ISN Mail List. Shortly after, modified for attrition.org. This was republished at The Age (AU) and the Sydney Morning Herald.]

There is some interesting stuff in the Symantec report that is being talked about in various news articles.

The original Symantec release for this report:

Symantec Internet Security Threat Report
Trends for July 04 – December 04
Volume VII, Published March 2005

Unfortunately, to download a copy, Symantec would like a lot of information about you; only after filling out a page-long form may you receive it.

While reading through the report, I found some things of interest. By the end of it, I wondered how anyone can see value in the conclusions regarding vulnerabilities (the only thing I was really interested in).

Apologies for the length of some quotes, but I didn’t want them to lose context. Indented material is from the report.

Between July 1 and December 31, 2004, Symantec documented 1,403 new
vulnerabilities. This is an increase of 13% over the 1,237
vulnerabilities disclosed in the first six months of 2004.

During the second half of 2004 nearly 97% of all reported
vulnerabilities were rated as moderate or high severity, which could
result in the complete or partial compromise of a system. In addition,
over 70% of all the vulnerabilities reported during this period were
easy to exploit. This means that no exploit code was needed or that
exploit code was readily available, making the compromise of systems
relatively easy. Compounding this problem is that nearly 80% of all the
documented vulnerabilities in this reporting period are remotely
exploitable, which can increase the number of possible attackers.

97% of 1,403 vulnerabilities in a six-month period are moderate or high severity? The first class of vulnerabilities that comes to mind among the 1,403 is cross-site scripting (XSS), probably the most popular and prevalent type discovered in the last year. Many people argue that XSS attacks are low severity; if you agree, then this claim is obviously false. If you argue that XSS is moderate severity, then the 97% may still be arguable. Failing that, what about path disclosure? What about the dozens of vulnerabilities that require authenticated administrative access to conduct an XSS or path disclosure attack? What about the hundreds of DoS attacks against low-priority software such as network games, guestbooks, and other packages with extremely low distribution, likely not found on any business site of any kind? Add all that up and it has to come to more than the roughly 42 vulnerabilities (the remaining 3% of 1,403) that would be classified as ‘low severity’. Later in the report, they define the severity levels:

Low severity – Vulnerabilities that constitute a minor threat. Attackers
cannot exploit the vulnerability across a network. As well, successful
exploitation of the vulnerability would not result in a complete
compromise of the information stored or transmitted on the system.

Moderate severity – Vulnerabilities that result in a partial compromise
of the affected system, such as those by which an attacker gains
elevated privileges but does not gain complete control of the target

High severity – Vulnerabilities that result in a compromise of the
entire system if exploited. In almost all cases, successful exploitation
can result in a complete loss of confidentiality, integrity, and
availability of data stored on or transmitted across the system.

Interesting that ‘low’ includes “cannot exploit the vulnerability across a network”, which explains how they could lump a path disclosure vulnerability into ‘moderate’. Personally, I think that is flat-out wrong.
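To show where the “more than 42” figure comes from, here is a quick back-of-the-envelope check; the total and the percentage are taken straight from the quoted report text, the rest is plain arithmetic:

```python
# Sanity check on the report's own figures (quoted above).
total = 1403             # vulnerabilities documented Jul 1 - Dec 31, 2004
moderate_or_high = 0.97  # "nearly 97%" rated moderate or high severity

low = round(total * (1 - moderate_or_high))
print(low)  # -> 42: every low-severity issue must fit in roughly 42 entries
```

Every XSS, path disclosure, and low-priority DoS that anyone would call “low severity” has to squeeze into those ~42 entries for the report’s claim to hold.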

To add to the confusion, they also say:

Over the last six months of 2004, Symantec documented 201
vulnerabilities for which associated exploit code was widely available
(figure 18). Because of the availability of exploit code, these
vulnerabilities are considered easy to exploit. The percentage of the
total volume of vulnerabilities with exploit code, 14%, is slightly
higher than what was observed between January 1 and June 30, 2004 (13%).

Cross-site scripting and basically every path disclosure vulnerability published had proof-of-concept code (because they are typically so trivial). Given that, saying only 201 vulnerabilities had exploit code widely available really doesn’t make much sense in the context of the rest of the report.

Add a bit more confusion:

Between July 1 and December 31, 2004, Symantec catalogued 670
vulnerabilities affecting Web applications, nearly half (48%) of the
total vulnerabilities disclosed during this reporting period (figure 21).

As noted in the ease of exploitation discussion, vulnerabilities
targeting Web applications are often classified as easily exploitable,
and their increase has contributed significantly to the high number of
easily exploitable vulnerabilities.

So there were 670 web-based vulnerabilities, they are “often” classified as easily exploitable, yet only 201 of the 1,403 had exploit code? These numbers simply do not jibe.

If you skip to the end of the report, Appendix C explains how they arrived at these numbers, how scores are calculated, etc. One thing to note: they say they use the BID VDB with over 9,000 distinct entries. Sure, distinct entries, but that really means nothing, as they are not consistent in how they add vulnerabilities. Some entries cover “multiple” vulnerabilities, while others are broken out into two or more entries. After extensive work on OSVDB and hitting the other VDBs on a near-daily basis, I haven’t seen Symantec keep any standard for how they add entries to their database.
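The contradiction is easy to see from the report’s own numbers; a quick sketch (all figures taken from the quoted passages):

```python
# Figures quoted from the report itself.
total = 1403        # total vulnerabilities, Jul 1 - Dec 31, 2004
web_app = 670       # vulnerabilities affecting Web applications
with_exploit = 201  # vulnerabilities with exploit code widely available

print(round(web_app / total * 100))       # -> 48, matching the report
print(round(with_exploit / total * 100))  # -> 14, also matching
# If web application vulnerabilities are "often classified as easily
# exploitable" because proof-of-concept code is trivial, far more than
# 201 entries should have counted as having exploit code available.
```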

Between July 1 and December 31, 2004, Symantec documented 13
vulnerabilities affecting Microsoft Internet Explorer.

Earlier in the report, in the summary/overview:

Symantec has established some of the most comprehensive sources of
Internet threat data in the world. […] In addition, Symantec maintains
one of the world’s most comprehensive databases of security
vulnerabilities, covering over 11,000 vulnerabilities affecting more
than 20,000 technologies from over 2,000 vendors.

So, running Bugtraq (something else they highlight, not quoted here) and the BID vulnerability database, they say there were 13 vulnerabilities in MSIE between July 1 and December 31, 2004. According to OSVDB, I see 51 vulnerabilities for MSIE in that period. If Symantec is working from data that inaccurate, how can we trust any of this report?

The report goes on to say there were 21 vulnerabilities affecting Mozilla browsers, 6 in Opera and 0 in Safari. Again, checking OSVDB for these browsers and that time frame:

MSIE: 51
Mozilla: 53
Opera: 13
Safari: 4

So, according to OSVDB, there were still more vulnerabilities published in Mozilla than in MSIE, but the disparity is nothing close to what the Symantec report would have you believe.
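To make the disparity concrete, here is a quick comparison of the two data sets, using the counts listed above from the Symantec report and from OSVDB:

```python
# Browser vulnerability counts, Jul 1 - Dec 31, 2004, per each source.
symantec = {"MSIE": 13, "Mozilla": 21, "Opera": 6, "Safari": 0}
osvdb    = {"MSIE": 51, "Mozilla": 53, "Opera": 13, "Safari": 4}

# Mozilla-to-MSIE ratio under each data set.
print(round(symantec["Mozilla"] / symantec["MSIE"], 2))  # -> 1.62
print(round(osvdb["Mozilla"] / osvdb["MSIE"], 2))        # -> 1.04
```

Under Symantec’s numbers Mozilla looks over 60% worse than MSIE; under OSVDB’s the two browsers are nearly even.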

This data indicates that the attention of researchers may be shifting.
In the rush to find more secure alternatives to Microsoft’s Internet
Explorer, organizations and end users should be cautious about choosing
an alternative, as all browsers appear to be susceptible to

I understand that this report is about vulnerabilities in the past six months, but I think it is a bit irresponsible of them not to mention two things. First, six months or not, MSIE has a much longer history of vulnerabilities, typically more severe ones due to the integration of IE into the operating system. Second, they don’t address the speed with which these vulnerabilities were patched, or mention that users for whom security is that important can grab the latest build of Mozilla with additional bug fixes.

Over the last six months of 2004, there were no vendor-confirmed Safari

Checking OSVDB 13183, one of the external references is http://docs.info.apple.com/article.html?artnum=300770. This update is listed as 2005-001, but it covers vulnerabilities from the last quarter of 2004. One of the entries on this page:

Available for: Mac OS X v10.3.7, Mac OS X Server v10.3.7, Mac OS X v10.2.8, Mac OS X Server v10.2.8
CVE-ID: CAN-2004-1314
Impact: When Safari’s “Block Pop-Up Windows” feature is not enabled,
a malicious pop-up window could appear as being from a trusted site

Checking some of the other vulnerabilities in that time frame turns up another: http://docs.info.apple.com/article.html?artnum=300667

Available for: Mac OS X v10.3.6, Mac OS X Server v10.3.6, Mac OS X v10.2.8, Mac OS X Server v10.2.8
CVE-ID: CAN-2004-1121
Impact: Specially crafted HTML can display a misleading URI in the Safari status bar.

Available for: Mac OS X v10.3.6, Mac OS X Server v10.3.6, Mac OS X v10.2.8, Mac OS X Server v10.2.8
CVE-ID: CAN-2004-1122
Impact: With multiple browser windows active, Safari users could be misled about which window activated a pop-up window.

It is clear that there are vendor-confirmed Safari vulnerabilities in the time frame covered by the Symantec report. Is this a simple oversight? Or did the authors not even attempt to research the vulnerabilities they write about if those vulnerabilities didn’t appear in the BID database?

But wait… it gets even more confusing:

Searching the BID database for “safari” returns:

15-12-2004: Apple Safari Web Browser HTML Form Status Bar Misrepresentation
08-12-2004: Apple Safari Remote Window Hijacking Vulnerability
25-11-2004: Apple Safari Web Browser Infinite Array Sort Denial Of Service
01-11-2004: Apple Safari Web Browser TABLE Status Bar URI Obfuscation
20-10-2004: Apple Safari Cross-Domain Dialog Box Spoofing Vulnerability
07-09-2004: Apple Safari Cross-Domain Frame Loading Vulnerability
23-08-2004: Safari/WebCore HTTP Content Filtering Bypass Vulnerability

So the Symantec-owned and -operated BID vulnerability database shows seven vulnerabilities in Apple Safari between July 1, 2004 and December 31, 2004, yet their report states there were 0 Safari vulnerabilities.
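Just to rule out a date-window mix-up, a quick check that the seven BID entries listed above really do fall inside the report’s July 1 – December 31 window:

```python
from datetime import date

# The seven BID search results quoted above (dates are DD-MM-YYYY).
entries = ["15-12-2004", "08-12-2004", "25-11-2004", "01-11-2004",
           "20-10-2004", "07-09-2004", "23-08-2004"]

start, end = date(2004, 7, 1), date(2004, 12, 31)
in_window = [s for s in entries
             if start <= date(int(s[6:]), int(s[3:5]), int(s[:2])) <= end]
print(len(in_window))  # -> 7: every entry is inside the reporting period
```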

At what point does a report like this lose all value, when its conclusions contradict its own data source 100%? Can anyone at Symantec offer insight?