Full Disclosure – Effective or Excuse?

[This was originally published on attrition.org.]

A comprehensive look at the practice of Full Disclosure, problems associated with it for vendors and security companies, and examples of full disclosure put to the test. (3300 words)


The world of computer security has developed a wicked game of politically correct ‘cat and mouse’. The game plays out as security professionals contact software vendors to report security vulnerabilities that affect a large portion of their customers. As with much professional interaction, the supposed need to act within certain guidelines, to be ‘politically correct’ (PC), comes into play. The side effect of this game comes in the form of slower patches and upgrades to address the problem.

What is Full Disclosure?

When a bug in a piece of software is found, the person finding the bug has two courses of action. The first is to notify the software vendor so they may fix the problem in future revisions. In some cases, these bugs pose serious security problems and warrant patches to existing versions. The second course of action is to post the full details of the bug to a public mail list or Usenet newsgroup. In doing so, you are sharing the full details of potentially serious bugs that could affect millions of people. Posting the information is done so that administrators around the world can understand the problem and figure out how to respond to it. With anything less than full disclosure of the problem, administrators may react incorrectly or be unable to convince users and management that the problem is serious.

Most individuals or companies discovering bugs choose a mix of both courses of action. They opt to give the software vendor a set amount of time to address the problem before releasing full details to the public. This is done as an exercise in ethics, as some of these bugs pose a serious threat to many organizations. Imagine if a company released full details to the public on a bug that left thousands of government and military systems vulnerable. If the information led to hackers penetrating those systems, it would raise an issue of responsibility on the part of the group reporting the bug. Giving software vendors a heads-up about the problem enables them to avert disaster and protect their clients.

Why the Rush?

The logical argument would be to give vendors as much time as possible to fix the bug before going public. While this may sound reasonable, in reality it is not always the best course of action. Every day the vendor spends addressing the problem, it becomes more likely that other individuals will find the same bug. With each additional person who finds it, the chance grows that one of them will not be as ethical and responsible with the information. The goal of responsible security professionals is to address as many security concerns as possible in the shortest amount of time.

Gratitude (or lack thereof)

Many software vendors spend all their time and energy creating their products. In this process there is a noticeable lack of care or attention given to proactively auditing those products for security vulnerabilities. Customers purchase these products expecting secure software but often find it lacking. They turn to security companies to help them establish a secure posture for their corporate networks. In doing so, many security professionals run across vulnerabilities and bugs that could affect their clients. In essence, this translates into continued and diverse free security auditing for many software vendors, the kind of auditing that often costs clients up to $500/hr. Software vendors must be grateful, right?

Of course not. Speaking from past experience, I can assure you many vendors take bug reports as a personal affront and insult. Initial responses to bug reports have been downright hostile in the past. It amazes me that these vendors show anything less than 110% gratitude and respect for the people reporting security problems. While the software vendors advertise secure platforms, their poor coding and inadequate security auditing led to these problems to begin with.

http://www.sun.com/solaris/overview.html

     "The Solaris Operating Environment has been carefully
      engineered to deliver a reliable, high-performance,
      scalable, and secure platform on which to develop
      and deploy desktop and server applications."

http://www.microsoft.com/ntworkstation/

     [In a colorful bar chart:] "35% Lower TCO, 30% Faster,
      1/3 Fewer helpdesk calls, Strong Security."

Killing the proverbial messenger does not solve the problem or help the situation. To expect bug reporters to hand over the information, let alone wait up to half a year for a fix, is presumptuous and egotistical. Companies that share vulnerability information with vendors before releasing it to the public should be regarded as heroes, nothing less.

Delays: Why Vendors Take So Long

Software vendors often have a legitimate reason for asking bug reporters not to go public right away. Consumers often aren’t aware of all the conditions surrounding patching a vulnerability, which leads to a cry for faster release of information and quicker patches/fixes. The trick is deciding how much time to give these vendors, as they often DO stall unnecessarily on releases (said based on a long case history). Some of the valid reasons vendors delay that may escape some people:

  • ‘Regression Testing’. When a bug is reported for a specific version of a piece of software, the vendor is obligated to test and fix ALL affected versions. Once all vulnerable versions are identified, patches must be developed. Next, full tests must be run on previous versions of the software with the new patches to ensure the changes don’t affect other aspects of the software.
  • Architecture. In today’s software world, dozens of different hardware platforms exist. Patches for the PC platform are different from those for Sparc hardware. NT patches for the Alpha versus the PC could include a wide variety of alterations.
  • Additional bugs. A handful of software vendors learned through trial by fire that a hasty fix is not always a quality fix. Days after an initial bug was reported and fixed, slight variations of the same bug surfaced, providing a healthy dose of mud to the vendor’s face.
  • Bureaucracy. Software companies are often big and full of internal procedure. Departments don’t always play well with others.

Incentive Not to Report Bugs

Going beyond the lack of appreciation vendors show, there are several other reasons not to report bugs after discovering them.

  • In one case, a security company provided a large Unix vendor with a full technical writeup, working (and commented) exploit code, and a half-hour explanation of the bug over the phone. Despite all of this, the technicians at the vendor still could not figure out how it worked. Up to a year later, the vendor was still seeking assistance in getting the exploit to work.
  • In 1998, the same security company worked with another large Unix vendor on a nasty remote exploit that gave attackers full access. During the exchange of information, the security company asked if they could get a “thanks” or some kind of credit in the advisory the vendor would eventually release. Despite carefully timing their own advisory to coincide with the vendor’s, and despite hours of tech support, the vendor still would not give the company credit. Subsequent security newsletters credited the vendor, not the security company, as the original poster of the bug information.
  • After a miscommunication between security company eEye and software vendor Microsoft over the public dissemination of vulnerability information, Microsoft representatives attempted to place all blame on eEye at a public convention. Without any eEye employees present, a Microsoft spokesperson stood up during the Full Disclosure track at the Black Hat Briefings and proclaimed that Microsoft was the wounded dog, kicked by big bad eEye Security. The miscommunication? A twenty-four-hour difference on when to post the information to public forums.

Two Examples

Two specific examples of security/vendor dealings illustrate these points very well. Each demonstrates that vendors have more control over the issue than they admit. In our examples, both security companies maintained the same stance on full disclosure. One held firm with its intention to go public at a set time. The other adhered to the timetable set by the vendor and watched the issue drag from weeks to months.

Please Repent? (No Firm Stance)

In the middle of 1998, a security company called RepSec (RSI) found several vulnerable functions in one of the Solaris libraries. Upon contacting Sun Microsystems, they were informed it would take some time to fix the bugs and issue patches. RepSec asked Sun for two weeks before releasing their advisory to the public. Sun countered, asking for at least a full month to resolve the issue. Mail went back and forth trying to work out an acceptable time frame for both parties. After two weeks, RepSec did not go through with the release of their advisory, instead waiting for Sun to give the go-ahead.

After more weeks of delays, Sun still wasn’t ready. Almost two full months had passed, with more requests for RepSec to hold off on releasing their advisory. Excuses about regression testing, disputes over which functions were vulnerable, and more came from Sun. RepSec held back, worried that the information hitting public forums could give hackers dangerous new vulnerability information that could be used to break into more machines. A little more than two full months passed, and customers of RepSec were getting frustrated. They had seen the advisory and questioned why it had not been released publicly. When given the answer that Sun was delaying further, one of the customers took matters into their own hands.

An unknown customer of RepSec decided Sun was being irresponsible and posted the information to Bugtraq. After two months of stalling, with no estimate on completion of patches, Sun miraculously posted a full patch and information on the vulnerability. To many security professionals, this proved that full disclosure was the most effective and speediest solution to a problem affecting many people. Had RepSec held firm with their original plan to go public in two weeks, no doubt Sun would have followed a day later with a patch. Subsequent advisories from RepSec on Sun products reverted to long delays before patches or advisories were issued. By not holding firm to release dates, RepSec allowed the vendor to control vital security fixes to widely used operating systems.

Do Things Our Way, It’s Better (Firm Stance)

In June of 1999, a relatively new security company called eEye Security found a severe vulnerability in Microsoft’s IIS. Because of the widespread popularity of Windows NT and the large install base of IIS, this vulnerability posed a serious threat to millions of companies worldwide. eEye notified Microsoft of the vulnerability along with their intention to make details public in a week’s time. Microsoft immediately wanted more time, and asked for it in a less than polite fashion.

Holding true to their word, eEye released the details of the vulnerability in a public advisory as scheduled, along with links to Microsoft’s timely patch. In seven days, Microsoft was able to assess the problem, release its own advisory, and create a working, regression-tested patch. Had eEye not held firm on their intended release date, Microsoft would have taken as much time as possible before releasing a patch. Past history shows they would rather avoid (or downplay) any potential egg on the face, especially in security matters.

Since their initial dealings with Microsoft, eEye has received nothing but quick and courteous responses. Says Firas Bushnaq of eEye, “We have noticed a big improvement in Microsoft’s handling of security related issues in the past few months. Response time is down to hours compared to days. Someone must have sent an internal memo.” Because of their diligence in releasing vital information to the public on their own schedule and managing the expectations of the software vendor, eEye’s policy on full disclosure has led to a more responsive, open and attentive software vendor.

Case in Point?

Looking at a history of vendor-released security advisories, a pattern emerges that suggests a firm stance on public dissemination of vulnerability information has its merits. With a vendor like Sun Microsystems, we see no pattern of steady releases despite new vulnerabilities brought to light each month on full disclosure security mail lists. With Microsoft, not only do we see a regular release of information, we see a sharp increase shortly after the eEye/Microsoft releases about the IISHACK vulnerability. Is the increase in advisories due to companies like eEye holding firm on their promise of full disclosure? It sure seems like it.

           Sun   Microsoft
Nov 1998    2        1
Dec 1998    3        3
Jan 1999    -        2
Feb 1999    3        5
Mar 1999    -        3
Apr 1999    -        2
May 1999    -        6
Jun 1999    2        5
Jul 1999    -        3
Aug 1999    1*       6
Sep 1999    1        9
Oct 1999    -        1

* Reprint of CERT Advisory

Arguments Against

There are at least two arguments suggesting that security companies have little to do with the trend of vendors increasingly practicing full disclosure.

The first argument lies in the fact that Microsoft is relatively new to security advisories. Their first advisory was released January 21, 1999, putting them in their first year of full disclosure. Longer-established vendors like Sun Microsystems released their first advisory on September 5, 1990. Despite being relatively new to the advisory game, Microsoft has the luxury of learning from nearly a decade of other vendors’ experiences.

Others may cite that the earlier example of a security company dealing with a large Unix vendor had little effect in the long run: that even when faced with a security company threatening public release of vulnerability information, the vendor reverted to delayed responses and little public concern. The flaw in this argument is that the security company gave in to the desires of the vendor and followed its recommendations for release dates. This is further backed by the fact that the vendor produced a patch and advisory within days of the information being released by a third party.

While this is not definitive proof, it weighs heavily in favor of the position that a policy of firm dates for full disclosure is a good thing.

One man’s rant? Opinions on Full Disclosure

How do security professionals and operating system vendors view full disclosure and security vulnerabilities?

Al Huger (Security Focus, POC for two security companies dealing with reporting bugs to vendors)

    "Full disclosure is in and of itself a means to an end. Many people
    participate in it not out of malice, but out of the hope that with
    enough public scrutiny vendors will finally take responsibility for
    the software they write.

    Make no mistake about it, full disclosure is ugly. People get hurt;
    however, I see no reasonable alternative. For every responsible bug
    reporter out there, it's likely he/she has a counterpart who will
    keep the information for themselves and use it to ends that we would
    all rather avoid.

    In a perfect world vendors would perform rigorous security audits,
    previous to market release. In a perfect world we would still have
    buggy software (this is something we will never lose) but we would
    also have vendors who make security a pre-emptive consideration as
    opposed to a forced reaction."

Aleph1 (moderator of Bugtraq, the most popular and active full disclosure security mail list)

    "Let me clarify our disclosure policy. Some people get the impression
    we are full disclosure extremists. We are not.

    First, we would rather you work with the vendor to create a patch or
    fix for the problem. If the vendor is responsive and making a good
    faith effort to release a fix in a timely manner, we would rather you
    keep the existence of the vulnerability secret until the vendor has
    the fix ready. You can then release both the vulnerability information
    and the patch at the same time. The reasoning behind this is twofold.
    First, in our experience, saying a vulnerability exists but not
    releasing full information seldom stops attackers from obtaining
    details of the vulnerability. Attackers will research the problem and
    either come up with the information on their own or will hack their
    way to someone with details of the vulnerability.

    Second, releasing both the details and the fix at the same time
    minimizes the time attackers have to find the vulnerability on their
    own. Next, we like people to post full details of the vulnerability
    once the vendor has released a fix or a patch. The reasoning behind
    this is that once patches are out, attackers can easily reverse
    engineer them to figure out what the vulnerability was. Thus you are
    only keeping the good guys in the dark. Knowing the vulnerability
    details allows people to verify that the fix indeed fixes the
    vulnerability (we have seen many cases where it doesn't, or does so
    in a bad way), it allows people to look for similar vulnerabilities
    in other systems, and it allows people to learn from the mistakes
    that enabled the vulnerability."

Firas Bushnaq (eEye Digital Security, POC for reporting bugs to Microsoft)

    "The adoption of Full Disclosure is an ethic; the responsibility and
    duty of every security professional is to disclose the facts. How
    and when we disclose the facts is, on the other hand, the most crucial
    part of full disclosure. Many factors come into play: the seriousness
    and implications of the bug, how long it has been since it was
    discovered, how responsive the vendor is, and how dependent we are on
    the vendor for a patch.

    We look at a network security breach as a violation of our safety and
    privacy, and we will continue to tactically plan and execute to make
    sure that we address the issues in the shortest amount of time
    possible. Vendors are becoming more aware, end users are becoming more
    informed, and our networks are becoming more secure."

Erik Berls (NetBSD Team, Vendor POC for receiving bug reports)

    "We try to aggressively pursue any security bug, verify it within
     our operating system, issue an immediate temporary fix and release
     a correct solution as well as a security advisory as soon as possible.
     It tends to operate on a timeframe based on hours not weeks."

Past, Present and Future

The history of security bugs is rather bleak. For years, full disclosure was not practiced by security professionals or vendors. Small groups of interested parties exchanged information amongst themselves, unwilling to disclose it to the masses. As these bugs were slowly found by others or passed on to vendors, they eventually got fixed. This era of security through obscurity did little for the overall perception of secure computing.

The present finds us in a new frame of mind, one that sees full disclosure as a viable and important way of dealing with vulnerabilities that can lead to disastrous effects. Slowly, vendors are learning that security is a growing concern for more and more people, and that responding to these concerns in short order helps everyone in the long run.

So what does the future hold? I hope that all software vendors acknowledge the success of full disclosure and adjust their own procedures to synchronize with it; that quick and open responses to the security companies and individuals reporting these bugs become the standard, not the few success stories.

* I am an ex-employee of RepSec (RSI)

Ex-Game: (Untitled)

[This was the second article I did for Ex-Game magazine (print mag in Japan). It was titled as my name and labeled “Original Document”. It was subsequently mirrored on attrition.org.]

In the past few years, Japan has seen very few incidents of web sites being defaced. From 1995 to January of 2000, there were only 27 recorded defacements (http://www.attrition.org/mirror/attrition/jp.html) of Japanese web sites, very few of which were government owned. Beginning around January 24th, a brief but intensive wave of web defacements hit Japanese web servers, most of them owned and run by the government. Among these sites were the Japan Science and Technology Agency (www.sta.go.jp), the Japanese Management and Coordination Agency (www.somucho.go.jp), and the Japanese Statistics Bureau (www.stat.go.jp). Shortly after the first few attacks, officials with the Japanese government responded by declaring the attacks a serious threat to the operation of their information infrastructure. Within days they had asked the United States government for assistance in dealing with the attacks. Not only did government officials ask for help recovering from the attacks, they asked for assistance in preventing similar incidents from happening again.

Because of the small but intense wave of defacements plaguing the Japanese government, more and more people are questioning the skill required to perform such feats. Is the government facing computer masterminds intent on destroying the credibility and integrity of government information? Or are the intruders nothing more than unskilled, malicious teenagers with a little luck and a lot of bravery (or is it stupidity)? Perhaps it is a little of each, rolled into a less sinister and less proficient person or persons. Accomplished hackers intent upon exploration typically do nothing that would draw undue attention to their actions. Public, media or law enforcement scrutiny is often counterproductive to their goal of uninterrupted learning and discovery. Unskilled kids who run scripts they can barely comprehend typically have no message worth reading, and do not understand the potential consequences of their actions or the seriousness of what they do.

What is now becoming an old and foolhardy debate is whether or not defacing a web page does damage to a company (or government). Some argue that by changing a few lines of HTML, no real damage is done to the system. Since it does not disrupt the flow of information for more than a few hours, and since it does not prevent people from using the system, many say claims of damage are often inflated for selfish reasons such as financial gain or public sympathy. On the other hand, some argue that simply undermining the integrity of and confidence in a system is damage enough in itself. With the intrusion comes the time required to assess and repair the damage, examine the security posture of the compromised machine(s), write reports detailing the incident, and more. All of this adds up to lost time that administrators could have spent on projects that earn money for the company. Then again, some would argue that maintaining security was part of their duties in the first place, and that such incidents are the result of those administrators not performing their tasks.

How It Is Done

There are two basic categories of web defacement. The first involves vulnerabilities in the web server itself that allow a remote attacker to alter the content of a page without logging into the server; these exploits typically involve the intruder overwriting or appending to the existing web page. The second involves compromising the underlying operating system in order to gain full access to the machine, and therefore access to the web pages. Once this type of compromise has occurred, the intruder can interactively edit the existing web page, replace it with his/her own page, and a lot more. Most Windows NT servers that experience web defacements fall into the first category, since NT isn’t designed around multiple users logging in through interactive interfaces. Most Unix (Solaris, Linux, BSD, etc.) defacements occur after the intruder has gained “root” access to the machine, giving them full administrative rights.

Windows NT comes with its own web server prepackaged for customer convenience. Internet Information Server (IIS) is the second most common web server found on machines across the net (and the most common on NT machines). According to Netcraft (www.netcraft.com/survey/), 22.92% of machines surveyed in January 2000 were running Windows NT and IIS. In keeping with Microsoft’s tradition of buggy and insecure software, IIS is no exception.

One of the most widely exploited bugs found on Windows NT systems is the RDS/MDAC vulnerability. Through this “feature”, a third party can easily execute remote commands on a target system. What makes this bug a real threat is that the attacker does not need any initial access to the machine. Remote Data Service (RDS) is a component of Microsoft Data Access Components (MDAC), which is installed by default with the Windows NT 4.0 Option Pack. RDS components are designed to allow controlled access to remote data resources through Internet Information Server (IIS). One component of RDS, called the DataFactory object, is exploitable by untrusted attackers. The DataFactory object was originally designed as a server-based object that handles client requests for information and provides read and write access to specific data sources.

Using exploit code widely available on the Internet, an attacker can use a single program to obtain all the information needed to exploit the vulnerability. The same script will then prompt the attacker with “Please type the NT commandline you want to run (cmd /c assumed):”, allowing them to easily execute commands on the remote machine. Because of the ease with which this can be exploited, combined with the large number of vulnerable servers, the RDS/MDAC vulnerability is believed to be responsible for thousands of web page defacements in the last six months. Because exploitation is so easy and requires so little knowledge, anyone and everyone who fancies himself a hacker has used this vulnerability to deface web pages. This is somewhat evident from the childish and lame web pages put up in place of the originals.
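Administrators wondering whether their own IIS servers expose the vulnerable component can perform a simple first check: request the msadcs.dll endpoint and examine the content type of the response, the test commonly used by early RDS scanners. Below is a minimal sketch in Python; the host name is a hypothetical placeholder, and a positive result only means the RDS endpoint is reachable, not that the DataFactory object is necessarily exploitable.

    # Minimal sketch: check whether a server exposes the RDS endpoint
    # (msadcs.dll). A response with content type "application/x-varg"
    # indicates the component is installed and reachable.
    import http.client

    def rds_endpoint_exposed(host, port=80):
        conn = http.client.HTTPConnection(host, port, timeout=10)
        conn.request("GET", "/msadc/msadcs.dll")
        resp = conn.getresponse()
        ctype = resp.getheader("Content-Type", "") or ""
        conn.close()
        return ctype.lower().startswith("application/x-varg")

    # Example, against a hypothetical host you administer:
    # print(rds_endpoint_exposed("www.example.com"))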

For more information on the RDS/MDAC attack, Rain Forest Puppy has written an excellent advisory outlining explicit technical detail about the vulnerability (http://www.wiretrip.net/rfp/p/doc.asp?id=1&iface=2). Microsoft has released two security advisories outlining details and patch information for the RDS/MDAC problem (http://www.microsoft.com/technet/security/bulletin/fq99-025.asp and http://www.microsoft.com/technet/security/bulletin/ms98-004.asp).

Protecting against attacks that allow direct access to a machine is, for the most part, rather simple. Staying abreast of newly discovered vulnerabilities is the single most important thing. As new bugs are found, the vendor should address the problem with patches or upgraded software. Staying up to date on these patches will typically keep you secure from the majority of hackers poking around on the Internet. There always exists a small chance, however, that you will be exploited through a new vulnerability before you can patch the system. This is virtually impossible to protect against, and something all administrators must deal with.

Unix servers have been designed around the idea of allowing multiple users access to the machine without losing any privileges or ability. There are few instances where an administrator must be sitting at the machine to effect changes or alter the configuration of the system. Because of this philosophy, users must log into the system to add or edit web pages (among other things). Intruders intent on defacing a web page must first find a way onto the system. By exploiting bugs in the various services run by Unix systems, it is sometimes possible to gain remote access to the machine. Through remote buffer overflows (http://www.fc.net/phrack/files/p49/p49-14), sniffing attacks (http://www.robertgraham.com/pubs/sniffing-faq.html), or cruder attacks like brute forcing a login and password, attackers are able to spawn interactive shells on a target machine. In many cases, these shells run with the highest privileges (‘root’ access), and the attacker can alter any file on the system. In other cases the privileges are those of a normal user, forcing the attacker to use additional exploits to gain more access to the machine.

In the past year, vulnerabilities in various Remote Procedure Call (RPC) services have been a consistent entry point into thousands of Unix servers. Some of the more commonly exploited RPC services include rpc.statd, rpc.mountd, and rpc.ttdb, at least one of which can be found on almost every flavor of Unix distributed today. Because security has only recently become a concern, it has taken software vendors over a decade to realize the seriousness of the problem, and only in the last year or two have they begun to address these vulnerabilities. With scripts readily available all over the Internet, even the most novice of hackers (often called script kiddies) can exploit these holes in systems worldwide.
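Administrators can see which RPC services one of their own machines advertises by querying its portmapper, the same information reported by the standard rpcinfo -p command. The following is a minimal, hand-rolled sketch of that query in Python; the target address is a placeholder, the name table is a small assumed subset of well-known program numbers, and the reply parsing assumes the common case of a null verifier.

    # Minimal sketch: dump the RPC program list from a host's portmapper
    # (UDP port 111), roughly what "rpcinfo -p host" prints.
    import random
    import socket
    import struct

    # A few well-known RPC program numbers (rpc.statd registers as "status").
    NAMES = {100000: "portmapper", 100005: "mountd",
             100024: "status", 100083: "ttdbserver"}

    def pmap_dump(host):
        xid = random.randint(0, 0x7FFFFFFF)
        # ONC RPC call: xid, CALL(0), rpcvers=2, prog=100000 (portmap),
        # vers=2, proc=4 (DUMP), followed by two empty auth structures.
        msg = struct.pack(">6I", xid, 0, 2, 100000, 2, 4) + b"\x00" * 16
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(5)
        sock.sendto(msg, (host, 111))
        data, _ = sock.recvfrom(65535)
        sock.close()
        off = 24  # skip the reply header (assumes a zero-length verifier)
        entries = []
        while data[off:off + 4] == b"\x00\x00\x00\x01":  # "value follows"
            prog, vers, proto, port = struct.unpack(
                ">4I", data[off + 4:off + 20])
            entries.append((prog, vers,
                            "udp" if proto == 17 else "tcp", port))
            off += 20
        return entries

    # Example, against a placeholder address:
    # for prog, vers, proto, port in pmap_dump("192.0.2.10"):
    #     print(prog, vers, proto, port, NAMES.get(prog, ""))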

Once interactive shell access has been gained on a Unix machine, even a rudimentary understanding of the Unix operating system is all it takes to find and edit the system’s web pages. Using find and vi, a competent intruder can walk through the system and assume complete control over it. Changing a web page is actually the least of the damage that could be done to a vulnerable system. However, such defacements are typically the most publicly embarrassing incident a company can face. Because of this, security efforts are often focused on the web server and related components. This focus can quickly create gaping holes in the underlying operating system and allow intruders to waltz right in.

Protecting against intruders who target the operating system rather than the web server is typically easy. The key is maintaining a consistent and proactive security posture. Rather than waiting for an embarrassing incident to prompt your staff to implement better security measures, continual monitoring and updates should be performed from day one. Once the machine is set up, administrators should take steps to improve its default security posture, as most installations are notoriously insecure. Turning off unneeded remote services, removing extraneous SUID file permissions, and setting up better group control are just a few things administrators should do; a quick SUID audit is sketched below. Once done, you should check the web site of your operating system vendor. These sites contain updated information and security patches that address the latest publicly known vulnerabilities.
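As a concrete example of the SUID cleanup mentioned above, the following minimal Python sketch walks a directory tree and lists set-uid files, the same idea as running find / -perm -4000. The starting directory is a placeholder; a real audit would run as root across the entire filesystem and compare the results against what the vendor actually ships.

    # Minimal sketch: list set-uid regular files under a directory tree
    # so extraneous ones can be reviewed and stripped.
    import os
    import stat

    def find_suid(root):
        for dirpath, dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    st = os.lstat(path)
                except OSError:
                    continue  # unreadable or vanished; skip it
                if stat.S_ISREG(st.st_mode) and st.st_mode & stat.S_ISUID:
                    yield path

    # Example, with a placeholder starting point:
    # for path in find_suid("/usr"):
    #     print(path)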

Japan and the U.S.

Looking at the wave of recent Japanese government defacements between January 24th and February 2nd, it is interesting to note that at least six of the servers were running Sun Microsystems’ Solaris operating system, while only a single instance of Microsoft Windows NT was found. At the time of the defacements, five of the machines could not be identified. Compare this information with a list of defaced United States government servers (http://www.attrition.org/mirror/attrition/gov.html), and you can see the heavy use of Windows NT.

Without more statistics on the number of machines running in each government, it is difficult to draw accurate conclusions about whether one operating system is more secure than another. The figures above do begin to paint a picture of each government’s preference in operating platforms. The wide-scale deployment of Windows NT servers throughout the United States government has left it vulnerable to attackers, as is evident from the long list of defaced servers.

What may be more important is the reaction from the administrators of each system, as well as the reaction from government officials. Public statements about U.S. servers being hacked and defaced were slow to come. It took over a year of repeated, embarrassing defacements before President William Clinton took a firm stance, calling for more security in government and military web sites as well as a better response from the Federal Bureau of Investigation (www.fbi.gov) in tracking these online vandals. Over the past year or more, several different U.S. agencies have asked Congress for more funds to put a stop to these attacks. Despite additional funding being granted, virtually nothing has changed and U.S. servers continue to be defaced. As recently as February 19th, three more U.S. government servers (all running Windows NT) were defaced. The NOAA Nauticus site (www.nauticus.noaa.gov), the National Ocean Service Map Finder (mapfinder.nos.noaa.gov), and the Office of the Speaker of the House (www.speaker.gov) were the latest casualties.

Unlike the slow U.S. reaction, Japanese government officials quickly met with law enforcement and requested help from the U.S. government (http://news.bbc.co.uk/hi/english/world/asia-pacific/newsid_619000/619139.stm). The call for help is ironic in that the U.S. has demonstrated repeatedly that it cannot protect its own information assets and web sites. Luckily for both governments, attacks on their web sites have slowed in the last few weeks. The question now is: will it continue?