Feds, Felons, and Flakes: Reflections on the Attrition Mirror

In 2000, Matt, Dale, and I gave a presentation at the BlackHat Briefings in Las Vegas on the Attrition defacement mirror, after we had concluded the project. Below are the summary and one slide from the stats for perspective.

This presentation covered the basics of running the Defacement Mirror, problems we ran into, the mirror process, detailed statistics on defacement activity to date, and more. Presentation by Jericho, Munge, and Punkis at BlackHat USA 2000.

  • The statistics and information presented here are based on data collected since November 1998
  • Attrition began actively mirroring defaced sites in January 1999
  • Mirrors on the attrition site date back to 1995
  • Data before January ’99 is believed to be accurate but is not 100% confirmed

Poetry #45: little piece of my soul

[This was originally published in F.U.C.K. poetry Issue #45. The publish date is approximate.]

	little piece of my soul

	ridiculed out of place
	spit it back in his face

	remove that shred of pride
	by telling him that he lied
	revel in his heartfelt pain
	his feelings to you inane

	watch his life crumble to despair
	tell your friends he treated you unfair
	cold and cruel your eyes
	their look says one thing: despise

	abusing him your natural high
	how many ways you crucify

	take a little piece of my soul

Hacker attacks welcomed…

[This was originally published on attrition.org, and reprinted on Linux Security.]

Hacker attacks welcomed.. I’m sure they are.

The new article reads:

Openhack data will help e-businesses develop the appropriate balance of Net security, openness

Does this bring flashbacks of any previous contest? It does for me. I seem to recall the same group running a contest like this before. I also recall the previous contest being extremely unbalanced, poorly set up, and very unclear as to its actual goal.

Last time, the same group put a heavily secured Windows NT box up against a near-default install of Red Hat Linux, and tried to claim Linux was less secure after it was hacked. Rather than change the default install of the Linux machine by adding security patches, they added insecure third-party CGI software that later proved to be the Achilles heel of the Linux system. This was far from a fair contest. But wait.. they don’t mention this at all. Instead, they only offer this:

Openhack is an evolution of last year’s interactive Hackpcweek.com
test, in which we pitted Linux and the Apache Web server against
Microsoft Corp.’s Windows NT and Internet Information Server 4 to see
how each would fare in a hostile Internet environment

As I reread the article, I see others have posted comments to the ZDNet forum bringing up many of these same points. Still, this is not deterring them or pushing them to improve their ways.

No doubt they have blundered this contest somehow. As Space Rogue is fond of pointing out, these hacking contests rarely test the security of a system, and often end up as a marketing ploy at best.

Looking back:

This is a summary of the previous contest. They do not mention the outcry of pitting a secured NT server against a near vanilla Red Hat Linux install. They DO at least mention their own role in unbalancing the odds:

Also contributing to the hacker’s success were incomplete security
updates on our test site

With this confession of security ineptness, every reader should begin to wonder what qualified them to run such a contest in the first place, and whether they are qualified to run the new one. Other questions about Openhack’s motives also come to mind. If they aren’t pitting the machines against each other fairly, what is the ultimate goal of such a contest?

“The Openhack equipment is in the IP range from to –anything in that space is fair game.”

IPs that respond to ICMP Ping traffic: .2 .4 .7 .15

“Used heavily in the server farm are Sun Microsystems Inc.’s hardware
and Solaris operating system, as well as Linux, OpenBSD, NT and
Windows 2000.”

Solaris, Linux, OpenBSD, Windows NT, and Windows 2000. I count five OSs there. Yet based on the pings above, we can see that one of these is obviously being shielded a tad more than the rest by denying some (or all) ICMP traffic. This hardly seems fair in testing the security of various OSs. If they are blocking a relatively harmless ping, what other security measures have been put in place?

Reading further down the article, we find out that only three of the machines are considered targets (Solaris 8, Mandrake Linux, Win2k). Amusing that they did not put a Windows NT box in the line of fire.

Portscanning (loudly) and checking ports 1 - 1024:
22/tcp     open        ssh                     
25/tcp     open        smtp                    
43/tcp     open        whois                   
53/tcp     open        domain                  
80/tcp     open        http                    
110/tcp    open        pop-3                   
111/tcp    filtered    sunrpc                  
416/tcp    open        silverplatter           
417/tcp    open        onmux                   
418/tcp    open        hyper-g                 
420/tcp    filtered    smpte                   
423/tcp    open        opc-job-start           
443/tcp    open        https        

NMAP: unknown
Netcraft: is running Apache/1.3.12 (Unix) (Red Hat/Linux)
	  PHP/3.0.15 mod_perl/1.21 on Solaris 
Port 80: Server: Apache/1.3.12 (Unix)  (Red Hat/Linux) PHP/3.0.15

All 1024 scanned ports on ( are: filtered
Remote operating system guess: HP Advancestack Etherswitch 224T or 210

It looks like they are dropping routes from potentially hostile machines. I was not able to finish portscans of .7 or .15 after the first two.
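The connect-style scan whose results appear above can be sketched in a few lines of Python. This is a minimal illustration of the technique, not a replacement for nmap; the commented-out hostname is a placeholder, and a scan like this should only be run against hosts you are authorized to test:

```python
import socket

def connect_scan(host, ports, timeout=1.0):
    """Attempt a full TCP connect to each port. A completed
    handshake means the port is open, a refusal means it is
    closed, and a timeout suggests a filtering firewall."""
    results = {}
    for port in ports:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(timeout)
        try:
            s.connect((host, port))
            results[port] = "open"
        except socket.timeout:
            results[port] = "filtered"
        except OSError:
            results[port] = "closed"
        finally:
            s.close()
    return results

# Example (placeholder host -- scan only with authorization):
# print(connect_scan("scanme.example.org", range(1, 1025)))
```

A full connect is the loudest possible scan, which is exactly what the article describes; it completes the TCP handshake and so lands in the target's logs.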

Either way, this contest doesn’t quite seem fair or worthwhile: a total of $2,500 for a long, involved hack if you compromise three target machines. The only caveat is that you must reveal full details of how you penetrated the machines.

I wonder though, is the test one against their firewall and IDS? Or the security of the five OSs? In the long run, it seems like they are doing little more than paying up to $2,500 to learn about one new vulnerability. Too bad the contributors to the Bugtraq mailing list aren’t compensated for their finds.

One of the reader comments sums up the reward money quite well. Axel Giraud says:

     "Only $2,500 for information and skills that can potentially save the industry tens of millions of dollars?

      Sorry, but I would not waste my time."

If you are curious about the current state of the contest, the article says you can get updates at http://www.openhack.com. On 06-28 and 07-03, this site is not responding. Seems a bit odd that their site is down or that their firewall is blocking legitimate web traffic.

We can see that their remote network is not set up in such a way as to give attackers a fair shake at each of the five OSs in the pool. They have added filters, IDS, and other security measures that a considerable percentage of companies have not. And they claim this is a real-world scenario? I think not.

Update: 00.07.05

After one of the servers was successfully defaced, eWEEK is claiming this does not count. Checking the status of the contest:


You don’t have permission to access / on this server.

Oh yeah, these people are qualified. What a scam.

Updated Wed Jul 19 01:54:10 MDT 2000


Two successful hacks have occurred that eWEEK is acknowledging. For future updates, check the OpenHack site.

Hacking: A Game for the 90’s?

[This was originally published in Ex-Game Vol 1, a print magazine in Japan. Exact publish date not known, just the year.]

Friday night, you’ve been at it for three hours. Typing away at your computer, hitting one web site after another. Every ten minutes that passes, some large corporate network’s web page has been replaced with a new page of your own design. You drink more of your cola and get back to work, a list of vulnerable domains in front of you. For the past three weeks, you and a friend have defaced dozens of corporate web sites each Friday night, bending the original site to your own design.

You are part of what has seemingly become the latest trend or fad: that of computer hacking and web site defacing. The term ‘hacking’ once meant, “to find a clever solution to a difficult problem.” Over the years, journalists and security professionals have skewed the definition to mean “one who accesses other computers illegally.” Regardless of the variety of terms used to describe the activity, illegally accessing computers and altering web pages has exploded in the last twelve months. The frequency of defacements along with the messages left on these altered sites suggests that many participants see their activity as nothing more than a game for the 90’s.

Recent case history has shown that a majority of those defacing web sites are between 15 and 21 years old. Because of their relatively young age, a lack of understanding of their actions often leads them into a world of problems with everyone from their parents to law enforcement. These risks aside, defacing web pages seems to be as popular as ever.

Explaining the Popularity and Ease

Perhaps the largest factor contributing to web sites getting defaced is the simplicity behind it. Thanks to current web sites and readily available information, it often takes someone only a matter of minutes to download the tools required to deface a web page. A wide variety of web sites dealing with both hacking and security offer the scripts and utilities required to commit these acts. Detailed information outlining the bug or vulnerability used to exploit a foreign network is plentiful.

Computer security sites make this information available under the policy of full disclosure. Unfortunately, this policy is a double-edged sword of sorts. By making the information available for administrators and security consultants so they can patch the vulnerability, these sites also make it available to hackers and other assorted people with questionable motives and ethics. The information shared under full disclosure allows hackers to create tools that automate the exploitation of the vulnerability. Worse, they can easily write additional tools that automate the process of finding vulnerable hosts on the Internet. Rather than try one server at a time, their tools can scan thousands of machines in a matter of minutes.

Crime of the Times

In this world of automation, society strives to make life easier at every turn. More machines and more automation means less work for us. This mindset has carried over into the hacker world all the same. Looking at a recent example of this process, we can see how easy it is for a complete neophyte with little computer knowledge to successfully deface a web page.

Oct 20, 1999 – Several high profile domains are defaced. Each server is running on Windows NT, and exhibits signs of the MSADCS exploits. Most of the defacements were one or two lines of simple text that overwrote the existing page. Because of the way the script worked, it could only overwrite the existing page with simple text.

Nov 3, 1999 – Rain Forest Puppy releases details of a vulnerability in the Microsoft MSADCS distributed library. The bug allows attackers to execute commands on a remote Windows NT server without legitimate access.

Nov 6, 1999 – Many defacers modify their scripts so they can overwrite pages with their own HTML. Several other defacers decide to append their messages to the existing pages rather than overwrite it.

Nov 10, 1999 – Updated versions of the MSADCS exploit code are released.

Dec 17, 1999 – At the time of this article, hundreds of systems have fallen victim to people exploiting this bug. On some days, thirty domains are reported as defaced due to the MSADCS and similar vulnerabilities.

The information in RFP’s advisory, along with the public utilities for exploiting this bug, makes it easier than ever before to commit crime by illegally accessing and altering data on a web page. Along with these public resources, hackers pass additional tools and modified versions of the exploit utilities around to their friends. Some choose to make these improved tools available on private web sites where thousands of hackers know to look for them. This raises the obvious question: “Why don’t sites protect themselves?”

Computer Security in the 90’s

With the pace of technology and new developments coming out on a per-second basis, one has to wonder how so many sites can maintain such a poor security posture. Multi-million dollar companies like Mitsubishi and Kingston have fallen victim to web defacement this month. Government servers of the United States, United Kingdom, Brazil, and Australia have suffered at the hands of attackers in December this year. How is it possible for hordes of teenagers to effectively control the content of such important and high profile servers?

Several factors contribute to the insecurity of computers all over the world. These factors do not necessarily apply only to web sites that have been or will be defaced; rather, they apply to any networked system. Regardless of the technical steps that can be implemented to protect these systems, diligence and continued attention are the most effective resources you can throw at security. Spending fifteen minutes a day staying updated on the latest security concerns and vulnerabilities will allow any system administrator to protect against a great majority of would-be attackers.

The lack of time spent maintaining security on computer systems leads to several technical issues that become the Achilles heel of any network.

Installing Security Patches. Software vendors release patches/fixes to address security problems that come to light. System administrators must install these patches, sometimes years after installing the operating system or software. Periodically monitoring the vendor’s website or subscribing to its mailing list is the best way to do this.

Lack of Budget. Perhaps one of the biggest complaints from system administrators is the lack of funding companies devote to maintaining security. There is no excuse for this, yet it is common under management that does not realize the implications of poor security. Rather than maintain proactive security, they take a reactive stance and only see fit to distribute funding after a horrible security incident.

Abundance of Information. As absurd as this may sound, the vast amount of information resources available to administrators can be overwhelming. So overwhelming, in fact, that it becomes confusing which resources to follow and which to trust. Different sites recommend different courses of action, different security policies, and more. The resulting confusion and conflicting advice can lead to improper configuration of corporate resources.

Poorly Trained Staff. In an effort to keep operating costs down, companies look for the lowest possible salary when hiring administrators. This leads to hiring undertrained and poorly skilled administrators who become responsible for large computer networks controlling incredible resources.

When several of these problems work in tandem, it becomes apparent how small security holes can be overlooked by even highly skilled administrators. Anything short of full attention and a comprehensive plan to protect corporate networks is begging for trouble.

Two Approaches

Most people don’t realize the logistics of attacking web sites. Until recently, one could not just magically change a web page without having complete access to the system. This meant breaking into the server that held the web pages, gaining the access required to edit the web page, and then altering it. This is achieved in a number of ways, including remote exploits that give the attacker access to the system, sniffing connections between two computers, or backdooring a utility used to access remote systems. This method is more in tune with the older way of ‘hacking’.

Recent vulnerabilities in web servers designed for more remote services now allow attackers to deface the page without gaining prior access to the server. As with the MSADCS exploit, the attacker simply utilizes a bug that overwrites or appends to the existing page. This is done without gaining a valid login and password combination or any other form of legitimate access. As such, the attacker can only overwrite or append to files on the system. Some bugs may allow them to read any file, but for the most part they do not grant the individual serious access to the machine.

Network Security in the New Millennium

If the state of security is in bad shape today, where will it go in the new year? Is security improving enough so that we can expect secure systems in the future? Are more vendors looking at security as a serious concern? Not enough to matter! While vendors are slowly realizing that security is a big concern of the consumers, most are not changing their ways to address the concerns. Rather than do proactive auditing of their products and more extensive testing, they still wait to hear about a bug and fix it down the road.

This means that hackers and web defacers will keep doing their thing into the new year! Even with fairly substantial leaps in security mechanisms, several inherent flaws will continue to plague systems around the world. A system is only as strong as its weakest link. For most outfits, this weak link is the human running the system. They are the ones prone to make mistakes, overlook the minor details or not keep up with the changing security field. Even with the most sophisticated security software available, it is only as good as the person who installs it.

This is the primary reason companies employ a high-dollar consultant to come in and install vital parts of their networks. Their hope is that by doing this, they will avoid the risk of human error and ensure a correct setup. Unfortunately, that leaves another challenge: finding qualified professionals to hire as consultants. The last few years of hype surrounding computers, the Internet, and Y2K have brought an influx of consultants that may not be adequately trained to perform the tasks you need. Yet another challenge companies must face in the years to come.

Hacking as It Stands Today

Five years ago, hacking was mostly rumor and legend: tales and stories handed down from hacker to hacker, admin to admin. Web sites were unheard of, so most system intrusions were never seen in a public manner. Oftentimes only a handful of hackers, the system administrators, and occasionally law enforcement knew about system intrusions or the level of skill involved. Hackers of old were people curious about networks and exploring. They wanted to press the system and see what else they could get it to do, especially if it hadn’t been documented before. For the most part, it was benign discovery of new computing resources and power.

Today’s “hackers” are a new breed unto themselves. Rather than learning and discovery, many seem to enjoy the fame and glory behind it. Instead of learning new aspects of how computers work with each other, they would rather vandalize web sites with poorly written rants backed by weak justifications for their actions. More and more of the web defacers today don’t even know the fundamental differences between the programming languages that make up their exploit utilities. Others can’t even find the web page once they break into a server and must ask others for advice on how to find it. Every first-year Unix admin knows that the find command is an easy built-in utility that can perform this task.
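The lookup in question really is trivial: `find / -name index.html` on any Unix system. The same directory walk can be sketched in a few lines of Python (the `/var/www` path in the comment is just a hypothetical web root for illustration):

```python
import os

def find_files(root, name):
    """Walk the directory tree under root, yielding every path
    whose filename matches name -- roughly `find root -name name`."""
    for dirpath, dirnames, filenames in os.walk(root):
        for fn in filenames:
            if fn == name:
                yield os.path.join(dirpath, fn)

# Example: locate default web pages under a hypothetical web root
# for path in find_files("/var/www", "index.html"):
#     print(path)
```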

Along with this lack of system knowledge comes a lack of understanding about the potential repercussions of their actions. Aside from breaking state or national laws and statutes, being busted for their crimes could have serious effects later in life. On top of losing all of their computer and telephone equipment, they jeopardize their careers. Companies, for the most part, do not hire convicted criminals. Worse, computer and security firms will not openly hire ex-hackers. Unless the person keeps their past hidden and lies to their prospective employer, their past will catch up to them.


Each day five to fifty sites are reported as hacked and defaced. These reports are often sent in by the person(s) who committed the crime, as a sort of bragging. They send the information to sites that mirror defaced web pages and monitor Internet crime. A few of these sites in turn pass on the information to interested third parties as well as law enforcement agencies. In any given week, there appear to be between ten and one hundred groups or individuals participating in web defacing. These people may deface one site a week if it is considered high profile, or dozens of low-key sites most of us have never heard of.

With more and more media attention focused on these public defacements, the public’s perception becomes skewed. The masses perceive hackers to be mostly young kids intent on digital graffiti. While the hackers of old are still out there silently invading network after network, leaving little or no sign of their intrusion, law enforcement spends most of its time pursuing and investigating actions that barely constitute network compromise. Many web defacements allow the attacker to overwrite a single file on the system (the web page), not gain full access to the machine. Every once in a while a story will come out about the hackers of old, such as a recent story on a group of hackers allegedly able to invade everything from phone systems to the US National Crime Information Center databases.

Almost once a month, law enforcement catches up to these hackers and makes a high profile bust. Groups like GlobalHell, Level Seven, and Team Spl0it have all had their run-ins with the law in recent months. Perhaps some of the most high profile web defacing groups in the last year, they have disappeared since federal authorities took interest in their actions and served warrants on the alleged members of each group. In a matter of days these groups were replaced by new groups defacing more sites, helping create an endless cycle of web defacement.

In the time it took to write this article, a site I help run has received word of fifteen web sites being defaced all around the world. Sites in Brazil, a US Army site, several commercial sites, and more have fallen victim to these web defacers in a matter of one day. At the ever-increasing rate of defacements, one could predict that over one thousand sites will be defaced each month next year. Based on the current rate of increase, that guess would be a fairly safe bet. Add to that the rate at which new servers are put up on the Internet, along with the rate at which new vulnerabilities are discovered and the ease with which they may be exploited. It spells out a future of hacking becoming more and more of a game.

Another brick in the wall: Fighting a losing battle on the front lines of security

[This was originally published on IBM Developer Works.]

You sacrifice convenience for security and security for convenience. For which goal was your computer network built?

Security? Oops!

In the realm of human endeavor, there is usually a simple logic applied to the process of building things. This logic is seen in the way houses, computers, and even cans of mandarin oranges are built. We do not near completion of these items only to attempt to squeeze in some vital element that was meant to come first. Foundations are not built after finishing the roof, processors are not seated after the case has been secured, and oranges are not added after the can has been sealed. Yet when security is considered, this simple application of logic seemingly fails on a majority of computer networks.

We must identify one caveat when addressing this issue. Most computer networks (especially the Internet) were first designed with an open philosophy — one of sharing information freely with anyone who needed it. Security was the little-known hobby of a few geeks who enjoyed the cat-and-mouse game of “hacking” and securing machines. It’s hard to pin down exactly when security became the big push in corporate America, but I think it safe to say it publicly surfaced in the last three or four years.

Just as the Internet had been, five- and ten-year-old corporate networks, when new, were built for connectivity and convenience. As a general rule, you sacrifice convenience for security and security for convenience. The more unrestricted the access you enjoy, the less security is present on the network. Networks built from the ground up with all aspects in mind, especially security, enjoy a stronger foundation.

A losing battle

The real suffering surrounding network security can be found in the system administrator population, which is now playing catch-up. For years, the cries from above were for functionality: integrate this, introduce that new technology, give us the ability to read sensitive corporate mail from our personal America Online (AOL) accounts. Management worldwide didn’t care how things were done or what changes had to be made; they just wanted everything to be easy!

With the media and fledgling security companies preaching about the benefits of and need for good security, administrators are scrambling. Armed with a new corporate directive, administrators must weed through hundreds of self-proclaimed experts and thousands of inadequate Web sites to find pieces of the security puzzle. Missing the overall philosophy of security, they often become consumed with nit-picky details and technical countermeasures that are not always appropriate for their network. Network administrators today are simply fighting a losing battle, plugging each springing hole in their dam.

The advice everyone asks for

Between security consulting by day, and running a nonprofit security-oriented Web site at night, I get asked a lot of questions. The second most-asked question (after “How do I hack?” which is ignored) comes from system administrators all over, who ask: “How do I secure my system?”

Of course, this question is the subject of entire books and Web sites, so a single e-mail response is typically lacking, to say the least. Even if there were no other evidence, the large volume of e-mail I get asking for security advice tells me that networks are behind the curve in security measures.

Planning considerations

Such a wide range of networks is out there that it becomes impossible to detail all the issues that will be important to your network. Not only is it beyond the scope of this article, but it is simply not feasible.

If you’re reading this article, you may already have a production network up and running right now — putting you at a serious disadvantage. Every passing day brings the possibility of your insecure network being compromised.

While implementing a plan for securing an existing network, you must consider the machines currently deployed. Have any of them fallen victim to a past compromise? Has an intruder buried himself deep enough that new security measures will not keep him out? Has an employee with legitimate access secretly backdoored your systems, undermining any new security patches you install? As a security bricklayer, ask yourself how the wall was built, whether you used the right materials, and whether there was a bad brick that will undermine the strength of the wall.

One section of the security plan must address these questions and provide policy for dealing with such cases. Because of these concerns, your task of securing the network will be daunting. Those implementing security policy on a new network are exempt from these uncomfortable questions! However, you must build the wall from the ground up using the right tools, materials, and techniques, lest it fall at the worst possible time.

For the lucky few who are designing a network from the ground up, the good news is that you can design it in such a way that security should never be a problem or concern for you. Whether it is a completely new network, or additional subnets to your existing corporate structure, implementing security policy from the beginning will alleviate a lot of headaches down the road. Performing consistent upgrades and security patch installation can turn the job of security from a full-time gig into an hour-a-day task. The time gained from this prior planning allows network administrators to spend more time on critical areas as needed. They can also dedicate more time to thorough analysis of logs, implementation of more advanced security measures, and browsing favorite (er, work-related) Web sites.

Building the wall

It’s a given that your network/security administrators are fighting a losing battle. They probably don’t have the time, training, or resources to adequately secure your corporate network. I know this, because I spent the last five years as that outside consultant, working on networks like your own. For administrators, the hardest part of all this is usually getting management to allot the time and resources to develop a plan. While it is probably not a good idea to grant every request made by your employees, it is downright foolish to stand in the way of implementing sound security policies and practices. Remember, security — though difficult — is possible. But you must build your security plan like you would build a wall: from the ground up, and brick by brick. Simple logic is all it takes, and your administrators are fighting the good fight after all.


  • The author recommends Tightening Corporate Security
  • Read the Appendix to this article, below: Five steps to a plan — security advice in a nutshell

Appendix: Five steps to a plan — security advice in a nutshell

Perhaps the single best piece of advice one could offer for most companies is to start broad and slowly focus in on the little things. This process can be broken down into five major steps.

Begin by defining the security philosophy and policy your company will adhere to. With this in mind, develop a plan to implement the policy on your network. Validate the proposed plan, and proceed to implementing the technical details that defend the network. Last, review the security policy as well as the entire network, checking to see if components were implemented correctly, and that all machines conform to corporate policy. All too often, administrators begin to attack the problem of network security at the technical detail level. This is like trying to plug holes in a near-collapsing dam, frantically filling one hole as three more spring open. Rather than playing catch-up on a failing dam, one should look at building a more solid wall from the foundation.

(1) Security philosophy and policy

Security can be boiled down into two major philosophies, each of which will guide network security policy. It is usually fairly obvious which will be more appropriate to your network, but the choice may not necessarily be black and white. Companies that offer public services outside of their internal network may adopt one philosophy for those machines, while adopting the other for important internal resources.

That which is not explicitly allowed, is denied.

Simply put, if you do not go out of your way to explicitly and intentionally allow a particular user or service, then the traffic to your network is denied. For the most part, this is the philosophy that all networks should take. Another way of looking at this is to close all resources from the start, and slowly open them as needed to continue network functionality.

  • Example:
  • allow richard telnet access
  • allow business.partner.com ftp access
  • allow group_admins oracle access
  • deny ALL ALL access

In the example above, we see three specific rules allowing particular people or hosts to access specific network resources. Incoming traffic is checked against each rule one at a time. If it doesn’t match the first three, it then matches the final rule denying the traffic. This security philosophy defaults to the most secure posture for your network, and is ideal for corporate networks with an Internet presence.

That which is not explicitly denied, is allowed.

Opposite of the first philosophy, this one is intended for open networks that share resources with almost anyone. It is best suited to public resources found at colleges, search engines, etc.

  Example:

  • deny hacker8.com telnet access
  • deny danise.net http access
  • deny aol.com mail access
  • allow ALL ALL access

The above example sets three specific rules that block incoming traffic. All other connections from any host are allowed to connect to the resources we share. In the case of repeated intrusion attempts against a network, it might be determined that blocking access to a particular host will deter further attempts (rule #1). If a back door is found in one of your Web products, it may be wise to block access to the people found exploiting it (rule #2). Thousands of pieces of junk mail delivered from the same domain can cause unneeded network congestion and lead to blocking a domain from sending mail (rule #3). All other traffic would be permitted to reach its destination.

Disclaimer: These are fictitious “rules” designed to illustrate the philosophy of security. How such rules are implemented will vary widely depending on each company’s needs and the tools used to enforce them.
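Both philosophies rest on the same mechanism: traffic is checked against the rules in order, and the last catch-all rule sets the default posture. A minimal sketch of that first-match behavior, using hypothetical rules and a made-up check() helper (not any real firewall's syntax or API):

```python
# First-match rule evaluation, as used by many packet filters.
# Rules, hosts, and the check() helper are illustrative only.

def check(rules, source, service):
    """Return the action of the first rule matching (source, service)."""
    for action, rule_source, rule_service in rules:
        if rule_source in ("ALL", source) and rule_service in ("ALL", service):
            return action
    return "deny"  # fail closed if no rule matches at all

# "That which is not explicitly allowed, is denied."
default_deny = [
    ("allow", "richard", "telnet"),
    ("allow", "business.partner.com", "ftp"),
    ("allow", "group_admins", "oracle"),
    ("deny", "ALL", "ALL"),
]

# "That which is not explicitly denied, is allowed."
default_allow = [
    ("deny", "hacker8.com", "telnet"),
    ("deny", "danise.net", "http"),
    ("deny", "aol.com", "mail"),
    ("allow", "ALL", "ALL"),
]

print(check(default_deny, "richard", "telnet"))       # allow (rule 1)
print(check(default_deny, "stranger", "telnet"))      # deny  (falls through to rule 4)
print(check(default_allow, "aol.com", "mail"))        # deny  (rule 3)
print(check(default_allow, "anyone.example", "ftp"))  # allow (falls through to rule 4)
```

Note that only the final catch-all rule differs between the two lists; the choice of philosophy is simply the choice of default when nothing specific matches.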

(2) Developing a plan

This stage of network security will often be the most lengthy and difficult phase. In a seemingly never-ending game of proposal and revision, the plan for addressing the security needs of your company must be detailed, precise, and thorough. It must cover all aspects of the company resources, not just network-connected computers. Issues such as physical security, employee awareness, and administrator training must also be covered.

Who will be involved?

The plan must include everyone who will be involved in the project, company employees and outside consultants alike. It should list each phase of the plan and the people responsible for it, and assign a point of contact (POC) for each phase to answer questions and make decisions when something is unclear.

What will be done?

List details as each aspect of security is outlined (physical, training, network). This includes the actual policy that dictates what is done to a machine or resource to make it secure. Specify who will train employees, what material will be covered, what types of access controls will be installed, and which vendors will be used. Also include how much money it will take, and where it will come from.

What timeframe is needed?

Give each phase of the security plan adequate time to be completed. The timeframe should reflect a realistic estimate of the work: deadlines that are impossible to meet will give the entire plan a negative, burdensome feel from the first one missed, while over-estimating the time lets those involved lose focus on the long-term goals.

(3) Validate

In the ideal security plan, validation occurs at every stage of the game. Because time and resources are often sparse, this is not always possible. If that is a problem you foresee, there are at least two points where validation must occur: review of the security plan itself, and review of the finished product. Validation is essential to ensure integrity is maintained throughout the security process, and it may come in different forms depending on the resources available to you. If the entire plan is developed in-house, seek external security consultants to review it and make suggestions; reviewing the plan does not mean they must be involved in implementation down the road.

You should make sure to choose reputable and knowledgeable consultants to review this plan. If the security plan is developed by a mix of outside consultants and inside staff, seek validation by conducting a peer review (others knowledgeable about security but not necessarily on this project) or hiring third-party consultants. Hiring additional consultants to audit the first set may be expensive, but the results can often be staggering.

(4) Implementing the technical details

At this point, you should be breaking out the old jargon dictionary and reviewing terms like SSH, HTTPS, FWTK, and more. Administrators of each machine take the time required to implement the policy set forth in this process. If the security plan is thorough and your technical staff is capable, this phase of the operation should go smoothly and quickly.

Each system is analyzed for security weaknesses, and each vulnerability is fixed. This analysis covers both local and remote users; always keep in mind that security threats can come from internal users (yes, your employees) just as easily as from external hackers. Company firewalls have their rule sets checked, router configurations are reviewed, and operating system patches are brought up to date. All the time your administrators spend reading security Web sites and mailing lists should pale in comparison to your corporate security plan: sites and lists offer a good view of individual vulnerabilities and patches, but lack the whole picture that ties them together for your organization.
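One way to think about the per-system analysis is as a diff between what a host actually exposes and what the policy allows for it. The sketch below is purely illustrative: the host names, service baselines, and audit() helper are hypothetical, and in practice the observed services would come from a port scan or service inventory.

```python
# Hypothetical sketch: compare the services a host actually runs
# against the services the security policy allows for it.

POLICY = {
    "mailhost": {"smtp", "ssh"},
    "webhost": {"http", "https", "ssh"},
}

def audit(host, observed):
    """Return the services running on `host` that policy does not allow."""
    return set(observed) - POLICY.get(host, set())

# Suppose a scan found these listeners:
print(audit("webhost", {"http", "https", "ssh"}))    # set() - conforms to policy
print(audit("mailhost", {"smtp", "ssh", "telnet"}))  # {'telnet'} - flag for removal
```

Anything the audit flags is either a service to shut off or a gap in the policy itself; either way, the plan, not the individual administrator's judgment, decides which.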

(5) Review

When the security concerns have been discovered and the policy implemented, it is important to review and validate the work done. It is critical that you warn your staff of this stage in advance. Explain to them that this step is not being taken because of a lack of trust in them, but is simply a method for making sure that all phases of the project have been completed. We all know that in large companies, the signals can get crossed, leading to confusion about who did what. This stage of review is simply a tool to make sure everyone completed the required task and the network is secure.

The final step in implementing a security plan should be a real-world test of your network. Contact employees at random to ensure that everything is still working properly. Securing networks has a nasty habit of turning off services or denying access to legitimate employees. The goal is to achieve a secure network while maintaining all functionality needed to conduct business. If your budget allows, commission a quick third-party penetration assessment of your network.