The Perfect Patch Storm

[This was originally published on the OSVDB blog.]

Steven Christey of CVE recently commented on the fact that Microsoft, Adobe, Cisco, Sun and HP all released multi-issue advisories on the same day (Feb 13). My first reaction was to come up with an amusing graphic depicting this perfect storm. Lacking any graphic-editing skills and carrying too much cynicism, I instead wonder: aren’t these the same vendors that continually bitch about irresponsible disclosure and it “hurting their customers”?

These same customers are now being subjected to patches from at least five major vendors on the same day. In some IT shops this is devastating, difficult to manage and recover from. If a single patch has problems, it forces the entire upgrade schedule to a halt until the problem can be resolved. If these vendors cared for their customers the way they pretend to when someone releases a critical issue without vendor coordination, they would consider staggering their patches to alleviate the burden on their beloved customers.

BumpTop 3D and Computer Interfaces: The Real Hurdle

I’m a fan of videos showing advances in computer interaction.

The BumpTop 3D model is definitely neat, and the fundamental technology behind it is no doubt revolutionary. I say this sincerely because I recognize the functionality it offers and the difficulty of converting such ideas into graphic output; more to the point, I take specific interest in such videos. I also acknowledge that I am likely a very atypical computer user, so take that into account during the following criticisms:

If we look back at the Engelbart demonstration in 1968, think of what that demonstration did to the world of computing, how it shook things up and made computer engineers of the time reconsider their views. If you are really interested in the history of computing, especially the formative years, I encourage you to read What the Dormouse Said: How the Sixties Counterculture Shaped the Personal Computer Industry by John Markoff, the first (and currently only) book by Markoff that I encourage people to read. During those early years, engineers didn’t have twenty years of computer use to influence them like I do; back then they were resistant to change with only a few paltry years of computer use under their belts. Imagine how resistant people would be today. It took my mother at least a year to properly use a mouse, as she was always prone to move it between overly forceful double-clicks. How is she supposed to move from a classic Microsoft Windows desktop to something more revolutionary? She would still rather apply her 9mm Smith & Wesson to her recently assigned BlackBerry than accept it.

Jump to 2005 or beyond and we see the next wave of computer desktop evolution. Examine the NICTA interfaces, including the VI Ball, Phantom, or Geomi. Consider the almost viral video depicting a new computer interface seemingly straight out of Minority Report, developed by Jeff Han and Phil Davidson. As a geek, these new interfaces provoke an immediate reaction of excitement and hope: the computer culture is finally realizing that Microsoft Windows and Apple Mac OS X aren’t the most intuitive interfaces around, and slowly coming to terms with the fact that menu options on the left are not where we naturally look for them; most people naturally embrace a menu on the right.

The Microsoft Windows desktop has none of the features depicted in these videos, supporting only the very rudimentary drag-and-drop first introduced some 25+ years ago. The idea of moving and organizing documents in such a way is entirely foreign to the desktop computer user, novice or expert. It doesn’t even occur to me to sort my documents by file type, grouping all PDFs into an unorganized pile in the upper left corner as demonstrated in the video. This makes me wonder if GUI designers are about to run into the same problem that movie CGI wizards encounter: that of being “too real”. Did you see Titan A.E.? The only problem with that movie on a technical/CGI/art level was that the rendered characters were “too” real. This occurred when the computer tried to render 1,000,000 individual hairs on a human head, instead of realizing that while there may be that many individual hairs, the casual observer could not distinguish such detail, and that they acted more in unison than the computer rendered.

The BumpTop 3D desktop suggests that the difference between real-world desktops and computer desktops is lost, claiming that “documents are piled in casually and arranged in a way that subtly conveys information to the owner”. While I cannot disagree with that statement, there is a huge assumption being made that conventional desktops full of loose papers are treated the same way, and organized in a similar fashion to computer desktops. Again, I may be the atypical computer user, but I just don’t see most people automatically bridging these two radically different “desktops” and applying the same mannerisms, patterns or logic to them.

Looking at the video in more detail, let’s apply their technical examples to real-world scenarios. Specifically, we’ll compare their functionality and design to my real-world desk and papers, or my current browsing habits.

  1. They drag all PDF documents to the upper left corner. The video says during the demonstration that “we can toss these similar documents like these PDF documents” to a corner. In the real world, we don’t have “pdf” vs “word” vs “ascii” documents. All printed documents are essentially the same. We do not group them by the type of document; rather, we group them by content. This is evident in the stack of papers on my desk related to random/scratch notes I have taken over the past year, or the stack of papers related to music that I want to check out, or the stack of work-related reports for clients. These are not all written on 8.5×11 white paper, a suitable analogy to “all PDF documents”, since some of my notes may have been printed from ASCII editors while others were printed from MS Word.
  2. The ‘lasso’, ‘tidy pile’ and ‘lasso’n’cross’ functionality is good. Just as we can quickly grab separate documents from all over the desk based on any criteria we immediately determine apply to them all, we can relocate them to a new area on the desk or stack them based on our idea of their grouping. This functionality is desperately needed in modern desktop computing, but the matter of quickly applying a set of criteria is still unintuitive.
  3. ‘lassomenu’ and ‘pigtail gesture’ are entirely too similar for casual users, at least until they get used to them. This is essentially the same problem my mother had when first learning the computer interface. When attempting to double-click an item or click and drag, she would either move the mouse too much between clicks or let up on the mouse button before dragging. This would result in very jerky movements and unpredictable document manipulation on the desktop. Relying on less than one inch of virtual real estate to discern between the two methods is not optimal for casual users.
  4. The ‘drag’n’cross’ functionality looks good in the video demonstration but falls short elsewhere. How is this achieved on a physical level with mouse interaction? Are video displays presumed to be of high enough quality to easily distinguish that one document is not aligned with the rest, making it easy to tell both that a corner is sticking out AND what that document is? If not, then it does not hold to the real-world analogy demonstrated in the video. In a stack of Wired magazines, I can distinguish the Newsweek in the middle of them from a tiny visible sliver, based solely on the thickness of the paper/magazine.
  5. Arbitrarily sized objects: document sorting, browsing and hierarchical piles are all done using a fraction of the screen real estate we are accustomed to. Imagine sorting through a dozen web browser windows (regardless of your browser) in a matter of three seconds, when the content is displayed on a two-by-two-inch thumbnail. This only works if there is a large distinguishing graphic on the pages being viewed. It is completely impractical for someone who wants to quickly flip between windows where the content appears the same from a high-level view but is quite different upon closer examination. For example, try using this method to sort through 15 different windows of vulnerabilities as displayed by CVE or OSVDB.
  6. ‘tugs’, ‘pinning’ and half a dozen other terms used all seem novel, but there is no indication of how these are achieved in the physical world using the input device. Unless the input device has a lot more buttons or a radically different design, this will not be intuitive to most users, regardless of skill level.

I am a fan of overhauling the computer interface, even if, when it happens, I will be one of the very few still clinging to mostly keyboard input for my computing experience. These videos are a great step in advancing the entire genre of personal computer use, but for now they are just innovations that are a long way from becoming reality. For those designing such radical interfaces: make sure that every step of the way is thoroughly tested by my mother. If she can’t use the new interface in a matter of hours, you have not designed and/or implemented it adequately. No offense to my mother, of course; she is just the first person who comes to mind when I think of adopting and embracing technology.

On a side note: posting the above with href links while drunk is a challenge like no other. Digging up obscure links to computer interfaces you saw demonstrated a year or more ago? Fortunately Google/YouTube offer search engines that made this part easy. “proof” that web sites are “wrong” by putting navigation on the left instead of right? No, I can’t find an easy reference, but I “know” this to be true because an old friend who was a ninja at web interface design in 1994 discovered/learned it fifteen years ago and pointed it out to me a dozen times since.

“We recovered the laptop!” … so what?

[This was originally published elsewhere. It was written by Lyger and Jericho.]

In May of 2006, the United States Department of Veterans Affairs publicly disclosed that “Personal data on about 26.5 million U.S. military veterans was stolen from the residence of a Department of Veterans Affairs data analyst who improperly took the material home”, prompting mass concern that the information, in the wrong hands, could lead to multiple cases of identity theft. At the very least, the fear that even a government entity could let such sensitive data fall into the wrong hands led many to wonder about the data security of less protected sources. The additional fact that the breach wasn’t disclosed for almost three weeks after the theft did little to ease those fears.

Weeks later, the stolen laptop and hard drive were recovered from the back of a truck at a black market sale and sent to the United States Federal Bureau of Investigation for analysis. At the end of June 2006, the FBI issued a declaration that “the personal data on the hardware was not accessed by thieves” to which VA Secretary R. James Nicholson stated “This is a reason to be optimistic. It’s a very positive note in this entire tragic event.” The question that needs to be asked, however, is how could they be absolutely sure that the data wasn’t accessed? Simply because the FBI said so?

Here’s the inherent problem with situations such as these: anybody with a relative clue, or at least a copy of Knoppix or F.I.R.E., can potentially bypass the security measures implemented on lost or stolen drives. Period. Unless the data on a drive is encrypted with a key either unknown or inaccessible to an intruder, that data is open to compromise. We won’t even go into cracking AES256 or 3DES here; for the most part, such attacks are impractical. Cracking algorithms over 128-bit is possible, but only with a lot of time and/or firepower. However, all it takes is shoving a CD in the machine, rebooting, and typing:

# mkdir -p /tmp/stolen_info
# mount -o ro /dev/hda1 /tmp/stolen_info
# cd /tmp/stolen_info
# ls -la

It’s not that difficult, and it makes all of that “password-protected” data quite readable, even for a casual computer user. If the person who stole the laptop were to remove the drive and perform a bit-by-bit copy, they would circumvent any password protection on the computer. Remember, BIOS and operating system passwords rely on the computer and OS booting up; if you remove the drive, neither offers any level of protection and both are completely worthless. The irony of law enforcement claiming the information was not accessed is that the method used to conduct a forensic examination is the exact same thing an attacker would do to access the data without detection. Law enforcement knows this, “independent examiners” know this, and the companies making these bogus statements know this.
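To make the bit-by-bit copy concrete, here is a sketch of the sort of session involved. The file names are made up, and a scratch file stands in for the raw device (e.g. /dev/hda) that an attacker, or a forensic examiner, would actually image:

```shell
# Scratch file standing in for the stolen laptop's raw drive.
printf 'SSN: 123-45-6789\n' > /tmp/va_drive.img

# dd performs a bit-for-bit copy of the raw bytes. The OS on the drive
# is never booted, so BIOS and login passwords never come into play.
dd if=/tmp/va_drive.img of=/tmp/attacker_copy.img bs=4096 2>/dev/null

# The copy is byte-identical to the original; the checksums match.
sha256sum /tmp/va_drive.img /tmp/attacker_copy.img

# All work happens on the copy, never the original, so no timestamps
# or other traces change on the recovered drive either way.
grep 'SSN' /tmp/attacker_copy.img
```

This is exactly why "the data was not accessed" cannot be proven: reading from a copy of the drive leaves the original untouched.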

In case you think this sounds either pretty high-tech or like something we’re just making up, consider the statements below.

Despite the fact that computer forensics can’t conclusively prove what happened before a machine was recovered, companies that have a data loss incident seem wont to instantly deny that any information was compromised. Cry as they might, the fact is… they simply don’t know. For companies that experience such a loss, learn from past statements. Like many spaghetti westerns, certain incidents should come with a soundtrack. Since we don’t host sound files on this site, quietly whistle these to yourself:

“The Good, the Bad, and the Ugly”:

The Good:”>
The good news: Nothing has happened since then to give us any concern that any of the files that were potentially on any of the computers have been used for anything related to identity theft. releases/2005/09/15_laptop.shtml
UC police note that while a lab analysis could not determine whether the sensitive campus data was ever accessed, nothing in their investigation points to identity theft nor individuals involved in identity theft.

The Bad:
“PSA has no indication that any of the information has been accessed or misused.”

The Ugly:
“The ring did not target the data and did not even attempt to access the data on the laptops they stole,” the DOT inspector general’s office said in a statement.
In an e-mail to employees within the past week, Senior Vice President Rick Stephens said Boeing and an outside security consultant had determined that the files containing personal information had not been read.

To take it one step further, what happens if a company hires an employee who, intentionally or unintentionally, has access not only to company secrets but to client, customer or employee information… and that employee becomes involved in a possible breach? Who can determine what information may (or may not) have been accessed? What level of forensics exists to absolutely, positively confirm that a breach did (or did not) take place? (Answer: none.)

In short, just claiming that a breach did not happen, especially one that may or may not involve personally identifying information, IS NOT ENOUGH. Even with an unbiased third-party examination of all the details, there is no absolute guarantee that compromised data was not accessed. If personally identifying information is involved in a breach, in possibility or actuality, it is in the best interest of all involved to disclose the (possibility of a) breach to clients, customers, employees and any other parties involved. Hiding behind legalities only shows that the breached entity’s interest lies in its own bottom line and not in the interest of the people it went into business to serve.