Perlroth, Miller, and the First Remote iPhone Vuln

In what is sure to be my last blog (?!) born out of reading “This Is How They Tell Me The World Ends” by Nicole Perlroth, this article is a quick dive into a single paragraph containing one sentence with an alleged fact about vulnerability history. As a self-described Vulnerability Historian, that is of course of interest, especially if the statement is questionable. From page 63 of the book, here is the paragraph for full context, with the relevant bits in bold:

But that’s not how he would be remembered. One month after unleashing his white paper on the world, Charlie [Miller] made an even bigger name for himself with the first remote hack of an iPhone. The conventional wisdom had always been that the iPhone – with its sleek design and closely held code – was more secure than the alternatives. But Charlie blew a hole right through that theory. He demonstrated before an audience of hundreds how easily he could remotely control anyone’s iPhone simply by steering their browser to a malicious website he created.

With that, we’ll examine three components of this claim:

  1. Was the vulnerability remote?
  2. Was the vulnerability in an iPhone?
  3. Was Miller the first?

Before jumping to conclusions on those answers, there’s a twist or two! If you’re already grumbling over me being “wordy”, you can scroll down to the end for the CliffsNotes TL;DR and miss the rabbit hole adventure. And also thank me for not posting each section separately, teasing it out, and making you wait two weeks for an answer.

Was it Remote?

Perlroth cites the quoted section above from a 2007 article by John Schwartz titled “iPhone Flaw Lets Hackers Take Over, Security Firm Says”. To make sure we understand the context, here is the relevant portion of that article, with an important bit highlighted:

Once he was there, the site injected a bit of code into the iPhone that then took over the phone. The phone promptly followed instructions to transmit a set of files to the attacking computer that included recent text messages — including one that had been sent to the reporter’s cellphone moments before — as well as telephone contacts and email addresses. “We can get any file we want,” he said. Potentially, he added, the attack could be used to program the phone to make calls, running up large bills or even turning it into a portable bugging device.

For clarity, and to show this was widely reported, we see from Farhad Manjoo of Salon in his article “Security researchers find a dangerous iPhone flaw” that the attack vector is stated more clearly:

The hack — first reported by John Schwartz in Monday’s New York Times — can be activated through a malicious Web site, a Wi-Fi access point or a link sent to the phone through e-mail or a text message. After it’s activated, an attacker can make the phone transmit files or passwords, run up wireless services or even record audio and relay it back to the attacker.

The reason the attack vector is so important is that it speaks to the first part of the claim, in which Perlroth says it was the “first remote hack”. In the context of vulnerabilities, remote means that a vulnerability can be exploited over the network without any user interaction from the victim. If the exploit requires the victim to perform an action of any kind, including clicking a link, it is a user-assisted or context-dependent attack. That is still a serious attack, since we know the bar for clicking a link is low, but the distinction is important. Why? Let’s start with risk scoring and refer to Remote Code Execution (RCE) and Arbitrary Code Execution (ACE) for reference.

Using the Common Vulnerability Scoring System (CVSS), an industry standard for better or worse, we get four sets of scores to look at. First, understand that many organizations use a three-tier “stoplight” system for general risk severity (i.e. low, medium, high) or a five-tier system that adds ‘informational’ and ‘critical’ ratings. The five-tier system breaks down as 0.0 (informational), 0.1 – 3.9 (low), 4.0 – 6.9 (medium), 7.0 – 8.9 (high), and 9.0 – 10.0 (critical). For organizations that prioritize at this higher level first, focusing on critical before moving to high-risk, the difference between an 8.9 and a 9.0 may mean a lot. So let’s compare an RCE versus an ACE, looking at both CVSS version 2 and 3, which are the same in spirit but different in implementation:
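For what it’s worth, the v3 numbers can be reproduced straight from the CVSSv3.1 base-score formula. Here is a minimal sketch in Python; the two vectors are my own illustrative choices (network attack vector, Scope Unchanged, C/I/A all High, differing only in User Interaction), not scores taken from any advisory:

```python
# CVSSv3.1 metric weights (per the FIRST specification) for the metrics used
# below. Both illustrative vectors are AV:N/AC:L/PR:N/S:U/C:H/I:H/A:H and
# differ only in User Interaction (None vs. Required).
AV_N, AC_L, PR_N = 0.85, 0.77, 0.85
UI_NONE, UI_REQUIRED = 0.85, 0.62
CIA_HIGH = 0.56

def roundup(x):
    """CVSSv3.1 'Roundup': smallest number to one decimal place >= x."""
    i = int(round(x * 100000))
    return i / 100000.0 if i % 10000 == 0 else (i // 10000 + 1) / 10.0

def base_score(ui):
    iss = 1 - (1 - CIA_HIGH) ** 3             # Impact Sub-Score, C/I/A all High
    impact = 6.42 * iss                        # Scope: Unchanged
    exploitability = 8.22 * AV_N * AC_L * PR_N * ui
    return roundup(min(impact + exploitability, 10))

print(base_score(UI_NONE))      # remote, no interaction (RCE-style): 9.8
print(base_score(UI_REQUIRED))  # user-assisted (ACE-style): 8.8
```

The only difference between the 9.8 (critical) and the 8.8 (high) is that single User Interaction metric, which is exactly the remote versus user-assisted distinction at issue here.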

As we see, arbitrary code execution under CVSSv3 is scored as 8.8, which is only “high” risk, while under CVSSv2 it is “critical”. Compare that to remote code execution, which is “critical” under both scoring systems. So the distinction between remote and user-assisted is important in both severity and factual accuracy. Jumping back to specifics of the attack:

The hack — first reported by John Schwartz in Monday’s New York Times — can be activated through a malicious Web site, a Wi-Fi access point or a link sent to the phone through e-mail or a text message.

This is clearly an arbitrary code execution situation, as it requires a victim to visit a malicious web page in some manner. That distinction is one that Charlie Miller has made himself many times over the years. This is not a remote vulnerability. At this point it would be more accurate to say “first user-assisted code execution vulnerability in an iPhone”. That’s a bit different, huh?

Was the vulnerability in an iPhone?

The simple answer is, of course, “yes”. But it’s more complicated than that, as we’ll see, as is why that matters. When attributing a vulnerability to a given device like an iPhone, we should note whether the vulnerable code is in Apple’s own iPhone code or in a third-party library used by the device. This distinction starts us down a rabbit hole.

First, we’ll reference the vulnerability in question which is CVE-2007-3944, and it was cited in the iPhone v1.0.1, macOS Security Update 2007-007, and Safari 3 Beta Update 3.0.3 updates:

Safari

CVE-ID: CVE-2007-3944

Available for: iPhone v1.0

Impact: Viewing a maliciously crafted web page may lead to arbitrary code execution

Description: Heap buffer overflows exist in the Perl Compatible Regular Expressions (PCRE) library used by the JavaScript engine in Safari. By enticing a user to visit a maliciously crafted web page, an attacker may trigger the issue, which may lead to arbitrary code execution. This update addresses the issue by performing additional validation of JavaScript regular expressions. Credit to Charlie Miller and Jake Honoroff of Independent Security Evaluators for reporting these issues.

How do we know it was this vulnerability and not a subsequent one? Perlroth says it came one month after Miller’s paper, “The Legitimate Vulnerability Market”, from May 2007. Miller and Honoroff’s vulnerability was shared with Apple on July 17 and publicly disclosed on July 19. That is close enough to a month, and the next iPhone update was 1.1.1 in September, which did not credit Miller. You can also notice that while Perlroth credits Charlie Miller, the vulnerability was additionally credited to a second individual, Jake Honoroff.

We can see that the first two advisories attribute the vulnerability to the code in Safari, while the Safari advisory attributes the vulnerability to WebKit, an open-source web browser engine used by Google Chrome, Apple Safari, Mozilla Firefox, Microsoft IE (recent versions), and other browsers. But the advisory tells us the issue is actually in Perl Compatible Regular Expressions (PCRE), which is a library used within a library (WebKit) used within Safari used within the iPhone. At this point it would be more accurate to say “first user-assisted code execution vulnerability in a transitive dependency used by the iPhone”. That’s quite different, huh?

We need to go further down the rabbit hole though. Since the vulnerability is in WebKit, which existed before the iPhone and its first security patch, we need to consider whether any prior WebKit vulnerabilities might have impacted the iPhone and simply weren’t reported as such. We know iPhone development began in 2004 and the first release was June 29, 2007. We don’t know what that development was like, specifically how often Apple pulled in upstream WebKit code. In theory that gives us a 3.5-year window, but I think it is safe to say the developers pulled in code more often. There are at least two WebKit exploits from 2006; only one, disclosed later that year on November 14, is an ACE. I suspect it was patched well before the iPhone’s release, since it was patched in macOS at that time.

Next we need to consider whether other Safari vulnerabilities might have impacted the iPhone. One jumps out quickly: an ACE in Safari patched on June 12, but it only impacts installs on Windows. Next we have a vague disclosure from June 11, 2007 about “ten flaws” in the SVG parsing engine that weren’t reported to Apple (CVE-2007-3718). These could very well represent vulnerabilities that impacted the iPhone; we simply don’t know. There were two more ACE vulnerabilities reported in Safari with no indication they were fixed, just reported (CVE-2007-3187). These could affect the iPhone as well.

Finally, we have to consider whether vulnerabilities in other third-party libraries used in the iPhone affect it. Apple doesn’t publish a list of those libraries, but based on prior disclosures that affect macOS, which could also affect the iPhone, they include expat, troff, Libxslt, ICU / ICU4C, libXfont, libxml2, glibc, and some FreeBSD BDF font handling code. That’s a lot of code we don’t know about, which is certainly a concern.

Did Miller’s vulnerability in question affect the iPhone? Yes, but at this point it would be more accurate to say “first publicly disclosed user-assisted code execution vulnerability in a third-party library used by the iPhone after commercial sales began”. That’s even more specific, huh?

Was Miller the First?

Since the iPhone advisory above covers the first security update for the device, it represents the first batch of vulnerabilities patched after public release. The next thing we need to look at is the other vulnerabilities patched; are any of them ACE or RCE? Yes, one of the four others is an ACE as well (CVE-2007-2399). It is described as:

Impact: Visiting a maliciously crafted website may lead to an unexpected application termination or arbitrary code execution

Description: An invalid type conversion when rendering frame sets could lead to memory corruption. Visiting a maliciously crafted web page may lead to an unexpected application termination or arbitrary code execution. Credit to Rhys Kidd of Westnet for reporting this issue.

So there are two ACE vulnerabilities fixed in the same advisory. How did Schwartz at the New York Times know that Miller and Honoroff’s vulnerability was first? Likely because Miller told him so. In the article Schwartz quotes Lynn Fox from Apple, so they talked, but I suspect Schwartz did not verify that information and Fox did not volunteer it. From the NYT article:

The researchers, working for Independent Security Evaluators, a company that tests its clients’ computer security by hacking it, said that they could take control of iPhones through a WiFi connection or by tricking users into going to a Web site that contains malicious code. The hack, the first reported, allowed them to tap the wealth of personal information the phones contain.

[..]

A spokeswoman for Apple, Lynn Fox, said, “Apple takes security very seriously and has a great track record of addressing potential vulnerabilities before they can affect users.”

Per that article and other sources, we know that Independent Security Evaluators (ISE) reported the vulnerability to Apple on July 17. Looking in VulnDB, I can see that Kidd reported his find to Apple on June 13, over a month before ISE did, and his vulnerability is in the third-party library WebKit itself rather than a transitive dependency of WebKit. So that settles it, right? Not quite.

We know that between these two vulnerabilities, Miller was not first. But we also know that neither was remote code execution. Moving past the iPhone 1.0.1 update, we have to go through each subsequent release to figure out whether any of the fixed vulnerabilities qualify. Fortunately, we only have to go one more version, to 1.1.1, before we have our first candidate. On September 27, 2007, that update fixed a vulnerability in Bluetooth functionality that can be exploited remotely:

Bluetooth

CVE-ID:  CVE-2007-3753

Impact:  An attacker within Bluetooth range may be able to cause an unexpected application termination or arbitrary code execution

Description:  An input validation issue exists in the iPhone’s Bluetooth server. By sending maliciously-crafted Service Discovery Protocol (SDP) packets to an iPhone with Bluetooth enabled, an attacker may trigger the issue, which may lead to unexpected application termination or arbitrary code execution. This update addresses the issue by performing additional validation of SDP packets. Credit to Kevin Mahaffey and John Hering of Flexilis Mobile Security for reporting this issue.

This technically qualifies as the first remote vulnerability in the iPhone! However, notice that it has to be exploited from within Bluetooth range, which severely limits exploitation. In such cases CVSS is scored as AV:A, meaning adjacent network, dropping the score a good bit. While this fits the bill, meaning Kevin and John deserve serious kudos, it isn’t remote in the sense most people associate with the term. So let’s keep going to find the first fully remote vulnerability in an iPhone. We pass the releases for 1.1.2, 1.1.3, 2.0, and 2.1 to find the next of interest in 2.2, on November 20, 2008:

ImageIO

CVE-ID:  CVE-2008-2327

Impact:  Viewing a maliciously crafted TIFF image may lead to an unexpected application termination or arbitrary code execution 

Description:  Multiple uninitialized memory access issues exist in libTIFF’s handling of LZW-encoded TIFF images. Viewing a maliciously crafted TIFF image may lead to an unexpected application termination or arbitrary code execution. This update addresses the issue through proper memory initialization and additional validation of TIFF Images.

ImageIO

CVE-ID:  CVE-2008-1586

Impact:  Viewing a maliciously crafted TIFF image may lead to an unexpected device reset

Description:  A memory exhaustion issue exists in the handling of TIFF images. Viewing a maliciously crafted TIFF image may lead to an unexpected device reset. This update addresses the issue by limiting the amount of memory allocated to open a TIFF image. Credit to Sergio ‘shadown’ Alvarez of n.runs AG for reporting this issue.

These two vulnerabilities are interesting because there is potential for a remote attack here, but the advisory’s wording doesn’t make it clear and Apple doesn’t provide CVSS scores. Since an image can be delivered in a wide variety of ways, including via SMS, the fact that these occur in the ImageIO subsystem is of note. The Apple Developer documentation backs up this thought:

The Image I/O programming interface framework allows applications to read and write most image file formats. This framework offers high efficiency, color management, and access to image metadata.

A bit light on details, but this suggests that any app that remotely receives content and processes it, e.g. an SMS messaging app, could be an avenue for remote code execution. Based on a chat with a colleague, it would require the victim opening the SMS app at the very least, which is a low bar for exploitation, but he does not think the iPhone SMS app renders images as a preview without clicking into a specific message. A low bar, but it still requires some user interaction. We see the exact same thing for CVE-2008-3623 and CVE-2009-0040 in the iPhone 3.0 update on June 17, 2009. It is interesting to note that we’re now two years after the iPhone’s release and still have no fully remote vulnerability with no limitations or caveats.

CoreGraphics

CVE-ID:  CVE-2008-3623

Impact:  Viewing a maliciously crafted image may lead to an unexpected application termination or arbitrary code execution

ImageIO

CVE-ID:  CVE-2009-0040

Impact:  Processing a maliciously crafted PNG image may lead to an unexpected application termination or arbitrary code execution

This time one of them is in CoreGraphics, which does not seem as promising as ImageIO based on the documentation. Moving on, we land on the iPhone 3.0.1 update, released July 31, 2009, and see:

CoreTelephony

CVE-ID:  CVE-2009-2204

Impact:  Receiving a maliciously crafted SMS message may lead to an unexpected service interruption or arbitrary code execution

Description:  A memory corruption issue exists in the decoding of SMS messages. Receiving a maliciously crafted SMS message may lead to an unexpected service interruption or arbitrary code execution. This update addresses the issue through improved error handling. Credit to Charlie Miller of Independent Security Evaluators, and Collin Mulliner of Fraunhofer SIT for reporting this issue.

This has all the makings of what we’re after. While the advisory says “arbitrary code execution”, that is a qualifier to “decoding of SMS messages”. Receiving the message triggers it, as the payload is processed regardless of whether the message is specifically opened. But notice that the same issue was also found by Collin Mulliner. So who found or reported it to Apple first? That is what ultimately settles this question. Since the advisory lists two people with two different affiliations, that typically means mutual discovery, or a “vulnerability collision”.

I reached out to a contact at Apple and asked if they could figure out which of the two sent the email first to settle this debate. Lo and behold, I was told that it was a single mail, sent June 18, 2009, with both listed as creditees! That is backed up by a presentation at Black Hat USA 2009 titled “Fuzzing the Phone in your Phone”, given by both individuals.

Conclusion (tl;dr)

We began the blog with a quote from Nicole Perlroth’s book, “This Is How They Tell Me The World Ends”, in which she says “One month after unleashing his white paper on the world, Charlie [Miller] made an even bigger name for himself with the first remote hack of an iPhone.” The question is whether that quote is accurate, understanding she is citing CVE-2007-3944. The answer is: it’s complicated. Here are the facts as best I can tell:

  1. Was it remote? No, the cited vulnerability is a user-assisted issue and cannot be carried out remotely without the victim clicking something.
  2. Was the vulnerability in an iPhone? Kind of. The vulnerability was in the Perl Compatible Regular Expressions (PCRE) library used by the JavaScript engine in Safari, bundled with the iPhone. Yes, it affected the device; no, the vulnerability wasn’t in Apple’s code, let alone the iPhone’s native code base.
  3. Was Miller the first? It’s complicated. 
    1. If we go strictly by CVE-2007-3944, then no, Miller was not the first. Rhys Kidd disclosed a user-assisted vulnerability in WebKit, the rendering engine in Safari, over one month earlier. Further, Jake Honoroff co-disclosed the vulnerability Miller found.
    2. If we go by remote without interaction but limited in attacker location, then no, Kevin Mahaffey and John Hering are the first with CVE-2007-3753, which must be exploited over Bluetooth.
    3. If we go by the actual first remote vulnerability, CVE-2009-2204, around two years later, then yes, but Miller co-discovered it with Collin Mulliner and both earned that distinction.

In short: no, kind of, no, no, yes but. So Perlroth is incorrect in her assertion, and she very likely included it after talking to Miller for her book. The problem is that in the context of the 2007 vulnerability, Miller was wrong, and Perlroth et al. did not properly fact-check that detail, instead relying on a co-worker’s article as gospel. We don’t know if Miller mentioned Honoroff in his discussions with Perlroth, or if her text was written outside the scope of her discussion with Miller, but that detail was trivial to find if the claim had been fact-checked beyond her colleague’s article, which also omitted it.

Regardless, I believe we have a definitive answer as to that one line in the book. It took eight pages to get to this answer and I apologize for that (not really).

My Photography is Popular

According to Ken Rockwell, via the first result of a Google search, the definition of a professional photographer is someone:

.. who earns 100% of his income from photography. This is the definition required for entrance into the secret Nikon and Canon factory support organizations. People who earn less than 50% of their income from photography are amateurs.

I am not a professional by that standard, but I had to look because I was curious if viewership mattered. That came after Google notified me that another one of my photos had been viewed over one million times, via the exciting subject line of “A lot of people are seeing your photo on Google Maps!”

I’m sure there are some amazing photographers that sell their works in stores that have been seen by tens of thousands of people, and yet my quick shot of a local Qdoba has somehow been viewed by that many? That’s about 25% of the population of the greater Denver metro area.

I’ve long thought that these mails and such numbers are wrong, but had no way to prove it. There certainly aren’t one million actual humans that interested in the local Qdoba. That means there is likely a lot of automated scraping of images, or applications that load the photo for other purposes. I’m sure there is a blog out there explaining this, but I think I would rather enjoy the notion that my photography is just that awesome.

[Update: As I suspected, something else is going on. Gillis explains why the numbers are high: “Comes down to the manager of the Google maps business listing. Each “view” of a business counts as a image view if you’re one of the top 6 images for the business. Whereas if you’re ‘below the fold’ views only come if folks scroll down to your pic“. He goes on to say “Oh yeah, you’re the main image for the business. So everytime someone tries to navigate there manually, you get a view.” So basically, that manager owes me free queso and chips.]

Search Speak for Automaton

Alternate titles for this blog could be “Doodle Transition for Machina” perhaps! For at least a decade I have thought about just such an application and today I have Google Translate for Android. Load, aim, and it will process the text and translate on screen for you. Given the state of technology you would think it would be amazing by now, and it sometimes is.

The success largely depends on the language and that can also be seen in using translate.google.com, where some languages will translate fairly cleanly and others are very rough. One language I have to translate frequently is Chinese (simplified) and it is problematic for many things including company names and technical terms. With that in mind, I would expect it to translate with the same issues via the Google Translate app, and more specifically, do so consistently.

Since I am writing this, you know what’s coming…

This is the result of holding the phone up to a mail label from Japan. That’s all! Just moving the phone ever so slightly by tilting it or moving it half an inch closer / farther will make it change the translation. I think it finally got it a bit correct on that last one since the envelope didn’t contain anything living.

Hopefully the translation technology from Google will advance more quickly on Asian languages. Until then, I am just glad I didn’t get any “Sunrise Holy Poop” in that envelope.

20 Seconds to Comply; 17+ Years to Get It Wrong. From “Roboguard” to “Steve”!

Recently, news broke of a robot security guard lovingly nicknamed “Steve” who drowned in a fountain in the lobby of the building he was sworn to protect. The various Tweets and news articles jumped all over it, with articles anthropomorphizing Steve and headlines such as “Security guard robot ends it all by throwing itself into a watery grave“.

No surprise, but workers in the building set up a “touching” memorial for Steve on his charging plate, further anthropomorphizing him. It’s hard not to care for and feel sorry for poor Steve, who likely roamed an empty building with modern access controls and no real threat, other than a wayward janitor who lost his RFID badge.

While the Internet is enjoying and mourning poor Steve, everyone seems to forget about old ‘Roboguard’! Unfortunately, like most media outlets, even “New Scientist” doesn’t preserve links and evidence like a scientist would. These asshats don’t even clearly list a date on their articles (posted to ISN on Aug 31, 2000). Thanks to the Internet Archive, if we go back far enough we can see the article, but without pictures, likely because “New Scientist” didn’t want to preserve anything back then, just as they don’t today. I don’t think “science” means what they think it means.

Not sure if Asimov would be laughing or rolling in his grave.

It’s 2016, why is rotating a video such a pain?

How many times have you quickly shot a video on your phone and not rotated it for landscape? It happens too often and we see these videos all over social media. I sometimes forget to do it as well, or portrait is more in line with what I am shooting. So, I want to quickly rotate a video 90 degrees sometimes. Should be easy, right?

I’ve asked friends and social media before, but I asked again last night and got a lot of great input. My criteria were very simple, but I did not specify platform; I want to load an MP4 video, rotate it 90 degrees, and save it. I didn’t qualify it, but my expectations are that it would not lose quality, it would keep the original MP4 format, and that the process was “one-click” (or close). While I have plenty of history using Linux, going back to CLI graphics tools to do this is not ideal for me, but I considered those options.

  • @cl suggested Windows Movie Maker – It will rotate trivially, but saves your MP4 as WMV and the quality drops noticeably.
  • @TCMBC suggested mencoder – A command line utility, part of MPlayer. So it is not trivial (download, configure, compile, figure out CLI syntax), but it does rotate. Yet, the quality drops noticeably.
  • @viss suggested ffmpeg – A command line utility and graphics library, not so trivial. It did rotate, but the quality drops noticeably.
  • @viss suggested The ‘Rotate My Video‘ web site – It is a bit slow for file upload and conversion, but very easy to use. It played the video correctly in my browser, but when I saved the video the final copy was not rotated.
  • @DeviantOllam suggested (in DM) the Rotate Video FX app for Android – I thought the UX wasn’t intuitive for starters. It did rotate the video for immediate playback, but there was no apparent way to save the new video back to the device. Sharing brings up the usual Android options, but after uploading the video to Google Drive it was not rotated.
  • @elkentaro suggested Apple’s QuickTime Player – Even with his reference, which is outdated, there is no apparent rotation function. Even the ability to save a file is now ‘Pro’ only.
  • MegaManSec suggested ImageMagick ‘convert’ utility – this didn’t work and gave me a nice reminder of the old ‘terminal flash attacks’ from the early 90s.
  • @DeviantOllam suggested VirtualDub but warned me that some versions handle MP4 and some don’t. Thus, I didn’t try it.
  • @Grifter801 suggested VLC but qualified it “just for viewing”.
  • @mehebner suggested Open Shot Video Player but said it is Linux only, which isn’t convenient.
  • @cl suggested iMovie but it is Mac OS X only, which isn’t convenient.
  • @cl suggested Facebook but he isn’t sure you can save after. I am fairly sure you lose quality though.

The final recommendation, and the one that worked best for me, is Handbrake, suggested by @bmirvine. The upside is I had it installed (though an old version) and am familiar with it to a degree. The best part is that the converted video does not lose any quality. The downside is that figuring out the ‘Extra Option’ argument to rotate is a raging mess, as seen on this thread. I found that using “, --rotate=4” as the extra option worked for version 0.10.5.0 64-bit (latest as of this blog). The only other annoyance is that Windows won’t show a thumbnail of the newly saved video for some reason. [Update: with a newer version of the K-Lite codec pack, the thumbnails render fine.]
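For what it’s worth, the quality loss I saw with the command-line tools comes from re-encoding the video. A lossless alternative is to flip the container’s rotation flag and copy the streams untouched. A minimal sketch, assuming ffmpeg is installed; the file names (in.mp4 / out.mp4) are hypothetical, and the sample clip is generated just to make the example self-contained:

```shell
# Skip gracefully if ffmpeg is not installed (this whole example assumes it).
command -v ffmpeg >/dev/null 2>&1 || { echo "ffmpeg not found; skipping"; exit 0; }

# Generate a 1-second test clip so the example is self-contained.
ffmpeg -y -loglevel error -f lavfi -i testsrc=duration=1:size=320x240:rate=10 \
  -pix_fmt yuv420p in.mp4

# Set the display-rotation metadata and copy the streams untouched ("-c copy"),
# so nothing is re-encoded and no quality is lost. Players that honor the flag
# show the video rotated 90 degrees.
ffmpeg -y -loglevel error -i in.mp4 -c copy -metadata:s:v:0 rotate=90 out.mp4

# Note: ffmpeg 5.0+ prefers the "-display_rotation 90" input option instead.
ls -l out.mp4
```

The catch is that some players ignore the rotation flag entirely, which is why re-encoding with a transpose filter remains the more compatible (but lossy by default) route.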

There are my quick testing results. I hope it helps. I’d like to give a big round of thanks to all who contributed ideas late night. Reminds me that Twitter has some value and isn’t a cesspool of insipid political tripe. =)

Twitter’s crowd-sourced blocking idea good, implementation bad…

Yesterday I saw a few mentions of Twitter’s new method for “crowd-sourcing” user blocks. The idea is that one person may have blocked dozens of trolls, and you want to do the same without having to dig through a lot of Tweets. I read about how it was implemented, sighed, and moved on. Last night, someone I respect for his technical prowess over the years said it was “well done”, and I disagreed. He said I should post a blog with my idea, so your wish is granted.


The Twitter blog that outlines the implementation says some users “need more sophisticated tools.” Sophisticated, not convoluted and annoying to implement. There is a big difference. From the blog:

To export or import a list of blocked accounts, navigate to your blocked accounts settings on twitter.com. Click on the advanced options drop-down menu and select the action you want to take.

To download a list of your blocked accounts, select the export option and confirm the accounts you want to export.

The blog doesn’t even explain the next part for some reason, and I am curious why. Could it be because the process starts looking like more hassle than benefit? The next step is to host that block list somewhere, advertise that you did so, and have another user download it; then they go to twitter.com and import the list. Fast and easy, right? Of course not; that is one of the most convoluted implementations of this type of feature. Your average Twitter user, especially the huge percentage that only use it via mobile, simply will not go through this process (and cannot easily do it if they wanted to). Even sitting at my computer, having to take actions outside my Twitter client is annoying, and this has too many steps.

How about integrating the functionality instead? Every client has a way to look up a user or interact with them.

[screenshot: a Twitter client’s user context menu]

Just about anywhere on this context menu works nicely. “Add/Inherit @AlecMuffet’s blocks…” or “Block @AlecMuffet’s blocks…” or “Share @AlecMuffet’s blocks…”. One click and a confirmation box, and I could take any of his exported blocks and make them my own. That presents the smoother, more easily crowd-sourced model that is the intent here. If I have multiple accounts, it is three clicks as I choose which account (or all accounts) to add the blocks to. Compare that 2- or 3-click method with the one Twitter came up with. Designing the “User Experience” (UX) is an art, and not many companies do it well. That is often due to the disconnect between how the developers use a product or service and how their users or customers use it.

An Open Letter to @Twitter

Dear Twitter,

You run one of the largest and most visible social networks on the Internet, known even to millions who don’t have Internet access, thanks to media saturation and today’s lexicon. And you suck at it. Despite your recent IPO and suggestions that you have finally figured out how to make money off this beast, you still don’t seem to understand the first thing about the monster you created. Namely, how your users actually use the service. Your overall user experience (UX) is horrible. In no particular order, a few of the incidents and poor decisions that support my case:

  • The dreaded “Twitter unfollow bug”. This has plagued your platform for many years, and you still have not solved it. Worse, you default to sending us junk mail asking if we know people, trying to get us to follow more of them. These two things are at odds with each other.
  • When you finally made it easy for a user to download an archive of their tweets, you sent a URL that was broken. Only a fraction of your users would spot that you were HTML-encoding an ampersand (&) in one place, and that manually fixing it would allow the download. The fact you missed this shows that you essentially have no Quality Assurance (QA) testing in house.
  • Your emails are annoying. I specifically opted not to receive them in the past, only to have you revert my decision. The subjects are laughable: not only are they written with no thought to how they appear outside your world, they show you can’t figure out the purpose of a profile and make brain-dead assumptions about all users.
  • Subject: Do you know cyberwar on Twitter? <– errr…
    Subject: Twitter followers want to purchase from your business! <– hot damn. now I need a business plan…

  • Twitter on a Tab? No thanks. When I opted not to receive audible notifications, your software ignored that and kept dinging at me happily. No means no. Again, in your attempt to get more people using your service, you completely forget the basics of UX, and that all software should receive some QA time.
  • One of the most frustrating problems recently is your ever-changing decision on how to handle URLs in direct messages. One day, they aren’t allowed, without warning. The next day they work again. Days later, I can’t send the same URL to the same person because I have “already said that”, even when the accompanying text is different. News flash: some web sites do not have static content on their front page. If you need an example, check out this web page: twitter.com. If you can’t figure out that I am friends with someone via the mutual follow, or the fact that we have conversed via DM for months (or years in some cases), and that we may want to send URLs to each other, just get out of this business.
  • Your inability to fight spam on your service has moved beyond a running joke and into the “sad” category. You still cannot detect profiles that are obviously spam, profiles with every indication that a half-way intelligent algorithm could easily peg them. At least twice, you have identified Twitpic as a “hostile” service, calling it “malware” once, all the while allowing these spam profiles to send sketchy links.

I fully understand that the size of your network makes some of this challenging. But that is also on you, because you opted not to address these problems years ago when they were more manageable. Instead of fixing these recurring nuisances with solutions that scale, you let them languish until they became beasts that are far harder to vanquish. The list above is just what comes to mind quickly this morning.

In summary, you suck at social media. You don’t care about your users beyond figuring out a way to profit directly off of them. In case it has slipped your mind, you need us. We are your business foundation. By all means, figure out a way to profit off of us! Just do so while occasionally paying attention to your user base, please.

Sincerely,
@attritionorg

Android & Granular Permissions

For Android-based phone owners, you are no doubt passingly familiar with the permission system that governs applications and what they can do. Every time you install an application, the device asks if you accept the list of permissions it says are required to run. If you want the app, you must accept the permissions, no matter what they are.

In theory, users can simply decline an app that requires excessive permissions and find an alternative. After all, there are over 1 million apps available, right? In practice, many users won’t even read the permissions, while others casually dismiss them; they are clearly stated, after all, and surely any app in the Google Play store has to be legitimate!

The problem is that even the simplest and most legitimate apps may request a variety of permissions that are not needed to make the program run:

[Screenshots: install-time permission lists requested by two apps]

A classic example of an application requesting permissions it doesn’t need is the T-Mobile MyAccount app. The app is designed to give a user information about their T-Mobile cellular account, nothing else. This should require nothing more than permission to send and receive network data from T-Mobile’s servers. Instead, the app has traditionally requested excessive extra permissions. Worse, the latest version wants even more, including “System tools” that give the app greater control over the phone. As T-Mobile is my provider and I don’t want to call them to find out account information, I have to accept the overly broad permissions. There is no alternative application in this case.

The second example is Avast Mobile Security & Antivirus, which expects the keys to the kingdom. There is a certain irony in a security app wanting enough permissions to completely own your phone, the very threat it claims to protect you from.

The Alternative

The obvious solution to this problem is to make permissions granular, allowing a user to deny a specific permission while granting the others. If denying a particular permission causes the application to stop functioning, the user could simply re-enable it.
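Conceptually, the granular model is nothing more than a per-permission grant table the user can toggle. A toy Python sketch of that idea (hypothetical names and structure, not any real Android API):

```python
class GranularPermissions:
    """Toy model of per-app granular permissions: every permission the
    app requests starts out granted, and the user may revoke or restore
    each one independently. Hypothetical illustration, not Android code."""

    def __init__(self, requested):
        self.granted = {perm: True for perm in requested}

    def revoke(self, perm):
        """User denies one permission without affecting the others."""
        if perm in self.granted:
            self.granted[perm] = False

    def restore(self, perm):
        """User re-enables a permission if revoking it broke the app."""
        if perm in self.granted:
            self.granted[perm] = True

    def is_allowed(self, perm):
        # Permissions the app never requested are implicitly denied.
        return self.granted.get(perm, False)

# A user installs an app but revokes the excessive bits:
app = GranularPermissions(["INTERNET", "READ_CONTACTS", "SYSTEM_TOOLS"])
app.revoke("READ_CONTACTS")
app.revoke("SYSTEM_TOOLS")
# The app can still reach the network, but nothing else.
```

The point of the sketch is that nothing about the data model is hard; the all-or-nothing install prompt is a policy choice, not a technical constraint.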

How hard would this be for Google to implement in Android? Trivial; phones that have been rooted already allow it. Android users have requested this feature from Google via Ticket 3778. If you are an Android user and want to see it implemented, load the ticket and ‘star it’ (click the star in the upper left) to indicate your interest. If Google opts not to implement that one, there is a similar feature request (Ticket 6266) that would let an app declare a set of optional permissions it wants but does not require to function.

Until we get granular permissions, the concept of security in the context of applications will be a lost cause.

Exploding the Review

In the early 90’s, my time in the world of computer bulletin board systems (BBSes) ultimately led to my interest in phreaking. It started out reading t-files, moved into wardialing, and a few years later resulted in PBX, voice mail, and switch hacking. While I got a late start in the phreaking world, it meant a great deal of reading, including years of historical material on the phone system. Blue boxes were all but a thing of the past. Rumors floated around of a switch or two that still allowed you to seize a trunk, but the time and effort of building a box based on rumor wasn’t appealing, especially after some thirty years of it being the primary tool of the trade.

Red boxes still worked and were fun. Like the phreaks before me and my friends, we didn’t have many people to call, but it was fun using them. Something about that spoofed quarter signal, dee-dee-dee-dee-dee in rapid succession. From there it was the world of voice mail hacking; at first, just to see what the systems were about. That quickly morphed into trying to find which ones allowed outdial, putting me on the eternal hunt for diverters. At some point, enough information emerged about switches, and after a chance lesson from a veteran, a few of us learned the absolute basics of the 1AESS switch. Within a year or two, the Internet was taking hold of our minds. Mind you, this was when DNS was still largely controlled via your own HOSTS.TXT file, before BIND was prevalent.

I offer this history because it heavily influences this review, and my enjoyment of a book.

[Image: “Exploding the Phone” book cover]

Exploding the Phone gives a fairly comprehensive history of the origins of phreaking (phone system hacking). Written by Phil Lapsley, with a foreword by Steve Wozniak, the book was published earlier this year.

The book opens with the curious story of a classified advertisement in the Harvard Crimson student newspaper:

WANTED HARVARD MIT Fine Arts no. 13 notebook. (121 pages) & 40 page reply K.K. & C.R. plus 2,800; batter; m.f. El presidente no esta aqui asora, que lastima. B. David Box 11595 St. Louis, MO 63105.

This story is a launching point into the curious world of the early phone hackers, known first as “phone freaks” and later as “phone phreaks”. After a brief history of the creation of the phone system, Lapsley takes us through the early world of blue boxing. By sharing the stories of several early phreaks who independently discovered the 2600Hz signal, and how it could give them free calls and the ability to explore the phone system, he shows that an entire generation of what are now known as ‘hackers’ were in it for the love of the system, nothing more. Because nothing can be that pure, we also learn of bookies in the 60’s who used phreak-made blue boxes for profit, evading long distance bills on their numerous calls. Alongside the phreaks are the stories of the phone company security staff and law enforcement who began to investigate them.

We get detailed stories of blind phreaks like Josef Engressia (aka The Whistler), Bill Acker, and Rick Plath. Instead of relying on rumor and lore, Lapsley took extensive time not only researching them but, when possible, speaking with them. The stories continue with the phone company struggling to understand this new wave of people using the system in ways never intended. The reader enjoys some of the classic pranks pulled by phreaks as they routed their calls all over the world, even to the Vatican. The history lesson continues with the tale of John Draper, aka Captain Crunch, who did not actually discover that the cereal-box whistle blew the 2600Hz tone; he was told by phreaks who had figured it out years before. As with all hacker culture, the drama of snitching and trying to evade serious punishment enters the picture. The book wraps up with more recognizable names, Steve Jobs and Steve Wozniak, and the blue box sales that preceded their founding of Apple.

To anyone remotely interested in phreaking, or phone systems in general, I highly recommend this book. The author has done a wonderful job outlining the past through colorful stories, new details, and a great sense of what the culture was like.

T-Mobile’s Poor Implementation Works Against Amber Alerts

Just over a month ago, I received a pop-up alert on my Samsung Galaxy 3 (via T-Mobile) with a standard, and persistent, emergency broadcast noise…

Emergency alert
Longmont, CO AMBER Alert: LIC/245FLJ (CO) 2001 Blue Ford F350 Pickup truck
Type: AMBER Alert

The noise stopped briefly, then picked back up again until I tapped “OK”. This is a radical departure from the previous behavior of the product and service. Presumably the change came with the latest Android update T-Mobile pushed shortly before (May 13).

No warning about this change, no indication of where the alerts come from, no explanation of the criteria for receiving them (Longmont is almost 40 miles north of me, outside a metropolitan area of ~4.5 million), no indication of how often we might receive them, and a repeating noise that must be acknowledged (as opposed to SMS, which sounds or vibrates once). I’m not opposed to getting such warnings, but I should be able to opt in and control how they are displayed.

One hour later, I received the same alert. That is intrusive and annoying. When it happened, I thought, “if this shit happened at night, it would wake me up and force me to get up to ack the alert and turn off the phone,” and exactly that happened. Early Wednesday morning, at 5:20 AM, I received another. As I feared, it woke me, with the emergency sound and the phone vibrating on my desk.

[Screenshot: AMBER Alert pop-up]

Looking at the SMS options that control this is also interesting. I now have to receive “Presidential Alerts” and cannot opt out of them. There are also Imminent Extreme alerts, Imminent Serious alerts, and the Amber alerts I have now received twice. What are the others, and what differentiates them? When was the last time a Presidential broadcast was sent to everyone’s email address or home phone number? Absurd, you say? Then why is it suddenly OK to send them to every subscriber’s cell phone?

What bothers me the most is that the Amber alerts, and presumably the others, do not adhere to the rest of my SMS settings. When I get an SMS, it vibrates once, makes an audible noise of my choice once, and sits idle until I check the phone. Amber alerts come up with a different sound; one that repeats until I acknowledge it.

[Screenshot: Emergency alert settings in the SMS app]

This is ridiculous. I want to receive them, but on my terms. The current setup, and being woken at five in the morning, forced me to disable the Amber alerts. T-Mobile’s crappy technical implementation has worked contrary to their intentions, annoying customers into disabling the alerts. This works against the entire purpose of pushing them to cell phones.