How to make the MacBook Air SuperDrive work with any Mac

Note: for Mac OS X 10.11 El Capitan and later, please see this updated post instead.

(Edited/clarified Nov. 2012, Nov. 2013, Jan 2015 and June 2016)

The story is this: a while ago I replaced the built-in optical drive in my MacBook Pro 17" with an OptiBay (in the meantime, there are also alternatives), which allows connecting a second hard drive, or in my case, an SSD.

So I could continue using the SuperDrive (Apple's name for the CD/DVD read/write drive), the OptiBay came with an external USB enclosure, which worked fine but was ugly. I didn't want to carry that around, so I left it at home and bought a shiny new MacBook Air SuperDrive (renamed Apple USB SuperDrive in 2012) for the office.

It just didn't occur to me that this thing could possibly not work with any Mac, so I didn't even ask before buying. I knew that many third-party USB optical drives work fine, so I assumed the same would be true for the Apple drive. I had to learn otherwise: this drive only works with Macs which, in their original form, do not have an optical drive.

At this point, I started to search the net, finding hints, disassembling Mac OS X USB drivers, and finally patching code in a hex editor. That was the first, but ugly, solution to make the SuperDrive work, and it gave me the information to eventually find the second, much nicer solution presented below. For those interested in the nifty details of disassembling and hex code patching, the first approach is still documented here.

For actually making the SuperDrive work in a clean and easy way, just read on (but note: while this has proven to be a quite safe method, you'll still be doing it entirely at your own risk! Using sudo and editing system files incorrectly can damage things severely!).

Apparently, Apple engineers needed to test the SuperDrive with non-MacBook-Air computers themselves, so the driver already has a built-in option to work on officially unsupported machines! All you need to do is enable that option, as follows:

The driver recognizes a boot parameter named "mbasd" (MacBook Air SuperDrive), which sets a flag in the driver that both overrides the check for the MBA and tweaks something related to USB power management (the SuperDrive probably needs more power than regular USB allows). So just editing /Library/Preferences/SystemConfiguration/com.apple.Boot.plist and inserting "mbasd=1" into the "Kernel Flags" does the trick:

[For OS X 10.11 El Capitan onwards please see here for updated instructions instead!]

  1. open a terminal
  2. type the following two commands (two separate lines, each starting with "sudo")

    sudo plutil -convert xml1 /Library/Preferences/SystemConfiguration/com.apple.Boot.plist

    sudo pico /Library/Preferences/SystemConfiguration/com.apple.Boot.plist

  3. Insert mbasd=1 into the <string></string> value below the <key>Kernel Flags</key> (if, and only if, there is already something between <string> and </string>, use a single space to separate mbasd=1 from what's already there; otherwise, avoid any extra spaces!). The file will then look like:

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
    <plist version="1.0">
    <dict>
    <key>Kernel Flags</key>
    <string>mbasd=1</string>
    </dict>
    </plist>

    [Important update for users of Trim Enabler (thanks boabmatic!): Since Yosemite, installing Trim Enabler puts another flag, "kext-dev-mode=1", into com.apple.Boot.plist and, unfortunately, also converts the .plist to binary format, which shows up as mostly garbage in many text editors (that's what the "plutil" line in step 2 above takes care of: it converts the file back to XML so you can edit it). Note that the system will no longer boot when Trim Enabler is installed but "kext-dev-mode=1" is missing! So to apply "mbasd=1" with Trim Enabler active, you need to combine both flags, such that the line reads
    <string>kext-dev-mode=1 mbasd=1</string>. For details on Yosemite and Trim Enabler, see here]
    [Update: As CyborgSam pointed out in the comments, the file might not exist at all on some Macs. In that case, the pico editor window will initially be empty - if so, just copy and paste the entire XML block from above.]

  4. Save (press Ctrl-X, answer yes to save by pressing Y, press enter to confirm the file name).
  5. Restart your machine. That's it!

I tested this [updated 2013-11-03] on Lion 10.7.2 up to 10.7.4, Mountain Lion up to 10.8.4, and Mavericks 10.9 so far, but I expect it to work on all Mac OS X versions released after the initial MacBook Air SuperDrive, which probably means 10.5.3 and later, and it is likely to work with future versions of OS X. Just let me know your experience in the comments!

BTW: the boot options plist and how it works are described in the Darwin man pages.

Secure Cloud Storage with deduplication?

Last week, Dropbox's little problem made me realize how much I was trusting them to do things right, just because they once (admittedly, long ago) wrote they'd be using encryption such that they could not access my data even if they wanted to. The authentication problem showed that today's Dropbox reality couldn't be farther from that - apparently the keys are lying around and are in no way tied to user passwords.

So, following my own advice to take care of my data, I tried to better understand how the other services offering functionality similar to Dropbox's actually work.

Reading through their FAQs, there are a lot of impressive-sounding crypto acronyms, but usually no explanation of the logic of how things work - simple questions like what data is encrypted with what key, in what place, and which bits are stored where remain unanswered.

I'm not a professional in security, let alone cryptography. But I can follow a logical chain of arguments, and I think providers of allegedly secure storage should be able to explain their stuff in a way that can be followed by logical thinking.

Failing to find these explanations, I tried the reverse. Below, I use that logic to find out whether and how adding sharing and deduplication to the obvious basic setup (private storage for one user) will or will not compromise security. Please correct me if I'm wrong!

Without any sharing or upload optimizing features, the solution is simple: My (local!) cloud storage client encrypts everything with a long enough password I chose and nobody else knows, before uploading. Result: nobody but myself can decrypt the data [1].

That's basically how encrypted disk images, password wallet and keychain apps etc. work. More precisely, many of them use two-stage encryption: they generate a large random key to encrypt the data, and then use my password to encrypt that random key. The advantage is performance, as the workhorse algorithm (the one that encrypts the actual data) can be one that needs a large key to be reasonably safe, but is more efficient. The algorithm that encrypts the large random key with my (usually shorter) password doesn't need to be fast, and thus can be very elaborate to gain better security from a shorter secret.
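
To make this two-stage scheme concrete, here is a minimal sketch in Python (using the pyca/cryptography library; all function names and parameter choices are mine, for illustration only - this is not any provider's actual code):

    # Stage 1: encrypt the data with a large random "content key".
    # Stage 2: wrap that content key with a key derived from my password.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM
    from cryptography.hazmat.primitives.kdf.scrypt import Scrypt

    def encrypt_for_upload(data: bytes, password: bytes) -> dict:
        # fast workhorse cipher with a 256-bit random key
        content_key = AESGCM.generate_key(bit_length=256)
        nonce = os.urandom(12)
        ciphertext = AESGCM(content_key).encrypt(nonce, data, None)

        # a deliberately slow KDF turns the short password into a wrapping key
        salt = os.urandom(16)
        kek = Scrypt(salt=salt, length=32, n=2**17, r=8, p=1).derive(password)
        wrap_nonce = os.urandom(12)
        wrapped_key = AESGCM(kek).encrypt(wrap_nonce, content_key, None)

        # everything returned here may be stored in the cloud; without the
        # password, neither the content key nor the data can be recovered
        return dict(salt=salt, wrap_nonce=wrap_nonce, wrapped_key=wrapped_key,
                    nonce=nonce, ciphertext=ciphertext)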

The next step is sharing. How can I make some of my externally stored files accessible to someone else, while still keeping them entirely secret from the cloud storage provider who hosts the data?

With two-stage encryption, there's a way: I can share the large random key of the files being shared (of course, each stored file needs to have its own random key). The only thing I need to make sure is that nobody but the intended receiver obtains that key on the way. This is easier said than done: if the storage provider manages the transfer, for example by offering a convenient button to share a file with user xyz, I inevitably must trust the service about the identity of xyz. The best they can provide is an exchange that does not require the key to be present in decrypted form in the provider's system at any time. That may be an acceptable compromise in many cases, but I need to be aware that I need a proof of identity established outside the storage provider's reach to really rule out that they can access my shared files. For instance, I could send the key in a GPG encrypted email.
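
In terms of the sketch above, sharing would mean wrapping the file's content key once more, this time with the recipient's public key (GPG does essentially this). A hypothetical continuation, assuming recipient_public_key is an RSA public key I obtained through a channel I trust:

    # Wrap the per-file content key for one recipient; the provider only
    # ever sees the wrapped (encrypted) key, never the key itself.
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding

    def wrap_key_for_recipient(content_key: bytes, recipient_public_key) -> bytes:
        return recipient_public_key.encrypt(
            content_key,
            padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                         algorithm=hashes.SHA256(), label=None))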

So the bottom line for sharing is: correctly implemented, it does not affect the security of the non-shared files at all, and if the key is exchanged securely and directly between the sharing parties, it even works without giving the provider any way to access the shared data. With the one-button sharing convenience we're used to, the weak point is that the provider (or a malicious hacker inside the provider's system) could technically forge identities and receive access to shared data in place of the person I actually wanted to share with. Not likely, but possible.

The third step is deduplication. This is important for the providers, as it saves them a lot of storage, and it is a convenience for users because if they store files that other users already have, the upload time is near zero (because it's no upload at all, the data is already there).

Unfortunately, the explanations around deduplication get really foggy. I haven't found a logically complete explanation from any of the cloud storage providers so far. I see two things that must be managed:

First, for deduplication to work, the service provider needs to be able to detect duplicates. If the data itself is encrypted with user-specific keys, the same file from different users looks completely different at the provider's end. So neither the data itself nor a hash over that data can be used to detect duplicates. What some providers seem to do is calculate a hash over the unencrypted file. But I don't really understand why many heated forum discussions focus on whether that's okay or not, because IMHO the elephant in the room is the second problem:

If files are indeed stored encrypted with a secret only the user has access to, deduplication is simply not possible, even if detection of duplicates works by sharing hashes. The only one who can decrypt a given file is the one who has the key. The second user who tries to upload a given file does not have (and must not have a way to obtain!) the first user's key for that file, by definition. So even if encrypted data of that file is already there, it does not help the second user. Without the key, the data stored by the first user is just garbage to him.

How can this be solved? IMHO all attempts based on doing some implicit sharing of the key when duplicates are detected are fundamentally flawed, because we inevitably run into the proof of identity problem as shown above with user-initiated sharing, which becomes totally unacceptable here as it would affect all files, not only explicitly shared ones.

I see only one logical way for deduplication without giving the provider a way to read your files: By shifting from proof-of-identity for users to proof-of-knowledge for files. If I can present a proof that I had a certain file in my possession, I should be able to download and decrypt it from the cloud. Even if it was not me, but someone else who actually uploaded it in the first place. Still everyone else, including the storage provider itself, must not be able to decrypt that file.

I now imagine the following: instead of encrypting the files with a large random key (see above), my cloud storage client would calculate a hash over my file and use that hash as the key to encrypt the file, then store the result in the cloud. So the only condition to get that file back would be having had access to the original unencrypted file once before. I myself would qualify, of course, but anyone (totally unrelated to me) who has ever seen the same file could calculate the same hash and would qualify as well. However, for whom the file was a secret, it remains a secret [2].
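
This scheme is known as convergent encryption. A minimal sketch, again with names of my own choosing:

    # Convergent encryption: the key is derived from the file content
    # itself, so identical files produce identical ciphertexts, and only
    # someone who has seen the file can derive the key.
    import hashlib
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def convergent_encrypt(data: bytes):
        key = hashlib.sha256(data).digest()        # content hash = key
        dedup_id = hashlib.sha256(key).digest()    # what the provider compares
        # a fixed nonce is acceptable here only because key and plaintext are
        # deterministically linked: same data -> same key -> same ciphertext
        ciphertext = AESGCM(key).encrypt(b"\x00" * 12, data, None)
        return dedup_id, ciphertext

Two different users who encrypt the same file arrive at the same dedup_id and the same ciphertext, so the provider can store it once - yet cannot decrypt it without having seen the plaintext itself.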

I wonder if that's what cloud storage providers claiming to do global deduplication actually do. But even more, I wonder why so few speak plainly. It need not be on the front page of their offerings, but a logically conclusive explanation of what happens inside their service is something that should be in every FAQ, in one piece, and not just bits spread across forum threads mixed with a lot of guesses and wrong information!

 

[1] Of course, this is possible only within the limits of the crypto algorithms used. These are widely published and reviewed, as are their actual implementations. They are not absolutely safe, and implementation errors can make them additionally vulnerable. But we can safely trust that there are a lot of real cryptography experts' eyes on these basic building blocks of data security. So my working assumption is that the encryption methods used are good enough per se. The interesting questions are what trade-offs are made to implement convenience features.

[2] Note that all this does not help with another fundamental weakness of deduplication: if someone wants to find out who has a copy of a known file, and can gain access to the storage provider's data, he'll get that answer out of the same information which is needed for deduplication. If that's a concern, there's IMHO no way except not doing deduplication at all.

 

The only one who really cares about your data is you!

Once more, yesterday's Dropbox authentication bug shows a fundamental weakness of centralized services. Dropbox is just a high profile example, but the underlying problem is that of unneeded centralisation.

Every teenager who starts using Facebook is told how important it is to choose wisely what to put online and what not, and to always be aware that nothing published on the internet can ever be completely deleted.

However, the way popular "cloud" services are built today unfortunately just ignores this: they choose a centralized implementation. Which means uploading everything first, and then trying (and sometimes sadly failing, like Dropbox yesterday) to protect that data from unauthorized access.

Why? Because it is easier to implement. Yes, distributed systems are much harder to design and implement. But choosing a centralized approach inevitably generates single points of failure. I really don't think we can afford that risk much longer.

It's not even only a technical problem. It's a mindset of delegating too much responsibility, which is fatal. Relying on centralized storage to be "just secure" is delegating responsibility to others - responsibility that those others are unlikely to live up to.

The argument often goes: it's too hard for a smaller company to run its own servers and keep them secure, so better leave that to the big cloud providers, who are the experts and really care. That's simply not true. Like everyone else, they care about their profit. If they lose or expose your data, they care about the PR debacle this might be for them, but not about the data itself. The only one who really cares about what happens to your data - is you.

Even assuming the service provider was able to keep your data safe, there's another problem. As we have heard again in the discussion about Dropbox's TOS, there are legal constraints on what a cloud service may and may not do. For instance, they may not store your data encrypted such that "law enforcement" cannot access it under certain conditions. Which means that Dropbox can't offer encryption based on really private keys (only you have the key to your data, they don't) even if they wanted to.

What they could do, and IMHO must do in the long term, is offer a federated system: giving you the choice to host the majority of your data in a place where you are legally allowed to use strong encryption with your entirely private keys, such as your own server. Only for sharing with others, and only with the data actually being shared, would smaller entities need to enter a bigger federation (which might be organized by a globally operating company).

That's how internet mail has always worked - no mail sent among members of an organisation ever needs to leave that organisation's mail servers. The same goes for Jabber/XMPP. This should become true for Dropbox, Facebook, Twitter etc. as well. They should really start structuring their clouds and give you the option to keep critical data to yourself, without making that a decision against using the service at all.

Unfortunately, one of the few big projects that expressly had federation on the agenda, Google Wave, has almost (but not entirely) disappeared after a big hype in 2009. Sadly, it was most probably exactly its focus on federation and scalability, rather than on polishing the web interface, that made it a failure in the eyes of the public.

Maybe we should really do away with that fuzzy term "the cloud" and start talking about small and big clouds, more and less private ones, and how and if they should or should not interact.

Still, one of the currently most opaque clouds is ahead of us - Apple's iCloud. Nothing at all has been said in public about how security will work in iCloud. And from what was presented, it seems it will have no cross-account sharing features at all for now.

The only thing that seems clear is that all of our data will be stored in that huge datacenter in North Carolina, so I guess that using iCloud when it launches in a few months will demand total trust in Apple to get it right (and as said above - this is a responsibility nobody can really take).

On the other hand, Apple could be foresighted enough to realize the need for federation in a later step, for example by allowing future Time Capsules to act as in-house cloud servers. After all, and unlike other players, Apple profits from selling hardware to us. And to base a speculation on my earlier speculation (still neither confirmed nor disproved), iCloud might be technically ready for that.

But whatever inherent motivation the big players may or may not have to improve the situation - it's up to us to realize there's no easy way around taking care of our data ourselves, and to ask for standards, infrastructure and services which make doing so possible.

iCloud sync speculation

Here's my last-minute technical speculation about what iCloud will be in terms of sync :-)

It'll be sync-enabled WebDAV on a large scale.

I spent the last 10 years working on synchronisation, in particular SyncML. SyncML is an open standard for synchronisation, created in 2000 by the then big players in the mobile phone industry together with some well known software companies.

SyncML remained a niche from a user perspective, despite the fact that almost every featurephone built in the last 9 years has SyncML built in. And despite the fact that Steve Jobs himself pointed out Apple's support for SyncML when he introduced iSync in July 2002 at Macworld NY.

As we have learnt by now, iSync (and with it, SyncML for the Apple universe) will be history with Lion. And featurephones are pretty much history as well, superseded by smartphones.

Unlike featurephones, smartphones never had SyncML built in (a fact that allowed me to earn my living by writing SyncML clients for these smartphone platforms...). The reason probably was that the vendors of the dominant smartphone operating systems, Palm and later Microsoft, already had their own proprietary sync technologies in place (HotSync, ActiveSync). Only Symbian was committed to SyncML, but forced by the market share of ActiveSync-enabled enterprise software (Exchange), in 2005 they also licensed ActiveSync from Microsoft.

So did Apple for the iPhone. So did Google for Google calendar mobile sync. Third party vendors of collaboration server solutions did the same.

For a while, it seemed that the sync battle was won by ActiveSync. And by other proprietary protocols for other kinds of syncing, like Dropbox for files, Google for docs, and a myriad of small "cloud"-enabled apps which all do their own homebrew syncing.

Not a pleasant sight for someone like me who believes that seamless and standards based sync is as basic for mobile computing as IP connectivity was for the internet.

However, in parallel another standard for interconnecting calendars (not exactly syncing, see below) grew - CalDAV. CalDAV is an extension of WebDAV which adds calendar-specific queries and other functionality. And WebDAV is a mature and widely deployed extension of HTTP that allows clients not only to read from a web server, but also to write to it. Apple has been a strong supporter of WebDAV for many years (iDisk is WebDAV storage) and is also a driving force behind CalDAV. Mac OS X 10.5 Leopard and iOS 3.0 support CalDAV. More recently, Apple implemented CardDAV in iOS 4.0 and proposed it as an internet draft to the IETF, to support contact information the same way CalDAV does calendar entries.

This is all long and well known, and CalDAV is already widely used by many calendaring solutions.

There's one not-so-well-known puzzle piece, however. I stumbled upon it a few months ago because I am generally interested in sync-related stuff, but only now did I realize it might be the Rosetta Stone for making iCloud. I did some extra googling today and found some clues that fit too nicely to be pure coincidence.

The puzzle piece is this: an IETF draft called "Collection synchronisation for WebDAV" [Update - by March 2012 it has become RFC 6578]. The problem with WebDAV (and CalDAV, CardDAV) is that it was designed as an access method, not a sync method. While it is well possible to sync data via WebDAV, it does not scale well with large sync sets, because a client first needs to browse through all the information available just to detect the changes. With a large sync set of possibly many hundred thousand files (think of your home folder), that's simply not workable. The proposed extension fixes exactly this problem and makes WebDAV and its derivatives ready for efficient sync of arbitrarily huge sync sets, by making the server itself keep track of changes and report them to interested clients.
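
To illustrate how this changes the picture: instead of crawling the whole collection, the client presents the sync token it received last time and gets back only what changed since then. A hypothetical sketch of such a request in Python (server URL and token value are made up; the XML body follows the sync-collection REPORT defined in the draft / RFC 6578):

    import requests

    body = """<?xml version="1.0" encoding="utf-8"?>
    <D:sync-collection xmlns:D="DAV:">
      <D:sync-token>http://example.com/ns/sync/1234</D:sync-token>
      <D:sync-level>1</D:sync-level>
      <D:prop><D:getetag/></D:prop>
    </D:sync-collection>"""

    response = requests.request(
        "REPORT", "https://cloud.example.com/files/",
        data=body,
        headers={"Content-Type": "application/xml; charset=utf-8",
                 "Depth": "0"})
    # the multistatus response lists only changed and removed members,
    # plus a fresh sync token to present on the next sync
    print(response.status_code)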

With this, a WebDAV based sync infrastructure reaching from small items like contacts and calendar entries to large documents and files (hello dropbox!) is perfectly feasible. Now why should iCloud be that infrastructure? That's where I started googling today for this blog entry.

I knew that the "Collection synchronisation for WebDAV" proposal was coming from Apple. But before I didn't pay attention to who was the author. I did now - it's Cyrus Daboo, who spent a lot of time writing Mulberry, an email client dedicated to make best possible use of the IMAP standard. Although usually seen as just another email protocol, IMAP is very much about synchronisation at a very complex level (because emails can be huge, and partial sync of items, as well as moving them wildly around within folder hierarchies must be handled efficiently), so Cyrus is certainly a true sync expert, with a lot of real-world experience. He joined Apple in 2006. Google reveals that he worked on the Calendar Server (part of Mac OS X server supporting CalDAV and CardDAV), and also contributed to other WebDAV related enhancements. It doesn't seem likely to me they hired him (or he would let them hire him) just for polishing the calendar server a bit...

Related to the imminent release of iCloud, I found a few events interesting: MobileMe users had to migrate to a new CalDAV-based Calendar by May 11th, 2011. Just a month earlier, Cyrus issued the "WebDAV sync informal last call" before submitting "Collection synchronisation for WebDAV" to the IETF, noting that there are "already several client and server implementations of this draft now". And did you notice how the iOS iWork apps just got a kind of document manager with folders? After becoming WebDAV-aware only a few months ago?

So what I guess we'll see today:

  • a framework in both iOS 5 and Mac OS X Lion which nicely wraps WebDAV + "Collection synchronisation for WebDAV" in a way that makes permanent incremental syncing for all sorts of data a basic service of the OS that every app can make use of.
  • a cloud based WebDAV+Sync storage - the iCloud
  • a home based WebDAV+Sync storage - new TimeCapsules and maybe AirPorts
  • and of course a lot of Apple magic around all this. Just as Back to My Mac and FaceTime are clever mash-ups of many existing internet standards that make them work "magically", there will certainly be more to iCloud than just a WebDAV login (let alone all the digital media locker functionality many expect).

In about 5 hours we'll hopefully know...

How to Flattr charities?

Returning home from vacation, I found my (physical) mailbox filled mostly with requests from charities for money. Most of them I have donated to in the past and am willing to donate to again. But digging through that heap of dead-tree material made me angry, and made me realize more clearly than ever that there must be a better way for them to collect funds!

The motivation to donate comes from inspiring moments of reading, of being open to good thoughts and intents. These moments are mostly destroyed by the feeling of being forced to deal with heaps of letters, each trying to address me in a friendly way, but in their sheer amount being a nuisance of too much information at the wrong time and in the wrong medium.

What immediately came to my mind then was Flattr.

That's exactly the way I'd be more than happy (AND efficient!) to pay charities. I would like to answer each and every charity that sends me paper mail asking for money (or tries to urge me into a regular payment via those professionally enthusiastic young hired fundraising agents on the streets) with a suggestion to present their activities online (as many already do) and use Flattr to collect funds.

To find out if that could work, I read through the Flattr docs and was glad to find that they already support charities with a charity account status that has no fees. And subscriptions also fit nicely with the idea of supporting something on an ongoing basis.

I got stuck however in one regard: At least for me, and I assume for many others as well, donating to charities is amount-wise an entirely different category than donating to interesting web "things" like blog entries or podcasts.

For both, a monthly budget and the attention-based distribution thereof, as Flattr provides, is perfect.

But the donation chunk size for charity projects is significantly different from that for blogs. I want to give more to the former per click (but not a fixed amount, as the donation feature would already allow).

Presently, the only workaround I see would be having two Flattr accounts with two budgets. But that seems contrary to the entire Flattr idea of simply being logged in all the time to allow quick single-click donations.

So I tried to imagine what extension of Flattr functionality could help. Basically, it boils down to an option to extend the donation beyond a single flattr, like subscriptions already provide on the time axis.

I'd imagine a flattr button that converts to "flattr more" instead of "subscribe". Clicking it would open a window like it does now, offering subscription (repeated donation) but additionally an option to donate a larger share of the budget, or a share of another budget.

The former (larger share) would be simple: just offer a multiplier, so I can flattr a thing 5x, 10x, 20x instead of just 1x.

The latter (different budget) is certainly more complicated. Users would need to have the option to add more budgets for different purposes to their accounts, which is probably confusing for many. But it would help to keep separate topics apart.

These are just two of my ideas how it could work.

The point however is: I think the Flattr concept could revolutionize donations in many more areas (traditional charities is just one of them), but for that it needs to step beyond the current "all things are equal" mode, in one way or another.

Why doesn't this exist already? [August 2010: now it does]

FakePad

Why hasn't this kind of device been around for a long time already?

Since I have a MacBook Pro which allows using multitouch gestures on the trackpad (especially scroll and zoom), I miss these a lot when I work on a desktop Mac.

The "photo" above is of course a very amateurish work of Photoshop editing my external keyboard and and the MacBook's trackpad together.

However, should I get access to a broken MacBook body with the trackpad still functional before Apple or someone else makes a real product like this, I'd probably try to create one myself.

As the internal trackpad is a regular USB device (just connected internally), all I'd have to do would be to connect it to a normal USB cable, cut the trackpad plus the needed frame material from the MB(P) body, and put everything together in a decent housing, probably made from a thick sheet of aluminium. I guess I'll be better at doing that in the workshop than in Photoshop…

Donations of broken MBP cover plates are welcome - pointers to external trackpad products that might already exist as well, of course! But remember, it's the multi-touch I look for, not just an external trackpad.

[Update: just saw this product - although it is PC-only and looks ugly to me, it is a step in the right direction. Still, I guess Apple's rumoured multi-touch mouse is more likely to provide what I am looking for]

[Update2: Indeed, it looks like Apple just released (kind of) what I was looking for: The Magic Mouse.]

[Update3 - August 2010: The Magic Mouse was a first step, but apparently they have really listened (to me? ;-)) and created the Magic Trackpad. I already got one, and yes, it's exactly what I wanted]

The internet hasn't reached the mobile yet

The latest example of Apple's walled garden policy around the iPhone - they apparently pulled the Google Voice app - makes me wonder whether the internet has really reached the mobile space.

Of course, technically it has, a long time ago. I could browse the internet with my Nokia 9210 communicator in 2001. But back then (and all the years full of PocketPC, Palm and Symbian smartphones, until the iPhone came out) it was a truly unpleasant experience, and it was easy to understand why not many used that painful kind of mobile internet.

Then the iPhone seemingly changed that. For the first time, a mobile device had a browser that was actually easy and even fun to use. The usability and smoothness of the UI, however, was only one half of the story; the other half was that Apple was able to force-feed data plans to their customers that removed the fear of paying unpredictably huge amounts of money for mobile data usage. At the same time, they convinced the carriers to offer far more attractive data plans than ever before - still expensive, but at least affordable for many.

So for a while, it seemed the mobile internet was reality. But it is not. What we have is a marketing game played by the carriers, selling "unlimited mobile internet" access but essentially refusing to provide the whole thing. Web, email, chat - yes; large downloads, streaming video, tethering - maybe, depending on your carrier's mood; Skype and VoIP - no.

And most importantly: true mobility - definitely no. If I leave the country, I'll bankrupt myself within a few minutes of web surfing.

Today's "mobile internet" is a walled garden, and relies on devices that help enforcing the wall. The iPhone is the most prominent example today, but Android or webOS aren't any different in that respect.

Apple of course uses the control they have over the platform for other goals from their own agenda, but I suspect the absolutely predominant force behind all the lockdown efforts is the carriers, who demanded a walled-garden mobile internet from day one of the iPhone age, and still do.

The other arguments brought forward to explain why locked-down devices are good remind me a lot of the long-lost battles of the DRM age. Remember Microsoft's "Longhorn"? The futile hope for security by technical means alone? While it won't ever work, it's still good for PR, as Apple shows with the "hardware encryption" in the 3GS (which apparently does not provide much real security in its current state).

I'm optimistic about the future, however. I think the walled-garden mobile internet is nothing that can be kept up for long. Rip-off data roaming prices will disappear, and "unlimited" data plans with severe limitations will be replaced by plans that essentially charge for the bandwidth you consume, but at a reasonable price for everyday use.

And once we have real mobile internet, walled-garden devices will not make much sense any more - especially no commercial sense for those that provide them. That's when the iPhone and other mobile platforms will open up.

That might take longer than I hope now, but I think it is inevitable.