How to make the MacBook Air SuperDrive work with any Mac (El Capitan onwards)

This is an updated version of an earlier post, adapted for Mac OS X 10.11 El Capitan and later. It describes how to apply a simple trick to make the MacBook Air SuperDrive work with any Mac. For earlier Mac OSes (and more context), please refer to the original post.

A long time ago, I bought an external Apple USB SuperDrive for my MacBook Pro 17″ late 2010, in which I had replaced the built-in SuperDrive with an SSD to speed up the machine.

Only to find out, like many other people, that Apple prevents the SuperDrive from being used with Mac models that originally came with a built-in SuperDrive. Nowadays, Apple no longer sells those models, but many of these older Macs are still very good machines, especially when upgraded with an SSD like my MBP!

With some investigation and hacking back in 2011, I found out that Apple engineers apparently needed to test the SuperDrive with officially unsupported Macs themselves, so the driver already has a built-in option to work on any machine!

[Note: there is also a simpler method, as described for example here, which consists of just typing sudo nvram boot-args="mbasd=1" in a terminal – done. I had that method in my post for a long time, but removed it recently because feedback was very mixed. While it seems to work fine in many cases, some users ended up with a Mac that would no longer boot afterwards. Maybe that was due to other important settings already present in boot-args, so if you want to give it a try, it might be a good idea to do a check first; see the last post on this page.]
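If you do want to try the nvram route, the check-and-merge can be sketched like this. The `current` value below is only a stand-in for what `nvram boot-args | cut -f2-` would print on your machine; the commented `sudo nvram` line is what you would run on a real Mac, at your own risk, after verifying the merged value:

```shell
# Sketch only: merge mbasd=1 into the existing boot-args instead of overwriting them.
# Stand-in for the real value; on a Mac you would use:  current="$(nvram boot-args | cut -f2-)"
current="kext-dev-mode=1"
# Append with a separating space only if something is already there
new="${current:+$current }mbasd=1"
echo "$new"
# Then, on a real Mac:  sudo nvram boot-args="$new"
```

This way, any flags that were already set (like the Trim Enabler flag mentioned further down) survive the change.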

This option can be activated on El Capitan (10.11) and later with the procedure below. Basically, it’s a clean and safe trick that has proven to work fine for many users since 2011. Still, you’ll be doing this entirely at your own risk! Using the command line in recovery mode and editing system files incorrectly can damage things severely! Make sure you have a backup before starting!

  1. Boot your Mac into recovery mode: select “Restart” from the Apple menu and then hold down the left Cmd key and the “R” key until the startup progress bar appears. (Thanks to @brewsparks for the idea to use recovery mode!)
  2. After the system has started (might take longer than a normal start), do not choose any of the options offered.
  3. Instead, choose “Terminal” from the “Utilities” menu.
  4. In the text window which opens, type the following (followed by the return key):
    ls -la /Volumes
  5. You’ll get output similar to the following, with MyStartDisk being the name of your Mac’s startup disk:
    drwxrwxrwt@  7 root  admin   238  4 Jul 21:02 .
    drwxr-xr-x  41 root  wheel  1462  4 Jul 21:04 ..
    lrwxr-xr-x   1 root  admin     1 29 Jun 19:16 MyStartDisk
    lrwxr-xr-x   1 root  admin     1 29 Jun 19:16 Recovery HD -> /
  6. Then, type the following, but replace the MyStartDisk part with the actual name of your startup disk as listed by the previous command (you can copy and paste the name to make sure you don’t make a typing mistake, but don’t forget the double quotes!):
    D="/Volumes/MyStartDisk"
  7. Type the following command:
    plutil -convert xml1 $D/Library/Preferences/SystemConfiguration/com.apple.Boot.plist
  8. and then
    $D/usr/bin/pico $D/Library/Preferences/SystemConfiguration/com.apple.Boot.plist
  9. Now you are in the “pico” editor. You cannot use the mouse; use the arrow keys to move the cursor.
  10. Insert mbasd=1 in the <string></string> value below the <key>Kernel Flags</key> (If and only if there is already something written between <string> and </string>, then use a space to separate the mbasd=1 from what’s already there. Otherwise, avoid any extra spaces!). The file will then look like:
    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
    <plist version="1.0">
    <dict>
    <key>Kernel Flags</key>
    <string>mbasd=1</string>
    </dict>
    </plist>

    [Important note for users of Trim Enabler: make sure you have the latest version of Trim Enabler (see here) before you edit the file! Otherwise, your Mac might not start up afterwards].

  11. Save (press Ctrl-X, answer yes to save by pressing Y, press the return key to confirm the file name).
  12. Restart your machine. That’s it! (When you connect the SuperDrive now, you will no longer get the “device not supported” notification – because it is now supported.)
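Before rebooting, it is easy to double-check that the flag really ended up in the file. The sketch below runs the check on a throwaway demo copy; on your machine you would point `plist` at $D/Library/Preferences/SystemConfiguration/com.apple.Boot.plist instead:

```shell
# Demo on a temporary file standing in for com.apple.Boot.plist
plist="$(mktemp)"
cat > "$plist" <<'EOF'
<key>Kernel Flags</key>
<string>mbasd=1</string>
EOF
if grep -q 'mbasd=1' "$plist"; then status="present"; else status="missing"; fi
echo "flag $status"
rm -f "$plist"
```

If grep reports the flag missing on the real file, go back to step 8 and check the edit before restarting.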

I tested the above on El Capitan 10.11, but I expect it to work for macOS Sierra 10.12 and beyond. The trick has worked from 10.5.3 onwards for more than 5 years, so unless Apple suddenly wants to kill that feature, it will probably stay in future OSes.

04. July 2016 by luz
Categories: Uncategorized | 10 comments

Fixing OS X 10.9 Mavericks Migration from an external Volume

Today, I upgraded my sister’s MacBook with an SSD, something I’ve done many times with many different Macs in the past. I used the same procedure as always: first replace the HD with the SSD, then install a fresh copy of the latest OS X on the new SSD, then connect the old HD via an external USB housing and use Migration Assistant to copy the applications and user accounts.

Only, this time, Migration Assistant found only 157k of data (on the 120G disk!) to migrate!

Apparently this is a known issue – there are many reports of Migration Assistant not seeing connected external source disks properly.

However, I did not find a convincing recipe for how to work around this problem.

What I did find was an explanation of what is different in Mavericks’ Migration Assistant: it logs out of the user account it was started from before actually starting its task. But this causes all mounted disks (and disk images) to be unmounted. For some reason, it fails to remount them properly, so even though you can choose the name of an external disk as a source for migration, the disk is not really online and the assistant only sees those ominous 157k of data.

So the problem was to find a way to remount the disk while Migration Assistant was already running. Since it takes over the machine in full screen, you can’t start any tools or switch users at this point.

But you can log in from another machine via ssh, and you can mount disks from the command line easily, thanks to the powers of diskutil.

So the solution was as follows:

  1. Enable ssh login on the target machine: set the checkbox in “System Preferences” -> “Sharing” -> “Remote Login”. To the right, there will be a green dot and a text saying “Remote Login: On – to log in to this computer remotely, type ssh user@192.168.1.11”.
  2. This is exactly what you need to type on another computer in the same network to log in via ssh (preferably a Mac, BSD or Linux box, but anything capable of ssh will do, even WinXP with PuTTY, if you know how to use that). You need to enter the target computer’s admin account password to get access.
  3. Now, back on the target machine, start the Migration Assistant, enter the password when asked and confirm terminating all other apps. The welcome screen of Migration Assistant appears. This is the point where Migration Assistant has unmounted all volumes.
  4. Go back to the ssh session on the other computer. Here you can enter the commands to mount a disk. In my case (original HD, connected via external USB housing) it was simply:
    • sudo diskutil list    # showing the available disks. Usually /dev/disk0 is the internal disk, /dev/disk1 the first external one.
    • sudo diskutil mountDisk /dev/disk1
  5. Now, on the target computer, click through the Migration Assistant dialogs step by step as usual, and everything works as expected :-)

Instead of using diskutil to mount real disks, you might want to use hdiutil to mount images (.dmg, sparse bundles etc.) with a command line like "sudo hdiutil mount /path/to/some/diskimage.dmg". Or maybe both combined: first diskutil to mount an external HD containing disk images, and then hdiutil to mount one of those.
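Put together, the whole remote remount amounts to one ssh invocation. The sketch below only assembles and prints the command instead of executing it, since diskutil only exists on the Mac; the host address and device node are examples (check `diskutil list` for the real ones):

```shell
# Hypothetical helper-machine side of the fix; adjust host and device to your setup
host="admin@192.168.1.11"
dev="/dev/disk1"
cmd="sudo diskutil mountDisk $dev"
# On the helper machine you would actually run this printed line:
echo "ssh $host \"$cmd\""
```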

There are plenty of examples and howtos about working with mount, diskutil and hdiutil on the net. But I did not find a single hint about using these tools to mount volumes and images for Mavericks’ buggy Migration Assistant – so I wrote this. Hope it will save others a bit of time…

19. February 2014 by luz
Categories: Uncategorized | 19 comments

We need an open sync infrastructure!

tl;dr: There is an approach – let’s build RFC 6578 into mod_dav!

After a pointed article on The Verge, the realization has taken hold on a broader front in recent weeks that iCloud has definitely not delivered on its promise of magical sync.

On the other hand, the rather unexpected discontinuation of Google Reader showed how quickly an “infrastructure” can collapse when it is not really infrastructure at all, but merely a pet project of a company like Google that, for whatever reason, grew tired of it.

The good thing about these events is that one thing is becoming clearer: we need an infrastructure for sync. Based on standards, with an open implementation, and not dependent on any particular provider.

Sync can roughly be divided into two tasks:

  • Efficiently distributing the produced data to all participants (especially with regard to data transfer)
  • Consistently merging the data from the various parties into “one truth”

How hard the second task is depends entirely on what you want. Apple wanted nothing less than the ultimate solution and set out to synchronize complex object graphs (CoreData) generically. For now, they have not managed to do that with usable stability.

But if all you want is a pile of files accessible everywhere, like Dropbox, then the second task is almost trivial and everything becomes much simpler. Plain file sync apparently works quite well even in iCloud.

For most apps, the effort required to derive a “truth” that is sufficiently consistent in their context lies somewhere in between. It would be nice if, over time, convenient frameworks emerged for this or that use case, but this part does not belong in a generic sync infrastructure.

The first task, however, is what the sync infrastructure must deliver, and one should be able to run it at any scale, from very small on one’s own server up to massively scaled at a cloud provider. Just as simple, reliable and interchangeable as a web server.

This takes more than just a server: as an app developer, you don’t want to have to fetch files from somewhere, or scan directory trees for changes. You want a local copy of the data, and to be notified when something changes (much like hardly anyone wants to open raw TCP sockets just to speak HTTP). For that, a matching client framework is needed.

Dropbox is now seizing the moment and offering exactly such a framework with its new Sync SDK. But that is not a standards-based open infrastructure.

WebDAV (RFC 4918), on the other hand, is open and standardized. And indeed, some app developers already use it for their sync needs. But WebDAV has one problem when it comes to sync: with a larger number of objects (files), finding changes is not efficient.

Unless, that is, you had an implementation of RFC 6578: “Collection Synchronization for Web Distributed Authoring and Versioning (WebDAV)”.
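To illustrate what such an implementation provides: with RFC 6578, a client asks the server for everything that changed since its last sync token, instead of crawling the whole tree. A sync-collection REPORT follows roughly this pattern (host, path and token value here are examples):

```xml
REPORT /folder/ HTTP/1.1
Host: dav.example.com
Content-Type: text/xml; charset="utf-8"

<?xml version="1.0" encoding="utf-8" ?>
<D:sync-collection xmlns:D="DAV:">
  <!-- empty on the first sync; afterwards, the token from the previous response -->
  <D:sync-token>http://dav.example.com/ns/sync/1234</D:sync-token>
  <D:sync-level>1</D:sync-level>
  <D:prop>
    <D:getetag/>
  </D:prop>
</D:sync-collection>
```

The server answers with a multistatus listing only the changed and removed members, plus a new sync-token for the next round.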

Who has one? Apple. Why? Because they need it in iCloud. Right now, that may not be the best advertisement. But anyone somewhat familiar with Apple technologies knows that Apple has applied exactly this pattern very successfully many times: take solid standards, evolve them openly, and then use them as the basis for proprietary “magic” (e.g. HTTP Live Streaming, Bonjour/multicast DNS, CalDAV). I am convinced that WebDAV collection sync is not the problem in iCloud.

My conclusion:

We (by which I mean the indie dev community) should join forces: on one end, to get RFC 6578 implemented in mod_dav and thus rolled out to every Apache web server on the planet within a useful time frame. And on the other end, to build sync frameworks that offer exactly what the new Dropbox Sync API offers, but with WebDAV as the backend.

So that not only we developers, but also the users of our apps, can once again choose where their data should live.

Is the time finally ripe for this?

09. April 2013 by luz
Categories: Deutsch | Tags: , | Leave a comment

“Not financially feasible” is always an excuse

Nobody financed the whole of human culture and civilization; it was built by that very humanity.

Whether the whole thing, as it is today, can be kept running and maintained does not fundamentally depend on whether money can flow for it, but on whether people keep working for it. The same goes for the further development of civilization.

When someone does something, it has an effect in this world. Financed or not is never a question of “whether”, only one of “how”.

This capacity to create things and structures (beyond nature) and to maintain them (against entropy), but also to destroy them, comes (apart from energy, whose sources are another topic) from the work of people. Mine, yours, all of our daily work in all its forms. Paid, unpaid, at home, at a workplace, constructive, destructive.

Money has nothing to do with it. The really central question today is not whether the financial system will collapse. It is: will other people keep doing what I cannot or will not do myself, but need in order to live? (Growing food, for example.)

Answering that question with “yes, as long as I have enough money to pay for it” is no answer. In a time when the monetary system consists of 90% speculation and, if it does not collapse entirely, will certainly undergo (and already has undergone) major changes, a financial justification is always an excuse.

Whoever wants real answers must break through the faith in money and see soberly that the monetary sphere, while an instrument of power, is by no means a cause in itself.

So the point is to ask who exercises the power that determines whether things get done or not.

1. First, myself: to what extent does my standard of living depend on other people being forced into work they would not do voluntarily? That is not easy to answer. But it is clear that we Western consumers enjoy a great deal of comfort for which others have to work under far less comfortable conditions, up to and including murderous exploitation [1]. Because with a lax attitude to consumption (wanting to have, but not looking at how it comes about), we delegate responsibility and power and thereby disenfranchise ourselves.

2. Trickier is the reverse question: what do I myself contribute that really makes other people’s lives more comfortable, safer, better, more beautiful? And then: how does that contribution compare to what I take from others? That is not an easy question if I don’t want to steal away from responsibility with a money excuse (“my work is well paid, so I assume it is useful”). Ultimately it is the question of meaning: am I doing something useful for the continued existence of the society and its institutions that I myself rely on, or am I sawing off the branch I am sitting on?

In itself it is banal: it is about continually, with each new day, asking the difficult question of the meaning of one’s own actions in the world, and not letting oneself be seduced into easy answers at the level of money.

If a project is “not financially feasible”, that ultimately means only this: at this moment, not enough people are willing to do or give something for the cause. That is certainly no easier to overcome than an empty bank account. But thinking of money first obscures the view of the motivations and intentions of those involved and affected, friends and foes. If our monetary system were healthy and the markets functioning, the monetary landscape would map reality reasonably well. But in today’s gravely ill monetary system, that mapping is so badly distorted that the actual real merits and problems of an undertaking are barely visible behind all the financial aspects. Thus severely harmful things get done merely because they yield financial profit, and urgently necessary things are not tackled because they are “not financially feasible”.

We cannot abolish today’s monetary system overnight, but we can, in our heads, banish monetary considerations from the center of our perception of reality and downgrade them to the category “outdated, broken tool, unfortunately without a replacement for the moment”. As soon as money loses its absolute importance in people’s minds, it also loses its absolute power in the world.

Any sudden collapse of the monetary system would be very painful; that everything would be better afterwards is a dangerous illusion. A gradual erosion of the importance of this insane money in the daily thoughts of individuals, on the other hand, is a hopeful variant. As faith wanes, so does fear, and energy is freed up to work on alternatives.

Money would not thereby be abolished, but demoted back to where it is useful, and in a world of division of labor indeed indispensable: from an end in itself to a means to an end.

In that sense, I hope for an Enlightenment 2.0 in people’s minds.

[1] Finding this out in full detail for every consumer product is hard, but for many things a rough idea can be had by comparing the work (and not the money!) involved: a piece of clothing that keeps someone busy sewing for half a day, but costs a fraction of a local hourly wage here, involves exploitation. Not necessarily individual exploitation, but certainly economic. Usually both.

19. July 2012 by luz
Categories: Deutsch | Tags: | Leave a comment

Installing PrivateTax on Mac OS X 10.7 Lion

Yet another hack! This one is of geographically very limited but great use: for those who, like me, have to file their 2010 tax return in the canton of Zurich at the last minute, and have a Mac running 10.7 Lion.

As every year, I downloaded the free program “Private Tax 2010” for this. It turns out that this program requires a PowerPC Mac or Rosetta, and therefore cannot be installed on Lion at all. Great! Five years after Apple’s switch to Intel!

But wait: isn’t PrivateTax a Java program? That has nothing to do with Intel/PowerPC, does it? A look inside the “Setup.app” package immediately reveals the culprit: it contains an ancient version of InstallAnywhere (from 2006).

Apparently someone was too stingy (we have to save taxpayers’ money, after all) to renew the license even once in the last five years.

Fortunately, the solution is quite simple: find some other reasonably current Mac program that is installed with InstallAnywhere (googling “mac installanywhere” finds plenty). Then, in the Finder, use “Right-click -> Show Package Contents” to open both the “Setup.app” of PrivateTax and the corresponding *.app of the other program (it does not necessarily have to be called “Setup.app”). Expanded, the contents on both sides look roughly like this:

[Screenshot: PrivateTax installer package contents] The selected “Setup” is the InstallAnywhere binary, which in the case of PrivateTax is PPC-only. It is now simply replaced by the corresponding binary from the newer installer (if it has a different name in the other installer, just rename it to “Setup”).

That modernizes the installer. Now just double-click the “Setup” binary directly: a Terminal window opens, the installer starts normally with its GUI, and PrivateTax can be installed. At that point I thought: done! Unfortunately, not quite.

Because the installed app, “Private Tax 2010.app”, is technically itself another InstallAnywhere. That is, it has exactly the same structure as shown above, and exactly the same outdated binary, except this time it is called “Private Tax 2010”. So the same trick is needed again: copy the binary from the newer program here as well, and name it “Private Tax 2010”.
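In Terminal, the two replacements can be sketched like this; every path below is a throwaway stand-in created just for the demo, not the real bundle locations:

```shell
# Throwaway demo of the double replacement; real bundles live in /Applications
work="$(mktemp -d)"
mkdir -p "$work/Setup.app/Contents/MacOS" \
         "$work/Private Tax 2010.app/Contents/MacOS" \
         "$work/Modern.app/Contents/MacOS"
# Stand-in for the up-to-date InstallAnywhere binary from the other program
printf 'modern installanywhere binary' > "$work/Modern.app/Contents/MacOS/ModernSetup"
# First replacement: the installer's "Setup"
cp "$work/Modern.app/Contents/MacOS/ModernSetup" "$work/Setup.app/Contents/MacOS/Setup"
# Second replacement: the installed app's "Private Tax 2010"
cp "$work/Modern.app/Contents/MacOS/ModernSetup" \
   "$work/Private Tax 2010.app/Contents/MacOS/Private Tax 2010"
copied="$(cat "$work/Setup.app/Contents/MacOS/Setup")"
echo "$copied"
rm -rf "$work"
```

The same copy, just with the target renamed, lands in both places.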

Finally, the Mac OS X cache has to be outwitted, since it still firmly believes the installed app is PowerPC-only and therefore refuses to launch it. This is done by copying the entire “Private Tax 2010” app (from /Applications/Private Tax 2011) to another location and back again. The prohibition sign on the icon disappears, and from then on the app launches normally.

As with any hack, of course, do all of this entirely at your own risk!

For me it worked perfectly, with no side effects when using PrivateTax afterwards.

Happy tax-return filing!

12. November 2011 by luz
Categories: Deutsch | Tags: , | 19 comments

How to make the MacBook Air SuperDrive work with any Mac

Note: for Mac OS X 10.11 El Capitan and later, please see this updated post instead.

(Edited/clarified Nov. 2012, Nov. 2013, Jan 2015 and June 2016) 

The story is this – a while ago I replaced the built-in optical disk drive in my MacBook Pro 17″ with an OptiBay (in the meantime, there are also alternatives), which allows connecting a second hard drive or, in my case, an SSD.

To be able to continue using the SuperDrive (Apple’s name for the CD/DVD read/write drive), the OptiBay came with an external USB case, which worked fine but was ugly. And I didn’t want to carry that around, so I left it at home and bought a shiny new MacBook Air SuperDrive (by 2012, Apple USB SuperDrive) for the office.

It just didn’t occur to me that this thing could possibly not just work with any Mac, so I didn’t even ask before buying. I knew that many third-party USB optical drives work fine, so I assumed the same would be true for the Apple drive. But I had to learn otherwise. This drive only works with Macs which, in their original form, do not have an optical drive.

At this point, I started to search the net, finding hints, disassembling Mac OS X USB drivers and finally patching code in a hex editor, which was the first, but ugly, solution to make the SuperDrive work, and gave me the information to eventually find the second, much nicer solution presented below. For those interested in the nifty details of disassembling and hex code patching, the first approach is still documented here.

To actually make the SuperDrive work in a clean and easy way, just read on (but note: while this has proven to be a quite safe method, you’ll still be doing it entirely at your own risk! Using sudo and editing system files incorrectly can damage things severely!).

Apparently, Apple engineers needed to test the SuperDrive with non-MacBook-Air computers themselves, so the driver already has a built-in option to work on officially unsupported machines! All you need to do is enable that option, as follows:

The driver recognizes a boot parameter named “mbasd” (MacBook Air SuperDrive), which sets a flag in the driver that both overrides the check for the MBA and tweaks something related to USB power management (the SuperDrive probably needs more power than regular USB allows). So just editing /Library/Preferences/SystemConfiguration/com.apple.Boot.plist and inserting “mbasd=1” into the “Kernel Flags” does the trick:

[For OS X 10.11 El Capitan onwards please see here for updated instructions instead!]

  1. open a terminal
  2. type the following two commands (two lines, each “sudo” starting on a new line)

    sudo plutil -convert xml1 /Library/Preferences/SystemConfiguration/com.apple.Boot.plist

    sudo pico /Library/Preferences/SystemConfiguration/com.apple.Boot.plist

  3. Insert mbasd=1 in the <string></string> value below the <key>Kernel Flags</key> (If and only if there is already something written between <string> and </string>, then use a space to separate the mbasd=1 from what’s already there. Otherwise, avoid any extra spaces!). The file will then look like:

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
    <plist version="1.0">
    <dict>
    <key>Kernel Flags</key>
    <string>mbasd=1</string>
    </dict>
    </plist>

    [Important update for users of Trim Enabler (thanks boabmatic!): since Yosemite, installing Trim Enabler puts another flag, “kext-dev-mode=1”, into the com.apple.Boot.plist and, unfortunately, also converts the .plist to binary format, which shows up as mostly garbage in many text editors (that’s what the “plutil” line in step 2 above takes care of: it converts the file back to XML so you can edit it). Note that the system will not boot any more when Trim Enabler is installed but “kext-dev-mode=1” is missing! So to apply “mbasd=1” with Trim Enabler active, you need to combine both flags, such that the line will read
    <string>kext-dev-mode=1 mbasd=1</string>. For details on Yosemite and Trim Enabler, see here]
    [Update: As CyborgSam pointed out in the comments, the file might not yet exist at all on some Macs. In that case, the pico editor window will initially be empty – if so, just copy and paste the entire XML block from above].

  4. Save (press Ctrl-X, answer yes to save by pressing Y, press enter to confirm the file name).
  5. Restart your machine. That’s it!

I tested this [Updated: 2013-11-03] on Lion 10.7.2 up to 10.7.4, Mountain Lion up to 10.8.4, and Mavericks 10.9 so far, but I expect it to work for all Mac OS X versions that came after the initial release of the MacBook Air SuperDrive, which is probably 10.5.3, and it is likely to work with future versions of OS X. Just let me know your experience in the comments!

BTW: the boot options plist and how it works are described in the Darwin man pages

28. October 2011 by luz
Categories: English | Tags: , | 479 comments

Technical background for: How to make the MacBook Air SuperDrive work with any Mac

Please note: this is the first, much too complicated way I tried (and succeeded) to get the MacBook Air SuperDrive to work with my MacBook Pro. In the meantime, I found a much better, safer and easier way to do it. I kept this description here for those interested in the technical details of searching for and eventually finding a solution. If you just want to make your MacBook Air SuperDrive work, please see this post, and don’t confuse yourself with the techy details below.

Warning: this is a hack, and it’s not for the faint of heart. If you do anything of what I’ll describe below, you are doing it entirely at your own risk. If the description below does not make at least a bit of sense to you, I would not recommend trying the recipe at the end.

The story is this – a while ago I replaced the built-in optical disk drive in my MacBook Pro 17″ with an OptiBay (in the meantime, there are also alternatives), which allows connecting a second hard drive or, in my case, an SSD.

To be able to continue using the SuperDrive (Apple’s name for the CD/DVD read/write drive), the OptiBay came with an external USB case, which worked fine but was ugly. And I didn’t want to carry that around, so I left it at home and bought a shiny new MacBook Air SuperDrive for the office.

It just didn’t occur to me that this thing could possibly not just work with any Mac, so I didn’t even ask before buying. I knew that many third-party USB optical drives work fine, so I assumed the same would be true for the Apple drive. But I had to learn otherwise. This drive only works with Macs which, in their original form, do not have an optical drive. Which are the MacBook Airs and the new Minis.

But why doesn’t it work? Searching the net, among a lot of inaccurate speculation, I found a very informative blog post from 2008 which pretty much explains everything and even provides a hardware solution: replacing the Apple-specific USB-to-IDE bridge within the drive with a standard part.

However, I was challenged to find a software solution. I could not believe that there’s a technical reason for not using that drive as-is in any Mac.

There are a lot of good reasons for Apple not to allow it – first and foremost, avoiding the complexity of possibly multiple CD/DVD drives, confusing users and creating support cases.

So I thought it must be the driver intentionally blocking it, and a quick look into the console revealed that in fact it is, undisguised:

2011/10/27 5:32:37.000 PM kernel: The MacBook Air SuperDrive is not supported on this Mac.

Apparently the driver knows that drive, and refuses to handle it if it runs on the “wrong” Mac. From there, it was not too much work. The actual driver for Optical Disk Drives (ODD) is /System/Library/Extensions/AppleStorageDrivers.kext/Contents/PlugIns/AppleUSBODD.kext

I fed it to the IDA evaluation version, searched the strings for the message from the console, found where that text is used, and once I saw the code nearby it was very clear how it works: the driver detects the MBA SuperDrive, and then checks if it is running on an MBA or a Mini. If not, it prints that message and exits. It’s one conditional jump that must be made unconditional (to tell the driver: no matter what Mac, just use the drive!). In i386 opcode this means replacing a single 0x75 byte with 0xEB. IDA could tell me which one this was in the 32-bit version of the binary, and the nice 0xED hex editor allowed me to patch it. However, most modern Macs run in 64-bit mode, and the evaluation version of IDA cannot disassemble that. So I had to search the hexdump of the driver for the same code sequence (similar, but not identical hex codes) in the 64-bit part of the driver. Luckily, that driver is not huge, and there was a pretty unique byte sequence that identified the location in both 32 and 64 bit. So the other byte location to patch was found. I patched both locations with 0xED – and the MacBook Air SuperDrive instantly worked with my MBP!
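The patch itself is tiny. The toy sketch below applies the same kind of single-byte edit with dd to a fake four-byte “binary” (the real offsets 0x1CF8 and 0xBB25 in the recipe below only apply to that specific driver build; do not run dd against the actual kext):

```shell
# Toy demonstration: flip a 0x75 (jne) byte to 0xEB (jmp) in place
f="$(mktemp)"
printf '\125\165\020\220' > "$f"    # bytes 55 75 10 90: a fake jne (0x75) at offset 1
printf '\353' | dd of="$f" bs=1 seek=1 conv=notrunc 2>/dev/null   # write 0xEB over it
bytes="$(od -An -tx1 "$f" | tr -d ' \n')"
echo "$bytes"                       # 55eb1090: the conditional jump is now unconditional
rm -f "$f"
```

`conv=notrunc` is the important part: without it, dd would truncate the file after the patched byte.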

Now for the recipe (again – be warned, following it is entirely your own risk, and remember sudo is the tool which lifts all restrictions, you can easily and completely destroy your OS installation and data with it):

  1. Make sure you have Mac OS X 10.7.2 (Lion), Build 11C74. The patch locations are highly specific to one build of the driver; it is very unlikely to work without modification on any other version of Mac OS X.
  2. get 0xED or any other hex editor of your choice
  3. Open a terminal
  4. Go to the location where all the storage kexts are (which is within an umbrella kext called AppleStorageDrivers.kext)
    cd  /System/Library/Extensions/AppleStorageDrivers.kext/Contents/PlugIns
  5. Make a copy of your original AppleUSBODD.kext (to the desktop for now, store in a safe place later – in case something goes wrong you can copy it back!)
    sudo cp  -R AppleUSBODD.kext ~/Desktop
  6. Make the binary file writable so you can patch it:
    sudo chmod 666  AppleUSBODD.kext/Contents/MacOS/AppleUSBODD
  7. Use the hex editor to open the file AppleUSBODD.kext/Contents/MacOS/AppleUSBODD
  8. Patch:
    at file offset 0x1CF8, convert 0x75 into 0xEB
    at file offset 0xBB25, convert 0x75 into 0xEB
    (if you find something else than 0x75 at these locations, you probably have another version of Mac OS X or the driver. If so, don’t patch, or it means asking for serious trouble)
  9. Save the patched file
  10. Remove the signature. Surprisingly, this is all it takes to make the patched kext load:
    sudo rm -R  AppleUSBODD.kext/Contents/_CodeSignature
  11. Restore the permissions, and make sure the owner is root:wheel, in case your hex editor has modified it.
    sudo chmod 644  AppleUSBODD.kext/Contents/MacOS/AppleUSBODD
    sudo chown root:wheel  AppleUSBODD.kext/Contents/MacOS/AppleUSBODD
  12. Make a copy of the patched driver to a safe place. In case a system update overwrites the driver with a new unpatched build, chances are high you can just copy this patched version back to make the external SuperDrive work again.
  13. Plug in the drive and enjoy! (If it does not work right away, restart the machine once).
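For those who prefer scripting the manual hex edit (steps 7–9), here is a minimal Python sketch. The offsets are the ones from step 8 and, as noted there, are only valid for this exact driver build; the script refuses to touch the file unless both locations contain the expected 0x75 (jne) opcode:

```python
#!/usr/bin/env python3
# Sketch only: patch the two conditional jumps (jne -> jmp) in the
# AppleUSBODD binary. Offsets are specific to the 10.7.2 (11C74) build.
OFFSETS = (0x1CF8, 0xBB25)

def patch_jne_to_jmp(path, offsets=OFFSETS):
    with open(path, "r+b") as f:
        data = bytearray(f.read())
        # Safety check first: both locations must hold 0x75 (jne);
        # otherwise this is a different build and we must not patch.
        for off in offsets:
            if data[off] != 0x75:
                raise SystemExit(
                    f"offset {off:#x} holds {data[off]:#04x}, not 0x75 -- "
                    "different driver build, refusing to patch")
        for off in offsets:
            data[off] = 0xEB  # jmp rel8: makes the Mac-model check unconditional
        f.seek(0)
        f.write(data)

# Usage (run with sudo, after step 6 made the file writable):
#   patch_jne_to_jmp("AppleUSBODD.kext/Contents/MacOS/AppleUSBODD")
```

You still need steps 10 and 11 afterwards (remove the signature, restore permissions and ownership).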

PS: Don’t ask me for a download of the patched version – That’s Apple’s code, the only way is DIY!

28. October 2011 by luz
Categories: Uncategorized | 3 comments

Secure Cloud Storage with deduplication?


Last week, Dropbox’s little problem made me realize how much I was trusting them to do things right, just because they once (admittedly, long ago) wrote they’d be using encryption such that they could not access my data even if they wanted to. The auth problem showed that today’s Dropbox reality couldn’t be farther from that – apparently the keys are lying around and are in no way tied to user passwords.

So, following my own advice to take care about my data, I tried to understand better how the other services offering similar functionality to dropbox actually work.

Reading through their FAQs, there are a lot of impressive sounding crypto acronyms, but usually no explanation of their logic how things work – simple questions like what data is encrypted with what key, in what place, and which bits are stored where, remain unanswered.

I’m not a professional in security, let alone cryptography. But I can follow a logical chain of arguments, and I think providers of allegedly secure storage should be able to explain their stuff in a way that anyone thinking logically can follow.

Failing to find these explanations, I tried the reverse. Below, I follow the logic myself to find out whether and how adding sharing and deduplication to the obvious basic setup (private storage for one user) will or will not compromise security. Please correct me if I’m wrong!

Without any sharing or upload optimizing features, the solution is simple: My (local!) cloud storage client encrypts everything with a long enough password I chose and nobody else knows, before uploading. Result: nobody but myself can decrypt the data [1].

That’s basically how encrypted disk images, password wallet and keychain apps etc. work. More precisely, many of them use two-stage encryption: they generate a large random key to encrypt the data, and then use my password to encrypt that random key. The advantage is performance, as the workhorse algorithm (the one that encrypts the actual data) can be one that needs a large key to be reasonably safe, but is more efficient. The algorithm that encrypts the large random key with my (usually shorter) password doesn’t need to be fast, and thus can be very elaborate to gain better security from a shorter secret.
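A minimal sketch of this two-stage scheme in Python, using only the standard library. The toy SHA-256 counter-mode keystream below stands in for the fast bulk cipher (a real client would use something like AES), while PBKDF2 plays the role of the slow, elaborate password step:

```python
import os
import hashlib

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy stream cipher: SHA-256 in counter mode, XORed with the data.
    # For illustration only -- a real client would use AES-GCM or similar.
    out = bytearray()
    for off in range(0, len(data), 32):
        pad = hashlib.sha256(key + off.to_bytes(8, "big")).digest()
        out.extend(b ^ p for b, p in zip(data[off:off + 32], pad))
    return bytes(out)

def encrypt_two_stage(password: str, plaintext: bytes):
    data_key = os.urandom(32)                        # large random per-file key
    ciphertext = keystream_xor(data_key, plaintext)  # fast bulk encryption
    salt = os.urandom(16)
    # Deliberately slow key derivation from the (shorter) password:
    kek = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    wrapped_key = keystream_xor(kek, data_key)       # password encrypts only the key
    return salt, wrapped_key, ciphertext

def decrypt_two_stage(password: str, salt, wrapped_key, ciphertext) -> bytes:
    kek = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    data_key = keystream_xor(kek, wrapped_key)
    return keystream_xor(data_key, ciphertext)
```

Note how only the small wrapped key depends on the password; re-encrypting a file after a password change means rewrapping 32 bytes, not re-uploading the file.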

The next step is sharing. How can I make some of my externally stored files accessible to someone else, while still keeping them entirely secret from the cloud storage provider who hosts the data?

With two-stage encryption, there’s a way: I can share the large random key of the files being shared (of course, each file stored needs to have its own random key). The only thing I need to make sure is that nobody but the intended receiver obtains that key on the way. This is easier said than done. If the storage provider manages the transfer – such as by offering a convenient button to share a file with user xyz – this inevitably means I must trust the service about the identity of xyz. The best they can provide is an exchange that does not require the key to be present in decrypted form in the provider’s system at any time. That may be an acceptable compromise in many cases, but I need to be aware that only a proof of identity established outside the storage provider’s reach really prevents them from possibly accessing my shared files. For instance, I could send the key in a GPG-encrypted email.

So the bottom line for sharing is: correctly implemented, it does not affect the security of the non-shared files at all, and if the key was exchanged securely and directly between the sharing parties, it would even work without any way for the provider to access the shared data. With the one-button sharing convenience we’re used to, the weak point is that the provider (or a malicious hacker inside the provider’s system) could technically forge identities and receive access to shared data in place of the person I actually wanted to share with. Not likely, but possible.

The third step is deduplication. This is important for the providers, as it saves them a lot of storage, and it is a convenience for users because if they store files that other users already have, the upload time is near zero (because it’s no upload at all, the data is already there).

Unfortunately, the explanations around deduplication get really foggy. I haven’t found a logically complete explanation from any of the cloud storage providers so far. I see two things that must be managed:

First, for deduplication to work, the service provider needs to be able to detect duplicates. If the data itself is encrypted with user-specific keys, the same file from different users looks completely different at the provider’s end. So neither the data itself nor a hash over that data can be used to detect duplicates. What some providers seem to do is calculate a hash over the unencrypted file. But I don’t really understand why many heated forum discussions seem to focus on whether that’s ok or not. Because IMHO the elephant in the room is the second problem:

If files are indeed stored encrypted with a secret only the user has access to, deduplication is simply not possible, even if detecting duplicates by sharing hashes works. The only one who can decrypt a given file is the one who has the key. The second user who tries to upload a given file does not have (and must not have a way to obtain!) the first user’s key for that file, by definition. So even if the encrypted data of that file is already there, it does not help the second user. Without the key, the data stored by the first user is just garbage to him.

How can this be solved? IMHO all attempts based on doing some implicit sharing of the key when duplicates are detected are fundamentally flawed, because we inevitably run into the proof of identity problem as shown above with user-initiated sharing, which becomes totally unacceptable here as it would affect all files, not only explicitly shared ones.

I see only one logical way for deduplication without giving the provider a way to read your files: By shifting from proof-of-identity for users to proof-of-knowledge for files. If I can present a proof that I had a certain file in my possession, I should be able to download and decrypt it from the cloud. Even if it was not me, but someone else who actually uploaded it in the first place. Still everyone else, including the storage provider itself, must not be able to decrypt that file.

I now imagine the following: instead of encrypting the files with a large random key (see above), my cloud storage client would calculate a hash over my file and use that hash as the key to encrypt the file, then store the result in the cloud. So the only condition to get that file back would be having had access to the original unencrypted file once before. I myself would qualify, of course, but anyone (totally unrelated to me) who has ever seen the same file could calculate the same hash and would qualify as well. However, for whom the file was a secret, it remains a secret [2].
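This idea is known as convergent encryption, and it can be sketched in a few lines – again with a toy SHA-256 counter-mode keystream standing in for a real cipher. Identical plaintexts yield identical ciphertexts and thus identical storage locators, no matter who uploads them:

```python
import hashlib

def _keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy SHA-256 counter-mode keystream; a real client would use AES etc.
    out = bytearray()
    for off in range(0, len(data), 32):
        pad = hashlib.sha256(key + off.to_bytes(8, "big")).digest()
        out.extend(b ^ p for b, p in zip(data[off:off + 32], pad))
    return bytes(out)

def convergent_encrypt(plaintext: bytes):
    key = hashlib.sha256(plaintext).digest()          # key = hash of the content itself
    ciphertext = _keystream_xor(key, plaintext)
    locator = hashlib.sha256(ciphertext).hexdigest()  # dedup id the provider may see
    return locator, key, ciphertext

def convergent_decrypt(key: bytes, ciphertext: bytes) -> bytes:
    return _keystream_xor(key, ciphertext)
```

The provider only ever sees locators and ciphertexts; the key never leaves the client, yet two clients holding the same file independently derive the same key.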

I wonder if that’s what cloud storage providers claiming to do global deduplication actually do. But even more I wonder why so few speak plainly. It need not be on the front page of their offerings, but a logically conclusive explanation of what happens inside their service is something that should be in every FAQ, in one piece – not just bits spread across forum threads, mixed with a lot of guesses and wrong information!


[1] Of course, this is possible only within the limits of the crypto algorithms used. But these are widely published and reviewed, as are their actual implementations. They are not absolutely safe, and implementation errors can make them additionally vulnerable. But we can safely trust that there are a lot of real cryptography experts’ eyes on these basic building blocks of data security. So my working assumption is that the encryption methods used are good enough per se. The interesting questions are what trade-offs are made to implement convenience features.

[2] Note that all this does not help with another fundamental weakness of deduplication: if someone wants to find out who has a copy of a known file, and can gain access to the storage provider’s data, he’ll get that answer out of the same information which is needed for deduplication. If that’s a concern, there’s IMHO no way except not doing deduplication at all.


28. June 2011 by luz
Categories: English | Tags: | 1 comment

The only one who really cares about your data is you!


Once more, yesterday’s Dropbox authentication bug shows a fundamental weakness of centralized services. Dropbox is just a high profile example, but the underlying problem is that of unneeded centralisation.

Every teenager who starts using Facebook is told how important it is to wisely choose what to put online and what not, and to always be aware that nothing published on the internet can ever be completely deleted any more.

However, the way popular “cloud” services are built today unfortunately just ignores this: they choose a centralized implementation. Which means uploading everything first, and then trying (and sometimes sadly failing, like Dropbox yesterday) to protect that data from unauthorized access.

Why? Because it is easier to implement. Yes, distributed systems are much harder to design and implement. But just choosing a centralized approach inevitably generates single points of failure. I really don’t think we can afford that risk much longer.

It’s not even only a technical problem. It’s a mindset of delegating too much responsibility, which is fatal. Relying on a centralized storage to be “just secure” is delegating responsibility to others – responsibility that those others are unlikely to live up to.

The argument often goes: it’s too hard for a smaller company to run their own servers and keep them secure, so better leave that to the big cloud providers, who are the experts and really care. That’s simply not true. Like everyone else, they care about their profit. If they lose or expose your data, they care about the PR debacle this might be for them, but not about the data itself. The only one who really cares about what happens to your data – is you.

Even assuming the service provider was able to keep your data safe, there’s another problem. As we have heard again in the discussion about Dropbox’s TOS, there are legal constraints on what a cloud service may and may not do. For instance, they may not store your data encrypted such that “law enforcement” cannot access it under certain conditions. Which means that Dropbox can’t offer encryption based on really private keys (only you have the key to your data, they don’t) even if they wanted to.

What they could do, and IMHO must do in the long term, is offering a federated system. Giving you the choice to host the majority of data in a place where you are legally allowed to use strong encryption with your entirely private keys, such as your own server. Only for sharing with others, and only with the data actually being shared, smaller entities need to enter a bigger federation (which might be organized by a globally operating company).

That’s how internet mail has always worked – no mail sent among members of an organisation ever needs to leave the mail servers of that organisation. The same goes for Jabber/XMPP. This should become true for Dropbox, Facebook, Twitter etc. as well. They should really start structuring their clouds, and give users the option to keep critical data themselves, without making this a decision against using the service at all.

Unfortunately, one of the few big projects that expressly had federation on the agenda, Google Wave, has almost (but not entirely) disappeared after a big hype in 2009. Sadly, most probably exactly the fact that they focused so much on federation and scalability, and not on polishing their web interface, made it a failure in the eyes of the public.

Maybe we should really do away with that fuzzy term “the cloud” and start talking about small and big clouds, more and less private ones, and how and if they should or should not interact.

Still, one of the currently most opaque clouds is ahead of us – Apple’s iCloud. Nothing at all was said in public about how security will work in the iCloud. And from what was presented, it seems for now it will have no cross-account sharing features at all.

The only thing that seems clear is that all of our data will be stored in that huge datacenter in North Carolina, so I guess that using iCloud when it launches in a few months will demand total trust in Apple to get it right (and, as said above, this is a responsibility nobody can really take).

On the other hand, Apple could be foresighted enough to realize the need for federation in a later step, for example by allowing future Time Capsules to act as in-house cloud servers. After all, and unlike other players, Apple profits from selling hardware to us. And to base a speculation on my earlier speculation (still neither confirmed nor disproved), iCloud might be technically ready for that.

But whatever inherent motivation the big players may or may not have to improve the situation – it’s up to us to realize there’s no easy way around taking care of our data ourselves, and to ask for standards, infrastructure and services which make doing so possible.

21. June 2011 by luz
Categories: English | Tags: , | Leave a comment

iCloud sync speculation


Here’s my last minute technical speculation what iCloud will be in terms of sync :-)

It’ll be sync-enabled WebDAV on a large scale.

I spent the last 10 years working on synchronisation, in particular SyncML. SyncML is an open standard for synchronisation, created in 2000 by the then big players in the mobile phone industry together with some well known software companies.

SyncML remained a niche from a user perspective, despite the fact that almost every featurephone built in the last 9 years has had SyncML built in. And despite the fact that Steve Jobs himself pointed out Apple’s support for SyncML when he introduced iSync in July 2002 at Macworld NY.

As we have learnt by now, iSync (and with it, SyncML for the Apple universe) will be history with Lion. And featurephones are pretty much history as well, superseded by smartphones.

Unlike featurephones, smartphones never had SyncML built in (a fact that allowed me to earn my living by writing SyncML clients for these smartphone platforms…). The reason probably was that the vendors of the dominant smartphone operating systems, Palm and later Microsoft, already had their own proprietary sync technologies in place (HotSync, ActiveSync). Only Symbian was committed to SyncML, but forced by the market share of ActiveSync-enabled enterprise software (Exchange), in 2005 they also licensed ActiveSync from Microsoft.

So did Apple for the iPhone. So did Google for Google calendar mobile sync. Third party vendors of collaboration server solutions did the same.

For a while, it seemed that the sync battle was won by ActiveSync. And by other proprietary protocols for other kinds of syncing, like dropbox for files, Google for docs, and a myriad of small “cloud” enabled apps which all do their own homebrew syncing.

Not a pleasant sight for someone like me who believes that seamless and standards based sync is as basic for mobile computing as IP connectivity was for the internet.

However, in parallel another standard for interconnecting calendars (not exactly syncing, see below) grew – CalDAV. CalDAV is an extension of WebDAV which adds calendar-specific queries and other functionality to WebDAV. And WebDAV is a mature and widely deployed extension of HTTP that allows clients not only to read from a web server, but also to write to it. Apple has been a strong supporter of WebDAV for many years (iDisk is WebDAV storage), and is also a driving force behind CalDAV. Mac OS X 10.5 Leopard and iOS 3.0 support CalDAV. And more recently, Apple implemented CardDAV in iOS 4.0 and proposed it as an internet draft to the IETF, to support contact information the same way CalDAV does for calendar entries.

This is all long and well known, and CalDAV is already widely used by many calendaring solutions.

There’s one not-so-well-known puzzle piece, however. I stumbled upon it a few months ago because I am generally interested in sync-related stuff. But only now did I realize it might be the Rosetta stone for making iCloud. I did some extra googling today and found some clues that fit too nicely to be pure coincidence.

The puzzle piece is this: an IETF draft called “Collection synchronisation for WebDAV” [Update – by March 2012 it has become RFC 6578]. The problem with WebDAV (and CalDAV, CardDAV) is that it was designed as an access method, not a sync method. While it is well possible to sync data via WebDAV, it does not scale well with large sync sets, because a client needs to browse through all the information available just to detect the changes. With a large sync set of possibly many hundred thousand files (think of your home folder), that simply doesn’t work. The proposed extension fixes exactly this problem and makes WebDAV and its derivatives ready for efficient sync of arbitrarily huge sync sets, by making the server itself keep track of changes and report them to interested clients.
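For illustration, this is roughly what such a request looks like per the draft (now RFC 6578): the client sends a REPORT containing the sync-token it received last time (empty on the very first sync), and the server answers with only the members that changed since then, plus a fresh token. The Python below just builds the XML body; treat the exact shape as a sketch of the mechanism, not a complete client:

```python
# Sketch of an RFC 6578 "sync-collection" REPORT body. On the first
# sync the client sends an empty sync-token; the server's multistatus
# response returns the changed members plus a new token to store.
SYNC_COLLECTION_REPORT = """<?xml version="1.0" encoding="utf-8"?>
<D:sync-collection xmlns:D="DAV:">
  <D:sync-token>{token}</D:sync-token>
  <D:sync-level>1</D:sync-level>
  <D:prop>
    <D:getetag/>
  </D:prop>
</D:sync-collection>"""

def make_sync_report(token: str = "") -> str:
    # token: the opaque DAV:sync-token from the previous response,
    # or "" to request the full initial listing.
    return SYNC_COLLECTION_REPORT.format(token=token)
```

The key point is that change detection moves to the server: instead of crawling hundreds of thousands of resources, the client exchanges one small token per sync round.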

With this, a WebDAV based sync infrastructure reaching from small items like contacts and calendar entries to large documents and files (hello dropbox!) is perfectly feasible. Now why should iCloud be that infrastructure? That’s where I started googling today for this blog entry.

I knew that the “Collection synchronisation for WebDAV” proposal was coming from Apple. But before, I didn’t pay attention to who the author was. I did now – it’s Cyrus Daboo, who spent a lot of time writing Mulberry, an email client dedicated to making the best possible use of the IMAP standard. Although usually seen as just another email protocol, IMAP is very much about synchronisation at a very complex level (because emails can be huge, and partial sync of items, as well as moving them wildly around within folder hierarchies, must be handled efficiently), so Cyrus is certainly a true sync expert with a lot of real-world experience. He joined Apple in 2006. Google reveals that he worked on the Calendar Server (the part of Mac OS X Server supporting CalDAV and CardDAV), and also contributed to other WebDAV-related enhancements. It doesn’t seem likely to me they hired him (or he would have let them hire him) just to polish the calendar server a bit…

Related to the imminent release of iCloud, I found a few events interesting: MobileMe users had to migrate to a new CalDAV-based Calendar by May 11th, 2011. And just a month earlier, Cyrus issued the “WebDAV sync informal last call” before submitting “Collection synchronisation for WebDAV” to the IETF, noting that there were “already several client and server implementations of this draft now”. And did you notice how the iOS iWork apps just got a kind of document manager with folders? After becoming WebDAV-aware only a few months ago?

So what I guess we’ll see today:

  • a framework in both iOS 5 and Mac OS X Lion which nicely wraps WebDAV + “Collection synchronisation for WebDAV” in a way that makes permanent incremental syncing for all sorts of data a basic service of the OS that every app can make use of.
  • a cloud based WebDAV+Sync storage – the iCloud
  • a home based WebDAV+Sync storage – new TimeCapsules and maybe AirPorts
  • and of course a lot of Apple Magic around all this. Like Back-to-my-Mac and FaceTime are clever mash-ups of many existing internet standards to make them work “magically”, there will be certainly more to iCloud than just a WebDAV login (let alone all the digital media locker functionality many expect).

In about 5 hours we’ll hopefully know…

06. June 2011 by luz
Categories: English | Tags: , | Leave a comment
