Apple’s Mind Control

This will seem pretty crazy (and it probably is), but I have a weird idea about where Apple could be headed over the next several years - and it could be straight into the land of science fiction.

Rather than try to persuade you to my conclusion, I’ll just state it up front: I believe that the next big interaction model could be mind control.

I don’t mean that the technology will be controlling our minds, but rather that we will use our minds to control our technology.

Now let’s consider what Apple just announced - a wristwatch. This is a device with such a small screen that the currently dominant interaction model - multitouch - is almost useless on it. Apple themselves pointed this out and then tried to sell us on the “Digital Crown” as the clever solution, which is nothing more than the original iPod’s wheel turned on its side. It’s not exactly a shocking innovation, and it’s not exactly a whole new interaction model, either. Surely there’s more to this?

The other thing to consider is that Apple’s “home screen” user interface for the watch appears to be a zoom, pan, and scroll model - which is very odd, since the screen is so small that dragging around the home screen to choose the function/app you want actually obscures the very things you are trying to find under your relatively giant finger. How can Apple point out the absurdity of a pinch/drag UI for maps on the watch while also highlighting the very same interaction for the home screen?

As I understand it, current mind control technology is mostly in the realm of gross movements - up, down, left, right, and so on - and can be used to move cursors on a screen or drive prosthetic limbs. Now think of those rough movements as gestures and then imagine controlling the Apple Watch UI with them. If they are able to read thought gestures for up, down, left, right, and perhaps a kind of push/selection thought, you could control the entire watch UI without actually touching it.

So how would Apple read your mind? Well, the Watch already knows when you’re glancing at it based on the movement of your arm, so it’s going to have a pretty good idea of *when* to listen to your mind - but certainly that can’t be enough on its own! In all likelihood there would need to be some kind of sensors much closer to the head for this to work.

Enter Beats. Headphones that sit on your head already have a band that wraps right around your skull and they’re already socially acceptable (unlike Google Glass). Sensors could be put right into the headphone bands. If you’re wearing the headphones, you can use thought gestures. If not, you can use touch and the digital crown - at least for a few years.

In order for us to trust Apple with our thoughts (even if they aren’t *actually* thoughts), Apple must sell itself on privacy - which is what they are doing. They’re already getting everyone used to basic biometrics such as Touch ID - and they’ve protected it with things like the Secure Enclave. That is a massive engineering effort to put people’s minds at ease about this stuff, and the Secure Enclave could easily make an appearance in the mind-reading headphones of the future.

At first, of course, the Watch would be just as it is advertised - but in a year, perhaps, it might be paired with special headphones. Then perhaps special earbuds. Then thought gestures will come to your iPhone, and your Mac, and the mouse will go the way of the floppy disk. The technology will improve over time: the gestures will get smoother and more precise, and the sensors better, smaller, and less visible. It won’t matter how big or small your phone or tablet is - you will hardly have to touch it. This won’t happen overnight or indeed next year - but just imagine it!

Apple’s Game

This week Apple introduced app extensions into both iOS and OS X. When a third-party app wants a particular kind of service (such as photo editing), iOS presents the user with a list of other apps on their device that offer the desired extension. Once the user picks one, the extension appears right within the third-party app so the user can use it without switching out of their current flow. This allows apps to interoperate in a controlled manner without sacrificing security, privacy, or convenience for the user.

One of the interesting things about this is how the underlying mechanism actually works - the extensions themselves are entirely self-contained apps in their own right. They are walled off from all other apps - including, for the most part, their own parent app - and are given a limited view of the outside world that mostly includes only the data necessary for the kind of task the extension was designed to fulfill. As far as users are concerned, their flow is relatively uninterrupted and they’re able to do what they want when they want without iOS standing too much in the way. It should just work.
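
To make that a bit more concrete, here’s a rough sketch of the principal class of a photo editing extension. PHContentEditingController is the real protocol these extensions adopt, but the class name and editing logic are placeholders of my own:

```swift
import UIKit
import Photos
import PhotosUI

// A sketch of a photo editing extension's principal view controller.
// The host app never talks to this class directly - iOS launches the
// extension in its own process and hands it only the photo being edited.
class EditViewController: UIViewController, PHContentEditingController {

    private var input: PHContentEditingInput?

    // Could we resume from adjustment data saved by a previous edit?
    func canHandle(_ adjustmentData: PHAdjustmentData) -> Bool {
        return false
    }

    // iOS hands us a read-only view of the asset being edited.
    func startContentEditing(with contentEditingInput: PHContentEditingInput,
                             placeholderImage: UIImage) {
        input = contentEditingInput
    }

    // Hand the result back; the host sees only the finished output.
    func finishContentEditing(completionHandler: @escaping (PHContentEditingOutput?) -> Void) {
        guard let input = input else { completionHandler(nil); return }
        let output = PHContentEditingOutput(contentEditingInput: input)
        // A real extension writes the edited image to output.renderedContentURL
        // and sets output.adjustmentData before completing.
        completionHandler(output)
    }

    var shouldShowCancelConfirmation: Bool { return false }

    func cancelContentEditing() { }
}
```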

Apple also announced that AirPlay will now support direct peer-to-peer connections. This means that latency will be much lower and connections should be more reliable. It also means that your iPhone (for example) will not need to be on the same wifi network as your AppleTV in order to use AirPlay. Lifting that constraint won’t matter much for typical home use, but it’s a huge deal in schools and businesses. It also means it’ll be far easier to play a video from your phone on your friend’s AppleTV without needing to ask to join their wifi network - and, indeed, not even needing to *know* you’d need to join a wifi network in the first place! It’ll just work.

Game controllers for iOS have been around a while now, but they haven’t taken off in a big way yet. The specification has always included both standalone wireless controllers and “shell” controllers that wrap around an iPhone or iPod. While a number of games support the game controller input API, adoption has been pretty spotty, and as a result there aren’t many manufacturers of the controllers out there yet. Even with those downsides, Apple announced this week that their game controller API will now transparently forward controller events from one device to another. What this means is that if you already have a shell-style controller for your iPhone, you can now use your iPhone as a dedicated standalone controller to play games that are running on your iPad or Mac - and the game itself doesn’t need to know any different. It just works.
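
For context, here’s roughly what reading controller input looks like with the GameController framework today. The function name is made up, but the framework calls are real - and the point is that a game written this way shouldn’t care where the events physically come from:

```swift
import Foundation
import GameController

// A rough sketch of controller discovery and input handling. The new
// device-to-device forwarding is invisible at this level, which is the point.
func startWatchingForControllers() {
    NotificationCenter.default.addObserver(
        forName: .GCControllerDidConnect, object: nil, queue: .main
    ) { note in
        guard let controller = note.object as? GCController,
              let gamepad = controller.extendedGamepad else { return }

        // The game sees abstract buttons and sticks; it never needs to know
        // whether the events came from local hardware or a forwarded iPhone.
        gamepad.valueChangedHandler = { pad, element in
            if element === pad.buttonA, pad.buttonA.isPressed {
                // jump, fire, confirm, etc.
            }
            let x = pad.leftThumbstick.xAxis.value   // -1.0 ... 1.0
            let y = pad.leftThumbstick.yAxis.value
            _ = (x, y) // feed into your movement code
        }
    }

    // Start scanning for nearby wireless controllers.
    GCController.startWirelessControllerDiscovery(completionHandler: nil)
}
```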

Metal is one more thing Apple introduced us to this week. Metal is an extremely thin layer of software that interfaces between apps and the underlying GPU. It does the same job that OpenGL ES has been doing for us for years, but Metal is optimized for Apple’s own hardware and software needs while also cutting out a bunch of legacy cruft in the process. This means games that use Metal spend less CPU time communicating with the GPU, which leaves more CPU time for running the actual game itself.
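
For the curious, the per-frame heart of a Metal app is tiny. This is a minimal sketch using real Metal calls, with all of the actual encoding work elided:

```swift
import Metal

// A minimal sketch of the per-frame Metal flow; the draw-call encoding
// (pipelines, buffers, encoders) is omitted.
func encodeOneFrame() {
    guard let device = MTLCreateSystemDefaultDevice(),          // the GPU
          let commandQueue = device.makeCommandQueue(),         // in practice, created once and reused
          let commandBuffer = commandQueue.makeCommandBuffer()  // one (or more) per frame
    else {
        return // Metal isn't available (older hardware, or the simulator at the time)
    }

    // ... create a render or compute command encoder here and encode work ...

    commandBuffer.commit()             // hand the encoded work to the GPU
    commandBuffer.waitUntilCompleted() // a real game wouldn't block like this
}
```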

Each one of these things is great by itself, but I believe that there’s an even bigger plan hidden in plain sight - and it’ll all land this fall.

Apple now has everything they need to disrupt the game console industry in a way that none of the incumbents will see coming. I predict that we’ll see a new AppleTV update (and hardware) this fall along with a new app extension type for AirPlay. AirPlay will become about more than just streaming video to your AppleTV - instead, that’ll simply be one of the things you can do with it. Apps (mostly games, I suspect) will be able to bundle an AirPlay extension inside - just like apps can now bundle photo editing or sharing extensions as of iOS 8. The key difference is where the AirPlay extension actually executes - instead of running on your device from within a host app, the AirPlay extension will be automatically uploaded to whatever AppleTV you are currently AirPlaying with and will run natively on the AppleTV itself. This means no video streaming lag and minimal controller lag. Your iPhone would then turn into a generic game controller with onscreen controls or, if you have a physical shell controller attached to your iPhone, it activates that instead. The game controller inputs are then relayed to the AppleTV - and thus to the AirPlay extension - using the new game controller forwarding feature.

The AirPlay extension API would probably require that the extension communicate with the outside world primarily by way of the existing game controller API. This means that any game adopting the AirPlay extension mechanism would also be required to support physical external game controllers through Apple’s API, which should increase controller adoption significantly.

Since the game controller specification also defines standalone wireless controllers, you could cut out the iPhone middleman entirely, too. Any app submitted to the App Store that includes an AirPlay extension could also automatically show up on AppleTV in an AppleTV Store. Users without an iPhone, but with a compatible wireless controller, could purchase games directly from that store and play them on their AppleTV using their controller without owning any other iOS device - and it’d just work.

If that customer buys an iPhone or iPad later, all the games they purchased on their AppleTV would also work on their shiny new iPhone or iPad because Apple would require all games to support normal touch-only controls - just as they have always done even for apps that support the existing game controller APIs. This would be a huge win for Apple, of course, but also for customers - buy the game once and it’s always yours on any device no matter how you want to experience it. There would be no AppleTV-exclusive games in the AppleTV Store - only normal touch-enabled iOS games that happen to also bundle an AirPlay extension. No fragmentation.

Because AirPlay now supports peer-to-peer connections, if you bring your iPhone or iPad to a friend’s house (or anywhere with an AppleTV, such as a hotel room or a school), you have all of your games in your pocket and can play them on the nearby screen if you want. You can play without needing to purchase the game on that particular AppleTV, without needing to sign in with your iCloud account to access your purchases, without needing to get their wifi password - and indeed without there even needing to *be* a wifi network to join in the first place. All without any hassle. When you go home, you take the game and any earned progress along with you in your pocket.

Thanks to Metal, there’s even a chance Apple could enable all of this for the current generation of AppleTV hardware (while also introducing a beefier A7-powered AppleTV). Apple talked a lot about how Metal increases the amount of CPU time available for games, and that same tradeoff works in reverse for older hardware: the CPU might be slower, but since the overhead of talking to the GPU is now so much lower, older hardware running Metal could reach a performance level similar to an OpenGL ES app running on current hardware. And boom - with a single software update, Apple will have dropped an entire gaming console ecosystem into millions of living rooms without anyone having to buy anything new.

Provisioning

A summary-ish of things related to code signing, the developer portal, and provisioning profiles.

I believe this is mostly accurate - but I admit that I have not dug deep into the internals because this rough understanding has served me well enough so far. Don’t take this as some kind of technical gospel to swear by. Instead, this is meant to help anyone who might be really confused start to find meaning in what can seem like chaos and pointless complexity. Putting this together helped me clarify the reasons for the different parts, roughly how they fit together, and why they’re necessary. If you want to know precise technical details about this stuff, I’m not your guy - I just want to build apps and not spend all my time fighting the technology. I hope this helps someone find peace without adding too much confusion.

* An “App ID” represents a class of capabilities (entitlements) for things that require an “account” or “identity” or “permission” of sorts on Apple’s servers and/or on the device. It is used when generating a provisioning profile to configure which capabilities iOS may grant to an app.

* A wildcard App ID is like an “abstract class” which allows the same “identity” for Apple’s services (like Game Center, Push, IAP, iCloud, etc.) to be shared by all apps that implement the “abstract class”. Implementation is done simply by using a bundle ID for your app that matches the App ID’s pattern. (This is probably not a perfect understanding, but that seems like the intent. It can behave a bit like a superclass for sets of entitlements and data containers. While I’ve never used this, I believe you can even share a single iCloud storage container across multiple apps this way.)

* The entitlements *file* that’s part of an app’s bundle acts as *configuration* for certain entitlements - or even opts out of entitlements that the specific app doesn’t want but which may have been enabled by a provisioning profile whose App ID matches the app’s bundle identifier. It does not itself grant you anything just by virtue of existing! (Otherwise that wouldn’t be very secure and Apple wouldn’t have much control!)

* A certificate uniquely identifies a developer (individual or company). It is used to check that a given code signature is valid.

* A device, as far as we’re concerned here, is just a unique identifier that represents a single individual piece of hardware.

* Adding an app to iTunes Connect is how you tell the store itself about a specific app; it gives the *store* permission to communicate with your app for IAP (as opposed to the entitlements specified by the matching App ID, which merely give your app permission to talk *to* the store using the built-in frameworks but do not promise your app will get a useful reply from the store). To use and test IAP, you must add your app to iTunes Connect using your app’s bundle ID and add IAP products (it is not necessary to submit the app or products for review) - or else the store will ignore your app when it tries to use its IAP entitlements to talk to the store. Think of adding things to iTunes Connect as granting the App Store specific permissions that can affect the behavior of your app (by offering products for sale, etc.).

* Provisioning profiles are where the vast majority of the confusion is. They represent a union of almost everything mentioned so far and act as a single solution that addresses several issues. The important thing to remember is that they exist to grant an iOS device permission to grant your app specific permissions. (See the decoded profile just after this list for what one actually contains.)
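
If you want to see all of those pieces living together in one place, a provisioning profile is just a signed property list, and you can decode one in Terminal on a Mac (the file path below is illustrative):

```sh
# A .mobileprovision file is a CMS-signed plist; this strips the signature
# and prints the plist so you can read it.
security cms -D -i ~/Downloads/MyApp_Development.mobileprovision

# Interesting keys in the output:
#   Entitlements            - the App ID's entitlements iOS may grant the app
#   DeveloperCertificates   - the certificates allowed to have signed the app
#   ProvisionedDevices      - the device IDs allowed to run it (development/ad hoc)
#   ExpirationDate          - profiles do expire
```

Everything iOS needs to make its launch-time decision is right there in that one signed file.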

Code signing requires specifying a certificate. That certificate is usually your personal development certificate or your main distribution certificate, depending on whether you’re building for development or getting ready to submit to the store. The code signing process searches your keychain for a private key that matches the specified certificate and uses that private key to generate the signature. (Both the public and private keys are created and stored in your keychain when you initiate the CSR in Keychain Access while first setting up your certificates in the developer portal, but only the public key is signed by Apple and turned into your certificate.) Signing marks the binary in such a way that it can be shown that it has not been tampered with since it was signed and that it was signed specifically by you. Any tampering will invalidate the signature - but that only matters if something actually checks and requires the signature to be valid in the first place! Without access to the public key necessary to validate the signature, your binary’s integrity cannot be determined. Having a signed binary alone doesn’t confer any special privileges on it.
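
If you want to poke at this yourself, the codesign tool exposes most of it from Terminal (the identity name and bundle path below are illustrative):

```sh
# Sign an app bundle with a development identity from your keychain
codesign --force --sign "iPhone Developer: Jane Appleseed (ABCDE12345)" MyApp.app

# Verify that the signature is intact and nothing has been tampered with
codesign --verify --verbose=4 MyApp.app

# Show who signed it and which entitlements were baked into the signature
codesign -d -vv --entitlements :- MyApp.app
```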

So how does iOS know that your app is really yours and that you are approved by Apple and that the app has not been tampered with? That’s one of the things that a provisioning profile solves by including exactly which certificates are allowed to be used to sign apps matching a specific App ID. When the provisioning profile is created, the selected certificates are encoded right within the provisioning profile itself so that iOS can use the public keys from those certificates to ensure an app’s binary was signed by one of them before deciding to grant that binary access to anything. This is why provisioning profiles need to have certificates added to them when they are configured.

Another thing that provisioning profiles do is restrict the set of devices allowed to run a given app. This is, in a way, like a second “signature” for the hardware itself. Not only does the binary need to be signed by an approved certificate included in the profile, but the device itself must have a specific “signature” in the form of its hardware ID - and that ID must also be listed in the provisioning profile before the app is allowed to run.

Provisioning profiles also grant an app access to certain capabilities that Apple wants to control - these are the entitlements that are specified for a given App ID in the portal. The entitlements themselves are encoded within the provisioning profile when it is generated so that iOS knows which things to allow or deny when the app is launched.

Also encoded within the provisioning profile is the distinction between Distribution and Development. This distinction determines which backends some of Apple’s cloud services will connect to if the app has permission to use them (such as sandbox mode for the store or Game Center) and which certificates are used to validate push notifications in either context (which enables you to have separate development and production push notification services and behaviors if you want).

The reason all of this works and is secure is that Apple generates the provisioning profiles in the portal and then signs them with their own private keys before delivering them to you. Signing provisioning profiles is something only Apple can do. The file you download therefore cannot be tampered with without rendering it invalid. An invalid provisioning profile will not be accepted by iOS, and thus Apple can control exactly what can and cannot be provisioned by a developer simply by restricting the contents of signed provisioning profiles to things the developer portal gives you permission to configure in the first place - even though provisioning profiles could support any number of other awesome options you can’t use without jailbreaking. This is why you have to register testing devices in the portal, add your certificates to the portal, etc. - only things in the portal (the numbers of which can be controlled and limited arbitrarily by Apple) can be included in a generated and properly signed provisioning profile. The portal is where Apple’s provisioning policies and limitations are actually enforced.

In Xcode, the setting for the provisioning profile is listed under Code Signing, but I believe it is not actually referenced in the app bundle, nor is it specifically required even when code signing! This may seem surprising, but that’s because the provisioning profile actually has nothing to do with building your app - it’s all about permission when *running* the app on a device, and that permission is specified from the point of view of the *device*. The device must have a relevant provisioning profile installed for your app and for the device itself in order to run your app’s code, but you do not need one to compile or sign that code! (You only need a public/private key pair to sign your code.) I tested this by simply deleting the provisioning profile from the Xcode project’s settings, then cleaning, building, and running the project. There was no complaint at all - the app still built, was signed, and ran on the device just fine. Deleting the provisioning profile from the device itself caused the app to stop launching, of course. I think all the provisioning profile setting in Xcode does is ensure that build & run automatically installs the one you wanted on the device and, if there are any conflicts among keys found in your keychain, it may help disambiguate which key pair to use when signing (and maybe aid the “fix issues” feature).

On the device, when attempting to launch an app, iOS will check all installed provisioning profiles and match the app’s bundle ID against the App ID of each installed provisioning profile. It will then use the most-specific one it finds (I think) - but not necessarily the one specified exactly in Xcode’s settings! This can cause problems from time to time if things get out of sync. Basically, the provisioning profile and the app itself are disconnected entirely from each other. You can have any number of apps that use the same provisioning profile (in the case of a wildcard, for example), or you can have any number of provisioning profiles with different combinations of device IDs. It does not need to be one-to-one! This also means you do not need to rebuild your app if you add a tester’s device to your provisioning profile or anything like that. Just get them to install the updated profile and things should be fine.

If iOS finds a relevant provisioning profile and all of the restrictions check out, code signing is validated, device IDs are validated, and the provisioning profile was signed by Apple, then the app is allowed to run. Otherwise you get a provisioning error of some kind and are left with a bunch of combinations of things to check for when attempting to correct it. Fun!

The main takeaway, I think, is that provisioning profiles grant the device permission to grant a set of other permissions to a particular app. Without a profile signed by Apple (or a jailbreak), iOS won’t grant your app any permissions at all and therefore it won’t launch.

(Check out https://github.com/chockenberry/Provisioning for a handy QuickLook plugin that can inspect provisioning profiles.)

Contesting Consumable Purchases

Consumable in-app purchases are occasionally abused by developers who prey on some people’s weaknesses and addictions. I believe Apple has some responsibility to help protect those of its customers who need intervention from themselves, while also discouraging developers from abusing customers who may have an addiction problem. Failure to act on this might invite government regulation of such purchases, and that’s probably a can of worms very few would like to open unless it is proven unavoidable.

My suggestion for a way to handle this without being too intrusive is to implement a mechanism to contest consumable purchases some time after they have been made. This idea would *only* apply to consumable purchases (typically these are things like buying coins or one-time effects that cannot be restored later or shared with other devices) and it would be handled on a per-app basis. This is by no means a perfect plan and there’s plenty of room for tweaks, but I believe that an approach like this would be a fair place to start.

The moment a user completes an initial purchase of a consumable item in an app, an internal timer associated with that app is started on their device. Nothing unusual happens from the point of view of the user - things proceed as normal, and they can continue to use the app and the items they purchased, and purchase additional consumables in that app. However, once the timer is started, all of their purchases remain in an unresolved state until the end of the next phase. The app considers the purchases completed immediately as usual, so the user can keep using the app as they intended, but for billing purposes Apple does not yet consider them complete. This effectively means the user got what they purchased in the app, but the developer has not yet been paid for it. (Note, again, this *only* applies to consumables!)

Eventually the internal timer will expire (perhaps after 18 hours). A new timer is then started which tracks the time remaining until those purchases are considered “final.” This second phase is the “reconciliation” phase and also has an 18-hour duration. While in the reconciliation phase, those past purchases may now be contested by the user.

If the user opens the app during the reconciliation phase, they are shown a store prompt which contains the sum total dollar amount of all consumable in-app purchases made since that initial purchase roughly 18 hours earlier. They are then asked to confirm that they intended to spend that total amount. If the user confirms, the entire batch of consumable purchases is considered “final” and is counted such that payment will be made to the developer and the user will be billed as normal. If the purchases are rejected, the developer is not paid and the user is not billed for them. To protect the developer from abuse, however, in the event of a rejection, in-app purchases are disabled for that app for this user for some amount of time afterward - a period that increases each time the user refuses payment. Eventually a threshold might be reached where a given user is outright banned from all in-app purchases for that app pending intervention by Apple (perhaps a call to support, requiring some soul searching on the part of the user, could re-enable them).

If the user deletes the app while in the reconciliation phase, this is considered an implicit rejection. The user is not billed, the developer is not paid, and in-app purchases are disabled for the user for the same time period as if they had tapped the reject button on the prompt - which prevents an uninstall-reinstall cycle from cheating the developer.

If the user does not open the app again before the reconciliation phase expires, then the purchases are considered final, the developer gets paid, and the user gets billed as usual.

After the reconciliation phase is completed, any further purchase of a consumable product in the app simply starts the whole process over again.
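
Sketched as a little state machine (all of the type names, durations, and penalty math here are purely illustrative - none of this is a real StoreKit API), the lifecycle of a batch of consumables might look like this:

```swift
import Foundation

// The proposed per-app consumable purchase lifecycle, as a toy state machine.
enum ConsumableBatchState {
    case pending(started: Date)        // purchases delivered, billing deferred
    case reconciling(started: Date)    // user may confirm or contest the batch
    case finalized                     // developer gets paid, user gets billed
    case rejected(penaltyUntil: Date)  // batch refused; IAP temporarily disabled
}

struct ConsumableBatch {
    static let phaseLength: TimeInterval = 18 * 60 * 60   // "perhaps 18 hours"

    var state: ConsumableBatchState = .pending(started: Date())
    var total: Decimal = 0   // running dollar total shown at reconciliation

    // Called periodically (or on app launch) to move between phases.
    mutating func advance(now: Date = Date()) {
        switch state {
        case .pending(let started) where now.timeIntervalSince(started) >= ConsumableBatch.phaseLength:
            state = .reconciling(started: now)
        case .reconciling(let started) where now.timeIntervalSince(started) >= ConsumableBatch.phaseLength:
            state = .finalized   // never contested, so the purchases become final
        default:
            break
        }
    }

    // User confirmed the total on the reconciliation prompt.
    mutating func userConfirmed() {
        if case .reconciling = state { state = .finalized }
    }

    // User contested the total (or deleted the app, an implicit rejection).
    mutating func userRejected(priorRejections: Int, now: Date = Date()) {
        guard case .reconciling = state else { return }
        // The penalty grows with each rejection to protect developers from abuse.
        let penalty = TimeInterval(priorRejections + 1) * 24 * 60 * 60
        state = .rejected(penaltyUntil: now.addingTimeInterval(penalty))
    }
}
```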

Apple could also track apps and/or developers that have high rates of purchase rejections relative to approved purchases and take further action based on that. They could even expose a rating in the App Store for each app based on this and other metrics to better inform users of what they might be getting into.

The intent here is to provide a way for users to protect themselves, while also putting some pressure on developers who are prone to abusing users with addictive personalities, without having to set specific limits or significantly alter the experience for those who willingly spend a lot of money on consumable items.