I’m looking forward to our usual pre-WWDC discussion here on the blog. This post continues a tradition I started when I founded Mosen Consulting in 2013, where just ahead of the big iOS reveal at Apple’s Worldwide Developers Conference, I write down my wish list for the next iOS and open up the discussion to read what you’re hoping for.
Apple’s Worldwide Developers Conference, WWDC, kicks off with the keynote at 10 AM US Pacific time on Monday 13 June. That’s 1 PM Eastern, 6 PM in the UK, and bright and early in New Zealand at 5 AM on 14 June. If Apple follows its usual tradition, the first beta of iOS 10 will be in the hands of developers immediately after the keynote. It is likely to be available to those in the wider Apple beta programme sometime later.
This post is written from my perspective as a blind person who uses VoiceOver, the screen reader that has made the iPhone a powerful productivity tool even when you can’t see the screen. Some of the features are blindness-specific, while some are not.
Here are 10 changes I’d like to see in iOS 10 that, for me, would make a great user experience even better.
1. A Significant Braille Overhaul
As a frequent Braille user and a staunch proponent, this one makes my list for the fourth year in a row. Braille is a critical tool for independence, productivity, study and employment. Of the mainstream mobile devices with accessibility options, Apple’s Braille support is the best, but when you compare it with what’s currently available on Android, that really isn’t saying much. Input for those who prefer to use contracted Braille is still unorthodox. With iPads often being used in the classroom, it’s important that those of us who feel passionately about Braille don’t let up on this until it’s fixed. Many tech-savvy adults can learn to work around the idiosyncrasies if we must. But we should not be teaching kids bad Braille habits as they are learning.
It should be possible to route the cursor to somewhere in the middle of a word, insert a correction, and immediately read the result without any convoluted work-arounds.
With an operating system and screen reader as mature as iOS and VoiceOver now are, it’s time to separate Braille from speech, so that Braille is more than just a tactile mirror of what the speech says, or would say if it weren’t muted. Imagine the productivity benefits of being able to separate the Braille cursor from the VoiceOver speech cursor. This could be a phenomenal feature on iPad with its split-screen capability. You could be referring to something on your Braille display on one side of the screen, while working with speech in another app on the other side.
Although Braille support in OS X, the operating system powering Macs, is also fairly rudimentary, one Braille feature it has that iOS does not is the ability to change the function of the special keys on your Braille display. In my dream of a significant Braille overhaul, this would also be in iOS 10, so I can reverse the panning buttons as is my preference.
Apple has made some improvements to Braille over the years, my favourite of which arrived in iOS 8 with the ability to turn pages automatically when reading eBooks. I use this feature a lot, and it’s a joy. But serious deficits remain, and I hope they’ll be addressed this year. The abundance of new Braille note taking devices at CSUN this year shows that Apple has failed to kill off the note taker. They have their own poor Braille implementation to blame for that.
2. A Smarter Siri
This year marks the fifth birthday of Siri on iOS. Where I live in New Zealand, at least, your fifth birthday is usually when you start school. That’s appropriate, because Siri has much to learn.
I bought an iPhone 4S in 2011, when my original intention was to wait for an iPhone 5, specifically because I was excited about Siri. Those of us who go back that far in the iOS world may remember that Siri began life as a third-party app, available like any other from the App Store. Apple then bought it and turned it into what one of Siri’s original creators described as a “glorified chat bot”.
When Siri was unveiled in 2011, Apple gave it a beta designation, and for a beta, it was pretty good. We were wowed not so much by what it did then as by the promise of what it might do in future. In my view, it has yet to deliver. Clearly, software developers had taken the time to program in some quirky responses that gave it some personality. Even five years later, new responses are being added, eventually discovered, and shared virally on social media. The “what is 0 divided by 0” question is probably the biggest in recent times.
That’s all fun, and we all need fun in our lives, but it’s kind of getting old when Google’s assistant continues to widen the intelligence gap, and Amazon Echo is showing us what’s possible when a personal assistant is truly smart.
Siri is lagging behind in many critical areas. Studies have shown that speech recognition accuracy is quite high with both assistants, but generally better with Google’s. And if you’ve asked questions of both personal assistants, you’ll be familiar with how often Siri responds with, “I’ve found something on the web, take a look”. Google, on the other hand, is far more likely to give you a precise answer to a question using its text-to-speech engine. Not only is this convenient for VoiceOver users, who don’t have to flick around the screen looking for the answer from the web, but it’s also of benefit to drivers who want information and don’t want to take their eyes off the road.
I use Siri frequently to open apps and dictate messages, because I have no choice under Apple’s locked-down model, where you can’t at present change default applications. But I’ve become so disappointed with the lacklustre response Siri gives to questions that I find myself always using Google for such things now.
My view is that just as touch has dominated the last nine years, voice will dominate the next decade as it becomes a more viable method for a truly efficient, intelligent means of accessing information and asking complex questions. Technologies like Amazon Echo are demonstrating what’s possible. Forget 0 divided by 0. Being able to order a specific item of food for delivery, or asking for another box of those protein bars I bought last month, saves time and is truly useful.
Siri’s creators have left Apple, disenchanted with the narrow implementation of their technology, and are developing their own personal assistant, Viv. It sounds like everything Siri should have been by now, as this Washington Post article explains.
There are a couple of measures critical to making Siri competitive again.
First, Siri has to be liberated from Apple’s exclusive control through an application programming interface (API). Android already has an API for its voice search. That’s one of the things that makes it more useful, and seem more intelligent, to the end user. I want to be able to tell Siri to play a specific radio station on TuneIn, to open a particular book I have in Voice Dream, to play a piece of music from my Synology NAS. If I have an app for my local taxi company installed, I want to be able to tell Siri to book a taxi to pick me up at home at 8 AM tomorrow. With a good API, the possibilities are huge.
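To make the idea concrete, here is a rough sketch of what such an API might look like. To be clear, nothing here exists in iOS today; every type and method name below is my own invention, purely to illustrate how an app might register phrases with Siri and be handed the user’s request.

```swift
import Foundation

// Hypothetical API: none of these types exist in iOS. This is only a
// sketch of how a third-party app might hook into Siri.
protocol SiriIntentHandler {
    // Phrases the app registers with the system, e.g. "book a taxi".
    var supportedPhrases: [String] { get }
    // Called when Siri matches a user request to one of those phrases.
    func handle(request: String, parameters: [String: String])
}

struct TaxiBookingHandler: SiriIntentHandler {
    let supportedPhrases = ["book a taxi", "order a cab"]

    func handle(request: String, parameters: [String: String]) {
        // Siri might pass, say, parameters["pickupTime"] == "8 AM"
        // and parameters["location"] == "home"; the app then places
        // the booking and hands a spoken confirmation back to Siri.
    }
}
```

The design question such an API raises, and the one I suspect occupies Apple, is how much of the user’s request and context a third-party handler gets to see.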
But an API is only part of what’s required to make Siri competitive. I want to be truly stunned by Siri, with the kind of wow factor that Viv is promising to deliver. I want to be able to say to Siri, “show me the cheapest flights from Boston to Los Angeles between noon and 5 PM on 16 August”. I want to have the list read to me, and to be able to say, “book that one with my business credit card”. I want to be able to have my intelligent assistant do things that it’s hard for me to do on the web, like “show me Indian restaurants within 5 KM of my office that aren’t too noisy and offer a kids’ menu”.
This is a complex subject. Apple has some of the best minds in technology working for it, and I think some of the questions that may be holding Siri back are ethical ones. I take comfort from the great care Apple appears to take with my personal data. I feel I have a clear understanding of how my data is used, and who has access to it. Google is doing some pretty slick stuff, as will Viv in future, but at what cost to privacy? No doubt Apple is thinking about how it can extend Siri’s capabilities to third-parties, while at least giving consumers a clear choice, and a clear understanding of the implications of doing so.
Apple has been hiring, and acquiring technologies, in this area. Perhaps iOS 10 is where we’ll see Apple once again leap ahead of the pack, or at least get back up to speed with it.
3. Good Vibrations
There’s an old song called “Little Things Mean a Lot”. This one is a little thing that, in my experience as a trainer, would make a big difference. iDevices give no non-visual indication at all that they are starting up. I’d like to see iDevices that are capable of vibrating emit a vibration the moment the device starts to power up.
I have seen many people have immense difficulty learning how long to hold down the power button on their iDevice to get it to start up.
If you have the iPhone connected to a power source, you’ll get a vibration when the phone has finished booting into iOS, but that timing issue with the power button is a big deal for a lot of people.
Vibrating at power up would be handy when you’re troubleshooting and you can’t see the screen, since it can sometimes be hard to know whether your device is completely dead or just not fully booting into the operating system.
4. Notification Interrupt Toggle
I don’t own any specialized blindness reading device. I do now own a Kindle, but I just find the ability to have all my content on the device that’s always in my pocket too compelling to use anything else on an ongoing basis. The only downside I have found is the lack of control I have of notifications. It makes for a distracting, disjointed experience to be reading a book, only to be interrupted on a regular basis with notifications. When a notification comes in, VoiceOver stops what it’s reading, speaks at least part of the notification, and then if you’re lucky, resumes reading. You can get around this to some extent by using the Speak Screen feature. This is an option separate from VoiceOver, which will read continuously in iBooks and Kindle. It’s not a perfect solution though, because while notifications no longer interrupt speech, they will speak at the same time as the book you’re reading unless you toggle VoiceOver’s speech off. Speak Screen may also not always work well for longer articles in some third-party apps.
You can of course put your device in Do Not Disturb mode, but then you may not be alerted to notifications you need to know about.
I’d like a setting, available on the rotor and in VoiceOver settings, to be able to specify whether notifications interrupt what’s being read. This would be particularly useful to those of us who make extensive use of custom tones. All the important people in my life have unique ring tones and text alerts. I have a special sound assigned to VIP email. So all this means that when a notification comes in that I know I may want to look at right away, the tone will tell me I should stop reading my book and check my notification. When I hear other alert sounds, I know that they can keep until I want to take a break from my book. It sure would beat being interrupted all the time.
5. Do Not Disturb Improvements
Speaking of notifications, I’d like Do Not Disturb to offer me more flexibility. Currently, I can choose to have Do Not Disturb enabled, but to allow phone calls from my favourites. I’d like also to allow texts and email from my favourites, as well as to be notified of VIP mail. All these features would be toggles.
I’d also like to be able to set up Do Not Disturb profiles that could be enabled at different times. For example, at night when I sleep, Do Not Disturb allows calls from family members in the case of an emergency, thanks to them being in my favourites. But there are other occasions, such as when I’m in a meeting, doing a radio show or recording audio, where I don’t want to be interrupted by anyone. I’d like to be able to tell Siri, “I’m in a meeting”, and have the appropriate profile loaded.
6. Keyboard Access for Everyone
In its attempt to convince us that the iPad really can be a viable PC replacement, Apple added many more keyboard shortcuts to iOS 9. Unfortunately, some of these shortcuts are only available when using an iPad. This seems like an arbitrary and punitive decision on Apple’s part. Many blind people own iPhones and don’t see a need to own an iPad, since they don’t perceive a benefit from the larger screen. However, they’d benefit immensely from more keyboard shortcuts. At least when VoiceOver is running, all keyboard shortcuts should be available on all devices. And I’d like to see Apple pack their apps with useful keyboard shortcuts, clearly documenting what they are. When you look at the absolute joy Twitterrific now is to use, it shows you what’s possible with thoughtful and methodical keyboard support.
While we’re talking about the keyboard, how cool would it be if you could assign keyboard commands to activate specific buttons on the screen.
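The plumbing for something like this already exists for developers. UIKit’s real UIKeyCommand API lets an app declare its own shortcuts; my wish is for the system to let users create these mappings themselves. The class and action names in this sketch are my own illustration, not from any Apple app.

```swift
import UIKit

// A minimal sketch using UIKit's UIKeyCommand API, which has been
// available to app developers since iOS 7. The controller and the
// action it triggers are hypothetical examples.
class ReaderViewController: UIViewController {

    // VoiceOver's keyboard help, and the iPad's Command-key overlay,
    // can surface the discoverabilityTitle to the user.
    override var keyCommands: [UIKeyCommand]? {
        return [
            UIKeyCommand(input: "n",
                         modifierFlags: .command,
                         action: #selector(activateNextButton),
                         discoverabilityTitle: "Next Item")
        ]
    }

    @objc func activateNextButton() {
        // Activate whichever on-screen button is mapped to Command-N.
    }
}
```

A user-facing version of this, perhaps in VoiceOver settings, would simply let us bind a key combination to any element VoiceOver can focus.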
7. File Management
Last year, I bought Bonnie a new iPhone 6 Plus, because it supported more bands and would work better when she visited her family in the US. This left her old iPhone 5s free, which I offered to my Android-using son. He agreed to give it a shot, since he’s the only one in our family who doesn’t use an iThing.
He gave it back to me within an hour, in disgust. The deal-breaker for him, right off, was his inability to just work with files. He wanted to be able to create a folder, copy music into it, and play that music with the app of his choice. I explained to him the benefits of sandboxing and working through iTunes. He laughed at me and said he just wanted to copy his data and use his phone the way he wanted to.
A few months later, Tim Cook told us that the iPad Pro was a PC replacement. To me, it will never be, unless it has a proper file manager so I can create folders in the structure I want, copy files to those folders without iTunes, browse those folders and open files from the file manager.
We’re some of the way there with Apple’s nifty iCloud Drive app, which provides a user interface for browsing files and folders independent of any one app.
8. Text-to-Speech API
Speech is a subjective thing, so the more choice of text-to-speech engine people have, the better. Android allows you to install system-wide voices that can be used by any application. It would be good to see Apple offer this feature as well. Imagine being able to use all those cool voices you bought for Voice Dream Reader with VoiceOver itself.
Not only would such an API increase the range of voices available to VoiceOver, but it could also save precious storage space. Many blind people are on a tight budget, can’t afford high-capacity devices, and need every bit of storage they can get. Yet people find themselves in the crazy position of having multiple copies of several voices because each app must use its own copy of the voice. Some of those voices can take up a lot of space.
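Half of this mechanism already exists: any app can enumerate and use the system’s built-in voices through the real AVSpeechSynthesis API, as the sketch below shows. The wish is for third-party voices, once purchased, to plug into this same shared pool so VoiceOver and every app could use a single installed copy.

```swift
import AVFoundation

// Today's real API: list the voices installed on the device.
// Each voice lives in one place and is shared by every app that
// speaks through AVSpeechSynthesizer.
for voice in AVSpeechSynthesisVoice.speechVoices() {
    print(voice.language, voice.name)
}

// Speak a phrase with a chosen system voice.
let utterance = AVSpeechUtterance(string: "Kia ora from New Zealand")
utterance.voice = AVSpeechSynthesisVoice(language: "en-GB")
let synthesizer = AVSpeechSynthesizer()
synthesizer.speak(utterance)
```

If purchased third-party voices appeared in that same speechVoices() list, the duplication problem would disappear, and VoiceOver itself could offer them in its voice picker.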
9. Continuous Reading Improvements
There are a number of enhancements to the iOS continuous reading function, the two-finger flick-down, that would further improve it.
I like to read my Twitter timeline in proper chronological order. Reading from the most recent tweet backwards doesn’t give you the sense of a story unfolding. When I read my Twitter timeline this way, I need to do it manually, flicking through each tweet, because there’s no way of reading continuously up the screen. This feature would make a real difference in a number of apps that put the most recent item at the top of the screen.
The most significant source of energy consumption on the iPhone is the screen. We can improve battery life by turning down the brightness; mine is usually set at 0. Contrary to popular belief, having Screen Curtain on has no positive impact on battery life. Since we don’t need the screen when continuously reading, it would be wonderful if technology could be developed allowing us to lock the screen while continuously reading. This would also eliminate the hassle of accidentally tapping the screen when reading continuously, causing the reading to stop and our place to be lost.
Finally, I’d like to be able to flick up, down, left and right to navigate by elements such as sentence and paragraph, without interrupting reading. You could still use a two-finger tap, or even perhaps the magic tap, to stop reading.
10. Pronunciation Dictionary
This is another one that makes the list for the fourth year running, and it’s such a fundamental screen reading feature that I really don’t know why it isn’t there already. It would be nice to have control over the way my speech pronounces New Zealand place names, and the names of friends that the TTS nicely mangles. Ideally, you could choose to apply the rule to all voices, or just the currently active one.
That’s my list, and no matter what Apple has in store, I’ll be covering it in detail in my next book, “iOS 10 Without the Eye”. I’m sure that there will be some features not in this list that will make a big difference. Would you also like to see any of these features, and what’s important to you personally that I haven’t included?
I’m looking forward to reading your thoughts in the comments.