My top 10 wish list for iOS 12, and our post-WWDC coverage

Preview of our post-WWDC coverage

Apple’s 2018 Worldwide Developers Conference (WWDC) begins with the keynote at 10 AM Pacific, 1 PM Eastern, 6 PM in the UK on Monday 4 June. That equates to stupid o’clock, AKA 5 AM New Zealand time on Tuesday 5 June.

Before discussing some things I hope Apple will reveal at the WWDC keynote, a heads-up about coverage from Mosen Consulting after the keynote has concluded.

Episode 89 of The Blind Side Podcast will be devoted to WWDC. Once again, the daughter formerly known as Heidi Mosen, now rebranded as Heidi Taylor due to marriage, will be busily taking screenshots during the event. She will be on the podcast to describe some of the things shown on stage or on screen that weren’t verbalised during the presentation. And as regular listeners know, her audio descriptions are excellent.

As well as Heidi, I’ll be joined by Debee Armstrong, a highly experienced user and observer of technology, and Janet Ingber, who is well-known for her book on using VoiceOver with the Mac, as well as her numerous articles for Access World. You can be sure that this panel will make sense of all that went down, particularly from an accessibility perspective.

As usual, we will get The Blind Side Podcast published as soon as we can. But there is a bonus this year if you subscribe to our new Daily Fibre Premium Podcast service. As you’re probably aware by now, the Daily Fibre Premium Podcast brings you the latest technology news every weekday, and it costs just $5 USD a month, which works out to roughly a quarter per episode.

I am always looking for ways to add value for the generous people who make the Daily Fibre Premium Podcast possible. One way I’ll be doing that is to offer a live stream of the recording of The Blind Side Podcast on that day for all Daily Fibre Premium subscribers. If you’d like to get your post-WWDC fix as soon as possible, become a Daily Fibre Premium Podcast subscriber today, and tune into the exclusive live stream.

I can also confirm that Mosen Consulting will be releasing “iOS 12 Without the Eye”, our comprehensive guide to iOS 12 from a blindness perspective, on the same day that iOS 12 is released officially. This means that when your iThing gets upgraded, you’ll have access to a thorough guide to help you upgrade and become familiar with all that’s new.

If Apple follows tradition, the first beta of iOS 12 will be in the hands of developers immediately after the WWDC keynote. Be warned, usually the first developer beta is, perfectly understandably, rough around the edges and probably not for use on a device that you depend on. It is likely to be available to those in the Apple public beta program sometime later, when some of those rough edges are a little smoother.

About my top 10 wish list for iOS 12

This post continues a tradition I started when I founded Mosen Consulting in 2013, where just ahead of the big iOS reveal at Apple’s Worldwide Developers Conference, I write down my wish list for the latest iOS and open the discussion to learn what others are hoping for.

Most of that discussion is taking place on The Blind Side Podcast. I’m posting my top 10 as a conversation starter. To share your own thoughts, you can phone your comments into The Blind Side Podcast feedback line, (719) 270-5114. Alternatively, you can email an audio attachment, or simply write your thoughts down, and send them to TheBlindSide at Mosen Dot org.

The Blind Side Podcast community would enjoy hearing a range of opinions, which will be published in The Blind Side Podcast episode 88 a few days out from WWDC.

This post is written from my perspective as a blind person who uses VoiceOver, the screen reader that has made the iPhone a powerful productivity tool even when you can’t see the screen. Some of my top 10 enhancements are blindness-specific, while some are not.

One: Defect equivalency criteria

Yes, I realise that’s a bit wordy and your eyes may be glazing over, but let me explain, because this is my number one wish for iOS 12. It may even come true, since news sources claim to have information that Apple has scaled back the new features planned for this release so its engineers have more time to improve the reliability, consistency, and general user experience of iOS.

Let’s be clear, all software has bugs, and you must stop developing sometime. There comes a time when you have to make a call that says, “we need to release this now. We will absolutely keep working on the outstanding bugs, but we’ve got to get a product out there.” The critical question then becomes, what software bugs are unacceptable in a public release that doesn’t have a beta designation?

I believe that in an accessibility context, Apple is failing to address this question appropriately or even humanely. I do not believe for one moment that the problem with Apple’s accessibility-bug-filled initial releases of every major version of iOS for many years is that no one told them about the bugs. That’s demonstrably not true. So someone, somewhere, is making the call that it’s OK to release iOS with the extremely serious, for some users crippling, VoiceOver bugs that we’ve been seeing. We can, and should, complain about that. But I also believe we should complain constructively.

To that end, I’d like to offer a simple guideline that in my view should assist Apple to determine whether a bug is tolerable until it’s time for another minor release.

The key to this is to translate the impact of a bug to an equivalent bug for the sighted.

Let me paint a hypothetical picture for you. Apple releases a major iOS update. When it’s installed, one of the most basic functions of the iPhone, answering calls, is broken for many users. You turn on the TV news, and it’s the lead story. Breathlessly, the newscaster begins with, “Commerce was plunged into chaos today as millions of people were unable to communicate with one another”. There are interviews with tradespeople, salespeople, all of whom had their livelihoods utterly disrupted. Apple’s share price plummets. Tim Cook holds an emergency press conference to say it’s not good enough, he’s sorry, there’ll be an inquiry about how this slipped through and a patch will be released tomorrow after the team has worked non-stop to create a fix and test it.

This exact scenario did in fact play out when iOS 9 was released. The only difference is, it just affected blind people. The problems answering calls were only present when VoiceOver was running. It was a bug repeatedly reported during the testing phase by many blind people, but it was released to the public nonetheless. Because it only affected blind people, it wasn’t headline news, it didn’t even make the news. There was no apology from Tim Cook, no journalist brought it to his attention. And the fix was not quick in coming.

Would Apple dream of releasing a new version of iOS if a core function of the device was rendered useless to sighted people? Of course they wouldn’t, and therefore it’s unacceptable for Apple to discriminate against its blind users by considering us deserving of an inferior user experience. Our money’s as good as anyone else’s, and where they exist, consumer protection laws requiring that products must be fit for purpose are just as applicable to blind people.

If you think I’m bringing up ancient history to be sensational, ask any Braille user about iOS 11, released with Braille support that was simply unfit for purpose despite numerous reports to Apple. What would be the equivalent that we could apply here? In my view, there are two – the screen and the virtual keyboard. If your Braille display is not functioning correctly for reading, that’s the same as a sighted person’s screen being rendered defective. You can be sure that in such a situation, there would be an overnight fix. Braille input was next to unusable when iOS 11 was first released. This would be the equivalent of the virtual keyboard not being operable for a sighted person.

Many people reading this blog will be doing so with speech. But I urge you to imagine what it would be like if that wasn’t an option for you. Several people that I work with on a regular basis have no hearing, or insufficient hearing to use their iPhone with speech. When Apple breaks Braille, they completely break the iPhone for this often neglected, vulnerable group.

So, if criteria were set that made relevant comparisons with the impact on sighted people, I would like to think that serious issues like these would no longer be deemed acceptable for release.

This also requires that qualified blind people who understand the needs of various parts of our community be in key decision-making roles. Nothing about us without us.

Two: Improved Braille input

As mentioned in the previous section, Braille in iOS 11 got off to a rocky, I would say even catastrophic, start. Yet there were some wonderful developments in iOS 11 Braille which Apple clearly thought through carefully. The comprehensive keyboard manager for assigning any key on your Braille display to any function is simply brilliant. And, it further widens the massive chasm that exists between Braille on iOS and Braille on Android. I say congratulations to Apple, and express my thanks for creating such a flexible, powerful solution. I take advantage of it every day.

It seems to me that a sincere effort was also made to address some of the idiosyncrasies of Braille input. Unfortunately, whether it’s because of lack of knowledge, lack of resources, or lack of time, the changes were half-baked, and for a while a step backwards. There has been some recovery, but Braille input still needs a lot of work. It seems strange that with all the resources and knowledge at Apple’s disposal, we don’t yet have a better Braille input system than this. The objective should be that Brailling into any edit field, be it a text message or a full document, is just as efficient and transparent as working in a native contracted Braille file. We know this is possible, because it is being done elsewhere. Apple clearly took up the Braille challenge, with some very positive benefits. I don’t think anyone can now accuse Apple of a lack of commitment to Braille. But they need to get this right, and I look forward to seeing what’s next.

Three: Siri

Talk to anyone who has access to another voice assistant such as Google Assistant, Alexa or Bixby, and you’ll find a consensus that Apple is far behind in the personal assistant space and just isn’t innovating. Apple has made some positive moves in opening Siri to some third-party developers, but they took a fatally conservative approach. Only certain classes of apps can use Siri. I still can’t open a book of my choice in iBooks, Kindle or Voice Dream Reader. I’m not allowed to ask TuneIn Radio or Ootunes to tune to Mushroom FM, and I can’t ask Spotify or Deezer, my favourite lossless music service, to play a song.
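To make the limitation concrete, here is a minimal sketch, written by me for illustration rather than taken from any shipping app, of what third-party Siri integration looks like today. An app’s Intents extension adopts a handler protocol from one of Apple’s predefined SiriKit domains, messaging in this hypothetical example, and there is simply no equivalent protocol for “open this book” or “tune to this station”.

```swift
import Intents

// Illustrative only: SiriKit's domain model means an Intents extension can
// handle intents from Apple's predefined domains, such as messaging, and
// nothing else. The class name and behaviour here are hypothetical.
class MessageIntentHandler: NSObject, INSendMessageIntentHandling {

    // Siri calls this once it has resolved the recipients and message content.
    func handle(intent: INSendMessageIntent,
                completion: @escaping (INSendMessageIntentResponse) -> Void) {
        // A real app would hand the message to its own messaging back end here.
        completion(INSendMessageIntentResponse(code: .success, userActivity: nil))
    }
}
```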

But further opening of the API is not enough. Siri just doesn’t know basic stuff. I must use Siri for control functions on my phone, but if I want to know a fact, locate a business or get a phone number, it’s quicker for me to launch the Google Search app, perform the magic tap, and ask the question. The quality and detail of the responses Google gives me is astonishing.

Not only that, I find Google’s recognition significantly better. With the processing power and storage now available on these devices, offering training for those who want to improve speech recognition should now be viable.

With the recent improvements to Alexa’s iCloud calendar integration, I find scheduling appointments far more reliable when instructing the Amazon Echo Dot on my office desk than when talking to Siri on the iPhone lying beside it.

Speaking of Alexa, I’d like to see the Siri team take a leaf from Alexa’s book and give us a recipe-type feature, where one can tell Siri to respond in certain ways to commands. Even if this began with something as simple as responding with a custom-made output string to a given input string, it could be beneficial both for those with accessibility needs and for the wider public.

The big change last year was Siri’s new voice. That’s all very nice and the voice is certainly impressive, but focussing resources on the voice totally missed the boat when it came to addressing the widening gaps between Siri and other assistants.

Apple really must pull something significant out of the hat this year to even catch up with its competitors, let alone surpass them.

Four: Do for the keyboard in iOS 12 what was done for Braille in iOS 11

For me, the Braille keyboard manager was the outstanding feature of iOS 11. It’s rare that Braille users obtain a significant UI advantage over those who are not Braille users. Not only was I delighted that Apple added this feature, I was genuinely surprised, since Apple tends not to like confusing the iOS UI with too many options.

Now, the precedent has been set. I’d love to see the same degree of configurability available with a Bluetooth QWERTY keyboard that now exists with Braille. Wouldn’t it be great if the more tech-savvy among us could set up a keyboard layout that emulates our favourite Windows screen reader, or just change the key layout because, for whatever reason, it suits us better?
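For a sense of what such configurability could build on, here is a minimal sketch of my own, with a hypothetical view controller and shortcut, of how individual apps can already declare hardware keyboard shortcuts using UIKeyCommand. The wish is for this kind of remapping to exist at the VoiceOver level, across the whole system, rather than one app at a time.

```swift
import UIKit

// Illustrative only: an individual app can already expose hardware keyboard
// shortcuts via UIKeyCommand. The wish above is for this degree of remapping
// to be available system-wide to VoiceOver users, not just inside one app.
class ReaderViewController: UIViewController {

    override var keyCommands: [UIKeyCommand]? {
        return [
            UIKeyCommand(input: "p",
                         modifierFlags: .command,
                         action: #selector(togglePlayback),
                         discoverabilityTitle: "Play/Pause")
        ]
    }

    @objc func togglePlayback() {
        // A real app would start or pause playback here.
        print("Play/Pause triggered from a hardware keyboard")
    }
}
```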

I do wonder what impact the ability to customise Braille commands has had on the Apple technical support team, since there is now no standard frame of reference when a Braille customer seeks support, but, as I say, the precedent has now been set, and as a user, I love that.

Five: Finish the job with the spell checker and the rotor

Last year, I made the point that if Tim Cook believes the iPad can be a replacement for a computer, then blind people need to be given better tools to create content. It was therefore great to see a new rotor item added that allows users to navigate between misspelled words. It’s a promising start. The only trouble is, once you’ve found that misspelled word, it’s tedious to take an action on it. Since one now swipes up and down to review the misspelled words in a document, it would seem to make sense to then swipe left and right to review the suggested corrections, and double-tap the one you want. This one minor change would have a massive impact on making iOS a better platform for content creation.
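The building blocks for this appear to exist already. As a rough sketch, assuming nothing about how VoiceOver handles this internally and using a function name of my own, UIKit’s UITextChecker can locate a misspelled word and list candidate corrections, which is essentially the data a left-and-right swipe through suggestions would need to surface.

```swift
import UIKit

// A sketch of the building blocks UIKit already provides: UITextChecker can
// find a misspelled word and suggest corrections for it. This is essentially
// the data the proposed rotor behaviour would need to surface.
func corrections(in text: String, language: String = "en") -> [String] {
    let checker = UITextChecker()
    let fullRange = NSRange(location: 0, length: (text as NSString).length)

    // Find the first misspelled word in the text.
    let missRange = checker.rangeOfMisspelledWord(in: text,
                                                  range: fullRange,
                                                  startingAt: 0,
                                                  wrap: false,
                                                  language: language)
    guard missRange.location != NSNotFound else { return [] }

    // Ask the checker for replacement suggestions for that word.
    return checker.guesses(forWordRange: missRange, in: text, language: language) ?? []
}
```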

Six: New access paradigms

VoiceOver was introduced to iOS in 2009. It’s now a mature screen reader, and a very good one. It’s a testament to the brainpower at Apple that they came up with such an elegant paradigm for blind people to interact with touchscreens.

I know some people who feel far more comfortable using iOS than they do trying to memorise the myriad keyboard commands available on Windows screen readers. But as a trainer, I also meet many blind people who understand the power and the potential of mobile devices, yet genuinely struggle with them. It saddens me that in the blind community, and perhaps in the community in general, there are people who treat with disdain those who don’t understand something as readily or fully as they do.

Obviously, it’s in Apple’s interest to ensure that as many people as possible feel comfortable about using their technology. And I also believe that it’s in Apple’s culture to think of innovative ways to make this technology more readily usable by more people. You only need to look at the impressive series of accessibility options now in iOS to know that. Their commitment to inclusion is exemplary.

So I’d like to suggest a couple of radical ideas.

The first is to significantly expand wireless keyboard functionality in VoiceOver. In most applications, icons always appear in the same place. Those of us with a little bit of spatial awareness take advantage of that by learning the location of icons and applications we use frequently, so we don’t have to swipe around the screen to find the icon we are looking for. Not everyone finds this easy. Imagine how much value could be added for users who do have spatial awareness issues if a form of macro functionality could be introduced. Once a person finds an icon they know they are going to use regularly, such as a Play button in a podcast app, they could perform a gesture allowing them to assign a key to activate that icon. They may choose something like Command+P. That’s all there is to it. The user would have programmed an application-specific command that, whenever pressed, would give focus to that icon on the screen if it’s visible, and tap it.

This change would allow a trainer to come in and keyboard-enable commonly used apps for the person they are working with.

This idea is an embryonic one. I know for example that an app update may cause an icon to be relocated, potentially breaking the keyboard command. So it might be possible for the hotkey to be associated with an icon based on several criteria, such as its name and picture, as well as position. If most conditions are met, the keyboard command might activate.
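To show what I mean, here is a purely hypothetical sketch, since no such VoiceOver feature or API exists today and every name here is made up, of how a user-assigned hotkey might be matched against an on-screen element using more than one criterion, so that a small layout change doesn’t silently break the shortcut.

```swift
import CoreGraphics

// Purely hypothetical: no such VoiceOver feature or API exists today. This
// sketches how a user-assigned hotkey could be matched against an on-screen
// element using several criteria, so a minor layout change doesn't break it.
struct IconHotkeyBinding {
    let key: String            // e.g. "Command+P"
    let expectedLabel: String  // accessibility label recorded when the key was assigned
    let expectedFrame: CGRect  // on-screen position recorded when the key was assigned

    // Score a candidate element; fire only if most criteria are satisfied.
    func matches(label: String, frame: CGRect) -> Bool {
        var score = 0
        // A matching label counts for more than a matching position.
        if label.caseInsensitiveCompare(expectedLabel) == .orderedSame { score += 2 }
        if abs(frame.midX - expectedFrame.midX) < 40,
           abs(frame.midY - expectedFrame.midY) < 40 { score += 1 }
        return score >= 2
    }
}
```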

Another option worth considering to make iOS easier to use for those who struggle with touchscreens is a voice input UI for VoiceOver commands. If you open an app, and you know there is an icon on the screen called Play, but you struggle to find it, imagine how much simpler it would be if you could say to Siri “tap play”. If Siri could find an icon on the screen with an appropriate textual label, focus would be placed on that icon and it would be tapped.

I can imagine that many of the power users who read this blog may find this a bit ridiculous. But believe me, far more people are struggling with iOS than you think.

Seven: External audio description

How wonderful it is that blind people now have access to so much audio described material. When Bonnie and I are watching TV on our own, since we’re both blind, we are grateful for all the advocacy and implementation work that has been done to give us so much choice of content to watch.

My kids will sit and listen to the audio description with us if we all watch a movie together. And I’ve heard them on occasion say that sometimes the audio description helps them take note of details they might otherwise have missed. However, I can’t help noticing that when they are watching TV on their own, they are quick to turn the audio description off. And that got me thinking. I would love to see a feature in iOS that allows a user to specify an external device to which audio description should be sent. For example, say the family is watching a movie on Apple TV. Using AirPlay 2, which can output to multiple devices, I’d like to be able to tell the Apple TV to send the audio-described soundtrack to Bonnie’s and my iPhones. I understand that this isn’t as simple a request as it may appear to some. Essentially, you would be streaming two soundtracks, the soundtrack without audio description on the Apple TV, and the soundtrack with audio description on the external devices. There may be some synchronisation issues, but I’m certain they are not beyond Apple’s ability.
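Part of this already exists in AVFoundation, which tags described soundtracks with a media characteristic so a player can select them; the sketch below, with a function name of my own choosing, shows that half. The genuinely new piece would be routing the selected track to a second device in sync with the main output.

```swift
import AVFoundation

// A sketch of the half that exists today: audio-described soundtracks are
// tagged with a media characteristic, so an app can find and select them.
// Routing that track to a second device in sync is the part that's missing.
func selectAudioDescription(for item: AVPlayerItem) {
    guard let group = item.asset.mediaSelectionGroup(
        forMediaCharacteristic: .audible) else { return }

    // Prefer an audio option flagged as describing the video.
    if let described = group.options.first(where: {
        $0.hasMediaCharacteristic(.describesVideoForAccessibility)
    }) {
        item.select(described, in: group)
    }
}
```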

Eight: Clean up the actions rotor

One of the great efficiency-enhancing features in iOS is the actions rotor. But its utility is lessened by a few anomalies that I’d like to see addressed.

First, it needs to behave consistently everywhere. This is particularly important when you’re training someone who isn’t finding the device intuitive. If the actions rotor behaves a certain way most of the time, and then differently at other times, it’s confusing without adding any value. During the iOS 11 cycle, many of us were perturbed by Apple’s decision not to reset the actions rotor to a known state in Mail. Thankfully, Apple heard that feedback, common sense won the day, and they changed it back. But there are still some places where the actions rotor doesn’t behave consistently. The App Switcher is one example. If you close an app, the actions rotor remains on the close option, rather than being set to a known default state. I understand the argument here. If you want to close a whole bunch of apps at once, having the actions rotor remain on close when you start closing apps speeds the entire process up. But there are a couple of ways around this that don’t involve breaking a user interface convention.

First, you could go into a “Close Apps” mode, where you select all the apps you want to close, and then click the close button.

Second, and this one would benefit everyone, not just blind people: why the heck doesn’t Apple just give us a button to close all apps at once? Yes, I know the “geniuses” tell us that it shouldn’t be necessary to close apps, that we should leave them in the background and let Apple’s clever memory management do its job, but I’ve tried that, and experienced significant battery drain. The fact is there are some apps that are just badly behaved and need to be closed. If I want to be able to close all my apps with one tap, or double tap, I don’t think that’s too much to ask. It’s been in Android forever, and it would remove the need to fiddle with the actions rotor’s usual behaviour.

When a delete option appears on the actions rotor, I would find it helpful to always see it in the same place, preferably by flicking up once from the default action. This happens in most cases, but not all.

Finally, I’ve been dismayed to note some verbiage creeping into the performing of actions. Sometime during the iOS 11 cycle, VoiceOver started saying “message deleted” every time I deleted a message. At other times, VoiceOver now says, “performed action,” and then tells you the action it has performed. Of course you know what action it performed, because you already chose that action in the rotor. This is a time waster. I understand that some people might want this, but it should at least be configurable. I note that there is now an actions rotor option in VoiceOver settings, and I hope this will be expanded in iOS 12 to offer a range of behavioural choices.
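For context, and as a rough sketch using a hypothetical cell class of my own, the actions rotor is populated by developers attaching UIAccessibilityCustomAction objects to an element, as below. The extra announcements VoiceOver layers on top when performing those actions are what I’d like to see made configurable.

```swift
import UIKit

// Illustrative only: developers populate the actions rotor by attaching
// UIAccessibilityCustomAction objects to an element, as sketched here. The
// extra announcements VoiceOver adds when performing them are what I'd like
// to be able to configure.
class MessageCell: UITableViewCell {

    func configureAccessibilityActions() {
        let delete = UIAccessibilityCustomAction(name: "Delete",
                                                 target: self,
                                                 selector: #selector(deleteMessage))
        accessibilityCustomActions = [delete]
    }

    @objc func deleteMessage() -> Bool {
        // A real app would remove the message here; returning true reports success.
        return true
    }
}
```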

Nine: Install your own voices for system-wide use

Thankfully, Apple has added a lot of APIs (application programming interfaces) in recent years. It’s helped to make iOS more flexible and vibrant. It was a needed and pragmatic response that sought to deal with the criticism that iOS is too restricted, while not opening the platform to malware.

But we don’t yet have an API which allows a third-party text-to-speech engine to be available to any application. If you install a third-party voice on Android, any application using text-to-speech can make use of that voice. That’s what I want for iOS. I have multiple copies of several voices on my phone because each app must use its own copy of the voice. That wastes precious storage, and it means VoiceOver can’t use any of the third-party voices I really like.
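To illustrate the current situation, here is a minimal sketch, with a function name of my own, of what apps can do today: speak through AVSpeechSynthesizer using one of the voices Apple ships. At the time of writing there is no API for a third party to register a speech engine so that other apps, or VoiceOver, could use it, which is exactly the gap I’d like closed.

```swift
import AVFoundation

// What apps can do today: speak with one of Apple's built-in voices via
// AVSpeechSynthesizer. A third-party voice bundled inside an app stays
// locked inside that app; nothing can be registered for system-wide use.
let synthesizer = AVSpeechSynthesizer()

func speak(_ text: String) {
    let utterance = AVSpeechUtterance(string: text)
    utterance.voice = AVSpeechSynthesisVoice(language: "en-GB")
    utterance.rate = AVSpeechUtteranceDefaultSpeechRate
    synthesizer.speak(utterance)
}
```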

Yes, haters are gonna hate and I know one’s text-to-speech engine choice is highly subjective, but if we could get to a point where a company such as Code Factory could release a version of Eloquence for iOS like their Android offering, I for one would be delighted.

Ten: Vibration at startup

iPads don’t come equipped with a vibration motor, but it would make an enormous difference for many of the people I work with if the iPhone gave a gentle vibration when it’s powered on. For some, just getting the phone powered on is a frustrating experience, because they overestimate or underestimate how long they need to hold the power button down to switch the phone on.

Now it’s your turn

Those are some of my wishes for iOS 12. What are yours? Tune into the next episode of The Blind Side Podcast to hear people’s own wishes, and be sure to let your own voice be heard. You can phone your comments into The Blind Side Podcast feedback line, (719) 270-5114. Alternatively, you can email an audio attachment, or simply write your thoughts down, and send them to TheBlindSide at Mosen Dot org.