Writings and musings on the latest web trends and life, advertising, design, projects, and news from an avid and prolific web designer.
Up and about since 2003.
The views represented here are my own, and not of my employer.
I don’t think a single day goes by when I don’t use Google Maps, and not merely because I work for Google. As much as championing accessibility is something I wish I did more often, it’s not something I think about on a day-to-day basis. But a confluence of events, including hearing about the 20/20/20 project to provide free surgery to those suffering from cataracts, prompted me to explore the area in depth. An important note: since I am not blind or vision impaired, the tools I use and the way I use them may differ significantly from how actual users work, so the best next step beyond this exercise is to work directly with blind and vision-impaired users.
The main task I gave myself was to find directions from my alma mater, Swarthmore College, to New York City. While I assess each platform’s merits and weaknesses, I also explore some potential opportunities that go beyond our current solutions.
On my MacBook Air, I followed Google’s Accessibility Guide, enabled VoiceOver, and tried to navigate Google Maps at maps.google.com in Chrome. There are some initiatives through OpenStreetMap to enable blind-friendly maps, but I couldn’t get them working, nor did I have ready access to screen readers that a blind person may already be using, like JAWS. The basic premise of VoiceOver is navigating the screen real estate using the keyboard; in my case, the main keys were Tab and Control + Option. I didn’t set up any of the fancy settings I knew some of my blind friends were using (I know, for example, that a typical blind user will set the speech rate high, almost indecipherably high to my ears).
The first hiccup I noticed was that the Tab key wouldn’t get me anywhere, nor would the arrow keys bring me to any other section of the UI. I was thoroughly confused when, later, tabbing through the UI, I heard mention of the Google Apps menu and the G+ profile section. Once in the detailed directions flow, I was given a flurry of steps, but VoiceOver picked up only the mileage of each step, completely ignoring the actual instructions. The layout of the site also meant VoiceOver spent a lot of time reminding me that the map data was copyrighted to Google.
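The miles-but-no-instructions behavior reads like a markup problem: if the distance of each step sits in its own focusable element while the instruction text lives in unlabeled containers, a screen reader announces only the former. As a rough sketch (not how Google Maps is actually built — the step data and the `renderSteps` helper here are invented for illustration), semantic list markup with an explicit label puts the instruction first in the reading order:

```typescript
// Hypothetical sketch: render directions as a semantic ordered list so a
// screen reader announces each instruction, not just its distance.
interface DirectionStep {
  instruction: string;   // e.g. "Turn left onto Chester Road"
  distanceMiles: number; // distance covered by this step
}

function renderSteps(steps: DirectionStep[]): string {
  const items = steps
    .map(
      (s, i) =>
        // aria-label leads with the instruction; the visible distance is
        // marked aria-hidden so it isn't read out a second time.
        `<li aria-label="Step ${i + 1}: ${s.instruction}, ${s.distanceMiles} miles">` +
        `${s.instruction} <span aria-hidden="true">(${s.distanceMiles} mi)</span></li>`
    )
    .join("");
  return `<ol>${items}</ol>`;
}
```

With markup like this, tabbing to a step would read “Step 1: Turn left onto Chester Road, 0.4 miles” rather than just the mileage.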
I then followed up with the Apple Maps desktop app, which has been bundled with OS X since the Mavericks release. Here, the integration with the desktop UI makes it much easier to navigate the map itself, as well as to request directions and step through them one by one. POIs were also called out in the UI, which is useful for knowing what’s available to search for in the area.
At this point if I were trying to navigate my way through an unfamiliar city (for example, by trying to search for the nearest grocery store) I would probably save the directions onto my phone as a note and use that to remember how to get there.
On the iPad running iOS 7.1.2, I tested the latest versions of both Google Maps (v3.2.0) and Apple Maps.
Perhaps not surprisingly, VoiceOver integration on the iPad was much better with Apple Maps. Google Maps would not return any additional information to VoiceOver when tapping on the map, which meant that you either had to know which UI elements to engage with, or you were stuck. Tapping on various locations on the map in Apple Maps would highlight the area with a dark bar or speak the actual location.
Worth noting is the “Rotor” feature, where you rotate two fingers to step through an additional set of dials offering a variety of on-screen categories, like headings or points of interest, or toggling between the voice reading by words or by lines. Since the dials are context sensitive, after selecting an option in the Apple Maps app you can swipe quickly upwards to navigate through the POIs available on the map. This feature is not available in the Google Maps app.
Similar to iOS, Android has a feature called TalkBack that provides speech synthesis for most UI components. Most of the usual single-tap interactions require two taps instead of one, and a swipe takes three fingers instead of one. I tested this on my Nexus 4, running Android KitKat.
A myriad of things were broken here, starting with the fact that Google Now interpreted TalkBack’s “Tap to speak” prompt as an actual cue to search. The Google Now card was not tappable in any way, so the card was completely useless.
Here, I dive into the Google Maps app and try to complete my task. At one point I get results back for directions, “Showing items 2 of 2,” but I can’t tell what was returned, so I randomly tap around.
Towards the tail end of the video above you’ll notice that I was getting a bit frustrated, and as such decided to resort to speech. I did a more thorough run-through in the video below.
In Android, tapped UI elements are highlighted in yellow. On iOS, they are bounded in a black frame.
The key learnings:
One point you may raise is that the current solutions and implementations are hardly helpful even after resolving some of the glaring accessibility problems inherent in Google Maps. Maps are for wayfinding and orienting, and a visually rich UI doesn’t help someone who needs to know where and in which direction to head. Google Maps’s navigation tools would help, but not if a user can’t get to them. And our current UIs hardly help if a user can’t tell which direction is north, much less which direction is the right one to take.
Beyond the mobile or desktop interfaces, users interacting with the real world also need to understand how best to navigate it. We make choices based on time, personal preferences, and externalities when deciding which route to take. But what if those choices were also grounded in how much real-life support you could get along the way?
Those yellow bars at the bottom are raised divots that let blind people know it’s the end of the platform
In Japan, most subway platforms and vending machines are Braille-friendly. That does not necessarily mean they are always accessible, or that Japanese infrastructure as a whole is more accessible than anywhere else. But it’s worth imagining a world where we can tell blind and vision-impaired users how they might best navigate through accessible roads, minimizing occasions where they may stumble, get bumped into, or otherwise lose their orientation.
We need to develop tools that are more than band-aids hacked on top of the beautiful, cutting-edge interaction design that most of the world can enjoy. Blind users may make up only 4% of the population, but they are the ones who can benefit most from a better way to navigate the world we live in.