We’re not supposed to text while driving. That makes sense: it diverts your eyes and your attention elsewhere. But what about your average turn-by-turn GPS screen? It’s sort of the same idea, no? So especially for seniors on the road, how do we design tools that not only get them from point A to point B, but get them there more safely?
Professor SeungJun Kim from Carnegie Mellon’s Human Computer Interaction Institute is playing with new ideas to improve the driving performance of the elderly. In his most recent, still unpublished paper "Route Guidance Modality for Elder Driver Navigation," Kim shares details of his study in which he tested the performance of both young and old drivers with the assistance of audio cues (“Turn left!”), visual cues (think Google Maps navigation), and a special steering wheel that would vibrate to signal the next turn.
(The wheel is of particular note: It was built in partnership with AT&T. It uses 20 individual motors and a liberal layer of memory foam to create a wheel that vibrates in distinct areas. To signal a left turn, the wheel produces an animated counterclockwise sweep of vibration, like a snake passing between your fingers.)
I’ve had a chance to read through the paper, and the findings are fascinating. Kim’s goal was to find a sweet spot of assistance, one where all of these tools (modalities) could assist a driver without either taking their attention off the road or weighing down the brain too much with what researchers call “cognitive load.” So he tested all sorts of combinations of modalities to see which worked both best and least intrusively: audio and visual, haptic and audio, audio and visual and haptic, and, of course, each of these techniques on its own.
What he found can probably be applied to products and UIs of all types:
- Seniors performed best with audio plus haptics (and with audio plus visual).
- Seniors preferred audio feedback above all other types of feedback.
- Seniors performed worst, and carried the most cognitive load, when fed everything all at once (audio plus visual plus haptic).
- Younger people performed best, with the least cognitive load, when fed everything all at once.
- Younger people preferred visual plus audio (but they were wrong to: they actually performed worst under those conditions).
Plus, this gem from the article is particularly fun:
> 71% of elder drivers thought the auditory modality was the most useful and 59% thought the visual modality was the most annoying. In contrast, 63% of younger drivers thought the visual modality was most useful and 50% of them thought the auditory modality was most annoying. Both groups ranked haptic feedback between auditory and visual feedback.
Kim’s ultimate finding shows that we shouldn’t design in-car navigation the same way for youth and the elderly. Young people performed better with more information being thrown their way. Older people clearly had a penchant for audio over visual cues. But there was a unifying piece: Both groups benefited from haptic feedback. Humans clearly love touch.
It would be interesting if Kim ran this same study 30 years from now. While younger people always kick butt in general cognitive testing (sadly, the mind’s raw horsepower begins a steady decline starting in your early 20s), I’m curious how much that decline actually drives modality preference. In other words, do seniors perform better with less information being thrown at them because their minds can no longer process it, or because today’s young people have been trained to multitask from birth? Is it nature or nurture playing a role here? Kim’s paper doesn’t hazard a guess, but I will.
The seniors of tomorrow will perform better with three types of feedback. But the youth of tomorrow will be able to juggle four, five, or six. And in the meantime, there’s no reason we shouldn’t be customizing all sorts of user interfaces—from inside our cars to inside our phones—to accommodate one’s age.
[Hat tip: Core77]