Why The Human Body Will Be The Next Computer Interface

Fjord charts the major innovations of the past, and predicts a future of totally intuitive "micro gestures and expressions" that will control our devices.

By now you’ve probably heard a lot about wearables, living services, the Internet of Things, and smart materials. As designers working in these realms, we’ve begun to think about even weirder and wilder things, envisioning a future where evolved technology is embedded inside our digestive tracts, sense organs, blood vessels, and even our cells.

As a service design consultancy, we focus on how systems and services work rather than on static products. We investigate hypothetical futures through scenarios that involve production and distribution chains and how people will use advanced technology. Although scientifically grounded, the scenarios we propose aren’t forecasts built on hard facts so much as extrapolations from observation. They are designed to create a dialogue around technologies that are still in science labs.

To see the future, first we must understand the past. Humans have been interfacing with machines for thousands of years. We seem to be intrinsically built to desire this communion with the made world. This blending of the mechanical and biological has often been described as a "natural" evolutionary process by such great thinkers as Marshall McLuhan in the ’50s and more recently Kevin Kelly in his seminal book What Technology Wants. So by looking at the long timeline of computer design we can see waves of change and future ripples. Here’s our brief and apocryphal history of the human-computer interface.

1801: the first programmable machine

Let’s skip the abacus and the Pascal adding machine and move straight to the 19th-century Jacquard loom. Although not a computer, it used punched cards to change the operation of the machine, and that idea was foundational for the invention of computing. For the first time, people could change the way a machine worked by giving it instructions.

1943: Colossus valve computer

Critical not only for its role in winning WWII but for being the world’s first programmable electronic digital computer, Tommy Flowers’s Colossus can be described as the first modern computing machine. It used a series of on-and-off switches to make complex calculations that no human could manage in any kind of realistic timeframe. Without it, the scientists at Bletchley Park would never have cracked the German high-command messages encrypted by the Lorenz cipher machines (not Enigma, which was broken by other means, but a system every bit as revolutionary).

1953: the FORTRAN punch card

Proposed by John Backus at IBM in 1953, though the first compiler wasn’t delivered until 1957, FORTRAN was fed to the machine on punched cards very similar to the Jacquard loom’s. Here was a system that could order a machine to perform many different calculations and functions. Prior to FORTRAN, a machine could practically perform only one function, and the input was merely used to change the pattern of that function. Now we had entered a world of multifunctional thinking machines.

So really it was a century and a half before the fundamental paradigm of computing changed, but one thing stayed constant: In order to use these machines we had to become like them. We had to think like them and talk to them in their language. They were immobile, monolithic, mute devices, which demanded our sacrifice at the twin altars of engineering and math. Without deep knowledge, they were impenetrable and useless lumps of wood, metal, and plastic. What happened next (in a truly "ahead of its time" invention) was the first idea that began the slow shift in emphasis to a more human-centric way of interfacing with the machine.

1964: the first natural computer interface

Something in the zeitgeist demanded that the ’60s would see a humanistic vision appearing in the rapidly expanding sphere of computer engineering. And so, in a typically counterintuitive move, the RAND Corporation, that bastion of secretive government and military invention, created the first tablet-and-pen interface. It remained hidden for many years as a military secret, but this was definitively the first computer interface built around a natural activity: drawing with a pen on paper.

1979: touch screens appear on the horizon

Although the Fairlight CMI offered one of the first screen-based touch interfaces, it was many years before the technology was affordable. The Fairlight cost around $20,000 and was out of reach for everyone apart from the likes of Stevie Wonder, Duran Duran, and Thomas Dolby. What was equally remarkable, in addition to the advancement of a touch-sensitive screen operated with a light pen, was that it used the comprehensible interface of the musical score. However, it was still pretty damn complicated to operate and was thus neither truly democratic nor humanistic. Almost all musicians needed to hire a programmer to create their synthetic soundscapes.

1981: a regression occurs

Although hugely important in popularizing the home computer, MS-DOS, with its bare green text glowing on a black screen, showed only the barest hint of human warmth. A layer down, though, there were important ideas, like accessible help systems and an easily learned command interface that gave access to the logical but not very user-friendly hierarchy of the file system. All told, however, it was a retrogressive step, back to the exposed machine interface.

1984: a more human space

Apple took all the innovations from the stubbornly uncommercial minds at Xerox PARC and made them work for a mass audience. The mouse was an elegantly simple invention, bringing ergonomic touch to the desktop interface. Put a mouse in the hands of a novice and they almost immediately understand the analogy between movement on the plane of the desk and the corresponding movement of the pointer on the screen. Getting there took much experimentation with the exact gearing ratio, but the mapping felt natural and effortless. The iconographic approach to the interface, meanwhile, was also a massive step toward an intuitive computing world that closely resembled familiar physical objects.
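
To make the gearing-ratio idea concrete, here is a purely illustrative sketch (not Apple’s actual pointer code, and every constant in it is invented) of how raw mouse movement can be scaled by a gain that grows with the speed of the hand, so that slow movements stay precise while quick flicks carry the pointer across the whole screen.

```python
# Illustrative sketch only: the "gearing ratio" between mouse motion and pointer
# motion treated as a tunable gain, often made velocity-dependent in practice.
# The base gain and acceleration curve below are invented for the example.

def pointer_delta(mouse_dx, mouse_dy, base_gain=1.5):
    """Map a raw mouse movement (in counts) to a pointer movement (in pixels)."""
    speed = (mouse_dx ** 2 + mouse_dy ** 2) ** 0.5   # how fast the hand moved this tick
    gain = base_gain * (1.0 + 0.1 * speed)           # hypothetical acceleration curve
    return mouse_dx * gain, mouse_dy * gain

if __name__ == "__main__":
    print(pointer_delta(2, 1))     # a slow nudge barely moves the pointer
    print(pointer_delta(40, 25))   # a quick flick is amplified far beyond 1:1
```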

Then, for a long time nothing happened—except, that is, iteration after iteration of the same metaphors of the Macintosh, each a further refinement or regression, depending on the quality of design.

2007: touch screen computing finally arrives

The iPhone was not the first touch screen by any means, but it was the most significant, demonstrating that we really wanted to get our hands on, even inside, the interface, as if yearning to touch the actual data and feel the electrons passing through the display. We were now almost making contact, just a thin sheet of glass between us. Paradoxically, the visual metaphors had hardly changed in over 20 years. Maybe this was all that was needed.

2010: Kinect blows it all wide open again

Of course, just when everything seems to be stable and static, a wild and unpredictable event occurs. Kinect (and let’s not forget the honorable Wii) showed a new way of interacting in which the body becomes the controller. The game format allows a one-to-one relationship between the physical body and the virtual body: a leg movement corresponds to a kick on screen; a wave of a hand becomes a haymaker knocking out your opponent. This is very satisfying and instantly accessible, but in the end it is no good for anything more complex than role-playing.

2011: Siri, the no-interface interface

For the third time in 30 years, Apple took an existing and poorly implemented technology and made it work, properly, for the masses. Siri does work, and it is a leap forward in terms of precision, but it is hard to say it is any more sophisticated than a 1980s text-based adventure: combine a few verbs and nouns and get back a relevant response. Siri understands you no better than those primitive text parsers did.
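
For readers who never played those adventures, here is a minimal sketch of the kind of verb-plus-noun parsing they relied on. The vocabulary and responses are invented, and it illustrates the general technique rather than how Siri is actually built.

```python
# A toy verb-plus-noun parser of the kind used by 1980s text adventures.
# It matches keywords against fixed lists; nothing is "understood".

VERBS = {"open", "take", "call", "play"}
NOUNS = {"door", "lamp", "mum", "music"}

def parse(command):
    """Pick out one known verb and one known noun, or give up."""
    words = command.lower().split()
    verb = next((w for w in words if w in VERBS), None)
    noun = next((w for w in words if w in NOUNS), None)
    if verb and noun:
        return f"OK: {verb} {noun}."
    return "I don't understand that."

if __name__ == "__main__":
    print(parse("Please call my mum"))           # -> OK: call mum.
    print(parse("What's the meaning of life?"))  # -> I don't understand that.
```

Anything outside the keyword lists falls straight through, which is exactly the brittleness the comparison points at.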

So when put on a timeline, it is clear that we have dramatically shifted the meeting point of man and machine. It is now almost entirely weighted toward the human languages of symbols, words, and gestures. Nevertheless, that last inch seems to be a vast chasm that cannot be bridged. We have yet to devise interfaces that can effortlessly give us what we want and need. We still must learn some kind of rules and deal with an interpretation layer that is never wholly natural.

A predictive world of sensors

Some early attempts at predictive interactions exist: the Japanese vending machine that recognizes the age and sex of the user and presents choices based on that demographic breakdown, and the brilliant but scary ability of McDonald’s to predict, with 80% accuracy, what you’re going to order based on the car you drive. The latter was developed so the fast-food chain could reduce the unacceptable 30-second wait while your drive-through order was prepared.
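
Stripped of the sensors, the vending-machine example boils down to a lookup keyed on a couple of observed traits. The sketch below is hypothetical, with invented segments and purchase counts, but it shows how little machinery such a prediction actually needs.

```python
# Hypothetical sketch of a demographic vending machine: sense two coarse traits,
# then rank the menu by how often that segment has bought each item before.
# All segments and counts are invented for illustration.

PURCHASE_HISTORY = {
    ("female", "20s"): {"green tea": 51, "espresso": 34, "cola": 15},
    ("male", "40s"):   {"black coffee": 62, "cola": 23, "green tea": 15},
}

def recommend(sex, age_band, top_n=2):
    """Return the items most often bought by the detected demographic segment."""
    history = PURCHASE_HISTORY.get((sex, age_band), {})
    return sorted(history, key=history.get, reverse=True)[:top_n]

if __name__ == "__main__":
    print(recommend("female", "20s"))  # -> ['green tea', 'espresso']
```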

The sensor world that makes these kinds of predictive systems possible will only become richer and more precise. Big data will inform on-demand services, providing maximum efficiency and total customization. It will be a convincing illusion of perfect adaptation to need. However, there are still three more phases of this evolution that we see as being necessary before the machine really becomes domesticated.

The first evolutionary leap is almost upon us: embedding technology in our bodies. This finally achieves the half-acknowledged desire not only to touch machines but to have them inside us. Well, maybe that’s pushing it a bit; don’t think that we’re going to become cyborgs. But great artists like David Cronenberg have imagined what it would be like to have machines embedded in humans and what kinds of advantages it could bring. Dramatic embellishments aside, the path is clear: Beyond mechanical hips and electric hearts, we will put intelligences inside us that can monitor, inform, aid, and heal.

Embedded tech brings a new language of interaction

The new language will be ultra-subtle and totally intuitive, building not on crude body movements but on subtle expressions and micro-gestures. The analogy is the computer mouse and the screen: the Mac interface would never have worked if you had needed to move the mouse the same distance the pointer moved on the screen. It would have been annoying and deeply unergonomic. The same goes for the gestural interface. Why swipe your arm when you can just rub your fingers together? What could be more natural than staring at something to select it, or nodding to approve it? This is the world that will be possible when we have hundreds of tiny sensors mapping every movement, outside and within our bodies. For privacy, you’ll be able to use imperceptible movements, or even hidden ones such as flicking your tongue across your teeth.

Think about this scenario: You see someone at a party you like; his social profile is immediately projected onto your retina—great, a 92% match. By staring at him for two seconds, you trigger a pairing protocol. He knows you want to pair, because you are now glowing slightly red in his retina screen. Then you slide your tongue over your left incisor and press gently. This makes his left incisor tingle slightly. He responds by touching it. The pairing protocol is completed.
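
For the technically minded, that scenario can be read as a tiny state machine. The sketch below is pure speculation, with every event name and threshold invented, but it shows how a stare-then-tooth-tap handshake might be sequenced.

```python
# Purely speculative toy model of the party "pairing protocol" described above:
# a stare held for two seconds opens a request, and a deliberate incisor tap
# from each side confirms it. Every event name and threshold is hypothetical.

from dataclasses import dataclass, field

STARE_THRESHOLD_S = 2.0

@dataclass
class PairingSession:
    initiator: str
    target: str
    stare_time: float = 0.0
    confirmations: set = field(default_factory=set)
    state: str = "idle"

    def gaze(self, seconds):
        """Accumulate gaze time; a long enough stare triggers the request."""
        self.stare_time += seconds
        if self.state == "idle" and self.stare_time >= STARE_THRESHOLD_S:
            self.state = "requested"   # the other person now glows red in your retina display

    def tooth_tap(self, person):
        """A deliberate incisor tap from each party completes the handshake."""
        if self.state == "requested":
            self.confirmations.add(person)
            if self.confirmations == {self.initiator, self.target}:
                self.state = "paired"

if __name__ == "__main__":
    session = PairingSession("you", "him")
    session.gaze(2.1)          # hold the stare a beat too long
    session.tooth_tap("you")   # you press your left incisor
    session.tooth_tap("him")   # he responds
    print(session.state)       # -> paired
```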

What is lovely about these micro-gestures and expressions is that they are totally intuitive. Who doesn’t stare at someone a second too long when they fancy them? And licking your lips is a spontaneously flirtatious gesture. The possible interactions are almost limitless and move us closer and closer to a natural human-computer interface. At this point, the really intriguing thing is that the interface has virtually disappeared; the screens are gone, and the input devices are dispersed around the body.

What we will explore in the next article is the endgame of this kind of technology, as we bring the organic into the machine and create a symbiotic world where DNA, nanobots, and synthetic biology are orchestrated to create the ultimate learning devices. We will also explore the role of the designer when there is no interface left to design: Will designers become choreographers and storytellers instead? Or will they disappear from the landscape entirely, replaced by algorithmic processes, artificial intelligence, and gene sequencing? What we can say for sure is that the speed of change is accelerating so rapidly that the advanced interface technologies we marvel at today will seem as outdated as FORTRAN before we have time to draw breath.

Written by Andy Goodman and Marco Righetto.

[Images copyright Fjord, from their presentation.]

Comments

  • ds_design

    How do we imagine the lick detector (aka 'slobber chops') would detect the difference between a deliberate (and potentially sleazy) attempt at flirting and the rather more inane task of removing a piece of corn from your teeth? This is of particular concern as a wonky-toothed Brit.

    As mentioned below, there are going to be a lot of barriers to overcome in the way a system detects and responds to deliberate, nuanced gestures versus accidental (and quite natural) spasms of the human body.

  • lina

    instead of carpal tunnel fingers we would be experiencing crick necks... staring and nodding every few seconds

  • Claudia Brauer

    Scary, but probably true. The issue, as with all great inventions, will be who is in control. Will it be you and me, or some big-brother government? That is the question. Once you have the ability to control my body and my mind, you basically "own" me. What would keep governments from inserting these chips in every newborn and then producing the human beings they want? I am guessing it is up to us to make sure that this does not happen and that, just like with the iPad, we stay in control of the information and actions we desire for ourselves.

  • Chris Kelly

    There is already no interface involved in flirting with someone across a room. By adding in these systems you're getting in the way of something that would naturally happen anyway, by adding, not removing, an interface: staring at someone for two seconds IS a pairing protocol.

    What's making mobile technology useful is placing interfaces that link us to things that were impossible before. I can see round the corner before I walk there because I have a map app. I can see my friends' holiday photos as they are taken, almost in real time.

    When technology goes inside us, we won't have to lick our teeth to inform the doctor we have a problem; they will simply know.

  • stansbuj

    1953 FORTRAN PUNCHED CARD? FORTRAN is a programming language and a punched card is a stiff piece of paper. The first FORTRAN compiler was delivered in April 1957.

  • Tony

    Hmmm, as someone who works in the control and automation industry, it looks to me like this theorist hasn't really engaged with the practical problems of gesture and control.

    Kinect is too impractical for everyday applications, and besides, who wants to make body-wide movements to do something incremental? Voice control has the problem of everyone hearing the instruction. Who wants that? And micro-gestures, well, there will be an awful lot of spam in the system.

    A visual interface will remain among us for a while yet in one form or another. To me, the activity of control is an iterative thing, first on a conscious and then on a subconscious level. What's the best interface then? Another human being.

  • Marra

    This is gorgeous. But first, the engineers of the human body will test all kinds of chemicals to create the best materials. Then they will think about introducing a new era of products to the market. So it will be a pleasure to see this article again in fifty years.

  • MacGoo

    Pie in the sky. Sorry Andy, but memorizing a complex series of gestures that includes teeth-licking (that sounds more absurd every time I consider it) is NOT nearly so intuitive as a touchscreen, or even a mouse. 

    Perhaps we will see some natural gesture recognition in the near future, but in order to be successful, this type of technology would need to be ubiquitous. And to achieve ubiquity means overcoming the aversion to implanted sensors - a rather gigantic obstacle. People are struggling to get back into the idea of wearing a WATCH right now.

  • Nicholas McDonald

    This article was clearly written by people who are truly out of touch with reality. Ever heard of Google Glass? I'm sure that alone will push technology forward far more than Siri. I can imagine that some people find it incredibly fascinating to be able to talk to their phones; however, compared to Google Now (Innovation of the Year, 2012), Siri pales.

  • Nicholas McDonald

     
    I could ask you the same question. Maybe you should check the section "2011: Siri, the no-interface interface."

    And at no point did I mention that the writer alluded to Siri being the pinnacle of human interfaces. All I mentioned was what I saw: a major omission on the software and hardware front which will undoubtedly push forward the technology for human interfaces.

  • Manuel González Noriega

    Have you even read the article? Where did you get the idea that it talks about Siri as the pinnacle of human interfaces?

  • Scott Bell

    Would you say the time has come for user analytics? With all the integration, do WE really know how our other brain (our mobile) is being used? To what extent? Quantified-self-like? Shall we remain largely unconscious of our own activity online?

    MyData: an integrated approach to expanding the USER'S awareness of analytics
    http://www.youtube.com/watch?v...

  • georgeboutilier

    So, according to Andy Goodman and Marco Righetto, the summation of 200+ years of technological innovation reaches its pinnacle in the near future by allowing two people to pick each other up at a party by licking their chops.

  • MacGoo

    ROFL. Well said sir. Framing it in that way doesn't reduce the absurdity of it, but it makes it much more pleasantly ironic.