Last week, we reported on a cool, if seemingly far-fetched, UI concept that’d let you drag files from your phone to your computer with a swipe of the finger. The idea is “so simple and clever, you wonder why it doesn’t exist already,” we wrote. Hours later, an email appeared in our inbox, subject line: “it exists!”
The message came courtesy of Natan Linder, a PhD student in the Fluid Interfaces group at the MIT Media Lab. Linder and undergraduate researcher Alexander List are the developers of Swÿp, a piece of open-source software that facilitates “cross-app, cross-device data exchange using physical ‘swipe’ gestures,” they write on their website. “Our framework allows any number of touch-sensing and collocated devices to establish file-exchange and communications with no pairing other than a physical gesture.” Translation: Dragging files from a phone to a computer with a swipe of the finger isn’t just a cool, far-fetched idea; it’s reality. Watch the demo:
Here’s the amazing part: They didn’t hack the iPhone and iPad with IrDA transceivers or anything like that, which would’ve enabled the devices to detect each other in 3-D space, a la Sifteo cubes. Instead, List and Linder exploited the capabilities the devices already had.
Swÿp gathers information such as the approximate locations of your phone and iPad (available via Wi-Fi) and account details (via sites like Facebook or Gmail), then ties that information to a real-time gesture: the swipe (or Swÿp). Hold two Swÿp-enabled devices next to each other, and they’re able to communicate in a language both understand, a hybrid of the digital and physical worlds.
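The article doesn’t spell out how the matching works under the hood, but the idea can be sketched in a few lines: a hypothetical pairing step treats a swipe as two events, a finger leaving one screen and arriving on another, and pairs devices whose events land on the same network within a short time window. The names (`SwipeEvent`, `match_swipes`, the half-second window) are illustrative assumptions, not Swÿp’s actual API.

```python
from dataclasses import dataclass

@dataclass
class SwipeEvent:
    device_id: str   # which device reported the touch
    kind: str        # "out" = finger left this screen, "in" = finger arrived
    timestamp: float # seconds since some shared clock
    network_id: str  # e.g. the Wi-Fi network both devices sit on

def match_swipes(events, window=0.5):
    """Hypothetical pairing rule: a swipe-out on one device followed by a
    swipe-in on a *different* device, on the same network and within
    `window` seconds, is treated as one continuous physical gesture."""
    outs = [e for e in events if e.kind == "out"]
    ins = [e for e in events if e.kind == "in"]
    pairs = []
    for o in outs:
        for i in ins:
            if (i.device_id != o.device_id
                    and i.network_id == o.network_id
                    and 0 <= i.timestamp - o.timestamp <= window):
                pairs.append((o.device_id, i.device_id))
    return pairs
```

A swipe that leaves a phone at t=10.0 and lands on a tablet at t=10.2 on the same network would pair the two; a touch on a third device two seconds later would not. The real system presumably does something more robust (the article mentions account details as another signal), but the time-and-proximity intuition is the core of the gesture-as-pairing idea.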
Why should we care? For one thing, it’s a hell of a lot easier to transfer files that way. I’d rather share a photo with a friend sitting next to me by dragging my finger across our screens than slog through a bunch of steps to send it via Dropbox, YouSendIt, or email. In a larger sense, Swÿp takes the mysterious computational process of sharing data (something we do all the time but never see) and externalizes it, giving it a tactile, intuitive interface. It allows users to “immediately grasp the concepts behind device-to-device communications,” Swÿp’s developers say.
“It’s a very smart way to use existing devices without any added technology,” says Ishac Bertran, who developed the UI concept we reported on last week. He points out, though, that Swÿp doesn’t enable a completely seamless user interaction. For instance, he says, after swiping to create a connection between two gadgets, you can’t change their position, or they’ll lose their spatial link. A device equipped with spatially aware sensors, as Bertran envisions it, wouldn’t have that problem.
List started Swÿp, then Linder jumped on board. Linder was a member of the Media Lab’s LuminAR project, which built a desk lamp that can turn any surface into an interactive space (such as a guide for shoppers), and he saw Swÿp as a “great fit with the stuff I was working on.”
Swÿp can be used for iOS and LuminAR, but it’s still part of ongoing research, so it’s not available in app form yet. “Our hope is that developers would jump in and contribute to the open-source project, make it better, and that app makers will incorporate it into their apps, making them Swÿp-enabled,” Linder says.
List and Linder picture a world in which the cumbersome process of sharing digital information with your neighbor is replaced by simple physical gestures. They’ve tried marrying Swÿp and LuminAR to create a new type of experience that lets users collaborate and create digital content together. “Most recently we’ve been deploying a website wherein any Internet-connected device would be able to Swÿp with any other,” Linder says. “Our target is everyone who uses touch- and gesture-enabled devices, counting laptops and iPads, but also screens with a Kinect setup. But we are still early on, and even though we demonstrated the working tech, there is lots to do to fulfill our vision of different devices chatting to each other using nothing but user-generated gestures.”
[Images courtesy of Natan Linder]