
16 Wild Research Experiments That Could Change Design

Siggraph is the year’s biggest conference on computer graphics research. Here’s a look at its most compelling tech.

Whether it’s jaw-dropping UI, compelling use cases for 3D printing, or the most realistic computer-generated human faces you’ve ever seen, there were hundreds of technological breakthroughs on view at Siggraph, the graphics conference that took place in Anaheim, California, in July.

At Siggraph, academic and corporate researchers from around the world present the latest work in computer graphics and interface design. The only problem? These are scientists we’re talking about, so they’re submitting full-blown academic papers. And in this pile of highly specialized research, full of papers on minutiae like niche skeleton animations or properly rendering scratches on metal surfaces, the gems that a mass audience will appreciate (or, frankly, even understand) can be hard to find.

So we went through all 119 technical papers being presented at this year’s Siggraph conference to highlight the most incredible, notable, and sometimes just plain fun projects. Here’s our list.

How We Fabricate

Cheap, Quick, And Easy Molded Objects
We tend to think that 3D printing is the only way to create one-of-a-kind objects with your computer, but a new hardware/software technique called Computational Thermoforming can create highly detailed plastic objects (in terms of both texture and color), ranging from masks, to plastic food, to tiny landscape models. It’s sort of like artisanal injection molding for hobbyists and Etsy types.

Bendy, Squishy, 3D-Printed Things
Your average 3D-printed object is not only stiff, it’s equally stiff in all its parts. But in one Siggraph paper, called Procedural Voronoi Foams for Additive Manufacturing, Disney Research details how to print meshed internal structures, or foams, whose density varies from region to region, making some parts more elastic than others. Your teddy bear’s neck can bend, but its back can stay rigid.
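
For the technically curious, here’s a minimal 2D sketch in Python of the core idea, grading Voronoi cell density across a shape. The density field and all parameters are our own invention; the actual paper works in 3D and guarantees the resulting struts stay printable.

```python
import numpy as np
from scipy.spatial import Voronoi

# Toy 2D illustration of a graded Voronoi foam: seed points are
# rejection-sampled against a density field, so cells come out small
# (stiffer foam) where density is high and large (softer) where it's low.

def density(p):
    # Hypothetical density field: stiff on the left, soft on the right.
    return 1.0 - 0.8 * p[0]

rng = np.random.default_rng(0)
seeds = []
while len(seeds) < 400:
    p = rng.random(2)                 # candidate point in the unit square
    if rng.random() < density(p):     # keep with probability ~ density
        seeds.append(p)

vor = Voronoi(np.array(seeds))
# vor.ridge_vertices describes the cell walls; thickening each wall
# into a strut of fixed width yields the printable foam geometry.
print(f"{len(seeds)} seeds -> {len(vor.ridge_vertices)} foam walls")
```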

More Practical 3D Printing
When an artist draws something, they break the scene into rough geometric shapes and then add detail. CofiFab brings that approach to the production of physical things: you fabricate an object’s core from quick-and-dirty laser-cut 2D pieces, then layer the finely detailed, 3D-printed shell on top like pieces of a puzzle. The result is cheaper and faster than sticking the same model into a 3D printer and just letting it rip.

Smarter Models (Without Modeling Expertise)
Bending long strips of wire is a quick and cheap way to make really large art projects and structural prototypes. But you’ve still got to design that whole wire skeleton properly so it doesn't collapse or tip. Now, a new system from the Institute of Science and Technology Austria can convert something like a life-size 3D model of a car into a wire frame that will be completely stable when you stick the schematics into a wire bender. MoMA, here we come.
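
One ingredient is a plain static-stability test: a sculpture won’t tip if its center of mass sits over the region where it touches the ground. Below is a toy Python version of that check, with a made-up tripod example; the real system also reasons about the wire’s elasticity and what a bending machine can physically produce.

```python
import numpy as np
from scipy.spatial import ConvexHull
from matplotlib.path import Path

# A wire sculpture won't tip over if its center of mass projects inside
# the convex hull of its ground-contact points. Crude simplification:
# we treat the sample points as evenly spaced along the wire.

def tips_over(wire_pts, ground_z=0.0, tol=1e-3):
    wire_pts = np.asarray(wire_pts)            # (N, 3) polyline samples
    com_xy = wire_pts.mean(axis=0)[:2]         # approximate center of mass
    contacts = wire_pts[wire_pts[:, 2] < ground_z + tol][:, :2]
    if len(contacts) < 3:
        return True                            # no stable support polygon
    hull = Path(contacts[ConvexHull(contacts).vertices])
    return not hull.contains_point(com_xy)

# Hypothetical tripod: three legs on the ground, apex above the center.
wire = [(0, 0, 1), (1, 0, 0), (0, 0, 1), (-0.5, 0.8, 0),
        (0, 0, 1), (-0.5, -0.8, 0)]
print("tips over:", tips_over(wire))           # False: stable
```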

How We Design

The Ultimate Cheat Code For Fashion Designers
The process of turning 2D patterns into 3D fashion confounds most of us. But what if creating fashion felt more like creating your avatar in a video game? A new paper from researchers at Adobe, Stanford, and the University of British Columbia details a system that allows you to dress a virtual mannequin, draping and adjusting virtual fabric over its body. Once you’re done, the system spits out a 2D cloth pattern that can be sewn together to make that look a reality.

Computers That Understand How Objects Work . . .
What makes a wheelbarrow a wheelbarrow? And how does it relate to a shopping cart or stroller? A new technology described in one Siggraph paper can scan categories of objects and find similarities in the way they are used: noting, for example, that the wheelbarrow, shopping cart, and stroller all have handles that we push. With that sort of data, designers could build "functional hybrids," reimagined objects that can do more than one thing.

. . . And How Humans Use Them
Another, similar project creates what the authors call "saliency maps" of objects. In essence, they’ve trained a computer to analyze 3D files and produce a heat map of which parts of an object we tend to touch (like the handle of a mug, or the direction pad on a video game controller). What can a system do with that knowledge? In product design, it’s a way to highlight where we might consider tactile-friendly materials; in the virtual world, it could add another layer of intelligence to any environment. A virtual character might know to grab an object by its handle, or a user interface could let you grasp such an object naturally, without extra programming.
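
As a rough illustration, here’s how such a heat map could be accumulated from recorded grasps. The function and its parameters are our own toy construction; the actual research goes further and predicts these maps for shapes the system has never seen.

```python
import numpy as np

# Build a "touch saliency" map: given contact points recorded from many
# grasps of an object, splat a score onto every nearby mesh vertex.

def touch_saliency(vertices, contact_points, radius=0.02):
    vertices = np.asarray(vertices)            # (V, 3) mesh vertices
    heat = np.zeros(len(vertices))
    for c in np.asarray(contact_points):       # (C, 3) observed touches
        d = np.linalg.norm(vertices - c, axis=1)
        heat += np.exp(-(d / radius) ** 2)     # Gaussian splat per touch
    return heat / heat.max()                   # normalize to [0, 1]

# For a mug, vertices near the handle accumulate the most touches, so
# their saliency approaches 1 and a renderer can color them "hot".
```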

Rendering In Real Life—No Computer Screen Required
This is so wild we had to watch the video a half dozen times to get it. An invention called the ZoeMatrope spins a series of models around so fast, under perfectly timed strobe lighting, that it fools your eyes and simulates how an object would look if it were made from various materials. It’s basically a way to see what your product will really look like before a sample ever comes off the assembly line.
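
The timing arithmetic, at least, is simple enough to sketch: with several models on a spinning platter, flash the strobe once per revolution, phase-shifted to the slot you want, and only that model appears frozen in place. Here’s a small Python illustration with hypothetical numbers.

```python
# Strobe timing for a ZoeMatrope-style display: one flash per revolution,
# phased so a single model slot is always lit in the same position.
# Alternating which slot gets the flash blends between the models.

def flash_times(slot, n_models, rev_per_sec, duration_s):
    period = 1.0 / rev_per_sec                 # one flash per revolution
    phase = slot / (n_models * rev_per_sec)    # when `slot` passes the window
    t, times = phase, []
    while t < duration_s:
        times.append(round(t, 6))
        t += period
    return times

# Show slot 2 of 12 models at 30 rev/s for the first tenth of a second.
print(flash_times(slot=2, n_models=12, rev_per_sec=30, duration_s=0.1))
```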

Automated Urban Planning
What’s the perfect way to build roads into any city? That’s a hard question to answer, requiring designers, architects, and traffic engineers to test and refine. But a new algorithm from Arizona State University and University College London can generate everything from the street grid of a midsized city to the cubicle flow of your office, all by starting with generalized information like preferred movement flows and likely destinations.
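
We can’t reproduce the algorithm here, but a deliberately crude stand-in shows the input/output contract: movement flows and destinations in, a network out. In this toy Python sketch of our own, a road budget is spent greedily on the connections with the best demand per unit length.

```python
import itertools
import numpy as np

# Toy flow-guided layout: connect destination pairs in order of
# demand-per-length until the construction budget runs out.

def plan_roads(points, demand, budget):
    points = np.asarray(points, dtype=float)
    candidates = []
    for i, j in itertools.combinations(range(len(points)), 2):
        length = np.linalg.norm(points[i] - points[j])
        candidates.append((demand[i][j] / length, length, i, j))
    roads = []
    for _, length, i, j in sorted(candidates, reverse=True):
        if length <= budget:                   # best-value roads first
            roads.append((i, j))
            budget -= length
    return roads

# Three hypothetical destinations, with heavy flow between 0 and 1.
demand = {0: {1: 10.0, 2: 1.0}, 1: {2: 1.0}}
print(plan_roads([(0, 0), (1, 0), (0, 5)], demand, budget=6.0))
```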

Designing Sounds
They’re called Acoustic Voxels, and they’re like little Lego shapes that can be stuck inside the cavity of a piggy bank or a car muffler to precisely control the pitch it emits. Using this Disney Research technique, you could create a squeaky toy with a very particular pitch, and the researchers have also demonstrated an app that identifies an object by its sound alone. Imagine squeezing a toy to activate a game on a phone or tablet, purely through sound.
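
The physics of why shape alone can set a pitch is classical: a single Helmholtz resonator, a cavity of volume V with a neck of cross-section A and length L, rings at roughly f = (c / 2π) · √(A / (V · L)). Acoustic Voxels chains many small chambers and models how they couple, but the one-cavity approximation below (in Python, with made-up dimensions) captures the intuition.

```python
import math

# Helmholtz resonance: f = (c / 2*pi) * sqrt(A / (V * L)), where c is
# the speed of sound, V the cavity volume, A the neck cross-section,
# and L the (effective) neck length.

def helmholtz_freq(cavity_volume_m3, neck_area_m2, neck_length_m,
                   speed_of_sound=343.0):
    return (speed_of_sound / (2 * math.pi)) * math.sqrt(
        neck_area_m2 / (cavity_volume_m3 * neck_length_m))

# Hypothetical squeaky-toy cavity: 30 ml, with a 5 mm wide, 1 cm long neck.
neck_area = math.pi * (0.005 / 2) ** 2
print(f"{helmholtz_freq(30e-6, neck_area, 0.01):.0f} Hz")  # ~442 Hz
```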

Giving Your Product Multiple Alternative Uses
You know how some bikes can fold up to fit somewhat awkwardly over your shoulder? Researchers are experimenting with what they call "configurables" to automate how everyday objects like living room furniture might mechanically reconfigure themselves for easier storage or more versatile use. In their paper, the researchers demonstrated a bench they’d designed that morphs into a picnic table at mealtime.

How We Interact

Avatars In Every Shape And Size
The media’s depiction of the human body is limited, skewed, and often offensive. So how might we bring more realistic bodies into virtual worlds? Imagine if you could use a simple descriptor, like "short" or "built," to craft a fully generated, realistic body that fits your preconceptions. That’s the goal of Body Talk. It’s easy-to-use, come-as-you-are technology that could bring a bit more realism to virtual reality.
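
Under the hood, statistical body models describe a body as an average mesh plus a weighted sum of learned shape components, so the final step of a words-to-body pipeline can be as simple as a learned map from descriptor scores to those weights. The Python sketch below is our own toy version of that step, with fake dimensions and a random stand-in for the learned matrix; it is not the paper’s actual model.

```python
import numpy as np

# Words -> shape coefficients -> mesh, assuming a learned linear map W
# from descriptor ratings ("short", "built", ...) to the weights of a
# statistical body model's shape components.

def body_from_words(ratings, W, mean_shape, components):
    betas = W @ ratings                        # descriptor scores -> coeffs
    return mean_shape + components @ betas     # linear blend of components

# Tiny fake model: 12 vertex coordinates, 3 components, 2 descriptors.
rng = np.random.default_rng(1)
mean_shape = rng.standard_normal(12)
components = rng.standard_normal((12, 3))
W = rng.standard_normal((3, 2))                # stand-in for a learned map
print(body_from_words(np.array([0.9, 0.2]), W, mean_shape, components))
```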

Give Yourself An Instant Video Chat Costume Change
Don’t like the color of your T-shirt? Wish your dress were silk instead of cotton? A technique called Live Intrinsic Video can take a live video feed, identify a garment you’re wearing, and convincingly change its material or hue in real time. Wild.
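
What makes the edit convincing is intrinsic decomposition: every frame factors, pixel by pixel, into reflectance (the material’s own color) times shading (the light falling on it). Swap the reflectance inside a garment mask, keep the shading, and every wrinkle and shadow survives. Doing that split on live video is the hard part this paper tackles; the Python sketch below simply assumes the decomposition is already available.

```python
import numpy as np

# Recolor a garment while preserving lighting: divide out the
# reflectance to recover shading, replace the albedo in the masked
# region, then recombine.

def recolor(image, reflectance, mask, new_color):
    shading = image / np.clip(reflectance, 1e-4, None)
    reflectance = reflectance.copy()
    reflectance[mask] = new_color              # swap the garment's albedo
    return reflectance * shading

# Hypothetical 2x2 frame in which the top row is the shirt.
img = np.full((2, 2, 3), 0.5)
refl = np.full((2, 2, 3), 0.5)
mask = np.array([[True, True], [False, False]])
print(recolor(img, refl, mask, np.array([0.8, 0.1, 0.1])))  # red shirt
```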

3D Movies Without Silly Glasses
Okay, okay, 3D movies may be a bit of a gimmick, but imagine if you didn’t need glasses. Cinema 3D is an almost silly-stupid solution. Much like any parallax screen, which shows your left and right eye slightly different patterns of vertical stripes without you ever noticing, this projection breaks an image down into vertical strips, too. But instead of working for just one pair of eyes, the screen scales to the pair of eyes in every seat of an entire theater. The breakthrough is in using so many parallax strips, so precisely arranged, that the geometric illusion holds for a whole audience watching from different angles.
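
The geometric core is easy to sketch: extend the ray from an eye through a slit in the barrier onto the screen, and similar triangles tell you which strip that eye sees; two eyes a few centimeters apart land on different strips, which is where the left and right images go. Below is a toy pinhole-slit model in Python, with every number hypothetical.

```python
# Which screen strip does an eye see through a parallax-barrier slit?
# Extend the eye->slit ray by the barrier-to-screen gap (similar
# triangles) and see which strip the ray lands on.

def strip_seen(eye_x, eye_dist, slit_x, gap, strip_width):
    screen_x = slit_x + (slit_x - eye_x) * gap / eye_dist
    return int(screen_x // strip_width)

seat_x, seat_dist, ipd = 0.0, 10.0, 0.065     # centered seat, 10 m away
left = strip_seen(seat_x - ipd / 2, seat_dist, slit_x=0.0, gap=0.05,
                  strip_width=0.001)
right = strip_seen(seat_x + ipd / 2, seat_dist, slit_x=0.0, gap=0.05,
                   strip_width=0.001)
print(left, right)                            # two different strips
```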

Use Your Real Face Inside A Computer Game
Face mapping tech that can put your visage onto, say, a video game soldier, has been around for a while. But those mapped faces can’t really emote. Now, researchers from Zhejiang University are able to film your face in a few pre-specified poses and build a digital version of your face that can smile, grimace, shout, and—very importantly—smooch. The results are deep in the uncanny valley, but promising nevertheless.
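
The standard machinery behind a rig like this is blendshapes: each captured pose becomes an offset from the neutral face, and any expression is a weighted blend of those offsets. Building good blendshapes from a handful of casual captures is the researchers’ contribution; the blend itself, sketched below in Python with fake data, is almost trivial.

```python
import numpy as np

# Blendshape evaluation: expression = neutral + sum_i w_i * (pose_i - neutral).

def expression(neutral, poses, weights):
    offsets = poses - neutral                  # (P, V, 3) deltas per pose
    return neutral + np.tensordot(weights, offsets, axes=1)

# Fake rig: a 100-vertex face, 4 captured poses, 70% smile + 30% smooch.
rng = np.random.default_rng(2)
neutral = rng.standard_normal((100, 3))
poses = neutral + 0.1 * rng.standard_normal((4, 100, 3))
face = expression(neutral, poses, np.array([0.7, 0.0, 0.0, 0.3]))
print(face.shape)                              # (100, 3)
```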

AI That Can Describe Your Poorly Drawn Sketches
We’ve seen computers identify objects in photos. But the Georgia Tech-built Sketchy Database is an AI that has learned to identify 12,500 different objects found in over 75,000 sketches. That’s right: it’s computer vision that can identify drawn things, not just photographs. It’s the sort of practical micro-intelligence that AI needs to reach day-to-day parity with humans. Long term, your robo-nanny will be able to very specifically compliment the random squiggle your toddler is drawing. In the short term, maybe we’ll get an app that does the same thing.
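
To show the shape of the task, here’s a deliberately tiny PyTorch classifier of our own: a rasterized sketch in, one score per category out. The real work trains far larger networks on paired sketch/photo data, which also enables sketch-based search, not just labeling.

```python
import torch
import torch.nn as nn

# Toy sketch classifier: a 64x64 grayscale sketch in, one score per
# object category out (we assume 125 categories, per the dataset).

class SketchNet(nn.Module):
    def __init__(self, n_categories=125):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_categories)

    def forward(self, x):                      # x: (batch, 1, 64, 64)
        return self.classifier(self.features(x).flatten(1))

logits = SketchNet()(torch.randn(1, 1, 64, 64))
print(logits.argmax(dim=1))                    # predicted category index
```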

For even more Siggraph reading, see the latest from Microsoft, Google, as well as incredible papers on rendering hair styles and algorithmically copying your sketching style.
