Technology

Artificial Intelligence's Ultimate Challenge? Cyber Attacks

At MIT, machine learning specialists are training deep learning algorithms to spot cyber attacks. It may be AI's ultimate test.

Have you heard the one about how our jobs are about to be snatched away by machines? Or how artificial intelligence will ultimately rise up against us?

AI is a field full of tropes, many of which come from places of truth: AI is evolving at an incredible speed, and humans are teaching some AI to learn using the same basic model found in our own craniums. But for a more realistic take on the future of AI, look no further than the many software engineers and companies that have struggled to create an intelligent system that can identify cyber attacks.

"We were trying to figure out what is the foundational problem—why do we have so many cyber attacks and data breaches that are going undetected?" says Kalyan Veeramachaneni, a research scientist at MIT's Computer Science and Artificial Intelligence Lab and the author of a paper released today titled "Training A Big Data Machine To Defend." After all, it should be easy: Processing power has increased to a level where detecting attacks within billions of pieces of data is now feasible. And machine learning has progressed to the point where it's possible to build these types of intelligent attack-detection algorithms, too.

[Image: MIT CSAIL]

Yet there was one major problem: Who would "teach" it? The human analysts who study cyber attacks are in extreme demand and already work extraordinarily taxing jobs. Forcing them to double-check the work of an AI system just didn’t make sense. Veeramachaneni realized that the problem wasn’t just about machine learning; it was about human-computer interaction. They needed to design an AI, sure, but an AI with a human-facing interface that only bothered its human teachers at the right times—and learned from them seamlessly.

The system they designed over the past three years—unveiled publicly today—is called AI Squared. From a security perspective, it’s a huge advance: It detected real-world attacks with 85% accuracy and reduced false positives by a factor of five when tested on more than three billion logs from a real-world e-commerce platform. But what’s fascinating about the software is how it interacts with—and completely depends on—its human teacher.

On a very basic level, here’s how AI Squared works. First, using a recurrent neural network and other machine learning techniques, it parses the huge amount of data generated by users—the proverbial "haystack"—for potentially odd activity, a process called "unsupervised learning." Once it has identified the anomalies—the "needles"—it notifies its human analyst and presents its findings. The human confirms or denies each needle, and their decisions are relayed back to the AI—which turns them into a model to use the next day, in a process called "supervised learning." "We can now evolve alongside the attacker," says Veeramachaneni.
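To make that loop concrete, here is a rough sketch of one day's cycle in Python. It is not the AI Squared code: IsolationForest stands in for the system's unsupervised outlier detection, RandomForestClassifier for the supervised model built from analyst verdicts, the `analyst` callback is a hypothetical stand-in for the human reviewer, and each row of `features` is assumed to be one user's behavior already extracted from raw logs.

```python
# A minimal sketch of the detect -> review -> retrain loop described above.
# Not the AI Squared implementation: the detectors, the model, and the
# analyst callback are all illustrative stand-ins.
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

def unsupervised_scores(features):
    """Search the 'haystack' with no labels: higher score = more anomalous."""
    detector = IsolationForest(random_state=0).fit(features)
    return -detector.score_samples(features)

def daily_cycle(features, analyst, model=None, budget=200):
    """One day's loop: surface top anomalies, collect verdicts, retrain."""
    scores = unsupervised_scores(features)
    if model is not None and len(model.classes_) > 1:
        # Fold in yesterday's supervised model so patterns the analyst
        # has already confirmed rank above raw statistical oddness alone.
        scores = scores + model.predict_proba(features)[:, 1]
    shown = np.argsort(scores)[::-1][:budget]  # today's "needles"
    labels = analyst(shown)                    # human confirms/denies each one
    model = RandomForestClassifier(random_state=0).fit(features[shown], labels)
    return model, shown

# Toy usage: 10,000 synthetic "users", with an oracle in place of the analyst.
rng = np.random.default_rng(0)
features = rng.normal(size=(10_000, 20))
analyst = lambda idx: (np.asarray(idx) < 100).astype(int)  # hypothetical verdicts
model, shown = daily_cycle(features, analyst)
```

On real data the analyst's judgment, not an index rule, supplies the labels; the point is only the shape of the loop: unsupervised ranking, limited human review, supervised retraining the next day.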

The way these analysts interact with the system is crucial. Because the AI depends on their daily input to create a better, always-evolving prediction model, it needs to establish trust with the humans—and offer them an easy, instantaneous way to give feedback to the machine. "The challenge was making sure we didn’t overwhelm them," he explains. "Most systems are currently failing in that respect."

The design does this in two ways. First, the AI Squared system carefully limits how much information it shows to analysts—and steadily reduces that number over time. So while it might show its teacher 200 possible attacks on day one, it may show only 50 a few weeks later, because it learns from every day’s work.
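The article gives only the endpoints of that schedule, roughly 200 alerts at the start and about 50 a few weeks in, so the decay curve below is an invented illustration rather than the system's actual policy:

```python
# Toy illustration of a shrinking review budget. Only the 200 -> 50 trend
# comes from the article; the exponential schedule itself is an assumption.
def daily_budget(day, start=200, floor=50, decay=0.95):
    """Number of alerts shown to the analyst on a given day."""
    return max(floor, round(start * decay ** day))

print(daily_budget(0))    # day one: 200 alerts
print(daily_budget(28))   # about four weeks in: down to the floor of 50
```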

The analysts can also give feedback anywhere at any time, either on their smartphones or computers, so that the system can always be learning. They do this through a simple, visually driven user interface that maps threats to a network and uses simple, graphical elements to communicate with the human.

The interface is essentially a visual translator—it takes the subjective, expertise-driven ideas of the humans on one side and translates them into math, which the AI can then build a model around. It’s a GUI for machine learning.
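As a sketch of what that translation might look like in code, here is one hypothetical shape for it; the event structure and field names are invented for illustration, not taken from AI Squared:

```python
# Hypothetical sketch of the "visual translator": each confirm/deny click
# in the GUI becomes a labeled example the learner can consume.
from dataclasses import dataclass

@dataclass
class AnalystVerdict:
    event_id: str     # which flagged event the analyst reviewed
    confirmed: bool   # True = real attack, False = false alarm

def verdicts_to_labels(verdicts):
    """Turn subjective expert clicks into the 0/1 labels a model trains on."""
    return {v.event_id: int(v.confirmed) for v in verdicts}

labels = verdicts_to_labels([
    AnalystVerdict("evt-042", True),    # analyst confirmed an attack
    AnalystVerdict("evt-107", False),   # analyst dismissed a false alarm
])
```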

"Putting those two things together—with active learning—we can now actually do this in real time at scale," Veeramachaneni says. "What we’ve found is that even though it’s possible, it hasn’t been done anywhere." Now that AI Squared is public, the company funding it—PatternEx—is setting up case studies with companies that will run the software and test how it learns over longer time periods.

It’s a great example of how even the most advanced AI still needs humans to truly learn—and as a result, still needs designers to craft the language that humans and machines use to talk to each other.

[Photo: Flickr user Arkadiusz Sikorski]
