Predicting cyber attacks before they happen has two conventional approaches. One relies on a set of indicators specified by a human expert; the other uses machine learning to detect abnormal activity. The problem is that attackers routinely work around expert-written rules, while anomaly detection flags far too much activity that turns out not to be an attack.
AI² is a collaboration between MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and PatternEx, a machine learning startup focused on information security and threat prediction. The hybrid platform combines Artificial Intelligence with Analyst Intuition to give AI².
The system starts with three different unsupervised machine learning methods that identify and tag potentially suspicious activity. The tagged events are presented to a human analyst, who confirms or rejects each one as suspicious. That feedback is fed back into the machine learning loop, so the next cycle of detection incorporates the analysts' judgments.
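The cycle described above can be sketched in miniature. This is not AI²'s actual pipeline (which combines three unsupervised methods over massive log streams); it is a toy Python illustration under assumed data, using a simple z-score outlier detector as the unsupervised stage, a simulated analyst as the feedback stage, and a learned threshold as the next cycle's supervised refinement.

```python
import random

random.seed(0)

# Toy stand-in for a log stream: one numeric feature per "event"
# (say, logins per hour). Real systems ingest millions of log lines.
events = [random.gauss(5.0, 2.0) for _ in range(200)]
attack_idx = random.sample(range(200), 5)
for i in attack_idx:
    events[i] += 20.0  # injected "attacks" look like unusually heavy activity

# --- Cycle 1: unsupervised outlier scoring (z-score as a minimal example) ---
mean = sum(events) / len(events)
std = (sum((v - mean) ** 2 for v in events) / len(events)) ** 0.5
scores = [(v - mean) / std for v in events]

# Present the k most suspicious events to the human analyst.
k = 10
queue = sorted(range(len(events)), key=lambda i: scores[i], reverse=True)[:k]

# --- Analyst feedback (simulated): confirm the real attacks, reject the rest.
feedback = {i: (i in attack_idx) for i in queue}

# --- Cycle 2: fold the labels back in. Here the "supervised" step is just a
# threshold learned from confirmed attacks, so the next pass can auto-flag
# events the analyst would have confirmed.
confirmed_scores = [scores[i] for i, ok in feedback.items() if ok]
threshold = min(confirmed_scores)
auto_flagged = [i for i, s in enumerate(scores) if s >= threshold]
```

In this sketch the analyst labels only the top-k queue rather than all 200 events, which mirrors the key economy of the approach: human effort is spent only on the machine's most suspicious candidates, and each round of labels tightens the next round's detector.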
Using human analysts as a step in the machine learning process raised detection to 85 percent of attacks, roughly three times better than previous approaches, while also cutting the number of false positives by a factor of five.
Publish date: April 19, 2016 4:05 pm | Modified date: April 19, 2016 4:05 pm