U.S. Army Sponsored Artificial Intelligence Surveillance System Attempts to Predict the Future

By Reuven Cohen

In something that looks straight out of the CBS show “Person of Interest,” the science website Phys.org is reporting on a potentially important breakthrough from researchers at Carnegie Mellon. In research sponsored by the United States Army Research Laboratory, the Carnegie Mellon researchers presented an artificial intelligence system that uses specially programmed software to analyze real-time video surveillance feeds and predict what a person will ‘likely’ do next. The system can automatically identify and notify officials when it recognizes an action that is not permitted, detecting what the researchers describe as anomalous behavior. One example cited in the paper is a camera at an airport or bus station, with an autonomous system flagging a bag that has been abandoned for more than a few minutes.
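To make that flagging rule concrete, here is a minimal sketch of how the output of an object tracker could be screened with a simple time threshold. This is not the CMU system; the Track fields and the 120-second threshold are assumptions chosen purely for illustration.

```python
from dataclasses import dataclass

# Hypothetical track record produced by an upstream object detector/tracker.
@dataclass
class Track:
    object_id: str
    label: str                # e.g. "bag" or "person"
    last_owner_seen_s: float  # seconds since a person was last near this object

ABANDON_THRESHOLD_S = 120.0   # "more than a few minutes" -- an illustrative value

def flag_abandoned_bags(tracks):
    """Return bag tracks whose owner has been absent longer than the threshold."""
    return [t for t in tracks
            if t.label == "bag" and t.last_owner_seen_s > ABANDON_THRESHOLD_S]

# Toy example: one attended bag, one abandoned bag.
tracks = [Track("bag-1", "bag", 15.0), Track("bag-2", "bag", 300.0)]
for bag in flag_abandoned_bags(tracks):
    print(f"ALERT: {bag.object_id} unattended for {bag.last_owner_seen_s:.0f}s")
```

In a real system the threshold and the notion of “near” would presumably come from the system’s knowledge base rather than hard-coded constants.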

The paper presents a complex knowledge infrastructure for a high-level artificial visual intelligence system, called the Cognitive Engine. In particular, it describes how the conceptual specifications of basic action types can be driven by hybrid semantic resources; in layman’s terms, the context of an action. For example, is a person leaving a bag because he’s sitting next to it? Or has that person left altogether?
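A rough way to picture that contextual check, again purely as an illustration rather than the Cognitive Engine itself (the scene facts, field names, and two-metre proximity radius are assumptions):

```python
from math import dist

# Hypothetical scene facts, e.g. extracted from a vision pipeline.
# Positions are (x, y) coordinates in metres on the ground plane.
scene = {
    "bag-7":    {"type": "bag",    "pos": (4.0, 2.0), "put_down_by": "person-3"},
    "person-3": {"type": "person", "pos": (4.5, 2.2), "in_view": True},
}

NEARBY_RADIUS_M = 2.0  # illustrative stand-in for "sitting next to it"

def bag_context(bag_id, scene):
    """Label a put-down bag 'attended' or 'abandoned' from its owner's whereabouts."""
    bag = scene[bag_id]
    owner = scene.get(bag.get("put_down_by"))
    if owner is None or not owner["in_view"]:
        return "abandoned"   # the person has left the scene altogether
    if dist(bag["pos"], owner["pos"]) <= NEARBY_RADIUS_M:
        return "attended"    # the person is still sitting right next to the bag
    return "abandoned"       # the person is in view but has walked away

print(bag_context("bag-7", scene))  # -> "attended" in this toy scene
```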

The goal of the research is to create an artificial intelligence system that approaches human visual intelligence: the ability of a computer system to make effective and consistent detections. The researchers note that humans evolved by learning to adapt and properly react to environmental stimuli, becoming extremely skilled at filtering and generalizing over perceptual data, making decisions and acting on the basis of acquired information and background knowledge. To approach that capability, computer vision algorithms need to be complemented with higher-level tools of analysis involving knowledge representation and reasoning, often under conditions of uncertainty.

The Cognitive Engine is the core module of the Extended Activity Reasoning system (EAR) in the CMU Mind’s Eye architecture. Mind’s Eye is the name of the Defense Advanced Research Projects Agency (DARPA) program for building AI systems that can filter surveillance footage in support of human (remote) operators and automatically alert them whenever something suspicious is recognized (such as someone leaving a package in a parking lot and running away).

Alessandro Oltramari, a postdoctoral researcher, and Christian Lebiere, both from the Department of Psychology at Carnegie Mellon, suggest that this automated video surveillance approach could find applications in both military and civil environments.

Original article found at:
http://www.forbes.com/sites/reuvencohen/2012/10/29/u-s-army-sponsored-artificial-intelligence-surveillance-system-attempts-to-predict-the-future/