The Defense Advanced Research Projects Agency (DARPA), the Pentagon’s research arm, has teamed up with scientists from Carnegie Mellon University to create “an artificial intelligence system that can watch and predict what a person will ‘likely’ do in the future using specially programmed software designed to analyze various real-time video surveillance feeds. The system can automatically identify and notify officials if it recognized that an action is not permitted, detecting what is described as anomalous behaviors.”
The system is expected to be used at various airports and bus stations, and if the program is successful, the technology could be installed at nearly every red light and intersection in America. According to Forbes, which broke the story, “Carnegie Mellon is one of 15 research teams and commercial integrators that is participating in a five-year program, started in 2010, to develop smart video software.”
DARPA spokesman Mark Geertsen said in a statement the goal of the project is “to invent new approaches to the identification of people, places, things and activities from still or moving defense and open-source imagery.”
The first of the projects under way is PetaVision. According to a statement released by DARPA, PetaVision is a “Multi-Modal Approach to Real-Time Video Analysis. Biologically-inspired, hierarchical neural networks to detect objects of interest in streaming video by combining texture/color, shape and motion/depth cues.”
A Web site maintained by the Los Alamos National Laboratory provided more insight into the technology and why the federal government may find it useful.
We seek to understand and implement the computational principles that enable high-level sensory processing and other forms of cognition in the human brain. To achieve these goals, we are creating synthetic cognition systems that emulate the functional architecture of the primate visual cortex. By using petascale computational resources, combined with our growing knowledge of the structure and function of biological neural systems, we can match, for the first time, the size and functional complexity necessary to reproduce the information processing capabilities of cortical circuits. The arrival of next generation supercomputers may allow us to close the performance gap between state of the art computer vision approaches by bringing these systems to the scale of the human brain.
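The idea of combining texture/color, shape and motion cues in a layered network can be illustrated with a toy sketch. This is not DARPA’s or Los Alamos’ software; the cue functions, weights and pooling step below are invented purely to show how separate low-level cue maps might be fused and then summarized by a coarser “higher layer,” loosely mimicking a cortical hierarchy.

```python
import numpy as np

def motion_cue(prev_frame, frame):
    """Per-pixel motion cue: absolute temporal difference, scaled to [0, 1]."""
    return np.abs(frame.astype(float) - prev_frame.astype(float)) / 255.0

def color_cue(frame, target=0.5, width=0.1):
    """Toy color/intensity cue: closeness of each pixel to a target value."""
    x = frame.astype(float) / 255.0
    return np.exp(-((x - target) ** 2) / (2 * width ** 2))

def combined_saliency(prev_frame, frame, w_motion=0.6, w_color=0.4):
    """Fusion sketch: weighted sum of cue maps, then a 4x4 max-pool acting
    as a coarser 'higher layer' that summarizes the most salient regions."""
    fused = w_motion * motion_cue(prev_frame, frame) + w_color * color_cue(frame)
    h, w = fused.shape
    pooled = fused[:h - h % 4, :w - w % 4].reshape(h // 4, 4, w // 4, 4).max(axis=(1, 3))
    return pooled

# two synthetic 8x8 "frames": a bright blob moves one pixel to the right
f0 = np.zeros((8, 8), dtype=np.uint8); f0[3, 3] = 255
f1 = np.zeros((8, 8), dtype=np.uint8); f1[3, 4] = 255
saliency = combined_saliency(f0, f1)
print(saliency.shape)  # (2, 2): coarse saliency map, one value per region
```

A real system would replace these hand-built cues with learned hierarchical features, but the fuse-then-pool pattern is the same in spirit.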
Another tool being designed by DARPA is Videovor. Currently, no specific information on the technology is available, but a Web site offering scholarly journals has an abstract of an article written on the subject.
Video data, generated by the entertainment industry, security and traffic cameras, video conferencing systems, video e-mails, and so on, is perhaps most time-consuming to process by human beings. In this paper, we present a novel methodology for “summarizing” video sequences using volume visualization techniques. We outline a system pipeline for capturing videos, extracting features, volume rendering video and feature data, and creating video visualization. We discuss a collection of image comparison metrics, including the linear dependence detector, for constructing “relative” and “absolute” difference volumes that represent the magnitude of variation between video frames. We describe the use of a few volume visualization techniques, including volume scene graphs and spatial transfer functions, for creating video visualization. In particular, we present a stream-based technique for processing and directly rendering video data in real time. With the aid of several examples, we demonstrate the effectiveness of using video visualization to convey meaningful information contained in video sequences.
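The “relative” and “absolute” difference volumes the abstract mentions can be sketched in a few lines. This is a hedged illustration of the general idea, not the authors’ implementation: it stacks per-pixel differences between consecutive frames into a 3D (time x height x width) volume, where large values mark change.

```python
import numpy as np

def absolute_difference_volume(frames):
    """Stack per-pixel absolute differences between consecutive frames
    into a 3D volume; large values indicate variation between frames."""
    frames = np.asarray(frames, dtype=float)
    return np.abs(np.diff(frames, axis=0))

def relative_difference_volume(frames, eps=1e-6):
    """Like the absolute volume, but normalized by local intensity so
    changes in dark regions are not drowned out by bright ones."""
    frames = np.asarray(frames, dtype=float)
    diffs = np.abs(np.diff(frames, axis=0))
    return diffs / (frames[:-1] + frames[1:] + eps)

# three 4x4 synthetic frames; a pixel "turns on" between frame 0 and 1
frames = np.zeros((3, 4, 4))
frames[1:, 2, 2] = 200
vol = absolute_difference_volume(frames)
print(vol.shape)     # (2, 4, 4)
print(vol[0].max())  # 200.0 -> change between frames 0 and 1
print(vol[1].max())  # 0.0   -> no change between frames 1 and 2
```

The paper then volume-renders such difference data; the volume itself is the simple part shown here.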
According to the abstract, the system processes video data in real time, and the source of that video feed is to be provided by “security and traffic cameras, video conferencing systems, video e-mails, and so on.”
The third description on DARPA’s list is geospatial-oriented structure extraction. According to the DARPA report, geospatial-oriented structure extraction is designed to deliver “automatic construction of a 3D wireframe of an object using as few images as possible from a variety of angles.”
These systems suggest that in the near future, all activities in public could be monitored and recorded, which has the potential to limit and even prevent crime. However, DARPA could be setting itself up for a major fight with privacy advocates, which could delay deployment, as organizations such as the American Civil Liberties Union (ACLU) may consider the use of these programs a serious violation of citizens’ rights.