Foreign Literature Translation for Graduation Thesis: A System for Remote Video Surveillance and Monitoring
A System for Remote Video Surveillance and Monitoring

The thrust of CMU research under the DARPA Video Surveillance and Monitoring (VSAM) project is cooperative multi-sensor surveillance to support battlefield awareness. Under our VSAM Integrated Feasibility Demonstration (IFD) contract, we have developed automated video understanding technology that enables a single human operator to monitor activities over a complex area using a distributed network of active video sensors. The goal is to automatically collect and disseminate real-time information from the battlefield to improve the situational awareness of commanders and staff. Other military and federal law enforcement applications include providing perimeter security for troops, monitoring peace treaties or refugee movements from unmanned air vehicles, providing security for embassies or airports, and staking out suspected drug or terrorist hide-outs by collecting time-stamped pictures of everyone entering and exiting the building.

Automated video surveillance is an important research area in the commercial sector as well. Technology has reached a stage where mounting cameras to capture video imagery is cheap, but finding available human resources to sit and watch that imagery is expensive. Surveillance cameras are already prevalent in commercial establishments, with camera output being recorded to tapes that are either rewritten periodically or stored in video archives. After a crime occurs (a store is robbed or a car is stolen), investigators can go back after the fact to see what happened, but of course by then it is too late. What is needed is continuous 24-hour monitoring and analysis of video surveillance data to alert security officers to a burglary in progress, or to a suspicious individual loitering in the parking lot, while options are still open for avoiding the crime.

Keeping track of people, vehicles, and their interactions in an urban or battlefield environment is a difficult task. The role of VSAM video understanding technology in achieving this goal is to automatically "parse" people and vehicles from raw video, determine their geolocations, and insert them into a dynamic scene visualization. We have developed robust routines for detecting and tracking moving objects. Detected objects are classified into semantic categories such as human, human group, car, and truck using shape and color analysis, and these labels are used to improve tracking using temporal consistency constraints. Further classification of human activity, such as walking and running, has also been achieved. Geolocations of labeled entities are determined from their image coordinates using either wide-baseline stereo from two or more overlapping camera views, or intersection of viewing rays with a terrain model from monocular views. These computed locations feed into a higher-level tracking module that tasks multiple sensors with variable pan, tilt, and zoom to cooperatively and continuously track an object through the scene. All resulting object hypotheses from all sensors are transmitted as symbolic data packets back to a central operator control unit, where they are displayed on a graphical user interface to give a broad overview of scene activities. These technologies have been demonstrated through a series of yearly demos, using a testbed system developed on the urban campus of CMU.

Detection of moving objects in video streams is known to be a significant, and difficult, research problem. Aside from the intrinsic usefulness of being able to segment video streams into moving and background components, detecting moving blobs provides a focus of attention for recognition, classification, and activity analysis, making these later processes more efficient since only "moving" pixels need be considered.

There are three conventional approaches to moving object detection: temporal differencing, background subtraction, and optical flow. Temporal differencing is very adaptive to dynamic environments, but generally does a poor job of extracting all relevant feature pixels. Background subtraction provides the most complete feature data, but is extremely sensitive to dynamic scene changes due to lighting and extraneous events. Optical flow can be used to detect independently moving objects in the presence of camera motion; however, most optical flow computation methods are computationally complex and cannot be applied to full-frame video streams in real time without specialized hardware.

Under the VSAM program, CMU has developed and implemented three methods for moving object detection on the VSAM testbed. The first is a combination of adaptive background subtraction and three-frame differencing. This hybrid algorithm is very fast and surprisingly effective; indeed, it is the primary algorithm used by the majority of the SPUs in the VSAM system. In addition, two new prototype algorithms have been developed to address shortcomings of this standard approach. First, a mechanism for maintaining temporal object layers is developed to allow greater disambiguation of moving objects that stop for a while, are occluded by other objects, and then resume motion. One limitation that affects both this method and the standard algorithm is that they only work for static cameras, or in a "step-and-stare" mode for pan-tilt cameras. To overcome this limitation, a second extension has been developed.