

Research on Human Behavior Analysis Technology Based on Binocular Vision

【Abstract】: Human behavior analysis is an active research topic in computer vision, with broad application prospects in video surveillance, perceptual interfaces, motion analysis, virtual reality, and other fields. A key task in behavior analysis is to cope with occlusion and ambiguity, complex and changing environments, and the non-rigid nature of the human body. This thesis therefore studies human behavior analysis based on binocular vision, focusing on binocular stereo matching and depth-information acquisition and on behavior analysis with convolutional neural networks, and proposes several solutions and improvements. The main contributions are as follows:

1. For stereo matching and depth-information acquisition, a stereo matching algorithm is proposed that combines SURF (Speeded-Up Robust Features) on human edge information with region matching. The algorithm aims to reduce the influence of occlusion and ambiguity and to improve the accuracy of behavior analysis by introducing 3D depth information. It consists of four parts: binocular-system calibration, moving-target detection, SURF stereo matching with region-matching refinement, and 3D information acquisition. After the binocular system is calibrated with a two-step planar-template method, the moving human target is extracted by background subtraction with an improved Gaussian mixture model. During matching, SURF matching is first applied to the extracted human edge information, and the result is then refined with a region matching algorithm based on the epipolar constraint, improving the accuracy of the matched human feature points. Finally, 3D depth information is recovered from the matched points. Experiments show that the algorithm accurately recovers the 3D spatial coordinates of the human body and effectively suppresses the interference caused by occlusion and ambiguity.

2. For behavior analysis, a human behavior analysis algorithm based on a small-sample convolutional neural network (CNN) is proposed. The network is divided into feature extraction layers and feature mapping layers: the feature extraction layers use CNN neurons to perceive and extract local features, and a stack of feature mapping layers then performs the corresponding computations, making feature extraction more accurate and reliable. The algorithm classifies the images captured by the left and right cameras of the binocular system with the CNN separately, then fuses the two recognition results with weights; by tuning the system parameters, a higher behavior matching score is obtained. Experiments show that the algorithm accurately recognizes both single-person and interactive actions and effectively improves the recognition rate of human behavior analysis.
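To make the first pipeline concrete, the sketch below strings together the steps the abstract names: Gaussian-mixture background subtraction, edge extraction of the segmented person, SURF matching on the two edge images pruned by the epipolar (same-row) constraint of a rectified pair, and depth from disparity via Z = f·B/d. This is a minimal OpenCV sketch under those assumptions, not the thesis's implementation; the function names, thresholds, and the ORB fallback (used when the non-free SURF module is absent from the OpenCV build) are illustrative choices.

```python
# Illustrative sketch only -- thresholds and helper names are placeholders.
import cv2
import numpy as np

def extract_person_edges(frame, subtractor):
    """Segment the moving person with a Gaussian-mixture background
    subtractor, then return an edge map of the segmented region."""
    mask = subtractor.apply(frame)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    person = cv2.bitwise_and(frame, frame, mask=mask)
    return cv2.Canny(cv2.cvtColor(person, cv2.COLOR_BGR2GRAY), 50, 150)

def match_and_triangulate(edges_l, edges_r, focal_px, baseline_m, row_tol=2.0):
    """SURF-match the left/right edge images, keep matches that satisfy the
    epipolar (same-row) constraint, and compute depth Z = f * B / disparity."""
    try:
        det, norm = cv2.xfeatures2d.SURF_create(400), cv2.NORM_L2   # opencv-contrib, non-free build
    except (AttributeError, cv2.error):
        det, norm = cv2.ORB_create(1000), cv2.NORM_HAMMING          # stand-in when SURF is unavailable
    kp_l, des_l = det.detectAndCompute(edges_l, None)
    kp_r, des_r = det.detectAndCompute(edges_r, None)
    matches = cv2.BFMatcher(norm, crossCheck=True).match(des_l, des_r)

    points = []
    for m in matches:
        (xl, yl), (xr, yr) = kp_l[m.queryIdx].pt, kp_r[m.trainIdx].pt
        disparity = xl - xr
        if abs(yl - yr) <= row_tol and disparity > 0:               # epipolar check on a rectified pair
            points.append((xl, yl, focal_px * baseline_m / disparity))
    return points                                                    # (u, v, Z) for each accepted match
```

A typical caller would build the subtractor once with cv2.createBackgroundSubtractorMOG2() and feed it consecutive frames from each camera, then pass a rectified left/right edge pair together with the focal length (in pixels) and baseline (in metres) obtained from the calibration step.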
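The abstract describes the second contribution only as "weight fusion" of the left- and right-camera CNN outputs with a tunable parameter. One plausible reading is a convex combination of the two per-class probability vectors, sketched below in NumPy; the class names and the weight value are hypothetical, and the CNN classifiers themselves are not shown.

```python
import numpy as np

def fuse_left_right(probs_left, probs_right, w_left=0.5):
    """Weighted fusion of the per-class probabilities predicted by the
    left-camera and right-camera CNN classifiers; w_left is the tunable weight."""
    fused = (w_left * np.asarray(probs_left, dtype=float)
             + (1.0 - w_left) * np.asarray(probs_right, dtype=float))
    return int(np.argmax(fused)), fused

# Hypothetical softmax outputs over three action classes (walk, wave, handshake).
label, dist = fuse_left_right([0.20, 0.70, 0.10], [0.10, 0.40, 0.50], w_left=0.6)
print(label, dist)   # index of the fused prediction and the fused distribution
```

Sweeping w_left on a validation set is one simple way to realize the "adjusting the system parameters" step the abstract mentions.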
【Degree-granting institution】: North China University of Technology
【Degree level】: Master
【Year degree conferred】: 2017
【CLC number】: TP391.41

