Research and Implementation of Data-Driven Crowd Simulation Methods
Topics: crowd simulation; path planning. Source: master's thesis, Beijing Jiaotong University, 2017.
【Abstract】: Crowd simulation is used in academic, commercial, entertainment, and other fields, and has attracted much attention in recent years. Its goal is to model a crowd and to simulate the behavior of individuals within it. Because a crowd is a complex self-organizing system, the factors affecting pedestrian movement are numerous, and the application domains are broad, achieving high-quality crowd simulation poses many challenges, which underlines the value of this work. This thesis addresses three levels of crowd simulation: global path planning, local collision avoidance, and human-body animation. First, the scene is modeled with a Delaunay triangulation to obtain its statically reachable paths, and Dijkstra's algorithm then computes each pedestrian's shortest path from start to goal. Since a crowd scene contains more than static obstacles, collisions with surrounding people must also be avoided; to achieve realistic and efficient collision avoidance, this thesis focuses on data-driven simulation methods. For local collision avoidance, an example database is built from recorded pedestrian trajectories. During simulation, each virtual character queries the database for examples similar to its own state, uses collision prediction to select, from those similar examples, a behavior that will not collide with other characters or obstacles, and then reproduces that behavior. This method, however, depends on pedestrian motion data: with too little data, collisions occur easily; with too much data, simulation efficiency drops because the search over the database grows.
To address these two problems, this thesis introduces rules for computing collision-free velocities, together with a collision detection and resolution algorithm, so that few collisions occur even where the data does not cover the simulated scene. It also trains an artificial neural network on the trajectory data to predict the behavior of simulated individuals, removing the dependence on the database at simulation time so that efficiency is unaffected by the amount of data. Finally, motion-capture data and motion-data visualization are used to generate the characters' limb movements, extending pedestrian trajectories into full crowd walking animation and making the simulation more realistic and complete. Judging from the simulation results, this work effectively models scenes and performs global path planning, improves considerably on existing work in collision-avoidance simulation, and adds human animation to the motion trajectories. In addition, a prototype crowd simulation system was developed that integrates global path planning, local collision avoidance, and human animation in a single system. The prototype provides a graphical user interface so that users and researchers can understand the system more easily, see the result of each simulation stage directly, and build on it in future work.
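As a minimal illustration of the global-planning stage described above (a sketch, not the thesis's actual code), the following runs Dijkstra's algorithm over a small hand-made weighted graph; in the thesis, the graph nodes and edge weights would instead come from the Delaunay triangulation of the scene's walkable space:

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest path over a weighted adjacency dict {node: {neighbor: cost}}."""
    # Priority queue of (cost so far, node, path taken).
    frontier = [(0.0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nxt, w in graph.get(node, {}).items():
            if nxt not in visited:
                heapq.heappush(frontier, (cost + w, nxt, path + [nxt]))
    return float("inf"), []  # goal unreachable

# Toy walkable graph; weights stand in for distances between triangle centers.
graph = {
    "A": {"B": 1.0, "C": 4.0},
    "B": {"C": 1.0, "D": 5.0},
    "C": {"D": 1.0},
}
cost, path = dijkstra(graph, "A", "D")
print(cost, path)  # 3.0 ['A', 'B', 'C', 'D']
```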
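The example-lookup stage can likewise be sketched in miniature. Under invented state and database formats (the thesis's actual feature representation is not given here), the agent ranks stored example velocities by similarity to its current velocity and keeps the closest one that a linear-extrapolation collision predictor accepts:

```python
import math

def predict_collision(pos, vel, others, radius=0.3, horizon=2.0, dt=0.1):
    """Linearly extrapolate all positions and report whether the agent
    would come within 2*radius of any neighbour inside the horizon."""
    t = 0.0
    while t <= horizon:
        px, py = pos[0] + vel[0] * t, pos[1] + vel[1] * t
        for (ox, oy), (ovx, ovy) in others:
            qx, qy = ox + ovx * t, oy + ovy * t
            if math.hypot(px - qx, py - qy) < 2 * radius:
                return True
        t += dt
    return False

def choose_velocity(state, examples, others):
    """Rank example velocities by similarity to the agent's current
    velocity; return the most similar collision-free candidate."""
    pos, vel = state
    ranked = sorted(examples,
                    key=lambda v: math.hypot(v[0] - vel[0], v[1] - vel[1]))
    for cand in ranked:
        if not predict_collision(pos, cand, others):
            return cand
    return (0.0, 0.0)  # stop if every stored example would collide

# Agent heading +x; a stationary pedestrian blocks the straight path,
# so the straight-ahead example is rejected and a sidestep is chosen.
state = ((0.0, 0.0), (1.0, 0.0))
others = [((2.0, 0.0), (0.0, 0.0))]
examples = [(1.0, 0.0), (0.9, 0.5), (0.9, -0.5)]
print(choose_velocity(state, examples, others))  # (0.9, 0.5)
```

The fallback of returning a zero velocity mirrors the problem the abstract notes: when the data does not cover the situation, a purely example-based agent has no safe behavior to copy, which is what the thesis's collision-free-velocity rules are meant to patch.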
【Degree-granting institution】: Beijing Jiaotong University
【Degree level】: Master
【Year conferred】: 2017
【CLC classification】: TP391.41; TP183