
Research on Cross-Modal Metric Learning for Heterogeneous Face Recognition

Date posted: 2018-07-27 19:28
【Abstract】: Heterogeneous face recognition refers to face recognition in which the two face images to be matched come from different modalities, for example near-infrared versus visible-light images, sketches versus photographs, or low-resolution versus high-resolution images. This thesis focuses on the cross-modal metric learning problem in heterogeneous face recognition: given heterogeneous face feature representations affected by modality differences, a distance metric is learned that removes the modality interference so that same-identity and different-identity cross-modal distances become separable. Addressing different cross-modal metric learning problems that arise in heterogeneous face recognition applications, the thesis makes the following four contributions.

(1) A margin-based cross-modal metric learning method (Margin Based Cross-Modal Metric Learning, MCM2L) is proposed. To address the problem that, under modality interference, cross-modal same-class and different-class distances are not separable, the method maximizes the margin between same-class and different-class distances in cross-modal triplet constraints. Concretely, the metric is a cross-modal metric defined on a common subspace: features from the two modalities are projected into a common subspace, where distances are measured. The learning objective has two parts: the first minimizes the distances of cross-modal same-class sample pairs; the second constrains, within each cross-modal triplet, the same-class pair distance to be smaller than the different-class pair distance by a margin, so that optimization concentrates on samples whose same-class and different-class distances are not yet separable. The method is further extended to a kernelized version (Kernelized Margin Based Cross-Modal Metric Learning, KMCM2L) to handle data that are not linearly separable. Experiments on three heterogeneous face datasets verify that the proposed algorithms achieve better recognition performance than the baselines.

(2) A cross-modal metric learning method based on AUC optimization (Cross-Modal Metric Learning for AUC Optimization, CMLAuC) is proposed. Existing metric learning methods minimize distance losses defined on same-class and different-class sample pairs, but on heterogeneous face datasets the numbers of same-class and different-class pairs that can be constructed are severely imbalanced; under such imbalance the AUC (Area Under the ROC Curve) is a more meaningful criterion. A cross-modal distance metric learning method is therefore proposed that optimizes the AUC defined on cross-modal sample pairs, and it is further extended to optimize partial AUC (pAUC), the AUC within a specified false positive rate range, which is particularly useful for applications requiring good performance at specific false positive rates. The algorithm is formulated as a convex optimization problem with log-determinant regularization, and a mini-batch proximal point algorithm is proposed for fast optimization, randomly sampling a subset of cross-modal same-class and different-class pairs in each round. Experiments on three cross-modal datasets and one single-modality dataset show that the method effectively improves the baselines; moreover, the pAUC-optimized metric achieves better results on evaluation measures such as Rank-1 and VR@FPR=0.1%.

(3) An ensemble learning method for sparse cross-modal metrics (Ensemble of Sparse Cross-Modal Metrics, ESPAC) is proposed. In heterogeneous face recognition, besides the interference caused by the differing modalities, face images usually contain many other nuisance factors, including occlusion, expression changes, and illumination changes. To address this, a cross-modal metric learning method capable of feature selection is proposed. Specifically, a weak cross-modal distance metric learning method is first given that learns a rank-one cross-modal metric on two kinds of cross-modal triplets while performing group-based feature selection to remove noisy face features (corresponding to occlusion, expression changes, illumination changes, etc.); an ensemble learning procedure then learns a series of complementary weak metrics and combines them into a strong metric. Experiments show that the algorithm effectively improves performance through feature selection under heavy occlusion, and on three heterogeneous face datasets it achieves better recognition results than the baselines.

(4) A variation-robust cross-modal metric learning method (Variation Robust Cross-Modal Metric Learning, VR-CM2L) is proposed. It addresses the problem of measuring the distance between caricatures and photographs in caricature face recognition, a special heterogeneous face recognition problem whose recognition process is affected by many nuisance factors: caricature-specific ones such as exaggerated facial features and varying drawing styles, and others such as viewpoint, expression, and illumination changes, all of which cause severe misalignment between caricature features and photograph features. A specially designed heterogeneous feature extraction scheme based on facial landmarks is proposed: photograph features are extracted around facial landmarks at a fixed viewpoint and scale, while caricature features are extracted around the same landmarks at multiple viewpoints and scales. To measure distances between such heterogeneous representations, a cross-modal metric is learned at each facial landmark, and a distance pooling strategy aligns the multiple caricature features with the single photograph feature at each landmark. The final caricature-photograph distance is a combination of all landmark-based metrics; to guarantee global optimality of the learned combined metric, all landmark-based cross-modal metrics are learned within a unified optimization framework. Experiments on two caricature datasets verify the effectiveness of the method under various nuisance factors and show that the proposed heterogeneous feature extraction combined with VR-CM2L outperforms homogeneous feature extraction.
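To make the margin-based objective of contribution (1) concrete, the following is a minimal sketch of such a loss under the assumption of linear projections into a common subspace; the function and parameter names (mcm2l_loss, W_x, W_y, margin, alpha) are illustrative and not taken from the thesis, whose exact formulation and optimizer may differ.

```python
# Minimal sketch (not the thesis' exact formulation): a margin-based
# cross-modal triplet loss evaluated in a learned common subspace.
import numpy as np

def mcm2l_loss(W_x, W_y, pos_pairs, triplets, margin=1.0, alpha=0.5):
    """W_x, W_y: projections mapping modality-X / modality-Y features into a
    common subspace. pos_pairs: (x, y) same-identity cross-modal pairs.
    triplets: (x, y_pos, y_neg) with y_pos sharing x's identity, y_neg not."""
    def d2(a, b):
        # squared distance between the projected features in the common subspace
        diff = W_x @ a - W_y @ b
        return float(diff @ diff)

    # Part 1: pull cross-modal same-identity pairs together.
    pull = sum(d2(x, y) for x, y in pos_pairs)
    # Part 2: hinge loss enforcing a margin between same-identity and
    # different-identity cross-modal distances within each triplet.
    push = sum(max(0.0, margin + d2(x, yp) - d2(x, yn)) for x, yp, yn in triplets)
    return alpha * pull + (1.0 - alpha) * push

# Toy usage with random 64-dim features projected to a 16-dim common subspace.
rng = np.random.default_rng(0)
W_x, W_y = rng.normal(size=(16, 64)), rng.normal(size=(16, 64))
x, yp, yn = rng.normal(size=64), rng.normal(size=64), rng.normal(size=64)
print(mcm2l_loss(W_x, W_y, [(x, yp)], [(x, yp, yn)]))
```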
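Contribution (2) optimizes AUC and pAUC defined on cross-modal pairs. The sketch below shows only the empirical quantities involved, not the thesis' convex surrogate or its log-determinant-regularized formulation; the function names and the "hardest negatives" construction of pAUC are assumptions.

```python
# Hedged sketch: empirical AUC and partial AUC over cross-modal pair
# distances. CMLAuC optimizes a surrogate of these quantities, which is
# not reproduced here.
import numpy as np

def empirical_auc(pos_dists, neg_dists):
    """Fraction of (same-identity, different-identity) pair combinations ranked
    correctly, i.e. the genuine pair is given a smaller distance; ties count half."""
    pos = np.asarray(pos_dists, dtype=float)[:, None]
    neg = np.asarray(neg_dists, dtype=float)[None, :]
    return float(np.mean(pos < neg) + 0.5 * np.mean(pos == neg))

def empirical_pauc(pos_dists, neg_dists, fpr_max=0.001):
    """AUC restricted to the false-positive-rate range [0, fpr_max], computed
    only against the hardest (smallest-distance) negatives."""
    neg = np.sort(np.asarray(neg_dists, dtype=float))
    k = max(1, int(np.ceil(fpr_max * len(neg))))
    return empirical_auc(pos_dists, neg[:k])

# Toy usage: genuine pairs should be closer than impostor pairs.
rng = np.random.default_rng(0)
genuine = rng.normal(1.0, 0.3, size=200)    # smaller distances
impostor = rng.normal(2.0, 0.5, size=5000)  # larger distances
print(empirical_auc(genuine, impostor), empirical_pauc(genuine, impostor, 0.01))
```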
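Contribution (4) combines per-landmark cross-modal metrics with distance pooling to align the single photograph descriptor with the multiple caricature descriptors at each landmark. The sketch below is a rough illustration only, assuming Mahalanobis-style per-landmark metrics and min-pooling; the actual VR-CM2L parameterisation, pooling rule, and joint learning framework may differ.

```python
# Illustrative sketch only: per-landmark Mahalanobis metrics with min-pooling
# over the caricature's candidate descriptors, summed over landmarks.
import numpy as np

def caricature_photo_distance(photo_feats, caric_feats, metrics):
    """photo_feats[lm]: (d,) photo descriptor at landmark lm (fixed view/scale).
    caric_feats[lm]: (k, d) caricature descriptors at lm (k views/scales).
    metrics[lm]: (d, d) positive semidefinite matrix, one metric per landmark."""
    total = 0.0
    for lm, p in photo_feats.items():
        M = metrics[lm]
        diffs = caric_feats[lm] - p                     # (k, d) differences
        d2 = np.einsum('kd,de,ke->k', diffs, M, diffs)  # per-candidate distances
        total += float(d2.min())                        # pool: best-aligned candidate
    return total

# Toy usage: 3 landmarks, 32-dim descriptors, 5 caricature candidates each.
rng = np.random.default_rng(0)
lms = ["left_eye", "nose_tip", "mouth"]
photo = {lm: rng.normal(size=32) for lm in lms}
caric = {lm: rng.normal(size=(5, 32)) for lm in lms}
mets = {lm: np.eye(32) for lm in lms}  # identity metric as a placeholder
print(caricature_photo_distance(photo, caric, mets))
```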
【Degree-granting institution】: Nanjing University
【Degree level】: Doctoral
【Year conferred】: 2017
【CLC number】: TP391.41



