CLC number: (look it up in the Chinese Library Classification)    Document code: A    Article ID: 1006-8961(year)
Chinese citation format:

Saliency detection via fusion of a deep model and a traditional model (abbreviations are not recommended in the title; centered)
Author 1, Author 2 (size-4 font; separate authors with commas; centered)
1. Affiliation, City, Province, Postcode; 2. Affiliation, City, Province, Postcode (size-6 font; give the full official name of each affiliation)

Abstract: Objective Saliency detection is a fundamental problem in image processing and computer vision. Traditional models preserve the boundaries of salient objects well, but their confidence in salient targets is not high enough and their recall is low; deep learning models are highly confident about salient objects, but their results have coarse boundaries and lower precision. Considering the respective strengths and weaknesses of these two kinds of models, a saliency model is proposed that exploits the advantages of both methods while suppressing their shortcomings. Method First, an existing dense convolutional network is adapted and a fully convolutional network (FCN) saliency model based on it is trained. At the same time, an existing superpixel-based saliency regression model is selected. After the saliency maps of the two models are obtained, a fusion algorithm is proposed that combines the results of the two methods to obtain the final, refined result. The fusion algorithm merges the FCN result with the result of the traditional model through a Hadamard product of the saliency maps and a nonlinear mapping of the pixel-wise saliency values. Result Experiments on 4 datasets compare the proposed model with 10 state-of-the-art methods. On the HKU-IS dataset, the F-measure is 2.6% higher than that of the second-best model; on the MSRA dataset, the F-measure is 2.2% higher and the MAE is 5.6% lower than those of the second-best model; on the DUT-OMRON dataset, the F-measure is 5.6% higher and the MAE is 17.4% lower than those of the second-best model. A comparison experiment on the MSRA dataset is also conducted to verify the effectiveness of the fusion algorithm, and its results confirm the effectiveness of the proposed fusion.
Keywords: (select keywords from the title, section headings, abstract, and retrieval terms)

Saliency detection via fusion of deep model and traditional model (capitalize only the first word; centered)
Author 1,2, Author 2 (size-4 font; separate authors with commas; centered)
1. Affiliation, City, Province Postcode, Country; 2. Affiliation, City, Province Postcode, Country (size-6 font, italic)
Abstract: Objective Saliency detection is a fundamental problem in computer vision and image processing, which aims to identify the most conspicuous objects or regions in an image. Saliency detection has been widely used in several visual applications, including object retargeting, scene classification, visual tracking, image retrieval, and scene segmentation. In most traditional approaches, salient objects are derived from features extracted from pixels or regions, and the final saliency maps depend on these methods and on the selection of features. Such approaches cannot produce satisfactory results when images with multiple salient objects or low-contrast content are encountered. Traditional convolutional neural networks (CNNs) have been introduced into pixel-wise prediction problems, such as saliency detection, because of their outstanding performance in image classification tasks. CNNs redefine the saliency problem as a labeling problem in which the feature selection between salient and non-salient objects is performed automatically through gradient descent. A CNN cannot be used directly to train a saliency model; instead, it can be utilized in saliency detection by extracting multiple patches around each pixel and using each patch to predict the class of its center pixel. Patches are frequently taken from different regions of the input image to capture global information. Another method is the addition of up-sampling layers to the CNN. Such a modified CNN is called a fully convolutional network (FCN), which was first proposed for semantic segmentation. Most CNN-based saliency detection models use FCNs to capture considerable global and local information. The FCN is a popular model that adapts the CNN to dense prediction problems by replacing the softmax and fully connected layers of the CNN with convolution and deconvolution layers.
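To make the dense-prediction idea above concrete, the following is a minimal sketch of an FCN-style network in which convolution and pooling layers are followed by deconvolution (transposed convolution) layers instead of fully connected and softmax layers, so the output is a per-pixel saliency map. It is written in PyTorch purely for illustration; the layer widths, strides, and depths are assumptions, and it is not the dense-convolutional-network-based model trained in this paper.

# Minimal FCN-style sketch (PyTorch): convolution/pooling layers followed by
# deconvolution (transposed convolution) layers that restore full resolution,
# so the network predicts a dense per-pixel saliency map instead of one label.
# Layer sizes are arbitrary; this is NOT the authors' Caffe-based model.
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: ordinary convolution and pooling (downsamples by 4 overall).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                                   # 1/2 resolution
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                                   # 1/4 resolution
        )
        # Decoder: deconvolution layers in place of fully connected + softmax,
        # upsampling the coarse features back to the input resolution.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 1, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, x):
        # One saliency score per pixel, squashed to [0, 1].
        return torch.sigmoid(self.decoder(self.encoder(x)))

if __name__ == "__main__":
    net = TinyFCN()
    image = torch.rand(1, 3, 500, 500)   # dummy RGB input
    saliency = net(image)                # shape: (1, 1, 500, 500)
    print(saliency.shape)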
Compared with traditional methods, FCNs can accurately locate salient objects; however, the accuracy of their object boundaries is lower than that of traditional approaches because of the down-sampling structure of FCNs. To deal with the limitations of these 2 kinds of saliency models, we propose a composite saliency model that combines their advantages and restrains their drawbacks. Method In this study, a new FCN based on dense convolutional network layers is applied to obtain our saliency maps. In the training process, the saliency network is trained with a squared Euclidean loss. Our training set consists of 3,900 images randomly selected from 5 public saliency datasets, namely, ECSSD, SOD, HKU-IS, MSRA, and ICOSEG. Our saliency network is implemented in the Caffe toolbox. The input images and ground-truth maps are resized to 500×500 for training. The momentum parameter is set to 0.99, the learning rate is set to 1E-10, and the weight decay is 0.0005. The SGD training procedure is executed on an NVIDIA GTX TITAN X GPU device and takes approximately one day for 200,000 iterations.
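As a rough sketch of the training configuration described above, the snippet below writes a Caffe solver with the reported momentum, learning rate, weight decay, and iteration budget, then launches SGD through pycaffe. The net definition file name, learning-rate policy, display and snapshot settings, and GPU id are illustrative assumptions that the abstract does not specify.

# Hedged sketch of the Caffe training driver described above.
# Only momentum (0.99), learning rate (1e-10), weight decay (0.0005), and the
# 200,000-iteration budget come from the abstract; the train_net path,
# lr_policy, display/snapshot intervals, and GPU id are assumptions.
import caffe

SOLVER_TEXT = """
train_net: "saliency_train.prototxt"   # assumed net definition (500x500 inputs)
base_lr: 1e-10                         # learning rate reported in the abstract
lr_policy: "fixed"                     # assumed; the abstract does not say
momentum: 0.99
weight_decay: 0.0005
max_iter: 200000                       # ~1 day on a GTX TITAN X per the abstract
display: 100
snapshot: 20000
snapshot_prefix: "snapshots/saliency_fcn"
"""

def main():
    # The referenced saliency_train.prototxt must exist before solving.
    with open("solver.prototxt", "w") as f:
        f.write(SOLVER_TEXT)

    caffe.set_device(0)    # single-GPU training
    caffe.set_mode_gpu()

    solver = caffe.SGDSolver("solver.prototxt")
    solver.solve()         # runs SGD until max_iter is reached

if __name__ == "__main__":
    main()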
Then, we use a traditional saliency model. The selected model adopts multi-level segmentation to produce several segmentations of an image, in which each superpixel is represented by a feature vector that contains different kinds of image features. A random forest is trained on these feature vectors to derive the saliency maps. On the basis of these 2 models, we propose a fusion algorithm that combines the advantages of the traditional approach and the deep learning method. Several segmentations of the image are produced, and the saliency maps of all segmentations are derived by the random forest. Then, we use the FCN to produce another type of saliency map of the image. The fusion algorithm applies the Hadamard product to the 2 types of saliency maps.
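The fusion step is described above only as a Hadamard (element-wise) product of the two saliency maps, followed, per the Chinese abstract, by a nonlinear mapping of the pixel-wise saliency values. The sketch below illustrates that combination with NumPy; the particular sigmoid-style nonlinearity and the min-max normalization are assumptions rather than the authors' exact formulation.

# Illustrative fusion of the two saliency maps described above:
# an element-wise (Hadamard) product followed by a nonlinear remapping.
# The choice of nonlinearity and normalization here is an assumption;
# the abstract does not give the authors' exact functions.
import numpy as np

def normalize(saliency_map: np.ndarray) -> np.ndarray:
    """Rescale a saliency map to [0, 1]."""
    s_min, s_max = saliency_map.min(), saliency_map.max()
    return (saliency_map - s_min) / (s_max - s_min + 1e-8)

def fuse_saliency(fcn_map: np.ndarray, traditional_map: np.ndarray,
                  steepness: float = 10.0, midpoint: float = 0.5) -> np.ndarray:
    """Fuse an FCN saliency map with a traditional-model saliency map.

    Both inputs are HxW arrays. The Hadamard product keeps pixels that both
    models agree are salient; the sigmoid-style mapping then sharpens the
    contrast between salient and non-salient pixels.
    """
    fcn_map = normalize(fcn_map)
    traditional_map = normalize(traditional_map)
    product = fcn_map * traditional_map                               # Hadamard product
    fused = 1.0 / (1.0 + np.exp(-steepness * (product - midpoint)))   # nonlinear mapping
    return normalize(fused)

if __name__ == "__main__":
    h, w = 500, 500
    fcn_map = np.random.rand(h, w)          # stand-in for the FCN output
    traditional_map = np.random.rand(h, w)  # stand-in for the random-forest output
    fused = fuse_saliency(fcn_map, traditional_map)
    print(fused.shape, fused.min(), fused.max())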