IEEE International Conference on Computer Vision (ICCV) 2019

EGNet: Edge Guidance Network for Salient Object Detection
Jia-Xing Zhao, Jiang-Jiang Liu, Deng-Ping Fan, Yang Cao, Jufeng Yang, Ming-Ming Cheng
http://mmcheng.net/egnet/

Abstract. Fully convolutional networks (FCNs) have shown their advantages in the salient object detection task, but most FCN-based methods still suffer from coarse object boundaries. We focus on the complementarity between salient edge information and salient object information, and propose an Edge Guidance Network (EGNet) that models both kinds of complementary information jointly within a single network. Without any pre-processing or post-processing, EGNet outperforms the state-of-the-art methods on 6 widely used datasets. Code and results: http://mmcheng.net/egnet/

Corresponding author: Ming-Ming Cheng (cmm@nankai.edu.cn). Originally published at ICCV 2019 [67].

1. Introduction

Salient object detection (SOD) aims to locate the most visually distinctive objects or regions in an image. It serves as a pre-processing step for many vision applications, such as content-aware image editing [6], object recognition [42], image montage [4], non-photorealistic rendering [41], integral object attention [19], and image retrieval [15]. The task has also been extended to video salient object detection [12, 54] and RGB-D salient object detection [11, 66].

Figure 1. Visual examples of our method (Source, Baseline, Ours, GT). After we model and fuse the salient edge information, the salient object boundaries become clearer.

Early methods [7, 21, 39] were largely inspired by cognitive studies of visual attention and relied on hand-crafted features. With the development of convolutional neural networks (CNNs) [25] and fully convolutional networks (FCNs) [34], CNN-based methods [64, 65] and, in particular, FCN-based models [17, 18, 23, 28, 31, 50, 60, 68] have become the mainstream for SOD.

Most FCN-based SOD methods, however, pay little attention to salient object boundaries. To obtain finer results, existing work fuses multi-scale features in a U-Net [40] style [32, 33, 59, 61], exploits superpixels [20], or resorts to CRF post-processing [17, 28, 33]. In contrast, we explicitly model the complementary salient edge information inside the network. In summary, our EGNet outperforms 15 state-of-the-art methods on 6 widely used datasets.

2. Related Work

Most early SOD methods were based on hand-crafted priors, such as global contrast [5], background prior [57, 69], and center-surround divergence [24, 44], among other low-level cues [22, 44, 51]; surveys and benchmarks can be found in [1, 2, 9]. With convolutional neural networks (CNNs), Li et al. [27] extracted multi-scale deep features, and Wang et al. [45] combined local estimation with global search. After Long et al. [34] introduced fully convolutional networks (FCNs), FCN-based methods became dominant. Wang et al. [47] adopted a recurrent FCN. Hou et al. [17, 18] added short connections to the HED [55] architecture. Zhang et al. [62] learned uncertain convolutional features with a reformulated dropout [61]. Zhang et al. [61] aggregated multi-level features (Amulet), and Zhang et al. [59] proposed a bi-directional message passing model. Wang et al. [53] drove salient object detection with fixation prediction. Luo et al. [35] used a U-Net-like structure with an IoU boundary loss, Li et al. [26] addressed instance-level salient object segmentation, and Li et al. [29] transferred contour knowledge to SOD with a U-Net. Besides multi-scale fusion [32, 33, 59, 61], several edge-aware methods [14, 58, 70] exploit boundary cues, e.g., via Sobel-like operators, and NLDF [35] adopts a boundary term inspired by the Mumford-Shah functional [38].

3. Salient Edge Guidance Network

An overview of the proposed network is shown in Fig. 2. Sec. 3.1 describes the overall architecture. Sec. 3.2 presents the progressive salient object features extraction and the non-local salient edge features extraction. Sec. 3.3 introduces the one-to-one guidance module.

Figure 2. Pipeline of the proposed EGNet. PSFEM: progressive salient object features extraction module; NLSEM: non-local salient edge features extraction module; O2OGM: one-to-one guidance module; FF: feature fusion; Spv.: supervision (saliency Spv. and edge Spv.); U: upsampling; +: pixel-wise addition. The side paths are built on layers conv1-2, conv2-2, conv3-3, conv4-3, conv5-3, and conv6-3; explicit edge modelling is performed on conv2-2, and top-down location propagation passes location information from the deepest side path to the edge branch.

3.1. Overview

Existing methods sharpen saliency predictions by fusing multi-scale features [17, 18, 31, 33, 59, 61], applying CRF post-processing [17, 28, 33], or adding an IoU boundary loss as in NLDF [35]. Different from them, EGNet explicitly models the complementary salient edge information inside the network and uses it to guide the salient object features.

3.2. Complementary information modeling

Our approach is built on the VGG backbone, following [17, 35]. As in DSS [17, 18], we truncate the network and obtain six side paths on conv1-2, conv2-2, conv3-3, conv4-3, conv5-3, and conv6-3. Because the receptive field of conv1-2 is too small, side path S(1) is discarded, leaving the five side paths S(2), S(3), S(4), S(5), S(6). The corresponding multi-level feature set is denoted

    C = {C(2), C(3), C(4), C(5), C(6)},    (1)

where C(2) denotes the conv2-2 features. Since shallower features preserve better edge information [61], S(2) is used to extract salient edge features, while the deeper side paths extract salient object features.

3.2.1 Progressive salient object features extraction

As shown in Fig. 2, the progressive salient object features extraction module (PSFEM) adopts a U-Net [40] style top-down architecture. Different from the original U-Net (Fig. 2), we add three convolutional layers on each side path (Tab. 1), each followed by a ReLU (Tab. 1), to obtain more robust salient object features.

3.2.2 Non-local salient edge features extraction

We extract salient edge features from side path S(2) (conv2-2). Rather than using the local conv2-2 features directly, as a plain U-Net would, we enhance them with high-level location information from S(6).
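The backbone truncation described above (five retained side paths on conv2-2 through conv6-3) can be sketched in PyTorch. This is a minimal toy illustration, not the authors' released code: layer counts and channel widths follow VGG16 plus the extra conv6 stage the text mentions, and all module names here are hypothetical:

```python
import torch
import torch.nn as nn

# Toy VGG-style backbone exposing the five side-path features C(2)..C(6).
# Channel widths follow VGG16; the conv6 stage is the extra block the
# paper appends after conv5-3 (an assumption in this sketch).
class ToyVGGSides(nn.Module):
    def __init__(self):
        super().__init__()
        def block(cin, cout, n=2):
            layers = []
            for i in range(n):
                layers += [nn.Conv2d(cin if i == 0 else cout, cout, 3, padding=1),
                           nn.ReLU(inplace=True)]
            return nn.Sequential(*layers)
        self.stage1 = block(3, 64)        # conv1-x; its side path S(1) is discarded
        self.stage2 = block(64, 128)      # conv2-2 -> C(2), used for edge features
        self.stage3 = block(128, 256, 3)  # conv3-3 -> C(3)
        self.stage4 = block(256, 512, 3)  # conv4-3 -> C(4)
        self.stage5 = block(512, 512, 3)  # conv5-3 -> C(5)
        self.stage6 = block(512, 512, 3)  # conv6-3 -> C(6), deepest location cues
        self.pool = nn.MaxPool2d(2, 2)

    def forward(self, x):
        f1 = self.stage1(x)
        f2 = self.stage2(self.pool(f1))   # 1/2 resolution
        f3 = self.stage3(self.pool(f2))   # 1/4
        f4 = self.stage4(self.pool(f3))   # 1/8
        f5 = self.stage5(self.pool(f4))   # 1/16
        f6 = self.stage6(self.pool(f5))   # 1/32
        return {"C2": f2, "C3": f3, "C4": f4, "C5": f5, "C6": f6}

feats = ToyVGGSides()(torch.randn(1, 3, 64, 64))
print({k: tuple(v.shape) for k, v in feats.items()})
```

Each level halves the resolution, which is why the deepest path carries global location information while conv2-2 keeps fine edge detail.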

Table 1. Configurations of the three convolutional layers (columns 1, 2, 3) and the final 1-channel transition layer added on each side path (Fig. 2). Each entry reads kernel size / padding / channels; each of the three convolutional layers is followed by a ReLU.

    S      conv 1     conv 2     conv 3     transition
    S(2)   3/1/128    3/1/128    3/1/128    3/1/1
    S(3)   3/1/256    3/1/256    3/1/256    3/1/1
    S(4)   5/2/512    5/2/512    5/2/512    3/1/1
    S(5)   5/2/512    5/2/512    5/2/512    3/1/1
    S(6)   7/3/512    7/3/512    7/3/512    3/1/1

To combine the high-level location information with the local edge features, we fuse the features of the deepest side path into C(2):

    C̄(2) = C(2) + Up(φ(Trans(F̂(6); θ)); C(2)),    (2)

where Trans(·; θ) is a convolutional layer with parameters θ, φ(·) denotes the ReLU activation, and Up(·; C(2)) denotes bilinear upsampling to the resolution of C(2). For brevity, we write UpT(F̂(i); θ, C(j)) for Up(φ(Trans(F̂(i); θ)); C(j)). F̂(6) denotes the enhanced features f(C(6); W(6)) in side path S(6). The enhanced features F̂(3), F̂(4), F̂(5) on the other side paths are obtained top-down as

    F̂(i) = f(C(i) + UpT(F̂(i+1); θ, C(i)); W(i)),    (3)

where W(i) denotes the parameters of the convolutional layers on side path S(i) (Tab. 1) and f(·; W(i)) the corresponding transformation.

The salient edge features FE = f(C̄(2); W(2)) are obtained by applying the convolutional layers of Tab. 1 to C̄(2). They are supervised by the salient edge ground truth with a pixel-wise cross-entropy loss:

    L(2)(FE; W(2)) = − Σ_{j∈Z+} log Pr(yj = 1 | FE; W(2)) − Σ_{j∈Z−} log Pr(yj = 0 | FE; W(2)),    (4)

where Z+ and Z− denote the salient-edge and background pixel sets, and Pr(yj = 1 | FE; W(2)) is the predicted probability that pixel j lies on a salient edge. Similarly, the side paths S(3)–S(6) are supervised by the salient object ground truth:

    L(i)(F̂(i); W(i)) = − Σ_{j∈Y+} log Pr(yj = 1 | F̂(i); W(i)) − Σ_{j∈Y−} log Pr(yj = 0 | F̂(i); W(i)),  i ∈ [3, 6],    (5)

where Y+ and Y− denote the salient and non-salient pixel sets. The total loss of this stage is

    L = L(2)(FE; W(2)) + Σ_{i=3}^{6} L(i)(F̂(i); W(i)).    (6)

3.3. One-to-one guidance module

We fuse the salient edge features FE with the multi-resolution salient object features F̂(3) of S(3), and, more generally, guide each of the side paths S(3), S(4), S(5), S(6) one-to-one, obtaining the guided features

    G(i) = UpT(F̂(i); θ, FE) + FE,  i ∈ [3, 6].    (7)

As in the PSFEM (Eq. (3)), further convolutional layers are applied to G(i) to produce the enhanced guided features Ĝ(i), which are again supervised by the salient object ground truth:

    L(i)(Ĝ(i); W(i)) = − Σ_{j∈Y+} log Pr(yj = 1 | Ĝ(i); W(i)) − Σ_{j∈Y−} log Pr(yj = 0 | Ĝ(i); W(i)),  i ∈ [3, 6].    (8)
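Eqs. (2), (3), and (7) share one primitive: transform with a convolutional layer, apply ReLU, bilinearly upsample to the target resolution, then add. A rough PyTorch sketch of this UpT(·) step, together with the pixel-wise cross-entropy supervision of Eqs. (4)-(5), follows (tensor sizes and layer shapes are illustrative assumptions, not the paper's exact configuration):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def up_t(feat, conv, target):
    """UpT(feat; theta, target): conv -> ReLU -> bilinear upsample to target's size."""
    y = F.relu(conv(feat))
    return F.interpolate(y, size=target.shape[-2:], mode="bilinear",
                         align_corners=False)

# Eq. (3): enhance side features top-down, F(i) = f(C(i) + UpT(F(i+1)); W(i)).
c5 = torch.randn(1, 512, 4, 4)   # deeper side-path features (illustrative)
c4 = torch.randn(1, 512, 8, 8)   # shallower side-path features
trans = nn.Conv2d(512, 512, 3, padding=1)                              # Trans(*; theta)
f_conv = nn.Sequential(nn.Conv2d(512, 512, 3, padding=1), nn.ReLU())   # f(*; W(4))
f4 = f_conv(c4 + up_t(c5, trans, c4))

# Eqs. (4)-(5): cross-entropy between a 1-channel prediction and a binary
# ground-truth map (edge map for Eq. 4, saliency mask for Eq. 5).
pred = nn.Conv2d(512, 1, 3, padding=1)(f4)          # per-pixel logits
gt = (torch.rand(1, 1, 8, 8) > 0.5).float()
loss = F.binary_cross_entropy_with_logits(pred, gt)
print(tuple(f4.shape), loss.item() > 0)
```

The one-to-one guidance of Eq. (7) reuses the same `up_t` step, with the salient edge features FE as the upsampling target and the addend.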

: L f (Ĝ; W ) = σ(y, 6 i=3 β i f(ĝ(i) ; W (i) )), (9) σ( ) Eq. (5) : 4. i=6 L = L f (Ĝ; W ) + L (i) (Ĝ(i) ; W (i) ) L t = L + L. 4.1. i=3 (10) [33,49,59,63] US [46] VGG [43] ResNet [16] [17,28] MSRA-B Pyorch (σ = 0.01) 0 : =5e-5 =0.0005 =0.9 1 24 epoch 15 epoch 10 4.2. : ECSS [56] PASCAL-S [30] U- OMRON [57] SO [36,44] HKU-IS [27] US [46]. ECSS [56] 1000 PASCAL-S [30] PASCAL VOC [8] 850 U- OMRON [57] 5168 SO [36] 300 HKU-IS [27] 4447 2500 500 2000 US [46] 10553 5019 [33, 49, 52] US F (MAE) [2] S [10] F F β = (1 + β2 )P recision Recall β 2, (11) P recision + Recall β 2 = 0.3 [5] - [17, 18] [17, 18, 32, 59] - F MAE P Y [0,1] MAE 1 W H ε = P (x, y) Y (x, y), (12) W H x=1 y=1 W H S F S S : S = γs o + (1 γ)s r, (13) S o S r γ 0.5 [10] 4.3. US-R [46] SO [36] US-E [46]

Table 2. Quantitative comparison with state-of-the-art methods on six datasets in terms of maximum F-measure (MaxF, higher is better), MAE (lower is better), and S-measure (S, higher is better). Each cell reads MaxF / MAE / S.

    Method        ECSSD [56]         PASCAL-S [30]      DUT-OMRON [57]     HKU-IS [27]        SOD [36, 37]       DUTS-TE [46]
    VGG-based
    DCL [28]      0.896/0.080/0.863  0.805/0.115/0.791  0.733/0.094/0.743  0.893/0.063/0.859  0.831/0.131/0.748  0.786/0.081/0.785
    DSS [17, 18]  0.906/0.064/0.882  0.821/0.101/0.796  0.760/0.074/0.765  0.900/0.050/0.878  0.834/0.125/0.744  0.813/0.065/0.812
    MSR [26]      0.903/0.059/0.875  0.839/0.083/0.802  0.790/0.073/0.767  0.907/0.043/0.852  0.841/0.111/0.757  0.824/0.062/0.809
    NLDF [35]     0.903/0.065/0.875  0.822/0.098/0.803  0.753/0.079/0.750  0.902/0.048/0.878  0.837/0.123/0.756  0.816/0.065/0.805
    RAS [3]       0.915/0.060/0.886  0.830/0.102/0.798  0.784/0.063/0.792  0.910/0.047/0.884  0.844/0.130/0.760  0.800/0.060/0.827
    ELD [13]      0.865/0.082/0.839  0.772/0.122/0.757  0.738/0.093/0.743  0.843/0.072/0.823  0.762/0.154/0.705  0.747/0.092/0.749
    DHS [32]      0.905/0.062/0.884  0.825/0.092/0.807  - / - / -          0.892/0.052/0.869  0.823/0.128/0.750  0.815/0.065/0.809
    RFCN [48]     0.898/0.097/0.852  0.827/0.118/0.799  0.747/0.094/0.752  0.895/0.079/0.860  0.805/0.161/0.730  0.786/0.090/0.784
    UCF [62]      0.908/0.080/0.884  0.820/0.127/0.806  0.735/0.131/0.748  0.888/0.073/0.874  0.798/0.164/0.762  0.771/0.116/0.777
    Amulet [61]   0.911/0.062/0.894  0.826/0.092/0.820  0.737/0.083/0.771  0.889/0.052/0.886  0.799/0.146/0.753  0.773/0.075/0.796
    C2S [29]      0.909/0.057/0.891  0.845/0.081/0.839  0.759/0.072/0.783  0.897/0.047/0.886  0.821/0.122/0.763  0.811/0.062/0.822
    PAGR [63]     0.924/0.064/0.889  0.847/0.089/0.818  0.771/0.071/0.751  0.919/0.047/0.889  0.841/0.146/0.716  0.854/0.055/0.825
    Ours          0.941/0.044/0.913  0.863/0.076/0.848  0.826/0.056/0.813  0.929/0.034/0.910  0.869/0.110/0.788  0.880/0.043/0.866
    ResNet-based
    SRM [49]      0.916/0.056/0.895  0.838/0.084/0.832  0.769/0.069/0.777  0.906/0.046/0.887  0.840/0.126/0.742  0.826/0.058/0.824
    DGRL [52]     0.921/0.043/0.906  0.844/0.075/0.839  0.774/0.062/0.791  0.910/0.036/0.896  0.843/0.103/0.774  0.828/0.049/0.836
    PiCANet [33]  0.932/0.048/0.914  0.864/0.077/0.850  0.820/0.064/0.808  0.920/0.044/0.905  0.861/0.103/0.790  0.863/0.050/0.850
    Ours          0.943/0.041/0.918  0.869/0.074/0.852  0.842/0.052/0.818  0.937/0.031/0.918  0.890/0.097/0.807  0.893/0.039/0.875

Figure 3. Precision (vertical axis) versus recall (horizontal axis) curves on (a) DUTS-TE [46], (b) DUT-OMRON [57], and (c) HKU-IS [27], comparing DCL, RFCN, MSR, DSS, Amulet, SRM, PAGR, DGRL, PiCANet, and Ours.

4.3.1 Salient edge features

Our baseline B is the U-Net-style PSFEM (Fig. 2), built on side paths conv2-2 to conv6-3. To verify the effect of salient edge modeling, we first extract salient edge features from S(2) (conv2-2) and fuse them progressively with F̂(3) (conv3-3); this variant is denoted edge_prog, and its results are reported in Tab. 3.
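The precision-recall curves of Fig. 3 come from binarizing each saliency map at a sweep of thresholds; a single-image NumPy sketch is below (in practice, per-threshold precision and recall are averaged over all images of a dataset before plotting; the 256-level sweep is again an assumption):

```python
import numpy as np

def pr_curve(pred, gt, n_thresholds=256):
    """Precision/recall pairs over a threshold sweep, as used to plot Fig. 3."""
    precisions, recalls = [], []
    for t in np.linspace(0, 1, n_thresholds):
        binary = pred >= t                       # binarize at this threshold
        tp = np.logical_and(binary, gt > 0.5).sum()
        precisions.append(tp / max(binary.sum(), 1))
        recalls.append(tp / max((gt > 0.5).sum(), 1))
    return np.array(precisions), np.array(recalls)

pred = np.random.rand(16, 16)                    # a toy saliency map in [0, 1]
gt = np.zeros((16, 16))
gt[4:12, 4:12] = 1
p, r = pr_curve(pred, gt)
print(p.shape, r[0])  # at threshold 0 everything is predicted salient, so recall is 1.0
```

Sweeping the threshold trades recall for precision, tracing the curve from the bottom-right toward the top-left of the PR plot.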

Table 3. Ablation analyses on SOD [36] and DUTS-TE [46]; each cell reads MaxF / MAE / S. B denotes the baseline model; edge_prog, edge_lp, edge_nlf, MRF_PROG, and MRF_OO are introduced in Sec. 4.3.

    No.  Model                     SOD [36]              DUTS-TE [46]
    1.   B                         .851 / .116 / .780    .855 / .060 / .844
    2.   B + edge_prog             .873 / .105 / .799    .872 / .051 / .851
    3.   B + edge_lp               .882 / .100 / .807    .879 / .044 / .866
    4.   B + edge_nlf              .857 / .112 / .794    .866 / .053 / .860
    5.   B + edge_lp + MRF_PROG    .882 / .106 / .796    .880 / .046 / .869
    6.   B + edge_lp + MRF_OO      .890 / .097 / .807    .893 / .039 / .875

Table 4. Quality of the predicted salient edge maps of NLDF [35] and our method on SOD and DUTS-TE (Recall / Precision / MaxF).

    Model       SOD                      DUTS-TE
    NLDF [35]   0.513 / 0.541 / 0.527    0.318 / 0.659 / 0.429
    Ours        0.637 / 0.534 / 0.581    0.446 / 0.680 / 0.539

Figure 4. Visual comparison of the baseline B, B + edge_NLF, and B + edge_LP (columns: Source, B, B+edge_NLF, B+edge_LP, GT); edge_nlf and edge_lp are defined in Sec. 4.3, and NLF denotes the IoU boundary loss of NLDF [35].

4.3.2 Top-down location propagation

In contrast to edge_prog in Sec. 4.3.1, we use top-down location propagation to inject the location information of the deepest side path into S(2) before extracting the salient edge features; this variant is denoted edge_lp. As shown in Tab. 3 (rows 1 and 3), edge_lp improves the baseline F-measure by 3.1% on SOD and 2.4% on DUTS-TE.

4.3.3 Comparison with the IoU boundary loss

NLDF [35] sharpens boundaries with an IoU boundary loss. For comparison, we add this IoU loss to our baseline, denoted edge_nlf. As shown in Tab. 3 (rows 3 and 4), explicit edge modeling (edge_lp) outperforms the IoU loss. Tab. 4 further compares the quality of the predicted salient edges on SOD and DUTS-TE in terms of Recall, Precision, and MaxF, where our salient edge maps achieve a higher F-measure; Fig. 4 shows the corresponding visual comparison with NLDF [35].

4.3.4 One-to-one guidance module

Finally, we study how to fuse the salient edge features FE with the multi-resolution features F̂(3), F̂(4), F̂(5), F̂(6). Fusing them progressively in a U-Net-like manner is denoted MRF_PROG, while our one-to-one guidance is denoted MRF_OO. As shown in Tab. 3 (rows 5 and 6), one-to-one guidance works better.

4.4. Comparison with the state-of-the-art

We compare our EGNet with 15 previous methods: DCL [28], DSS [17, 18], NLDF [35], MSR [26], ELD [13], DHS [32], RFCN [48], UCF [62], Amulet [61], PAGR [63], PiCANet [33], SRM [49], DGRL [52], RAS [3], and C2S [29]. Following [10, 17, 18], all methods are evaluated with the maximum F-measure, MAE, and S-measure.

Figure 5. Visual comparison with state-of-the-art methods. From left to right: Image, GT, Ours, PiCANet [33], PAGR [63], DGRL [52], SRM [49], UCF [62], Amulet [61], DSS [17, 18], DHS [32].

As shown in Tab. 2, different methods may use different backbone networks. For a fair comparison, we train our model on both VGG [43] and ResNet [16]. Under all evaluation metrics on all compared datasets, our model outperforms the state-of-the-art methods, especially on the relatively challenging SOD dataset [36, 44] (improvements of 2.9% in F-measure and 1.7% in S-measure) and on the largest dataset DUTS [46] (3.0% and 2.5%). Specifically, compared with the current best method, the average F-measure over the six datasets improves by 1.9%. Note that this is achieved without any pre-processing or post-processing.

Precision-recall curves. Besides the numerical comparison in Tab. 2, we also plot the precision-recall curves of all compared methods on three datasets (Fig. 3). The solid red line shows that the proposed method outperforms the others at most thresholds. Thanks to the complementary salient edge information, our results have sharp edges and accurate localization, which yields better PR curves.

Visual comparison. In Fig. 5 we show some visualized results. Our method performs better in both segmenting and locating salient objects. It is worth mentioning that, benefiting from the salient edge features, our results not only highlight the salient regions but also produce coherent edges. For the first example, the other methods fail to locate and segment the salient object accurately due to the complex scene, whereas our method performs better thanks to the complementary salient edge features. For the second example, where the salient object is relatively small, our result is still very close to the ground truth.

5. Conclusion

In this paper, we aim to preserve salient object edges. Unlike methods that integrate multi-scale features or rely on post-processing, we focus on the complementarity between salient edge information and salient object information. Based on this idea, we propose EGNet to model these complementary features within a single network. First, multi-resolution salient object features are extracted in a U-Net-like architecture. Then, we propose a non-local salient edge features extraction module that combines local edge information with global location information to obtain salient edge features. Finally, a one-to-one guidance module fuses these complementary features. With the help of the salient edge features, both the boundaries and the localization of salient objects are improved. Without any pre-processing or post-processing, our model outperforms the state-of-the-art methods on six widely used datasets. We also provide an analysis of the effectiveness of EGNet.

Acknowledgements. This research was supported by NSFC (61572264), the Young Top-Notch Talent Support Program of the National Ten Thousand Talents Plan, and the Tianjin Natural Science Foundation (17JCJQJC43700, 18ZXZNGX00110).

[1] Ali Borji, Ming-Ming Cheng, Qibin Hou, Huaizu Jiang, and Jia Li. Salient object detection: A survey. CVM, 5(2):117–150, 2019. [2] Ali Borji, Ming-Ming Cheng, Huaizu Jiang, and Jia Li. Salient object detection: A benchmark. IEEE TIP, 24(12):5706–5722, 2015. [3] Shuhan Chen, Xiuli Tan, Ben Wang, and Xuelong Hu. Reverse attention for salient object detection. In ECCV, pages 234–250, 2018. [4] Tao Chen, Ming-Ming Cheng, Ping Tan, Ariel Shamir, and Shi-Min Hu. Sketch2photo: Internet image montage. ACM TOG, 28(5):124:1–10, 2009. [5] Ming-Ming Cheng, Niloy J Mitra, Xiaolei Huang, Philip HS Torr, and Shi-Min Hu. Global contrast based salient region detection. IEEE TPAMI, 37(3):569–582, 2015. [6] Ming-Ming Cheng, Fang-Lue Zhang, Niloy J Mitra, Xiaolei Huang, and Shi-Min Hu. Repfinder: Finding approximately repeated scene elements for image editing. ACM TOG, 29(4):83, 2010. [7] Wolfgang Einhäuser and Peter König. Does luminance-contrast contribute to a saliency map for overt visual attention? European Journal of Neuroscience, 17(5):1089–1097, 2003. [8] Mark Everingham, Luc Van Gool, Christopher KI Williams, John Winn, and Andrew Zisserman. The pascal visual object classes (voc) challenge. IJCV, 88(2):303–338, 2010. [9] Deng-Ping Fan, Ming-Ming Cheng, Jiang-Jiang Liu, Shang-Hua Gao, Qibin Hou, and Ali Borji. Salient objects in clutter: Bringing salient object detection to the foreground. In ECCV, pages 186–202. Springer, 2018. [10] Deng-Ping Fan, Ming-Ming Cheng, Yun Liu, Tao Li, and Ali Borji. Structure-measure: A new way to evaluate foreground maps. In ICCV, pages 4548–4557, 2017. [11] Deng-Ping Fan, Zheng Lin, Jia-Xing Zhao, Yun Liu, Zhao Zhang, Qibin Hou, Menglong Zhu, and Ming-Ming Cheng. Rethinking rgb-d salient object detection: Models, datasets, and large-scale benchmarks. arXiv preprint arXiv:1907.06781, 2019. [12] Deng-Ping Fan, Wenguan Wang, Ming-Ming Cheng, and Jianbing Shen. Shifting more attention to video salient object detection. In CVPR, pages 8554–8564, 2019.
[13] Gayoung Lee, Yu-Wing Tai, and Junmo Kim. Deep saliency with encoded low level distance map and high level features. In CVPR, 2016. [14] Wenlong Guan, Tiantian Wang, Jinqing Qi, Lihe Zhang, and Huchuan Lu. Edge-aware convolution neural network based salient object detection. IEEE SPL, 26(1):114–118, 2018. [15] Junfeng He, Jinyuan Feng, Xianglong Liu, Tao Cheng, Tai-Hsu Lin, Hyunjin Chung, and Shih-Fu Chang. Mobile product search with bag of hash bits and boundary reranking. In CVPR, pages 3005–3012, 2012. [16] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, pages 770–778, 2016. [17] Qibin Hou, Ming-Ming Cheng, Xiaowei Hu, Ali Borji, Zhuowen Tu, and Philip Torr. Deeply supervised salient object detection with short connections. In CVPR, pages 3203–3212, 2017. [18] Qibin Hou, Ming-Ming Cheng, Xiaowei Hu, Ali Borji, Zhuowen Tu, and Philip Torr. Deeply supervised salient object detection with short connections. IEEE TPAMI, 41(4):815–828, 2019. [19] Qibin Hou, Peng-Tao Jiang, Yunchao Wei, and Ming-Ming Cheng. Self-erasing network for integral object attention. In NIPS, 2018. [20] Ping Hu, Bing Shuai, Jun Liu, and Gang Wang. Deep level sets for salient object detection. In CVPR, pages 2300–2309, 2017. [21] Laurent Itti and Christof Koch. Computational modelling of visual attention. Nature Reviews Neuroscience, 2(3):194–203, 2001. [22] Laurent Itti, Christof Koch, and Ernst Niebur. A model of saliency-based visual attention for rapid scene analysis. IEEE TPAMI, 20(11):1254–1259, 1998. [23] Sen Jia and Neil DB Bruce. Richer and deeper supervision network for salient object detection. arXiv preprint arXiv:1901.02425, 2019. [24] Dominik A Klein and Simone Frintrop. Center-surround divergence of feature statistics for salient object detection. In ICCV, pages 2214–2219. IEEE, 2011. [25] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to

document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998. [26] Guanbin Li, Yuan Xie, Liang Lin, and Yizhou Yu. Instance-level salient object segmentation. In CVPR, 2017. [27] Guanbin Li and Yizhou Yu. Visual saliency based on multiscale deep features. In CVPR, pages 5455–5463, 2015. [28] Guanbin Li and Yizhou Yu. Deep contrast learning for salient object detection. In CVPR, 2016. [29] Xin Li, Fan Yang, Hong Cheng, Wei Liu, and Dinggang Shen. Contour knowledge transfer for salient object detection. In ECCV, pages 355–370, 2018. [30] Yin Li, Xiaodi Hou, Christof Koch, James M Rehg, and Alan L Yuille. The secrets of salient object segmentation. In CVPR, pages 280–287, 2014. [31] Zun Li, Congyan Lang, Yunpeng Chen, Junhao Liew, and Jiashi Feng. Deep reasoning with multi-scale context for salient object detection. arXiv preprint arXiv:1901.08362, 2019. [32] Nian Liu and Junwei Han. Dhsnet: Deep hierarchical saliency network for salient object detection. In CVPR, pages 678–686, 2016. [33] Nian Liu, Junwei Han, and Ming-Hsuan Yang. Picanet: Learning pixel-wise contextual attention for saliency detection. In CVPR, pages 3089–3098, 2018. [34] Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In CVPR, pages 3431–3440, 2015. [35] Zhiming Luo, Akshaya Kumar Mishra, Andrew Achkar, Justin A Eichel, Shaozi Li, and Pierre-Marc Jodoin. Non-local deep features for salient object detection. In CVPR, 2017. [36] David Martin, Charless Fowlkes, Doron Tal, and Jitendra Malik. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In ICCV, volume 2, pages 416–423, 2001. [37] Vida Movahedi and James H Elder. Design and perceptual validation of performance measures for salient object segmentation. In IEEE CVPRW, pages 49–56. IEEE, 2010. [38] David Mumford and Jayant Shah.
Optimal approximations by piecewise smooth functions and associated variational problems. CPAM, 42(5):577–685, 1989. [39] Derrick Parkhurst, Klinton Law, and Ernst Niebur. Modeling the role of salience in the allocation of overt visual attention. Vision Research, 42(1):107–123, 2002. [40] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 234–241. Springer, 2015. [41] Paul L Rosin and Yu-Kun Lai. Artistic minimal rendering with lines and blocks. Graphical Models, 75(4):208–229, 2013. [42] Ueli Rutishauser, Dirk Walther, Christof Koch, and Pietro Perona. Is bottom-up attention useful for object recognition? In CVPR, 2004. [43] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015. [44] Jingdong Wang, Huaizu Jiang, Zejian Yuan, Ming-Ming Cheng, Xiaowei Hu, and Nanning Zheng. Salient object detection: A discriminative regional feature integration approach. IJCV, 123(2):251–268, 2017. [45] Lijun Wang, Huchuan Lu, Xiang Ruan, and Ming-Hsuan Yang. Deep networks for saliency detection via local estimation and global search. In CVPR, pages 3183–3192, 2015. [46] Lijun Wang, Huchuan Lu, Yifan Wang, Mengyang Feng, Dong Wang, Baocai Yin, and Xiang Ruan. Learning to detect salient objects with image-level supervision. In CVPR, pages 136–145, 2017. [47] Linzhao Wang, Lijun Wang, Huchuan Lu, Pingping Zhang, and Xiang Ruan. Saliency detection with recurrent fully convolutional networks. In ECCV, pages 825–841. Springer, 2016. [48] Linzhao Wang, Lijun Wang, Huchuan Lu, Pingping Zhang, and Xiang Ruan. Saliency detection with recurrent fully convolutional networks. In ECCV, 2016. [49] Tiantian Wang, Ali Borji, Lihe Zhang, Pingping Zhang, and Huchuan Lu. A stagewise refinement model for detecting salient objects in images. In ICCV, pages 4019–4028, 2017.
[50] Tiantian Wang, Yongri Piao, Li Xiao, Lihe Zhang, and Huchuan Lu. Deep learning for light field saliency detection. In ICCV, 2019.

[51] Tiantian Wang, Lihe Zhang, Huchuan Lu, Chong Sun, and Jinqing Qi. Kernelized subspace ranking for saliency detection. In ECCV, pages 450–466, 2016. [52] Tiantian Wang, Lihe Zhang, Shuo Wang, Huchuan Lu, Gang Yang, Xiang Ruan, and Ali Borji. Detect globally, refine locally: A novel approach to saliency detection. In CVPR, pages 3127–3135, 2018. [53] Wenguan Wang, Jianbing Shen, Xingping Dong, and Ali Borji. Salient object detection driven by fixation prediction. In CVPR, pages 1711–1720, 2018. [54] Ziqin Wang, Jun Xu, Li Liu, Fan Zhu, and Ling Shao. Ranet: Ranking attention network for fast video object segmentation. In ICCV, Oct 2019. [55] Saining Xie and Zhuowen Tu. Holistically-nested edge detection. In ICCV, pages 1395–1403, 2015. [56] Qiong Yan, Li Xu, Jianping Shi, and Jiaya Jia. Hierarchical saliency detection. In CVPR, pages 1155–1162, 2013. [57] Chuan Yang, Lihe Zhang, Huchuan Lu, Xiang Ruan, and Ming-Hsuan Yang. Saliency detection via graph-based manifold ranking. In CVPR, pages 3166–3173, 2013. [58] Jing Zhang, Yuchao Dai, Fatih Porikli, and Mingyi He. Deep edge-aware saliency detection. arXiv preprint arXiv:1708.04366, 2017. [59] Lu Zhang, Ju Dai, Huchuan Lu, You He, and Gang Wang. A bi-directional message passing model for salient object detection. In CVPR, pages 1741–1750, 2018. [60] Pingping Zhang, Wei Liu, Huchuan Lu, and Chunhua Shen. Salient object detection with lossless feature reflection and weighted structural loss. IEEE TIP, 2019. [61] Pingping Zhang, Dong Wang, Huchuan Lu, Hongyu Wang, and Xiang Ruan. Amulet: Aggregating multi-level convolutional features for salient object detection. In ICCV, pages 202–211, 2017. [62] Pingping Zhang, Dong Wang, Huchuan Lu, Hongyu Wang, and Baocai Yin. Learning uncertain convolutional features for accurate saliency detection. In ICCV, pages 212–221. IEEE, 2017. [63] Xiaoning Zhang, Tiantian Wang, Jinqing Qi, Huchuan Lu, and Gang Wang. Progressive attention guided recurrent network for salient object detection.
In CVPR, pages 714–722, 2018. [64] Jia-Xing Zhao, Bo Ren, Qibin Hou, Ming-Ming Cheng, and Paul Rosin. Flic: Fast linear iterative clustering with active search. CVM, 4(4):333–348, Dec 2018. [65] Jia-Xing Zhao, Bo Ren, Qibin Hou, and Ming-Ming Cheng. Flic: Fast linear iterative clustering with active search. In AAAI, 2018. [66] Jia-Xing Zhao, Yang Cao, Deng-Ping Fan, Ming-Ming Cheng, Xuan-Yi Li, and Le Zhang. Contrast prior and fluid pyramid integration for rgbd salient object detection. In CVPR, 2019. [67] Jia-Xing Zhao, Jiang-Jiang Liu, Deng-Ping Fan, Yang Cao, Jufeng Yang, and Ming-Ming Cheng. Egnet: Edge guidance network for salient object detection. In ICCV, Oct 2019. [68] Kai Zhao, Shanghua Gao, Wenguan Wang, and Ming-Ming Cheng. Optimizing the f-measure for threshold-free salient object detection. In ICCV, Oct 2019. [69] Wangjiang Zhu, Shuang Liang, Yichen Wei, and Jian Sun. Saliency optimization from robust background detection. In CVPR, pages 2814–2821, 2014. [70] Yunzhi Zhuge, Gang Yang, Pingping Zhang, and Huchuan Lu. Boundary-guided feature aggregation network for salient object detection. IEEE SPL, 25(12):1800–1804, 2018.