A Unified Approach to Saliency Detection via Label Propagation
Hongyang Li 1, Huchuan Lu 1, Zhe Lin 2, Xiaohui Shen 2, Brian Price 2
1. Intelligent Image Analysis and Understanding Lab, Dalian University of Technology, China.
2. Adobe Research, San Jose, CA, United States.
Paper published as poster in the 13th European Conference on Computer Vision (ECCV), Zurich, Switzerland, 2014.
+ June 20, 2014: The website is officially open! Download links will be available soon. BibTeX.
In this paper, we propose a novel label propagation based method for saliency detection. A key observation is that saliency in an image can be estimated by propagating the labels extracted from the most certain background and object regions. For most natural images, some boundary superpixels serve as the background labels, and the saliency of the remaining superpixels is determined by ranking their similarities to the boundary labels via an inner propagation scheme. For images of complex scenes, we further deploy a 3-cue-center-biased objectness measure to pick out and propagate foreground labels. A co-transduction algorithm is devised to fuse both boundary and objectness labels based on an inter propagation scheme. A compactness criterion decides whether the incorporation of objectness labels is necessary, which greatly enhances computational efficiency. Results on three benchmark datasets with pixel-wise accurate annotations show that the proposed method achieves superior performance compared with recent state-of-the-art methods.
There are two algorithms in this paper. Algorithm 1 is inspired by [1], where the saliency of background nodes is propagated to the other nodes based on a color affinity matrix. Algorithm 2 incorporates foreground priors in a co-transduction framework to further enhance the results generated by Alg.1.
[1] Xiang Bai, et al. Learning context-sensitive shape similarity by graph transduction. IEEE Trans. PAMI 32(5) (2010)
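The inner propagation in Alg.1 can be sketched in a few lines of NumPy. This is a hypothetical illustration, not the authors' released code: `features` stands for per-superpixel color descriptors, `seed_mask` marks the certain boundary (background) superpixels, and the dense Gaussian affinity matrix is a simplification of the superpixel adjacency graph used in the paper.

```python
import numpy as np

def propagate_labels(features, seed_mask, sigma=0.1, alpha=0.99):
    """Rank all superpixels by similarity to seed (boundary) labels.

    Illustrative sketch of graph-based label propagation: seeds carry
    label 1, and the closed-form solution f = (D - alpha*W)^{-1} y
    diffuses that label over the color affinity graph. Saliency is
    taken as the complement of similarity to the background seeds.
    """
    # Pairwise squared color distances -> Gaussian affinity matrix W.
    d2 = np.sum((features[:, None, :] - features[None, :, :]) ** 2, axis=-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0)
    D = np.diag(W.sum(axis=1))           # degree matrix
    y = seed_mask.astype(float)          # indicator of background seeds
    # Closed-form propagation (D - alpha*W is diagonally dominant,
    # hence invertible for alpha < 1).
    f = np.linalg.solve(D - alpha * W, y)
    # Nodes similar to the background seeds get low saliency.
    sal = 1 - (f - f.min()) / (f.max() - f.min() + 1e-12)
    return sal
```

On a toy example with two well-separated color clusters, superpixels dissimilar to the boundary seeds receive the highest saliency, which is the qualitative behavior Alg.1 relies on.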
We (i) show the individual performance of the intermediate processes, and (ii) compare the LPS algorithm with many state-of-the-art methods on different datasets.
Please refer to the main paper and/or supplementary for method abbreviations and more results.
Fig.2 Quantitative results. Click for a larger view. (a) Individual component analysis on MSRA-1000. Note that ‘CoTrans_’ means implementing Alg.2 for every image; (b)-(d) MAE metric on MSRA-1000, CCSD-1000, MSRA-5000; (e)-(h) Performance comparison on MSRA-1000. Bars with oblique lines denote the highest score in the corresponding metric.