Joint Versus Independent Multiview Hashing for Cross-View Retrieval

Journal:
IEEE Transactions on Cybernetics
Publication Date:
07 May 2021
P. Hu, X. Peng, H. Zhu, et al., "Joint Versus Independent Multiview Hashing for Cross-View Retrieval," IEEE Transactions on Cybernetics, 2021.
Thanks to their low storage cost and high query speed, cross-view hashing (CVH) methods have been successfully used for similarity search in multimedia retrieval. However, most existing CVH methods use all views to learn a common Hamming space, which makes it difficult to handle data with a growing or large number of views. To overcome these difficulties, we propose a decoupled CVH network (DCHN) that consists of a semantic hashing autoencoder module (SHAM) and multiple multiview hashing networks (MHNs). Specifically, SHAM adopts a hashing encoder and decoder to learn a discriminative Hamming space from flexible inputs: either a small number of labels or simply the number of classes. After that, each MHN independently projects the samples of its view into this discriminative Hamming space, which is treated as an alternative ground truth. In brief, the Hamming space is learned from the semantic space induced by the flexible inputs and then guides view-specific hashing in an independent fashion. Thanks to this independent/decoupled paradigm, our method enjoys high computational efficiency and can accommodate a growing number of views using only a few labels or the number of classes. When a new view arrives, we only need to add a view-specific network to the model, avoiding retraining the entire model on the new and previous views. Extensive experiments on five widely used multiview databases compare our method with 15 state-of-the-art approaches. The results show that the proposed independent hashing paradigm is superior to common joint paradigms while offering high efficiency and the capacity to handle newly arriving views.
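The decoupled paradigm the abstract describes can be sketched in a few lines of numpy. This is a minimal illustration, not the authors' implementation: the label-to-code step below is a hypothetical stand-in for SHAM (it hashes class labels to binary codes via a random projection), and a per-view ridge regression stands in for each deep view-specific network (MHN). The key property shown is that each view is fitted against the shared codes independently, so adding a view never requires retraining the others.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- SHAM-like step (hypothetical stand-in): derive one K-bit target code
# per class from the class labels alone, via a random projection + sign().
num_classes, code_len = 4, 16
class_codes = np.sign(rng.standard_normal((num_classes, code_len)))
class_codes[class_codes == 0] = 1          # guard against sign(0)

# --- MHN-like step: each view's hash function is trained INDEPENDENTLY to
# regress the shared codes. Here a "network" is just ridge regression.
def fit_view_hasher(X, targets, lam=1e-2):
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ targets)

def hash_view(X, W):
    return np.sign(X @ W)

# Toy two-view data: 100 samples whose features correlate with their codes.
n = 100
labels = rng.integers(0, num_classes, n)
targets = class_codes[labels]              # shared Hamming-space targets
views = {
    name: rng.standard_normal((n, d))
          + targets @ rng.standard_normal((code_len, d)) * 0.3
    for name, d in [("image", 32), ("text", 24)]
}

# Each view is fitted without seeing the other views; a newly arriving view
# would just add one more entry here, leaving existing hashers untouched.
hashers = {name: fit_view_hasher(X, targets) for name, X in views.items()}
codes = {name: hash_view(X, hashers[name]) for name, X in views.items()}

# Cross-view retrieval: Hamming distance between image codes and text codes.
ham = (code_len - codes["image"] @ codes["text"].T) / 2
```

On this toy data, matched image/text pairs (same underlying sample) end up with a smaller Hamming distance than mismatched pairs on average, which is the property cross-view retrieval relies on.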
Funding Info:
National Key Research and Development Project of China; 10.13039/501100001809-National Natural Science Foundation of China; Sichuan Science and Technology Planning Projects; 10.13039/501100012226-Fundamental Research Funds for the Central Universities; Agency for Science Technology and Research under its AME Programmatic Funds under Project A1892b0026.
“© 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.”