Q-instruct: Improving low-level visual abilities for multi-modality foundation models

Title:
Q-instruct: Improving low-level visual abilities for multi-modality foundation models
Journal Title:
2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Keywords:
Publication Date:
16 September 2024
Citation:
H. Wu et al., "Q-Instruct: Improving Low-Level Visual Abilities for Multi-Modality Foundation Models," 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 2024, pp. 25490-25500, doi: 10.1109/CVPR52733.2024.02408.
Abstract:
Multi-modality foundation models, as represented by GPT-4V, have brought a new paradigm for low-level visual perception and understanding tasks, in which a single model can respond to a broad range of natural human instructions. While existing foundation models have shown exciting potential on low-level visual tasks, their related abilities are still preliminary and need to be improved. To enhance these models, we conduct a large-scale subjective experiment collecting a vast number of real human feedbacks on low-level vision. Each feedback follows a pathway that starts with a detailed description of the low-level visual appearance (*e.g.* clarity, color, brightness) of an image and ends with an overall conclusion, with an average length of 45 words. The constructed **Q-Pathway** dataset includes 58K detailed human feedbacks on 18,973 images with diverse low-level appearance. Moreover, to enable foundation models to robustly respond to diverse types of questions, we design a GPT-participated conversion to process these feedbacks into 200K diverse-format instruction-response pairs, termed **Q-Instruct**. Experimental results indicate that **Q-Instruct** consistently elevates low-level perception and understanding abilities across several foundation models. We anticipate that our datasets can pave the way for a future in which general intelligence can perceive and understand low-level visual appearance and evaluate visual quality like a human. Our dataset, model zoo, and demo are published at: https://q-future.github.io/Q-Instruct
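
The abstract outlines a two-stage pipeline: collect pathway-style human feedback (description plus conclusion), then use a GPT-participated conversion to turn each feedback into several instruction-response pairs for visual instruction tuning. The sketch below is only an illustration of what one such conversion step could look like; the `PathwayFeedback` fields, the prompt wording, the choice of 3 pairs per feedback, and the `call_gpt` helper are assumptions for clarity, not the authors' released code.

```python
# Illustrative sketch of a GPT-participated conversion step (assumed design,
# not the Q-Instruct reference implementation).
from dataclasses import dataclass
import json


@dataclass
class PathwayFeedback:
    image_id: str
    description: str   # detailed text on clarity, color, brightness, etc.
    conclusion: str    # overall quality judgement (pathway averages ~45 words)


def call_gpt(prompt: str) -> str:
    """Placeholder for a GPT-style API call; plug in a real client here."""
    raise NotImplementedError


def to_instruction_pairs(fb: PathwayFeedback) -> list[dict]:
    """Rewrite one pathway feedback into diverse instruction-response pairs."""
    prompt = (
        "Rewrite the following human feedback on an image's low-level visual "
        "quality into 3 diverse question-answer pairs. Return a JSON list of "
        '{"instruction": ..., "response": ...} objects.\n\n'
        f"Feedback: {fb.description} {fb.conclusion}"
    )
    pairs = json.loads(call_gpt(prompt))
    # Attach the source image so each pair can be used for instruction tuning.
    return [{"image": fb.image_id, **p} for p in pairs]
```

Running such a conversion over all 58K Q-Pathway feedbacks would yield the kind of diverse-format instruction-response corpus the abstract describes, though the actual prompt design and pair formats used for Q-Instruct are documented in the paper and project page.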
License type:
Publisher Copyright
Funding Info:
This research/project is supported by the Agency for Science, Technology and Research under its MTC Programmatic Funds.
Grant Reference No.: M23L7b0021
Description:
© 2024 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. For the publisher's version, refer here: https://ieeexplore.ieee.org/document/10655670
ISBN:

Files uploaded:

File: q-instruct-cr.pdf
Size: 2.76 MB
Format: PDF
Access: Request a copy