Modeling Visual Attention for Enhanced Image and Video Processing Applications

Authors

  • Uzair Ishtiaq Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, University of Malaya, Kuala Lumpur, Malaysia https://orcid.org/0009-0005-2407-0163
  • Ajaz Khan Baig Department of Computer Sciences, IBADAT International University, Islamabad, Pakistan
  • Zubair Ishtiaque Department of Analytical, Biopharmaceutical and Medical Sciences, Atlantic Technological University, H91 T8NW Galway, Ireland

Keywords

Image Processing, Visual Attention, Video Processing, Visual Saliency

Abstract

Human attention is naturally drawn to visually salient or distinct stimuli. However, identifying all potentially interesting targets in a scene can be computationally complex. Visual saliency plays a crucial role in this process by highlighting important regions through either bottom-up (stimulus-driven) or top-down (goal-driven) mechanisms. In a bottom-up approach, attention is guided by the inherent visual properties of the stimulus, whereas in a top-down approach, it is influenced by the user's intent or task. Over the past decade, researchers have developed various methods and models to detect visual distinctiveness in images and video frames. In this paper, we discuss visual attention modeling, which has demonstrated wide-ranging applications, including image and video quality assessment, video summarization (such as video skimming and key frame extraction), and more.
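As an illustration of the bottom-up (stimulus-driven) mechanism described above, a minimal center-surround contrast map can be sketched in plain NumPy. This is a toy stand-in for full saliency models (e.g., Itti-Koch-style architectures), not the method of the paper; the function names and kernel sizes are illustrative assumptions.

```python
import numpy as np

def box_blur(img, k):
    """Blur with a k x k box filter (k odd), using an integral image."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    # Integral image with a leading row/column of zeros.
    c = np.cumsum(np.cumsum(padded, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))
    h, w = img.shape
    # Window sum for each output pixel via four integral-image lookups.
    return (c[k:k+h, k:k+w] - c[:h, k:k+w]
            - c[k:k+h, :w] + c[:h, :w]) / (k * k)

def bottom_up_saliency(gray, center=3, surround=15):
    """Toy bottom-up saliency: |fine-scale blur - coarse-scale blur|,
    normalized to [0, 1]. Regions that differ from their surround
    (local contrast) score high, mimicking center-surround responses."""
    g = gray.astype(float)
    contrast = np.abs(box_blur(g, center) - box_blur(g, surround))
    rng = contrast.max() - contrast.min()
    return (contrast - contrast.min()) / rng if rng > 0 else contrast
```

For example, a single bright pixel on a dark background yields a saliency map that peaks around that pixel, while a uniform image (no contrast anywhere) yields an all-zero map.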

Published

2025-10-03

How to Cite

Ishtiaq, U., Baig, A. K., & Ishtiaque, Z. (2025). Modeling Visual Attention for Enhanced Image and Video Processing Applications. International Journal of Theoretical & Applied Computational Intelligence, 210–226. Retrieved from https://ijtaci.com/index.php/ojs/article/view/11

Issue

Section

Articles
