Discovering Visual Saliency For Image Analysis

Discovering Visual Saliency for Image Analysis

Salient object detection is a key step in many image analysis tasks such as object detection and image segmentation: it not only identifies the relevant parts of a visual scene but can also reduce computational complexity by filtering out irrelevant regions. Traditional methods treat salient object detection as binary classification, deciding whether a given pixel or region belongs to a salient object. Such approaches are limited because they assign a single output value to each input (pixel, patch, or superpixel) and thereby ignore the shape of the salient object. In this work, we introduce salient object detection methods that explicitly account for object shape. We claim that encoding spatial image content together with object-shape information yields more accurate saliency prediction than traditional binary classification. We propose two deep learning-based methods. The first combines shape-preserving saliency prediction driven by a convolutional neural network (CNN) with pre-defined saliency shapes: the model learns a saliency shape dictionary, which is then used to train a CNN that predicts the salient class of a target region and estimates a full, but coarse, saliency map of the target image; the map is subsequently refined using image-specific low- to mid-level information. The second method explicitly predicts the shape of the salient object with a specially designed CNN that exploits both the global and the local context of the image, producing better predictions than models that consider only local information. We train both models on pixel-wise annotated data, and experimental results show that they outperform previous state-of-the-art methods in salient object detection.

Next, we propose methods to find characteristic landmarks on, and recognize, ancient Roman imperial coins. Roman coins play an important role in understanding the Roman Empire because they convey rich information about key historical events of their time. Moreover, as large numbers of coins are traded daily over the Internet, automatic coin recognition systems are needed to help prevent illegal trade. Because coin images lack pixel-wise annotations, we use a weakly supervised approach to discover characteristic landmarks instead of the previously mentioned models. We first propose a spatial-appearance coin recognition system that uses a Fisher vector representation to visualize the contribution of individual image regions to the recognition of Roman coins. We then formulate an optimization task to discover class-specific salient coin regions using CNNs. Analysis of the discovered regions confirms that they are largely consistent with human expert annotations. Experimental results show that the proposed methods effectively recognize ancient Roman coins and successfully identify landmarks both in coin images and in a general fine-grained classification problem. For this research, we collected new Roman coin datasets in which all coin images are annotated.
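As a rough, self-contained illustration of the Fisher vector representation mentioned above (this is not the dissertation's implementation), the sketch below encodes a set of local image descriptors with a diagonal-covariance Gaussian mixture model fitted via scikit-learn. The descriptor arrays are random stand-ins, and the function names and component count are illustrative choices, not values taken from the work.

```python
# Illustrative sketch (not the authors' code): encoding local image
# descriptors as a Fisher vector with a diagonal-covariance GMM.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_gmm(all_descriptors, n_components=16, seed=0):
    """Fit a diagonal-covariance GMM on descriptors pooled from training images."""
    gmm = GaussianMixture(n_components=n_components,
                          covariance_type="diag",
                          random_state=seed)
    gmm.fit(all_descriptors)
    return gmm

def fisher_vector(descriptors, gmm):
    """First- and second-order Fisher vector of one image's local descriptors."""
    X = np.atleast_2d(descriptors)          # (N, D)
    N, D = X.shape
    q = gmm.predict_proba(X)                # soft assignments, (N, K)
    pi, mu, var = gmm.weights_, gmm.means_, gmm.covariances_
    sigma = np.sqrt(var)                    # (K, D)

    # Normalized deviations of each descriptor from each Gaussian component.
    diff = (X[:, None, :] - mu[None, :, :]) / sigma[None, :, :]   # (N, K, D)
    q_exp = q[:, :, None]

    # Gradients with respect to the means (first order) and variances (second order).
    g_mu = (q_exp * diff).sum(axis=0) / (N * np.sqrt(pi)[:, None])
    g_sigma = (q_exp * (diff ** 2 - 1)).sum(axis=0) / (N * np.sqrt(2 * pi)[:, None])

    fv = np.concatenate([g_mu.ravel(), g_sigma.ravel()])
    # Power and L2 normalization, as is standard for Fisher vectors.
    fv = np.sign(fv) * np.sqrt(np.abs(fv))
    return fv / (np.linalg.norm(fv) + 1e-12)

# Example with random stand-in descriptors (dense local features would be used in practice).
train_desc = np.random.rand(5000, 64)
gmm = fit_gmm(train_desc)
fv = fisher_vector(np.random.rand(300, 64), gmm)
print(fv.shape)   # (2 * K * D,) = (2048,)
```

The resulting vector has dimension 2·K·D (here 2048) and would typically be fed to a linear classifier; the per-region contributions to such a classifier's score are what a spatial-appearance visualization of this kind would inspect.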
Visual Saliency: From Pixel-Level to Object-Level Analysis

This book provides an introduction to recent advances in the theory, algorithms, and applications of the Boolean map distance for image processing. Applications include modeling what humans find salient or prominent in an image and using this to guide smart image cropping, selective image filtering, image segmentation, image matting, and more. The authors present methods for both traditional and emerging saliency computation tasks, ranging from classical low-level tasks such as pixel-level saliency detection to object-level tasks such as subitizing and salient object detection. For low-level tasks, they focus on pixel-level image processing approaches based on efficient distance transforms; for object-level tasks, they propose data-driven methods using deep convolutional neural networks. The book includes both empirical and theoretical studies, together with implementation details of the proposed methods.

Key features for different types of readers:

For computer vision and image processing practitioners: efficient algorithms based on image distance transforms for two pixel-level saliency tasks; promising deep learning techniques for two novel object-level saliency tasks; deep neural network pre-training with synthetic data; thorough deep model analysis, including useful visualization techniques and generalization tests; full reproducibility, with code, models, and datasets available.

For researchers interested in the intersection of digital topology and computer vision: a summary of theoretical findings and analysis of the Boolean map distance; theoretical algorithmic analysis; applications in salient object detection and eye fixation prediction.

For students majoring in image processing, machine learning, and computer vision: up-to-date supplementary reading material for course topics such as connectivity-based image processing and deep learning for image processing; easy-to-implement algorithms for course projects, with data provided via links in the book; hands-on programming exercises in digital topology and deep learning.
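The pixel-level methods in the book build on Boolean maps and efficient distance transforms. As a rough sketch of the Boolean-map idea only (not the book's algorithms, whose exact formulation is not reproduced here), the following code thresholds a single grayscale channel at several levels and accumulates connected regions that do not touch the image border; the function name, threshold count, and toy example are illustrative assumptions.

```python
# Rough illustration of the Boolean-map idea behind saliency from thresholded maps
# (not the book's exact algorithm): regions enclosed away from the image border
# in a randomly/evenly thresholded channel are treated as salient.
import numpy as np
from scipy import ndimage

def boolean_map_saliency(gray, n_thresholds=16):
    """gray: 2-D float array in [0, 1]; returns a saliency map normalized to [0, 1]."""
    h, w = gray.shape
    attention = np.zeros((h, w), dtype=float)
    thresholds = np.linspace(gray.min(), gray.max(), n_thresholds + 2)[1:-1]

    for t in thresholds:
        for boolean_map in (gray > t, gray <= t):   # each map and its complement
            labels, n = ndimage.label(boolean_map)
            if n == 0:
                continue
            # Labels of connected components that touch the image border.
            border_labels = np.unique(np.concatenate([
                labels[0, :], labels[-1, :], labels[:, 0], labels[:, -1]]))
            # "Surrounded" regions (not connected to the border) count as salient.
            surrounded = boolean_map & ~np.isin(labels, border_labels)
            attention += surrounded

    attention /= attention.max() + 1e-12
    return attention

# Example: a bright square on a dark background is picked out as salient.
img = np.zeros((64, 64)); img[20:44, 20:44] = 1.0
sal = boolean_map_saliency(img)
print(sal[32, 32], sal[0, 0])   # high in the center, ~0 at the border
```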
Integrating Artificial Intelligence and Visualization for Visual Knowledge Discovery

This book is devoted to the emerging field of integrated visual knowledge discovery, which combines advances in artificial intelligence/machine learning with visualization and visual analytics. A long-standing challenge of artificial intelligence (AI) and machine learning (ML) is explaining models to humans, especially in life-critical applications such as health care. Model explanation is fundamentally a human activity, not only an algorithmic one; as current deep learning studies demonstrate, this makes paradigms based on visual methods critically important for addressing the challenge. More generally, visual approaches are essential for discovering explainable patterns in all types of high-dimensional data, offering "n-D glasses," where preserving high-dimensional data properties and relations in visualizations is a major challenge, and current progress opens a remarkable opportunity in this domain. This book is a collection of 25 extended works by over 70 scholars, presented at AI- and visual analytics-related symposia at recent International Information Visualization Conferences, with the goal of moving this integration to the next level. Its sections cover integrated systems, supervised learning, unsupervised learning, optimization, and the evaluation of visualizations. The intended audience includes those developing and using emerging AI/machine learning and visualization methods. Scientists, practitioners, and students will find multiple examples of the current integration of AI/machine learning and visualization for visual knowledge discovery, along with a vision of future directions in this domain. New researchers will find inspiration to join the field and contribute to its further development, and instructors can use the book as a supplementary source in undergraduate and graduate AI/ML and visualization classes.