Deep Domain Adaptation For Visual Recognition With Unlabeled Data




Deep Domain Adaptation for Visual Recognition with Unlabeled Data

Author: Zhongying Deng

Language: en

Publisher:

Release Date: 2023







Visual Domain Adaptation in the Deep Learning Era



Author: Gabriela Csurka

Language: en

Publisher: Springer Nature

Release Date: 2022-06-06







Solving problems with deep neural networks typically relies on massive amounts of labeled training data to achieve high performance. While huge volumes of unlabeled data can be, and often are, generated and made available, the cost of acquiring data labels remains high. Transfer learning (TL), and in particular domain adaptation (DA), has emerged as an effective solution to this annotation burden, exploiting unlabeled data from the target domain together with labeled data or pre-trained models from similar, yet different, source domains. The aim of this book is to provide an overview of such DA/TL methods applied to computer vision, an area whose popularity has increased significantly in the last few years.

We set the stage by revisiting the theoretical background and some of the historical shallow methods, before discussing and comparing different domain adaptation strategies that exploit deep architectures for visual recognition. We introduce the space of self-training-based methods, which draw inspiration from the related fields of deep semi-supervised and self-supervised learning to solve deep domain adaptation. Going beyond the classic domain adaptation problem, we then explore the rich space of problem settings that arise when applying domain adaptation in practice, such as partial or open-set DA, where source and target data categories do not fully overlap, continuous DA, where the target data comes as a stream, and so on. We next consider the least restrictive setting of domain generalization (DG), as an extreme case where neither labeled nor unlabeled target data are available during training. Finally, we close by considering the emerging area of learning-to-learn and how it can be applied to further improve existing approaches to cross-domain learning problems such as DA and DG.
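As a rough illustration of the self-training idea mentioned in the blurb above (not code from the book itself), the following minimal PyTorch-style sketch combines a supervised loss on labeled source data with confidence-thresholded pseudo-labels on unlabeled target data. The names `model`, `optimizer`, `source_batch`, `target_batch`, and the `threshold` value are assumptions introduced purely for the example.

```python
# Illustrative sketch only: confidence-thresholded self-training for
# unsupervised domain adaptation. Assumes an arbitrary classifier `model`
# returning logits, plus labeled source and unlabeled target batches.
import torch
import torch.nn.functional as F

def self_training_step(model, optimizer, source_batch, target_batch, threshold=0.9):
    xs, ys = source_batch  # labeled source inputs and labels
    xt = target_batch      # unlabeled target inputs

    # Supervised loss on the labeled source domain.
    loss = F.cross_entropy(model(xs), ys)

    # Pseudo-label confident target predictions and train on them as well.
    with torch.no_grad():
        probs = F.softmax(model(xt), dim=1)
        conf, pseudo = probs.max(dim=1)
        mask = conf >= threshold
    if mask.any():
        loss = loss + F.cross_entropy(model(xt[mask]), pseudo[mask])

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice the confidence threshold and the weighting of the pseudo-label term vary between methods; this sketch only conveys the basic mechanism of borrowing supervision from the model's own confident target predictions.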

Domain Adaptation for Visual Understanding



Author: Richa Singh

Language: en

Publisher: Springer Nature

Release Date: 2020-01-08







This unique volume reviews the latest advances in domain adaptation in the training of machine learning algorithms for visual understanding, offering valuable insights from an international selection of experts in the field. The text presents a diverse selection of novel techniques, covering applications in object recognition, face recognition, and action and event recognition.

Topics and features:
- reviews the domain adaptation-based machine learning algorithms available for visual understanding, and provides a deep metric learning approach;
- introduces a novel unsupervised method for image-to-image translation, and a video segment retrieval model that utilizes ensemble learning;
- proposes a unique way to determine which dataset is most useful in the base training, in order to improve the transferability of deep neural networks;
- describes a quantitative method for estimating the discrepancy between the source and target data to enhance image classification performance;
- presents a technique for multi-modal fusion that enhances facial action recognition, and a framework for intuition learning in domain adaptation;
- examines an original interpolation-based approach to address the issue of tracking model degradation in correlation filter-based methods.

This authoritative work will serve as an invaluable reference for researchers and practitioners interested in machine learning-based visual recognition and understanding.
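As a generic aside on the discrepancy-estimation topic listed above (the volume's chapter may use a different estimator), one widely used way to quantify the gap between source and target feature distributions is the maximum mean discrepancy (MMD). The sketch below, with assumed names `rbf_mmd`, `source_feats`, `target_feats`, and `sigma`, shows a biased RBF-kernel estimate.

```python
# Illustrative sketch only: biased MMD^2 estimate with an RBF kernel,
# a common measure of source/target feature discrepancy.
import torch

def rbf_mmd(source_feats, target_feats, sigma=1.0):
    """MMD^2 between feature batches of shape [n, d] and [m, d]."""
    def kernel(a, b):
        d2 = torch.cdist(a, b).pow(2)             # pairwise squared distances
        return torch.exp(-d2 / (2 * sigma ** 2))  # RBF kernel values
    k_ss = kernel(source_feats, source_feats).mean()
    k_tt = kernel(target_feats, target_feats).mean()
    k_st = kernel(source_feats, target_feats).mean()
    return k_ss + k_tt - 2 * k_st
```

A larger value indicates a bigger distribution gap between the two domains; many adaptation methods minimize such a measure on intermediate features, though the kernel choice and bandwidth `sigma` are design decisions.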