
Balancing Transferability and Discriminability for Unsupervised Domain Adaptation


Bibliographic Details
Published in: IEEE Transactions on Neural Networks and Learning Systems, 2024-04, Vol. 35 (4), p. 5807-5814
Main Authors: Huang, Jingke, Xiao, Ni, Zhang, Lei
Format: Article
Language:English
Description
Summary: Unsupervised domain adaptation (UDA) aims to leverage a sufficiently labeled source domain to classify or represent a fully unlabeled target domain with a different distribution. Generally, existing approaches learn a domain-invariant representation for feature transferability and add class-discriminability constraints for feature discriminability. However, feature transferability and discriminability are usually not synchronized, and there are even contradictions between them; this tension is often ignored and thus reduces recognition accuracy. In this brief, we propose a deep multirepresentations adversarial learning (DMAL) method to explore and mitigate the inconsistency between feature transferability and discriminability in the UDA task. Specifically, we consider feature representation learning at both the domain level and the class level and explore four types of feature representations: domain-invariant, domain-specific, class-invariant, and class-specific. The first two types characterize the transferability of features, and the last two characterize the discriminability. We develop an adversarial learning strategy among the four representations so that feature transferability and discriminability become gradually synchronized. A series of experiments verifies that the proposed DMAL achieves competitive and promising results on six UDA datasets.
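The adversarial interplay the abstract describes builds on the standard domain-adversarial idea: a domain classifier tries to tell source from target features, while the feature extractor is updated with a reversed gradient so the domains become indistinguishable (domain-invariant). The toy sketch below illustrates only that generic mechanism on made-up 1-D data; the variable names, learning rates, and setup are illustrative assumptions and it is not the DMAL algorithm itself.

```python
import numpy as np

# Hedged toy sketch of gradient-reversal-style adversarial alignment on
# synthetic 1-D "features" drawn far apart for the two domains.
rng = np.random.default_rng(0)
src = rng.normal(loc=-2.0, scale=1.0, size=200)  # source-domain features
tgt = rng.normal(loc=+2.0, scale=1.0, size=200)  # target-domain features
gap0 = abs(src.mean() - tgt.mean())              # initial domain gap (~4)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w, b = 0.1, 0.0         # logistic domain classifier: p = sigmoid(w*x + b)
lr_d, lr_f = 0.05, 0.5  # step sizes: discriminator / features (assumed values)

for _ in range(400):
    x = np.concatenate([src, tgt])
    d = np.concatenate([np.zeros_like(src), np.ones_like(tgt)])  # 0=src, 1=tgt
    p = sigmoid(w * x + b)
    # Discriminator step: gradient descent on binary cross-entropy,
    # trying to tell the two domains apart.
    w -= lr_d * np.mean((p - d) * x)
    b -= lr_d * np.mean(p - d)
    # Feature step with the gradient REVERSED: the features ascend the
    # same loss, moving so as to confuse the domain classifier.
    gx = (p - d) * w
    src = src + lr_f * gx[: len(src)]
    tgt = tgt + lr_f * gx[len(src):]

gap = abs(src.mean() - tgt.mean())
print(f"domain gap: {gap0:.2f} -> {gap:.2f}")  # the gap shrinks
```

In a full UDA method the scalar features would be outputs of a shared deep extractor and the reversed gradient would flow into its weights; DMAL additionally plays this game at the class level, which the 1-D sketch does not attempt to capture.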
ISSN:2162-237X
2162-2388
DOI:10.1109/TNNLS.2022.3201623