Manifold DivideMix: A Semi-Supervised Contrastive Learning Framework for Severe Label Noise
- Pricing Type: Free
- GitHub: https://github.com/fahim-f/manifolddividemix
The “Manifold DivideMix” project on GitHub presents a semi-supervised contrastive learning framework designed to tackle label noise in training data. Conventional neural networks struggle with noisy labels, leading to poor generalization. The proposed approach utilizes self-supervised training to create a meaningful embedding space for each sample, incorporating both in-distribution and out-of-distribution noisy samples. An iterative Manifold DivideMix algorithm is introduced to identify clean and noisy samples, while the MixEMatch algorithm enhances semi-supervised learning using mixup augmentation in both input and hidden representations. Extensive experiments on synthetic and real-world datasets demonstrate the effectiveness of the method.
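The description above mentions an iterative procedure for separating clean from noisy samples. The repository does not spell out the mechanics here, but the standard DivideMix recipe fits a two-component Gaussian mixture to per-sample training losses and treats the posterior of the low-loss component as a "clean" probability. The sketch below illustrates that idea with a small, self-contained 1-D EM loop; the actual Manifold DivideMix selection step may differ in detail.

```python
import numpy as np

def clean_probability(losses, n_iter=50):
    """Fit a two-component 1-D Gaussian mixture to per-sample losses via EM.

    Returns, for each sample, the posterior probability of belonging to the
    low-mean (low-loss, i.e. presumed clean) component. This mirrors the
    DivideMix-style sample selection; it is an illustrative sketch, not the
    project's exact implementation.
    """
    losses = np.asarray(losses, dtype=float)
    # Initialize the two component means at the 25th/75th loss percentiles.
    mu = np.array([np.percentile(losses, 25), np.percentile(losses, 75)])
    var = np.array([losses.var() + 1e-6] * 2)
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each sample.
        dens = pi / np.sqrt(2 * np.pi * var) * np.exp(
            -(losses[:, None] - mu) ** 2 / (2 * var))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixture weights, means, and variances.
        nk = resp.sum(axis=0)
        mu = (resp * losses[:, None]).sum(axis=0) / nk
        var = (resp * (losses[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
        pi = nk / len(losses)
    clean = int(np.argmin(mu))  # the low-mean component models clean samples
    return resp[:, clean]
```

Samples whose clean probability exceeds a threshold (commonly 0.5) go into the labeled set; the rest are treated as unlabeled for the semi-supervised stage.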
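MixEMatch is described as applying mixup augmentation in both the input and the hidden representations (the manifold mixup idea). The core operation, which is the same whether it is applied to raw inputs or to intermediate activations, is a convex combination of a batch with a shuffled copy of itself. A minimal sketch, assuming a DivideMix-style constraint that keeps the mixing coefficient at least 0.5:

```python
import numpy as np

def mixup_batch(x, y, alpha=4.0, rng=None):
    """Convex-combine a batch (and its labels) with a shuffled copy of itself.

    Illustrative sketch only: in MixEMatch, per the project description, the
    same operation is applied both to inputs and to hidden-layer activations.
    `alpha` parameterizes the Beta distribution for the mixing coefficient;
    clamping lam to >= 0.5 (as in DivideMix) keeps each mixed sample closer
    to its own original label.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)
    lam = max(lam, 1.0 - lam)          # ensure lam >= 0.5
    idx = rng.permutation(len(x))      # random pairing within the batch
    x_mix = lam * x + (1.0 - lam) * x[idx]
    y_mix = lam * y + (1.0 - lam) * y[idx]
    return x_mix, y_mix, lam
```

For the hidden-representation variant, the same function would be called on the activations of a randomly chosen intermediate layer during the forward pass, with the mixed activations then propagated through the remaining layers.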
Fahimeh Fooladgar1, Minh Nguyen Nhat To1, Parvin Mousavi2, Purang Abolmaesumi1 (1 University of British Columbia). Codes will be uploaded soon …