
Multi-Label Knowledge Distillation


Pricing

  • Pricing Type: Free

GitHub Link

The GitHub link is https://github.com/penghui-yang/l2d

Introduction

The GitHub repository “penghui-yang/L2D” contains the official implementation of the ICCV’23 paper “Multi-Label Knowledge Distillation.” The project focuses on multi-label knowledge distillation and provides code to reproduce the paper’s results. The repository lists the requirements for running the code, gives installation instructions, and includes a quick-start guide for training on the MS-COCO dataset. It also explains how to use your own datasets and how to create configuration files. The distillation process involves three parts: feature-based, label-wise embedding, and logits-based, each with a balancing parameter lambda and corresponding parameters.
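
As a rough illustration of that three-part structure, the sketch below combines three distillation terms with per-part balancing weights. It is a sketch only; the function and parameter names (total_distillation_loss, lambda_ft, lambda_le, lambda_logits) are hypothetical and do not mirror the repository's actual API.

    # Illustrative sketch only: a multi-label distillation objective built as a
    # weighted sum of three terms. All names here are hypothetical, not the
    # repository's actual API.
    import torch

    def total_distillation_loss(feature_loss, embedding_loss, logits_loss,
                                lambda_ft=1.0, lambda_le=1.0, lambda_logits=1.0):
        """Weighted sum of the feature-based, label-wise embedding,
        and logits-based distillation terms."""
        return (lambda_ft * feature_loss
                + lambda_le * embedding_loss
                + lambda_logits * logits_loss)

    # Dummy scalar tensors stand in for the three terms, which would normally
    # be computed from teacher and student activations.
    loss = total_distillation_loss(torch.tensor(0.3), torch.tensor(0.2), torch.tensor(0.5))
    print(loss)  # approximately 1.0 with the default weights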

Existing knowledge distillation methods typically work by imparting the knowledge of output logits or intermediate feature maps from the teacher network to the student network, an approach that has been very successful in multi-class single-label learning.
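
For reference, here is a minimal sketch of that conventional logit-based distillation, assuming PyTorch. It illustrates the standard temperature-scaled KL objective, not the code from this repository.

    # Minimal sketch of conventional single-label knowledge distillation: the
    # student matches the teacher's temperature-softened logits via KL divergence.
    # Illustrative only; not code from the L2D repository.
    import torch
    import torch.nn.functional as F

    def logit_distillation_loss(student_logits, teacher_logits, temperature=4.0):
        log_p_student = F.log_softmax(student_logits / temperature, dim=1)
        p_teacher = F.softmax(teacher_logits / temperature, dim=1)
        # The T^2 factor keeps gradient magnitudes comparable across temperatures.
        return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature ** 2

    # Random tensors stand in for network outputs: batch of 8, 80 classes (as in MS-COCO).
    student_logits = torch.randn(8, 80)
    teacher_logits = torch.randn(8, 80)
    loss = logit_distillation_loss(student_logits, teacher_logits)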

Content

The requirements specify a particular PyTorch version, but the code should be runnable with other PyTorch versions as well. You can train on MS-COCO with the default settings stored in ./configs/coco/resnet101_to_resnet34_l2d.py, and you can also try your own distillers and other options by making your own configuration files under the guidance of the Configuration files section.

The repository shows the expected folder layouts for the Pascal VOC 2007, MS-COCO 2014, and NUS-WIDE datasets; for MS-COCO 2014, the annotation files train_anno.json and val_anno.json are provided in the folder ./appendix. All code for the data-processing part is in the folder ./data, and you can replace it with your own.

We use configuration files to pass parameters to the program; an example is provided in the folder ./configs. We split a distiller into three parts: a feature-based part, a label-wise embedding part, and a logits-based part. Each part has a balancing parameter lambda and corresponding parameters.
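
To make that three-part split concrete, a configuration for such a distiller might look roughly like the sketch below. The field names and values are hypothetical and chosen only for illustration; the actual schema is defined by the files in ./configs.

    # Hypothetical configuration sketch (illustrative field names, not the
    # repository's actual schema): one block per distillation part, each with
    # its balancing weight lambda and part-specific parameters.
    distiller = dict(
        feature=dict(lambda_weight=1.0),                  # feature-based part
        label_wise_embedding=dict(lambda_weight=10.0),    # label-wise embedding part
        logits=dict(lambda_weight=1.0, temperature=4.0),  # logits-based part
    )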


