The GitHub link is https://github.com/xyuan01/self-supervised-noise2noise-for-ldct
This GitHub repository, named “Self-supervised Noise2Noise for LDCT,” presents a self-supervised method for denoising low-dose computed tomography (LDCT) images. The code is built on PyTorch (0.4.1), Torchvision (0.2.0), NumPy (1.14.2), Matplotlib (2.2.3), and Pillow (5.2.0). Training supports either noisy or clean targets, as well as validation on smaller datasets; the noise parameter can be adjusted, and CUDA can be enabled for GPU support. The repository also provides instructions for testing the denoiser with pre-trained models and test images, with options to customize the denoising parameters.
Note that, following the noisy-as-clean strategy, the corrupted inputs are generated from LDCT images rather than NDCT images.
To install the latest versions of all packages, follow the installation instructions in the repository.

Run python3 train.py -h for the list of optional arguments. By default, the model trains with noisy targets; to train with clean targets, use --clean-targets. To train and validate on smaller datasets, use the --train-size and --valid-size options. To plot statistics as the model trains, use --plot-stats; these plots are saved alongside checkpoints. By default, CUDA is not enabled; use the --cuda option if you have a GPU that supports it. The noise parameter is the maximum standard deviation of the added noise. Model checkpoints are saved automatically after every epoch.

To test the denoiser, provide test.py with a PyTorch model (.pt file) via the --load-ckpt argument and a test image directory via --data. The --show-output option specifies the number of noisy/denoised/clean montages to display on screen; to disable this, simply omit --show-output. Run python3 test.py -h for the list of optional arguments, or see examples/test.sh for an example.
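To make the noise parameter concrete, here is a minimal sketch of how a Noise2Noise-style corruption step can work, where the "noise parameter" is the maximum standard deviation of zero-mean Gaussian noise. The function name, default value, and image size below are illustrative assumptions, not code from the repository.

```python
import numpy as np

def corrupt(image, max_std=50.0, rng=None):
    """Add zero-mean Gaussian noise whose standard deviation is drawn
    uniformly from [0, max_std]; max_std plays the role of the "noise
    parameter" described above. (Sketch only, not the repository's code.)"""
    rng = np.random.default_rng() if rng is None else rng
    std = rng.uniform(0.0, max_std)
    noisy = image + rng.normal(0.0, std, size=image.shape)
    return np.clip(noisy, 0.0, 255.0)

# Noise2Noise-style pairing: both the network input and the training target
# are independently corrupted copies of the same underlying image. With the
# --clean-targets option, the target would instead be the image itself.
image = np.full((64, 64), 128.0)  # hypothetical 64x64 grayscale image
source = corrupt(image)
target = corrupt(image)
```

Training on such independently corrupted (source, target) pairs is what lets the denoiser learn without any clean ground truth, since the noise in the target averages out over the dataset.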
Although unsupervised approaches based on generative adversarial networks (GANs) offer a promising route to denoising without paired datasets, they struggle to surpass the performance limits of conventional GAN-based unsupervised frameworks without significantly modifying existing architectures or increasing the computational complexity of the denoiser.
However, because experts are unavailable in these locations, the data must be transferred to an urban healthcare facility (for AMD and glaucoma) or a terrestrial station (e.g., for SANS) for more precise disease identification.