- Session 1: Deep Learning -- Day 2 (Nov. 18), talks: 09:00-11:00 (5th floor Hall 1), poster session: 11:00-13:30
- Poster number: Mon04
Bahram Baloch (NUCES); Sateesh Kumar (NUCES); Sanjay Haresh (NUCES); Tahir Syed (National University of Computer and Emerging Sciences)
Deep Neural Networks (DNNs) usually suffer performance penalties when the label distribution is skewed. This phenomenon, class imbalance, is most often mitigated peripherally to the classification algorithm itself, usually by modifying the number of examples per class: oversampling at the expense of computational efficiency, and undersampling at the expense of statistical efficiency. In our solution, we combine discriminative feature learning with cost-sensitive learning to tackle the class-imbalance problem using a two-step loss function, which we call the Focused Anchors Loss (FAL). We evaluate FAL and its variant, the Focused Anchor Mean Loss (FAML), on 6 different datasets against traditional cross-entropy loss and observe a significant gain in balanced accuracy on all datasets. We also outperform time-costly re-sampling and ensemble methods such as SMOTE and Near Miss on 4 out of 6 datasets across F1-score, AUC-ROC, and balanced accuracy. We further extend our evaluation to the image domain, using long-tailed CIFAR-10, where our loss function consistently yields a significant improvement in accuracy. Finally, we test our loss function under extreme imbalance on a proprietary dataset and achieve a gain of 0.1 AUC-ROC over the baseline.
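The abstract does not spell out FAL's exact form, but the cost-sensitive half of the approach can be illustrated with a standard inverse-frequency-weighted cross-entropy, in which errors on minority classes incur a larger penalty than errors on majority classes. The function name and weighting scheme below are illustrative assumptions, not the paper's actual loss:

```python
import numpy as np

def weighted_cross_entropy(logits, labels, class_counts):
    """Cost-sensitive cross-entropy sketch: each class is weighted
    inversely to its frequency, so minority-class mistakes cost more.

    logits: (N, C) raw scores; labels: (N,) int class ids;
    class_counts: (C,) number of training examples per class.
    """
    # Inverse-frequency weights, normalized so a balanced dataset gives all 1s.
    weights = class_counts.sum() / (len(class_counts) * class_counts)
    # Numerically stable log-softmax.
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    # Negative log-likelihood of the true class, scaled by its class weight.
    per_example = -log_probs[np.arange(len(labels)), labels]
    return (weights[labels] * per_example).mean()

# Toy imbalanced problem: class 0 has 90 training examples, class 1 has 10,
# so errors on class 1 are weighted 9x more heavily than errors on class 0.
counts = np.array([90.0, 10.0])
logits = np.array([[2.0, 0.0], [0.0, 2.0]])
labels = np.array([0, 1])
loss = weighted_cross_entropy(logits, labels, counts)
```

This is only the cost-sensitive ingredient; the paper's contribution additionally involves discriminative feature learning (the "focused anchors"), whose formulation is not given in the abstract.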