🩻 Small Data, Big Impact: A Multi-Locale Bone Fracture Detection on an Extremely Limited Dataset Via
Crack-Informed YOLOv9 Variants

1Sukkur IBA University, Pakistan
2NTNU GjΓΈvik, Norway
3Linnaeus University, Sweden
2024 International Conference on Frontiers of Information Technology (FIT)

Abstract

Automated wrist fracture recognition has become a crucial research area due to the challenge of accurate X-ray interpretation in clinical settings without specialized expertise. With the development of neural networks, YOLO models have been extensively applied to fracture detection for computer-assisted diagnosis (CAD). However, detection models can struggle when trained on extremely small datasets, which is often the case in medical scenarios.

In this study, we use an extremely limited fracture dataset and hypothesize that the structural similarities between surface cracks and bone fractures allow YOLOv9 to transfer knowledge effectively. We show that pre-training YOLOv9 on surface cracks, rather than on COCO (the dataset YOLO models are typically pre-trained on), and then fine-tuning it on the fracture dataset yields substantial performance improvements.

We achieved state-of-the-art (SOTA) performance on the recent FracAtlas dataset, surpassing the previously established benchmark. Our approach improved the mean average precision (mAP) score by 3%, precision by 5%, and sensitivity by 6%.

Key Innovation

πŸ”— Crack-Informed Transfer Learning:

Instead of traditional COCO pre-training, we leverage the structural similarities between surface cracks and bone fractures to achieve better transfer learning. This novel approach demonstrates that domain-specific pre-training can significantly outperform general-purpose pre-training in medical imaging tasks.
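The sketch below illustrates the idea in plain PyTorch: weights pre-trained on a surface-crack dataset are copied into a YOLOv9 model before fine-tuning on FracAtlas. It is a minimal illustration, not the paper's training code; the checkpoint layout, file names, and the model construction (assumed to come from the official YOLOv9 repository) are assumptions.

```python
import torch
import torch.nn as nn


def transfer_crack_weights(model: nn.Module, crack_ckpt_path: str) -> nn.Module:
    """Initialise a YOLOv9 model from weights pre-trained on surface cracks.

    Only tensors whose names and shapes match are copied; anything that does not
    match (e.g. a detection head sized for a different number of classes) keeps
    its fresh initialisation and is learned while fine-tuning on FracAtlas.
    """
    ckpt = torch.load(crack_ckpt_path, map_location="cpu")
    # YOLOv5/YOLOv9-style checkpoints usually store the module under the 'model'
    # key; otherwise treat the file as a plain state_dict.
    state = ckpt.get("model", ckpt) if isinstance(ckpt, dict) else ckpt
    if isinstance(state, nn.Module):
        state = state.float().state_dict()

    target = model.state_dict()
    compatible = {k: v for k, v in state.items()
                  if k in target and v.shape == target[k].shape}
    model.load_state_dict(compatible, strict=False)
    print(f"Transferred {len(compatible)}/{len(target)} tensors from crack pre-training")
    return model


# Usage (placeholder names): build YOLOv9-E from the official repository, then
# model = transfer_crack_weights(model, "yolov9e_surface_cracks.pt")
# and fine-tune on the single FracAtlas 'fracture' class as usual.
```

In the two-stage pipeline described above (stage 1: train on surface cracks, stage 2: fine-tune on FracAtlas), this kind of weight transfer takes the place of the usual COCO initialisation.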

πŸ“ˆ Performance Gains

+3% mAP improvement
+5% precision gain
+6% sensitivity boost
Faster convergence

YOLOv9 Architecture


YOLOv9 Architecture: Our crack-informed approach leverages the structural similarities between surface cracks and bone fractures for enhanced transfer learning.

Keywords: YOLOv9, PyTorch, Transfer Learning, Medical Imaging, Computer Vision, Object Detection, X-ray Analysis, CUDA

Experimental Results

Key Finding: Crack-Informed YOLOv9 variants consistently outperform their default counterparts across all metrics while converging in significantly fewer epochs and requiring less training time.

Performance Comparison: Default vs Crack-Informed YOLOv9

| Variant | Test mAP@50 (Default → Crack-Informed) | Val mAP@50 (Default → Crack-Informed) | Convergence Epoch (Default → Crack-Informed) | Training Time (Default → Crack-Informed) |
|---------|----------------------------------------|---------------------------------------|----------------------------------------------|------------------------------------------|
| M       | 0.38 → 0.49 | 0.44 → 0.57 | 320 → 123 | 5.68 h → 1.89 h |
| C       | 0.52 → 0.53 | 0.53 → 0.58 | 174 → 108 | 2.60 h → 1.99 h |
| E       | 0.45 → 0.60 | 0.47 → 0.59 | 174 → 84  | 2.69 h → 1.32 h |
| GELAN   | 0.43 → 0.54 | 0.52 → 0.61 | 150 → 174 | 3.23 h → 2.22 h |
| GELAN-C | 0.37 → 0.49 | 0.60 → 0.61 | 174 → 151 | 3.24 h → 1.65 h |
| GELAN-E | 0.50 → 0.51 | 0.59 → 0.63 | 262 → 177 | 3.59 h → 2.10 h |
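The tables in this section report mAP@50, i.e. mean average precision at an IoU threshold of 0.50. As a reference for how such a number can be computed from predictions and ground truth, here is a minimal sketch using the torchmetrics library (an assumption on our side, not the evaluation code used in the paper; torchmetrics needs pycocotools installed). The boxes are toy values.

```python
import torch
from torchmetrics.detection.mean_ap import MeanAveragePrecision

# Toy example: one image with a single ground-truth fracture box (xyxy, pixels)
# and two predicted boxes with confidence scores. Class 0 = 'fracture'.
preds = [{
    "boxes": torch.tensor([[50.0, 60.0, 120.0, 150.0],
                           [200.0, 40.0, 260.0, 90.0]]),
    "scores": torch.tensor([0.87, 0.31]),
    "labels": torch.tensor([0, 0]),
}]
targets = [{
    "boxes": torch.tensor([[48.0, 58.0, 118.0, 152.0]]),
    "labels": torch.tensor([0]),
}]

metric = MeanAveragePrecision(box_format="xyxy")
metric.update(preds, targets)
result = metric.compute()
print(f"mAP@50 = {result['map_50']:.3f}")  # the quantity reported in these tables
```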

YOLOv9-E: Comprehensive Method Comparison

| Training Instances | Method | Test mAP@50 | Val mAP@50 | Test Precision | Test Sensitivity |
|--------------------|--------|-------------|------------|----------------|------------------|
| 574  | Default                            | 0.45 | 0.47 | 0.64 | 0.50 |
| 574  | Crack-Informed (Ours)              | 0.60 | 0.59 | 0.89 | 0.51 |
| 574  | Pre-trained on COCO                | 0.45 | 0.54 | 0.61 | 0.44 |
| 1148 | Brightness + Contrast Augmentation | 0.47 | 0.49 | 0.61 | 0.45 |
| 5740 | Albumentations Augmentation        | 0.25 | 0.37 | 0.45 | 0.30 |
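For context on the two augmentation baselines in the table, a typical Albumentations pipeline for YOLO-format boxes looks like the sketch below. The specific transforms, limits, probabilities, and file paths are illustrative assumptions, not the exact configuration used in the paper.

```python
import albumentations as A
import cv2

# Illustrative pipeline: brightness/contrast jitter plus a horizontal flip,
# keeping YOLO-format bounding boxes in sync with the transformed image.
augment = A.Compose(
    [
        A.RandomBrightnessContrast(brightness_limit=0.2, contrast_limit=0.2, p=0.8),
        A.HorizontalFlip(p=0.5),
    ],
    bbox_params=A.BboxParams(format="yolo", label_fields=["class_labels"]),
)

image = cv2.imread("xray.png")          # placeholder path
bboxes = [[0.52, 0.47, 0.18, 0.22]]     # (x_center, y_center, w, h), normalised
class_labels = [0]                      # 0 = 'fracture'

out = augment(image=image, bboxes=bboxes, class_labels=class_labels)
aug_image, aug_bboxes = out["image"], out["bboxes"]
```

As the table shows, simply enlarging the training set with such augmentations did not close the gap to crack-informed pre-training.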

Visual Results


FracAtlas Dataset Comparison: Comprehensive performance comparison showing the superiority of our crack-informed approach across different metrics and datasets.


Fracture Detection Results: Real-world X-ray images showing accurate bone fracture localization using our Crack-Informed YOLOv9 approach.

Dataset & Implementation

πŸ“Š FracAtlas Dataset

Multi-locale bone fracture dataset with an extremely limited number of training instances. Dataset split: 70/20/10% for train/validation/test (a minimal split sketch is shown below).

Download Dataset
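A minimal sketch of such a 70/20/10 split is given below; the directory layout, file extension, and random seed are placeholder assumptions.

```python
import random
from pathlib import Path


def split_dataset(image_dir: str, seed: int = 0) -> dict:
    """Shuffle the FracAtlas images and split them 70/20/10 into train/val/test."""
    images = sorted(Path(image_dir).glob("*.jpg"))  # adjust the extension if needed
    random.Random(seed).shuffle(images)

    n = len(images)
    n_train, n_val = int(0.7 * n), int(0.2 * n)
    return {
        "train": images[:n_train],
        "val": images[n_train:n_train + n_val],
        "test": images[n_train + n_val:],  # remaining ~10%
    }


splits = split_dataset("FracAtlas/images")  # placeholder path
print({name: len(files) for name, files in splits.items()})
```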

πŸ› οΈ System Requirements

Linux (Ubuntu), Python 3.9, PyTorch 1.13.1, NVIDIA GPU with CUDA and cuDNN

pip install -r requirements.txt
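After installing the requirements, a quick sanity check with plain PyTorch confirms that the expected version and the GPU are visible:

```python
import torch

# Expecting PyTorch 1.13.1 with a CUDA-enabled GPU, per the requirements above.
print("torch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```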

πŸ‹οΈ Pre-trained Weights

Download our crack-informed YOLOv9-E weights trained on surface cracks and fine-tuned on fracture data.

Download Weights

πŸš€ Quick Start

Clone repository, install dependencies, download weights, and start training or inference on your fracture detection task.

View Code

Key Contributions

🎯 Novel Transfer Learning

First work to demonstrate that crack-informed pre-training significantly outperforms COCO pre-training for medical fracture detection tasks.

πŸ† State-of-the-Art Results

Achieved SOTA performance on FracAtlas dataset with significant improvements in mAP, precision, and sensitivity metrics.

⚑ Efficient Training

Demonstrated faster convergence and reduced training time while maintaining superior performance across all YOLOv9 variants.

🌍 Multi-Locale Validation

Comprehensive evaluation of fractures across multiple locales, demonstrating the generalizability of our approach.

Citation

If you find our work useful in your research, please consider citing:

Full Citation:
A. Ahmed, A. Manaf, A. S. Imran, Z. Kastrati, and S. M. Daudpota, "Small Data, Big Impact: A Multi-Locale Bone Fracture Detection on an Extremely Limited Dataset Via Crack-Informed YOLOv9 Variants," 2024 International Conference on Frontiers of Information Technology (FIT), pp. 1–6, Dec. 2024, doi: 10.1109/fit63703.2024.10838409.

Paper: IEEE Xplore
Code: GitHub Repository
Weights: Figshare

Acknowledgments

This research contributes to the advancement of AI-assisted medical diagnosis, specifically in bone fracture detection using computer vision techniques. We acknowledge the open-source YOLOv9 community and the FracAtlas dataset creators for making this research possible.