Providing timely assistance to flood-affected regions is a critical challenge, and leveraging deep learning methodologies has shown great promise in addressing such environmental crises. While several studies have proposed methodologies for classifying flood images, most are limited by two key factors: models are typically trained on images from specific geographic regions, restricting generalizability; and many models are trained exclusively on high-resolution images, overlooking low-resolution classification.
To address these gaps, we curated a dataset by combining existing benchmark datasets and acquiring images from web repositories. Our comparative analysis of various deep learning models based on CNN architectures demonstrated that MobileNet and Xception outperformed ResNet-50, VGG-16, and InceptionV3, achieving an accuracy rate of approximately 98% and an F1-score of 92% for the flood class. Additionally, we employed Explainable AI (XAI) techniques, specifically LIME, to interpret model results.
- MobileNet: 98.2% accuracy, 92.1% F1-score (flood class); lightweight and efficient
- Xception: 97.8% accuracy, 91.5% F1-score (flood class); advanced architecture
- Explainability: LIME integration provides visual explanations and decision transparency
- Dataset: geographic diversity and resolution robustness for high generalizability
Dataset and Methodology Overview: Our comprehensive approach combining multi-source datasets with state-of-the-art CNN architectures for flood detection
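As a minimal sketch of this setup (assuming a TensorFlow/Keras pipeline; the `merged_dataset/` directory name, `flood`/`non_flood` class folders, and split parameters are illustrative, not the repository's actual layout), the combined images can be loaded at the 256x256 resolution used by the notebooks:

```python
import tensorflow as tf

IMG_SIZE = (256, 256)   # resolution used by the notebooks
BATCH_SIZE = 32

# Hypothetical layout: merged_dataset/{flood,non_flood}/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "merged_dataset",
    validation_split=0.2,
    subset="training",
    seed=42,
    image_size=IMG_SIZE,
    batch_size=BATCH_SIZE,
    label_mode="binary",
    class_names=["non_flood", "flood"],   # 0 = non-flood, 1 = flood
)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "merged_dataset",
    validation_split=0.2,
    subset="validation",
    seed=42,
    image_size=IMG_SIZE,
    batch_size=BATCH_SIZE,
    label_mode="binary",
    class_names=["non_flood", "flood"],
)
```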
Comprehensive Evaluation: We conducted extensive experiments comparing multiple CNN architectures across different datasets, with and without data augmentation, using both pretrained and from-scratch training approaches.
| Model | Dataset | Accuracy | F1-Score (Flood) | Precision | Recall |
|---|---|---|---|---|---|
| MobileNet | A | 98.2% | 92.1% | 91.8% | 92.4% |
| MobileNet | B | 98.0% | 91.8% | 91.5% | 92.1% |
| Xception | A | 97.8% | 91.5% | 91.2% | 91.8% |
| ResNet-50 | A | 96.5% | 89.2% | 88.9% | 89.5% |
| VGG-16 | A | 95.8% | 87.8% | 87.5% | 88.1% |
| InceptionV3 | A | 96.2% | 88.5% | 88.2% | 88.8% |
- Architectures evaluated: MobileNet, Xception, ResNet-50, VGG-16, InceptionV3, EfficientNet, ViT
- Data augmentation: 2x, 3x, and 4x variants
- train_10/: experiments trained with an early-stopping patience of 10 epochs
- without_pretrained/: models trained from scratch, without pretrained weights

A minimal training sketch covering these settings is shown below.
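The sketch below is illustrative rather than a copy of the repository code (the dropout rate, optimizer, and layer choices are assumptions). It shows a MobileNet backbone for binary flood classification with optional ImageNet initialization (the `without_pretrained/` case corresponds to `weights=None`) and early stopping with the 10-epoch patience mentioned above, reusing the `train_ds`/`val_ds` loaders from the dataset sketch.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (256, 256)

def build_mobilenet_classifier(pretrained=True):
    """Binary flood classifier on a MobileNet backbone.

    pretrained=False mirrors the without_pretrained/ experiments,
    i.e. the backbone is trained from scratch.
    """
    base = tf.keras.applications.MobileNet(
        input_shape=IMG_SIZE + (3,),
        include_top=False,
        weights="imagenet" if pretrained else None,
        pooling="avg",
    )
    model = models.Sequential([
        layers.Input(shape=IMG_SIZE + (3,)),
        layers.Rescaling(1.0 / 127.5, offset=-1.0),   # MobileNet expects inputs in [-1, 1]
        base,
        layers.Dropout(0.2),                          # illustrative regularization
        layers.Dense(1, activation="sigmoid"),        # flood vs. non-flood
    ])
    model.compile(
        optimizer="adam",
        loss="binary_crossentropy",
        metrics=["accuracy"],
    )
    return model

model = build_mobilenet_classifier(pretrained=True)

# Early stopping with the 10-epoch patience used in the train_10/ experiments.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=10, restore_best_weights=True
)
# model.fit(train_ds, validation_data=val_ds, epochs=100, callbacks=[early_stop])
```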
LIME Integration: We implemented Local Interpretable Model-agnostic Explanations (LIME) to provide transparency in our flood detection models, enabling stakeholders to understand the decision-making process. The explanations highlight the image regions that drive each prediction (a usage sketch follows the list below):
- Water bodies and flooded areas: main classification features
- Infrastructure damage: supporting evidence
- Vegetation changes: environmental context
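A minimal sketch of applying LIME to one of the trained classifiers, assuming the `lime` and `scikit-image` packages are installed and reusing the `model` from the training sketch above; `predict_fn`, the example filename, and the superpixel/feature counts are illustrative choices, not the repository's exact configuration:

```python
import numpy as np
import tensorflow as tf
from lime import lime_image
from skimage.segmentation import mark_boundaries

# Hypothetical example image; raw 0-255 pixels (rescaling happens inside the model).
sample_image = tf.keras.utils.img_to_array(
    tf.keras.utils.load_img("flood_example.jpg", target_size=(256, 256))
)

def predict_fn(images):
    """Wrap the Keras model so LIME receives per-class probabilities."""
    p_flood = model.predict(images.astype("float32"), verbose=0).ravel()  # sigmoid output
    return np.stack([1.0 - p_flood, p_flood], axis=1)                     # [non-flood, flood]

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    sample_image,
    predict_fn,
    top_labels=2,
    hide_color=0,
    num_samples=1000,   # perturbed samples; more samples give more stable explanations
)

# Overlay the superpixels that most support the "flood" class (index 1).
img, mask = explanation.get_image_and_mask(
    label=1, positive_only=True, num_features=5, hide_rest=False
)
overlay = mark_boundaries(img / 255.0, mask)
```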
- The first comprehensive dataset combining multiple geographic regions for improved generalization
- Robust performance across varying image qualities and resolutions
- Superior performance demonstrated with the efficient, lightweight MobileNet architecture
- Integration of LIME for decision transparency and model interpretability
- Rapid flood assessment for disaster management and relief operations
- Automated damage evaluation for insurance claims processing
- Flood risk analysis and mitigation planning for cities
- Long-term flood pattern analysis and climate change studies
jupyter notebook MobileNet_\(256X256\)_A.ipynb
jupyter notebook Xception_\(256X256\)_A.ipynb
jupyter notebook ResNet-50_\(256X256\)_A.ipynb
jupyter notebook XAI_models.ipynb
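For quick single-image inference outside the notebooks, a trained model can be used roughly as follows; the saved-model path and image filename are hypothetical, and the sketch assumes a model that takes raw 256x256 RGB input, as in the training sketch above:

```python
import numpy as np
import tensorflow as tf

# Hypothetical path; substitute the model saved by whichever notebook you ran.
model = tf.keras.models.load_model("saved_models/mobilenet_256x256_A.h5")

img = tf.keras.utils.load_img("example.jpg", target_size=(256, 256))
x = tf.keras.utils.img_to_array(img)[np.newaxis, ...]   # shape (1, 256, 256, 3)

p_flood = float(model.predict(x, verbose=0).ravel()[0])
print("flood" if p_flood >= 0.5 else "non-flood", f"(p_flood={p_flood:.3f})")
```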
Corresponding Author: Abdul Manaf
Email: abdulmanafsahito@gmail.com
Research Areas: Deep Learning, Computer Vision, Disaster Management, Explainable AI
Journal: IEEE Access
Volume: 13, Pages 35973-35984
Year: 2025
If you use this work in your research, please cite our IEEE Access paper. The complete classification implementation and pre-trained models are available in this repository for academic and research purposes.