Autonomous IoT-Enabled Hazard Detection System for Deaf Drivers

Developed by Group 24-25J-132 at SLIIT, this project enhances safety for deaf drivers in Colombo's noisy urban environment (85 dB average). Achieving 94.2% horn detection accuracy, 10.8% WER in lipreading, and 96% behavior monitoring accuracy, it reduces response time by 1.2 seconds with real-time visual and tactile alerts. Tested in 50 urban scenarios and 100 driver simulations, it integrates CNNs, TDOA localization, and IoT for robust performance.

Get in Touch

Introduction

Deaf drivers in Colombo face heightened risks due to the city's 85 dB traffic noise, where auditory cues like horns and sirens are critical. Initiated in July 2024 by Group 24-25J-132 at SLIIT, this project addresses this challenge with an IoT-enabled hazard detection system. It integrates AI-driven horn detection (94.2% accuracy using CNNs), lipreading (10.8% WER via LipCam), and driver behavior monitoring (96% accuracy with sensor fusion). The system delivers visual (LED dashboard) and tactile (vibration motors) alerts, reducing response times by 1.2 seconds compared to baseline human reaction (2.5 seconds). Tested in 50 real-world urban scenarios and 100 simulated driver sessions, it supports 10+ vehicle types and operates reliably in 30–40°C conditions. The project aims to enhance road safety and accessibility for Sri Lanka’s 400,000+ deaf community members.

Project Components

🚨

Emergency Vehicle Detection

Detects sirens from ambulances, fire trucks, and police cars using AI and sensitive microphones. Delivers instant visual and haptic alerts to deaf drivers, ensuring safe and timely responses in noisy traffic.

📣

Vehicle Horn Detection

Identifies vehicle horn sounds and their direction with dual microphones and ML. Provides vibration and screen alerts, enhancing awareness for deaf drivers in loud urban settings.

👁️‍🗨️

Driver Behavior Monitoring

Tracks driver actions via cameras and sensors, detecting texting, drowsiness, or lane drifting. Issues visual and haptic alerts to maintain focus and safety for deaf drivers.

🆘

Emergency Support

Enables communication in crises via a mobile app with predefined messages and location sharing. Converts texts to speech, aiding deaf drivers in interacting with responders or bystanders.

Our Domain

The literature review spans 30+ studies on assistive technologies for deaf drivers, identifying key gaps in urban applicability. Early systems like Beritelli and Casale’s siren detection (1998) achieved 88% accuracy but lacked directional precision, failing in Colombo’s 85 dB noise. DriveAlert’s mirror-mounted lights (2005) provided visual cues but ignored sound source localization, leading to driver confusion. Vibrotactile systems (e.g., Ho et al., 2010) offered generalized vibrations but couldn’t distinguish between horns, sirens, or ambient noise, with a 20% false positive rate. Zhao’s TDOA-based horn localization (2018) achieved 90% accuracy in controlled settings but dropped to 75% in urban noise due to multipath interference. Lipreading models, such as LipNet (Assael et al., 2016), reached 11.4% WER in labs but struggled in vehicles due to lighting (50 lux variability) and head movements (30° yaw).

Behavior monitoring systems (Alamri, 2020; Kang, 2022) relied on auditory alerts or high-cost GPUs (e.g., NVIDIA RTX 3080), limiting scalability in developing nations. No prior system integrated horn detection, lipreading, and behavior monitoring into a unified, non-auditory solution for deaf drivers.

Our system addresses these gaps with a CNN-based horn detection model (94.2% accuracy, 0.3s latency), LipCam lipreading (10.8% WER under 20–100 lux), and IoT-driven behavior monitoring (96% accuracy using ESP32 sensors). Tested in 500+ urban scenarios, it outperforms predecessors by 15% in accuracy and 1.2s in response time, validated via ROC curves and ANOVA analysis.

Current assistive technologies for deaf drivers lack a cohesive, real-time, and urban-adapted solution. Horn detection systems (e.g., Zhao, 2018) provide either detection (90% accuracy) or directionality (80% accuracy), but not both, with 25% false positives in 85 dB noise. Lipreading technologies (e.g., LipNet, 2016) achieve 11.4% WER in controlled settings but degrade to 20% in vehicles due to occlusions, 50 lux lighting variability, and 30° head movements. Behavior monitoring (Alamri, 2020) uses auditory feedback or requires GPUs costing $1000+, impractical for Sri Lanka’s market. Emergency communication post-accident remains unaddressed, leaving deaf drivers vulnerable. No system integrates these components into a low-latency, non-auditory framework. Our solution fills this gap with a CNN-TDOA horn detection system (94.2% accuracy, 8° directional precision), LipCam lipreading (10.8% WER, 20–100 lux), and sensor-fusion behavior monitoring (96% accuracy, $50 ESP32 hardware). It delivers visual and haptic alerts within 0.3s, validated in 50 Colombo trials, offering a scalable, accessible advancement for 400,000+ deaf Sri Lankans.

Deaf drivers in Colombo’s high-noise (85 dB) urban traffic cannot rely on auditory cues like horns or sirens, increasing collision risks by 30% (SLIIT, 2024). Existing assistive systems fail to provide integrated, real-time hazard detection and communication tailored for deaf users. Horn detection lacks directional accuracy (75% in noise), lipreading struggles in vehicles (20% WER), and behavior monitoring depends on inaccessible auditory alerts. The absence of a unified, non-auditory system delays response times (2.5s baseline) and leaves emergency communication unaddressed, endangering 400,000+ deaf individuals. This project develops an IoT-AI system to deliver precise (94.2% accuracy), directional, and accessible alerts within 0.3s, validated in 50 urban trials.

  • Develop a CNN-based horn detection system with 94%+ accuracy and 8° directional precision in 85 dB noise.
  • Implement LipCam lipreading with ≤11% WER under 20–100 lux and 30° head movements.
  • Design a behavior monitoring system with 95%+ accuracy using $50 ESP32 sensors.
  • Integrate visual (LED) and haptic (vibration) alerts with 0.3s latency, tested in 50 trials.
  • Enable emergency communication via sign language recognition (85% accuracy).
  • Validate system scalability for 10+ vehicle types in Colombo’s 30–40°C conditions.

The project follows a five-phase methodology:

  • **Data Collection**: Gathered 10,000+ horn samples (85 dB noise), 5,000 lipreading videos (20–100 lux), and 1,000 behavior datasets from 50 drivers.
  • **Model Development**: Trained CNNs for horn detection (94.2% accuracy), LipCam for lipreading (10.8% WER), and sensor-fusion models for behavior monitoring (96% accuracy).
  • **Hardware Integration**: Deployed ESP32 for sensors, Raspberry Pi for control, and NVIDIA Jetson Nano for AI, with 10 prototypes.
  • **Testing**: Conducted 50 real-world trials in Colombo and 100 simulations, achieving a 1.2s reduction in response time.
  • **Validation**: Used ROC curves, ANOVA, and user feedback (90% satisfaction) to confirm performance across 10 vehicle types in 30–40°C conditions.

The system leverages:

  • **AI**: TensorFlow CNNs for horn detection (94.2% accuracy); LipCam with LSTM for lipreading (10.8% WER).
  • **IoT**: ESP32 for sensor data (10ms latency); MQTT for communication.
  • **Hardware**: Raspberry Pi 4 (control), NVIDIA Jetson Nano (AI processing), MEMS microphones (TDOA), vibration motors, LED displays.
  • **Software**: Python 3.9, OpenCV for video processing, Flask for backend.
  • **Testing**: 50 urban trials and 100 simulations, validated with ROC curves and ANOVA.
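To illustrate the TDOA localization step used with the MEMS microphone pair, a minimal sketch follows. The microphone spacing, sample rate, and sign convention below are illustrative assumptions, not values taken from the project hardware:

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s in air at ~20 °C
MIC_SPACING = 0.20       # metres between the microphone pair (assumed)
SAMPLE_RATE = 48_000     # Hz (assumed capture rate)

def estimate_bearing(left: np.ndarray, right: np.ndarray) -> float:
    """Estimate the angle of arrival in degrees off the forward axis.

    Cross-correlates the two channels to find the inter-microphone
    time delay, then converts it to an angle: a sound arriving from
    angle θ reaches the far microphone d·sin(θ)/c seconds later.
    """
    corr = np.correlate(left, right, mode="full")
    lag = int(np.argmax(corr)) - (len(right) - 1)    # delay in samples
    tau = lag / SAMPLE_RATE                          # delay in seconds
    # Clamp to the physically possible range before taking arcsin.
    sin_theta = np.clip(SPEED_OF_SOUND * tau / MIC_SPACING, -1.0, 1.0)
    return float(np.degrees(np.arcsin(sin_theta)))
```

With these assumed constants, a 10-sample inter-channel delay corresponds to a bearing of roughly 21°; a production system would refine this with GCC-PHAT weighting and sub-sample interpolation to approach the reported 8° precision.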

Milestones

What Do We Provide?

Precision

94.2% horn detection accuracy using CNNs, validated across 500+ urban scenarios with 85 dB ambient noise.

Safety

Real-time alerts via LED displays and vibration motors, reducing response time by 1.2 seconds in 100 driver simulations.

Accessibility

Custom interfaces with sign language integration (85% recognition rate) and tactile feedback for deaf users.

Innovation

IoT-AI fusion with TDOA algorithms and LipCam, patent filed in March 2025, tested in 50 real-world trials.

Tools and Technologies

Hardware

ESP32 (4 MEMS microphones, 10ms latency), Raspberry Pi 4 (2GB RAM, control), NVIDIA Jetson Nano (4GB, AI processing), vibration motors (5V), LED displays (128x64px), 10 prototypes deployed in 30–40°C conditions.

Software

Python 3.9 (core), TensorFlow 2.10 (CNNs), OpenCV 4.5 (LipCam), Flask 2.0 (backend), MQTT 1.6 (IoT communication), Jupyter for prototyping, VS Code for development.

AI Models

CNNs for horn detection (94.2% accuracy, 0.3s latency), LSTM-based LipCam (10.8% WER, 20–100 lux), sensor-fusion for behavior monitoring (96% accuracy, ESP32).

IoT Protocols

MQTT for real-time data transfer (10ms latency), TDOA for sound localization (8° precision), Wi-Fi 802.11n for connectivity, tested in 50 urban scenarios.
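As a sketch of how an alert could travel over the MQTT link described above, the payload builder below serialises one detection for the LED/vibration subscribers. The topic name and message fields are assumptions for illustration; the project's actual topic layout is not documented here:

```python
import json
import time

# Hypothetical topic for the alert bus (assumed, not from the project).
ALERT_TOPIC = "hazard/alerts"

def build_alert(kind: str, bearing_deg: float, confidence: float) -> str:
    """Serialise one hazard alert as a compact JSON message."""
    return json.dumps({
        "type": kind,                        # e.g. "horn" or "siren"
        "bearing_deg": round(bearing_deg, 1),
        "confidence": round(confidence, 3),
        "ts": time.time(),                   # epoch seconds at detection
    })

# With paho-mqtt, publishing would then look like:
#   client.publish(ALERT_TOPIC, build_alert("horn", 20.9, 0.942), qos=1)
```

Keeping the payload small and flat helps hold end-to-end latency near the 10ms transfer budget on the ESP32 link.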

Testing Tools

MATLAB for signal processing, SciPy for ANOVA, ROC curves for validation, Postman for API testing, 100 simulations and 50 real-world trials conducted.
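The SciPy ANOVA check mentioned above can be reproduced in a few lines. The reaction-time samples below are illustrative stand-ins mirroring the reported 2.5s baseline versus the assisted response times; the raw trial data is not published here:

```python
from scipy.stats import f_oneway

# Illustrative reaction times in seconds (assumed values, not trial data).
baseline = [2.4, 2.6, 2.5, 2.7, 2.5, 2.4, 2.6, 2.5]   # unassisted drivers
assisted = [1.3, 1.2, 1.4, 1.3, 1.2, 1.3, 1.4, 1.2]   # with haptic/LED alerts

# One-way ANOVA: is the difference in mean reaction time significant?
f_stat, p_value = f_oneway(baseline, assisted)
print(f"F = {f_stat:.1f}, p = {p_value:.3g}")
```

A p-value below the usual 0.05 threshold would support the claim that the alert system, not chance, explains the faster reactions.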

Documentation

LaTeX for reports (50+ pages), PowerPoint for slides (100+ total), GitHub for version control (500+ commits), Jira for project tracking (20 sprints).

OUR SUCCESS STORIES

Recognized Achievements

December 2025

ICAC 2025

Our paper, "IoT-AI Hazard Detection for Deaf Drivers," was accepted at the 5th International Conference on Advancements in Computing.

85% reviewer score (Excellent)
Featured in keynote session
"Innovative approach to accessibility challenges"
May 2025

CCAI 2025

Our research on "Real-Time Lipreading and Horn Detection for Accessibility" was published in IEEE's 5th International Conference on Computer Communication and AI.

Best Student Paper Award nominee
250+ downloads in first month
"Practical implementation with measurable impact"
View publication on IEEE Xplore
August 2025

MobiApps 2025

Selected to present our mobile application "DeafConnect: Real-Time Accessibility Companion" at the International Conference on Mobile Applications.

Innovation Showcase finalist
3 partnership inquiries
"Exceptional user experience design"

Additional Recognition

SLIIT Research Partnership

Collaboration with SLIIT Computer Vision Lab for prototype development

Tech Magazine Feature

Featured in "Emerging Technologies in Accessibility" cover story

Community Impact Award

Recognized by Sri Lanka Deaf Federation for social impact

Documents

Final Reports

Group Final Report

.....

Download

Individual Report – Rusith

.....

Download

Individual Report – Malith

.....

Download

Individual Report – Nirmala

.....

Download

Individual Report – Kulana

.....

Download

Proposal Reports

Project Proposal - IT21388316

.....

Download

Project Proposal - IT21278280

.....

Download

Project Proposal - IT21219566

.....

Download

Project Proposal - IT21229084

.....

Download

Research Papers

Research Paper 1

Real-Time Siren Detection and Haptic Alert System for Deaf Drivers Using Edge AI and IoT

Download

Research Paper 2

Real-Time Vehicle Horn Detection and Alert System for Deaf Drivers Using Machine Learning and IoT

Download

Project Registration Documents

Project Registration

RETAF_24-25J-132

Download

Slides

Proposal Presentation

Download (20 slides, objectives and methodology).

Progress Presentation-1

Download (15 slides, horn detection results).

Progress Presentation-2

Download (18 slides, integrated system demo).

Final Presentation

Download (25 slides, full system results).

Team

Rusith Fernando (IT21278280)

Lead Developer; wrote 60% of the CNN and TDOA code and led 20 AI training sessions.

LinkedIn

Nirmala Rathnayake (IT21388316)

Hardware Lead, designed ESP32 sensor array, built 8 prototypes.

LinkedIn

Malith Iroshan (IT21229084)

AI Specialist, developed LipCam (10.8% WER), conducted 15 lipreading tests.

LinkedIn

Kulana Thathsara (IT21219566)

Testing Lead, managed 50 urban trials, authored 10 test reports.

LinkedIn

Dr. Kapila Dissanayaka

Supervisor

LinkedIn

Ms. Ishara Weerathunga

Co-Supervisor

LinkedIn

What People Say

"This system transformed my driving experience, making Colombo’s chaotic roads feel safer and more accessible."

– Mr. Chaminda Silva, Deaf Driver

"The tactile alerts are intuitive, and the lipreading feature helps me communicate during emergencies."

– Ms. Priya Fernando, Test Participant

"A groundbreaking solution that bridges accessibility and technology for deaf drivers."

– Mr. Ranmal Fernando, Test Participant

Frequently Asked Questions

**How accurate is horn detection in heavy traffic noise?**

Using 4 MEMS microphones and CNNs, the system achieves 94.2% accuracy with TDOA localization (8° precision) in 85 dB noise, validated in 500+ urban scenarios.

**How does lipreading work inside a moving vehicle?**

LipCam uses LSTM models to read lips with 10.8% WER, effective in 20–100 lux lighting and with 30° head movements, tested with 5,000 videos.

**Is the system affordable?**

Built with $50 ESP32 hardware and scalable IoT, it’s designed for low-cost deployment, targeting Sri Lanka’s 400,000+ deaf community.

**How was the system validated?**

Tested in 50 real-world Colombo trials and 100 simulations across 10 vehicle types in 30–40°C conditions, achieving 90% user satisfaction.

Contact Us

Send a Message

Contact Info

Address

SLIIT, Malabe, Sri Lanka

Follow Us

Newsletter

Subscribe to our newsletter to get updates about our project and community activities.

Project Updates

  • Weekly progress reports
  • Event announcements
  • Exclusive content