Inspiration
Every year, 1 in 4 Americans over 65 experiences a fall. That's over 36 million people, and 3 million of them end up in emergency rooms. For many families, the fear of a loved one falling alone at home is constant. Current solutions like Life Alert cost $50/month, require cloud connectivity, and upload intimate video footage to corporate servers. We asked ourselves: what if fall detection could be private, local, and actually intelligent?

As engineering students with elderly family members, this problem hits close to home. We wanted to build something that respects privacy while providing real peace of mind — not another subscription service that profits from fear. UnderWatch is our answer: a fully local, edge-AI fall detection system that runs entirely on the Arduino UNO Q. No cloud. No subscriptions. No video uploads. Just protection.
What it does
UnderWatch continuously monitors for falls using computer vision and machine learning — all processed locally on the device. When a fall is detected, it triggers a 3-stage escalation system that gives everyone a chance to respond.

First, the elder themselves gets 30 seconds. A buzzer sounds, the LED matrix flashes a warning, and they can press a physical "I'm OK" button to dismiss everything. No notification goes out; we're giving them time to recover on their own. If they don't dismiss, their family gets notified via push notification. Family members have 60 seconds to check in, and if the system detects the person standing back up, it automatically adds more time; we're not trying to trigger false alarms. Only if nobody responds does the system contact emergency services, sharing location and relevant health information.

The key innovation is what doesn't happen: the only thing that ever leaves the device is a tiny notification trigger. No video, no images, no personal data touches any cloud server. Privacy isn't a feature we added — it's how we built it from the ground up.
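The escalation above is essentially a small state machine. A minimal sketch of that logic in Python, with a fake clock so the timing is explicit — class and method names here are illustrative, not our actual code, and the time windows are the ones described above:

```python
import time
from enum import Enum, auto

class Stage(Enum):
    IDLE = auto()
    ELDER_GRACE = auto()   # Stage 1: 30 s for the elder to press "I'm OK"
    FAMILY_ALERT = auto()  # Stage 2: 60 s for family to check in
    EMERGENCY = auto()     # Stage 3: contact emergency services

class Escalation:
    ELDER_WINDOW = 30      # seconds before family is notified
    FAMILY_WINDOW = 60     # seconds before emergency services are contacted
    STANDUP_BONUS = 30     # extra time granted when the person stands back up

    def __init__(self, now=time.monotonic):
        self.now = now
        self.stage = Stage.IDLE
        self.deadline = None

    def on_fall_detected(self):
        self.stage = Stage.ELDER_GRACE
        self.deadline = self.now() + self.ELDER_WINDOW

    def on_dismiss_button(self):
        # The physical "I'm OK" button cancels everything; no notification goes out.
        self.stage = Stage.IDLE
        self.deadline = None

    def on_person_standing(self):
        # Detected recovery extends the family window instead of escalating.
        if self.stage == Stage.FAMILY_ALERT:
            self.deadline += self.STANDUP_BONUS

    def tick(self):
        # Called periodically; advances the stage when a deadline passes.
        if self.deadline is not None and self.now() >= self.deadline:
            if self.stage == Stage.ELDER_GRACE:
                self.stage = Stage.FAMILY_ALERT   # push notification fires here
                self.deadline = self.now() + self.FAMILY_WINDOW
            elif self.stage == Stage.FAMILY_ALERT:
                self.stage = Stage.EMERGENCY      # contact emergency services
                self.deadline = None
        return self.stage
```

Driving the clock by hand makes the behavior easy to unit-test: a dismiss at any point returns the machine to `IDLE`, and a stand-up during Stage 2 pushes the emergency deadline out.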
How we built it
We fully leveraged the Arduino UNO Q's dual-brain architecture. The board has two processors, and we used both of them for exactly what they're designed for.

The Qualcomm QRB2210 is the brain — a quad-core Arm processor running Debian Linux with 4GB of RAM. This handles all the machine learning inference and detection logic. Our Edge Impulse model runs here, analyzing camera frames in real time to detect falls.

The STM32U585 is the reflexes — a microcontroller running Zephyr RTOS that handles real-time control. It drives the servos for camera tracking, manages the buzzer patterns, updates the LED matrix, and monitors the physical dismiss button. These tasks need precise timing that you can't guarantee on a Linux system.

The two processors communicate via RouterBridge, passing detection results from the MPU down to the MCU and button events back up. It's a clean separation of concerns: intelligence on one side, real-time response on the other.

For the ML model, we used Edge Impulse to train a custom image classification model that achieved roughly 93% accuracy on our validation set. The model runs as a native .eim file optimized for the Qualcomm silicon, so inference is fast enough for real-time detection.

Notifications go through ntfy.sh, a privacy-respecting push service that we chose specifically because it can be self-hosted. The notification payload is tiny — just "fall detected" and a timestamp. All the actual video and health data stays on the device.
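Because ntfy accepts a plain HTTP POST, the entire notification path can be sketched in a few lines. The topic name below is a placeholder (a self-hosted ntfy server would swap in its own URL), and the helper names are ours for illustration — the point is that this short string is the only data that ever leaves the device:

```python
import time
import urllib.request

# Placeholder topic on the public server; a self-hosted ntfy instance works the same way.
NTFY_URL = "https://ntfy.sh/underwatch-demo-topic"

def build_alert(ts=None):
    """Build the tiny payload: just 'fall detected' and a timestamp."""
    ts = int(time.time()) if ts is None else ts
    body = "fall detected at " + time.strftime("%Y-%m-%d %H:%M:%S", time.gmtime(ts))
    headers = {
        "Title": "UnderWatch alert",
        "Priority": "urgent",   # ntfy priority header
        "Tags": "warning",
    }
    return body.encode(), headers

def send_alert():
    # No video, no images: only the short string above goes over the network.
    body, headers = build_alert()
    req = urllib.request.Request(NTFY_URL, data=body, headers=headers, method="POST")
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.status
```

Separating payload construction from the network call also makes the privacy claim testable: you can assert exactly what `build_alert` produces without ever opening a socket.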
Challenges we ran into
Training the ML model was harder than we expected. Our first approach was to create one model that could classify everything: person, non-person, fallen, not-fallen. This didn't work well at all. The "non-person" and "not-fallen" classes don't have consistent visual patterns; they're just "everything else," so the model couldn't learn them reliably.

We also tried running two image classification models simultaneously, hoping one could detect people and another could detect falls. The Arduino App Lab bricks weren't optimized for this use case, and we ran into issues getting both models to share the camera input properly. At one point we explored object detection instead of image classification, thinking bounding boxes would help us track the person's position. But the object detection brick required camera input in a format we couldn't easily provide, and we were running low on time.

The breakthrough came when we simplified. Instead of trying to classify every possible state, we focused on what actually matters: detecting falls with high confidence. We retrained on a cleaner dataset, tuned the confidence thresholds, and accepted that some edge cases would need the physical button as a fallback.

On the hardware side, we discovered that standard Arduino libraries like Servo.h don't work on the UNO Q's Zephyr-based MCU. We had to rewrite servo control using manual PWM and learn the RouterBridge API from scratch; the documentation is still sparse since this board is so new.
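The manual-PWM workaround boils down to standard hobby-servo math: a ~50 Hz frame with a pulse between roughly 1 and 2 ms encoding the angle. Sketched in Python for clarity (the real code runs on the Zephyr MCU, and these constants are typical servo values, not necessarily the exact ones in our firmware):

```python
PWM_FREQ_HZ = 50                       # standard hobby-servo frame rate (20 ms period)
PERIOD_US = 1_000_000 // PWM_FREQ_HZ   # 20,000 us per frame
MIN_PULSE_US = 1000                    # typical pulse width at 0 degrees
MAX_PULSE_US = 2000                    # typical pulse width at 180 degrees

def angle_to_pulse_us(angle):
    """Map a servo angle (0-180 deg) to a pulse width in microseconds."""
    angle = max(0, min(180, angle))    # clamp out-of-range tracking commands
    return MIN_PULSE_US + (MAX_PULSE_US - MIN_PULSE_US) * angle // 180

def pulse_to_duty(pulse_us):
    """Convert a pulse width to the duty-cycle fraction a PWM peripheral expects."""
    return pulse_us / PERIOD_US
```

With the mapping in hand, "servo control" reduces to reprogramming the PWM peripheral's duty cycle each time the tracking target moves — which is exactly the part Servo.h normally hides.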
Accomplishments that we're proud of
We built true edge AI. This isn't a laptop running inference and calling it "edge" — the model runs directly on the Qualcomm processor, and the device works completely offline. Unplug the Ethernet, turn off the WiFi, and it still protects.

We designed for privacy from day one. Zero video uploads, zero cloud dependencies. The architecture makes it impossible for video to leave the device, not just a policy that says we won't.

We actually used both processors on the UNO Q for their intended purposes. A lot of projects just run Arduino sketches on the MCU and ignore the MPU, or run everything on Linux and ignore the MCU. We coordinated both: ML inference on the brain, real-time control on the reflexes.

The physical button matters more than it might seem. If our software crashes, if the network goes down, if anything goes wrong, grandma can still press a button and make the buzzer stop. Safety shouldn't depend on WiFi.

And our model hit roughly 93% accuracy, which we're genuinely proud of given the time constraints and the messiness of real-world fall data.
What we learned
- Development — Working with the UNO Q taught us how to coordinate between a Linux application processor and a real-time microcontroller. The RouterBridge abstraction is powerful but has its own learning curve.
- Edge ML Deployment — Training a model is one thing; deploying it on constrained hardware is another. Edge Impulse's optimization pipeline was crucial for getting acceptable performance on the QRB2210.
- Privacy-First Architecture — Designing for privacy from the start is much easier than adding it later. Every architectural decision we made considered: "what data leaves the device?"
- Hardware-Software Co-Design — The physical button, buzzer, and LED matrix aren't afterthoughts — they're core safety features. Software fails; hardware provides a fallback.
What's next for UnderWatch
UnderWatch 2? We want to improve the tracking system by extracting actual bounding box coordinates from the model, rather than estimating position from classification confidence. This would let the servo tracking follow the person more precisely. Adding two-way audio would let family members talk to the elder during Stage 2, potentially calming them down and preventing unnecessary emergency calls. Longer term, we're interested in analyzing movement patterns over time to detect gradual mobility decline, catching problems before they become falls. And we plan to open-source everything so other families can build their own privacy-respecting monitoring systems without paying subscription fees.
Built With
- arduino-uno-q
- computer-vision
- edge-impulse
- machine-learning
- ntfy
- python
- qualcomm
- routerbridge
- tensorflow-lite
- zephyr


