Inspiration

The inspiration arose from the people and family around us, specifically someone very close. I (Atharv S) personally got the idea because I have a family member who tends to forget the most basic things of everyday life, sometimes forgetting people themselves and not remembering shared moments. After a while of forgetting, they no longer feel the same emotion toward a person because they can no longer recall what connects them. This is how dementia grows and grows. Sadly, it is not a curable disease, but we can help delay it by strengthening our neurons for as long as we can, and research data supports this.

What it does

Remembral is a real-time AI companion for dementia patients that lives invisibly in their everyday life. It recognizes faces as they approach, listens to conversations as they happen, and quietly surfaces the right context at the right moment: who this person is, how you know them, what you last talked about, delivered through your phone or glasses before the moment becomes uncomfortable.
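The recognition flow above can be sketched as a simple embedding match. This is a minimal illustration, not our production code: it assumes face embeddings have already been extracted by the on-device model, and the names (`PersonContext`, `match_face`) and the 0.8 similarity threshold are illustrative choices of our own.

```python
import math
from dataclasses import dataclass, field

@dataclass
class PersonContext:
    name: str
    relation: str
    last_topic: str
    embedding: list = field(default_factory=list)  # face embedding vector

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def match_face(embedding, known_people, threshold=0.8):
    """Return the stored context for the closest known face, or None."""
    best, best_score = None, threshold
    for person in known_people:
        score = cosine_similarity(embedding, person.embedding)
        if score >= best_score:
            best, best_score = person, score
    return best

# Example: two enrolled people, one incoming face.
known = [
    PersonContext("Maya", "granddaughter", "her school play", [0.9, 0.1, 0.2]),
    PersonContext("Raj", "neighbor", "the garden", [0.1, 0.8, 0.5]),
]
seen = match_face([0.88, 0.12, 0.21], known)
if seen:
    print(f"{seen.name} ({seen.relation}): last talked about {seen.last_topic}")
```

When a match clears the threshold, the app surfaces the stored card; when nothing clears it, the app stays silent rather than guess.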

At larger scale, with more data, Remembral uses Google Cloud Vertex AI to build a Cognitive Continuity Model that continuously learns each patient's unique speech and cognition patterns, gently surfacing meaningful weekly insights when subtle cognitive changes emerge.

It builds a living map of your world: your people, your places, your stories, growing richer every single day. Everything is stored in a personal encrypted cloud that only you can access. No company. No algorithm. Nobody else. For the patient: dignity, confidence, connection fully intact. For the family: peace of mind that the person they love is never lost. For the world: the first tool that doesn't just manage dementia; it fights back against what the disease takes most. Your story. Always within reach.

How we built it

We built the app in Android Studio using Kotlin, Java, and Python, with ML Kit for facial recognition, and used Vertex AI from Google Cloud for our larger-scale implementation ideas.

Challenges we ran into

The first challenge was that, given the shortage of time and resources, we did not have access to Meta AR glasses for the AR demo. So we simulated the experience in an Android app with essentially the same interface, showing how it would look and work against the main storage. The second challenge was tuning the facial recognition to require a specific amount of clarity when capturing face data; with our limited data we implemented it for two people, for whom it works flawlessly. The third challenge was the time constraint: our idea is heavily ambitious in real-world scope, so we could not complete everything within 24 hours, but we did implement at least one piece of everything we wanted to show for the start of the initiative.
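The clarity gate from the second challenge can be approximated with a classic sharpness score. This is a simplified stand-in, not our actual pipeline: it uses the variance of a 4-neighbour Laplacian over a grayscale frame (blurry frames score low), and the function names and the threshold of 100 are illustrative assumptions.

```python
def laplacian_variance(gray):
    """Sharpness score: variance of a 4-neighbour Laplacian over a 2D
    grayscale image (list of rows of 0-255 ints). Blurry frames score low."""
    h, w = len(gray), len(gray[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (gray[y - 1][x] + gray[y + 1][x]
                   + gray[y][x - 1] + gray[y][x + 1]
                   - 4 * gray[y][x])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

def sharp_enough(gray, threshold=100.0):
    """Accept an enrollment frame only if it is sharp enough to embed."""
    return laplacian_variance(gray) >= threshold

# A crisp checkerboard scores very high; a flat gray frame scores zero.
checker = [[255 if (x + y) % 2 else 0 for x in range(8)] for y in range(8)]
flat = [[128] * 8 for _ in range(8)]
print(sharp_enough(checker), sharp_enough(flat))
```

Rejecting blurry enrollment frames up front is what kept the two-person recognition reliable despite the small dataset.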

Accomplishments that we're proud of

The accomplishment we're proudest of is seeing our project come to life, and being able to show how it works and the path for future updates and features that take it to the next level. We're proud that our project serves a real initiative that hasn't been solved yet: we want the user to feel as "normal" as possible, not cut off or awkward, and able to remember moments from their life.

What we learned

We learned a lot more about Android app development as we worked through the project: which libraries exist and how they work, how debugging can be made easier, how everything fits together, and what happens when the app crashes and why.

What's next for Remembral

The immediate next step is getting ARCADIA onto Meta AR glasses so the patient never has to depend on someone else for everything they do; the intelligence just lives quietly in their peripheral vision. From there we build the Cognitive Continuity Model: 30 days of baseline data, a personal LSTM model per patient, and longitudinal cognitive drift detection that catches decline weeks before any doctor's appointment would. Then comes Remembral Shield, fully implemented with real bank integration and live scam detection. The infrastructure for all of it is already designed; we just need the time to build it properly. 55 million people need this today. 139 million will need it by 2050. We're just getting started.

Built With

Android Studio, Kotlin, Java, Python, ML Kit, Google Cloud Vertex AI
