DebriefIQ is a tool that supports clinical simulation-based education by automatically creating clear, helpful summaries of debrief sessions. In clinical simulation, debriefs play a vital role in helping learners reflect and consolidate their knowledge. But key insights can sometimes be missed, influenced by bias, or hard for facilitators to review objectively.
DebriefIQ uses AI to listen to, transcribe, and analyse debrief conversations. It turns these discussions into useful learning resources and structured reports. From a single audio recording, DebriefIQ produces two tailored outputs: one for learners, highlighting key teaching points, and another for facilitators, offering feedback and areas for improvement.
- Automated Audio Transcription: Ingests audio files (.mp3, .wav) and generates a clean, readable text transcript using speech-to-text models.
- AI-Powered Summarisation: Employs Large Language Models (LLMs) to analyse the transcript and synthesise key information against a predefined scenario template.
- Dual-Audience Output: Generates two tailored summaries from a single debriefing session:
- Learner Summary: A concise document highlighting key clinical takeaways, human factors discussed, and crucial learning points.
- Facilitator Report: A structured feedback report based on the OSAD (Objective Structured Assessment of Debriefing) framework to aid facilitation development.
- Template-Driven Analysis: Uses customisable templates for specific clinical scenarios to ensure focus on the most relevant learning objectives.
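To illustrate the template-driven idea, here is a minimal sketch (not the actual DebriefIQ implementation) of how a scenario template could be represented and checked against a transcript. The `sepsis_template` objectives and keywords are hypothetical examples:

```python
def coverage_against_template(transcript: str, template: dict) -> dict:
    """For each learning objective in the template, report whether any of
    its keywords appear in the transcript (case-insensitive)."""
    text = transcript.lower()
    return {
        objective: any(keyword.lower() in text for keyword in keywords)
        for objective, keywords in template.items()
    }

# Hypothetical scenario template: objective -> keywords to listen for
sepsis_template = {
    "Early recognition": ["sepsis six", "lactate", "red flags"],
    "Escalation": ["senior review", "SBAR"],
    "Human factors": ["closed-loop communication", "role allocation"],
}
```

In practice the template would guide the LLM prompt rather than a literal keyword match, but the same structure (objectives mapped to the themes that matter for the scenario) keeps the analysis focused on the intended learning objectives.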
The project follows a simple pipeline to process debriefing sessions:
- Audio Input: A raw audio file of a debriefing session is provided to the system.
- Transcription: The audio is passed to a speech recognition engine for accurate transcription.
- Analysis & Summarisation: The raw text transcript is fed to a Large Language Model. Using a structured prompt and the relevant clinical scenario template, the AI identifies and extracts key themes.
- Structured Output Generation: The model then formats the analysed information into two distinct files:
- The Learner Summary.
- The Facilitator OSAD Report.
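The steps above can be sketched as a small pipeline function. This is an illustrative outline, not the project's actual code: the `transcribe` and `summarise` callables stand in for whatever speech-to-text engine and LLM the project wires in, and the prompt wording is a hypothetical placeholder:

```python
from typing import Callable

def run_pipeline(
    audio_path: str,
    transcribe: Callable[[str], str],       # audio file path -> transcript (e.g. a speech-to-text wrapper)
    summarise: Callable[[str, str], str],   # (transcript, prompt) -> generated text (e.g. an LLM call)
    scenario_template: str,
) -> dict:
    """Produce the two tailored outputs from one debrief recording."""
    # 1-2. Audio input and transcription
    transcript = transcribe(audio_path)

    # 3. Analysis & summarisation, guided by the scenario template
    learner_prompt = (
        "Summarise the key clinical takeaways, human factors and learning points "
        f"from this debrief, guided by this scenario template:\n{scenario_template}"
    )
    facilitator_prompt = (
        "Assess the facilitation in this debrief against the OSAD framework, "
        "reporting strengths and areas for improvement."
    )

    # 4. Structured output generation: two distinct documents
    return {
        "learner_summary": summarise(transcript, learner_prompt),
        "facilitator_report": summarise(transcript, facilitator_prompt),
    }
```

Separating the transcription and summarisation steps behind plain callables keeps the pipeline testable and lets either engine be swapped without touching the orchestration logic.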
- Backend:
- Audio Processing:
- Transcription:
- AI & NLP:
Contributions are what make the open-source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.
Please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement".
Distributed under the MIT License. See LICENSE.txt for more information.