Health Informatics Knowledge Management Conference’s Post
Poster 8: Development and evaluation of a machine learning algorithm for predicting pressure injury risk during hospitalisation. Nanthakumahrie Gunasegaran, Maybelle Auw, AxoMem Singapore, Sean Whiteley, Fazila Aloweni (Poster) #HIKM2025
-
Measuring Machine Intelligence Using Turing Test 2.0: Mappouras’s General Intelligence Threshold offers a test question – can the system generate insights that were not directly programmed into it? https://lnkd.in/eqMN4hjs
-
Latest news from our partners at Ruhr University Bochum: a new study using the #ExaDG solver (part of #dealiiX) shows just how much geometry impacts blood-flow simulations in the aorta. By simulating 30,000 patient-specific geometries, the team created an open-source #dataset that paves the way for data-driven approaches in #ComputationalMedicine — from machine learning to model order reduction. Authors: Domagoj Bošnjak, Gian Marco Melito, Richard Schussnig, Katrin Ellermann, and Thomas-Peter Fries. This work highlights the power of #HPC in personalised #healthcare, one of the core missions of the dealii-X project. Full Study 👉 https://lnkd.in/dfDeGfQk
-
Excited to share that our #NeurIPS2025 submission was REJECTED after scores of 5, 4, 4, 4! Training deep learning models with PDE constraints often relies on automatic differentiation, but for high-order derivatives this approach can be slow, memory-heavy, and unstable under noise. We propose Mollifier Layers – a simple, plug-and-play module that replaces autodiff with convolutional operations using smooth test functions. Key benefits:
1. Up to 10× faster training
2. Dramatically lower memory usage
3. More accurate, noise-robust parameter recovery
We validated this across 1st-, 2nd-, and 4th-order PDEs and demonstrated a real-world application through super-resolution chromatin imaging. Mollifier Layers have the potential to make PhiML more scalable, efficient, and practical for real-world science – from materials to biology. Check out the preprint here: https://lnkd.in/ec9p3z7C
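The core trick behind such mollifier-style layers can be sketched in a few lines of NumPy (a toy illustration under assumed details, not the authors' code): instead of differentiating noisy data, convolve it with an analytically differentiated smooth test function, so the derivatives land on the mollifier rather than on the noise.

```python
import numpy as np

# Toy sketch: estimate a second derivative from noisy samples by convolving
# with the exact second derivative of a Gaussian mollifier phi.
# Integration by parts:  (u * phi'')(t) = d^2/dt^2 (u * phi)(t) ≈ u''(t).

rng = np.random.default_rng(0)
dx = 0.01
x = np.arange(0.0, 2 * np.pi, dx)
u = np.sin(x) + 0.05 * rng.standard_normal(x.size)   # noisy samples of sin

s = 0.2                                   # mollifier width (bias/variance knob)
k = np.arange(-4 * s, 4 * s + dx, dx)     # compact support, ±4 sigma
phi = np.exp(-k**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))
phi2 = (k**2 / s**4 - 1.0 / s**2) * phi   # analytic second derivative of phi

d2u = np.convolve(u, phi2, mode="same") * dx   # ≈ u''(x), with no autodiff

interior = slice(100, -100)               # discard convolution boundary
err = np.max(np.abs(d2u[interior] + np.sin(x)[interior]))  # true u'' = -sin
```

Shrinking `s` reduces the smoothing bias but admits more noise; a naive second-order finite difference on the same noisy data amplifies the noise by 1/dx² and is orders of magnitude worse.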
-
Excited to announce that our paper "Regret Lower Bounds for Decentralized Multi-Agent Stochastic Shortest Path Problems" has been accepted at NeurIPS 2025! This work explores the fundamental challenges of learning in decentralized multi-agent systems, a key ingredient in applications like traffic routing and distributed decision-making. 🧩 What’s the paper about? Decentralized Multi-Agent Stochastic Shortest Path (Dec-MASSP) problems capture scenarios where multiple agents, without centralized coordination, must learn to reach goals efficiently under uncertainty. We focus on settings with linear function approximation for modelling transition dynamics and costs. Most importantly, we establish the first regret lower bound of Ω(√K) over K episodes, revealing the inherent difficulty of learning in decentralized multi-agent control. 💡 Why does it matter? Understanding these lower bounds provides a theoretical foundation for designing efficient algorithms and clarifies the limits of learning in decentralized environments, a crucial step toward scalable multi-agent intelligence. A huge thanks to my co-authors Utkarsh Chavan and NANDYALA Hemachandra for their motivation, insights, and tireless collaboration. 🙏 Looking forward to engaging discussions at NeurIPS 2025! 🚀 #NeurIPS2025 #MultiAgentSystems #ReinforcementLearning #DecentralizedControl #MachineLearning #Research
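For context, the Ω(√K) statement concerns cumulative regret over K episodes. In standard stochastic-shortest-path notation (symbols assumed here, not taken from the paper), a regret lower bound says that for every learning algorithm there exists a problem instance on which

```latex
\mathrm{Regret}(K) \;=\; \sum_{k=1}^{K} \bigl( C^{\pi_k} - C^{\pi^\star} \bigr) \;\ge\; c\,\sqrt{K},
```

where \(C^{\pi_k}\) is the expected cost of the policy played in episode \(k\), \(C^{\pi^\star}\) the optimal expected cost, and \(c > 0\) an instance-dependent constant. No algorithm can beat this rate uniformly.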
-
I’m happy to share that our paper "From Counterfactuals to Trees: Competitive Analysis of Model Extraction Attacks" was accepted as a Spotlight at #NeurIPS2025! What we do: We show how counterfactual explanations can be leveraged to extract tree-based models (random forests, gradient boosting). We introduce a competitive-analysis framework for model extraction and design reconstruction algorithms with provable guarantees, including perfect-fidelity recovery, along with theoretical bounds on query complexity and strong anytime performance in practice. This helps quantify the risk of exposing counterfactuals through ML APIs and clarifies when (and how) such features can accelerate extraction. Why it matters: We highlight a key trade-off between explainability and intellectual property. Counterfactual explanations improve transparency and user recourse, but they can also make proprietary models easier to reconstruct. Huge thanks to my amazing co-authors Julien Ferry and Thibaut Vidal for a fantastic collaboration. Looking forward to discussing this at #NeurIPS2025! #TrustworthyAI #CounterfactualExplanations #CombinatorialOptimization #MachineLearning #Research
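The leakage mechanism is easy to see in a one-dimensional toy model (an assumed setup for intuition only – the paper's algorithms reconstruct full tree ensembles with provable guarantees): a counterfactual is the closest input with the opposite prediction, so it necessarily sits at the decision boundary and reveals it.

```python
# Toy sketch: a proprietary one-split "tree" and a recourse-style
# counterfactual API. One query suffices to read off the split threshold.

THETA = 0.37              # the model owner's secret split threshold
EPS = 1e-9                # the oracle steps just across the boundary

def predict(x: float) -> int:
    """Proprietary one-split model: class 1 iff x >= THETA."""
    return int(x >= THETA)

def counterfactual(x: float) -> float:
    """Closest x' with predict(x') != predict(x)."""
    return THETA if predict(x) == 0 else THETA - EPS

# The attacker issues one query and recovers the boundary from the answer.
x_query = 0.9
cf = counterfactual(x_query)       # lands just below the boundary
theta_hat = cf + EPS               # recovered threshold estimate
```

For real tree ensembles the attacker must combine many such boundary revelations across features and trees, which is where the paper's competitive analysis and query-complexity bounds come in.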
-
✨ Thrilled to share that our work RHYTHM: Reasoning with Hierarchical Temporal Tokenization for Human Mobility has been accepted to NeurIPS 2025! 🎉 Understanding and forecasting human mobility is hard — trajectories have long-range dependencies, multi-scale periodic patterns, and enormous sequence lengths. We introduce RHYTHM, a lightweight yet powerful framework that addresses these challenges by combining structured temporal representation with efficient reasoning:
🌀 Tokenizing trajectories into daily & weekly units, compressing complexity while preserving cyclical structure
🧩 Prompt-guided embeddings with a frozen LLM, enabling reasoning across time with efficiency
⚡ Performance gains: +2.4% overall accuracy, +5.0% on weekends (where routines are irregular), and ~25% faster training
Our model’s name, RHYTHM, not only serves as an acronym but also reflects the underlying rhythms of personal mobility that the model is designed to capture naturally. 👉 Check out our paper here: https://lnkd.in/eTNQAmsj A huge thanks to my coauthors Haoyu He, Yan Chen, and Ryan Qi Wang #NeurIPS2025 #MachineLearning #LLMs #SpatioTemporal #Mobility #HumanMobility #TrajectoryPrediction
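The daily-tokenization idea can be sketched as follows (a hypothetical simplification – the names and the per-day summary rule are illustrative, not the paper's tokenizer): collapse a long timestamped trajectory into one token per day, shrinking the sequence the model must reason over while keeping the daily cycle intact.

```python
from collections import Counter
from datetime import datetime

def daily_tokens(trajectory):
    """Compress (iso_timestamp, location_id) pairs into one token per day.

    Here the token is simply the day's most frequent location; real
    tokenizers would carry a richer per-day representation.
    """
    by_day = {}
    for ts, loc in trajectory:
        by_day.setdefault(datetime.fromisoformat(ts).date(), []).append(loc)
    return [Counter(locs).most_common(1)[0][0]
            for _, locs in sorted(by_day.items())]

traj = [
    ("2024-01-01T08:00", "home"), ("2024-01-01T09:30", "office"),
    ("2024-01-01T12:00", "office"), ("2024-01-02T08:15", "home"),
    ("2024-01-02T13:00", "gym"), ("2024-01-02T20:00", "home"),
]
tokens = daily_tokens(traj)   # one token per calendar day
```

Weekly tokens follow the same pattern using ISO calendar weeks; the paper then pairs such compressed tokens with prompt-guided embeddings from a frozen LLM.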
-
Orthogonal polynomials are a robust tool for analyzing market data, thanks to their unique mathematical properties. In trading, they can efficiently model time series by filtering out noise and highlighting trends. Families such as the Legendre and Chebyshev polynomials can be applied in technical analysis for smoothing data, detecting trends, and constructing composite trading indicators. Their orthogonality and adaptability enhance model stability and interpretability. Strategies built on orthogonal polynomials, such as polynomial regression and adjusted technical indicators, can significantly improve prediction accuracy and adaptability to market conditions. Moreover, they integrate well with machine learning, capturing complex data relationships while reducing overfitting. #MQL5 #MT5 #Indicator #AlgoTrading https://lnkd.in/dQkeAhc7
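The smoothing-and-trend-detection use case can be sketched with NumPy's Chebyshev module (synthetic data and an illustrative degree choice, not a trading recommendation): fit a low-degree Chebyshev polynomial to a noisy series, then split it into a trend component and a residual.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Hedged sketch: de-noise a price-like series with a low-degree
# Chebyshev least-squares fit over the natural domain [-1, 1].

rng = np.random.default_rng(1)
t = np.linspace(-1.0, 1.0, 500)               # rescaled time index
trend = 100.0 + 10.0 * t + 5.0 * t**2         # hidden smooth trend
price = trend + rng.normal(0.0, 2.0, t.size)  # noisy observations

coefs = C.chebfit(t, price, deg=4)            # least-squares Chebyshev fit
smooth = C.chebval(t, coefs)                  # trend estimate
resid = price - smooth                        # residual "noise" component
```

In practice the time index is rescaled to [-1, 1] before fitting; `smooth` can drive trend-following rules while `resid` feeds oscillator-style indicators.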
-
🦔 Spikee in the spotlight at SecureAI! In Stockholm, Donato Capitella showed how our prompt injection kit helps measure the resilience of LLM applications. Spikee is designed for practical use and includes features like:
✔ Attack scripts you can modify or add to
✔ Dataset generation for systematic guardrail evaluation
✔ Support for local inference and API-based targets
📎 Check it out at https://lnkd.in/gMjkGu_G
-
🧙 Heard of Magika? It is an AI-powered file-type detection utility – learn how you can leverage its capabilities and what makes it different from the classic file utility in this short 👇 https://lnkd.in/g6Yi3ZAE
🧙‍♂️ Magika Unveiled: AI-Powered File Type Detection in Action!
-
Generative Modeling: What's After Diffusion? 🔬🧬 Diffusion and Flow Matching methods efficiently map source to target distributions but lack explicit modeling of scores on the data manifold. This limits controlled generation and effective use as priors in inverse problems. We're excited to introduce Energy Matching (just accepted at NeurIPS 2025), a unifying framework that combines the advantages of Flow Matching and Energy-Based Models, parameterized by a time-independent scalar potential field. It explicitly encodes data likelihood information for controllable generation while keeping curvature low off-data for curl-free, efficient sampling. Energy Matching achieves SOTA performance among likelihood-based methods, on par with Diffusion and Flow Matching, unlocking exciting new inference-time capabilities! Special thanks to my amazing co-authors: Tamaz Amiranashvili, Antonio Terpin, Suprosanna Shit, Lea Bogensperger, Sebastian Kaltenbach, Petros Koumoutsakos, Bjoern Menze. 📄 Paper: https://lnkd.in/esizV7pW 💻 Code and tutorial: https://lnkd.in/gRXqZ73i (join the growing community!) Excited to discuss this in person - see you at NeurIPS in San Diego! #NeurIPS2025 #GenerativeAI #EnergyMatching #MachineLearning #EnergyBasedModels #DiffusionModels #FlowMatching #InverseProblems
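The "time-independent scalar potential" and "curl-free sampling" can be made concrete with a standard sketch (notation assumed here, not taken from the paper): the generative velocity field is the gradient of a learned energy, and sampling can follow overdamped Langevin dynamics on that energy.

```latex
v_\theta(x) \;=\; -\nabla_x E_\theta(x), \qquad \nabla \times v_\theta \;=\; 0,
\qquad
x_{k+1} \;=\; x_k \;-\; \eta\,\nabla_x E_\theta(x_k) \;+\; \sqrt{2\eta}\,\xi_k,
\quad \xi_k \sim \mathcal{N}(0, I).
```

Because \(v_\theta\) is a gradient field it is automatically curl-free, and the scalar \(E_\theta\) doubles as an unnormalized log-density, which is what enables controllable generation and use as a prior in inverse problems.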