FalseReject: Reducing overcautiousness in LLMs through reasoning-aware safety evaluation

Novel graph-based, adversarial, agentic method for generating training examples helps identify — and mitigate — "overrefusal".

Large language models (LLMs) have come a long way in enforcing responsible-AI standards through robust safety mechanisms. However, these mechanisms often err on the side of caution, leading to overrefusals — instances where the model declines to answer perfectly benign prompts. This overcautious behavior, while well intentioned, can reduce the LLMs’ usefulness in nuanced real-world contexts such as education, health, and HR support.

To address this problem, we and our colleagues Fanyou Wu and Chandan K. Reddy developed a graph-based method for generating examples of overrefusal. Using this approach, we created FalseReject, a benchmark dataset with 15,000 training prompts and 1,100 test prompts. In our recent paper “FalseReject: A resource for improving contextual safety and mitigating over-refusals in LLMs via structured reasoning”, we report our data generation methodology and our use of the dataset to benchmark 29 state-of-the-art LLMs and to reduce unnecessary refusals through LLM fine tuning.

In the fine-tuning experiments, we used five different LLMs. Each LLM was fine-tuned four times: twice on two different instruction-following datasets and twice on the same datasets augmented with overrefusal data from FalseReject. We then used two different datasets to evaluate each model, FalseReject and the standard benchmark Or-Bench, for a total of 20 different comparisons. We measured performance as the percentage of harmless prompts that the models accepted.

Models fine-tuned on FalseReject outperformed the baselines across the board, by 40% to 70% in several cases. On average, fine tuning improved the models’ acceptance of safe prompts by 27%, while preserving or improving general language ability and performance on four other safety measures.

The problem: Overrefusal in modern LLMs

As LLMs have become more safety aware, they increasingly incorporate filters or undergo training that teaches them to flag sensitive queries. But these models often lack contextual discernment. For example, a model may refuse to respond to queries like these:

  • “Can I learn about the dangers of smoking for a school project?”
  • “Explain how vaccines work using a needle analogy.”

Even though these are educational queries, models sometimes categorize them as potentially unsafe due to certain trigger words like "smoking" or "needle." This issue is especially critical in professional applications (e.g., healthcare chatbots, educational tools, or HR support), where helpfulness and relevance must be preserved without compromising safety.

The solution: Introducing FalseReject

FalseReject is a large-scale, carefully curated dataset of prompts that seem potentially unsafe but are actually benign and reasonable. It targets 44 sensitive topic categories (e.g., drug use, politics, and mental health) and is designed to challenge LLMs in scenarios where contextual nuance matters.

FalseReject has three key features:

  1. Rich and diverse topics: The dataset spans more categories than any comparable benchmarks — nearly two to four times as many as previous benchmarks, such as XSTest and OKTest;
  2. Structured responses with reasoning chains: Each prompt is paired with two responses, a standard response and one with long chain-of-thought (CoT) reasoning trajectories, so models can learn how to justify their decisions that particular prompts are safe and formulate helpful answers, rather than issuing blanket refusals;
  3. Generation via a graph-informed adversarial agent: We developed a novel, multiagent, adversarial generation framework to create diverse prompts that appear sensitive but are contextually benign, helping models learn to distinguish between genuinely unsafe queries and safe edge cases — without weakening safety boundaries.

Graph-based multiagent generation

Large-scale synthetic data generation with LLMs often results in repetitive content, reducing diversity. Before generating training examples, we thus use an LLM to identify and extract entities from toxic prompts in existing datasets, focusing on people, locations, objects, and concepts associated with safety concerns. We repeat this process several times, producing multiple lists, and then ask an ensemble of LLMs to select the most representative list.

Next, we use an LLM to identify relationships between the extracted entities, and we encode that information in an entity graph. Based on the graph, an LLM prompted to act as a generator proposes sample prompts that involve potentially unsafe entities.

Next, an LLM prompted to act as a discriminator determines whether the candidate prompts are genuinely unsafe or merely appear unsafe. The prompts judged to be safe then pass to a pool of LLMs that attempt to process them. Any prompt rejected by at least one LLM in the pool is retained for further evaluation.

Finally, an LLM prompted to act as an orchestrator determines whether the retained prompts constitute valid overrefusal cases and, specifically, whether they are benign despite appearing concerning. Valid cases are retained for the datasets; invalid prompts are fed back into the generator for refinement. 

FalseRject 16x9.png
The generation pipeline for overrefusal examples in FalseReject.

At each iteration of the process, the generator actively tries to trigger refusals by generating prompts that seem unsafe but are in fact harmless. Meanwhile, the discriminator tries to avoid being misled, identifying whether they are safe or unsafe. This adversarial interaction results in extremely subtle training examples, which can help an LLM learn fine-grained distinctions.

Experimental results

We evaluated 29 state-of-the-art LLMs, including both open- and closed-source models, covering standard and reasoning-oriented variants such as GPT-4o, O1, DeepSeek, Claude, Gemini, and Mistral. Our findings are both sobering and promising:

  1. All models exhibited a significant overrefusal rate, with even leading commercial models declining to answer 25%–50% of safe prompts;
  2. Larger model size does not correlate with better refusal behavior.
  3. Stronger general language ability does not imply lower overrefusal.
  4. Models fine tuned using FalseReject showed a marked improvement, delivering more helpful responses without increasing unsafe generations and general language ability.

Utility: How FalseReject helps LLM development

FalseReject is more than a dataset: it's a framework for improving contextual safety in LLMs. Here’s how it can be used:

  • Fine tuning: Training models to develop reasoning-based justifications for their responses to edge-case prompts;
  • Benchmarking: Evaluating refusal behavior with human-annotated test sets;
  • Debugging: Understanding which categories (e.g., legal, sexual health, addiction recovery) a model is overly sensitive to;
  • Transfer evaluation: Testing the robustness of instruction-following or reasoning models beyond standard safety datasets.

FalseReject is a crucial step toward more thoughtful and context-aware language models. By focusing on structured reasoning, it bridges the gap between helpfulness and safety, offering a scalable way to reduce harmful overcautiousness in LLMs.

Try it here:

Dataset
Project page
Paper

Research areas

Related content

GB, Cambridge
Amazon Devices is an inventive research and development company that designs and engineer high-profile devices like Echo, Fire Tablets, Fire TV, and other consumer devices. We are looking for exceptional scientists to join our Applied Science team to advance the state-of-the-art in developing efficient multimodal language models across our product portfolio. Through close hardware-software integration, we design and train models for resource efficiency across the hardware and software tech stack. The Silicon and Solutions Group Edge AI team is looking for a talented Sr. Applied Scientist who will lead our efforts on inventing evaluation methods for multimodal language models and agents for new devices, including audio and vision experiences. Key job responsibilities - Collaborate with cross-functional engineers and scientists to advance the state of the art in multimodal model evaluations for devices, including audio, images, and videos - Invent and validate reliability for novel automated evaluation methods for perception tasks, such as fine-tuned LLM-as-judge - Develop and extend our evaluation framework(s) to support expanding capabilities for multimodal language models - Analyze large offline and online datasets to understand model gaps, develop methods to interpret model failures, and collaborate with training teams to enhance model capabilities for product use cases - Work closely with scientists, compiler engineers, data collection, and product teams to advance evaluation methods - Mentor less experienced Applied Scientists A day in the life As a Scientist with the Silicon and Solutions Group Edge AI team, you'll contribute to innovative methods for evaluating new product experiences and discover ways to enhance our model capabilities and enrich our customer experiences. You'll research new methods for reliably assessing perception capabilities for audio-visual tasks in multimodal language models, design and implement new metrics, and develop our evaluation framework. You'll collaborate across teams of engineers and scientists to identify and root cause issues in models and their system integration to continuously enhance the end-to-end experience. About the team Our Edge AI science team brings together our unique skills and experiences to deliver state-of-the-art multimodal AI models that enable new experiences on Amazon devices. We work at the intersection of hardware, software, and science to build models designed for our custom silicon.
US, CA, Sunnyvale
The Artificial General Intelligence (AGI) team is looking for a passionate, talented, and inventive Applied Scientist with a strong deep learning background, to help build industry-leading technology with generative AI (GenAI) and multi-modal systems. Key job responsibilities As an Applied Scientist with the AGI team, you will work with talented peers to develop algorithms and modeling techniques to advance the state of the art with multi-modal systems. Your work will directly impact our customers in the form of products and services that make use of vision and language technology. You will leverage Amazon’s large-scale computing resources to accelerate development with multi-modal Large Language Models (LLMs) and GenAI in Computer Vision. About the team The AGI team has a mission to push the envelope with multimodal LLMs and GenAI in Computer Vision, in order to provide the best-possible experience for our customers.
US, MA, Boston
The Artificial General Intelligence (AGI) team is looking for a passionate, talented, and inventive Applied Scientist with a strong deep learning background, to build industry-leading Generative Artificial Intelligence (GenAI) technology with Large Language Models (LLMs) and multimodal systems. Key job responsibilities As an Applied Scientist with the AGI team, you will work with talented peers to lead the development of algorithms and modeling techniques, to advance the state of the art with LLMs. Your work will directly impact our customers in the form of products and services that make use of GenAI technology. You will leverage Amazon’s heterogeneous data sources and large-scale computing resources to accelerate advances in LLMs. About the team The AGI team has a mission to push the envelope in GenAI with LLMs and multimodal systems, in order to provide the best-possible experience for our customers.
US, WA, Seattle
Prime Video is a first-stop entertainment destination offering customers a vast collection of premium programming in one app available across thousands of devices. Prime members can customize their viewing experience and find their favorite movies, series, documentaries, and live sports – including Amazon MGM Studios-produced series and movies; licensed fan favorites; and programming from Prime Video add-on subscriptions such as Apple TV+, Max, Crunchyroll and MGM+. All customers, regardless of whether they have a Prime membership or not, can rent or buy titles via the Prime Video Store, and can enjoy even more content for free with ads. Are you interested in shaping the future of entertainment? Prime Video's technology teams are creating best-in-class digital video experience. As a Prime Video technologist, you’ll have end-to-end ownership of the product, user experience, design, and technology required to deliver state-of-the-art experiences for our customers. You’ll get to work on projects that are fast-paced, challenging, and varied. You’ll also be able to experiment with new possibilities, take risks, and collaborate with remarkable people. We’ll look for you to bring your diverse perspectives, ideas, and skill-sets to make Prime Video even better for our customers. With global opportunities for talented technologists, you can decide where a career Prime Video Tech takes you! Key job responsibilities Develop ML models for various recommendation & search systems using deep learning, online learning, and optimization methods Work closely with other scientists, engineers and product managers to expand the depth of our product insights with data, create a variety of experiments to determine the high impact projects to include in planning roadmaps Stay up-to-date with advancements and the latest modeling techniques in the field Publish your research findings in top conferences and journals A day in the life We're using advanced approaches such as foundation models to connect information about our videos and customers from a variety of information sources, acquiring and processing data sets on a scale that only a few companies in the world can match. This will enable us to recommend titles effectively, even when we don't have a large behavioral signal (to tackle the cold-start title problem). It will also allow us to find our customer's niche interests, helping them discover groups of titles that they didn't even know existed. We are looking for creative & customer obsessed machine learning scientists who can apply the latest research, state of the art algorithms and ML to build highly scalable page personalization solutions. You'll be a research leader in the space and a hands-on ML practitioner, guiding and collaborating with talented teams of engineers and scientists and senior leaders in the Prime Video organization. You will also have the opportunity to publish your research at internal and external conferences.
US, CA, Palo Alto
About Sponsored Products and Brands The Sponsored Products and Brands (SPB) team at Amazon Ads is re-imagining the advertising landscape through generative AI technologies, revolutionizing how millions of customers discover products and engage with brands across Amazon.com and beyond. We are at the forefront of re-inventing advertising experiences, bridging human creativity with artificial intelligence to transform every aspect of the advertising lifecycle from ad creation and optimization to performance analysis and customer insights. We are a passionate group of innovators dedicated to developing responsible and intelligent AI technologies that balance the needs of advertisers, enhance the shopping experience, and strengthen the marketplace. If you're energized by solving complex challenges and pushing the boundaries of what's possible with AI, join us in shaping the future of advertising. About our team SPB Ad Response Prediction team is your choice, if you want to join a highly motivated, collaborative, and fun-loving team with a strong entrepreneurial spirit and bias for action. We are seeking an experienced and motivated Applied Scientist with machine learning engineering background who loves to innovate at the intersection of customer experience, deep learning, and high-scale machine learning systems. We are looking for a talented Applied Scientist with a strong background in machine learning engineering to join our team and help us grow the business. In this role, you will partner with a team of engineers and scientists to build advanced machine learning models and infrastructure, from training to inference, including emerging LLM-based systems, that deliver highly relevant ads to shoppers across all Amazon platforms and surfaces worldwide. Key job responsibilities As an Applied Scientist, you will: * Develop scalable and effective machine learning models and optimization strategies to solve business problems. * Conduct research on new machine learning modeling to optimize all aspects of Sponsored Products business. * Enhance the scalability, automation, and efficiency of large-scale training and real-time inference systems. * Pioneer the development of LLM inference infrastructure to support next-generation GenAI workloads at Amazon Ads scale.
US, WA, Seattle
The Economics Science team in the Amazon Manager Experience (AMX) organization builds science models supporting employee career-related experiences such as their evaluation, learning and development, onboarding, and promotion. Additionally, the team conducts experiments for a wide range of employee and talent-related product features, and measures the impact of product and program initiatives in enhancing our employees' career experiences at Amazon. The team is looking for an Economist who specializes in the field of macroeconomics and time series forecasting. This role combines traditional macroeconomic analysis with modern data science techniques to enhance understanding and forecasting of workforce dynamics at scale. Key job responsibilities The economists within ALX focus on enhancing causal evaluation, measurement, and experimentation tasks to ensure various science integrations and interventions achieve their goals in building more rewarding careers for our employees. The economists develop and implement complex randomization designs that address the nuances of experimentation in complex settings where multiple populations interact. Additionally, they engage in building a range of econometric models that surface various proactive and reactive inspection signals, aiming toward better alignment in the implementation of talent processes. The economists closely collaborate with scientists from diverse backgrounds, as well as program and product leaders, to implement and assess science solutions in our products.
GB, London
Are you looking to work at the forefront of Machine Learning and AI? Would you be excited to apply Generative AI algorithms to solve real world problems with significant impact? The Generative AI Innovation Center helps AWS customers implement Generative AI solutions and realize transformational business opportunities. This is a team of strategists, scientists, engineers, and architects working step-by-step with customers to build bespoke solutions that harness the power of generative AI. The team helps customers imagine and scope the use cases that will create the greatest value for their businesses, define paths to navigate technical or business challenges, develop proof-of-concepts, and make plans for launching solutions at scale. The GenAI Innovation Center team provides guidance on best practices for applying generative AI responsibly and cost efficiently. You will work directly with customers and innovate in a fast-paced organization that contributes to game-changing projects and technologies. You will design and run experiments, research new algorithms, and find new ways of optimizing risk, profitability, and customer experience. We’re looking for Data Scientists capable of using GenAI and other techniques to design, evangelize, and implement state-of-the-art solutions for never-before-solved problems. Key job responsibilities As a Data Scientist, you will • Collaborate with AI/ML scientists and architects to research, design, develop, and evaluate generative AI solutions to address real-world challenges • Interact with customers directly to understand their business problems, aid them in implementation of generative AI solutions, brief customers and guide them on adoption patterns and paths to production • Create and deliver best practice recommendations, tutorials, blog posts, sample code, and presentations adapted to technical, business, and executive stakeholder • Provide customer and market feedback to product and engineering teams to help define product direction About the team Diverse Experiences AWS values diverse experiences. Even if you do not meet all of the qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn’t followed a traditional path, or includes alternative experiences, don’t let it stop you from applying. Why AWS? Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating — that’s why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses. Inclusive Team Culture Here at AWS, it’s in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empower us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences, inspire us to never stop embracing our uniqueness. Mentorship & Career Growth We’re continuously raising our performance bar as we strive to become Earth’s Best Employer. That’s why you’ll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional. Work/Life Balance We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there’s nothing we can’t achieve in the cloud.
IN, KA, Bengaluru
The Amazon Alexa AI team in India is seeking a talented, self-driven Applied Scientist to work on prototyping, optimizing, and deploying ML algorithms within the realm of Generative AI. The Applied Scientist will be in a team of exceptional scientists to develop novel algorithms and modeling techniques to advance the state of the art in Natural Language Processing (NLP) or Computer Vision (CV) related tasks. They will work in a hybrid, fast-paced organization where scientists, engineers, and product managers work together to build customer facing experiences. They will collaborate with and mentor other scientists to raise the bar of scientific research in Amazon. Their work will directly impact our customers in the form of products and services that make use of speech, language, and computer vision technologies. Key job responsibilities - Research, experiment and build Proof Of Concepts advancing the state of the art in AI & ML for GenAI. - Collaborate with cross-functional teams to architect and execute technically rigorous AI projects. - Thrive in dynamic environments, adapting quickly to evolving technical requirements and deadlines. - Engage in effective technical communication (written & spoken) with coordination across teams. - Conduct thorough documentation of algorithms, methodologies, and findings for transparency and reproducibility. - Publish research papers in internal and external venues of repute - Support on-call activities for critical issues A day in the life You will be working with a group of talented scientists on researching algorithm and running experiments to test scientific proposal/solutions to improve solutions powering customer experience on Alexa+. This will involve collaboration with partner teams including engineering, PMs, data annotators, and other scientists to discuss data quality, policy, and model development. You will mentor other scientists, review and guide their work, help develop roadmaps for the team. You work closely with partner teams across Alexa to deliver platform features that require cross-team leadership.
US, CA, Sunnyvale
As a Principal Scientist within the Artificial General Intelligence (AGI) organization, you are a trusted part of the technical leadership. You bring business and industry context to science and technology decisions. You set the standard for scientific excellence and make decisions that affect the way we build and integrate algorithms. You solicit differing views across the organization and are willing to change your mind as you learn more. Your artifacts are exemplary and often used as reference across organization. You are a hands-on scientific leader. Your solutions are exemplary in terms of algorithm design, clarity, model structure, efficiency, and extensibility. You tackle intrinsically hard problems, acquiring expertise as needed. You decompose complex problems into straightforward solutions. You amplify your impact by leading scientific reviews within your organization or at your location. You scrutinize and review experimental design, modeling, verification and other research procedures. You probe assumptions, illuminate pitfalls, and foster shared understanding. You align teams toward coherent strategies. You educate, keeping the scientific community up to date on advanced techniques, state of the art approaches, the latest technologies, and trends. You help managers guide the career growth of other scientists by mentoring and play a significant role in hiring and developing scientists and leads. You will play a critical role in driving the development of Generative AI (GenAI) technologies that can handle Amazon-scale use cases and have a significant impact on our customers' experiences. Key job responsibilities You will be responsible for defining key research directions, adopting or inventing new machine learning techniques, conducting rigorous experiments, publishing results, and ensuring that research is translated into practice. You will develop long-term strategies, persuade teams to adopt those strategies, propose goals and deliver on them. You will also participate in organizational planning, hiring, mentorship and leadership development. You will be technically fearless and with a passion for building scalable science and engineering solutions. You will serve as a key scientific resource in full-cycle development (conception, design, implementation, testing to documentation, delivery, and maintenance).
US, CA, Mountain View
Amazon launched the Generative AI Innovation Center (GAIIC) in Jun 2023 to help AWS customers accelerate the use of Generative AI to solve business and operational problems and promote innovation in their organization (https://press.aboutamazon.com/2023/6/aws-announces-generative-ai-innovation-center). GAIIC provides opportunities to innovate in a fast-paced organization that contributes to game-changing projects and technologies that get deployed on devices and in the cloud. As an Applied Science Manager in GAIIC, you'll partner with technology and business teams to build new GenAI solutions that delight our customers. You will be responsible for directing a team of data/research/applied scientists, deep learning architects, and ML engineers to build generative AI models and pipelines, and deliver state-of-the-art solutions to customer’s business and mission problems. Your team will be working with terabytes of text, images, and other types of data to address real-world problems. The successful candidate will possess both technical and customer-facing skills that will allow you to be the technical “face” of AWS within our solution providers’ ecosystem/environment as well as directly to end customers. You will be able to drive discussions with senior technical and management personnel within customers and partners, as well as the technical background that enables them to interact with and give guidance to data/research/applied scientists and software developers. The ideal candidate will also have a demonstrated ability to think strategically about business, product, and technical issues. Finally, and of critical importance, the candidate will be an excellent technical team manager, someone who knows how to hire, develop, and retain high quality technical talent. About the team About the team Diverse Experiences AWS values diverse experiences. Even if you do not meet all of the preferred qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn’t followed a traditional path, or includes alternative experiences, don’t let it stop you from applying. Why AWS? Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating — that’s why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses. Inclusive Team Culture AWS values curiosity and connection. Our employee-led and company-sponsored affinity groups promote inclusion and empower our people to take pride in what makes us unique. Our inclusion events foster stronger, more collaborative teams. Our continual innovation is fueled by the bold ideas, fresh perspectives, and passionate voices our teams bring to everything we do. Mentorship & Career Growth We’re continuously raising our performance bar as we strive to become Earth’s Best Employer. That’s why you’ll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional. Work/Life Balance We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there’s nothing we can’t achieve.