How AI is Changing the Scientific Method

  • View profile for Andreas Sjostrom
    Andreas Sjostrom is an Influencer

    LinkedIn Top Voice | AI Agents | Robotics | Vice President at Capgemini's Applied Innovation Exchange | Author | Speaker | San Francisco | Palo Alto

    13,272 followers

    AI models are reasoning, creating, and evolving. The evidence is no longer theoretical; it's peer-reviewed, measurable, and, in some domains, superhuman. In the last 18 months, we've seen LLMs move far beyond next-token prediction. They're beginning to demonstrate real reasoning, hypothesis generation, long-horizon planning, and even scientific creativity. Here are six breakthroughs that redefine what these models can do:

    1. Superhuman Clinical Reasoning (Nature Medicine, 2025). In a rigorous test across 12 specialties, GPT-4 scored 89% on the NEJM Knowledge+ medical reasoning exam, outperforming the average physician score of 74%. This wasn't just Q&A; it involved multi-hop reasoning, risk evaluation, and treatment planning. That's structured decision-making in high-stakes domains.

    2. Creative Research Ideation (Zhou et al., 2024, arXiv:2412.10849). Across 10 fields from physics to economics, GPT-4 and Claude generated research questions rated more creative than human-generated ones in 53% of cases. This wasn't trivia; domain experts blindly compared ideas from AI and researchers, and in over half the cases the AI won.

    3. Falsifiable Hypotheses from Raw Data (Nemati et al., 2024). GPT-4o was fed raw experimental tables from biology and materials science and asked to propose novel hypotheses. 46% of them were judged publishable by experts, outperforming PhD students (29%) on the same task. That's not pattern matching; that's creative scientific reasoning from scratch.

    4. Self-Evolving Agents (2024). LLM agents that reflect, revise memory, and re-prompt themselves improved their performance on coding benchmarks from 21% to 34% in just four self-corrective cycles, without retraining. This is meta-cognition in action: learning from failure, iterating, and adapting over time.

    5. Long-Term Agent Memory (A-MEM, 2025). Agents equipped with dynamic long-term memory (inspired by Zettelkasten) achieved 2× higher success on complex web tasks, planning across multiple steps with context continuity.

    6. Emergent Social Reasoning (AgentSociety, 2025). In a simulation of 1,000 LLM-driven agents, researchers observed emergent social behaviors: rumor spreading, collaborative planning, and even economic trade. No hardcoding, just distributed reasoning, goal propagation, and learning-by-interaction.

    These findings span healthcare, science, software engineering, and multi-agent simulations. They reveal systems that generate, reason, and coordinate, not just predict. So when some argue that "AI is only simulating thought," we should ask: are the tests capturing how real reasoning happens? The Tower of Hanoi isn't where science, medicine, or innovation happens. The real test is:
    1. Can a model make a novel discovery?
    2. Can it self-correct across steps?
    3. Can it outperform domain experts in structured judgment?
    And increasingly, the answer is yes. Let's not confuse symbolic puzzles with intelligence. Reasoning is already here, and it's evolving.
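    The self-evolving-agent result above describes a reflect, revise, and re-prompt cycle. A minimal sketch of that pattern follows, assuming placeholder call_llm and run_tests functions; the prompt wording and four-cycle budget are illustrative, not taken from the cited benchmark.

    ```python
    # Minimal sketch of a self-corrective "reflect, revise, re-prompt" loop of the
    # kind described above. call_llm and run_tests are placeholder stubs, not any
    # paper's actual API.

    def call_llm(prompt: str) -> str:
        """Placeholder for a real LLM call (e.g., an API client)."""
        raise NotImplementedError

    def run_tests(code: str) -> tuple[bool, str]:
        """Placeholder: execute candidate code and return (passed, error_log)."""
        raise NotImplementedError

    def self_correct(task: str, max_cycles: int = 4) -> str | None:
        """Generate a solution, test it, and feed failures back as reflections."""
        memory: list[str] = []  # running log of lessons from failed attempts
        for cycle in range(max_cycles):
            prompt = task if not memory else (
                task + "\n\nLessons from previous attempts:\n" + "\n".join(memory)
            )
            candidate = call_llm(prompt)        # generate an attempt
            passed, log = run_tests(candidate)  # evaluate it
            if passed:
                return candidate                # success: stop iterating
            # Reflect on the failure, then re-prompt with the revised memory.
            reflection = call_llm(
                f"This attempt failed with:\n{log}\nAttempt:\n{candidate}\n"
                "Summarize the mistake in one sentence."
            )
            memory.append(f"Cycle {cycle + 1}: {reflection}")
        return None  # no passing solution within the cycle budget
    ```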

  • View profile for Pradeep Sanyal

    AI & Technology Leader | Experienced CIO & CTO | Enterprise AI, Cloud & Data Transformation | Advisor to CEOs and Board | Agentic AI Strategist

    17,574 followers

    For centuries, science has been theory first. Ask a question. Form a hypothesis. Test and explain. AI doesn't work that way. It starts with data. Finds patterns we didn't ask for. Produces results we can't always explain.

    We're not just speeding up science. We're changing what it means to know. In biology, chemistry, and materials, AI is outperforming human-led discovery. Not by helping scientists. By doing the science differently. This isn't a faster version of the old method. It's a new one:
    No hypothesis needed
    No guarantee of understanding
    No path back to first principles

    We're watching a shift from explanation to prediction. From human-led inquiry to model-driven output. There's upside: new insights, scale, speed. But also risk:
    False confidence in black-box outputs
    Deskilling of scientific reasoning
    Lack of human judgment in what questions matter

    In my advisory work, I've seen this play out in labs and boardrooms: high-performing models replacing domain expertise but leaving gaps in accountability, interpretation, and ethics. Leaders can't treat AI as a plug-in. It's not a faster assistant. It's a second way of knowing. Useful, but not interchangeable.

    The challenge isn't adoption. It's building the guardrails that science never needed before. Who decides what counts as valid? Who takes responsibility when models go wrong? Who ensures we're still asking the right questions? This isn't automation. It's epistemology. And leaders need to treat it that seriously.

  • View profile for Mark Minevich

    Top 100 AI | Global AI Leader | Strategist | Investor | Mayfield Venture Capital | ex-IBM ex-BCG | Board member | Best Selling Author | Forbes Time Fortune Fast Company Newsweek Observer Columnist | AI Startups | 🇺🇸

    42,920 followers

    The Rise of the AI Scientist

    Sam Altman recently predicted that within a year, AI will solve problems beyond human teams' reach, and we may see the first "AI Scientists" discovering new knowledge. That future is already here. FutureHouse just launched AI science agents that outperform human PhDs in research tasks:
    Crow - serves as a general research assistant
    Falcon - conducts lightning-fast literature reviews across full scientific papers
    Owl - identifies research gaps ripe for discovery
    Phoenix - designs chemistry and biology experiments

    These agents already surpass humans in precision, speed, and recall when analyzing scientific literature. Behind the scenes, more agents are training for hypothesis generation, protein engineering, and data analysis. We're not just getting AI help with science; AI is starting to do the science.

    The Human Question: What happens to the PhD when machines generate hypotheses? What does peer review look like when AI designs the experiments? Who gets credit for AI-driven discoveries?

    The answer isn't replacement, it's evolution. Scientists become orchestrators, creative directors managing AI research networks. PhD programs may shift from "years of manual research" to "mastering scientific AI workflows."

    The possibilities are staggering:
    - Speed: Breakthroughs in days, not years
    - Access: Democratized top-tier research capabilities
    - Ambition: Tackling previously impossible problems

    But critical questions remain: Can we trust AI findings? Who's accountable when AI fails? Will these tools serve everyone, or just tech giants?

    We're witnessing the biggest shift in knowledge creation since the scientific method itself. The next Nobel Prize might go to a team where AI did the heavy lifting. Small labs powered by agents might outperform entire university departments. This isn't the future of science. This is today. The question isn't whether AI will transform research; it's whether we'll guide that transformation thoughtfully.

  • View profile for John Bailey

    Strategic Advisor | Investor | Board Member

    15,692 followers

    2025 could be the year we transition from AI systems that answer questions to autonomous AI agents capable of performing complex, real-world tasks independently. Last week, I explored the groundbreaking work being done by Google's AI Co-Scientist and Stanford and Chan Zuckerberg BioHub's Virtual Lab, highlighting how autonomous AI agents are already transforming complex research processes. Now, two additional studies further showcase the capabilities of advanced AI systems working to accomplish tasks:

    Researchers from Harvard and MIT introduced TxAgent, an AI agent leveraging an extensive toolkit of 211 specialized tools. TxAgent analyzes drug interactions, contraindications, and patient-specific health data to suggest personalized medical treatments in real time. It evaluates medications at the molecular, pharmacokinetic, and clinical levels, factoring in individual patient risks such as comorbidities, existing medications, age, and genetic predispositions. By synthesizing vast biomedical evidence, TxAgent rapidly generates precise, tailored recommendations, optimizing healthcare delivery in ways that are particularly beneficial for resource-limited settings.

    Meanwhile, Sakana AI introduced "AI Scientist-v2," an autonomous AI researcher that generated the first fully AI-written scientific paper to pass peer review at an ICLR 2025 workshop. This milestone demonstrates AI's capability to independently execute the full scientific research cycle: systematically generate hypotheses, perform computational experiments using advanced machine learning models, rigorously analyze results, iteratively refine methodologies, and draft comprehensive manuscripts that meet the standards of peer review.

    LinkedIn: Why Your Next Coworker Might Be an AI Agent: https://lnkd.in/eAznknyh
    TxAgent: An AI agent for therapeutic reasoning across a universe of tools: https://lnkd.in/e7HW7j7t
    The AI Scientist Generates its First Peer-Reviewed Scientific Publication: https://lnkd.in/eYWmQs7m

    American Enterprise Institute | Sakana AI | Harvard Medical School | Massachusetts Institute of Technology | Harvard Data Science Initiative | Coalition for Health AI (CHAI)
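    The TxAgent description above is, at its core, a tool-calling agent: the model repeatedly picks a specialized tool, reads the result, and folds it back into its reasoning. Below is a minimal sketch of that loop, assuming a placeholder call_llm function and a toy two-entry tool registry; the tool names, prompt format, and stopping rule are illustrative assumptions, not TxAgent's actual interface.

    ```python
    # Generic tool-calling loop of the kind TxAgent is described as running.
    # call_llm and the two-entry tool registry are illustrative stubs, not
    # TxAgent's real interface or tool set.

    from typing import Callable

    def call_llm(prompt: str) -> str:
        """Placeholder for a real LLM call."""
        raise NotImplementedError

    # Hypothetical tool registry: tool name -> function taking a query string.
    TOOLS: dict[str, Callable[[str], str]] = {
        "drug_interactions": lambda q: "lookup result (stub)",
        "patient_record": lambda q: "lookup result (stub)",
    }

    def answer(question: str, max_steps: int = 5) -> str:
        """Iteratively pick a tool, run it, and fold the result back into context."""
        context = question
        for _ in range(max_steps):
            decision = call_llm(
                f"Context:\n{context}\n\nAvailable tools: {list(TOOLS)}.\n"
                "Reply 'TOOL: <name> | <query>' to use a tool, or 'FINAL: <answer>'."
            )
            if decision.startswith("FINAL:"):
                return decision.removeprefix("FINAL:").strip()
            name, _, query = decision.removeprefix("TOOL:").partition("|")
            result = TOOLS[name.strip()](query.strip())   # run the chosen tool
            context += f"\n[{name.strip()}] -> {result}"  # accumulate evidence
        return call_llm(f"Context:\n{context}\n\nGive your best final answer.")
    ```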

  • View profile for Greg Coquillo
    Greg Coquillo is an Influencer

    Product Leader @AWS | Startup Investor | 2X Linkedin Top Voice for AI, Data Science, Tech, and Innovation | Quantum Computing & Web 3.0 | I build software that scales AI/ML Network infrastructure

    212,584 followers

    Google DeepMind's AI Co-Scientist paper was just released, and you should check it out! It represents a paradigm shift in scientific discovery, leveraging a multi-agent system built on Gemini 2.0 to autonomously generate, refine, and validate new research hypotheses.

    🔹 How does it work? The system uses a generate, debate, and evolve framework, where distinct agents (Generation, Reflection, Ranking, Evolution, Proximity, and Meta-Review) collaborate in an iterative hypothesis refinement loop.

    🔹 Key innovations include an asynchronous task execution framework, which enables dynamic allocation of computational resources, and a tournament-based Elo ranking system that continuously optimizes hypothesis quality through simulated scientific debates.

    🔹 The agentic orchestration accelerates hypothesis validation for processes that can take humans decades in some instances. For example, empirical validation in biomedical applications, such as drug repurposing for acute myeloid leukemia (AML) and epigenetic target discovery for liver fibrosis, quickly helped researchers generate clinically relevant insights.

    What should we all get from this?

    🔸 Unlike traditional AI-assisted research tools, AI Co-Scientist doesn't just summarize existing knowledge; it proposes experimentally testable, original hypotheses, fundamentally reshaping the research paradigm by acting as an intelligent collaborator that augments human scientific inquiry.

    Do take some time this Sunday to read! #genai #technology #artificialintelligence
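    The tournament-based Elo ranking mentioned above is a concrete, well-known mechanism: hypotheses play pairwise "debates," and the winner's rating rises while the loser's falls. The sketch below uses the textbook Elo update with a random stub in place of the LLM debate judge; the K-factor and starting rating are conventional defaults, not values from the paper.

    ```python
    # Minimal Elo-style tournament over hypotheses, in the spirit of the ranking
    # loop described above. The debate judge is a random stub standing in for a
    # simulated scientific debate; K-factor and starting rating are standard
    # Elo defaults, not values from the paper.

    import itertools
    import random

    def debate(hyp_a: str, hyp_b: str) -> str:
        """Placeholder judge: in the real system an LLM debate picks the winner."""
        return random.choice([hyp_a, hyp_b])

    def expected(r_a: float, r_b: float) -> float:
        """Standard Elo expected score of A against B."""
        return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

    def tournament(hypotheses: list[str], rounds: int = 3, k: float = 32.0) -> dict[str, float]:
        """Run round-robin debates and update ratings after each pairing."""
        ratings = {h: 1000.0 for h in hypotheses}
        for _ in range(rounds):
            for a, b in itertools.combinations(hypotheses, 2):
                score_a = 1.0 if debate(a, b) == a else 0.0
                exp_a = expected(ratings[a], ratings[b])
                ratings[a] += k * (score_a - exp_a)                   # update A
                ratings[b] += k * ((1.0 - score_a) - (1.0 - exp_a))   # symmetric update for B
        return ratings  # highest-rated hypotheses move on to the evolution step
    ```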

  • View profile for Anurag(Anu) Karuparti

    Agentic AI Leader @Microsoft | Author - Generative AI for Cloud Solutions | LinkedIn Learning Instructor | Responsible AI Advisor | Ex-PwC, EY | Marathon Runner

    12,849 followers

    Here's a truly impactful AI multi-agent application that I'm excited to share! Imagine a world where the boundaries of scientific research are pushed beyond traditional limits, not just by human intelligence but with the help of AI agents. That's exactly what the Virtual Lab is doing!

    At the heart of this innovation lie large language models (LLMs) that are reshaping how we approach interdisciplinary science. These LLMs have recently shown an impressive ability to aid researchers across diverse domains by answering scientific questions.

    For many scientists, accessing a diverse team of experts can be challenging. With the Virtual Lab, a few Stanford researchers turned that dream into reality by creating an AI-human research collaboration.

    Here's how it works:
    → The Virtual Lab is led by an LLM principal investigator agent.
    → This agent guides a team of LLM agents, each with a distinct scientific expertise.
    → A human researcher provides high-level feedback to steer the project.
    → Team meetings are held by agents to discuss scientific agendas.
    → Individual agent meetings focus on specific tasks assigned to each agent.

    Why is this a game changer? The Stanford team applied the Virtual Lab to tackle the complex problem of designing nanobody binders for SARS-CoV-2 variants, which requires expertise spanning biology and computer science. The results? A novel computational design pipeline that churned out 92 new nanobodies. Among these, two exhibit improved binding to new variants while maintaining efficacy against the ancestral virus, making them promising candidates for future studies and treatments.

    This is not just a theoretical exercise. It's a real-world application that holds significant promise for scientific discovery and medical advancements. AI isn't just a tool anymore; it's becoming a partner in discovery. Isn't it time we embrace the future of collaborative research? What do you think about the potential of AI in revolutionizing science? Let's discuss!

    Read the full research here: https://lnkd.in/eBxUQ7Zy

    #aiagents #scientificrevolution #artificialintelligence
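    The workflow in the list above maps onto a fairly small orchestration loop: a principal-investigator agent sets an agenda, specialist agents respond in a team meeting, and a human supplies high-level feedback. Here is a minimal sketch of that pattern, assuming a placeholder call_llm function and illustrative specialist roles; it is not the Virtual Lab codebase itself.

    ```python
    # Sketch of the Virtual Lab pattern described above: an LLM "principal
    # investigator" sets the agenda, specialist agents respond in a team meeting,
    # and a human supplies high-level feedback. call_llm and the specialist roles
    # are placeholders, not the Virtual Lab implementation.

    def call_llm(system: str, prompt: str) -> str:
        """Placeholder for a real LLM call with a role-setting system message."""
        raise NotImplementedError

    SPECIALISTS = ["immunologist", "computational biologist", "ML engineer"]  # illustrative roles

    def team_meeting(project_goal: str, human_feedback: str, rounds: int = 2) -> str:
        """Run a PI-led discussion and return the PI's synthesized plan."""
        transcript = f"Goal: {project_goal}\nHuman feedback: {human_feedback}\n"
        for _ in range(rounds):
            agenda = call_llm("You are the principal investigator.",
                              f"{transcript}\nSet the next agenda item.")
            transcript += f"\nPI agenda: {agenda}"
            for role in SPECIALISTS:  # each specialist weighs in on the agenda item
                reply = call_llm(f"You are the team's {role}.",
                                 f"{transcript}\nGive your input on the agenda item.")
                transcript += f"\n{role}: {reply}"
        # The PI synthesizes the discussion into tasks for individual agent sessions.
        return call_llm("You are the principal investigator.",
                        f"{transcript}\nSummarize decisions and assign tasks.")
    ```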

  • View profile for Vik Pant, PhD

    Applied AI and Quantum Information @ PwC, Synthetic Intelligence Forum, University of Toronto

    12,087 followers

    Generative Multiagent Systems are accelerating scientific discovery by overcoming traditional research barriers and igniting a revolution in interdisciplinary innovation. 🤖

    In today's rapidly evolving research landscape, interdisciplinary collaboration is key to solving complex scientific challenges. 🔬 Yet many scientists lack ready access to experts across all relevant domains related to their scientific inquiries. 🔭 This is where Generative Multiagent Systems powered by large language models are poised to make a transformative impact. 🌟 Imagine a specialized team of computational experts composed and orchestrated by a research leader and guided by incisive human insight and prescience. 💡 This bold fusion of #GenerativeAI with #AgenticAI and human ingenuity is transforming research by turbocharging scientific discovery. 💎

    1️⃣ Imagine a research system where an ensemble of LLMs acts as a principal investigator that builds and manages a team of specialized research agents.
    2️⃣ Each AI agent brings domain-specific expertise to the table, engaging in both collective "team meetings" and focused individual sessions.
    3️⃣ During team meetings, AI agents deliberate on a scientific agenda, iterating hypotheses and aligning on research strategies.
    4️⃣ In individual sessions, each AI agent tackles targeted tasks, from experimental design to computational modeling and rigorous self-critique.
    5️⃣ Throughout this process, a human researcher provides overall direction and strategic oversight, ensuring that the system's outputs align with real-world scientific priorities.

    By harnessing the diverse perspectives of specialized agents under a unified, intelligent framework, Generative Multiagent Systems can rapidly generate novel insights and accelerate the discovery process. 💫 This human and #AI research collaboration not only enhances efficiency but also broadens the scope of scientific inquiry, opening pathways for breakthroughs in areas such as drug discovery and beyond. ✨

    I was delighted to welcome my dear friend and globally renowned thought leader, Professor James Zou, to the Synthetic Intelligence Forum for a discussion about Virtual Lab. ⚡ In this talk, Professor Zou describes Virtual Lab, a Generative Multiagent System for scientific research. 🖥️ As an Associate Professor of Biomedical Data Science at Stanford University, with courtesy appointments in the Computer Science and Electrical Engineering departments, Professor Zou is known for his high-impact research in computational biology, data science, machine learning, and public health. 📚 During our session, Professor Zou offered a roadmap for extending and expanding the coverage of Virtual Lab across multiple scientific disciplines. 🔦

    Special thanks to my distinguished partner in the Synthetic Intelligence Forum, Olga, for her esteemed collaboration in convening this thoughtful and thought-provoking discussion. 🚀

    Recording: https://lnkd.in/eEN6UPpP 🌐

  • View profile for Joris Poort

    CEO at Rescale

    17,040 followers

    🔬 Exciting Progress in AI for Science this week as Google Unveils AI Co-Scientist: A New Era of Accelerated Scientific Discovery! Key takeaways from this new paper published yesterday:

    🤖 Introduction of AI Co-Scientist: Google has developed an AI system named "AI Co-Scientist," built on Gemini 2.0, designed to function as a virtual collaborator for scientists. This system aims to assist in generating novel hypotheses and accelerating scientific and biomedical discoveries.

    👨👩👦👦 Multi-Agent Architecture: The AI Co-Scientist employs a multi-agent framework that mirrors the scientific method. It utilizes a "generate, debate, and evolve" approach, allowing for flexible scaling of computational resources and iterative improvement of hypothesis quality.

    🧬 Biomedical Applications: In its initial applications, the AI Co-Scientist has demonstrated potential in several areas:
    1. Drug Repurposing: Identified candidates for acute myeloid leukemia that exhibited tumor inhibition in vitro at clinically relevant concentrations.
    2. Novel Target Discovery: Proposed new epigenetic targets for liver fibrosis, validated by anti-fibrotic activity and liver cell regeneration in human hepatic organoids.
    3. Understanding Bacterial Evolution: Recapitulated unpublished experimental results by discovering a novel gene transfer mechanism in bacterial evolution through in silico methods.

    🤝 Collaborative Enhancement: The system is designed to augment, not replace, human researchers. By handling extensive literature synthesis and proposing innovative research directions, it allows scientists to focus more on experimental validation and creative problem-solving.

    💡 Implications for Future Research: The AI Co-Scientist represents a significant advancement in AI-assisted research, potentially accelerating the pace of scientific breakthroughs and fostering deeper interdisciplinary collaboration. This development underscores the transformative role AI can play in scientific inquiry, offering tools that enhance human ingenuity and expedite the journey from hypothesis to discovery.
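    The "generate, debate, and evolve" approach described above can be read as an evolutionary loop over hypotheses: generate a pool, rank it through simulated debate, refine the survivors, and repeat. The sketch below shows that loop with stub functions for generation, ranking, and refinement; it is a schematic reading of the paper's framing, not the Gemini 2.0-based implementation.

    ```python
    # Schematic "generate, debate, and evolve" loop over hypotheses, as described
    # above. generate, rank_by_debate, and refine are stand-in stubs, not the
    # Gemini 2.0-based agents in the paper.

    def generate(goal: str, n: int) -> list[str]:
        """Placeholder: produce n candidate hypotheses for the research goal."""
        raise NotImplementedError

    def rank_by_debate(hypotheses: list[str]) -> list[str]:
        """Placeholder: order hypotheses best-first via simulated scientific debate."""
        raise NotImplementedError

    def refine(hypothesis: str, goal: str) -> str:
        """Placeholder: evolve a surviving hypothesis (sharpen, combine, ground it)."""
        raise NotImplementedError

    def co_scientist(goal: str, generations: int = 3, pool: int = 8, keep: int = 3) -> list[str]:
        """Iteratively generate, rank, and refine hypotheses; return the top few."""
        candidates = generate(goal, pool)                          # generate
        for _ in range(generations):
            survivors = rank_by_debate(candidates)[:keep]          # debate / rank
            candidates = [refine(h, goal) for h in survivors]      # evolve
            candidates += generate(goal, pool - len(candidates))   # refill the pool
        return rank_by_debate(candidates)[:keep]                   # hand off for human review
    ```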
