The role of open source in shaping AI’s future takes center stage at AI Investment Summit 2025 @ UC Berkeley. Matt White joins the panel, "Designing AI-Native Products — Innovation and Data Feedback Loops." Alongside Richard Socher (You.com), Sewon Min (UC Berkeley Electrical Engineering & Computer Sciences (EECS)), Irving Hsu (Mayfield), and Ashish Agrawal (Eudia), Matt will discuss how open source and open standards are becoming key competitive advantages that drive interoperability, transparency, and long-term sustainability in AI. 📍 Sunday, November 2, UC Berkeley 🔗 Learn more: https://lnkd.in/gqkd7F9W #PyTorch #OpenSourceAI #AIInfrastructure
Matt White to discuss open source in AI at AI Investment Summit 2025
-
Convergence in tech is not just a buzzword anymore 🎯 Quantum computing, AI, and ML are merging forces to break the chains traditional computing has shackled us with. Quantum computing supercharges AI by solving problems that once seemed impossible—think hyper-boosted ML model training and complex optimization challenges. Quantum isn't just the turbo button for AI and ML; AI and ML enhance quantum systems too. By solving quantum error dilemmas and optimizing algorithms, they’re helping quantum computing slide into operational reality, not just theoretical paradise.
🟠 Quantum computing accelerates ML model training and enables AI architectures that traditional tech can't manage.
🟠 AI and ML improve error correction and optimize algorithms in quantum systems, making real-world applications of quantum tech a possibility.
🟠 Industries like software engineering, scientific discovery, and financial systems stand on the cusp of transformation.
Yes, the fusion of these powerhouses will revolutionize industries, but lurking behind this promise are challenges like maintaining a flexible, hybrid approach that integrates advancements while clearing complex technical hurdles. To leverage this trifecta, companies should step up their game plan: invest in cross-discipline research, adopt agile methods, and don’t shy away from the ethical and technical challenges that lie ahead.
Are you ready for the dawn of this convergence? How do you plan to ride this technological wave? 🚀
#QuantumComputing #ArtificialIntelligence #MachineLearning #SoftwareEngineering #InnovationRevolution 🙃
🔗 https://lnkd.in/dJ5wAqEJ
👉 Post of the day: https://lnkd.in/dACBEQnZ 👈
-
As I explore the evolving landscape of computation, this piece explains how quantum computing and AI intersect and why it matters for strategy, research, and engineering teams. I outline the current hardware landscape, practical use cases where quantum-inspired approaches can complement classical AI, and a cautious, iterative path toward adoption. The article emphasizes responsible innovation, essential skills to develop, and a roadmap for early adopters to pilot projects, measure impact, and scale thoughtfully. If your organization is preparing for a quantum-enabled future, this piece offers a framework to prioritize efforts and stay ahead of rapid advances. https://lnkd.in/eQ2KP2bQ
-
I thoroughly enjoyed the University of Waterloo Faculty of Engineering hosted event "Moving at the Speed of AI" at Microsoft in Toronto. Dean Mary Wells welcomed a panel of speakers including Joel Blit, Sirisha Rambhatla, and Sheldon Fernandez, moderated by Erin Chapple.
A few take-away thoughts from the evening:
- "AI is fundamentally a democratic technology" - J. Blit. This highlights that AI is available to everyone; we use it in plain English. However, we can't forget: "when you're not paying for the product, you are the product". I wonder how access to AI compute capacity affects this fundamental, including which specific AI agent receives the benefit of our interaction.
- On risk and governance: "Regulation can be beneficial. The regulation of electricity actually sped up adoption, internationally." - E. Chapple. This was a cool counter-intuitive point.
- "AI systems may not have our wisdom, they may not have our experience, but through sheer compute power, they can see in dimensions that we cannot." - S. Fernandez. An example was also provided that a key insight in Scott Aaronson's recent quantum computing work was produced by GPT-5.
- "Consider AI as probabilistic tools rather than deterministic; they are probably approximately correct." - S. Rambhatla. This is a concern for me because humans are famously bad at interpreting information presented as probabilities.
-
WOW 😳 this one has me truly excited. Google is claiming a major milestone: a quantum computer that has surpassed the ability of classical supercomputers for a particular task.
Here’s what’s happening: their system executed an algorithm that classical machines simply couldn’t keep up with, the first time a quantum machine has run a verifiable algorithm beyond the reach of classical machines. The task? Modelling the structure of a molecule, which hints at breakthroughs in materials science, medicine and computation. Importantly, yes, this is still early. Real-world quantum computers that can solve broad, meaningful problems are “years away,” as the article cautions.
Why this matters to me: as someone who’s deeply fascinated by where AI meets quantum, this milestone feels like a turning point. If quantum machines are now outperforming classical machines, even in a narrow task, then imagine what comes when we pair that with advanced AI systems. Models trained on quantum-derived data, new hybrid architectures, and computation at entirely new scales. That’s the future I want to live in.
This example isn’t just a tech headline: it’s proof that we’re entering a new era of computing where the rules are being rewritten. The way we process information, the way we model nature, the way we create tools, all of that is shifting under our feet.
Read about it: https://lnkd.in/gWzkHapQ
Watch it: https://lnkd.in/gAjtryuJ
What about the #futureofai are you excited about? #QuantumComputing #ArtificialIntelligence #QuantumTechnology #Supercomputing #GoogleAI #QuantumBreakthrough #StrAIghtPath
-
Another HPE-powered AI supercomputer is here! Universities are entering the AI era and playing a big role in research and development.
"The new TX-Generative AI Next (TX-GAIN) computing system at the Lincoln Laboratory Supercomputing Center (LLSC) is the most powerful AI supercomputer at any U.S. university. With its recent ranking from TOP500, which biannually publishes a list of the top supercomputers in various categories, TX-GAIN joins the ranks of other powerful systems at the LLSC, all supporting research and development at Lincoln Laboratory and across the MIT campus."
https://lnkd.in/gzBVvMJx
TOP500 ranking: https://lnkd.in/gQQ55DUM
-
📘 I recently found myself turning the pages of #AI_Value_Creators, and one point truly grabbed me: “From Generative Computing to a Generative Computer — What Does All of This Mean for Hardware?”
As I read, I realized this wasn’t just about #software innovation. It felt like the moment where the #physical fabric of #computing begins to shift beneath us. Imagine:
- We’ve grown accustomed to thinking of #generative_AI as a service, an application layer.
- But what if the next wave isn’t merely about #smarter models, but about smarter machines, hardware built from the ground up for generative intelligence?
- What if we move beyond “gen AI runs on #hardware” to “hardware becomes #generative”?
That’s the leap this reading provoked for me. In the book, the authors explore how generative computing is evolving into a new computing style, one that demands new architectures, new accelerators, and new ways of allocating compute that favour reasoning, creative output and flexibility over brute force.
For those of us in #AI engineering, data science, and hardware architecture, this is big. What it means in practice:
- Rethinking 🤔 our assumptions about compute: how we optimise for inference, for multi-modal generative tasks, for adaptiveness.
- Considering the full stack, from silicon to model to application, as co-evolving.
- And importantly: seeing value in being early in this shift, not just riding the current wave of generative AI but preparing for the next paradigm.
If you’ve been exploring generative AI, I encourage you to ask yourself these questions:
🔹 What hardware constraints are we overlooking now that will become bottlenecks tomorrow?
🔹 How might the next generation of “computers” look different when they are designed for generative workloads first, and traditional tasks second?
🔹 And how can we, as technologists, prepare our teams and organisations for that shift?
The future of compute isn’t just faster; it may be fundamentally different.
If you wish to read the book #AI_Value_Creators, please obtain it here: https://lnkd.in/eRc2ZGYF #gdgocucb #AI #GenerativeAI #Hardware #ComputingFuture #Innovation #DeepTech Quantum Computing GDG on Campus UCB GDG Kivu GDGs Afrique Francophone Generative Futures Generative AI IA Hardware
-
“Google's quantum computer solved a problem in minutes that would take the world's fastest supercomputer 10 septillion years.”
Landmarks like this—a calculation that's septillions of years faster than a supercomputer—aren't just scientific milestones; they're commercial inflection points. This moves the conversation from "if" to "when" for mainstream business.
But this incredible power, spotlighted by Google, raises the real question for every enterprise: how do you translate your most complex business problems (like risk modeling, logistics, or drug discovery) into a format this new class of machine can even understand? A Formula 1 engine is useless in a car with no steering wheel.
This is the challenge Quantum Star Systems was built to solve. While the hardware giants build the engines, we build the interface—the universal operating system and software tools that let your existing developers harness that power without needing a Ph.D. in quantum physics.
Our hybrid-first approach means you can start today. You can build and test quantum-ready algorithms on our simulators, gain immediate ROI on classical systems, and be ready to deploy on hardware breakthroughs like this from day one.
The hardware is arriving faster than anyone predicted. The time to get your software and strategy 'quantum-ready' is now.
https://lnkd.in/gJUh2Abf
#QuantumComputing #QuantumAdvantage #Innovation #Software #BusinessStrategy #DigitalTransformation #AI #QSS
Inside Google’s quantum AI lab
https://www.youtube.com/
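On the "translation" question the post raises: one common bridge between a business optimisation problem and quantum hardware is the QUBO (Quadratic Unconstrained Binary Optimization) form that annealers and QAOA-style algorithms consume. The sketch below encodes a hypothetical three-asset portfolio selection (all numbers invented for illustration) as a QUBO matrix; a classical brute-force solver stands in for the quantum backend, in the spirit of the simulator-first, hybrid approach described above.

```python
import itertools

import numpy as np

def solve_qubo_brute_force(Q):
    """Exhaustively minimise x^T Q x over binary vectors x.

    Classical brute force stands in for an annealer or QAOA run;
    fine for small illustrative problems (2^n candidates)."""
    n = Q.shape[0]
    best_x, best_e = None, float("inf")
    for bits in itertools.product([0, 1], repeat=n):
        x = np.array(bits)
        e = x @ Q @ x
        if e < best_e:
            best_x, best_e = x, e
    return best_x, best_e

# Hypothetical numbers: pick assets to reward expected return
# while penalising pairwise risk correlation.
returns = np.array([0.10, 0.08, 0.12])
risk = np.array([[0.0, 0.3, 0.2],
                 [0.3, 0.0, 0.4],
                 [0.2, 0.4, 0.0]])
# QUBO: negated returns on the diagonal (we minimise),
# scaled risk penalties off-diagonal.
Q = np.diag(-returns) + 0.5 * risk

selection, energy = solve_qubo_brute_force(Q)
print(selection, energy)
```

The point of the exercise is that only the solver call changes later: the same `Q` matrix is the kind of object a quantum optimisation backend would typically accept, so the encoding work is reusable today.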
-
MIT Researchers Develop Brain-Inspired Neuromorphic Computing for Energy-Efficient AI Researchers at the Massachusetts Institute of Technology (MIT) are advancing brain-inspired computing technologies to make artificial intelligence systems more energy efficient and environmentally sustainable. Their work focuses on elec... #Technology #MIT #NeuromorphicComputing https://lnkd.in/dJB6Um5w
-
I’ve been following the evolution of quantum computing for years, and most “quantum supremacy” claims have always felt a bit… premature. But Google’s October 2025 announcement caught my attention for one reason: it’s verifiable.
Using 65 qubits of their Willow processor and a new algorithm called Quantum Echoes, they measured a second-order out-of-time-order correlator (OTOC₂) of a complex quantum system in hours, compared to the years it would take the fastest supercomputer. That’s a 13,000× speed-up, but what makes it different is not just the speed; it’s the fact that the result can be independently verified. That’s what makes this a genuine quantum advantage, not just a marketing claim.
What fascinates me most is how AI is quietly integrated into this achievement: optimising quantum circuits, minimising decoherence, and tuning parameters on the fly. It’s not just quantum computing anymore; it’s AI-assisted physics.
As someone who lives at the intersection of physics and AI, I see this as a glimpse into our next era of computation... where machine learning doesn’t replace physical reasoning, it amplifies it.
🔗 Google Research: A Verifiable Quantum Advantage
📄 Nature (2025): “Observation of constructive interference at the edge of quantum ergodicity”
#QuantumComputing #AI #Physics #QuantumAdvantage #Innovation #Research #DeepTech
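For readers wondering what an out-of-time-order correlator actually is: it is the quantity C(t) = ⟨ψ| W(t)† V† W(t) V |ψ⟩ with W(t) = U(t)† W U(t), which probes how quickly information scrambles through a quantum system. The sketch below computes a first-order OTOC exactly for a tiny 3-qubit transverse-field Ising chain in NumPy; this is textbook small-scale numerics, not Google's Quantum Echoes algorithm, and the Hamiltonian, operators, and parameters are illustrative choices.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def op_on(qubit, op, n):
    """Embed a single-qubit operator at position `qubit` in an n-qubit system."""
    mats = [op if i == qubit else I2 for i in range(n)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def otoc(n=3, g=1.05, t=1.0):
    """C(t) = <psi| W(t) V W(t) V |psi> for W = X_0, V = Z_{n-1}.

    W and V are Hermitian, so W(t)^dag = W(t). Evolution is under a
    transverse-field Ising Hamiltonian, solved exactly via eigh."""
    H = sum(op_on(i, Z, n) @ op_on(i + 1, Z, n) for i in range(n - 1))
    H = H + g * sum(op_on(i, X, n) for i in range(n))
    evals, evecs = np.linalg.eigh(H)          # H is Hermitian
    U = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T
    W = op_on(0, X, n)
    V = op_on(n - 1, Z, n)
    Wt = U.conj().T @ W @ U                   # Heisenberg-picture W(t)
    psi = np.zeros(2 ** n, dtype=complex)
    psi[0] = 1.0                              # start in |000>
    return (psi.conj() @ (Wt @ V @ Wt @ V @ psi)).real

# At t = 0 the operators act on different qubits and commute, so C(0) = 1;
# as t grows, scrambling drives the correlator away from 1.
print(otoc(t=0.0), otoc(t=1.0))
```

At Google's scale the analogous quantity can no longer be computed by such exact linear algebra, which is precisely where the claimed advantage lives.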
-
Are you still building tomorrow's solutions with yesterday's computing paradigms? 🤔 While traditional silicon is familiar, the most complex problems require thinking beyond the chip. Welcome to the fascinating world of Unconventional Computing, where nature inspires the next generation of processing power!
💡 DNA Computing: Massively Parallel Power
Imagine a computer built from DNA molecules. It's not science fiction! This field uses the biochemical properties of DNA to perform calculations. Researchers have already tackled hard problems like the Traveling Salesman Problem for small networks. The incredible advantage? Billions of molecules compute simultaneously—true massive parallelism.
🌊 Reservoir Computing: The Brain's Hidden Trick
Less known but powerfully efficient, Reservoir Computing is a type of recurrent neural network with a fixed, randomly connected "reservoir" of neurons. Because only the output layer is trained, it excels at complex time-series analysis, from predicting chaotic systems to advanced speech recognition. The magic is in the internal, non-linear dynamics of the network itself.
At widget d.o.o., we don't just build conventional solutions; we are passionate about the future of computational problem-solving. We harness advanced logic and deep understanding to develop systems that are efficient, scalable, and ready for tomorrow's challenges.
Ready to leverage innovative IT Solutions and advanced Web Development to solve your most complex business problems? 👉 Explore our portfolio and see how we think outside the silicon box! (widgetinfo.com)
#UnconventionalComputing #DNAComputing #ReservoirComputing #AI #ITSolutions #WebDevelopment #Innovation #TechLeadership
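The reservoir computing idea is easy to demonstrate: drive a fixed random recurrent network with an input signal and fit only a linear readout. Below is a minimal echo state network sketch in NumPy on a toy sine-wave prediction task; every size and constant here is an illustrative choice, and a sine stands in for the chaotic series mentioned above.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_reservoir(n_res=200, spectral_radius=0.9):
    """Fixed random recurrent weights, rescaled so the largest eigenvalue
    magnitude is below 1 (a rough proxy for the echo state property)."""
    W = rng.normal(size=(n_res, n_res))
    W *= spectral_radius / max(abs(np.linalg.eigvals(W)))
    w_in = rng.normal(size=n_res)
    return W, w_in

def run_reservoir(W, w_in, inputs):
    """Drive the tanh reservoir with a 1-D input sequence.
    Note: W and w_in are never trained, only the readout is."""
    states = np.zeros((len(inputs), W.shape[0]))
    x = np.zeros(W.shape[0])
    for t, u in enumerate(inputs):
        x = np.tanh(W @ x + w_in * u)
        states[t] = x
    return states

# Task: one-step-ahead prediction of a sine wave.
ts = np.sin(0.2 * np.arange(600))
W, w_in = make_reservoir()
states = run_reservoir(W, w_in, ts[:-1])
targets = ts[1:]

# Train ONLY the linear readout, via ridge regression,
# discarding an initial washout so transients don't pollute the fit.
washout = 100
S, y = states[washout:], targets[washout:]
ridge = 1e-6
w_out = np.linalg.solve(S.T @ S + ridge * np.eye(S.shape[1]), S.T @ y)

pred = S @ w_out
mse = float(np.mean((pred - y) ** 2))
print(f"readout MSE: {mse:.2e}")
```

The design choice that makes this "the brain's hidden trick": all the expressive power lives in the untrained non-linear dynamics, so training reduces to a single linear solve instead of backpropagation through time.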
-