R. Nathanael
United Kingdom
552 followers
500+ connections
Explore more posts
Frédéric Barbaresco
Quantum algorithms for algebraic problems: https://lnkd.in/e3-xdAUC

Quantum computers can execute algorithms that dramatically outperform classical computation. As the best-known example, Shor discovered an efficient quantum algorithm for factoring integers, whereas factoring appears to be difficult for classical computers. Understanding what other computational problems can be solved significantly faster using quantum algorithms is one of the major challenges in the theory of quantum computation, and such algorithms motivate the formidable task of building a large-scale quantum computer. This article reviews the current state of quantum algorithms, focusing on algorithms with superpolynomial speedup over classical computation, and in particular, on problems with an algebraic flavor.
239
4 Comments
Todd Harrison
I've been getting a lot of questions about how I calculated the space-based interceptor options in my costs analysis for Golden Dome. A lot of the confusion stems from the fact that the size and cost of an SBI system depends on a lot of different assumptions about the performance of the interceptors, the threats addressed, and the costs of interceptors and launches. The results are highly sensitive to some of these assumptions and relatively insensitive to others. I decided to make all the calculations available so you can change the assumptions and run the calculations for yourself. I initially did this all in a spreadsheet (like a typical Gen Xer would), but I asked ChatGPT Codex to help turn that spreadsheet into an interactive website. After a few hours of back and forth, here it is. This SBI calculator allows you to vary the inputs (or just use my default assumptions) and see how it affects cost, constellation size, number of launches required, etc. And it also produces a trade-offs chart at the bottom of the page where you can see how each input affects cost, constellation size, etc. So if you hear someone say, "you only need 10,000 interceptors in the constellation," go run the numbers and see what assumptions they must be making about interceptor performance, number of missiles in a salvo, etc. Also remember that Golden Dome is more than just SBIs! This is just one part of what the architecture could include, albeit an expensive part. https://lnkd.in/e_gWD7ye
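For readers who want a feel for how the inputs chain together before opening the calculator, here is a deliberately simplified, hypothetical sketch in Python. It is not Harrison's model or his default assumptions; every parameter value below is a placeholder, chosen only to show how salvo size, interceptor fly-out speed, and boost window drive constellation size, launch count, and total cost.

```python
# Hypothetical, simplified SBI sizing sketch. Parameter names and default
# values are illustrative assumptions, not Harrison's model or his defaults.
import math

EARTH_RADIUS_KM = 6371.0

def sbi_estimate(salvo_size=10,                # missiles launched at once
                 interceptor_speed_kms=4.0,    # fly-out speed, km/s
                 boost_window_s=180,           # time to reach the booster, s
                 interceptors_per_launch=20,
                 cost_per_interceptor_m=10.0,  # $M each
                 cost_per_launch_m=60.0):      # $M per launch
    """Rough constellation size, launch count, and total cost in $B."""
    # Ground area a single on-orbit interceptor can reach during the boost window.
    reach_km = interceptor_speed_kms * boost_window_s
    footprint_km2 = math.pi * reach_km ** 2

    # Absentee ratio: how many interceptors must be on orbit so that one is
    # within reach of the launch area at any given moment.
    earth_area_km2 = 4 * math.pi * EARTH_RADIUS_KM ** 2
    absentee_ratio = earth_area_km2 / footprint_km2

    constellation = math.ceil(salvo_size * absentee_ratio)
    launches = math.ceil(constellation / interceptors_per_launch)
    cost_b = (constellation * cost_per_interceptor_m
              + launches * cost_per_launch_m) / 1000.0
    return constellation, launches, cost_b

n, launches, cost = sbi_estimate()
print(f"{n:,} interceptors, {launches} launches, ~${cost:.1f}B")
```

Even with these made-up numbers, the absentee ratio alone pushes the constellation into the thousands, which is why the results swing so hard with assumptions about interceptor speed and salvo size.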
121
7 Comments
Keith King
Caltech Scientists Store Quantum Data as Sound, Extending Lifetimes 30-Fold Over Traditional Qubits
⸻
Introduction:
A team at Caltech has made a major advance in quantum memory by turning quantum information into sound. Using a chip-based mechanical oscillator—akin to a microscopic tuning fork—researchers have demonstrated that storing data as acoustic vibrations can extend coherence times up to 30 times longer than conventional superconducting qubits. This innovation represents a powerful new pathway toward scalable and reliable quantum computing.
⸻
Key Breakthroughs:
• Quantum Sound Storage:
• Caltech developed a hybrid quantum memory system that translates microwave-based qubit information into mechanical vibrations.
• These vibrations behave like a miniaturized tuning fork, preserving quantum states with far less energy loss and interference.
• Why It Works:
• Superconducting qubits, while excellent at processing quantum data, have short coherence times—meaning they can’t hold information for long.
• Mechanical waves degrade much more slowly than electromagnetic signals, dramatically boosting data retention.
• This hybrid method mitigates unwanted coupling and environmental noise that often plague traditional qubit systems.
• Performance and Scalability:
• The prototype achieved 30x longer storage lifetimes compared to current qubit-based memories.
• Researchers are now focused on increasing data transfer rates between qubits and the mechanical memory for faster, more efficient use in quantum processors.
• Toward the Quantum Internet:
• This work complements recent breakthroughs in long-distance quantum communication.
• In April 2025, UK and European scientists successfully achieved the first global quantum data storage and retrieval, suggesting this hybrid approach may soon support a global quantum network.
⸻
Why It Matters:
Quantum computing’s power lies not just in how fast it can process information, but in how reliably it can store and transfer that information. By converting qubit data into sound-based memory, Caltech’s research tackles one of the field’s most persistent challenges: short coherence times. This breakthrough could be the missing link to practical, long-term quantum storage, paving the way for robust quantum networks, modular computing architectures, and even a quantum internet. It shows that when it comes to quantum data, the future may be heard as much as seen.
⸻
I share daily insights with 22,000+ followers and 8,000+ professional contacts across defense, tech, and policy. If this topic resonates, I invite you to connect and continue the conversation.
Keith King
https://lnkd.in/gHPvUttw
183
17 Comments
Chris Sensor
https://lnkd.in/ee7M_Hui Never heard of a Goodman diagram before? Me neither, before I started working at Siemens. A Goodman-Haigh diagram is used to check whether a cyclic stress time history stays within the infinite-life region for a product made of a given material. Still confused? This 10-15 minute read is a great primer on the tool and how it's used.
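As a rough taste of what the primer covers, here is a minimal Python sketch of the modified-Goodman infinite-life check for a single stress cycle. The endurance limit and ultimate strength below are made-up placeholder values, not data for any real material, and a full time history would normally be rainflow-counted into individual cycles first.

```python
# Minimal modified-Goodman infinite-life check for one stress cycle.
# S_e (endurance limit) and S_u (ultimate strength) are placeholder values,
# not data for any specific material.

def goodman_ok(stress_max_mpa, stress_min_mpa, s_e_mpa=240.0, s_u_mpa=540.0):
    """True if the cycle falls inside the infinite-life region."""
    sigma_a = (stress_max_mpa - stress_min_mpa) / 2.0   # alternating amplitude
    sigma_m = (stress_max_mpa + stress_min_mpa) / 2.0   # mean stress
    # Modified Goodman line: sigma_a / S_e + sigma_m / S_u <= 1
    # (a compressive mean stress is conservatively treated as zero here)
    utilisation = sigma_a / s_e_mpa + max(sigma_m, 0.0) / s_u_mpa
    return utilisation <= 1.0

print(goodman_ok(250.0, -50.0))    # sigma_a=150, sigma_m=100 -> inside the line
print(goodman_ok(300.0, -100.0))   # sigma_a=200, sigma_m=100 -> outside the line
```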
75
Jesuino Takachi Tomita
For young future engineers and professionals who want to better understand what lies ahead in the field of space business, I highly recommend reading this open-access book on the subject. Reading it will likely open your mind to reflect on and explore new businesses and possibilities in the space sector, from basic research involving fundamental sciences to the innovation of new products and services. Thank you Arto Ojala and William Baber for writing this book and making it available for us to read. Download at: https://lnkd.in/dJ9nKs_D #space #stem #education #engineering #mindset

Abstract: "This book updates understanding of commercial activities of firms acting in space-related industries or utilizing services provided by space technology firms. These commercial activities are largely conceptualized by the "New Space" concept, where commercial activities in space are mainly undertaken by private firms, partly replacing the actions of government-resourced space institutions, i.e., "Old Space." New Space refers to business opportunities exploited through small and low-cost satellites and innovative space data services. These services include, for example, precise navigation solutions, satellite imagery and processing, satellite telecommunication, data communication, and remote sensing, among others. Further, commercial use of space technologies has created new services, businesses, business models, value chains, and ecosystems. Thus, space-related technologies, activities, and services are nowadays more easily available for entrepreneurs and small businesses. This increasing accessibility has created numerous research opportunities in this field, which is known broadly as space business and which includes New Space as well as traditional space activities and business opportunities. Although space technologies and services have attracted growing interest in many technical disciplines, academic studies of space business and management activities among firms acting in New Space or utilizing the services provided by New Space are just emerging."
31
2 Comments
Humna Khan
ASTRO's weekly Think Tank sessions are powerful! This week we delved into how agentic AI and recursive graph reasoning can autonomously map the vast, multi-dimensional space of alloy design. We are building a system that isn't just faster computation; it's a new discovery paradigm. It's starting to identify non-obvious correlations (for instance, how a specific heat treatment from aerospace superalloys might be adapted to improve corrosion resistance in a new bio-implant alloy). The next generation of high-entropy alloys, shape-memory metals, and sustainable alternatives is on the horizon, and THIS HOUSE right here is advancing from sequential testing to systemic exploration.
86
1 Comment
Luca Leone
Britain just announced plans to deploy AI drone swarms within five years, but the timeline might be more ambitious than the budget allows. Defence Minister Luke Pollard confirmed that uncrewed systems will become central to UK forces, drawing lessons from Ukraine where AI-enabled drones have transformed battlefield operations. The plan involves a "high-low" mix: advanced manned aircraft working alongside numerous autonomous platforms across land, sea, air, and undersea environments... The strategy makes tactical sense. AI drones can maintain surveillance when human teams rest, operate in hazardous conditions, and provide the kind of persistent coverage that traditional forces struggle to match. Ukraine has demonstrated how relatively cheap autonomous systems can punch well above their weight against conventional military assets. But here's the reality check: moving from "concept phase" to "large numbers" in five years requires the kind of procurement speed that the MoD hasn't exactly been famous for. With every overtime request now requiring senior approval and budgets stretched thin, the gap between ambitious strategy and delivery capacity looks substantial. The technology exists, the tactical need is proven, but can Britain's defence procurement system move fast enough to make this timeline realistic? #ukdefence #militarytechnology #artificialintelligence #defenceinnovation #drones
50
5 Comments
Hannes Fassold
"Unfortunately, most current counter-drone systems look like someone strapped $500,000 worth of sensors to a laser pointer and hoped for the best. Enter yet another tech marvel from Sweden: the Kreuger 100. A stripped-down, software-driven interceptor that’s less F-35 and more Ikea flat-pack missile. That’s not an insult. That’s the future. Launched by Nordic Air Defense (NAD), a Stockholm startup that clearly got tired of watching Europe buy defense tech from across the Atlantic, the Kreuger 100 was designed from the ground up to be cheap, scalable, and fast to deploy. What sets the Kreuger 100 apart isn’t just what’s inside but what’s missing. In the traditional world of air defense, interceptors come bloated with cost-heavy payloads: radar transceivers, laser rangefinders, gimbaled optics, complex gyroscopic stabilization, and propulsion systems that look like they were ripped from Cold War cruise missiles. The Kreuger 100 throws that model out the window and replaces it with a radical, minimalist architecture where the real brainpower lives not in hardware but in code. At the heart of this interceptor is a machine-learning-based flight control algorithm that adapts to environmental variables in real time: wind, angle of attack, target evasion maneuvers, and even thermal distortion caused by cluttered urban landscapes. Instead of reacting like a heat-seeking missile on rails, the Kreuger 100 behaves more like a predator drone with a nervous system. It doesn’t just follow, it predicts. It calculates an interception course based on probabilistic modeling of the drone’s behavior, a kind of anticipatory flight path generation that gives it a split-second edge in a knife fight in the sky. And unlike traditional systems locked into proprietary software ecosystems, the Kreuger 100 is designed to run on modular, updateable codebases. That means when a new drone threat emerges, say, a smaller, faster loitering munition or a decoy swarm, the Kreuger’s software can be updated without touching the hardware. In war, that adaptability is gold. Its infrared tracking system, while simple by Western standards, is fully integrated into this software layer. Rather than relying on heavy stabilization and high-end optics to isolate a heat signature, the Kreuger uses digital signal processing and software-based noise filtering to lock onto targets even with low contrast or amidst thermal clutter. It’s not the most powerful eye in the sky, but it’s smart enough to see through fog, rain, or smoke and still make the shot. [...] In short, the Kreuger 100 doesn’t match legacy interceptors feature-for-feature. It leapfrogs them by reducing complexity, cutting costs, and moving the brain from silicon to code. The result is a nimble, adaptable air defense solution that behaves more like a swarm AI than a missile." From https://archive.ph/pumek
569
67 Comments
Ron Carson, PhD, ESEP
SRA4: 2000 - Fuzzy Sets as Requirements Antecedents

With the realized benefit of elementary set theory on defining sets of requirements antecedents, in ~1998 I took advantage of an in-house Boeing course on Fuzzy Logic. I hoped that this approach might provide some benefit in handling the tolerances inherent in real hardware.

In a nutshell, "fuzzy sets" have non-binary membership functions. That is, an element of a set can have partial membership, something between zero and one. If one relates the membership function to some consequence ("if member...then consequence..."), then the resulting consequence is a mixture of the associated consequences of each of the associated sets of partial membership. I had been impressed with the course application to speed control compared with a standard PID controller algorithm.

In my SRA4 INCOSE paper I analyzed a simple example of TTL logic, which yields a binary result for logic 0 and logic 1 based on voltage levels. A problem was the undefined result for voltages between the upper threshold for logic 0 (0.7V) and the lower threshold for logic 1 (2.0V). Using fuzzy logic I was able to generate a deterministic consequence for all possible voltages, including the undefined range of 0.7 to 2.0V.

Unfortunately, I judged that the process cost was too high to be generally useful for requirements. For, in addition to prescribing the membership functions and associated consequences for the requirements antecedents, one must also define the processing syllogism (how to combine the consequences for partial membership). This "how" made it evident that such application was more appropriate for "design" rather than "requirements", where the latter should only address "what", not "how".

As in all things in science, finding a negative result is still progress, for I learned that "fuzzy logic" was probably not a useful direction for requirements research, even if it had benefit for design implementation.

In the next note (SRA5) I'll address some specific issues in requirements development and syntax, my initial attempt to begin addressing what would eventually become the Boeing approach to "structured requirements" in SRA8 and related 2014 patents.

This note is the fourth in my retrospective series. The original papers are available in the Wiley-INCOSE library, free for INCOSE members (see links in SRA1). A video of my presentation is available on my YouTube channel: https://lnkd.in/g453dqSQ
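The original SRA4 paper is in the Wiley-INCOSE library, so the snippet below is only an illustrative reconstruction of the kind of calculation described: ramp membership functions for "logic 0" and "logic 1" over the TTL voltage range, with a weighted-average defuzzification that yields a defined consequence even in the otherwise undefined 0.7-2.0 V band. The ramp shapes and consequence values are assumptions, not the paper's.

```python
# Illustrative reconstruction, not the calculation from the SRA4 paper:
# ramp membership functions for TTL "logic 0" (below 0.7 V) and "logic 1"
# (above 2.0 V), with weighted-average defuzzification so every voltage,
# including the normally undefined 0.7-2.0 V band, gets a consequence.

def mu_logic0(v):
    """Membership in 'logic 0': 1 below 0.7 V, ramping down to 0 at 2.0 V."""
    if v <= 0.7:
        return 1.0
    if v >= 2.0:
        return 0.0
    return (2.0 - v) / (2.0 - 0.7)

def mu_logic1(v):
    """Membership in 'logic 1': complement of mu_logic0."""
    return 1.0 - mu_logic0(v)

def defuzzified_output(v, consequence0=0.0, consequence1=1.0):
    """Weighted average of the rule consequences ('if logic 0 then 0',
    'if logic 1 then 1'); defined for every input voltage."""
    m0, m1 = mu_logic0(v), mu_logic1(v)
    return (m0 * consequence0 + m1 * consequence1) / (m0 + m1)

for volts in (0.3, 0.7, 1.0, 1.35, 1.8, 2.0, 3.3):
    print(f"{volts:4.2f} V -> {defuzzified_output(volts):.2f}")
```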
23
1 Comment
Kameshwara Pavan Kumar Mantha
I am in Berlin for Vector Space Day, and as part of vector search week let me take some time to talk about vectors and models.

Most of the time, people talk about how to use a given model — the use cases, demos, and benchmarks. But very few actually talk about the other side of the story: hosting the model and trying it out for a use case. If you go down that path, you'll learn a lot more in the age of LLMs.

Recently, I was playing with IBM Granite-Docling-258M and fell into the classic divisibility traps of vLLM:
👉 AssertionError: 100352 is not divisible by 3
👉 ValueError: Total number of attention heads (9) must be divisible by tensor parallel size (4)

🔎 The problem?
Vocabulary size = 100,352
Attention heads = 9
Both must be divisible by your --tensor-parallel-size. That left me with only one valid option:
✅ --tensor-parallel-size 1

Now the question is: how do we scale across GPUs when TP=1 is the only choice? Here's what worked for me:
⚡ Data Parallelism (DP) instead of TP:
Option 1: Ray → --num-replicas N and let Ray spread replicas across GPUs.
Option 2: Kubernetes/Docker → one replica per GPU, load-balanced via a Service or Ingress.

💡 Takeaway: Not every model plays nicely with tensor parallelism. Sometimes the smarter way is replication + distribution.

Most of the real engineering magic happens not in the paper, but when you actually host and experiment with these models in production-like settings.

#vLLM #GenAI #LLMScaling #DistributedAI #VectorSharding
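A minimal sketch of the "one replica per GPU" pattern from Option 2, assuming the standard vllm serve CLI. The GPU count, port numbers, and exact model id are placeholders, and a load balancer (an Ingress, an nginx upstream, or Ray Serve) would sit in front of the listed ports.

```python
# Minimal sketch of the "one replica per GPU" option using the standard
# `vllm serve` CLI. GPU count, ports, and the exact model id are placeholders;
# a load balancer (Ingress, nginx, or Ray Serve) would front the listed ports.
import os
import subprocess

MODEL = "ibm-granite/granite-docling-258M"   # adjust to the exact model id
NUM_GPUS = 4                                 # placeholder: one replica per GPU
BASE_PORT = 8000

procs = []
for gpu in range(NUM_GPUS):
    env = os.environ.copy()
    env["CUDA_VISIBLE_DEVICES"] = str(gpu)   # pin this replica to a single GPU
    cmd = [
        "vllm", "serve", MODEL,
        "--tensor-parallel-size", "1",       # the only valid TP size here
        "--port", str(BASE_PORT + gpu),
    ]
    procs.append(subprocess.Popen(cmd, env=env))

print("replicas listening on ports:",
      ", ".join(str(BASE_PORT + g) for g in range(NUM_GPUS)))
for p in procs:
    p.wait()
```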
72
1 Comment
Pari Singh
Boom Supersonic's secret to re-inventing supersonic flight was software: "the real magic isn't the time savings—it's sort of a Jevons' Law of engineering: when engineering iteration is quick and cheap, many more designs can be evaluated and a much better design can be discovered... The magic of software has compounding effects within our engineering team. Great tools reduce rote engineering work, making jobs more enjoyable. Because we don't need a ton of people, we can be much more selective in our hiring—building small but mighty teams that are fun to be part of."

One of my favorites, Blake Scholl at Boom Supersonic, on how their internal software (mkboom) makes them move fast:
1. Software reduces the effort and cost to iterate and experiment.
2. That changes the team's culture and approach to iteration.
3. You get more iterations in the same time.
4. Each extra iteration broadens the design envelope and helps you discover design points that people didn't think existed before.

Full link added in comments - highly recommend engineers read it.
67
3 Comments