Madrid, Comunidad de Madrid, Spain
773 followers
More than 500 connections
Experience and education
-
Qonto
***** ******** ********
-
********
********* ****
-
******** *****
******** ******** ** **********
-
********** ******** ********* **********
******'* ****** ****** ********, *********** ********** *** **********
-
********** ******** ********* **********
********'* ****** *********** ******* *** ************
-
Languages
-
English
Professional working proficiency
-
Russian
Native or bilingual proficiency
Similar profiles
View more posts
Puneet Agrawal
🔥 NUMA: The Hidden Latency Penalty in Multi-Socket Servers
When we talk low-latency C++, we often obsess over branch predictors, inlining, or cache lines. But in multi-socket production servers, a bigger invisible cost lurks: NUMA (Non-Uniform Memory Access).
💡 What’s going on under the hood?
• Each CPU socket has its own local memory.
• Accessing memory on your own socket is fast.
• Accessing memory on the other socket can be 2–3× slower, since it travels across the interconnect (Intel UPI, AMD Infinity Fabric, etc.).
Now imagine a thread pinned to Socket 0, but allocating memory the OS placed on Socket 1’s NUMA node.
👉 Every access now pays a remote memory penalty — adding tens of nanoseconds per load.
👉 In a trading engine, that’s the difference between hitting the market first… or missing it entirely.
✅ NUMA-Aware Best Practices:
• Thread Affinity: Pin threads to cores deliberately.
• NUMA-Aware Allocation: Use numactl, libnuma, or OS APIs to control where memory lives.
• Partition Work: Keep data + compute on the same socket.
• Socket Discipline: Some workloads isolate latency-critical tasks to a single socket to keep memory fully local.
🔑 Key Takeaways
• NUMA effects don’t show up on dev laptops — but they do on real multi-socket servers.
• Cross-socket memory access can silently add tens of nanoseconds.
• Profiling and locality awareness are essential to keep latency deterministic.
👉 Have you profiled your system for NUMA penalties? What’s your strategy — pinning, binding, or redesigning the workload?
#Cplusplus #LowLatency #HFT #NUMA #PerformanceEngineering #SystemDesign #TradingSystems
68
2 comments
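The pinning-plus-local-allocation pattern described in the post above can be sketched in a few lines. A minimal Linux sketch, assuming libnuma is installed (link with -lnuma -lpthread); the core and node numbers here are illustrative assumptions, and on a real box they should come from the machine's topology (numactl --hardware):

```cpp
// Sketch: pin the current thread to a core on socket 0 and allocate
// memory on socket 0's NUMA node, so loads stay local to the socket.
// Assumes Linux + libnuma; core 0 / node 0 are illustrative choices.
#include <numa.h>
#include <pthread.h>
#include <sched.h>
#include <cstdio>
#include <cstring>

int main() {
    if (numa_available() < 0) {          // libnuma: is NUMA usable here?
        std::fprintf(stderr, "NUMA not available on this machine\n");
        return 1;
    }

    // Pin this thread to core 0 (assumed to live on socket 0).
    cpu_set_t cpus;
    CPU_ZERO(&cpus);
    CPU_SET(0, &cpus);
    pthread_setaffinity_np(pthread_self(), sizeof(cpus), &cpus);

    // Allocate 64 MiB directly on NUMA node 0: local to the pinned core.
    const size_t kBytes = 64 * 1024 * 1024;
    void* buf = numa_alloc_onnode(kBytes, /*node=*/0);
    if (!buf) return 1;

    std::memset(buf, 0, kBytes);         // fault pages in now, on node 0
    // ... latency-critical work on buf pays only local-memory latency ...

    numa_free(buf, kBytes);
    return 0;
}
```

Touching the buffer immediately after allocation faults the pages in while the placement policy is known, so later loads from the pinned thread never cross the interconnect.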
Amartya Jha
For months, our customers kept asking us one thing: “Can CodeAnt AI not just review code but also enforce code quality?”
We listened. And today, it’s live. From now on, every pull request and commit will automatically show you:
1/ Test coverage impact: Did you increase or decrease coverage, and by how much?
2/ Deep dive reports: See exactly which files, functions, and classes lack tests.
3/ Fixes: Generate missing unit tests directly inside VS Code, IntelliJ, or Visual Studio.
4/ Enterprise-grade quality gates: Set rules like “new code must always have >30% coverage” and let CodeAnt AI enforce it.
Live across Azure DevOps, GitHub, GitLab, Bitbucket.
This isn’t just code review anymore. This is quality enforcement, built into your workflow.
We’re just getting started. Welcome to the future of AI code reviews!
125
7 comments
David Heiny
At SimScale, we're building two AI systems: Engineering AI and Physics AI. Why? To help engineering teams innovate faster by collapsing simulation lead time.
Simulation lead time has two components:
1. The compute time to run the simulation
2. The wait time for the sim task to make it through the backlog, and the manual work to process it
Physics AI handles the first, predicting results in seconds by learning from past simulations. Engineering AI tackles the second, automating setup, execution, and reporting, turning sim into a self-serve tool while maintaining central governance.
Together, they let engineers explore more design options in less time.
Thinking through your engineering team's AI strategy and want to learn more? Check out the link below or get in touch! (CAD from Onshape)
#ai #engineering #simulation #cloud #fea #cfd #thermal #emag
181
10 comments
Jerry Liu
Low-code is nice, but if I had to bet on a future, it’s code-based orchestration + coding agents to let anyone bridge that gap.
OpenAI’s AgentKit lets you get started building various flows, like comparing docs, or a basic assistant. Once you need to encode more domain-specific logic/fetch from a data source/create a longer-running agent, you’ll need to export to code and maintain your own workflow.
I’m bullish on building advanced agents over your data that live natively on top of a code-based orchestration framework. We’ve built core tools in LlamaIndex to help enable building code-based agentic workflows and then deploying them. You can easily get started through a vibe-coding tool or through our templates, but then you get the full flexibility to add whatever you want on top. You get the underlying benefits of agentic orchestration: state management, checkpointing, human-in-the-loop.
We’ve also been super deep in coding tools like Claude Code/Cursor/Codex to make sure you’re able to build these automations super easily.
With our latest alpha release of LlamaAgents, you can build whatever workflow you want in code and deploy it as an e2e agent on LlamaCloud! Come check it out 👇
https://lnkd.in/g_JfKi9q
195
15 comments
Jonathan Schneider
Interesting visual of a head-of-line blocking behavior in the Moderne CLI before and after a fix. Each block here represents a different repository with an #OpenRewrite recipe running on it in a large-scale multi-repository run. Knut Wannheden found the bug here yesterday and fixed it.
Concurrency is hard, and the visuals here were produced by Claude Code looking at Moderne trace data we co-designed with our friends at American Express and introduced in 3.45.0.
Next, we're exploring different prioritization methods where we're trying to balance two apparently competing objectives:
1. Runs that should be fast are prioritized early to show incremental results as quickly as possible. After all, if a recipe result is incorrect on the first few repositories, developers like to cancel them quickly and iterate further on the recipe.
2. Runs on repositories that are going to take the longest shouldn't be prioritized last, or we get the long-tail effect on a run that you can observe even after the fix to the head-of-line blocking bug.
We're thinking that Highest Response Ratio Next (HRRN) is a good balance between the two, with a simple formula shown below. Over the years we've iterated on a "weight" measure for repositories based roughly on the density of information (including type attribution) of a repository present in the LST. Regardless of which method we wind up choosing, we'll be using this weight measure as one that run time is roughly proportional to.
I know it may be a little outside of your typical problem space Geoffrey De Smet, but do you have any suggestions? I imagine our problem area is quite a bit simpler than yours typically. 😉
46
5 comments
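The formula image referenced in the post did not survive extraction. For reference, the standard HRRN rule is to run the queued job with the highest response ratio, (waiting time + estimated service time) / estimated service time, so short jobs win early but long-waiting jobs cannot starve. A toy selection sketch; the repository names and weight-derived service estimates are illustrative assumptions:

```cpp
// Toy Highest Response Ratio Next (HRRN) selection over queued repo runs.
// ratio = (wait + service) / service: favors short jobs, prevents starvation.
#include <algorithm>
#include <cstdio>
#include <string>
#include <vector>

struct RepoRun {
    std::string name;
    double waitSeconds;      // time spent queued so far
    double serviceEstimate;  // predicted run time, e.g. from an LST weight
};

double responseRatio(const RepoRun& r) {
    return (r.waitSeconds + r.serviceEstimate) / r.serviceEstimate;
}

// Pick the queued run with the highest response ratio next.
const RepoRun* pickNext(const std::vector<RepoRun>& queue) {
    auto it = std::max_element(queue.begin(), queue.end(),
        [](const RepoRun& a, const RepoRun& b) {
            return responseRatio(a) < responseRatio(b);
        });
    return it == queue.end() ? nullptr : &*it;
}

int main() {
    std::vector<RepoRun> queue = {
        {"repo-small", 30.0, 5.0},     // ratio (30 + 5) / 5      = 7.0
        {"repo-large", 120.0, 600.0},  // ratio (120 + 600) / 600 = 1.2
    };
    if (const RepoRun* next = pickNext(queue))
        std::printf("next: %s (ratio %.2f)\n",
                    next->name.c_str(), responseRatio(*next));
}
```

Note how the ratio tends to 1 for a freshly queued long job but grows without bound as any job waits, which is exactly the balance between the two objectives the post describes.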
Miles Matthias
Previewing a new Stripe project – billing for LLM tokens. You focus on shipping, while we:
* auto update model prices as they change
* enforce your markup % with usage based billing
* auto record usage via a LLM proxy: Stripe’s (new!), OpenRouter, Cloudflare, Vercel, Helicone (YC W23)
Let's get into the details 👇
Say you're building an AI app: you want a consistent 30% margin on LLM costs. How do you stay focused on your product when model prices change constantly and you use many models across providers?
First things first – token prices in one place. We can now give you a single page within the Stripe Dashboard to see token prices across all of the popular model providers. We’ll keep it up to date, so you don’t need to chase.
Set up billing in seconds: enter your token markup (e.g., 30%), submit, and we'll configure everything for usage-based billing with your business model. No math, no deciphering pricing plans vs rate cards vs meters—just give us a % and keep shipping.
Connect an LLM without the glue work. Instead of integrating with providers and separately logging usage to bill, use our LLM proxy. Pass your prompt, chosen model, and Customer ID: we handle the LLM connections to get your prompt response, and we automatically record your customer’s usage so they’ll get billed correctly.
Already integrated with a proxy? We’re grateful to have fantastic partners: OpenRouter, Cloudflare, Vercel, Helicone (YC W23). If you’re using one of these, we can automatically record usage so we can bill your customers for you. No extra API calls necessary.
Interested in trying this? We’re in an experimental private preview and looking for motivated users to push the edges and give honest feedback. Learn more and sign up with the link below in comments!
408
26 comments
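The markup rule the post describes is simple margin arithmetic. A toy sketch of the calculation under assumed token prices; a real integration would pull live prices and record usage through the proxy rather than hard-coding anything, and none of these numbers are Stripe's actual rate card:

```cpp
// Sketch of markup-based LLM billing: bill the customer the provider's
// token cost plus a fixed margin. All prices and volumes are assumptions.
#include <cstdio>

int main() {
    const double inputPricePerMTok  = 2.50;   // assumed $/1M input tokens
    const double outputPricePerMTok = 10.00;  // assumed $/1M output tokens
    const double markup = 0.30;               // desired 30% margin on LLM cost

    const double inputTokens  = 1'200'000;    // recorded usage for one customer
    const double outputTokens = 300'000;

    const double providerCost =
        inputTokens  / 1e6 * inputPricePerMTok +
        outputTokens / 1e6 * outputPricePerMTok;   // what the provider charges you
    const double billed = providerCost * (1.0 + markup);  // what the customer pays

    std::printf("provider cost: $%.4f, billed: $%.4f\n", providerCost, billed);
    return 0;
}
```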
Anh Nguyen
I chatted with 100+ tech leaders and discovered a common mistake non-tech founders make when choosing a CTO.
They focus too much on technical skills alone. But the real value of tech leaders goes far beyond coding skills. Great CTOs are actually tech-business translators who:
1. Put customers first
→ Dive deeper to understand pain points
→ Choose solving problems over elegant architecture
→ Make decisions based on user feedback
2. Bridge the business-tech gap
→ Convert business goals into actionable tech plans
→ Explain complex concepts in simple terms
3. Lead with empathy
→ Balance speed with quality
→ Optimize with limited resources
→ Accept good-enough over perfection
If you see one of these red flags when talking to a CTO, run:
1. More excited about new tech than user problems
2. Pushes for perfection over progress
3. Avoids business conversations
4. Uses complex jargon too much
The best CTOs are business partners first, coders second. Don't be blinded by impressive resumes.
51
55 comments
Ori Keren
We’ve spent the last year watching engineering teams invest heavily in AI tooling to help them write better code faster. But the real pressure is downstream: review, testing, release. That's where Platform and DevEx teams can make the biggest impact on their developers.
Today, I’m proud to announce major updates to LinearB’s AI Productivity Platform. It brings together three capabilities built for the teams that own delivery:
MCP Server: Ask delivery-data questions in plain language and get clear answers.
AI Insights Dashboard: Understand which tools are being adopted (down to the repo) and how they affect commit patterns, PR volume, and delivery speed. Supports more tools out of the box than any other technology. See partial list below.
Developer Surveys: Layer in human signals: how developers feel about the changes, the tools, and workflows. Where is trust high? Where is friction compounding?
Our new Essentials offer starts at $19/contributor/month, includes 1,000 credits per seat, and is priced to scale across teams. A PR with multiple automations—like AI PR descriptions and AI code review—only uses 100 credits.
Teams can also choose from Managed Mode (fully hosted) or Self-Managed Mode (which gives you granular control).
If you lead DevEx or Platform, this is built for you. It offers visibility, automation, and powerful feedback loops without waiting for quarterly retros or another custom dashboard project.
AIDER, CodeRabbit, GitHub Copilot, Atlassian Code Reviewer for Bitbucket, OpenAI's Codex, GitLab Duo, Qodo, Atlassian Rovo, Cursor, Google Jules, Sourcegraph, Cognition Devin, Graphite, Tabnine, Anthropic Claude Code, Greptile, Tusk, CodeAnt AI, Google Gemini, Korbit AI, Windsurf
61
Mike Rossi
I never answer team member Slack DMs. Instead, I ask them to repost it in a public channel.
I can practically hear the confusion through the screen. Did the CEO just...redirect me? Yes, I did, and for a very specific reason.
Questions, decisions, updates, discussions...everything happens on open channels where others can see it. This is an unwritten rule at Smile.io that throws new hires for a loop every single time.
When you work in an office, you overhear conversations and learn without even trying. Someone chatting over coffee, a teammate sharing a win, a question shouted across the desk. You pick up context just by existing in the same space.
But when you’re working remotely, all that knowledge gets trapped in DMs and 1:1 calls. The sales team learns something important about customer behavior, but product never hears about it. Engineering discovers a workaround, but support is still manually fixing the same issue. You only hear what you’re explicitly told, and that’s dangerous for team alignment, context, and growth.
That’s why, at Smile, we live by one core value: if it doesn’t have to be private, say it in public.
As a result, we’ve seen:
• Faster onboarding: New hires don’t need hours of training sessions; they ramp faster by observing real conversations as they happen.
• Shared context: Teams better understand each other’s workflows, roadblocks, and the ripple effects of their decisions.
• Better questions: When you know 50 people might see your question, you think before you type.
• Searchable knowledge: Everything becomes documentation. That debugging session from six months ago? It’s right there in the thread, open to all.
When information lives in DMs, you're building a company where only half the team knows what's happening.
75
13 comments
Hiren Dhaduk
How fast your API partners go live determines how quickly they generate mutual revenue.
A payments company I spoke with tracked time from the signed agreement to the first live transaction with API integration partners, such as e-commerce platforms and merchant processors. Each week of integration delay cost them $80,000 in partner-driven revenue. Three weeks meant $240,000 in opportunity cost.
Their challenge was manual integration processes that required engineering resources for every new API partner. Every integration meant documentation handoffs, scheduled calls, and custom environment setups. They were treating API partner onboarding like a consulting project instead of a scalable system.
Then they rebuilt it as a self-service platform. API partners get immediate sandbox access, automated documentation, and standardized authentication flows.
The result: integration time dropped from six weeks to eight days. Partner-generated revenue increased 40% year-over-year because API partners went live faster and started contributing revenue earlier.
The platform tools that accelerate your internal teams work the same way for external API partners:
- Sandbox environments for testing integrations without engineering support
- Real-time API documentation that updates automatically
- Streamlined authentication and immediate production access
- Consistent 8-day integration timeline
When done right, the same platform infrastructure that helps your developers also accelerates partner success.
In this week's newsletter, I show how platform engineering makes API partner onboarding self-service. Link is in the comments.
45
3 comments