Juan Francisco
Spain
361 followers
366 connections
About
Experience & Education
-
Redslim
********* **** ***** ******
-
*********
********* ** ******** ******
-
*** **** ****** & *****
********* **** *****
-
*********** ** *********
*á**** ************* ** ********í* ******á**** ********í* ******á**** Sobresaliente (Outstanding), GPA 3.7
-
*********** ** *********
***** ** ********í* ******á**** ********í* ******á****
-
Licenses & Certifications
Projects
-
2D Video Game Engine
-
I created a 2D graphics engine along with a sample video game using C++ and DirectX.
-
Audaspace
-
I collaborated on this project, implementing a real-time sound convolution system capable of generating binaural sound in an interactive environment.
Languages
-
English
Professional working proficiency
-
Spanish
Native or bilingual proficiency
More posts
Julien Chaumond
Code is the product. How do you prevent a 1M+ LoC Python library, built by thousands of contributors, from collapsing under its own weight? In transformers, we do it with a set of explicit software engineering tenets. With Pablo Montalvo, Lysandre Debut, Pedro Cuenca and Yoni Gozlan, we just published a deep dive on the principles that keep our codebase hackable at scale.

What's inside:
– The Tenets We Enforce: From One Model, One File to Standardize, Don't Abstract, these are the rules that guide every PR.
– "Modular Transformers": How we used visible inheritance to cut our effective maintenance surface by ~15× while keeping modeling code readable from top to bottom.
– Pluggable Performance: A standard attention interface and config-driven tensor parallelism mean semantics stay in the model while speed (FlashAttention, community kernels, TP sharding) is a configurable add-on, not a code rewrite.

This matters for anyone shipping models, contributing to OSS, or managing large-scale engineering projects. It's how we ensure a contribution to transformers is immediately reusable across the ecosystem (vLLM, ggml, SGLang, etc.). Read more on the Hugging Face blog.
479 · 20 comments
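The "pluggable performance" tenet above is, at its core, a registry pattern: kernels register under a name, and a config string selects one without touching the modeling code. A minimal Python sketch of the idea (illustrative names, not the actual transformers API):

# A toy registry of attention kernels, selectable by config.
ATTENTION_REGISTRY = {}

def register_attention(name):
    """Decorator that makes an attention kernel selectable by name."""
    def wrap(fn):
        ATTENTION_REGISTRY[name] = fn
        return fn
    return wrap

@register_attention("eager")
def eager_attention(q, k, v):
    # Reference implementation; lives with the model, always available.
    return "eager result"

@register_attention("flash")
def flash_attention(q, k, v):
    # A faster drop-in kernel with identical semantics.
    return "flash result"

class ToyModel:
    def __init__(self, config):
        # The config picks the kernel; the modeling code never changes.
        self.attn = ATTENTION_REGISTRY[config["attn_implementation"]]

    def forward(self, q, k, v):
        return self.attn(q, k, v)

model = ToyModel({"attn_implementation": "flash"})
print(model.forward(None, None, None))  # -> flash result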
Yan Cui
A single Lambda function was burning $5,000+/month for a client. The worst part? Most of it was wasted.

The function performs a CPU-intensive task, so the team cranked memory to 10 GB to make it go faster. In Lambda, more memory = more CPU, but that CPU scales horizontally: you unlock another vCPU for every ~1.8 GB of memory. But this was a Node.js function, which is single-threaded by default, so it only used 1 vCPU while they were paying for the 6 vCPUs that came with the 10 GB. The code didn't take advantage of multi-core and didn't need that much memory either, so they didn't use most of the capacity they paid for.

The team misunderstood how Lambda's CPU scaling worked and didn't realise the multi-core aspect of it. It's an innocent mistake, and one that's far less common than underpowered functions, but it was very impactful.

This kind of costly mistake is not limited to serverless either. In fact, it's far more common with EC2 and containers. A misconfigured instance size or minimum auto-scaling count would do much more damage because you're paying for uptime, not just usage. At least with Lambda, you only (over)pay when your code runs. I've worked in places that easily spent $10,000+/month on dev servers that averaged 5% CPU... and no one batted an eyelid...

Luckily, there are tools to address this kind of problem, but you still need to know about and use them. There's the Lambda Power Tuning tool, which requires some effort on your part to proactively tune every function. There's also AWS Compute Optimizer, which you can opt in to for recommendations, though it might take a while to get them because Compute Optimizer only shares recommendations when it has high confidence in them.

To complicate things, if you have a Lambdalith, these optimizers don't work very well, because your function can do many different things and exhibit drastically different performance characteristics. Something to keep in mind when you settle for a Lambdalith.

ps. the "fix" we did was simple: change the memory setting back to 1.8 GB until the team rewrites the code to take advantage of multi-core.
446 · 36 comments
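The arithmetic behind that waste is worth spelling out. A rough back-of-the-envelope in Python (AWS allocates roughly one vCPU per 1,769 MB of configured memory; the dollar figure is the illustrative one from the post):

# Back-of-the-envelope for the over-provisioned Lambda described above.
MB_PER_VCPU = 1769            # ~1.8 GB per vCPU, per AWS's documented scaling

memory_mb = 10 * 1024         # what the team configured
vcpus_paid_for = memory_mb / MB_PER_VCPU   # ~5.8 vCPUs
vcpus_used = 1                # single-threaded Node.js handler

wasted_fraction = 1 - vcpus_used / vcpus_paid_for
monthly_bill = 5000           # illustrative, from the post

print(f"vCPUs paid for: {vcpus_paid_for:.1f}, used: {vcpus_used}")
print(f"~{wasted_fraction:.0%} of the compute wasted, "
      f"roughly ${monthly_bill * wasted_fraction:,.0f}/month")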
Julien Truffaut
After 12 years of Scala, I’ve decided it’s time to seriously learn Rust 🦀. To stay motivated, I’m documenting the journey in a blog series where I build a tool for optimizing gear in the RPG Dofus. The series will cover everything from modeling data, to scraping APIs, to optimization — and all the beginner mistakes along the way. First post is up 👉 https://lnkd.in/dFH9-vWf I’d love feedback from experienced Rustaceans, and encouragement from fellow learners!
848 · 68 comments
Gaurav Sen
MIT is trying to get around the context-window bottleneck. https://lnkd.in/dhywXFsF

This paper describes how their model breaks a problem into subtasks and solves them with different threads. For example, "write code" -> thread 1, "push to git" -> thread 2. This keeps each context window small, reducing memory and compute requirements.

A topic worth exploring :)
126 · 1 comment
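As a thought experiment, the decomposition might look like the sketch below: each subtask runs in its own thread with only its own small context, rather than one ever-growing prompt. This illustrates the concept only; it is not the paper's implementation.

from concurrent.futures import ThreadPoolExecutor

def solve_subtask(subtask: str) -> str:
    # Stand-in for an LLM call: only this subtask's own context is sent,
    # so each call stays within a small context window.
    context = f"You are solving exactly one subtask: {subtask}"
    return f"done({subtask})"

subtasks = ["write code", "push to git"]
with ThreadPoolExecutor() as pool:
    results = list(pool.map(solve_subtask, subtasks))
print(results)  # -> ['done(write code)', 'done(push to git)']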
Miguel Otero Pedrido
Implementing an MCP Agent from scratch. No frameworks. Just 5 minutes 👇

When we started Kubrick, Alex and I had a choice ... LangGraph? CrewAI? Smolagents? Well ... we picked none 😅

We went free-solo to really teach you:
🤖 how agents work under the hood
⚒️ how tool discovery actually happens
🌐 how to connect agents to MCP servers

Today I'm sharing the base Agent class we use for any provider (Groq, OpenAI, Claude, your pick). Next up: GroqAgent + translating MCP Tools ↔ provider tools + logging multimodal traces to Opik.

Curious? 👉 Start here: https://lnkd.in/dEAcr4gT
344 · 18 comments
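The shape of such a provider-agnostic base class might look like the sketch below (my own assumptions about the design, not the actual Kubrick code):

from abc import ABC, abstractmethod

class BaseAgent(ABC):
    """Provider-agnostic agent core: tools in, completions out."""

    def __init__(self, tools=None):
        # In a real MCP client this tool list would come from the
        # server's tool-discovery call rather than being passed in.
        self.tools = {t["name"]: t for t in (tools or [])}

    @abstractmethod
    def complete(self, messages, tools):
        """Provider-specific chat completion (Groq, OpenAI, Claude, ...)."""

    def run(self, user_message):
        messages = [{"role": "user", "content": user_message}]
        # A full agent loop would detect tool calls in the reply,
        # execute them via MCP, append the results, and re-prompt.
        return self.complete(messages, list(self.tools.values()))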
Cesar Miguelanez
Where are all the agents? We were promised autonomous intelligence, and we only got chatbots and glorified workflows sprinkled with LLMs.

Where is the autonomy if I still have to account for all possible edge cases? Why do I still have to deal with APIs, spaghetti logic, and esoteric protocols? Why can't I just describe the outcome I want in plain English and let the computer figure out all the details?

These are the questions we've been asking ourselves at Latitude for the past few months. Today we're introducing a new way to build automation software.

Building for autonomy requires upgrading your mental model of what software is. All you need now are prompts and tools. Code is out; the LLM is the backend now.

Latitude is the first full-stack agent engineering platform built around this idea. It is a code editor, but you won't see any code, only plain-English instructions. We have all the advanced tools you need to design flexible behavior: agent orchestration, tool calling, evaluations, and monitoring.

And the best part is you don't even have to worry about any of it. We've built an agent (Latte!) that takes care of everything for you.

You don't have to be an expert to program computers anymore. We're bringing joy to building software, and this is just the first step.
121 · 9 comments
Santhosh Bandari
Serious engineers aren’t “vibe-coding.” We use LLMs the way we’d work with a junior developer: provide clear instructions, break down tasks, and iterate until the outcome is solid. As code is generated, we review every line, request adjustments, and ensure tests are in place. AI doesn’t inherently introduce sloppy code or security risks—those only appear if you skip proper planning, iteration, and review.
129 · 30 comments
Natia Kurdadze
Lovable used 12 channels at once to explode to $50M ARR. Here's how they did it:

1️⃣ They launched on GitHub early
• Getting discovered fast matters
• Start with open-source
• Pick a viral use case
• Name it clearly
• Engage with early devs
• Virality = developers sharing what they love to use

2️⃣ Dominate Product Hunt
• Multiple launches build momentum
• Launch under MVP name
• Relaunch post-pivot
• Use updates for traction
• Rally your users

3️⃣ Use X as a distribution engine
• Build in public drives demand
• Post product wins daily
• Show user feedback
• Drop behind-the-scenes shots
• Engage with replies
• Most founders quit too soon, @antonosika tweeted every day

4️⃣ Mirror on LinkedIn with pro tone
• Same content, different voice
• Repost from X
• Add polish and insights
• Tag relevant people
• Focus on business results
• LinkedIn is X with suits, use both sides of the coin

5️⃣ Turn growth into SEO fuel
• People Google what's already winning
• Write case studies
• Publish numbers
• Optimize titles
• Cross-link your posts
• Traction builds backlinks, visibility compounds from there

6️⃣ Create agency partnerships
• Built-in distribution beats chasing customers
• Give early access
• Offer rev-share
• Train their teams
• Feature their work
• Incentivize others to sell for you, scale becomes exponential

7️⃣ Grow on YouTube, even with low effort
• Presence > perfection
• Post demos
• Recycle event talks
• Add voiceovers to walkthroughs
• Use clear titles
• 20K+ subs came from just showing up consistently

8️⃣ Build a Discord community
• Turn users into loyalists
• Launch early
• Add feedback channels
• Highlight power users
• Use bots for onboarding
• Communities make your product sticky, Discord made theirs magnetic

9️⃣ Fuel growth with strategic ads
• Don't wait to go paid
• Test Google early
• Layer YouTube retargeting
• Repurpose organic wins
• Show credibility (ARR, case studies)
• Cold traffic isn't cold when your brand's everywhere

🔟 Use Reddit without sounding like spam
• Great place for honest validation
• Post walkthroughs
• Share use cases
• Answer questions
• Link only when asked
• Reddit respects value, not hype, play the long game

11. Dominate niche podcasts
• Borrow trust from hosts
• Pitch niche shows
• Focus on founder journey
• Repeat key points
• Send traffic to landing pages
• A good podcast = 10X warm leads over any cold ad

12. Show up at founder events
• Events aren't just for fun
• Speak if invited
• Network backstage
• Demo when asked
• Capture content while there
• Every IRL moment feeds your digital channels

Lovable hit $50M ARR by stacking 12 growth channels:
• Launch early on GitHub
• Go multi-round on PH
• Tweet daily
• Post smartly on LinkedIn
• Use SEO to fuel compounding
• Build distribution via partners
• Add layers with ads and podcasts

Success wasn't viral. It was intentional.
227 · 28 comments
April Gittens
Nothing's worse than seeing a code tutorial/demo where the presenter doesn't share the code. 😡 Unless the code is proprietary, make it a point to share the sample with the audience. And if you're still struggling to crack the code on how to create engaging technical content, then check out this advice!

📚 Lesson 2: Share the Code

Unless you don't want the audience to give your latest feature, API, MCP server, etc. a try, always strive to share the code that you show on screen. Otherwise, you're leaving the audience in a state of trying their best to repeatedly pause the video and copy + paste what they're seeing on the screen.

If you don't already have a GitHub account, get one today! If you create a lot of code tutorials, then create a template repository (💡 learn more here: https://lnkd.in/gudGG9hn) to both save yourself some time and establish consistency with your audience.

Use short links (ex: bit.ly) whenever possible to link to your code samples. Long URLs can sometimes be clunky and are prone to typos. Share your link(s) on screen, in your content's description (if it's a video), or in a convenient place within your written content (if it's a blog post or article).

This week, I'm testing out some custom NFC tags that I made for our AI Tour. It's so much easier to get someone to scan for repo access if they happen to stop me while walking around the halls. I'm all about convenience and, most important, getting people access to information!

⭐️ If you found this helpful, then please tag/share this post with another budding technical content creator!
55 · 6 comments
Dheeraj Pandey
From Dev Ittycheria's callout yesterday on his MongoDB earnings call. If all you're doing is vector search, you don't have a knowledge graph. You need a graph DB, a vector DB, a data warehouse (a large SQL), and a super fast SQL engine on the edge (a small SQL). Now we can get conversational 💬

Dev: "DevRev, a well funded AI native platform with proven founders disrupting the help desk market, built AgentOS, its complete agentic platform that autonomously handles billions of monthly requests on Atlas. DevRev accelerated development velocity, lowered costs and scaled globally with low latency by using Atlas. AgentOS also leverages Atlas Vector Search for semantic search, enriching its knowledge graph and LLMs with domain specific content."

Kash Rangan (Goldman Sachs): "It's super interesting. You were talking about how some of the Silicon Valley AI startup founders don't have time to think about databases, but our good friend, Dheeraj at DevRev, seems to have made a wise choice here."

Dev: "Obviously, I have so much respect for Dheeraj. He built Nutanix into a real great business, and he's gonna do the same at DevRev. We feel good that someone like Dheeraj is betting early on MongoDB because that's a good signal for other founders who are thinking about doing the same…"

______

Thank you both, for the faith. We're brewing something that will truly make us chatty 🔥
382 · 10 comments
Manuel Leone
🚀 Stop Using LINQ Wrong: PLINQ Can Save You Seconds (or Waste Them)

LINQ is elegant and readable. But when your dataset explodes into millions of items, performance can drop dramatically ⏳. That's where PLINQ (Parallel LINQ) comes in: it distributes the workload across CPU cores. The twist? Sometimes it's faster, sometimes it's slower.

🛠 How to Use PLINQ
Add .AsParallel() to your query → parallel execution.
Use .WithDegreeOfParallelism(n) to control thread count.
Tune with .WithMergeOptions() to optimize result merging.
Check .AsOrdered() if result order matters.

🌍 Real-World Scenarios
✅ Prime number checking / CPU-intensive computations.
✅ Processing millions of records (ETL jobs, log analysis).
✅ Image/file operations (compression, filtering, transformations).
❌ Avoid PLINQ for small datasets or I/O-bound operations → overhead may hurt performance.

⚠️ Key Considerations
Parallel != always faster.
Debugging parallel queries is more complex.
Thread overhead and contention → always benchmark before adopting.

Benchmark Example 👇

using System;
using System.Linq;
using System.Diagnostics;

public class Program
{
    public static void Main()
    {
        // Generate a range of numbers (1 to 1,500,000)
        var numbers = Enumerable.Range(1, 1500000).ToList();

        // Function to check if a number is prime
        bool IsPrime(int n)
        {
            if (n <= 1) return false;
            if (n <= 3) return true;
            if (n % 2 == 0 || n % 3 == 0) return false;
            for (int i = 5; i * i <= n; i += 6)
            {
                if (n % i == 0 || n % (i + 2) == 0) return false;
            }
            return true;
        }

        // Sequential execution
        var stopwatch = Stopwatch.StartNew();
        var sequentialPrimes = numbers.Where(IsPrime).ToList();
        stopwatch.Stop();
        Console.WriteLine(
            $"Sequential - Primes: {sequentialPrimes.Count}, Time: {stopwatch.ElapsedMilliseconds} ms");

        // Parallel execution with AsParallel
        stopwatch = Stopwatch.StartNew();
        var parallelPrimes = numbers
            .AsParallel()
            .WithDegreeOfParallelism(2) // Limited for demo
            .WithMergeOptions(ParallelMergeOptions.NotBuffered)
            .Where(IsPrime)
            .ToList();
        stopwatch.Stop();
        Console.WriteLine(
            $"Parallel - Primes: {parallelPrimes.Count}, Time: {stopwatch.ElapsedMilliseconds} ms");
    }
}

👉 You can run and test this code yourself here: https://lnkd.in/dRHQhkGV

🔑 Takeaway: LINQ → clarity & simplicity. PLINQ → real speed boost on large, CPU-bound workloads. Always measure before switching.

👉 Did you know you can use Stopwatch to measure execution time in your own processes? Have you tried it before? Drop your comment ⬇️

#Dotnet #CSharp
48 · 3 comments
Aditya Sharma
Tools to Master Open-Source LLMs

If you're serious about building with open-source LLMs, two names should already be on your radar 👇 Let's break them down:

1️⃣ Ollama
Think of this as the easiest way to run models locally.
Why it's cool:
🔸 One-line install, super dev-friendly
🔸 Runs models locally, even on CPU
🔸 Ships with models like LLaMA 3, Mistral, Gemma out of the box
🔸 Great for testing, prototyping, and offline use
Built for devs who want to experiment fast, no infra headaches.

2️⃣ vLLM
Blazing-fast, production-ready LLM inference.
Why it's cool:
🔸 Insanely fast inference with PagedAttention
🔸 Efficient multi-model serving
🔸 Optimized GPU memory usage
🔸 Ideal for building scalable RAG or LLM APIs
Built by folks from UC Berkeley + industry collabs. If you're scaling LLM APIs or working on RAG at production level, this is your go-to.

↳ Ollama makes it easy to get started.
↳ vLLM helps you scale when you're ready.
↳ Together, they take you from local prototyping to production-grade LLMs.

Check out free resources in the comments below 👇🏻

🧑🏻‍💻 Follow these experts to learn about LLMs: Alexandre Zajac, Paweł Huryn, Khizer Abbas, Zain Kahn, Philipp Schmid

♻️ Repost or share so others can stay ahead in AI. For high-quality resources on AI and Immigration, join my newsletter here: https://lnkd.in/eBGib_va

#OpenSource #Ollama #vLLM
39 · 15 comments
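To make the "prototype locally, scale later" split concrete, here is a minimal sketch of each path. It assumes Ollama is running on its default local port with the llama3 model already pulled, and that the vllm package is installed (and a GPU available) for the second half:

import json, urllib.request

# --- Ollama: quick local prototyping over its REST API ---
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps({"model": "llama3",
                     "prompt": "Hello!",
                     "stream": False}).encode(),
    headers={"Content-Type": "application/json"},
)
print(json.load(urllib.request.urlopen(req))["response"])

# --- vLLM: high-throughput batch inference (needs a GPU box) ---
# from vllm import LLM, SamplingParams
# llm = LLM(model="meta-llama/Meta-Llama-3-8B-Instruct")
# outputs = llm.generate(["Hello!"], SamplingParams(max_tokens=64))
# print(outputs[0].outputs[0].text)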