Barcelona, Catalonia, Spain
147 followers
122 connections
About
Activity
147 followers
-
Albert Díaz Benitez posted this:
🚀 Exciting News! 🚀 I just shipped a Guardrails-powered reliability layer and a smart cache for both embeddings and LLM calls! ⚡🤖

🔧 What I Built:
🛡️ Guardrails Integration: Spec-first outputs with JSON Schema/Pydantic (types, enums, ranges, regex). Critique/repair loop with targeted re-asks when validations fail. Per-endpoint policies (what’s allowed, max lengths, language, PII checks).
🧠 Embedding Cache: Content-hash keys (e.g., sha256(normalize(text))) at document & chunk level. Automatic warm-up on ingestion; de-dup by semantic similarity to avoid re-embedding near-duplicates.
⚙️ LLM Response Cache: Prompt+params+system+toolset fingerprinting (model, temperature, top_p, schema version). TTL by domain (e.g., short for news, long for FAQs), LRU eviction, and stale-while-revalidate.

🔍 Three Key Takeaways:
1️⃣ Guardrails turn “good outputs” into “guaranteed structures.” Spec-first validation + re-asks dramatically reduce flaky JSON and unsafe text; trust goes up, pagers stay quiet.
2️⃣ Caching is product design, not just infra. Domain-aware TTLs, versioned keys, and semantic de-dup drive big wins on cost/latency without serving stale or wrong answers.
3️⃣ Versioning is everything. Embedding/LLM cache keys that encode model + prompt + policy + schema make upgrades safe and rollbacks boring.

A quick Loom video showing it 👇 https://lnkd.in/e7tgsysD
Let's continue pushing the boundaries of what's possible in the world of AI assistants! 🚀
Shout out to AI Makerspace for the guidance and inspiration!
#Guardrails #LLM #Embeddings #Caching #MLOps #AI #LangGraph #LangChain
AI Makerspace 👨🏫🤖 "Dr. Greg" Loughnane Mark Walker Chris Lusk🕵️♂️
Feel free to reach out if you're curious or would like to collaborate on similar projects! 🤝🔥
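A minimal sketch of the cache-key scheme the post describes: content hashing for embeddings and full-request fingerprinting for LLM responses. Function names, the normalization step, and the key prefixes are illustrative assumptions, not code from the post.

```python
import hashlib
import json

def embedding_cache_key(text: str, model: str) -> str:
    """Content-hash key: the same normalized text always maps to the same cache entry."""
    normalized = " ".join(text.lower().split())  # stand-in for normalize(text)
    digest = hashlib.sha256(normalized.encode("utf-8")).hexdigest()
    return f"emb:{model}:{digest}"

def llm_cache_key(prompt: str, system: str, params: dict, schema_version: str) -> str:
    """Fingerprint prompt + system + params + schema version so upgrades miss old entries."""
    payload = json.dumps(
        {"prompt": prompt, "system": system, "params": params, "schema": schema_version},
        sort_keys=True,
    )
    return "llm:" + hashlib.sha256(payload.encode("utf-8")).hexdigest()

# Identical requests produce identical keys, so a repeat call is a cache hit;
# changing the model, temperature, or schema version changes the key and skips stale entries.
key = llm_cache_key(
    "Summarize the document.",
    "You are terse.",
    {"model": "gpt-4o-mini", "temperature": 0.0, "top_p": 1.0},
    "v3",
)
```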
-
Albert Díaz Benitez shared this:
🚀 Exciting News! 🚀 I am thrilled to announce that I have just completed implementing the Agent-to-Agent (A2A) Protocol using LangGraph! 🎯

🏗️ The A2A Protocol is a standardized communication framework that enables different AI agents to communicate seamlessly, regardless of their underlying framework or implementation. 🔗 It provides AI agents with the ability to talk to each other and share information directly, working together smoothly, even if they were created by different systems.

🔧 What I Built:
🗺️🤖 LangGraph Client Agent - Built a simple LangGraph graph that acts as a client to my existing A2A server, demonstrating clean separation between client and server architectures.
✍️ A2A Protocol Integration - Connected my LangGraph client to an A2A server using the standardized protocol, enabling seamless agent-to-agent communication.

🔍 Three Key Takeaways:
1️⃣ A2A Protocol Learning - Learned how standardized protocols enable different agent frameworks to communicate seamlessly, creating truly interoperable AI ecosystems.
2️⃣ Framework Interoperability - Discovered how LangGraph and LangChain can work together through A2A, demonstrating the power of standardized communication protocols.
3️⃣ Agent Architecture Design - Gained expertise in the separation of concerns between client agents (LangGraph/LangChain) and server agents (A2A), creating modular and scalable systems.

A quick Loom video showing it 👇 https://lnkd.in/eD3_aF_s
Let's continue pushing the boundaries of what's possible in the world of AI assistants! 🚀
Shout out to AI Makerspace for the guidance and inspiration!
#A2A #LangGraph #AI #Innovation #Tech
Feel free to reach out if you're curious or would like to collaborate on similar projects! 🤝🔥
AI Makerspace 👨🏫🤖 "Dr. Greg" Loughnane Mark Walker Chris Lusk🕵️♂️
• Discord | #aie7-announcements | AIM Community - 14 August 2025
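For orientation, a rough sketch of what a LangGraph client node calling an A2A server over HTTP could look like. The server URL, the JSON-RPC method name, and the payload shape are assumptions made for illustration; the real contract comes from the A2A specification and the server's agent card.

```python
import uuid
from typing import TypedDict

import httpx
from langgraph.graph import StateGraph, START, END

class State(TypedDict):
    question: str
    answer: str

A2A_SERVER_URL = "http://localhost:10000/"  # hypothetical server address

def call_a2a_server(state: State) -> dict:
    # JSON-RPC style request; "message/send" and the "parts" layout are assumed here.
    request = {
        "jsonrpc": "2.0",
        "id": str(uuid.uuid4()),
        "method": "message/send",
        "params": {
            "message": {
                "role": "user",
                "parts": [{"kind": "text", "text": state["question"]}],
            }
        },
    }
    response = httpx.post(A2A_SERVER_URL, json=request, timeout=60).json()
    return {"answer": str(response.get("result", response))}

# The client side is just a one-node LangGraph graph: clean separation from the server.
graph = StateGraph(State)
graph.add_node("call_a2a", call_a2a_server)
graph.add_edge(START, "call_a2a")
graph.add_edge("call_a2a", END)
client_agent = graph.compile()
```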
-
Albert Díaz Benitez posted this:
🚀 Exciting News! 🚀 I have had the opportunity to work with an incredible tool: LangGraph Studio 🎯

LangGraph Studio is a visual development environment for building and debugging LangGraph applications 🏗️ It provides the ability to see the graph in action, visualize the flow of data between nodes, and debug multi-agent conversations step by step in real time 👁️

🔍 Three Key Takeaways:
1️⃣ LangGraph Studio Mastery - Learned to visualize and debug AI workflows in real time, making complex graphs transparent and manageable
2️⃣ Helpfulness Node Importance - Discovered how the helpfulness evaluator in the agent_helpful graph adds quality control and self-improvement loops versus the simple agent graph
3️⃣ LangGraph Configuration Mastery - Mastered langgraph.json to define multiple graph variants with different behaviors and routing logic

A quick Loom video showing it 👇 https://lnkd.in/efCSHEzB
Let's continue pushing the boundaries of what's possible in the world of AI assistants! 🚀
Shout out to AI Makerspace for the guidance and inspiration!
#LangGraph #LangGraphStudio #AI #Innovation #Tech #ArtificialIntelligence #API #Integration
Feel free to reach out if you're curious or would like to collaborate on similar projects! 🤝🔥
AI Makerspace 👨🏫🤖 "Dr. Greg" Loughnane Mark Walker Chris Lusk🕵️♂️
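A hedged sketch of the two graph variants the post mentions: a plain agent graph and an agent_helpful variant that loops back until a helpfulness check passes. The state shape and the check itself are placeholder assumptions; in the real project the evaluator is presumably an LLM call, and langgraph.json maps the variant names to compiled graphs like these.

```python
from typing import TypedDict

from langgraph.graph import StateGraph, START, END

class State(TypedDict):
    question: str
    answer: str
    attempts: int

def call_model(state: State) -> dict:
    # Placeholder for the real LLM call.
    return {
        "answer": f"draft answer to: {state['question']}",
        "attempts": state.get("attempts", 0) + 1,
    }

def helpfulness_check(state: State) -> str:
    # Illustrative rule: accept after two attempts; a real node would ask an evaluator LLM.
    return "end" if state["attempts"] >= 2 else "retry"

# Variant 1: simple agent graph.
simple = StateGraph(State)
simple.add_node("agent", call_model)
simple.add_edge(START, "agent")
simple.add_edge("agent", END)
agent = simple.compile()

# Variant 2: agent with a helpfulness self-check loop.
helpful = StateGraph(State)
helpful.add_node("agent", call_model)
helpful.add_edge(START, "agent")
helpful.add_conditional_edges("agent", helpfulness_check, {"retry": "agent", "end": END})
agent_helpful = helpful.compile()
```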
-
Albert Díaz Benitez posted this:
🚀 Exciting News! 🚀 I am thrilled to announce that I have just completed a comprehensive MCP (Model Context Protocol) learning journey, building custom tools and integrating them with LangGraph applications! 🎉🤖

🔧 What I Built Step by Step:
🧠 Custom MCP Server - Built a News MCP server integrating with News API, featuring three powerful tools:
📊 get_top_headlines: for country-specific news
🔍 search_news: for keyword-based searches
📰 get_news_sources: for discovering reliable outlets
🔎 Extended MCP Tools - Added another utility tool to the base server:
🌤️ weather_search: using Weather Stack API for real-time weather data
✍️ LangGraph Integration - Developed a sophisticated conversational AI system using LangGraph's StateGraph architecture that seamlessly connects with my custom MCP server
🛠️ Production-Ready System - Implemented state-based graph architecture with intelligent decision-making for tool selection and conversation management
📊 Real-Time Data Access - Built a system that combines AI reasoning with real-time data retrieval from multiple APIs

🔍 Three Key Takeaways:
1️⃣ MCP Protocol Power - The Model Context Protocol makes building custom AI tools incredibly straightforward, allowing seamless integration of external APIs and data sources into AI applications
2️⃣ LangGraph Flexibility - LangGraph's StateGraph architecture enables sophisticated conversation management and dynamic tool orchestration, making complex AI workflows surprisingly manageable
3️⃣ Production-Ready Integration - The combination of custom MCP tools, LangGraph orchestration, and real-time data access creates robust, scalable AI systems that can handle complex queries and provide intelligent responses

A quick Loom video showing it 👇 https://lnkd.in/e3NYxqbB
Let's continue pushing the boundaries of what's possible in the world of AI assistants! 🚀
Shout out to AI Makerspace for the guidance and inspiration!
#MCP #LangGraph #AI #OpenAI #MachineLearning
Feel free to reach out if you're curious or would like to collaborate on similar projects! 🤝🔥
AI Makerspace 👨🏫🤖 "Dr. Greg" Loughnane Mark Walker Chris Lusk🕵️♂️
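A sketch of what the News MCP server could look like using FastMCP from the MCP Python SDK. The News API endpoints, parameters, and environment-variable handling are assumptions for illustration, not the post's actual code.

```python
import os

import httpx
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("news")
NEWS_API_KEY = os.environ.get("NEWS_API_KEY", "")

@mcp.tool()
def get_top_headlines(country: str = "us") -> str:
    """Return top headlines for a country (assumed News API endpoint)."""
    resp = httpx.get(
        "https://newsapi.org/v2/top-headlines",
        params={"country": country, "apiKey": NEWS_API_KEY},
        timeout=30,
    )
    articles = resp.json().get("articles", [])
    return "\n".join(a.get("title", "") for a in articles[:10])

@mcp.tool()
def search_news(query: str) -> str:
    """Keyword search over recent news (assumed News API endpoint)."""
    resp = httpx.get(
        "https://newsapi.org/v2/everything",
        params={"q": query, "apiKey": NEWS_API_KEY},
        timeout=30,
    )
    articles = resp.json().get("articles", [])
    return "\n".join(a.get("title", "") for a in articles[:10])

if __name__ == "__main__":
    mcp.run()  # serves the tools over stdio by default
```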
-
Albert Díaz Benitez posted this:
🚀 Exciting News! 🚀 I am thrilled to announce that I have just built and shipped a research agent using OpenAI's Agents SDK! 🎉🤖

🔧 What I Built Step by Step:
🧠 Planner Agent - Analyzes research queries and generates 5-20 strategic search terms with reasoning for each search
🔎 Search Agent - Performs web searches using WebSearchTool and creates concise 2-3 paragraph summaries of findings
✍️ Writer Agent - Synthesizes all research into comprehensive reports with proper structure and follow-up questions
🧑💼 Research Manager - Orchestrates the entire workflow, providing real-time progress updates and error handling
🛠️ Supporting Infrastructure - Built tracing, structured outputs with Pydantic validation, and robust error handling
📊 Custom Legal Domain - Implemented a specialized WRITER_PROMPT_CUISINE_RESEARCHER for generating cuisine-recipe research reports

🔍 Three Key Takeaways:
1️⃣ The OpenAI Agents SDK makes building sophisticated multi-agent systems surprisingly straightforward with its minimal abstractions and built-in tools
2️⃣ Structured outputs with Pydantic models ensure reliable data exchange between agents, making the system robust and maintainable
3️⃣ The combination of real-time progress updates, tracing capabilities, and error handling creates a production-ready research system

A quick Loom video showing it 👇 https://lnkd.in/eGbASF3h
Let's continue pushing the boundaries of what's possible in the world of AI assistants! 🚀
Shout out to AI Makerspace for the guidance and inspiration!
#OpenAI #AgentsSDK #DeepResearch #Innovation #AI #TechMilestone
Feel free to reach out if you're curious or would like to collaborate on similar projects! 🤝🔥
AI Makerspace 👨🏫🤖 "Dr. Greg" Loughnane Mark Walker Chris Lusk🕵️♂️
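A condensed sketch of the planner, search, and writer hand-off using the OpenAI Agents SDK (the openai-agents package). The prompts, model defaults, and SearchPlan schema are illustrative assumptions; the post's real system adds a research manager, tracing, and progress reporting.

```python
import asyncio

from pydantic import BaseModel
from agents import Agent, Runner, WebSearchTool

class SearchPlan(BaseModel):
    searches: list[str]  # structured output keeps the hand-off between agents reliable

planner = Agent(
    name="Planner",
    instructions="Given a research query, propose 5 to 20 web search terms.",
    output_type=SearchPlan,
)

searcher = Agent(
    name="Searcher",
    instructions="Search the web for the term and summarize findings in 2-3 paragraphs.",
    tools=[WebSearchTool()],
)

writer = Agent(
    name="Writer",
    instructions="Synthesize the summaries into a structured report with follow-up questions.",
)

async def research(query: str) -> str:
    plan = (await Runner.run(planner, query)).final_output
    summaries = [
        (await Runner.run(searcher, term)).final_output for term in plan.searches[:5]
    ]
    report = await Runner.run(writer, "\n\n".join(str(s) for s in summaries))
    return str(report.final_output)

if __name__ == "__main__":
    print(asyncio.run(research("state of agent-to-agent protocols in 2025")))
```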
-
Albert Díaz Benitez posted this:
🚀 Exciting News! 🚀 I’m thrilled to announce that I just built and shipped Open Deep Research – an AI-powered research assistant that automates the entire research pipeline using LangGraph, Tavily, and LLMs! 🎉🤖

This notebook demonstrates how to build a multi-step research assistant using LangGraph and large language models 🤖. The goal is to generate detailed reports on any topic by guiding the AI through a structured process.
🔐 It starts by setting up API keys (Anthropic, Tavily, and optionally OpenAI) for language generation and web research.
📋 The core component is a state object, which tracks all information about the research and report-writing process.
🧠 The state includes topic selection, planning feedback, section breakdowns, completed content, research data, and the final report.
🔎 For each section, it performs iterative web searches, tracks queries used, and integrates findings into the report.
🧩 Multiple functions help define and manipulate these sections, such as planning content, refining drafts, and validating quality.
🕸️ Tavily is used to perform live web searches to enhance factual accuracy.
🗣️ The assistant gets feedback at different stages to improve content iteratively.
🔄 The LangGraph framework enables a modular and looped approach to research and writing, ensuring refinement at every step.
🧪 Nodes are defined for planning, research, writing, feedback, and final assembly.
🧭 The graph is then executed with a user-defined topic to demonstrate the end-to-end pipeline.
📑 The final result is a polished, structured report enriched with web-sourced insights.

🔍 Three Key Takeaways:
1️⃣ Context Matters More Than Ever - The system uses topic-aware search strategies (e.g. medical → PubMed, tech → ArXiv), boosting citation relevance and report quality dramatically 📚🎯
2️⃣ Human Feedback = Smarter Automation - Strategic checkpoints let users review and steer the research flow. This simple loop improved final output quality 🙌
3️⃣ Parallel Processing = Massive Speed Gains - Using LangGraph's async tools, it builds report sections in parallel – cutting research time without compromising depth ⚡📈

A quick Loom video showing it 👇 https://lnkd.in/eBUBgE3y
Let’s keep pushing the boundaries of AI-powered knowledge generation! Here's to many more innovations ahead 🚀
Shout out to AI Makerspace for the guidance and inspiration!
#LangGraph #AgentEvaluation #OpenAI #AIEvaluation #ContextEngineering #AI #Agents #LangSmith #DeepResearch
💬 Feel free to reach out if you're curious or would like to collaborate on similar projects! 🤝🔥
AI Makerspace Chris Lusk🕵️♂️ 👨🏫🤖 "Dr. Greg" Loughnane Mark Walker
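A minimal sketch of the state object plus a plan, research, and write loop in the spirit of the description above, using LangGraph and the Tavily client. Field names, the hard-coded plan, and the placeholder writer are assumptions; the real notebook adds human feedback, quality checks, and parallel section building.

```python
import os
from typing import TypedDict

from langgraph.graph import StateGraph, START, END
from tavily import TavilyClient

tavily = TavilyClient(api_key=os.environ["TAVILY_API_KEY"])

class ReportState(TypedDict):
    topic: str
    sections: list[str]        # planned section titles
    research: dict[str, str]   # section title -> web findings
    report: str

def plan(state: ReportState) -> dict:
    # Placeholder plan; the notebook asks an LLM and collects human feedback here.
    return {"sections": [f"Background on {state['topic']}", f"Recent work on {state['topic']}"]}

def research(state: ReportState) -> dict:
    findings = {}
    for section in state["sections"]:
        results = tavily.search(query=section, max_results=3)
        findings[section] = "\n".join(r["content"] for r in results.get("results", []))
    return {"research": findings}

def write(state: ReportState) -> dict:
    # Placeholder writer; the notebook drafts, validates, and refines each section with an LLM.
    body = "\n\n".join(f"## {s}\n{state['research'][s]}" for s in state["sections"])
    return {"report": f"# {state['topic']}\n\n{body}"}

g = StateGraph(ReportState)
g.add_node("plan", plan)
g.add_node("research", research)
g.add_node("write", write)
g.add_edge(START, "plan")
g.add_edge("plan", "research")
g.add_edge("research", "write")
g.add_edge("write", END)
open_deep_research = g.compile()
```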
-
Albert Díaz Benitez posted this:
🚀 Exciting News! 🚀 I am thrilled to announce that I have just built and shipped a full pipeline for Synthetic Data Generation, Retriever Benchmarking, and Metric Evaluation using RAGAS & LangChain! 🎉🤖

🧠 What I built step-by-step:
📥 Loaded and split real-world complaint data into document chunks
🧪 Generated a synthetic golden dataset using RAGAS’s TestsetGenerator
🔍 Evaluated multiple retrievers:
• Naive Retriever (based on embeddings)
• BM25 Retriever (based on bag-of-words)
• Multi-query Retriever
• Contextual Compression Retriever (reranking)
• Parent Document Retriever
• Ensemble Retriever
• Semantic Chunking Retriever
📊 Compared their performance using the full suite of RAGAS metrics:
• LLMContextRecall
• Faithfulness
• FactualCorrectness
• ResponseRelevancy
• ContextEntityRecall
• NoiseSensitivity
📉 Visualized all results using matplotlib for side-by-side analysis
⏱️ Collected latency and cost data per retriever via LangSmith's tracing UI
🧾 Summarized the insights to determine the top-performing retrievers for this dataset

🔍 Three Key Takeaways:
1️⃣ Built a golden synthetic dataset using real-world complaint narratives and evaluated it with robust LLM-generated questions and answers.
2️⃣ Benchmarked 7 different retrievers (Naive, BM25, Multi-query, Compression, ParentDoc, Ensemble, and Semantic Chunking) using RAGAS metrics like context recall, factual correctness, and noise sensitivity.
3️⃣ Integrated LangSmith to measure latency and cost for each retriever, and visualized results using matplotlib for clear, data-driven insights on performance and trade-offs.

A quick Loom video showing it 👇 https://lnkd.in/eVT8N6mW
Let's continue pushing the boundaries of what's possible in the world of AI and question-answering. Here's to many more innovations! 🚀
Shout out to AI Makerspace!
#LangChain #QuestionAnswering #RetrievalAugmented #Innovation #AI #TechMilestone #SemanticChunking
Feel free to reach out if you're curious or would like to collaborate on similar projects! 🤝🔥
AI Makerspace 👨🏫🤖 "Dr. Greg" Loughnane Mark Walker Chris Lusk🕵️♂️
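A hedged sketch of how two of the listed retrievers (naive embeddings vs. BM25) can be benchmarked with a couple of RAGAS metrics. The document contents, models, and the answer-generation step are placeholders, and the dataset columns follow RAGAS's user_input / retrieved_contexts / response / reference schema; the full experiment adds the other five retrievers and the remaining metrics.

```python
from langchain_community.retrievers import BM25Retriever
from langchain_community.vectorstores import FAISS
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings
from ragas import evaluate, EvaluationDataset
from ragas.metrics import LLMContextRecall, Faithfulness

# Placeholder complaint chunks; in practice these come from the loaded and split dataset.
docs = [Document(page_content="complaint chunk one"), Document(page_content="complaint chunk two")]

naive = FAISS.from_documents(docs, OpenAIEmbeddings()).as_retriever(search_kwargs={"k": 5})
bm25 = BM25Retriever.from_documents(docs)
bm25.k = 5

def run_benchmark(retriever, golden_rows):
    """golden_rows comes from the RAGAS testset generator (user_input + reference)."""
    rows = []
    for row in golden_rows:
        contexts = [d.page_content for d in retriever.invoke(row["user_input"])]
        rows.append({
            "user_input": row["user_input"],
            "retrieved_contexts": contexts,
            "response": "placeholder answer from the RAG chain",
            "reference": row["reference"],
        })
    dataset = EvaluationDataset.from_list(rows)
    return evaluate(dataset, metrics=[LLMContextRecall(), Faithfulness()])
```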
-
Albert Díaz Benitez posted this:
🚀 Exciting News! 🚀 I am thrilled to announce that I have just built and shipped Synthetic Data Generation, benchmarking, and iteration with RAGAS & LangChain! 🎉🤖

🔍 Three Key Takeaways:
1️⃣ Stop guessing, start measuring - Moving from "it feels right" to quantitative metrics like Context Recall, Faithfulness, and Tool Call Accuracy transforms how we improve AI systems
2️⃣ Synthetic data is a game-changer - Ragas automatically generates realistic test questions and ground truth using knowledge graphs. No more manual test case creation!
3️⃣ Different AI systems need different evaluation approaches - RAG systems focus on retrieval quality and response grounding, while agents need evaluation for tool usage, goal achievement, and topic adherence

Let's continue pushing the boundaries of what's possible in the world of AI and question-answering. Here's to many more innovations! 🚀
A quick Loom video showing it 👇 https://lnkd.in/eN3PNMPM
#LangChain #QuestionAnswering #RetrievalAugmented #Innovation #AI #TechMilestone
Shout out to AI Makerspace 👨🏫🤖 "Dr. Greg" Loughnane Chris Lusk🕵️♂️ Mark Walker
Feel free to reach out if you're curious or would like to collaborate on similar projects! 🤝🔥
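A short sketch of synthetic golden-dataset generation with RAGAS's TestsetGenerator, as described in the post. The loader, wrapper classes, and testset size are illustrative, and exact constructor arguments vary between RAGAS versions.

```python
from langchain_community.document_loaders import DirectoryLoader, TextLoader
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from ragas.embeddings import LangchainEmbeddingsWrapper
from ragas.llms import LangchainLLMWrapper
from ragas.testset import TestsetGenerator

# Assumed location and format of the source documents.
docs = DirectoryLoader("data/", glob="**/*.md", loader_cls=TextLoader).load()

generator = TestsetGenerator(
    llm=LangchainLLMWrapper(ChatOpenAI(model="gpt-4o-mini")),
    embedding_model=LangchainEmbeddingsWrapper(OpenAIEmbeddings()),
)

# Builds a knowledge graph over the documents and derives question/answer pairs from it.
testset = generator.generate_with_langchain_docs(docs, testset_size=10)
golden_df = testset.to_pandas()  # user_input / reference columns feed the retriever benchmark
```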
-
Albert Díaz Benitez reacted to this:
I am very happy to share that I have successfully completed the AI Engineering Bootcamp and obtained my AI Engineering Certification with AI Makerspace 🏅
🏆 I'm very proud to have earned this certificate. It wasn't easy. It took many weeks of hard work, many hours, and balancing the course with work and personal life, but it was so worth it 💪⏳📚💼❤️➡️🏆🎉
This is just the beginning of a great journey. I still have a lot to learn, and I hope to stay in touch with all the people who have been supporting me throughout the course 🚀🤝
I am very grateful to 👨🏫🤖 "Dr. Greg" Loughnane and Chris "The Wiz 🪄" Alexiuk for the way they have taught us in each class 🙏👨🏫
I'm very grateful to all the peer supporters, who were always willing to help and discuss with a great attitude 🤝😊 Mark Walker Mike Dean Eva Draganova Todd Deshane
Thank you so much, Jacob Kilpatrick, for your patience whenever I've written to you to review or ask questions 🙏✍️
I especially want to highlight Mark Walker's great mentorship. From the very beginning, he has always been willing to help and advise. In his responses, he always offered diverse perspectives and tips for improvement. I think having such support is essential for any student. No matter the day or time, he always answered, giving his perspective. Following our LCEL, thank you Mark!! 🙇♂️
Thank you all! 🤘
#AI #LangChain #LangGraph #LangSmith #RAGAS #ContinuousLearning #LLMAgents #A2A #DeepSearch #Guardrails
-
Albert Díaz Benitez liked this:
Insurance Post just published a Q&A on why we started Diesta and why premium payments in insurance are long overdue for a rethink. In the conversation we dug into:
✔️ The billions wasted every year on month-long, manual payment processes
✔️ How our platform makes moving money faster, cheaper, and more secure
✔️ What's next as we expand across various geographies and companies
What I like about the interview is that it's not only about technology. It's about our founding story, the purpose of the business, and transforming a part of the industry that is centuries old.
See the link to the article in the comments below.
-
Albert Díaz Benitez liked this:
🚀 Exciting News! 🚀 I have had the opportunity to work with an incredible tool: LangGraph Studio 🎯
LangGraph Studio is a visual development environment for building and debugging LangGraph applications 🏗️ It provides the ability to see the graph in action, visualize the flow of data between nodes, and debug multi-agent conversations step by step in real time 👁️

🔧 What I Built:
🧠 Custom MCP Server - Built a 📰 News MCP server using FastMCP with News API integration, featuring three tools: 📊 get_top_headlines, 🔍 search_news and 📰 get_news_sources
✍️ LangGraph Studio Integration - Connected my MCP server to LangGraph Studio for real-time workflow visualization and debugging
🛠️ Production-Ready System - Implemented robust error handling, environment management, and API security

🔍 Three Key Takeaways:
1️⃣ LangGraph Studio Mastery - Learned to visualize and debug AI workflows in real time, making complex graphs transparent and manageable
2️⃣ Helpfulness Node Importance - Discovered how the helpfulness evaluator in the agent_helpful graph adds quality control and self-improvement loops versus the simple agent graph
3️⃣ LangGraph Configuration Mastery - Mastered langgraph.json to define multiple graph variants with different behaviors and routing logic

🎯 Advanced Build Deep Dive:
🏗️ FastMCP Implementation - Created a lightweight MCP server with clean tool definitions and proper error handling
🔄 Real-Time Monitoring - Used LangGraph Studio to watch tool execution, data flow, and conversation state in real time
🔐 Secure Integration - Implemented proper environment management and API key security for production deployment

💡 The result? A production-ready MCP server that integrates seamlessly with LangGraph Studio, providing real-time visibility into AI workflows and making complex tool orchestration both powerful and debuggable!

A quick Loom video showing it 👇 https://lnkd.in/dZUhzXVu
Let's continue pushing the boundaries of what's possible in the world of AI assistants! 🚀
Shout out to AI Makerspace for the guidance and inspiration!
#MCP #LangGraph #LangGraphStudio #AI #Innovation #Tech #ArtificialIntelligence #API #Integration
Feel free to reach out if you're curious or would like to collaborate on similar projects! 🤝🔥
AI Makerspace 👨🏫🤖 "Dr. Greg" Loughnane Chris Lusk🕵️♂️ Mark Walker
-
Albert Díaz Benitez liked this:
🔹 Why does Docker remain a key pillar of technological success for any modern company?
In today's world, where delivery speed and operational efficiency make the difference between standing out and disappearing, Docker has consolidated itself as a strategic tool for the business, not just a technical one.

🚀 Advantages for the business:
Total portability: what works in development will work in production. No surprises, no wasted time.
Environment standardization: reduces human error and "works on my machine" problems.
Efficient scalability: pairs perfectly with orchestrators such as Kubernetes or ECS to grow intelligently.
Faster onboarding: new developers can get started in minutes with consistent environments.
Lower operating costs: lightweight containers consume fewer resources than traditional virtual machines.

🛠️ Advanced tip: multi-stage builds
Did you know you can significantly reduce the size of your images with multi-stage builds? They let you generate optimized images containing only what is needed at runtime, excluding toolchains, build dependencies, and intermediate files.
📦 Result: smaller, more secure images that are faster to deploy.

👉 Docker is no longer just a DevOps tool; it is a competitive advantage for companies that bet on agility, reliability, and scalability.
Are you already taking full advantage of Docker in your stack? 💬
#Docker #DevOps #SoftwareEngineering #CloudNative #Productividad #Contenedores #CICD #TechLeadership
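To make the multi-stage tip concrete, here is an illustrative two-stage Dockerfile for a generic Node.js service (base images, paths, and build commands are assumptions): the first stage carries the full toolchain, the second ships only runtime artifacts.

```dockerfile
# Stage 1: install dependencies and compile with the full toolchain.
FROM node:20 AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: slim runtime image with production dependencies and built output only.
FROM node:20-slim AS runtime
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=build /app/dist ./dist
CMD ["node", "dist/server.js"]
```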
-
Albert Díaz Benitez reacted to this:
Team days are always such great fun! When all the daily hustle, hard work and detailed crafting comes to a stop for a few hours, it’s play time at Diesta. Yesterday we took advantage of the London Sports Week initiative and played a few rounds of Paddle on the fantastic court at Hays Gallery. We love a good challenge and rising to the occasion - hence, congratulations to all the beginners playing paddle for the first time and our two proud winners 👏🏼
Now back to smashing premium payments and building the insurance transaction system of the future!
Experience and education
-
INARI.IO
******** ** **********
-
*****.**
**** *** *********
-
***** ***
****** ******** ********
-
*********** *****è***** ** *********
******** ****** ** *********** *********** ******** ******** *********** 7.95
-
Licenses & certifications
Volunteer experience
-
Teacher
Generalitat de Catalunya
4 months
Education
Education and introduction to the Scratch programming environment for children aged 8 to 14, under the Code Club Catalonia training program.
Projects
-
TooPath API
-
TooPath v3 is an API that lets you manage tracks and locations related to a device. The API is protected with JWT authentication and follows GeoJSON (RFC 7946).
This project was developed as the Final Degree Project of the Bachelor's Degree in Informatics Engineering at the Barcelona School of Informatics (UPC).
Languages
-
Spanish
Native or bilingual proficiency
-
Catalan
Native or bilingual proficiency
-
English
Limited working proficiency
Recommendations received
1 person has recommended Albert
Similar profiles
-
Hugo González Romero
International Game Technology PLC (formerly GTECH S.p.A.)
180 followers · Barcelona and surrounding area
See more posts
-
Alex Barády
You must master these 11 concepts. If you want to succeed with development in 2025... It's not optional anymore. If it were a game... Using GitHub Copilot is basically level 1. While top product teams are already at level 50. And the rules are changing fast, So mastering them is critical for staying competitive. Here are the 11 must-learn AI concepts 👇 (Master all of them or get left behind) 1. Context Management ↳ Building structured project specs for consistent AI output. 2. Prompt Patterns ↳ Proven templates that improve AI coding consistency. 3. MCP (Model Context Protocol) ↳ Protocol connecting AI models to external tools and data. 4. RAG (Retrieval-Augmented Generation) ↳ AI system that reduces hallucinations using external sources. 5. Agentic Development ↳ AI agents that autonomously write and modify code. 6. AI-Assisted Programming ↳ Real-time collaboration between developers and AI. 7. Vibe Coding ↳ Natural language programming for rapid prototyping. 8. AI Guardrails ↳ Safety controls preventing harmful code generation. 9. Monitoring & Evaluation ↳ Measuring AI performance in development workflows. 10. Cost Optimization ↳ Smart model selection and caching to control expenses. 11. Model Routing ↳ Using appropriate AI models based on task complexity. You need to master EVERY concept. Missing even one puts you at a disadvantage. Save this list and refer back to it... Your product team's efficiency depends on it. ♻️ Repost to help others level up their AI game Follow Alex Barady for practical AI strategies
123
64 comments
-
Yan Cui
One of the biggest misconceptions I hear about serverless is this: "It’s just a spaghetti of Lambda functions calling each other." Yes, I’ve seen that happen. When it does, it’s ugly, and it usually comes from inexperience and lack of design. But that’s not what serverless architectures are supposed to be. It’s like saying "cats are animals that poop on your bed", sure, accidents happen (…probably 😹), but that’s not the norm or the expected behaviour! So, what does a well-designed serverless architecture actually look like? From a bird’s-eye view, pretty much the same as what you'd build with containers or EC2: • Separate accounts per team/workload • System is decomposed into independent services • Every service owns its own data (no shared DBs) • Services are loosely coupled through events • Centralised logging and observability Whether I have an API (synchronous communication) or use events (asynchronous communication) does not depend on whether I use Lambda vs. containers. A serverless architecture doesn't have to be event-driven. Equally, an event-driven architecture can run on containers or EC2. Those are orthogonal architectural choices. Inside each service, I use the serverless-first mindset to decide on my tech stack, e.g. • Prefer DynamoDB over RDS • Prefer API Gateway of ALB • Prefer Lambda functions over containers or EC2 • Prefer EventBridge over Kafka The guiding principle is simple: pick the service that does the most heavy lifting. And with serverless technologies like Lambda, you get built-in multi-AZ redundancy; scalability; reduced attack surface; no need to manage infrastructure; simplified deployment; and, pay-per-use pricing. So no, you don’t expose "a bunch of Lambdas" as your service boundary. That’s not the goal. That’s just a mistake.
376
14 comments
-
Brendan O'Neil
🚀 New Release from ComfyUI - Subgraph Publishing & Selection Toolbox Redesign ComfyUI 0.3.63 update unlocks two powerful enhancements that make working with this tool even more fluid and modular: ✨ What’s New ⁉️ 1. 𝗦𝘂𝗯𝗴𝗿𝗮𝗽𝗵 𝗣𝘂𝗯𝗹𝗶𝘀𝗵𝗶𝗻𝗴 You can now save any subgraph as a reusable component in your node library. Use the publish icon or menu in the Selection Toolbox, then find it under Node Library → Subgraph Blueprints. When you need to tweak the behavior, just edit and update it like a regular node. This change elevates composability and reusability in your workflows. ComfyUI 2. 𝗦𝗲𝗹𝗲𝗰𝘁𝗶𝗼𝗻 𝗧𝗼𝗼𝗹𝗯𝗼𝘅 𝗥𝗲𝗱𝗲𝘀𝗶𝗴𝗻 The UI of the selection toolbox has been refreshed with clearer icons and an expandable menu. This redesign isn’t just aesthetic — it paves the way for future enhancements and more flexible workflows. ComfyUI Blog 🔍 𝗪𝗵𝘆 𝗧𝗵𝗶𝘀 𝗠𝗮𝘁𝘁𝗲𝗿𝘀 𝘔𝘰𝘥𝘶𝘭𝘢𝘳𝘪𝘵𝘺 & 𝘙𝘦𝘶𝘴𝘦: Subgraphs can now live as first-class entities—no recreating from scratch each time. 𝘉𝘦𝘵𝘵𝘦𝘳 𝘜𝘟: Cleaner UI and menu structure help speed up experimentation and iteration. 𝘌𝘹𝘵𝘦𝘯𝘴𝘪𝘣𝘪𝘭𝘪𝘵𝘺: The foundation is laid for even richer features down the line. As someone who values efficient, scalable design in creative and engineering workflows, I’m excited to see how this deepens what’s possible in ComfyUI. #ComfyUI #UIUX #subgraphs #ai #monks
98
4 comments
-
Henri Maxime Demoulin
Frameworks like Node.js lack the primitives for safe applications. Postgres has had them for decades. It should be at the core of our designs. Almost a decade after this paper's publication, concurrent bugs are still plaguing applications. Even seasoned programmers get caught. (And new coding assistants like Claude Code also struggle.) That's because it takes very little to create a concurrency bug: 2 concurrent actions... What's really sad is that these problems have been solved by databases like Postgres decades ago. We need libraries that bring transactional guarantees into application code: steps that always run at-least-once or exactly-once, in order, backed by the database. Look at the findings of the paper (consistent with other studies). The root causes of bugs in Node.js: - 65% atomicity violation (no transactions in Node.js...) - 35% order violation (no synchronization primitives in Node.js...) Note that 40% of bugs came from interacting with external APIs. For example, check figure 4 in the paper: this is a typical example of un-orchestrated calls to an external API leading to data losses. When this happens, teams often reach for heavy orchestration frameworks like Airflow or Temporal, even though a simple Postgres-backed SAGA could suffice. The paper also finds that 50% of concurrency bugs are caused by API mis-use. i.e., people get fooled by async/await. So, what do we do? Node.js is very popular and for good reasons: javascript/typescript are much more approachable than traditional backend languages like C# or Java. Many devs enjoy using the same language for both frontend and backend. But twisting the framework to make concurrent programs safe is really, really hard in Node.js. One approach is to bring ACID properties to the application by embedding a RDBMS-backed library to the framework. For example, if you provide a "step" abstraction, with automatic ID generation and checkpointing, you can ensure that each step is executed at-least-once or exactly-once and that they order is always the same. Happy to chat more about that idea :) #postgres #nodejs
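The post argues for database-backed step primitives rather than ad-hoc async code. As a rough illustration of that idea (here in Python with psycopg; the post itself discusses Node.js, and the table name and schema are assumptions), a step can record its output under a unique ID so retries replay the stored result instead of repeating the work:

```python
import psycopg
from psycopg.types.json import Json

# Run once to create the bookkeeping table (assumed schema).
SETUP_SQL = """
CREATE TABLE IF NOT EXISTS steps (
    step_id TEXT PRIMARY KEY,
    output  JSONB NOT NULL
)
"""

def run_step_once(conn: psycopg.Connection, step_id: str, fn):
    """Run fn at most once per step_id; later calls replay the recorded output."""
    with conn.transaction():  # read, run, and record inside one transaction
        row = conn.execute(
            "SELECT output FROM steps WHERE step_id = %s", (step_id,)
        ).fetchone()
        if row is not None:
            return row[0]  # step already ran: return the stored result
        output = fn()
        # The PRIMARY KEY makes a concurrent duplicate insert fail and roll back,
        # so at most one execution of this step is ever recorded.
        conn.execute(
            "INSERT INTO steps (step_id, output) VALUES (%s, %s)",
            (step_id, Json(output)),
        )
        return output
```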
44
18 comments
-
Andrew Bolis
With vibe coding, it's quick and easy to build apps. (But integrations take a while to set up, until now.) To set up integrations, you typically need to: → Create a developer account with the service → Find and manage API keys → Share credentials with your team → Repeat for every new app you build This can take several hours or days. Luckily, there’s a new and better solution… Replit just launched Connectors to streamline this entire process. ↳ OAuth-based integrations where you just log in to a service once and start building - no developer accounts, no API keys, no manual OAuth setup required. Here's the new workflow: → Tell Replit's Agent what you want to build → Log in to the service (Salesforce, Dropbox, etc.) → Agent immediately starts building with your data That's it. One login. Unlimited apps using that service. With Replit Connectors: ✅ Integrations feel native - no credentials to manage for OAuth services ✅ Faster time-to-value - create apps with external data in minutes ✅ Safer by default - no manual credential management ✅ Enterprise governance - assign connections to specific users and roles Connectors work with the tools you already use: → Google (Sheets, Docs, Calendar, YouTube) → Salesforce, HubSpot, Notion, Linear → GitHub, Jira, Confluence, Asana → Dropbox, Box, OneDrive, SharePoint → Discord, Spotify, Outlook Log in once to any service, and your entire team can build apps on top of that data. Some apps you can build in plain English: • “Pull my YouTube analytics, spot trending topics, and plan them in Google Calendar.” • “Scan Salesforce opportunities, flag renewal risks, and draft outreach emails.” • “Read a Confluence PRD and turn it into Jira epics, stories, and tasks.” That’s the shift: from integrations as a blocker to integrations as a starting point. Build apps & automations on top of your data with Connectors. 📌 Try Replit Connectors today: https://replit.com/ 🔄 Repost to help others discover faster integration workflows #AI #BuildingApps #AppIntegrations #SponsoredByReplit #ReplitPartnership
881
253 comments
-
Aurimas Griciūnas
💥 Postman has just released a collection of 𝗳𝗿𝗲𝗲, 𝗿𝗲𝗮𝗱𝘆 𝘁𝗼 𝗯𝗲 𝘂𝘀𝗲𝗱, 𝗼𝗳𝗳-𝘁𝗵𝗲-𝘀𝗵𝗲𝗹𝗳 𝗔𝗴𝗲𝗻𝘁 𝘁𝗲𝗺𝗽𝗹𝗮𝘁𝗲𝘀 𝗳𝗼𝗿 𝗗𝗲𝘃𝗢𝗽𝘀 𝘁𝗮𝘀𝗸 𝗮𝘂𝘁𝗼𝗺𝗮𝘁𝗶𝗼𝗻𝘀. There are plenty to choose from, my favourite - Slack-to-Jira Filer Agent. Here is the situation I was unfortunate to be part of for too many times: 👉 You discover a bug or consider a new feature. 👉 You open a Slack thread for discussions. 👉 50 messages later you come to the conclusion. 👉 You 𝘀𝗽𝗲𝗻𝗱 𝟭 𝗵𝗼𝘂𝗿 to read through the thread and create a Jira ticket. Here is how Slack-to-Jira Filer Agent fixes the above: ✅ Connect the agent to your Slack and Jira once. ✅ Type a predefined emoji in the thread. ✅ The agent summarises the conversation, creates a Jira ticket with all relevant information. ✅ You 𝘀𝗮𝘃𝗲 𝟭 𝗵𝗼𝘂𝗿 of tedious work. Find this and other available templates here: https://fnf.dev/44TxZBW Thank you to Postman for collaborating on this post and helping to share such useful resources with the community! What other templates would you like to see to help with DevOps automation? Let me know in the comments 👇 #Agents #AI #DevOps
172
34 comments
-
Daniel Moka
I stopped using Cursor for coding. I use Claude Code now. It's 10x better. Here are 5 tips on how we use Claude Code in my team: 1. Create a CLAUDE.MD file to include project goals, tech stack, folder structure and best practices 2. Don't ask Claude to code first. Instead, ask it to explore first, then plan, then implement features. 3. Use /clear command often to reduce hallucinations and save tokens 4. Build custom slash commands to automate repetitive tasks like debugging 5. Paste screenshots into Claude to better describe errors, layouts and bugs. A picture says more than 1000 words To learn more Claude Code tips, check out my latest post: https://lnkd.in/daBja6sC AI won't replace software engineers. But an engineer using AI will.
1542
188 comments
-
Erwin Hofman
We just added something nerdy to our tool, as treemaps seems to be hot in #JavaScript + web #performance world 🔥 When using RUMvision - Core Web Vitals monitoring to track #UX in real-time, we already show site owners a third party dashboard. But we now added a treemap-view as well: → sort by events This will get the third party with most ocurrences at the top left. In this case #GTM. However, the color+opacity coding is telling us that it doesn't come with biggest INP impact 💡 → sort by JS time To get the one with highest JS time at the top left. This newly introduced view should help all shareholders (even less/non technical ones) to start discussions around third parties and make the right decisions. For example: where should we start optimizing? In this case: 1️⃣ probably cookie-script because of high-ish amount of events 2️⃣ then new relic being in the middle with high JS time 3️⃣ and for example nosto + hotjar 4️⃣ consentmanager + intercom are very red as well, is what mouseover will reveal
46
18 comments
-
Brett Bohannon
Amazon walked back their variation theme removal approach. Original plan: Remove all deprecated themes Sept 2-Nov 30 Updated plan: Only remove themes with zero sales in 12 months What this means for you: Critical themes (size, color, style) are safe Existing variations keep operating normally Much smaller impact than first announced Still worth auditing your themes This is a good reminder that Amazon's initial announcements often get refined. The key is having systems to adapt quickly either way. Free audit tool still available for anyone who wants to check their inventory.
72
33 comments
-
Shubham Vora
I was checking out Sonnet 4.5 (via Claude) — hyped as one of the best coding models right now. 🧵 Then, just saw this Reddit post where a developer tried using Claude Sonnet 4.5 for a simple coding task — and got burned. 👉 https://lnkd.in/dQbVTswV They asked it to add a file upload feature. Instead of reusing an existing helper function from the codebase, the model generated new logic — blindly and confidently. Classic case of what people now call vibe coding. It looks clean. It sounds smart. But under the hood, it’s chaos. No context. No reuse. No accountability. This isn't the first time I've seen LLMs do this — confidently skipping over edge cases, ignoring constraints, and hallucinating structure. And yet, vibe coding isn't all bad. ✅ It's great for: – Quick POCs – Exploring unfamiliar code – Learning new stacks – Getting unstuck ❌ But it fails at: – Stability – Debugging – Legacy integration – Team handoff Vibe coding is fine... until you try to ship it. So here’s my take: Use AI to move fast — but don't skip the real engineering. Curious — how do you use LLMs in your workflow? Where do you draw the line between vibing and building? Follow Shubham Vora to learn more about AI agents and AI tools.
43
33 comments
-
Chanchal Kumar Mandal
🚨 Backend Devs, You Know This Struggle 🚨 You build the API. You test it in Postman. Everything works flawlessly. ✅ No errors. No surprises. Smooth sailing. But then… 🚨 The frontend team integrates it, and suddenly—𝗯𝗼𝗼𝗺—nothing works. You double-check: 🔸 Request body? Correct. 🔸 Headers? Fine. 🔸 Auth token? Looks good. So what went wrong? 🧠 These are the kinds of bugs that haunt us: • Postman bypasses browser-level CORS restrictions • The frontend might structure the JSON 𝘴𝘭𝘪𝘨𝘩𝘵𝘭𝘺 differently • Content-Type headers could be missing or incorrect • Or maybe… a sneaky middleware is quietly sabotaging the request 📌 𝗧𝗮𝗸𝗲𝗮𝘄𝗮𝘆: Just because it works in Postman doesn’t mean it works in the browser. Welcome to the joys of backend debugging. 😅 Please check 🌍 https://lnkd.in/gfZJqPTR and follow ✅ https://lnkd.in/gZ6myPbn Please follow Chanchal Kumar Mandal #BackendDevelopment #Postman #APIIntegration #DeveloperStruggles #DebuggingTips #SoftwareEngineering #CodingLife #WebDevelopment
114
4 comments
-
Chirag Goel
The Web is more powerful than you think. Modern Web APIs are bridging the gap between browsers and native apps. A few examples you’re probably already using without noticing: 1️⃣ Clipboard API – Copy/paste across apps and browsers. → Use case: Google Docs syncing your clipboard with OS seamlessly. 2️⃣ Web Share API – Share links/files directly to native apps. → Use case: A news site letting you share an article to WhatsApp in one tap. 3️⃣ Device APIs (Camera, Geolocation, Bluetooth) → Use case: Scanning QR codes for payments, connecting fitness trackers, live navigation. 4️⃣ File System Access API – Read/write local files securely. → Use case: Figma in browser saving files directly to your machine. These APIs unlock “native-like” experiences inside the browser. But here’s the catch, System Design matters more than ever. 👉 How do you handle permissions safely? 👉 How do you scale features across browsers & devices? 👉 How do you degrade gracefully when an API isn’t supported? The power is there, but the architecture decisions determine if your app feels magical or broken. If you want to go deeper, I highly recommend exploring Namaste Frontend – System Design by Akshay Saini 🚀 & Chirag Goel. ❓What’s the Web API that impressed you the most? Regards, Chirag Goel
364
12 comments
-
Shivam Chopra
ChatGPT 5 vs Claude Opus 4.1 for development… and why this update isn’t just for developers. The new ChatGPT drop isn’t only about writing code. It’s shaping up as a full creation and execution surface for product teams. For Development: ChatGPT 5: stronger repo-scale reasoning, cleaner tool use, faster scaffolds, smarter tests, and refactors. Great for spinning up prototypes and integrating with workflows. Claude Opus 4.1: excellent spec reading, careful code edits, solid architecture notes, reliable docstrings, and explanations. Great for large design reviews and safety-sensitive changes. Beyond Developers: ChatGPT 5 now leans into product and ops: data analysis, slide outlines, site copy, UI drafts, lightweight agents, and quick connections to the tools teams use daily. Claude Opus 4.1 shines for research memos, strategy docs, policy language, and extracting structured insights from long materials. How would I choose??? - If you need speed to prototype end-to-end features, then try ChatGPT. - If you need meticulous reasoning and clear writing, consider using Claude. Best stack uses both. Standardize prompts, add guardrails, and route tasks by strength. Bottom line: This ChatGPT update expands the audience from “developers only” to “whole product teams.” PMs, designers, analysts, marketers—everyone gets a lift.
39
1 comment
-
Milan Jovanović
Have you heard of the new type of UUID? I often use UUIDs (Guid in C#) as unique identifiers in my database. They do have one problem: they're "random", which is good and bad. It's good for distributed systems, because we can generate them from different nodes. But what makes them bad? One thing is size. A Guid in C# is 16 bytes, and this can add up. This means your database tables and indexes will consume more memory. Also, if you're a database geek, you're familiar with index fragmentation. To put it simply, indexes work better with sortable data. Luckily, there's a new type of UUID. It's called UUID V7, and it's a sortable value because it has a time component. You can create one in .NET 9 via Guid.CreateVersion7. It's also natively supported in Postgres 18 (coming out soon). Would you consider using it? --- Sign up for the .NET Weekly with 72K+ other engineers, and get a free Clean Architecture template: https://lnkd.in/ekMyTe3N
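For intuition, here is a rough Python illustration of the UUIDv7 bit layout from RFC 9562 (48-bit millisecond timestamp, then version, then random bits). It is only meant to show why consecutive values sort by creation time; in practice you would use Guid.CreateVersion7 in .NET or the database's native generator.

```python
import os
import time
import uuid

def uuid7_like() -> uuid.UUID:
    """Build a UUIDv7-style value: 48-bit Unix ms timestamp, version 7, then random bits."""
    ts_ms = int(time.time() * 1000) & ((1 << 48) - 1)                 # time-ordered prefix
    rand_a = int.from_bytes(os.urandom(2), "big") & 0x0FFF            # 12 random bits
    rand_b = int.from_bytes(os.urandom(8), "big") & ((1 << 62) - 1)   # 62 random bits
    value = (ts_ms << 80) | (0x7 << 76) | (rand_a << 64) | (0b10 << 62) | rand_b
    return uuid.UUID(int=value)

a, b = uuid7_like(), uuid7_like()
# Values created later compare greater; values from the same millisecond share the time prefix,
# which is what keeps B-tree indexes appending near the right-hand side instead of fragmenting.
assert a < b or a.bytes[:6] == b.bytes[:6]
```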
437
48 comments