Introduction To Streaming Data and Stream Processing with Apache Kafka
• Everything in the company is a real-time stream
• > 1.2 trillion messages written per day
• > 3.4 trillion messages read per day
• ~ 1 PB of stream data
• Thousands of engineers
• Tens of thousands of producer processes
• Used as commit log for distributed database
Coming Up Next
Date   Title                                                           Speaker
10/6   Deep Dive Into Apache Kafka                                     Jun Rao
10/27  Data Integration with Kafka                                     Gwen Shapira
11/17  Demystifying Stream Processing                                  Neha Narkhede
12/1   A Practical Guide To Selecting A Stream Processing Technology   Michael Noll
12/15  Streaming in Practice: Putting Apache Kafka in Production       Roger Hoover


Editor's Notes

  • #2: Hi, I’m Jay Kreps, one of the creators of Apache Kafka and one of the co-founders of Confluent, the company driving Kafka development as well as developing Confluent Platform, the leading Kafka distribution. Welcome to our Apache Kafka Online Talk Series. This first talk introduces Kafka and the problems it was built to solve. The series is meant to introduce you to the world of Apache Kafka and stream processing. Along the way I’ll give pointers to areas we are going to dive into in more depth in upcoming talks.
  • #3: Rather than starting off by diving into a bunch of Kafka features, let me instead introduce the problem area. What is the problem we have today that needs a new thing? To show that, let me start by just laying out the architecture most companies have.
  • #4: Most applications are request/response (client/server): HTTP services, OLTP databases, key/value stores. You send a request, they send back a response. These do little bits of work quickly. UI rendering is inherently this way: the client sends a request to fetch the data to display the UI. Inherently synchronous: you can’t display the UI until you get back the response with the data.
  • #5: The second big area is batch processing. This is the domain of the data warehouse and Hadoop clusters. Cron jobs. These are usually once-a-day things, though you can potentially run them a little quicker. So this is the architecture we have today. What are the problems?
  • #6: How does data get around?
  • #7: Database data, log data. Lots of systems: databases, specialized systems like search, caches. Business units. N^2 connections. Tons of glue code to stitch it all together.
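To make the N^2 point concrete (the numbers here are purely illustrative): if 10 systems each need a point-to-point pipeline to the other 9, that is on the order of 10 × 9 = 90 pipelines to build and maintain. Routing everything through one central streaming platform needs only about 2 × 10 = 20 integration points, one in and one out per system.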
  • #8: Request/response is inherently synchronous. Hard to scale.
  • #9: Either big apps with huge amounts of work per request, or lots of little microservices; still, all that work is synchronous. It has to be synchronous: say you make an HTTP request but don’t wait for the response; then you don’t know whether it actually happened or not.
  • #10: Example: retail Sales are synchronous—you give me money and I give you a product (or commit to ship you a product) and give you a receipt or confirmation number. But a lot of the backend isn’t synchronous—I need to process shipments of new products, adjust prices, do inventory adjustments, re-order products, do things like analytics. Most of these don’t make sense to do in the process of a single sale—they are asynchronous. If something gets borked in my inventory reordering process I don’t want to block sales.
  • #11: These are the two problems that data streams can solve: Data pipeline sprawl Asynchronous services
  • #12: This is what that architecture looks like relying on streaming. Data pipelines go to the streaming platform, no longer N^2 separate pipelines. Async apps can feed off of this as well. Obviously that streaming box is going to be filled by Kafka. Now let’s dive into these two areas.
  • #14: Companies are real-time not batch
  • #15: Event = something that happened = a record. A product was viewed, a sale occurred, a database was updated, etc. It’s a piece of data, a fact. But it can also be a trigger or command (a sale occurred, so now let’s reorder). Not specific to a particular system or service, just a fact. Let’s look at a few concrete examples to get a feel for it, first some simple ones, then something a bit more complex.
  • #16: The event is “a web page was viewed” or “an error occurred” or whatever you’re logging. In fact the “log file” is totally incidental to the data being recorded; the data in the log is clearly a sequence of events.
  • #17: Sensors can also be represented as event streams. The event is something like “the value of this sensor is X”. This covers a lot of instrumentation of the world: IoT use cases, logistics and vehicle positions, or even taking readings of metrics from monitoring counters or gauges in your apps. All these sensors can be captured into a stream of events. Okay, those were the easy and obvious ones; now let’s look at something more surprising.
  • #18: Databases can be thought of as streams of events! This isn’t obvious, but it’s really important because most valuable data is stored in databases. What do I mean that you can think of a database as a stream of events? Well what’s the most common data representation in a database? Table/Stream duality.
  • #19: It’s a table. A table looks something like this, a rectangle with columns, right? In my simplified table I am just going to have two columns, a primary key and a value; both of these could be made up of multiple columns in real life. But this representation of a table is a little oversimplified, because tables are always being updated (that is the whole point of a database, after all). This table is just static. How can I represent a table that is getting updated, like our sensors or log files are?
  • #20: Well, the easy way to do it would be to just dump out a full copy of the table periodically. In this picture I’ve represented a sequence of snapshots of the table as time goes by.
  • #21: Now it’s a bit inefficient to take a full dump of the table over and over, right? Probably, if your tables are like mine, not all your rows are getting updated all the time. An alternative that might be a bit more efficient would be to just dump out the rows that changed. This would give me a sequence of “diffs”. Now imagine I increase the frequency of this process to make each diff as small as possible. Clearly the smallest possible diff would be a single changed row. Here I’ve listed the sequence of single changed rows, each represented by a single PUT operation (an update or insert). The key thing is that if I have this sequence of changes it actually represents all the states of my table. And, of course, that sequence of updates is a stream of events. The event is something like “the value of this primary key is now X”.
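To make the table/stream duality concrete, here is a minimal Java sketch (the keys and values are invented for illustration): replaying a changelog of PUTs, oldest first, reconstructs the table’s current state.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class TableStreamDuality {
    // One change event: "the value of this primary key is now X".
    record Put(String key, String value) {}

    public static void main(String[] args) {
        // The changelog stream: each PUT is an insert or update.
        List<Put> changelog = List.of(
                new Put("user-1", "london"),
                new Put("user-2", "paris"),
                new Put("user-1", "berlin")); // a later update to the same key wins

        // Replaying the changelog in order rebuilds the latest table state.
        Map<String, String> table = new HashMap<>();
        for (Put put : changelog) {
            table.put(put.key(), put.value());
        }
        System.out.println(table); // e.g. {user-1=berlin, user-2=paris}
    }
}
```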
  • #22: Now I can represent all these different data pipelines as event streams. I can capture changes from a data system or application, and take that stream and feed it into another system.
  • #23: That is going to be the key to solving my pipeline sprawl problem. Instead of having N^2 different pipelines, one for each pair of systems, I am going to have a central place that hosts all these event streams: the streaming platform. This is a central way that all these systems and applications can plug in to get the streams they need. So I can capture streams from databases and feed them into DWH, Hadoop, monitoring, and analytics systems. The key advantage is that there is a single integration point for each thing that wants data. Now obviously to make this work I’m going to need to ensure I have met the reliability, scalability, and latency guarantees for each of these systems.
  • #24: Let’s dive into an example to see this model of data in action. Say we have a web app that records events about a product being viewed, and say we use Hadoop for analytics and want to get this data there. In this model the web app publishes its stream of clicks to our streaming platform and Hadoop loads these. With only two systems, the only real advantage is some decoupling: the web app isn’t tied to the particular technology we are using for analytics, and the Hadoop cluster doesn’t need to be up all the time. But the bigger advantage is that additional uses of this data become really easy.
  • #25: For example if other apps can also generate product view events, they just publish these, Hadoop doesn’t need to know there are more publishers of this type of event.
  • #26: And if additional use cases arise they can be added as well. In this example there turn out to be a number of other uses for product views: analytics, recommendations, security monitoring, etc. These can all just subscribe without any need to go back and modify any of the apps that generate product views.
  • #27: Okay, so we talked about how streams can be used to solve the data pipeline sprawl problem. Now let’s talk about the solution to the second problem: too much synchrony. This comes from being able to process real-time streams of data, and that is called stream processing. So what is stream processing?
  • #28: Best way to think about it is as a third paradigm for programming. We talked about request/response and batch processing. Let’s dive into these a bit and use them to motivate stream processing.
  • #29: HTTP/REST. All databases. Runs all the time. Each request is totally independent: no real ordering. You can fail individual requests if you want. Very simple! About the future!
  • #30: “Ed, the MapReduce job never finishes if you watch it like that.” A job kicks off at a certain time. Cron! It processes all the input and produces all the output. Data is usually static. Hadoop! DWH, JCL. Archaic but powerful. Can do analytics! Complex algorithms! Can also be really efficient! Inherently high latency.
  • #31: Generalizes request/response and batch. A program takes some inputs and produces some outputs. Could be all the inputs; could be one at a time. Runs continuously, forever!
  • #32: Basically a service that processes, reacts to, or transforms streams of events. Asynchronous so it allows us to decouple work from our request/response services.
  • #33: Many things are naturally thought of as stream processing. Walmart blog.
  • #34: Now we’ve talked about these two motivations for streams: solving pipeline sprawl and asynchronous stream processing. It won’t surprise anyone that when I talk about this streaming platform that enables these pipelines and processing, I am talking about Apache Kafka.
  • #35: So what is Kafka? It’s a streaming platform. It lets you publish and subscribe to streams of data, stores them reliably, and lets you process them in real time. The second half of this talk will dive into Apache Kafka and cover how it acts as a streaming platform and lets you build real-time streaming pipelines and do stream processing.
  • #36: It’s widely used and in production at thousands of companies. Let’s walk through the basics of Kafka and understand how it acts as a streaming platform.
  • #37: Event = record = message. A record has a timestamp, an optional key, and a value. The key is used for partitioning; the timestamp is used for retention and processing.
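As a concrete illustration of this record anatomy, here is a small sketch using the Java client’s ProducerRecord; the topic, key, and value names are made up:

```java
import org.apache.kafka.clients.producer.ProducerRecord;

public class RecordAnatomy {
    public static void main(String[] args) {
        ProducerRecord<String, String> record = new ProducerRecord<>(
                "product-views",             // topic (assumed name)
                null,                        // partition: null lets the key decide
                System.currentTimeMillis(),  // timestamp: used for retention and processing
                "product-42",                // key: same key -> same partition, per-key ordering
                "viewed by user-7");         // value: the event payload
        System.out.println(record);
    }
}
```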
  • #38: Not an Apache web server log; something different: a commit log. Stolen from distributed database internals. The key abstraction for systems, real-time processing, and data integration. A formalization of a stream. The reader controls its own progress, which unifies batch and real-time.
  • #39: Relate to pub/sub
  • #40: The world is processes/threads: a total order within each one, but no order between them.
  • #41: Four APIs to read and write streams of events. The first two are easy: the producer and consumer APIs allow applications to write to and read from Kafka. The Connect API allows building connectors that integrate Kafka with existing systems or applications. The Streams API allows stream processing on top of Kafka. We’ll go through each of these briefly.
  • #42: The producer writes (publishes) streams of events to Kafka to be stored.
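A minimal producer sketch with the Kafka Java client, assuming a broker on localhost:9092 and a made-up topic name:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed local broker
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Publish one event; send() is asynchronous and returns a Future.
            producer.send(new ProducerRecord<>("product-views", "product-42", "viewed"));
        } // close() flushes any buffered records
    }
}
```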
  • #43: Consumer reads (subscribes) to streams of events from topics.
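And a matching consumer sketch (broker address and names are again assumed). The group.id matters for the next two slides: consumers sharing one split the topic’s partitions between them.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed local broker
        props.put("group.id", "analytics"); // processes with the same group.id share the load
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("product-views"));
            while (true) { // read the stream continuously
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("%s -> %s%n", record.key(), record.value());
                }
            }
        }
    }
}
```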
  • #44: Kafka topics are always multi-reader and can be scaled out. So in this example I have two logical consumers: A and B. Each of these logical consumers is made up of multiple physical processes, potentially running on different machines. Two processes for A and three for B. These groups are dynamic: processes can join a group or leave a group at any time and Kafka will balance the load over the new set of processes.
  • #45: So for example if one of the B processes dies, the data being consumed by that process will be transitioned to the remaining B processes automatically. These groups are a fundamental abstraction in Kafka and they support not only groups of consumers, but also groups of connectors or stream processors.
  • #46: In our streaming platform vision we had a number of apps or data systems that were integrated with Kafka, either loading streams of data out of Kafka or publishing streams of data into Kafka. If these systems are built to directly integrate with Kafka they can use the producer and consumer APIs. But many apps and databases simply have read and write APIs; they don’t know anything about Kafka. How can we make integration with this kind of existing app or system easy? After all, these systems don’t know that they need to push data into Kafka or pull data out. The answer is the Connect APIs.
  • #47: These APIs allow writing reusable connectors to Kafka. A source is a connector that reads data out of an external system and publishes it to Kafka. A sink is a connector that pulls data out of Kafka and writes it to an external system. Of course you could build this integration yourself using the producer and consumer APIs, so how is this better?
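For a feel of what a connector looks like, here is a sketch of a standalone-mode source connector config using the FileStreamSource connector that ships with Kafka; the file path and topic name are invented:

```properties
# file-source.properties: tail a file and publish each line as an event
name=local-file-source
connector.class=org.apache.kafka.connect.file.FileStreamSourceConnector
tasks.max=1
file=/var/log/app/events.log
topic=app-events
```

You would run this with Kafka’s connect-standalone script alongside a worker config; no bespoke producer code is required.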
  • #48: REST APIs for management. A few examples help illustrate this.
  • #52: We’ll dive into Kafka Connect in more detail in the third installment of this talk series, which goes far deeper into the practice of building streaming pipelines with Kafka.
  • #53: The final API for Kafka is the Streams API. This API lets you build real-time stream processing on top of Kafka. These stream processors take input from Kafka topics and either react to the input or transform it into output written to output topics.
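Here is a sketch of such a processor with the Kafka Streams Java library; the topic names, application id, and the filter/transform logic are all made up for illustration:

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class StreamsSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "view-filter"); // assumed app id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        // Read an input topic, transform it, write an output topic.
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> views = builder.stream("product-views");
        views.filter((key, value) -> value != null && !value.isEmpty()) // drop empty events
             .mapValues(String::toUpperCase)                            // an arbitrary transform
             .to("product-views-clean");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start(); // runs continuously until closed
    }
}
```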
  • #54: So in effect a stream processing app is basically just some code that consumes input and produces output. So why not just use the producer and consumer APIs? Well, it turns out there are some hard parts to doing real-time stream processing.
  • #58: Add screenshot example
  • #59: Add screenshot example
  • #62: Companies == streams. What does a retail store do? Streams in retail: sales, shipments and logistics, pricing, re-ordering, analytics, fraud and theft.
  • #63: Table/Stream duality
  • #67: One thing you might be thinking is that this streaming vision isn’t really different from existing technology like enterprise messaging systems or enterprise service buses.
  • #68: So I thought it might be worth giving the quick CliffsNotes version of how Kafka and modern stream processing technologies compare to previous generations of systems. For those really interested in this question, we’re putting together a white paper that gives a much more detailed answer. But for those who just want the short version, I think there are three key differences.
  • #69: The richness of the stream processing capabilities is a major advance over previous generations of technology. The other two differences really come from Kafka being a modern distributed system: it scales horizontally on commodity machines, and it gives strong guarantees for data. Let’s dive into these two a little bit.
  • #70: So we’ve talked about the APIs and abstractions; in the next few slides I’ll give a preview of Kafka as a data system, the guarantees and capabilities it has. Jun, my co-founder, will be doing a much deeper dive into this area in the next talk in this series, so if you want to learn more about how Kafka works, that is the thing to see. But I’ll give a quick walkthrough of what Kafka provides. Each of these characteristics is really essential to its usage as a “universal data pipeline” and processing technology.
  • #71: First, it scales well and cheaply. You can do hundreds of MB/sec of writes per server and can have many servers. Kafka doesn’t get slower as you store more data in it; in this respect it performs a lot like a distributed file system. This is very different from existing messaging systems. Without this, a lot of the “big data” workloads Kafka gets used for, which often have very high-volume data streams, would not be possible or feasible. This scalability is also really important for centralizing a lot of data streams in the same place; if that didn’t scale well it just wouldn’t be practical.
  • #72: Next, Kafka provides strong guarantees for data written to the cluster. Writes are replicated across multiple machines for fault tolerance, and we acknowledge the write back to the client. All data is persisted to the filesystem. And writes to the Kafka cluster are strongly ordered. This is another difference from traditional messaging systems; they usually do a poor job of supporting strong ordering of updates with more than a single consumer.
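As an illustration of asking for the strongest of these guarantees from the client side, the following settings could be added to the Properties of the producer sketch above (the config keys are the standard client ones; the trade-off commentary is mine): they trade a little latency for durability.

```java
// Added to the Properties of the earlier producer sketch:
props.put("acks", "all");                 // wait until all in-sync replicas have the write
props.put("enable.idempotence", "true");  // retries won't duplicate or reorder records
```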
  • #73: Works as a cluster. You can replace machines without bringing down the cluster; failures are handled transparently. Data is not lost if a machine is destroyed. It can scale elastically as usage grows.