Kafka Producer/Consumer in 30 Minutes
Kafka basics in 30 minutes: enough to understand the model, not yet enough to operate it in production.
Step 1: Run Kafka
Start a single-broker cluster with docker compose up -d using the bitnami/kafka image (a compose file appears in most tutorials).
Verify the broker is reachable: kafka-topics.sh --bootstrap-server localhost:9092 --list (recent Kafka versions require the --bootstrap-server flag).
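If you don't have a compose file handy, a minimal single-broker KRaft setup looks roughly like this. It is a sketch, not a verified file: the image tag and the exact KAFKA_CFG_* variable names are assumptions based on the bitnami/kafka image's conventions, so check its documentation before relying on it.

```yaml
# docker-compose.yml -- minimal single-broker KRaft sketch (assumptions noted above)
services:
  kafka:
    image: bitnami/kafka:latest
    ports:
      - "9092:9092"
    environment:
      # One node plays both broker and KRaft controller roles
      - KAFKA_CFG_NODE_ID=0
      - KAFKA_CFG_PROCESS_ROLES=controller,broker
      - KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=0@kafka:9093
      - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,CONTROLLER://:9093
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://localhost:9092
      - KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER
```

The advertised listener must be an address your client can reach (localhost:9092 here), or producers will connect to the bootstrap server and then fail on the redirect.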
Step 2: Producer
from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers="localhost:9092")
# send() is asynchronous; flush() blocks until buffered messages are delivered
producer.send("test", b"hello")
producer.flush()
Step 3: Consumer
from kafka import KafkaConsumer

# group_id gives the consumer committed offsets; auto_offset_reset="earliest"
# makes it read messages produced before it started
consumer = KafkaConsumer(
    "test",
    bootstrap_servers="localhost:9092",
    group_id="demo-group",
    auto_offset_reset="earliest",
)
for msg in consumer:
    print(msg.value)
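The examples above move raw bytes. In practice you usually attach serializers: kafka-python accepts value_serializer and value_deserializer callables on the producer and consumer. Here is a sketch of a JSON round-trip, exercised without a broker; the two functions are exactly the shape you would pass to KafkaProducer and KafkaConsumer.

```python
import json

# The shape you'd pass as value_serializer=... to KafkaProducer
def serialize(value):
    return json.dumps(value).encode("utf-8")

# The shape you'd pass as value_deserializer=... to KafkaConsumer
def deserialize(raw_bytes):
    return json.loads(raw_bytes.decode("utf-8"))

event = {"user": "alice", "action": "login"}
wire = serialize(event)   # bytes on the wire
assert isinstance(wire, bytes)
assert deserialize(wire) == event
```

Keeping serialization at the client edge like this means the broker stays format-agnostic: it only ever stores and ships bytes.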
Step 4: Topics and partitions
Topics are named, append-only logs. Partitions split a topic into shards that can be consumed in parallel; ordering is guaranteed only within a partition.
Within a consumer group, each partition is read by at most one consumer, so a single-partition topic caps a group at one active consumer. Create multiple partitions when you need parallelism.
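Key-based routing is what makes partitions useful: records with the same key always land on the same partition, preserving per-key order. Kafka's default partitioner hashes the key with murmur2; the sketch below uses MD5 purely to illustrate the hash-mod-N idea, so its placements will not match a real broker's.

```python
import hashlib

NUM_PARTITIONS = 3

def pick_partition(key: bytes, num_partitions: int = NUM_PARTITIONS) -> int:
    # Illustration only: real Kafka uses murmur2, not MD5.
    digest = hashlib.md5(key).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

# Same key -> same partition, every time: per-key ordering is preserved
assert pick_partition(b"user-42") == pick_partition(b"user-42")

for key in (b"user-1", b"user-2", b"user-3"):
    print(key.decode(), "-> partition", pick_partition(key))
```

This also explains the repartitioning caveat: changing the partition count changes the modulus, so existing keys may map to new partitions and per-key ordering across the change is lost.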
Antipatterns
- Single-partition topics for high-throughput workloads: only one consumer per group can read the partition, so it becomes the bottleneck.
- Consumers without a group_id: no committed offsets, so restarts re-read or skip messages and delivery semantics are undefined.
- Local Docker Kafka in production: use a managed service or a properly operated cluster.
What to do this week
Three moves. (1) Run the tutorial end-to-end on your own laptop or sandbox. (2) Apply the pattern to one production workload. (3) Document the variations you needed and share them with the team.