A journey from batch to streaming using Kafka Streams

Zaal 4

11:40 - 12:30


Liander, a Dutch Distribution System Operator (DSO; in Dutch, a netbeheerder), is building a new system to handle millions of smart meter data requests.
In this talk we dive into the details of this new system, which has a planned go-live date of September 30th. The system is built around a central Kafka bus, with Kafka Streams applications, wrapped in Spring Boot, doing the data processing. Avro is used as the message format, with a central schema registry. The market interface is, and for the foreseeable future will remain, a SOAP batch interface; Spring Boot reactive web applications transform these existing batch endpoints into internal streams. Kafka Connect moves data to Elasticsearch, which serves both as the audit store and as the data source for the client GUIs. We will share our learnings and present samples of complex Kafka Streams joins and Kafka Connect configurations with Connect Transformations.
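To give a flavour of the kind of Kafka Streams joins the talk covers, here is a minimal enrichment topology: a KStream of readings joined against a KTable of master data. The topic names (`meter-readings`, `meter-master`, `enriched-readings`) and String values are illustrative placeholders, not the production topology, which uses Avro with the schema registry.

```java
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;

public class EnrichReadings {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        // Stream of raw smart meter readings, keyed by meter id.
        KStream<String, String> readings = builder.stream("meter-readings");

        // Changelog-backed table of meter master data, also keyed by meter id.
        KTable<String, String> meters = builder.table("meter-master");

        // Enrich each reading with the latest master data for its meter.
        // The join is co-partitioned on the meter id key.
        readings
            .join(meters, (reading, meter) -> meter + "|" + reading)
            .to("enriched-readings");

        // builder.build() yields the Topology handed to new KafkaStreams(...).
        System.out.println(builder.build().describe());
    }
}
```

Because the KTable side is backed by a local state store (RocksDB), each worker can perform this join without a remote lookup per record.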
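As a taste of the Kafka Connect side, the following is a sketch of an Elasticsearch sink connector with a single Connect Transformation stamping each record with an ingest timestamp. The connector name, topic, URL, and field name are hypothetical examples, not our production configuration.

```json
{
  "name": "audit-es-sink",
  "config": {
    "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
    "topics": "enriched-readings",
    "connection.url": "http://elasticsearch:9200",
    "key.ignore": "false",
    "transforms": "addTimestamp",
    "transforms.addTimestamp.type": "org.apache.kafka.connect.transforms.InsertField$Value",
    "transforms.addTimestamp.timestamp.field": "ingested_at"
  }
}
```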
The whole system is deployed on Kubernetes. We will explain why Kubernetes is such a good fit for Kafka Streams applications. Kubernetes StatefulSets play a special role: they make it much easier to deploy ZooKeeper, Kafka, and Elasticsearch, as well as the Kafka Streams workers and their local RocksDB caches; in short, anything that needs persistent volumes.
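The StatefulSet pattern for a Kafka Streams worker can be sketched as below: each replica gets a stable identity and its own persistent volume for the RocksDB state directory, so state survives pod restarts. The names, image, and sizes are placeholders, not the production manifest.

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: streams-worker
spec:
  serviceName: streams-worker
  replicas: 3
  selector:
    matchLabels:
      app: streams-worker
  template:
    metadata:
      labels:
        app: streams-worker
    spec:
      containers:
        - name: worker
          image: example/streams-worker:latest   # placeholder image
          volumeMounts:
            - name: state
              mountPath: /var/lib/kafka-streams  # RocksDB state dir
  volumeClaimTemplates:
    - metadata:
        name: state
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```

A restarted pod reattaches to its existing volume, so the worker can reuse its local state store instead of rebuilding it from the changelog topic.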