14 Connecting to systems with Alpakka


This chapter covers

  • Reading from and writing to Kafka
  • Reading from and writing to CSV

Alpakka is a project that implements multiple connectors for interacting with different technologies in a reactive streaming fashion, following the principles of the Reactive Streams initiative (www.reactive-streams.org). For example, you can read from a Kafka topic as an asynchronous stream with non-blocking backpressure.
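To make that concrete, here is a minimal sketch of consuming from a topic with Alpakka Kafka's Consumer.plainSource. The bootstrap server, group id, topic name, and object name are placeholders for illustration, not values taken from this chapter's listings.

import akka.actor.ActorSystem
import akka.kafka.{ConsumerSettings, Subscriptions}
import akka.kafka.scaladsl.Consumer
import akka.stream.scaladsl.Sink
import org.apache.kafka.clients.consumer.ConsumerConfig
import org.apache.kafka.common.serialization.StringDeserializer

object PlainConsumerSketch extends App {
  implicit val system: ActorSystem = ActorSystem("plain-consumer")

  // Placeholder connection details; adjust to your broker, group, and topic.
  val settings =
    ConsumerSettings(system, new StringDeserializer, new StringDeserializer)
      .withBootstrapServers("localhost:9092")
      .withGroupId("group-1")
      .withProperty(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest")

  // The source emits records as the topic produces them; demand from the sink
  // drives polling, so backpressure propagates without blocking any thread.
  Consumer
    .plainSource(settings, Subscriptions.topics("test-topic"))
    .map(_.value)
    .runWith(Sink.foreach(println))
}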

This chapter covers two of the most commonly used connectors, Kafka and CSV. The CSV Alpakka connector can be classified as a finite stream: it completes once the file has been read. The Kafka Alpakka connector, on the other hand, is an example of an infinite stream: it keeps an open connection to a topic that can produce records at any time.
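As a sketch of the finite case, the snippet below reads a file with FileIO and parses it with the Alpakka CSV connector; the stream completes as soon as the file is exhausted. The file name and object name are placeholders, not part of this chapter's examples.

import akka.actor.ActorSystem
import akka.stream.alpakka.csv.scaladsl.CsvParsing
import akka.stream.scaladsl.{FileIO, Sink}
import java.nio.file.Paths

object CsvReaderSketch extends App {
  implicit val system: ActorSystem = ActorSystem("csv-reader")

  // FileIO.fromPath streams the file's bytes; CsvParsing.lineScanner emits one
  // List[ByteString] per CSV line. The stream completes when the file ends.
  FileIO
    .fromPath(Paths.get("data.csv")) // placeholder path
    .via(CsvParsing.lineScanner())
    .map(_.map(_.utf8String))
    .runWith(Sink.foreach(println))
}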

Alpakka Kafka has an extensive API, so most of this chapter is dedicated to it. The remainder covers CSV, which has a relatively simple API.

NOTE

The source code for this chapter is available at www.manning.com/books/akka-in-action-second-edition or https://github.com/franciscolopezsancho/akka-topics/tree/main/chapter14. You can find the contents of any snippet or listing in the .scala file with the same name as the class, object, or trait.

14.1 Alpakka Kafka

14.1.1 Consuming from Kafka in action

14.1.2 Detecting consumer failures

14.1.3 Auto-commit

14.1.4 Committable sources

14.2 Pushing to Kafka

14.2.1 At-most-once delivery guarantee

14.2.2 At-least-once delivery guarantee

14.3 Effectively-once delivery

14.4 Alpakka CSV

14.4.1 Mapping by column

14.4.2 Reading and writing with FileIO

Summary