This article works through producing and consuming Avro-serialized data with Kafka. So that the experiment and its results are reproducible (unlike NgAgo-gDNA), the middleware and component versions used here are:
- Apache Kafka: kafka_2.11-0.10.0.1. In this version, initializing producer and consumer properties differs from earlier versions.
- kafka-clients: the Java API client, version 0.10.0.1
- Apache Avro: 1.8.1. For background on Avro serialization, see Apache Avro Serialization and Deserialization (Java Implementation)
- Java 8
The Apache Kafka messaging system is designed to transport strings, binary data, and so on, but it is friendlier to transfer object data that both the producing and consuming sides understand. So here we use an Avro schema to define the format of the data being transferred, and custom serializer and deserializer classes to convert between objects and byte arrays on the wire.
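Before diving into the Avro-based code, the object-to-byte-array contract can be illustrated with a plain-Java sketch (no Kafka, no Avro; the `User` class here is a hand-written stand-in that mirrors the schema below, and the byte layout is hand-rolled purely for illustration — the real code delegates it to Avro):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.util.Objects;

// Illustrative only: the same object <-> byte[] contract that the custom
// AvroSerializer/AvroDeserializer classes in this article fulfil.
public class SerdeSketch {

    // Mirrors user.avsc: name is a string, address is a nullable string
    static final class User {
        final String name;
        final String address;
        User(String name, String address) { this.name = name; this.address = address; }
        @Override public boolean equals(Object o) {
            if (!(o instanceof User)) return false;
            User u = (User) o;
            return name.equals(u.name) && Objects.equals(address, u.address);
        }
        @Override public int hashCode() { return Objects.hash(name, address); }
    }

    static byte[] serialize(User u) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(bos);
            out.writeUTF(u.name);
            out.writeBoolean(u.address != null);  // encodes the ["string","null"] union
            if (u.address != null) out.writeUTF(u.address);
            return bos.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    static User deserialize(byte[] bytes) {
        try {
            DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes));
            String name = in.readUTF();
            String address = in.readBoolean() ? in.readUTF() : null;
            return new User(name, address);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        User original = new User("Yanbin", "Chicago");
        User restored = deserialize(serialize(original));
        System.out.println(original.equals(restored));  // prints: true
    }
}
```

Avro buys you the same round trip without this hand-written encoding, plus a schema that both sides can validate against.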
The full walkthrough follows.
Start the Apache Kafka service locally
See Setting up a Simple Apache Kafka Distributed Message System for starting ZooKeeper and Kafka; the program creates the topic automatically when it runs. Once started, Kafka listens on local port 9092, and the program only needs to connect to that port; ZooKeeper's port 2181 can be ignored.
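For quick reference, these are the standard quickstart commands shipped with the Kafka distribution (run each in its own terminal from inside the kafka_2.11-0.10.0.1 directory):

```
bin/zookeeper-server-start.sh config/zookeeper.properties
bin/kafka-server-start.sh config/server.properties
```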
Data schema definition: user.avsc
```json
{
  "namespace": "cc.unmi.data",
  "type": "record",
  "name": "User",
  "fields": [
    {"name": "name", "type": "string"},
    {"name": "address", "type": ["string", "null"]}
  ]
}
```
Use avro-tools or avro-maven-plugin to compile the schema above into the class file cc.unmi.data.User.java. The generated file embeds the entire schema definition, so the user.avsc file is not needed at runtime. For more on generating Java from an Avro schema, see Apache Avro Serialization and Deserialization (Java Implementation).
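For reference, a typical avro-maven-plugin configuration for the pom.xml looks roughly like this (a sketch — the source and output directories are assumptions and should match your project layout):

```xml
<plugin>
  <groupId>org.apache.avro</groupId>
  <artifactId>avro-maven-plugin</artifactId>
  <version>1.8.1</version>
  <executions>
    <execution>
      <phase>generate-sources</phase>
      <goals>
        <goal>schema</goal>
      </goals>
      <configuration>
        <sourceDirectory>${project.basedir}/src/main/avro/</sourceDirectory>
        <outputDirectory>${project.basedir}/src/main/java/</outputDirectory>
      </configuration>
    </execution>
  </executions>
</plugin>
```

With this in place, `mvn generate-sources` produces cc.unmi.data.User from user.avsc.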
Create the producer: Producer
```java
package cc.unmi;

import cc.unmi.serialization.AvroSerializer;
import org.apache.avro.specific.SpecificRecordBase;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class Producer<T extends SpecificRecordBase> {

    private KafkaProducer<String, T> producer = new KafkaProducer<>(getProperties());

    public void sendData(Topic topic, T data) {
        producer.send(new ProducerRecord<>(topic.topicName, data),
                (metadata, exception) -> {
                    if (exception == null) {
                        System.out.printf("Sent user: %s \n", data);
                    } else {
                        System.out.println("data sent failed: " + exception.getMessage());
                    }
                });
    }

    private Properties getProperties() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.CLIENT_ID_CONFIG, "DemoProducer");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                AvroSerializer.class.getName());
        return props;
    }
}
```
Since every class compiled from an Avro schema extends SpecificRecordBase, the generic bound is <T extends SpecificRecordBase>. No key is set when sending messages in this experiment, so KEY_SERIALIZER_CLASS_CONFIG could be omitted. The value is handled by the custom AvroSerializer class, which is shown next.
The Avro object serializer: AvroSerializer
```java
package cc.unmi.serialization;

import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.DatumWriter;
import org.apache.avro.io.EncoderFactory;
import org.apache.avro.specific.SpecificDatumWriter;
import org.apache.avro.specific.SpecificRecordBase;
import org.apache.kafka.common.errors.SerializationException;
import org.apache.kafka.common.serialization.Serializer;

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.Map;

public class AvroSerializer<T extends SpecificRecordBase> implements Serializer<T> {

    @Override
    public void configure(Map<String, ?> configs, boolean isKey) {
    }

    @Override
    public byte[] serialize(String topic, T data) {
        DatumWriter<T> userDatumWriter = new SpecificDatumWriter<>(data.getSchema());
        ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
        BinaryEncoder binaryEncoder = EncoderFactory.get().directBinaryEncoder(outputStream, null);
        try {
            userDatumWriter.write(data, binaryEncoder);
        } catch (IOException e) {
            throw new SerializationException(e.getMessage());
        }
        return outputStream.toByteArray();
    }

    @Override
    public void close() {
    }
}
```
This class simply turns a Java object into a byte array for transport over the network. Because the serialize method handles values typed <T extends SpecificRecordBase>, the SpecificDatumWriter can be constructed directly from data.getSchema(). Deserialization, as we will see shortly, is not so easy.
Create the consumer: Consumer
```java
package cc.unmi;

import cc.unmi.serialization.AvroDeserializer;
import org.apache.avro.specific.SpecificRecordBase;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.util.Collections;
import java.util.List;
import java.util.Properties;
import java.util.stream.Collectors;
import java.util.stream.StreamSupport;

public class Consumer<T extends SpecificRecordBase> {

    private KafkaConsumer<String, T> consumer = new KafkaConsumer<>(getProperties());

    public List<T> receive(Topic topic) {
//        TopicPartition partition = new TopicPartition(topic.topicName, 0);
        consumer.subscribe(Collections.singletonList(topic.topicName));
//        consumer.assign(Collections.singletonList(partition));
        ConsumerRecords<String, T> records = consumer.poll(10);
        return StreamSupport.stream(records.spliterator(), false)
                .map(ConsumerRecord::value).collect(Collectors.toList());
    }

    private Properties getProperties() {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "DemoConsumer");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                AvroDeserializer.class.getName());
        return props;
    }
}
```
Likewise, KEY_DESERIALIZER_CLASS_CONFIG could be omitted in this experiment. GROUP_ID_CONFIG governs delivery when multiple consumers read one Kafka message: consumers with different group ids each receive the message, while among consumers sharing a group id, only one member of the group receives it. The custom deserializer class AvroDeserializer is used here as well.
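That group-id rule can be modeled without Kafka at all. Below is a toy sketch (hypothetical names; real Kafka assigns partitions to group members rather than picking the first one) that fans a message out to one consumer per group:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Toy model of the delivery rule: every consumer group receives each
// message, but only one consumer inside a given group does.
public class GroupDemo {

    // Register a consumer under a group id.
    static void subscribe(Map<String, List<String>> groups, String groupId, String consumer) {
        groups.computeIfAbsent(groupId, k -> new ArrayList<>()).add(consumer);
    }

    // Which consumers receive a message: exactly one member per group.
    // Real Kafka picks by partition assignment; here we just take the first.
    static List<String> deliver(Map<String, List<String>> groups) {
        List<String> receivers = new ArrayList<>();
        for (List<String> members : groups.values()) {
            receivers.add(members.get(0));
        }
        return receivers;
    }

    public static void main(String[] args) {
        Map<String, List<String>> groups = new LinkedHashMap<>();
        subscribe(groups, "DemoConsumer", "c1");
        subscribe(groups, "DemoConsumer", "c2"); // same group id as c1
        subscribe(groups, "OtherGroup", "c3");   // different group id
        System.out.println(deliver(groups));     // prints: [c1, c3]
    }
}
```

c1 and c2 share the group id "DemoConsumer", so only one of them gets the message; c3 is in its own group and always gets a copy.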
The Avro object deserializer: AvroDeserializer
```java
package cc.unmi.serialization;

import cc.unmi.Topic;
import org.apache.avro.io.BinaryDecoder;
import org.apache.avro.io.DatumReader;
import org.apache.avro.io.DecoderFactory;
import org.apache.avro.specific.SpecificDatumReader;
import org.apache.avro.specific.SpecificRecordBase;
import org.apache.kafka.common.errors.SerializationException;
import org.apache.kafka.common.serialization.Deserializer;

import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.util.Map;

public class AvroDeserializer<T extends SpecificRecordBase> implements Deserializer<T> {

    @Override
    public void configure(Map<String, ?> configs, boolean isKey) {
    }

    @Override
    public T deserialize(String topic, byte[] data) {
        // The schema comes from the Topic enum, since T.class is unavailable at runtime
        DatumReader<T> userDatumReader = new SpecificDatumReader<>(Topic.matchFor(topic).topicType.getSchema());
        BinaryDecoder binaryDecoder = DecoderFactory.get().directBinaryDecoder(new ByteArrayInputStream(data), null);
        try {
            return userDatumReader.read(null, binaryDecoder);
        } catch (IOException e) {
            // Kafka's own SerializationException, rather than the JDK-internal
            // com.sun.xml.internal...DeserializationException originally imported here
            throw new SerializationException(e.getMessage());
        }
    }

    @Override
    public void close() {
    }
}
```
This turns a byte array back into an Avro object. Although the generic bound is <T extends SpecificRecordBase>, the code has no way to obtain T.class from T because of type erasure (see http://www.blogjava.net/calvin/archive/2006/04/28/43830.html). So we define a dedicated Topic enum that associates each topic with the concrete type of data it carries; above, the schema is obtained via Topic.matchFor(topic).topicType.getSchema(). This also means each topic must serve exactly one type.
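The erasure constraint, and the enum-lookup workaround the article uses, can be shown with plain Java (no Avro; `Holder`, `Registry`, and the prototype values are hypothetical names for illustration):

```java
// Demonstrates why the deserializer cannot recover T at runtime,
// and the name -> prototype mapping used as a workaround.
public class ErasureDemo {

    // Generic holder: at runtime there is no way to ask "what is T?"
    static class Holder<T> {
        T value;
        // Cannot write: T.class, new T(), or (x instanceof T) -- all erased.
    }

    // Workaround in the spirit of the article's Topic enum: map an external
    // key (the topic name) to a prototype instance whose class we CAN query.
    enum Registry {
        USER("user-info-topic", "prototype-user");

        final String topicName;
        final Object prototype;

        Registry(String topicName, Object prototype) {
            this.topicName = topicName;
            this.prototype = prototype;
        }

        static Registry matchFor(String name) {
            for (Registry r : values()) {
                if (r.topicName.equals(name)) return r;
            }
            return null;
        }
    }

    public static void main(String[] args) {
        // The prototype instance carries the runtime type that T cannot:
        Registry entry = Registry.matchFor("user-info-topic");
        System.out.println(entry.prototype.getClass().getSimpleName()); // prints: String
    }
}
```

In the real code the prototype is a generated Avro record, so `topicType.getSchema()` yields the schema the SpecificDatumReader needs.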
Here is the Topic enum definition:
```java
package cc.unmi;

import cc.unmi.data.User;
import org.apache.avro.specific.SpecificRecordBase;

import java.util.EnumSet;

public enum Topic {

    USER("user-info-topic", new User());

    public final String topicName;
    public final SpecificRecordBase topicType;

    Topic(String topicName, SpecificRecordBase topicType) {
        this.topicName = topicName;
        this.topicType = topicType;
    }

    public static Topic matchFor(String topicName) {
        return EnumSet.allOf(Topic.class).stream()
                .filter(topic -> topic.topicName.equals(topicName))
                .findFirst()
                .orElse(null);
    }
}
```
Running the demo: KafkaDemo
```java
package cc.unmi;

import cc.unmi.data.User;

import java.util.List;
import java.util.Random;
import java.util.Scanner;

public class KafkaDemo {

    public static void main(String[] args) {
        Producer<User> producer = new Producer<>();
        Consumer<User> consumer = new Consumer<>();

        System.out.println("Please input 'send', 'receive', or 'exit'");
        Scanner scanner = new Scanner(System.in);
        while (scanner.hasNext()) {
            String input = scanner.next();
            switch (input) {
                case "send":
                    producer.sendData(Topic.USER, new User("Yanbin", "Address: " + new Random().nextInt()));
                    break;
                case "receive":
                    List<User> users = consumer.receive(Topic.USER);
                    if (users.isEmpty()) {
                        System.out.println("Received nothing");
                    } else {
                        users.forEach(user -> System.out.println("Received user: " + user));
                    }
                    break;
                case "exit":
                    System.exit(0);
                    break;
                default:
                    System.out.println("Please input 'send', 'receive', or 'exit'");
            }
        }
    }
}
```
Now for the exciting part: type send to publish a message and receive to fetch messages. In practice, the consumer side is usually implemented as a listener that is notified automatically when messages arrive, but underneath it is still a polling loop, just like here.
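That "listener is really a poll loop" point can be sketched with the standard library alone (hypothetical names; a BlockingQueue stands in for the broker and `pollOnce` for consumer.poll):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// A "listener" built from a polling loop running on its own thread.
public class ListenerSketch {

    // Analogous to consumer.poll(timeout): returns a message or null.
    static String pollOnce(BlockingQueue<String> broker, long timeoutMs) {
        try {
            return broker.poll(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return null;
        }
    }

    public static void main(String[] args) throws Exception {
        BlockingQueue<String> broker = new LinkedBlockingQueue<>();
        CountDownLatch delivered = new CountDownLatch(1);

        Thread listener = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                String msg = pollOnce(broker, 10);
                if (msg != null) {                    // the "automatic notification"
                    System.out.println("Received: " + msg);
                    delivered.countDown();
                }
            }
        });
        listener.start();

        broker.put("hello");                          // analogous to producer.send
        delivered.await(1, TimeUnit.SECONDS);
        listener.interrupt();
    }
}
```

From the caller's point of view this feels event-driven, yet the background thread is doing exactly what our receive method does: polling with a timeout.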
The complete project has been uploaded to GitHub at https://github.com/yabqiu/kafka-avro-demo. It is a Maven project, so it can be run with Maven:
mvn exec:java -Dexec.mainClass=cc.unmi.KafkaDemo
The result looks like this (screenshot of the send/receive session):

The message Address: -2066758714 already existed in Kafka before the program started; the others are received right after send, sometimes with a slight delay.
Reference links:
- KafkaProducer API
- KafkaConsumer API
- Introducing the Kafka Consumer: Getting Started with the New Apache Kafka 0.9 Consumer Client
- A Kafka Java example
- A Kafka + Avro demo
Original post: https://yanbin.blog/kafka-produce-consume-avro-data/, from 隔叶黄莺 Yanbin Blog
[Copyright notice] This work is licensed under Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0).