Preface
This week I have been studying Flink and came across an example that reads and writes Kafka messages, so I tried it out myself. The original example used a plain Java main method without Spring Boot, but in real projects we do use Spring Boot, so I strengthened it a bit, got the whole flow working end to end, and wrote the process down here.
Preparation
First we stand up a Kafka service with Docker, following the official guide:
https://developer.confluent.io/tutorials/kafka-console-consumer-producer-basics/kafka.html
The key piece is the docker-compose.yml file:
---
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:7.3.0
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000

  broker:
    image: confluentinc/cp-kafka:7.3.0
    hostname: broker
    container_name: broker
    depends_on:
      - zookeeper
    ports:
      - "29092:29092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
docker compose up -d
This single command brings the Kafka Docker environment up. Next, create a topic named flink.kafka.streaming.source with the following command:
docker exec -t broker kafka-topics --create --topic flink.kafka.streaming.source --bootstrap-server broker:9092
Then this command drops you into a shell inside the Kafka broker container:
docker exec -it broker bash
Note that the official example omits -it; without it the command did not attach to broker's shell, so I added it here.
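Once inside the container you can, optionally, watch the topic with the console consumer bundled in the cp-kafka image to verify later on that messages are arriving:
kafka-console-consumer --topic flink.kafka.streaming.source --bootstrap-server broker:9092 --from-beginning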
As for Flink itself, we plan to run it directly from the IDE for now; no cluster has been set up, since the goal is just to get a first feel for it.
Development
First, the dependencies to bring in via the POM file:
<properties>
    <jdk.version>1.8</jdk.version>
    <maven.compiler.source>8</maven.compiler.source>
    <maven.compiler.target>8</maven.compiler.target>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <spring-boot.version>2.7.7</spring-boot.version>
    <flink.version>1.16.0</flink.version>
</properties>

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-dependencies</artifactId>
            <version>${spring-boot.version}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter</artifactId>
    </dependency>
    <dependency>
        <groupId>org.projectlombok</groupId>
        <artifactId>lombok</artifactId>
        <optional>true</optional>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-java</artifactId>
        <version>${flink.version}</version>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-clients</artifactId>
        <version>${flink.version}</version>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-streaming-java</artifactId>
        <version>${flink.version}</version>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-connector-kafka</artifactId>
        <version>${flink.version}</version>
        <scope>provided</scope>
    </dependency>
</dependencies>
We use Java 8 here. I originally wanted to use Spring Boot 3, but Spring Boot 3 requires at least Java 17, while Flink currently supports Java 8 and Java 11, so we develop with Spring Boot 2 on Java 8.
spring-boot-starter: this is just a command-line program, so this starter is enough.
lombok: used to define the model classes (a sketch of the AuditTrail model is shown below).
flink-java, flink-clients, flink-streaming-java: the core Flink components. A missing flink-clients does not fail compilation, but at runtime you get java.lang.IllegalStateException: No ExecutorFactory found to execute the application.
flink-connector-kafka: used to connect to Kafka.
The Flink dependencies above are declared with provided scope, so when running the application directly from the IDE they are not on the classpath. Checking "Add dependencies with provided scope to classpath" in the run configuration solves this problem.
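The main program below constructs an AuditTrail model object from each text record. That class is not reproduced in this post (the full code is in the repository linked at the end), but a minimal lombok-based sketch could look like the following; the field names and the comma-separated record format are my assumptions, not necessarily the repository's exact layout:

import lombok.Data;

// Minimal sketch of the model; the fields and record format are assumptions.
@Data
public class AuditTrail {
    private String user;
    private String operation;
    private int duration;

    // Parse a text record such as "user1,login,5"
    public AuditTrail(String record) {
        String[] parts = record.split(",");
        this.user = parts[0];
        this.operation = parts[1];
        this.duration = Integer.parseInt(parts[2]);
    }
}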
Main Code
import java.util.Properties;

import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
import org.springframework.boot.ApplicationArguments;
import org.springframework.boot.ApplicationRunner;
import org.springframework.stereotype.Component;

import lombok.extern.slf4j.Slf4j;

@Component
@Slf4j
public class KafkaRunner implements ApplicationRunner {

    @Override
    public void run(ApplicationArguments args) throws Exception {
        try {
            // Set up the streaming execution environment.
            final StreamExecutionEnvironment streamEnv =
                    StreamExecutionEnvironment.getExecutionEnvironment();

            // Set connection properties to the Kafka cluster.
            Properties properties = new Properties();
            properties.setProperty("bootstrap.servers", "localhost:29092");
            properties.setProperty("group.id", "flink.learn.realtime");

            // Set up a Kafka consumer on Flink.
            FlinkKafkaConsumer<String> kafkaConsumer =
                    new FlinkKafkaConsumer<>(
                            "flink.kafka.streaming.source", // topic
                            new SimpleStringSchema(),       // schema for the data
                            properties);                    // connection properties

            // Receive only new messages.
            kafkaConsumer.setStartFromLatest();

            // Read the Kafka topic into a DataStream.
            DataStream<String> auditTrailStr = streamEnv.addSource(kafkaConsumer);

            // Convert each record to an object and sum durations per user.
            DataStream<Tuple2<String, Integer>> userCounts = auditTrailStr
                    .map(new MapFunction<String, Tuple2<String, Integer>>() {
                        @Override
                        public Tuple2<String, Integer> map(String auditStr) {
                            System.out.println("--- Received Record : " + auditStr);
                            AuditTrail at = new AuditTrail(auditStr);
                            return new Tuple2<>(at.getUser(), at.getDuration());
                        }
                    })
                    .keyBy(0) // key by user name (Tuple field 0)
                    .reduce((x, y) -> new Tuple2<>(x.f0, x.f1 + y.f1));

            // Print users and running durations.
            userCounts.print();

            // Start the Kafka data generator on a separate thread.
            System.out.println("Starting Kafka Data Generator...");
            Thread kafkaThread = new Thread(new KafkaStreamDataGenerator());
            kafkaThread.start();

            // Execute the streaming pipeline.
            streamEnv.execute("Flink Windowing Example");
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
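For completeness: Spring Boot invokes the runner above through a standard entry point after startup. A minimal sketch follows; the class name FlinkDemoApplication is hypothetical:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

// Hypothetical entry point; any standard Spring Boot application class works,
// since Spring calls every ApplicationRunner bean once the context is up.
@SpringBootApplication
public class FlinkDemoApplication {
    public static void main(String[] args) {
        SpringApplication.run(FlinkDemoApplication.class, args);
    }
}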
A brief walkthrough of the program.
DataStream<String> auditTrailStr = streamEnv.addSource(kafkaConsumer);
This wires the Kafka topic up as the source of the stream.
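A side note: FlinkKafkaConsumer has been deprecated since Flink 1.14 in favor of the KafkaSource API. A roughly equivalent sketch with the newer API would look like this:

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.datastream.DataStream;

// Equivalent source using the newer KafkaSource API (Flink 1.14+).
KafkaSource<String> source = KafkaSource.<String>builder()
        .setBootstrapServers("localhost:29092")
        .setTopics("flink.kafka.streaming.source")
        .setGroupId("flink.learn.realtime")
        .setStartingOffsets(OffsetsInitializer.latest())
        .setValueOnlyDeserializer(new SimpleStringSchema())
        .build();

DataStream<String> auditTrailStr = streamEnv.fromSource(
        source, WatermarkStrategy.noWatermarks(), "Kafka Source");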
Thread kafkaThread = new Thread(new KafkaStreamDataGenerator());
kafkaThread.start();
This starts a separate thread that keeps sending text messages into Kafka.
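KafkaStreamDataGenerator is likewise not shown in this post (it is in the repository). A minimal sketch of such a generator could look like the following, assuming the comma-separated record format used in the AuditTrail sketch above; the message contents, count, and pacing are all assumptions:

import java.util.Properties;
import java.util.Random;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

// Minimal sketch of a data generator; format and timing are assumptions.
public class KafkaStreamDataGenerator implements Runnable {
    @Override
    public void run() {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:29092");
        props.setProperty("key.serializer", StringSerializer.class.getName());
        props.setProperty("value.serializer", StringSerializer.class.getName());

        Random random = new Random();
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < 100; i++) {
                String user = "user" + random.nextInt(5);
                String record = user + ",login," + (1 + random.nextInt(10));
                producer.send(new ProducerRecord<>(
                        "flink.kafka.streaming.source", user, record));
                Thread.sleep(1000); // pace the stream: one record per second
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}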
So in this example one thread produces messages while Flink reads them back and totals each user's operation time. The map step parses each text record into an AuditTrail object and emits a (user, duration) tuple; the keyBy/reduce steps then keep a running sum of durations per user. For example, if records for user1 with durations 5 and 3 arrive, the printed counts for user1 go 5, then 8.
Running the Example
You can see that while messages are being sent into Kafka on one side, we read them out on the other and the running total of each user's time is produced.
Summary
This post only wires the basic pieces together and does not go deep into Flink itself; consider it an environment primer. Once I have learned more I will write about Flink in greater depth. The sample code will be placed in the spring-boot-flink folder of https://github.com/dengkun39/redisdemo.git.