
Big Data Technology: Flink CDC

Chapter 1 Introduction to CDC

1.1 What Is CDC

CDC stands for Change Data Capture. In the broad sense, any technique that can capture data changes can be called CDC. As the term is commonly used today, it refers to capturing changes made to data in a database. CDC has a wide range of application scenarios:

  • Data synchronization, for backup and disaster recovery;

  • Data distribution, where one data source feeds multiple downstream systems;

  • Data collection, i.e. ETL data integration into data warehouses / data lakes, for which CDC is a very important data source.

There are many technical approaches to CDC; the mainstream implementation mechanisms in the industry fall into two categories:

  • Query-based CDC:

    • Offline-scheduled query jobs, i.e. batch processing. A table is synchronized to other systems by repeatedly querying it for its latest data;

    • Data consistency cannot be guaranteed, because the data may have changed several times while the query was running;

    • Real-time delivery is not guaranteed either, since offline scheduling has inherent latency (a minimal polling sketch of this approach follows this list).

  • Log-based CDC:

    • Logs are consumed in real time as a stream. For example, MySQL's binlog records every change made in the database, so the binlog file can be treated as a streaming data source;

    • Data consistency is guaranteed, because the binlog contains the full detail of all historical changes;

    • Real-time delivery is guaranteed, because log files such as the binlog can be consumed in a streaming fashion and therefore provide data in real time.
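
The sketch below shows what query-based CDC typically boils down to: a periodically scheduled query that pulls rows changed since the last run. It is purely illustrative; the user_info table, its update_time column, and the one-minute interval are assumptions, not part of the original setup. It also makes the weaknesses concrete: intermediate updates between two runs are collapsed, and deletes are never observed.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Timestamp;
import java.time.Instant;

public class PollingCdcSketch {
    public static void main(String[] args) throws Exception {
        // Position of the previous run: everything with a newer update_time counts as "incremental" data.
        Timestamp lastSyncTime = Timestamp.from(Instant.EPOCH);
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://hadoop102:3306/gmall-flink-200821", "root", "000000")) {
            while (true) {
                try (PreparedStatement ps = conn.prepareStatement(
                        "SELECT id, name, phone_num, update_time FROM user_info " +
                        "WHERE update_time > ? ORDER BY update_time")) {
                    ps.setTimestamp(1, lastSyncTime);
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            // Only the latest state of each row is visible: changes that happened
                            // between two runs are collapsed, and deleted rows never show up at all.
                            System.out.println(rs.getInt("id") + "," + rs.getString("name"));
                            lastSyncTime = rs.getTimestamp("update_time");
                        }
                    }
                }
                Thread.sleep(60_000L); // the scheduling interval is the built-in latency
            }
        }
    }
}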

Comparing the common open-source CDC solutions, we can observe the following:

  • Comparing incremental synchronization capability:

    • log-based approaches handle incremental synchronization well;

    • query-based approaches, in contrast, have a hard time achieving it.

  • Comparing full synchronization capability: query-based and log-based CDC solutions essentially all support it, with the exception of Canal.

  • Comparing full + incremental synchronization: only Flink CDC, Debezium, and Oracle GoldenGate support it well.

  • From an architecture perspective, the solutions can be divided into single-node and distributed. "Distributed" here is not only about horizontally scaling read capacity; more importantly, it is about the ability to plug into distributed systems in big-data scenarios. For example, when Flink CDC writes data into a data lake or data warehouse, the downstream is usually a distributed system such as Hive, HDFS, Iceberg, or Hudi, and in terms of connecting to such systems, Flink CDC's architecture fits very well.

  • On data transformation / data cleansing capability: once data has entered the CDC tool, how convenient is it to filter or cleanse the data, or even aggregate it?

    • With Flink CDC this is quite simple: the data can be manipulated directly through Flink SQL (see the sketch after this list);

    • Tools such as DataX and Debezium, however, require scripts or templates for this, so the barrier to entry for users is higher.

  • Finally, on ecosystem, meaning support for downstream databases and data sinks: Flink CDC offers a rich set of downstream connectors, for example for writing into common systems such as TiDB, MySQL, PostgreSQL, HBase, Kafka, and ClickHouse, and it also supports custom connectors.
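
As a small illustration of that point, once a CDC table has been declared (as done for user_info in section 2.2 below), cleansing is ordinary Flink SQL over the changelog. The fragment is only a sketch reusing that program's tableEnv and user_info table; the filter condition is an assumption chosen for illustration.

// A sketch: cleanse/filter the CDC changelog with plain Flink SQL.
// Assumes the user_info CDC table from section 2.2 has already been declared via tableEnv.
tableEnv.executeSql("select id, name, phone_num from user_info where phone_num is not null").print();

A GROUP BY aggregation over the same table works the same way; Flink keeps updating the result as new change events arrive.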



Chapter 2 Flink CDC Hands-On Examples

2.1 Using the DataStream API

2.1.1 Import Dependencies

<dependencies>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-java</artifactId>
        <version>1.12.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-streaming-java_2.12</artifactId>
        <version>1.12.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-clients_2.12</artifactId>
        <version>1.12.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-client</artifactId>
        <version>3.1.3</version>
    </dependency>
    <dependency>
        <groupId>mysql</groupId>
        <artifactId>mysql-connector-java</artifactId>
        <version>5.1.49</version>
    </dependency>
    <dependency>
        <groupId>com.alibaba.ververica</groupId>
        <artifactId>flink-connector-mysql-cdc</artifactId>
        <version>1.1.1</version>
    </dependency>
    <dependency>
        <groupId>com.alibaba</groupId>
        <artifactId>fastjson</artifactId>
        <version>1.2.75</version>
    </dependency>
</dependencies>
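
The test in section 2.1.3 submits a fat jar named flink-200821-1.0-SNAPSHOT-jar-with-dependencies.jar, which suggests the pom also configures the maven-assembly-plugin. A typical, minimal build section for producing such a jar is sketched below; it is an assumption, not taken from the original pom.

<build>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-assembly-plugin</artifactId>
            <version>3.0.0</version>
            <configuration>
                <descriptorRefs>
                    <descriptorRef>jar-with-dependencies</descriptorRef>
                </descriptorRefs>
            </configuration>
            <executions>
                <execution>
                    <id>make-assembly</id>
                    <phase>package</phase>
                    <goals>
                        <goal>single</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>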

2.1.2 Write the Code

import com.alibaba.ververica.cdc.connectors.mysql.MySQLSource;
import com.alibaba.ververica.cdc.debezium.DebeziumSourceFunction;
import com.alibaba.ververica.cdc.debezium.StringDebeziumDeserializationSchema;
import org.apache.flink.api.common.restartstrategy.RestartStrategies;
import org.apache.flink.runtime.state.filesystem.FsStateBackend;
import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.environment.CheckpointConfig;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

import java.util.Properties;

public class FlinkCDC {

    public static void main(String[] args) throws Exception {

        //1. Create the execution environment
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(1);

        //2. Flink CDC stores the binlog reading position as state in checkpoints (CK);
        //   to resume from where it left off, the job must be started from a checkpoint or savepoint
        //2.1 Enable checkpointing every 5 seconds
        env.enableCheckpointing(5000L);
        //2.2 Set the checkpoint consistency semantics
        env.getCheckpointConfig().setCheckpointingMode(CheckpointingMode.EXACTLY_ONCE);
        //2.3 Retain the last checkpoint when the job is cancelled
        env.getCheckpointConfig().enableExternalizedCheckpoints(CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);
        //2.4 Set the automatic restart strategy
        env.setRestartStrategy(RestartStrategies.fixedDelayRestart(3, 2000L));
        //2.5 Set the state backend
        env.setStateBackend(new FsStateBackend("hdfs://hadoop102:8020/flinkCDC"));
        //2.6 Set the user name used to access HDFS
        System.setProperty("HADOOP_USER_NAME", "atguigu");

        //3. Create the Flink MySQL CDC source
        Properties properties = new Properties();

        //initial (default): Performs an initial snapshot on the monitored database tables upon first startup, and continue to read the latest binlog.
        //latest-offset: Never to perform snapshot on the monitored database tables upon first startup, just read from the end of the binlog which means only have the changes since the connector was started.
        //timestamp: Never to perform snapshot on the monitored database tables upon first startup, and directly read binlog from the specified timestamp. The consumer will traverse the binlog from the beginning and ignore change events whose timestamp is smaller than the specified timestamp.
        //specific-offset: Never to perform snapshot on the monitored database tables upon first startup, and directly read binlog from the specified offset.
        properties.setProperty("scan.startup.mode", "initial");

        DebeziumSourceFunction<String> mysqlSource = MySQLSource.<String>builder()
                .hostname("hadoop102")
                .port(3306)
                .username("root")
                .password("000000")
                .databaseList("gmall-flink-200821")
                //Optional: if omitted, all tables of the databases listed above are read.
                //Note: tables must be given as "db.table".
                .tableList("gmall-flink-200821.z_user_info")
                .debeziumProperties(properties)
                .deserializer(new StringDebeziumDeserializationSchema())
                .build();

        //4. Read from MySQL with the CDC source
        DataStreamSource<String> mysqlDS = env.addSource(mysqlSource);

        //5. Print the data
        mysqlDS.print();

        //6. Execute the job
        env.execute();
    }
}

2.1.3 Testing the Example

1) Package the job and upload the jar to the Linux server

2) Enable the MySQL binlog and restart MySQL
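
The exact configuration file location depends on the installation, but enabling the binlog typically means adding something like the following to /etc/my.cnf on the MySQL host and then restarting the mysqld service. The server-id value and the monitored database gmall-flink-200821 are specific to this setup; row format is what the Debezium-based connector needs.

[mysqld]
server-id=1
log-bin=mysql-bin
binlog_format=row
binlog-do-db=gmall-flink-200821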

3) Start the Flink cluster

[atguigu@hadoop102 flink-standalone]$ bin/start-cluster.sh

4) Start the HDFS cluster

[atguigu@hadoop102 flink-standalone]$ start-dfs.sh

5) Submit the job

[atguigu@hadoop102 flink-standalone]$ bin/flink run -c com.atguigu.FlinkCDC flink-200821-1.0-SNAPSHOT-jar-with-dependencies.jar

6) Insert, update, or delete rows in the MySQL table gmall-flink-200821.z_user_info

7) Create a savepoint for the running Flink job

[atguigu@hadoop102 flink-standalone]$ bin/flink savepoint JobId hdfs://hadoop102:8020/flink/save
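
Here JobId is a placeholder for the ID of the running job; on a standalone cluster it can be looked up in the Flink Web UI or with the command-line client, for example:

[atguigu@hadoop102 flink-standalone]$ bin/flink list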

 

8) After cancelling the job, restart it from the savepoint

[atguigu@hadoop102 flink-standalone]$ bin/flink run -s hdfs://hadoop102:8020/flink/save/... -c com.atguigu.FlinkCDC flink-200821-1.0-SNAPSHOT-jar-with-dependencies.jar

2.2 Using Flink SQL

2.2.1 Add the Dependency

<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-table-planner-blink_2.12</artifactId>
    <version>1.12.0</version>
</dependency>

2.2.2 Code Implementation

import org.apache.flink.api.common.restartstrategy.RestartStrategies;
import org.apache.flink.runtime.state.filesystem.FsStateBackend;
import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.environment.CheckpointConfig;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

public class FlinkSQL_CDC {

    public static void main(String[] args) throws Exception {

        //1. Create the execution environment
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(1);
        StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

        //2. Create the Flink MySQL CDC source table
        tableEnv.executeSql("CREATE TABLE user_info (" +
                " id INT," +
                " name STRING," +
                " phone_num STRING" +
                ") WITH (" +
                " 'connector' = 'mysql-cdc'," +
                " 'hostname' = 'hadoop102'," +
                " 'port' = '3306'," +
                " 'username' = 'root'," +
                " 'password' = '000000'," +
                " 'database-name' = 'gmall-flink-200821'," +
                " 'table-name' = 'z_user_info'" +
                ")");

        //3. Query the table and print its changelog.
        //   print() blocks on this unbounded query and keeps emitting rows as changes arrive.
        tableEnv.executeSql("select * from user_info").print();

        env.execute();
    }
}
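
Instead of printing through executeSql(...).print(), the query result can also be bridged back into a DataStream. The fragment below is a sketch that reuses the same tableEnv, env, and user_info table as above; toRetractStream emits (flag, row) pairs where the Boolean flag marks whether the row is added (true) or retracted (false).

// Additionally requires: org.apache.flink.table.api.Table, org.apache.flink.types.Row,
// org.apache.flink.api.java.tuple.Tuple2, org.apache.flink.streaming.api.datastream.DataStream
Table table = tableEnv.sqlQuery("select * from user_info");
DataStream<Tuple2<Boolean, Row>> retractStream = tableEnv.toRetractStream(table, Row.class);
retractStream.print();
env.execute();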


2.3 Custom Deserializer

2.3.1 Code Implementation

import com.alibaba.fastjson.JSONObject;
import com.alibaba.ververica.cdc.connectors.mysql.MySQLSource;
import com.alibaba.ververica.cdc.debezium.DebeziumDeserializationSchema;
import com.alibaba.ververica.cdc.debezium.DebeziumSourceFunction;
import io.debezium.data.Envelope;
import org.apache.flink.api.common.restartstrategy.RestartStrategies;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.runtime.state.filesystem.FsStateBackend;
import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.environment.CheckpointConfig;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.util.Collector;
import org.apache.kafka.connect.data.Field;
import org.apache.kafka.connect.data.Struct;
import org.apache.kafka.connect.source.SourceRecord;

import java.util.Properties;

public class Flink_CDCWithCustomerSchema {

    public static void main(String[] args) throws Exception {

        //1. Create the execution environment
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(1);

        //2. Create the Flink MySQL CDC source
        Properties properties = new Properties();

        //initial (default): Performs an initial snapshot on the monitored database tables upon first startup, and continue to read the latest binlog.
        //latest-offset: Never to perform snapshot on the monitored database tables upon first startup, just read from the end of the binlog which means only have the changes since the connector was started.
        //timestamp: Never to perform snapshot on the monitored database tables upon first startup, and directly read binlog from the specified timestamp. The consumer will traverse the binlog from the beginning and ignore change events whose timestamp is smaller than the specified timestamp.
        //specific-offset: Never to perform snapshot on the monitored database tables upon first startup, and directly read binlog from the specified offset.
        properties.setProperty("debezium.snapshot.mode", "initial");

        DebeziumSourceFunction<String> mysqlSource = MySQLSource.<String>builder()
                .hostname("hadoop102")
                .port(3306)
                .username("root")
                .password("000000")
                .databaseList("gmall-flink-200821")
                //Optional: if omitted, all tables of the databases listed above are read.
                //Note: tables must be given as "db.table".
                .tableList("gmall-flink-200821.z_user_info")
                .debeziumProperties(properties)
                .deserializer(new DebeziumDeserializationSchema<String>() { //custom deserializer
                    @Override
                    public void deserialize(SourceRecord sourceRecord, Collector<String> collector) throws Exception {

                        //The topic carries the database and table name, e.g. mysql_binlog_source.gmall-flink-200821.z_user_info
                        String topic = sourceRecord.topic();
                        String[] arr = topic.split("\\.");
                        String db = arr[1];
                        String tableName = arr[2];

                        //Get the operation type: READ DELETE UPDATE CREATE
                        Envelope.Operation operation = Envelope.operationFor(sourceRecord);

                        //Get the value and cast it to a Struct
                        Struct value = (Struct) sourceRecord.value();

                        //Get the state of the row after the change ("after" is null for delete events)
                        Struct after = value.getStruct("after");

                        //Build a JSON object holding the row data
                        JSONObject data = new JSONObject();
                        if (after != null) {
                            for (Field field : after.schema().fields()) {
                                Object o = after.get(field);
                                data.put(field.name(), o);
                            }
                        }

                        //Build the JSON object that wraps the final result
                        JSONObject result = new JSONObject();
                        result.put("operation", operation.toString().toLowerCase());
                        result.put("data", data);
                        result.put("database", db);
                        result.put("table", tableName);

                        //Emit the record downstream
                        collector.collect(result.toJSONString());
                    }

                    @Override
                    public TypeInformation<String> getProducedType() {
                        return TypeInformation.of(String.class);
                    }
                })
                .build();

        //3. Read from MySQL with the CDC source
        DataStreamSource<String> mysqlDS = env.addSource(mysqlSource);

        //4. Print the data
        mysqlDS.print();

        //5. Execute the job
        env.execute();
    }
}
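
With this deserializer, each change event is emitted as a compact JSON string. For example, inserting a row into z_user_info would produce a record of roughly the following shape; the field values are hypothetical, and fastjson does not guarantee key order:

{"database":"gmall-flink-200821","table":"z_user_info","operation":"create","data":{"id":1,"name":"zhangsan","phone_num":"13888888888"}}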