Creating a Hudi Table
1. Create a Hudi table: the steps below start the cluster, launch spark-shell, and walk through inserting, querying, updating, and incrementally querying data.
2. On app-11, authenticate the three machines and start the cluster. Commands:
sudo /bin/bash
cd /hadoop
./initHosts.sh
su - hadoop
cd /hadoop/
./startAll.sh
Launching via spark-shell
3. Hudi is tightly coupled to Spark: every Hudi operation goes through Spark. Our Hudi installation is on app-13, so open app-13 and log in as the hadoop user.
Command: su - hadoop
4. Launch via spark-shell. Command:
spark-shell --packages org.apache.hudi:hudi-spark-bundle_2.11:0.6.0,org.apache.spark:spark-avro_2.11:2.4.0 --conf 'spark.serializer=org.apache.spark.serializer.KryoSerializer'
Note: --packages first pulls in the Hudi support bundle for Spark, a jar that is not in the stock Spark distribution and is downloaded from Maven; the version suffix on the spark-avro artifact, 2.4.0, matches the current Spark version. Then --conf switches Spark's serializer to Kryo, the only serializer Hudi supports.
The first launch is slow because the packages have to be downloaded.
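If you later run the same code as a standalone Spark application instead of from spark-shell, the Kryo setting can be applied when building the SparkSession; a minimal sketch (the application name is an arbitrary placeholder):
import org.apache.spark.sql.SparkSession

// Minimal sketch: set the Kryo serializer in application code instead of
// on the spark-shell command line.
val spark = SparkSession.builder().
  appName("hudi-quickstart").
  config("spark.serializer", "org.apache.spark.serializer.KryoSerializer").
  getOrCreate()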
Demo Operations
5. Import the required packages. Commands:
import org.apache.hudi.QuickstartUtils._
import scala.collection.JavaConversions._
import org.apache.spark.sql.SaveMode._
import org.apache.hudi.DataSourceReadOptions._
import org.apache.hudi.DataSourceWriteOptions._
import org.apache.hudi.config.HoodieWriteConfig._
6. Set the table name, the base path to write to, and the built-in data generator. Commands:
val tableName = "hudi_trips_cow"
val basePath = "hdfs://dmcluster/tmp/hudi_trips_cow"
val dataGen = new DataGenerator
7. Generate some new trips, load them into a DataFrame, and write the DataFrame into the Hudi table. Commands:
val inserts = convertToStringList(dataGen.generateInserts(10))
val df = spark.read.json(spark.sparkContext.parallelize(inserts, 2))
df.write.format("org.apache.hudi").
options(getQuickstartWriteConfigs).
option(PRECOMBINE_FIELD_OPT_KEY, "ts").
option(RECORDKEY_FIELD_OPT_KEY, "uuid").
option(PARTITIONPATH_FIELD_OPT_KEY, "partitionpath").
option(TABLE_NAME, tableName).
mode(Overwrite).
save(basePath)
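To confirm the write reached HDFS, you can list what Hudi created under basePath from the same shell; a small sketch using the standard Hadoop FileSystem API (nothing beyond the basePath defined above is assumed):
import org.apache.hadoop.fs.Path

// List the partition directories Hudi wrote under basePath.
val fs = new Path(basePath).getFileSystem(spark.sparkContext.hadoopConfiguration)
fs.listStatus(new Path(basePath)).foreach(s => println(s.getPath))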
8. Query the data. Commands:
val tripsSnapshotDF = spark.read.format("hudi").load(basePath + "/*/*/*/*")
tripsSnapshotDF.createOrReplaceTempView("hudi_trips_snapshot")
spark.sql("select fare, begin_lon, begin_lat, ts from hudi_trips_snapshot where fare > 20.0").show()
spark.sql("select _hoodie_commit_time, _hoodie_record_key, _hoodie_partition_path, rider, driver, fare from hudi_trips_snapshot").show()
9. Update the data. Commands:
val updates = convertToStringList(dataGen.generateUpdates(10))
val df = spark.read.json(spark.sparkContext.parallelize(updates, 2))
df.write.format("hudi").
options(getQuickstartWriteConfigs).
option(PRECOMBINE_FIELD_OPT_KEY, "ts").
option(RECORDKEY_FIELD_OPT_KEY, "uuid").
option(PARTITIONPATH_FIELD_OPT_KEY, "partitionpath").
option(TABLE_NAME, tableName).
mode(Append).
save(basePath)
10. Query the data again. Commands:
val tripsSnapshotDF = spark.read.format("hudi").load(basePath + "/*/*/*/*")
tripsSnapshotDF.createOrReplaceTempView("hudi_trips_snapshot")
spark.sql("select fare, begin_lon, begin_lat, ts from hudi_trips_snapshot where fare > 20.0").show()
spark.sql("select _hoodie_commit_time, _hoodie_record_key, _hoodie_partition_path, rider, driver, fare from hudi_trips_snapshot").show()
11. Incremental query. Commands:
spark.
read.
format("hudi").
load(basePath + "/*/*/*/*").
createOrReplaceTempView("hudi_trips_snapshot")
val commits = spark.sql("select distinct(_hoodie_commit_time) as commitTime from hudi_trips_snapshot order by commitTime").map(k => k.getString(0)).take(50)
val beginTime = commits(commits.length - 2) // commit time we are interested in
// incrementally query data
val tripsIncrementalDF = spark.read.format("hudi").
option(QUERY_TYPE_OPT_KEY, QUERY_TYPE_INCREMENTAL_OPT_VAL).
option(BEGIN_INSTANTTIME_OPT_KEY, beginTime).
load(basePath)
tripsIncrementalDF.createOrReplaceTempView("hudi_trips_incremental")
spark.sql("select `_hoodie_commit_time`, fare, begin_lon, begin_lat, ts from hudi_trips_incremental where fare > 20.0").show()