How to read Hive data with SparkSQL running locally in IDEA
Environment:
hadoop version: 2.6.5
spark version: 2.3.0
hive version: 1.2.2
master host: 192.168.100.201
slave1 host: 192.168.100.201
The pom.xml dependencies are as follows:
<?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>com.spark</groupId> <artifactId>spark_practice</artifactId> <version>1.0-SNAPSHOT</version> <properties> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <maven.compiler.source>1.8</maven.compiler.source> <maven.compiler.target>1.8</maven.compiler.target> <spark.core.version>2.3.0</spark.core.version> </properties> <dependencies> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>4.11</version> <scope>test</scope> </dependency> <dependency> <groupId>org.apache.spark</groupId> <artifactId>spark-core_2.11</artifactId> <version>${spark.core.version}</version> </dependency> <dependency> <groupId>org.apache.spark</groupId> <artifactId>spark-sql_2.11</artifactId> <version>${spark.core.version}</version> </dependency> <dependency> <groupId>mysql</groupId> <artifactId>mysql-connector-java</artifactId> <version>5.1.38</version> </dependency> <dependency> <groupId>org.apache.spark</groupId> <artifactId>spark-hive_2.11</artifactId> <version>2.3.0</version> </dependency> </dependencies> </project>
Note: the hive-site.xml configuration file must be placed in the project's resources directory.
hive-site.xml is configured as follows:
<?xml version="1.0"?> <?xml-stylesheet type="text/xsl" href="configuration.xsl" rel="external nofollow" ?> <configuration> <!-- hive元數(shù)據(jù)服務(wù)url --> <property> <name>hive.metastore.uris</name> <value>thrift://192.168.100.201:9083</value> </property> <property> <name>hive.server2.thrift.port</name> <value>10000</value> </property> <property> <name>javax.jdo.option.ConnectionURL</name> <value>jdbc:mysql://node01:3306/hive?createDatabaseIfNotExist=true</value> </property> <property> <name>javax.jdo.option.ConnectionDriverName</name> <value>com.mysql.jdbc.Driver</value> </property> <property> <name>javax.jdo.option.ConnectionUserName</name> <value>root</value> </property> <property> <name>javax.jdo.option.ConnectionPassword</name> <value>123456</value> </property> <property> <name>hive.zookeeper.quorum</name> <value>node01,node02,node03</value> </property> <property> <name>hbase.zookeeper.quorum</name> <value>node01,node02,node03</value> </property> <!-- hive在hdfs上的存儲(chǔ)路徑 --> <property> <name>hive.metastore.warehouse.dir</name> <value>/user/hive/warehouse</value> </property> <!-- 集群hdfs訪問url --> <property> <name>fs.defaultFS</name> <value>hdfs://192.168.100.201:9000</value> </property> <property> <name>hive.metastore.schema.verification</name> <value>false</value> </property> <property> <name>datanucleus.autoCreateSchema</name> <value>true</value> </property> <property> <name>datanucleus.autoStartMechanism</name> <value>checked</value> </property> </configuration>
Main class code:
import org.apache.spark.sql.SparkSession

object SparksqlTest2 {
  def main(args: Array[String]): Unit = {
    val spark: SparkSession = SparkSession
      .builder
      .master("local[*]")      // run Spark inside the IDE with all local cores
      .appName("Java Spark Hive Example")
      .enableHiveSupport       // resolve tables through the Hive metastore
      .getOrCreate

    spark.sql("show databases").show()
    spark.sql("show tables").show()
    spark.sql("select * from person").show()

    spark.stop()
  }
}
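Two points worth noting: master("local[*]") means the job runs entirely inside the IDE process, so no spark-submit or cluster deployment is involved; and enableHiveSupport is what makes spark.sql look up databases and tables via the Hive metastore, which is why the spark-hive_2.11 dependency in the pom is required.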
Prerequisite: the database being accessed is default, and the person table contains three rows.
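If no such table exists yet, it can be seeded through the same SparkSession once the services below are up. The two-column schema and the sample rows here are assumptions for illustration only, not part of the original setup:

// schema and sample data are assumed, purely for a quick test
spark.sql("create table if not exists default.person (id int, name string)")
spark.sql("insert into default.person values (1, 'tom'), (2, 'jerry'), (3, 'spike')")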
Before testing, make sure the Hadoop cluster is up, then start the Hive metastore service:
./bin/hive --service metastore
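To keep the metastore running after the terminal is closed, and to confirm it is actually listening on the port that hive.metastore.uris points at (9083 in the config above), something like the following works; the log file path is just an example:

nohup ./bin/hive --service metastore > metastore.log 2>&1 &
# confirm the metastore is listening on port 9083
netstat -tlnp | grep 9083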
Run the program; if everything is set up correctly, the output of the three queries is printed to the console.
If the run fails with the following error:
Exception in thread "main" org.apache.spark.sql.AnalysisException: java.lang.RuntimeException: java.io.IOException: (null) entry in command string: null chmod 0700 C:\Users\dell\AppData\Local\Temp\c530fb25-b267-4dd2-b24d-741727a6fbf3_resources;
at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:106)
at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:194)
at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:114)
at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:102)
at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
at org.apache.spark.sql.hive.HiveSessionStateBuilder$$anon$1.<init>(HiveSessionStateBuilder.scala:69)
at org.apache.spark.sql.hive.HiveSessionStateBuilder.analyzer(HiveSessionStateBuilder.scala:69)
at org.apache.spark.sql.internal.BaseSessionStateBuilder$$anonfun$build$2.apply(BaseSessionStateBuilder.scala:293)
at org.apache.spark.sql.internal.BaseSessionStateBuilder$$anonfun$build$2.apply(BaseSessionStateBuilder.scala:293)
at org.apache.spark.sql.internal.SessionState.analyzer$lzycompute(SessionState.scala:79)
at org.apache.spark.sql.internal.SessionState.analyzer(SessionState.scala:79)
at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:57)
at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:55)
at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:47)
at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:74)
at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:638)
at com.tongfang.learn.spark.hive.HiveTest.main(HiveTest.java:15)
Solution:
1. Download the Hadoop Windows binaries (winutils) from: https://github.com/steveloughran/winutils
2. In the run configuration of the launcher class, set the environment variable HADOOP_HOME=D:\winutils\hadoop-2.6.4, where the value is the directory of the Hadoop Windows binary package. The same hint can also be given in code, as sketched below.
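If you prefer not to touch the run configuration, Hadoop's shell utilities also honor the hadoop.home.dir system property, so setting it at the very top of main has the same effect. A minimal sketch, reusing the example directory from above:

// must run before the first SparkSession/Hadoop call;
// hadoop.home.dir serves the same purpose as the HADOOP_HOME environment variable
System.setProperty("hadoop.home.dir", "D:\\winutils\\hadoop-2.6.4")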
This concludes the walkthrough of reading Hive data with SparkSQL from a local IDEA run.