A detailed tutorial on installing and deploying a 4-node Hadoop 3.2.1 distributed cluster learning environment on OL7.7
Prepare four virtual machines with OL7.7 installed and assign them the static IPs 192.168.168.11 through 192.168.168.14. 192.168.168.11 serves as the master and the other three as slaves. The master runs the NameNode and is also a DataNode; 192.168.168.14 runs a DataNode and also serves as the Secondary NameNode.
First, edit /etc/hostname on each machine and set the hostnames to master, slave1, slave2, and slave3 respectively.
Then edit /etc/hosts and add:
192.168.168.11 master
192.168.168.12 slave1
192.168.168.13 slave2
192.168.168.14 slave3
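If you would rather script this step, a minimal sketch that appends the same entries (run it on every node; assumes the user has sudo rights):

cat <<'EOF' | sudo tee -a /etc/hosts
192.168.168.11 master
192.168.168.12 slave1
192.168.168.13 slave2
192.168.168.14 slave3
EOF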
Then uninstall the bundled OpenJDK and install the Sun (Oracle) JDK instead; see http://chabaoo.cn/article/190489.htm for reference.
Configure passwordless SSH login to the local machine:
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 0600 ~/.ssh/authorized_keys
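To confirm the key works, a quick check; this should print the hostname without asking for a password (the very first connection may prompt you to accept the host key):

ssh localhost hostname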
Configure mutual trust between the nodes.
On the master, copy the public key to each slave:
scp ~/.ssh/id_rsa.pub hadoop@slave1:/home/hadoop/
scp ~/.ssh/id_rsa.pub hadoop@slave2:/home/hadoop/
scp ~/.ssh/id_rsa.pub hadoop@slave3:/home/hadoop/
On each slave, append the master's public key to that node's authorized keys:
cat ~/id_rsa.pub >> ~/.ssh/authorized_keys
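To verify the trust is in place, a small loop run from the master should print each slave's hostname without any password prompt (a sketch, assuming the hadoop user and the hostnames above):

for h in slave1 slave2 slave3; do
  ssh hadoop@$h hostname
done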
Install Hadoop on the master:
sudo tar -xzvf ~/hadoop-3.2.1.tar.gz -C /usr/local
cd /usr/local
sudo mv hadoop-3.2.1/ ./hadoop
sudo chown -R hadoop: ./hadoop
Add the following to ~/.bashrc and make it take effect:
export HADOOP_HOME=/usr/local/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
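To apply the variables in the current shell and confirm the hadoop binaries are on the PATH, a quick check:

source ~/.bashrc
hadoop version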
Cluster configuration: the configuration files are in the /usr/local/hadoop/etc/hadoop directory.
Edit core-site.xml:
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/usr/local/hadoop/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
  </property>
</configuration>
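Hadoop creates hadoop.tmp.dir on demand as long as the parent directory is writable, which is already the case here since /usr/local/hadoop is owned by the hadoop user; if you prefer to create it explicitly up front, a minimal sketch:

mkdir -p /usr/local/hadoop/tmp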
Edit hdfs-site.xml:
<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/home/hadoop/data/nameNode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/home/hadoop/data/dataNode</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <!-- dfs.secondary.http.address is the deprecated Hadoop 2 name; Hadoop 3 uses dfs.namenode.secondary.http-address -->
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>slave3:50090</value>
  </property>
</configuration>
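The metadata and block directories should exist on every node and be writable by the hadoop user. Strictly only the master needs the nameNode directory, but creating both everywhere is harmless; a sketch that does it across the cluster (assumes the passwordless SSH set up earlier):

for h in master slave1 slave2 slave3; do
  ssh hadoop@$h "mkdir -p /home/hadoop/data/nameNode /home/hadoop/data/dataNode"
done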
Edit mapred-site.xml:
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>yarn.app.mapreduce.am.env</name>
    <value>HADOOP_MAPRED_HOME=$HADOOP_HOME</value>
  </property>
  <property>
    <name>mapreduce.map.env</name>
    <value>HADOOP_MAPRED_HOME=$HADOOP_HOME</value>
  </property>
  <property>
    <name>mapreduce.reduce.env</name>
    <value>HADOOP_MAPRED_HOME=$HADOOP_HOME</value>
  </property>
</configuration>
Edit yarn-site.xml:
<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>master</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
Edit hadoop-env.sh, find the JAVA_HOME setting, and point it at the JDK directory:
export JAVA_HOME=/usr/lib/jvm/jdk1.8.0_191
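If you are not sure where your JDK landed (the path above is from this particular machine), one way to locate it; the command prints something like /usr/lib/jvm/jdk1.8.0_191/bin/java, and JAVA_HOME is the directory two levels up:

readlink -f "$(which java)"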
Edit the workers file:
[hadoop@master /usr/local/hadoop/etc/hadoop]$ vim workers
master
slave1
slave2
slave3
Finally, copy the configured /usr/local/hadoop folder to the other nodes:
sudo scp -r /usr/local/hadoop/ slave1:/usr/local/
sudo scp -r /usr/local/hadoop/ slave2:/usr/local/
sudo scp -r /usr/local/hadoop/ slave3:/usr/local/
and change the folder's owner to hadoop on each of them.
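A sketch that fixes the ownership on all three slaves from the master (assumes the hadoop user can run sudo on each node; otherwise just run the chown locally on each slave):

for h in slave1 slave2 slave3; do
  ssh -t hadoop@$h "sudo chown -R hadoop: /usr/local/hadoop"
done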
Disable the firewall:
sudo systemctl stop firewalld
sudo systemctl disable firewalld
Format HDFS. This only needs to be done once, before the first start, never afterwards. Run it on the master, since it initializes the NameNode's local metadata directory:
/usr/local/hadoop/bin/hdfs namenode -format
Seeing "successfully formatted" in the output means it worked.
start-dfs.sh starts the HDFS side of the cluster.
Use the jps command on each node to check what is running.
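With the layout above you would expect roughly the following (a sketch; PIDs will differ, and slave1/slave2 show only a DataNode):

[hadoop@master ~]$ jps
2481 NameNode
2623 DataNode
3003 Jps

[hadoop@slave3 ~]$ jps
2101 DataNode
2240 SecondaryNameNode
2512 Jps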
The cluster can be monitored in a browser through port 9870 on the master: http://192.168.168.11:9870/
You can also check the cluster status from the command line with hadoop dfsadmin -report (as the warning below shows, Hadoop 3 deprecates this in favor of hdfs dfsadmin -report):
[hadoop@master ~]$ hadoop dfsadmin -report
WARNING: Use of this script to execute dfsadmin is deprecated.
WARNING: Attempting to execute replacement "hdfs dfsadmin" instead.

Configured Capacity: 201731358720 (187.88 GB)
Present Capacity: 162921230336 (151.73 GB)
DFS Remaining: 162921181184 (151.73 GB)
DFS Used: 49152 (48 KB)
DFS Used%: 0.00%
Replicated Blocks:
    Under replicated blocks: 0
    Blocks with corrupt replicas: 0
    Missing blocks: 0
    Missing blocks (with replication factor 1): 0
    Low redundancy blocks with highest priority to recover: 0
    Pending deletion blocks: 0
Erasure Coded Block Groups:
    Low redundancy block groups: 0
    Block groups with corrupt internal blocks: 0
    Missing block groups: 0
    Low redundancy blocks with highest priority to recover: 0
    Pending deletion blocks: 0

-------------------------------------------------
Live datanodes (4):

Name: 192.168.168.11:9866 (master)
Hostname: master
Decommission Status : Normal
Configured Capacity: 50432839680 (46.97 GB)
DFS Used: 12288 (12 KB)
Non DFS Used: 9796546560 (9.12 GB)
DFS Remaining: 40636280832 (37.85 GB)
DFS Used%: 0.00%
DFS Remaining%: 80.58%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Fri Jul 03 11:14:44 CST 2020
Last Block Report: Fri Jul 03 11:10:35 CST 2020
Num of Blocks: 0

Name: 192.168.168.12:9866 (slave1)
Hostname: slave1
Decommission Status : Normal
Configured Capacity: 50432839680 (46.97 GB)
DFS Used: 12288 (12 KB)
Non DFS Used: 9710411776 (9.04 GB)
DFS Remaining: 40722415616 (37.93 GB)
DFS Used%: 0.00%
DFS Remaining%: 80.75%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Fri Jul 03 11:14:44 CST 2020
Last Block Report: Fri Jul 03 11:10:35 CST 2020
Num of Blocks: 0

Name: 192.168.168.13:9866 (slave2)
Hostname: slave2
Decommission Status : Normal
Configured Capacity: 50432839680 (46.97 GB)
DFS Used: 12288 (12 KB)
Non DFS Used: 9657286656 (8.99 GB)
DFS Remaining: 40775540736 (37.98 GB)
DFS Used%: 0.00%
DFS Remaining%: 80.85%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Fri Jul 03 11:14:44 CST 2020
Last Block Report: Fri Jul 03 11:10:35 CST 2020
Num of Blocks: 0

Name: 192.168.168.14:9866 (slave3)
Hostname: slave3
Decommission Status : Normal
Configured Capacity: 50432839680 (46.97 GB)
DFS Used: 12288 (12 KB)
Non DFS Used: 9645883392 (8.98 GB)
DFS Remaining: 40786944000 (37.99 GB)
DFS Used%: 0.00%
DFS Remaining%: 80.87%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Fri Jul 03 11:14:44 CST 2020
Last Block Report: Fri Jul 03 11:10:35 CST 2020
Num of Blocks: 0

[hadoop@master ~]$
start-yarn.sh brings up YARN, which can be monitored through port 8088 on the master.
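You can also confirm from the command line that every NodeManager registered; with this layout the following should report four RUNNING nodes:

yarn node -list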
Command to start the whole cluster, bringing up both HDFS and YARN:
/usr/local/hadoop/sbin/start-all.sh
Command to stop the cluster:
/usr/local/hadoop/sbin/stop-all.sh
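As a final smoke test, the example jar bundled with the distribution can run a small MapReduce job through YARN (the pi estimator; the path below is where the 3.2.1 binary tarball ships it):

hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.2.1.jar pi 2 10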
That's it; just recording the process here for future reference.