[Hadoop] Pseudo-Distributed Environment Setup and Verification
Published: 2019-06-26


Hadoop pseudo-distributed environment setup:

 

 

Automated deployment script:

#!/bin/bash
set -eux

export APP_PATH=/opt/applications
export APP_NAME=Ares

# Install apt dependencies
apt-get update -y \
    && apt-get install supervisor -y \
    && apt-get install python-dev python-pip libmysqlclient-dev -y

# Install pip and the Python dependencies
pip install --upgrade pip \
    && pip install -r ./build-depends/pip-requirements/requirements.txt

# Install the JDK
tar -xzvf ./build-depends/jdk-package/jdk-7u60-linux-x64.tar.gz \
    && ln -s jdk1.7.0_60/ jdk

# Configure the Java environment variables
echo -e '\n' >> /etc/profile
echo '# !!!No Modification, This Section is Auto Generated by '${APP_NAME} >> /etc/profile
echo 'export JAVA_HOME='${APP_PATH}/${APP_NAME}/jdk >> /etc/profile
echo 'export JRE_HOME=${JAVA_HOME}/jre' >> /etc/profile
echo 'export CLASSPATH=.:${JAVA_HOME}/jre/lib/rt.jar:${JAVA_HOME}/lib/dt.jar:${JAVA_HOME}/lib/tools.jar' >> /etc/profile
echo 'export PATH=${PATH}:${JAVA_HOME}/bin:${JRE_HOME}/bin' >> /etc/profile
source /etc/profile && java -version

# Install Hadoop
tar -xzvf ./build-depends/hadoop-package/hadoop-2.5.2.tar.gz \
    && ln -s hadoop-2.5.2 hadoop

# Set JAVA_HOME in hadoop-env.sh
mv ./hadoop/etc/hadoop/hadoop-env.sh ./hadoop/etc/hadoop/hadoop-env.sh.bak \
    && cp -rf ./build-depends/hadoop-conf/hadoop-env.sh ./hadoop/etc/hadoop/ \
    && sed -i "25a export JAVA_HOME=${APP_PATH}/${APP_NAME}/jdk" ./hadoop/etc/hadoop/hadoop-env.sh

# core-site.xml configuration
mv ./hadoop/etc/hadoop/core-site.xml ./hadoop/etc/hadoop/core-site.xml.bak \
    && python ./build-utils/configueUpdate/templateInvoke.py ./build-depends/hadoop-conf/core-site.xml ./hadoop/etc/hadoop/core-site.xml

# hdfs-site.xml configuration
mv ./hadoop/etc/hadoop/hdfs-site.xml ./hadoop/etc/hadoop/hdfs-site.xml.bak \
    && python ./build-utils/configueUpdate/templateInvoke.py ./build-depends/hadoop-conf/hdfs-site.xml ./hadoop/etc/hadoop/hdfs-site.xml

# mapred-site.xml configuration
python ./build-utils/configueUpdate/templateInvoke.py ./build-depends/hadoop-conf/mapred-site.xml.template ./hadoop/etc/hadoop/mapred-site.xml

# yarn-site.xml configuration
mv ./hadoop/etc/hadoop/yarn-site.xml ./hadoop/etc/hadoop/yarn-site.xml.bak \
    && python ./build-utils/configueUpdate/templateInvoke.py ./build-depends/hadoop-conf/yarn-site.xml ./hadoop/etc/hadoop/yarn-site.xml

# slaves file, i.e. the DataNode list
mv ./hadoop/etc/hadoop/slaves ./hadoop/etc/hadoop/slaves.bak
DataNodeList=(`echo ${DataNodeList} | tr ";" "\n"`)
for DataNode in ${DataNodeList}; do
    echo ${DataNode} >> ./hadoop/etc/hadoop/slaves
done

# Configure the Hadoop environment variables
echo -e '\n' >> /etc/profile
echo '# !!!No Modification, This Section is Auto Generated by '${APP_NAME} >> /etc/profile
echo 'export HADOOP_HOME='${APP_PATH}/${APP_NAME}/hadoop >> /etc/profile
echo 'export PATH=${PATH}:${HADOOP_HOME}/bin:${HADOOP_HOME}/sbin' >> /etc/profile
source /etc/profile && hadoop version

# Format the NameNode
# hadoop namenode -format -force
hdfs namenode -format -force

# Start HDFS and YARN
stop-dfs.sh && start-dfs.sh && jps
stop-yarn.sh && start-yarn.sh && jps

# HDFS test
# hadoop fs -put ./build-depends/jdk-package/jdk-7u60-linux-x64.tar.gz hdfs://HADOOP-NODE1:9000/
hdfs dfs -put ./build-depends/jdk-package/jdk-7u60-linux-x64.tar.gz hdfs://HADOOP-NODE1:9000/
# hadoop fs -get hdfs://HADOOP-NODE1:9000/jdk-7u60-linux-x64.tar.gz .
hdfs dfs -get hdfs://HADOOP-NODE1:9000/jdk-7u60-linux-x64.tar.gz .
rm -rf jdk-7u60-linux-x64.tar.gz

# MapReduce test
hadoop jar ./hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.2.jar pi 5 10

# word-count test
touch word-count.txt \
    && echo "hello world" >> word-count.txt \
    && echo "hello tom" >> word-count.txt \
    && echo "hello jim" >> word-count.txt \
    && echo "hello kitty" >> word-count.txt \
    && echo "hello baby" >> word-count.txt
# hadoop fs -put word-count.txt hdfs://HADOOP-NODE1:9000/
# hadoop fs -rm hdfs://HADOOP-NODE1:9000/word-count.txt
hadoop fs -mkdir hdfs://HADOOP-NODE1:9000/word-count
hadoop fs -mkdir hdfs://HADOOP-NODE1:9000/word-count/input
# hadoop fs -mkdir hdfs://HADOOP-NODE1:9000/word-count/output
# hadoop fs -rmdir hdfs://HADOOP-NODE1:9000/word-count/output
hadoop fs -put word-count.txt hdfs://HADOOP-NODE1:9000/word-count/input
hadoop jar ./hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.2.jar wordcount hdfs://HADOOP-NODE1:9000/word-count/input hdfs://HADOOP-NODE1:9000/word-count/output
hadoop fs -ls hdfs://HADOOP-NODE1:9000/word-count/output
hadoop fs -cat hdfs://HADOOP-NODE1:9000/word-count/output/part-r-00000

# supervisord configuration files
# cp ${APP_PATH}/supervisor.conf.d/*.conf /etc/supervisor/conf.d/
# start supervisord nodaemon
# /usr/bin/supervisord --nodaemon
# /usr/bin/supervisord
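
The deployment script renders core-site.xml, hdfs-site.xml, mapred-site.xml and yarn-site.xml from templates under ./build-depends/hadoop-conf/ via templateInvoke.py. For reference, a minimal sketch of what a pseudo-distributed configuration for Hadoop 2.5.2 typically contains, assuming hostname HADOOP-NODE1, NameNode RPC port 9000 and a replication factor of 1 as exported by the run script below (the actual templates may differ):

# Sketch only: minimal pseudo-distributed configuration files
HADOOP_CONF=./hadoop/etc/hadoop

cat > ${HADOOP_CONF}/core-site.xml <<'EOF'
<configuration>
  <!-- Default filesystem: the NameNode RPC address -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://HADOOP-NODE1:9000</value>
  </property>
</configuration>
EOF

cat > ${HADOOP_CONF}/hdfs-site.xml <<'EOF'
<configuration>
  <!-- Single node, so one replica is enough -->
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
EOF

cat > ${HADOOP_CONF}/mapred-site.xml <<'EOF'
<configuration>
  <!-- Run MapReduce jobs on YARN -->
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
EOF

cat > ${HADOOP_CONF}/yarn-site.xml <<'EOF'
<configuration>
  <!-- Shuffle service required by MapReduce on YARN -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
EOF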

 

Run script:

# How to run the application.
export APP_PATH=/opt/applications
export APP_NAME=Ares
export APP_Version=2.5.2

# Single node, pseudo-distributed
# HOSTNAME           IP              HDFS                                YARN
# HADOOP-NODE1       10.20.0.11      NameNode/SNameNode/DataNode         NodeManager/ResourceManager
export NameNode_HOST=HADOOP-NODE1
export NameNode_RPCPort=9000
export NameNode_HTTP_PORT=50070
export SNameNode_HOST=HADOOP-NODE1
export SNameNode_HTTP_PORT=50090
export SNameNode_HTTPS_PORT=50091
export HDFS_Replication=1
export YARN_RSC_MGR_HOST=HADOOP-NODE1
export YARN_RSC_MGR_HTTP_PORT=8088
export YARN_RSC_MGR_HTTPS_PORT=8090
export DataNodeList='HADOOP-NODE1'

mkdir -p ${APP_PATH}/${APP_NAME} \
    && mv ${APP_NAME}-${APP_Version}.zip ${APP_PATH}/${APP_NAME}/ \
    && cd ${APP_PATH}/${APP_NAME}/ \
    && unzip ${APP_NAME}-${APP_Version}.zip \
    && chmod a+x run.sh \
    && ./run.sh
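
Once run.sh finishes, a quick way to confirm that the pseudo-distributed cluster is healthy is to check the daemons and web UIs. A minimal sketch, assuming the hostname and ports exported above:

# All five daemons should show up
jps    # NameNode, SecondaryNameNode, DataNode, ResourceManager, NodeManager

# HDFS capacity and DataNode report
hdfs dfsadmin -report

# Web UIs on the ports exported above
curl -sf http://HADOOP-NODE1:50070/ > /dev/null && echo "NameNode UI reachable"
curl -sf http://HADOOP-NODE1:8088/  > /dev/null && echo "ResourceManager UI reachable"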

 

Passwordless SSH login setup:
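
start-dfs.sh and start-yarn.sh launch each daemon over SSH, so the account running Hadoop must be able to log in to HADOOP-NODE1 (and localhost) without a password. A minimal sketch of the usual key-based setup, assuming the daemons run as the current user:

# Generate an RSA key pair with an empty passphrase (skip if ~/.ssh/id_rsa already exists)
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa

# Authorize the public key for the local account
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys

# Verify: both should log in without prompting for a password
ssh localhost hostname
ssh HADOOP-NODE1 hostname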

 

Reposted from: http://uydzl.baihongyu.com/
