[Hadoop] Setting Up a Hadoop 2.7.5 Environment on Windows



Original article: https://www.cnblogs.com/memento/p/9148721.html

Preparation notes:

JDK: jdk-8u161-windows-x64.exe

Hadoop: hadoop-2.7.5.tar.gz

OS: Windows 10

I. JDK Installation and Configuration

For details, see the author's separate post: JDK Environment Configuration (illustrated).
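As a quick sanity check that the JDK is reachable from a command prompt (a minimal sketch; the version should match the installer listed above):

@rem should report java version "1.8.0_161"
java -version
@rem should print the JDK install directory
echo %JAVA_HOME%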

II. Hadoop Installation and Configuration

1. Download hadoop-2.7.5.tar.gz from http://hadoop.apache.org/releases.html;

2. Extract hadoop-2.7.5.tar.gz (the paths in this post assume it ends up under the root of drive D, at D:\hadoop);


3. Set the HADOOP_HOME environment variable to the extracted directory;


Then append the bin and sbin folders under that directory to the PATH variable;


4. Run the hadoop command in a command prompt window to verify the installation;

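A minimal sketch of the same setup done directly from a command prompt, assuming the D:\hadoop layout above (set only affects the current session; use the system Environment Variables dialog to make the change permanent):

set HADOOP_HOME=D:\hadoop
set PATH=%PATH%;%HADOOP_HOME%\bin;%HADOOP_HOME%\sbin
@rem prints the Hadoop release and build information if the wiring is correct
hadoop version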

If it complains that the JAVA_HOME path is wrong, edit the setting in %HADOOP_HOME%\etc\hadoop\hadoop-env.cmd:


set JAVA_HOME=%JAVA_HOME%
@rem change this to
set JAVA_HOME=C:\Progra~1\Java\jdk1.8.0_161
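C:\Progra~1 is the 8.3 short name for C:\Program Files; it avoids the space in the path, which hadoop-env.cmd does not handle well. If you are unsure of the short name on your machine, dir /x lists short names next to the long ones:

dir /x C:\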


III. Hadoop Configuration Files

All four files below live under %HADOOP_HOME%\etc\hadoop.

core-site.xml

<configuration>
    <!-- Directory for files Hadoop generates at runtime -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/D:/hadoop/workplace/tmp</value>
        <description>Local Hadoop temporary folder on the namenode</description>
    </property>
    <property>
        <name>hadoop.name.dir</name>
        <value>/D:/hadoop/workplace/name</value>
    </property>
    <!-- NameNode address -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
        <description>HDFS URI: filesystem://namenode-host:port</description>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
    </property>
</configuration>

hdfs-site.xml

<configuration>
    <!-- Number of replicas HDFS keeps for each block -->
    <property>
        <name>dfs.replication</name>
        <value>1</value>
        <description>Replica count; the default is 3, and it should not exceed the number of datanode servers</description>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/D:/hadoop/workplace/name</value>
        <description>Where the namenode stores the HDFS namespace metadata</description>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/D:/hadoop/workplace/data</value>
        <description>Physical storage location of data blocks on the datanode</description>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.permissions</name>
        <value>true</value>
        <description>
            If "true", enable permission checking in HDFS.
            If "false", permission checking is turned off,
            but all other behavior is unchanged.
            Switching from one parameter value to the other does not change the mode,
            owner or group of files or directories.
        </description>
    </property>
</configuration>

mapred-site.xml (if your distribution only ships mapred-site.xml.template, copy it to this name first)

<configuration>
    <!-- Run MapReduce on YARN -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>localhost:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>localhost:19888</value>
    </property>
</configuration>

yarn-site.xml

<configuration>
    <!-- NodeManagers serve map output to reducers via the mapreduce_shuffle auxiliary service -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
</configuration>
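After saving the four files, you can check that a setting is actually being picked up without starting any daemon; hdfs getconf prints the effective value of a configuration key:

@rem should print hdfs://localhost:9000
hdfs getconf -confKey fs.defaultFS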


IV. Formatting the NameNode

Running hadoop namenode -format throws an exception:

DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
18/02/09 12:18:11 ERROR util.Shell: Failed to locate the winutils binary in the hadoop binary path
java.io.IOException: Could not locate executable D:\hadoop\bin\winutils.exe in the Hadoop binaries.
        at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:382)
        at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:397)
        at org.apache.hadoop.util.Shell.<clinit>(Shell.java:390)
        at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:80)
        at org.apache.hadoop.hdfs.server.common.HdfsServerConstants$RollingUpgradeStartupOption.getAllOptionString(HdfsServerConstants.java:80)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<clinit>(NameNode.java:265)

Download the window-hadoop-bin.zip archive (see the winutils reference at the end), extract it, and replace the files under hadoop\bin with its contents, then format again:

C:\Users\Memento>hadoop namenode -format
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
18/06/07 06:25:02 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = PC-Name/IP
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.7.5
STARTUP_MSG:   classpath = D:\hadoop\etc\hadoop;D:\hadoop\share\hadoop\common\lib\activation-1.1.jar;D:\hadoop\share\hadoop\common\lib\apacheds-i18n-2.0.0-M15.jar;D:\hadoop\share\hadoop\common\lib\apacheds-kerberos-codec-2.0.0-M15.jar;D:\hadoop\share\hadoop\common\lib\api-asn1-api-1.0.0-M20.jar;D:\hadoop\share\hadoop\common\lib\api-util-1.0.0-M20.jar;D:\hadoop\share\hadoop\common\lib\asm-3.2.jar;D:\hadoop\share\hadoop\common\lib\avro-1.7.4.jar;D:\hadoop\share\hadoop\common\lib\commons-beanutils-1.7.0.jar;D:\hadoop\share\hadoop\common\lib\commons-beanutils-core-1.8.0.jar;D:\hadoop\share\hadoop\common\lib\commons-cli-1.2.jar;D:\hadoop\share\hadoop\common\lib\commons-codec-1.4.jar;D:\hadoop\share\hadoop\common\lib\commons-collections-3.2.2.jar;D:\hadoop\share\hadoop\common\lib\commons-compress-1.4.1.jar;D:\hadoop\share\hadoop\common\lib\commons-configuration-1.6.jar;D:\hadoop\share\hadoop\common\lib\commons-digester-1.8.jar;D:\hadoop\share\hadoop\common\lib\commons-httpclient-3.1.jar;D:\hadoop\share\hadoop\common\lib\commons-io-2.4.jar;D:\hadoop\share\hadoop\common\lib\commons-lang-2.6.jar;D:\hadoop\share\hadoop\common\lib\commons-logging-1.1.3.jar;D:\hadoop\share\hadoop\common\lib\commons-math3-3.1.1.jar;D:\hadoop\share\hadoop\common\lib\commons-net-3.1.jar;D:\hadoop\share\hadoop\common\lib\curator-client-2.7.1.jar;D:\hadoop\share\hadoop\common\lib\curator-framework-2.7.1.jar;D:\hadoop\share\hadoop\common\lib\curator-recipes-2.7.1.jar;D:\hadoop\share\hadoop\common\lib\gson-2.2.4.jar;D:\hadoop\share\hadoop\common\lib\guava-11.0.2.jar;D:\hadoop\share\hadoop\common\lib\hadoop-annotations-2.7.5.jar;D:\hadoop\share\hadoop\common\lib\hadoop-auth-2.7.5.jar;D:\hadoop\share\hadoop\common\lib\hamcrest-core-1.3.jar;D:\hadoop\share\hadoop\common\lib\htrace-core-3.1.0-incubating.jar;D:\hadoop\share\hadoop\common\lib\httpclient-4.2.5.jar;D:\hadoop\share\hadoop\common\lib\httpcore-4.2.5.jar;D:\hadoop\share\hadoop\common\lib\jackson-core-asl-1.9.13.jar;D:\hadoop\share\hadoop\common\lib\jackson-jaxrs-1.9.13.jar;D:\hadoop\share\hadoop\common\lib\jackson-mapper-asl-1.9.13.jar;D:\hadoop\share\hadoop\common\lib\jackson-xc-1.9.13.jar;D:\hadoop\share\hadoop\common\lib\java-xmlbuilder-0.4.jar;D:\hadoop\share\hadoop\common\lib\jaxb-api-2.2.2.jar;D:\hadoop\share\hadoop\common\lib\jaxb-impl-2.2.3-1.jar;D:\hadoop\share\hadoop\common\lib\jersey-core-1.9.jar;D:\hadoop\share\hadoop\common\lib\jersey-json-1.9.jar;D:\hadoop\share\hadoop\common\lib\jersey-server-1.9.jar;D:\hadoop\share\hadoop\common\lib\jets3t-0.9.0.jar;D:\hadoop\share\hadoop\common\lib\jettison-1.1.jar;D:\hadoop\share\hadoop\common\lib\jetty-6.1.26.jar;D:\hadoop\share\hadoop\common\lib\jetty-sslengine-6.1.26.jar;D:\hadoop\share\hadoop\common\lib\jetty-util-6.1.26.jar;D:\hadoop\share\hadoop\common\lib\jsch-0.1.54.jar;D:\hadoop\share\hadoop\common\lib\jsp-api-2.1.jar;D:\hadoop\share\hadoop\common\lib\jsr305-3.0.0.jar;D:\hadoop\share\hadoop\common\lib\junit-4.11.jar;D:\hadoop\share\hadoop\common\lib\log4j-1.2.17.jar;D:\hadoop\share\hadoop\common\lib\mockito-all-1.8.5.jar;D:\hadoop\share\hadoop\common\lib\netty-3.6.2.Final.jar;D:\hadoop\share\hadoop\common\lib\paranamer-2.3.jar;D:\hadoop\share\hadoop\common\lib\protobuf-java-2.5.0.jar;D:\hadoop\share\hadoop\common\lib\servlet-api-2.5.jar;D:\hadoop\share\hadoop\common\lib\slf4j-api-1.7.10.jar;D:\hadoop\share\hadoop\common\lib\slf4j-log4j12-1.7.10.jar;D:\hadoop\share\hadoop\common\lib\snappy-java-1.0.4.1.jar;D:\hadoop\share\hadoop\common\lib\stax-api-1.0-2.jar;D:\hadoop\share\hadoop\common\lib\xmlenc-0.52.jar;D:\h
adoop\share\hadoop\common\lib\xz-1.0.jar;D:\hadoop\share\hadoop\common\lib\zookeeper-3.4.6.jar;D:\hadoop\share\hadoop\common\hadoop-common-2.7.5-tests.jar;D:\hadoop\share\hadoop\common\hadoop-common-2.7.5.jar;D:\hadoop\share\hadoop\common\hadoop-nfs-2.7.5.jar;D:\hadoop\share\hadoop\hdfs;D:\hadoop\share\hadoop\hdfs\lib\asm-3.2.jar;D:\hadoop\share\hadoop\hdfs\lib\commons-cli-1.2.jar;D:\hadoop\share\hadoop\hdfs\lib\commons-codec-1.4.jar;D:\hadoop\share\hadoop\hdfs\lib\commons-daemon-1.0.13.jar;D:\hadoop\share\hadoop\hdfs\lib\commons-io-2.4.jar;D:\hadoop\share\hadoop\hdfs\lib\commons-lang-2.6.jar;D:\hadoop\share\hadoop\hdfs\lib\commons-logging-1.1.3.jar;D:\hadoop\share\hadoop\hdfs\lib\guava-11.0.2.jar;D:\hadoop\share\hadoop\hdfs\lib\htrace-core-3.1.0-incubating.jar;D:\hadoop\share\hadoop\hdfs\lib\jackson-core-asl-1.9.13.jar;D:\hadoop\share\hadoop\hdfs\lib\jackson-mapper-asl-1.9.13.jar;D:\hadoop\share\hadoop\hdfs\lib\jersey-core-1.9.jar;D:\hadoop\share\hadoop\hdfs\lib\jersey-server-1.9.jar;D:\hadoop\share\hadoop\hdfs\lib\jetty-6.1.26.jar;D:\hadoop\share\hadoop\hdfs\lib\jetty-util-6.1.26.jar;D:\hadoop\share\hadoop\hdfs\lib\jsr305-3.0.0.jar;D:\hadoop\share\hadoop\hdfs\lib\leveldbjni-all-1.8.jar;D:\hadoop\share\hadoop\hdfs\lib\log4j-1.2.17.jar;D:\hadoop\share\hadoop\hdfs\lib\netty-3.6.2.Final.jar;D:\hadoop\share\hadoop\hdfs\lib\netty-all-4.0.23.Final.jar;D:\hadoop\share\hadoop\hdfs\lib\protobuf-java-2.5.0.jar;D:\hadoop\share\hadoop\hdfs\lib\servlet-api-2.5.jar;D:\hadoop\share\hadoop\hdfs\lib\xercesImpl-2.9.1.jar;D:\hadoop\share\hadoop\hdfs\lib\xml-apis-1.3.04.jar;D:\hadoop\share\hadoop\hdfs\lib\xmlenc-0.52.jar;D:\hadoop\share\hadoop\hdfs\hadoop-hdfs-2.7.5-tests.jar;D:\hadoop\share\hadoop\hdfs\hadoop-hdfs-2.7.5.jar;D:\hadoop\share\hadoop\hdfs\hadoop-hdfs-nfs-2.7.5.jar;D:\hadoop\share\hadoop\yarn\lib\activation-1.1.jar;D:\hadoop\share\hadoop\yarn\lib\aopalliance-1.0.jar;D:\hadoop\share\hadoop\yarn\lib\asm-3.2.jar;D:\hadoop\share\hadoop\yarn\lib\commons-cli-1.2.jar;D:\hadoop\share\hadoop\yarn\lib\commons-codec-1.4.jar;D:\hadoop\share\hadoop\yarn\lib\commons-collections-3.2.2.jar;D:\hadoop\share\hadoop\yarn\lib\commons-compress-1.4.1.jar;D:\hadoop\share\hadoop\yarn\lib\commons-io-2.4.jar;D:\hadoop\share\hadoop\yarn\lib\commons-lang-2.6.jar;D:\hadoop\share\hadoop\yarn\lib\commons-logging-1.1.3.jar;D:\hadoop\share\hadoop\yarn\lib\guava-11.0.2.jar;D:\hadoop\share\hadoop\yarn\lib\guice-3.0.jar;D:\hadoop\share\hadoop\yarn\lib\guice-servlet-3.0.jar;D:\hadoop\share\hadoop\yarn\lib\jackson-core-asl-1.9.13.jar;D:\hadoop\share\hadoop\yarn\lib\jackson-jaxrs-1.9.13.jar;D:\hadoop\share\hadoop\yarn\lib\jackson-mapper-asl-1.9.13.jar;D:\hadoop\share\hadoop\yarn\lib\jackson-xc-1.9.13.jar;D:\hadoop\share\hadoop\yarn\lib\javax.inject-1.jar;D:\hadoop\share\hadoop\yarn\lib\jaxb-api-2.2.2.jar;D:\hadoop\share\hadoop\yarn\lib\jaxb-impl-2.2.3-1.jar;D:\hadoop\share\hadoop\yarn\lib\jersey-client-1.9.jar;D:\hadoop\share\hadoop\yarn\lib\jersey-core-1.9.jar;D:\hadoop\share\hadoop\yarn\lib\jersey-guice-1.9.jar;D:\hadoop\share\hadoop\yarn\lib\jersey-json-1.9.jar;D:\hadoop\share\hadoop\yarn\lib\jersey-server-1.9.jar;D:\hadoop\share\hadoop\yarn\lib\jettison-1.1.jar;D:\hadoop\share\hadoop\yarn\lib\jetty-6.1.26.jar;D:\hadoop\share\hadoop\yarn\lib\jetty-util-6.1.26.jar;D:\hadoop\share\hadoop\yarn\lib\jsr305-3.0.0.jar;D:\hadoop\share\hadoop\yarn\lib\leveldbjni-all-1.8.jar;D:\hadoop\share\hadoop\yarn\lib\log4j-1.2.17.jar;D:\hadoop\share\hadoop\yarn\lib\netty-3.6.2.Final.jar;D:\hadoop\share\hadoop\yarn\lib\protobuf-java-2.5.0.jar;D:\hadoop\s
hare\hadoop\yarn\lib\servlet-api-2.5.jar;D:\hadoop\share\hadoop\yarn\lib\stax-api-1.0-2.jar;D:\hadoop\share\hadoop\yarn\lib\xz-1.0.jar;D:\hadoop\share\hadoop\yarn\lib\zookeeper-3.4.6-tests.jar;D:\hadoop\share\hadoop\yarn\lib\zookeeper-3.4.6.jar;D:\hadoop\share\hadoop\yarn\hadoop-yarn-api-2.7.5.jar;D:\hadoop\share\hadoop\yarn\hadoop-yarn-applications-distributedshell-2.7.5.jar;D:\hadoop\share\hadoop\yarn\hadoop-yarn-applications-unmanaged-am-launcher-2.7.5.jar;D:\hadoop\share\hadoop\yarn\hadoop-yarn-client-2.7.5.jar;D:\hadoop\share\hadoop\yarn\hadoop-yarn-common-2.7.5.jar;D:\hadoop\share\hadoop\yarn\hadoop-yarn-registry-2.7.5.jar;D:\hadoop\share\hadoop\yarn\hadoop-yarn-server-applicationhistoryservice-2.7.5.jar;D:\hadoop\share\hadoop\yarn\hadoop-yarn-server-common-2.7.5.jar;D:\hadoop\share\hadoop\yarn\hadoop-yarn-server-nodemanager-2.7.5.jar;D:\hadoop\share\hadoop\yarn\hadoop-yarn-server-resourcemanager-2.7.5.jar;D:\hadoop\share\hadoop\yarn\hadoop-yarn-server-sharedcachemanager-2.7.5.jar;D:\hadoop\share\hadoop\yarn\hadoop-yarn-server-tests-2.7.5.jar;D:\hadoop\share\hadoop\yarn\hadoop-yarn-server-web-proxy-2.7.5.jar;D:\hadoop\share\hadoop\mapreduce\lib\aopalliance-1.0.jar;D:\hadoop\share\hadoop\mapreduce\lib\asm-3.2.jar;D:\hadoop\share\hadoop\mapreduce\lib\avro-1.7.4.jar;D:\hadoop\share\hadoop\mapreduce\lib\commons-compress-1.4.1.jar;D:\hadoop\share\hadoop\mapreduce\lib\commons-io-2.4.jar;D:\hadoop\share\hadoop\mapreduce\lib\guice-3.0.jar;D:\hadoop\share\hadoop\mapreduce\lib\guice-servlet-3.0.jar;D:\hadoop\share\hadoop\mapreduce\lib\hadoop-annotations-2.7.5.jar;D:\hadoop\share\hadoop\mapreduce\lib\hamcrest-core-1.3.jar;D:\hadoop\share\hadoop\mapreduce\lib\jackson-core-asl-1.9.13.jar;D:\hadoop\share\hadoop\mapreduce\lib\jackson-mapper-asl-1.9.13.jar;D:\hadoop\share\hadoop\mapreduce\lib\javax.inject-1.jar;D:\hadoop\share\hadoop\mapreduce\lib\jersey-core-1.9.jar;D:\hadoop\share\hadoop\mapreduce\lib\jersey-guice-1.9.jar;D:\hadoop\share\hadoop\mapreduce\lib\jersey-server-1.9.jar;D:\hadoop\share\hadoop\mapreduce\lib\junit-4.11.jar;D:\hadoop\share\hadoop\mapreduce\lib\leveldbjni-all-1.8.jar;D:\hadoop\share\hadoop\mapreduce\lib\log4j-1.2.17.jar;D:\hadoop\share\hadoop\mapreduce\lib\netty-3.6.2.Final.jar;D:\hadoop\share\hadoop\mapreduce\lib\paranamer-2.3.jar;D:\hadoop\share\hadoop\mapreduce\lib\protobuf-java-2.5.0.jar;D:\hadoop\share\hadoop\mapreduce\lib\snappy-java-1.0.4.1.jar;D:\hadoop\share\hadoop\mapreduce\lib\xz-1.0.jar;D:\hadoop\share\hadoop\mapreduce\hadoop-mapreduce-client-app-2.7.5.jar;D:\hadoop\share\hadoop\mapreduce\hadoop-mapreduce-client-common-2.7.5.jar;D:\hadoop\share\hadoop\mapreduce\hadoop-mapreduce-client-core-2.7.5.jar;D:\hadoop\share\hadoop\mapreduce\hadoop-mapreduce-client-hs-2.7.5.jar;D:\hadoop\share\hadoop\mapreduce\hadoop-mapreduce-client-hs-plugins-2.7.5.jar;D:\hadoop\share\hadoop\mapreduce\hadoop-mapreduce-client-jobclient-2.7.5-tests.jar;D:\hadoop\share\hadoop\mapreduce\hadoop-mapreduce-client-jobclient-2.7.5.jar;D:\hadoop\share\hadoop\mapreduce\hadoop-mapreduce-client-shuffle-2.7.5.jar;D:\hadoop\share\hadoop\mapreduce\hadoop-mapreduce-examples-2.7.5.jar
STARTUP_MSG:   build = https://[email protected]/repos/asf/hadoop.git -r 18065c2b6806ed4aa6a3187d77cbe21bb3dba075; compiled by 'kshvachk' on 2017-12-16T01:06Z
STARTUP_MSG:   java = 1.8.0_151
************************************************************/
18/06/07 06:25:02 INFO namenode.NameNode: createNameNode [-format]
18/06/07 06:25:03 WARN common.Util: Path /usr/hadoop/hdfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
18/06/07 06:25:03 WARN common.Util: Path /usr/hadoop/hdfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
Formatting using clusterid: CID-923c0653-5a78-46ca-a788-6502dc43047d
18/06/07 06:25:04 INFO namenode.FSNamesystem: No KeyProvider found.
18/06/07 06:25:04 INFO namenode.FSNamesystem: fsLock is fair: true
18/06/07 06:25:04 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
18/06/07 06:25:04 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
18/06/07 06:25:04 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
18/06/07 06:25:04 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
18/06/07 06:25:04 INFO blockmanagement.BlockManager: The block deletion will start around 2018 Jun 07 06:25:04
18/06/07 06:25:04 INFO util.GSet: Computing capacity for map BlocksMap
18/06/07 06:25:04 INFO util.GSet: VM type       = 64-bit
18/06/07 06:25:04 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
18/06/07 06:25:04 INFO util.GSet: capacity      = 2^21 = 2097152 entries
18/06/07 06:25:04 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
18/06/07 06:25:04 INFO blockmanagement.BlockManager: defaultReplication         = 3
18/06/07 06:25:04 INFO blockmanagement.BlockManager: maxReplication             = 512
18/06/07 06:25:04 INFO blockmanagement.BlockManager: minReplication             = 1
18/06/07 06:25:04 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
18/06/07 06:25:04 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
18/06/07 06:25:04 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
18/06/07 06:25:04 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
18/06/07 06:25:04 INFO namenode.FSNamesystem: fsOwner             = Memento (auth:SIMPLE)
18/06/07 06:25:04 INFO namenode.FSNamesystem: supergroup          = supergroup
18/06/07 06:25:04 INFO namenode.FSNamesystem: isPermissionEnabled = true
18/06/07 06:25:04 INFO namenode.FSNamesystem: HA Enabled: false
18/06/07 06:25:04 INFO namenode.FSNamesystem: Append Enabled: true
18/06/07 06:25:04 INFO util.GSet: Computing capacity for map INodeMap
18/06/07 06:25:04 INFO util.GSet: VM type       = 64-bit
18/06/07 06:25:04 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
18/06/07 06:25:04 INFO util.GSet: capacity      = 2^20 = 1048576 entries
18/06/07 06:25:04 INFO namenode.FSDirectory: ACLs enabled? false
18/06/07 06:25:04 INFO namenode.FSDirectory: XAttrs enabled? true
18/06/07 06:25:04 INFO namenode.FSDirectory: Maximum size of an xattr: 16384
18/06/07 06:25:04 INFO namenode.NameNode: Caching file names occuring more than 10 times
18/06/07 06:25:04 INFO util.GSet: Computing capacity for map cachedBlocks
18/06/07 06:25:04 INFO util.GSet: VM type       = 64-bit
18/06/07 06:25:04 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
18/06/07 06:25:04 INFO util.GSet: capacity      = 2^18 = 262144 entries
18/06/07 06:25:04 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
18/06/07 06:25:04 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
18/06/07 06:25:04 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
18/06/07 06:25:04 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
18/06/07 06:25:04 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
18/06/07 06:25:04 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
18/06/07 06:25:04 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
18/06/07 06:25:04 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
18/06/07 06:25:04 INFO util.GSet: Computing capacity for map NameNodeRetryCache
18/06/07 06:25:04 INFO util.GSet: VM type       = 64-bit
18/06/07 06:25:04 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
18/06/07 06:25:04 INFO util.GSet: capacity      = 2^15 = 32768 entries
18/06/07 06:25:04 INFO namenode.FSImage: Allocated new BlockPoolId: BP-869377568-192.168.1.104-1528323904862
18/06/07 06:25:04 INFO common.Storage: Storage directory C:\usr\hadoop\hdfs\name has been successfully formatted.
18/06/07 06:25:04 INFO namenode.FSImageFormatProtobuf: Saving image file C:\usr\hadoop\hdfs\name\current\fsimage.ckpt_0000000000000000000 using no compression
18/06/07 06:25:05 INFO namenode.FSImageFormatProtobuf: Image file C:\usr\hadoop\hdfs\name\current\fsimage.ckpt_0000000000000000000 of size 324 bytes saved in 0 seconds.
18/06/07 06:25:05 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
18/06/07 06:25:05 INFO util.ExitUtil: Exiting with status 0
18/06/07 06:25:05 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at Memento-PC/192.168.1.104
************************************************************/
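As the DEPRECATED banner notes, routing HDFS commands through the hadoop script is deprecated; the equivalent modern invocation is:

hdfs namenode -format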


V. Starting Hadoop

C:\Users\Memento>start-all.cmd
This script is Deprecated. Instead use start-dfs.cmd and start-yarn.cmd
starting yarn daemons

If the following exception appears, saying the master address cannot be resolved:

org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.io.IOException: Failed on local exception: java.net.SocketException: Unresolved address; Host Details : local host is: "master"; destination host is: (unknown):0

In that case, append a mapping for master to the C:\Windows\System32\drivers\etc\hosts file:

192.168.1.104    master

Then run the startup command start-all.cmd again.

Four command windows will then appear, in this order:

1. Apache Hadoop Distribution - hadoop namenode

2. Apache Hadoop Distribution - yarn resourcemanager

3. Apache Hadoop Distribution - yarn nodemanager

4. Apache Hadoop Distribution - hadoop datanode

VI. Checking the Started Processes with jps

C:\Users\XXXXX>jps
13460 Jps
14676 NodeManager
12444 NameNode
14204 DataNode
14348 ResourceManager
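With all four daemons up, a short HDFS smoke test confirms the filesystem accepts writes and reads (the target paths are illustrative; any small local file will do for the upload):

hdfs dfs -mkdir -p /user/test
hdfs dfs -put %HADOOP_HOME%\etc\hadoop\core-site.xml /user/test/
hdfs dfs -ls /user/test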


VII. MapReduce Jobs and HDFS Files

Visit localhost:8088 (the YARN ResourceManager UI) and localhost:50070 (the NameNode UI) in a browser:

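To exercise MapReduce end to end, the examples jar shipped with the distribution (visible in the classpath dump above) can run a word count; the HDFS input and output paths here are illustrative:

hdfs dfs -mkdir -p /user/test/input
hdfs dfs -put %HADOOP_HOME%\etc\hadoop\hadoop-env.cmd /user/test/input/
hadoop jar %HADOOP_HOME%\share\hadoop\mapreduce\hadoop-mapreduce-examples-2.7.5.jar wordcount /user/test/input /user/test/output
@rem results land in part files under the output directory
hdfs dfs -cat /user/test/output/part-r-00000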

At this point, the Hadoop environment on Windows is fully set up!

Shutting down Hadoop:

C:\Users\XXXXX>stop-all.cmd
This script is Deprecated. Instead use stop-dfs.cmd and stop-yarn.cmd
SUCCESS: Sent termination signal to the process with PID 27204.
SUCCESS: Sent termination signal to the process with PID 7884.
stopping yarn daemons
SUCCESS: Sent termination signal to the process with PID 20464.
SUCCESS: Sent termination signal to the process with PID 12516.

INFO: No tasks are running which match the specified criteria.


References:

winutils: https://github.com/steveloughran/winutils

不想下火車的人: https://www.cnblogs.com/wuxun1997/p/6847950.html

bin attachment download: https://pan.baidu.com/s/1XCTTQVKcsMoaLOLh4X4bhw


By. Memento

  • 通過以下方式可以高效,並保證數據同步的可靠性 1.API設計 使用RESTful設計,確保API端點明確,並使用適當的HTTP方法(如POST用於創建,PUT用於更新)。 設計清晰的請求和響應模型,以確保客戶端能夠理解預期格式。 2.數據驗證 在伺服器端進行嚴格的數據驗證,確保接收到的數據符合預期格 ...