
How to compile hadoop-2.7.3 and set up a cluster environment

2023-06-03 05:06:19 · 461 views · by 薄情痞子

This article walks through compiling hadoop-2.7.3 from source and setting up a cluster environment. The steps are simple and clear; follow along in order and work through them on your own machine.

Environment: CentOS 6.5


1. Download the hadoop-2.7.3 source
[root@sht-sgmhadoopnn-01 ~]# mkdir -p learnproject/compilesoft
[root@sht-sgmhadoopnn-01 ~]# cd learnproject/compilesoft
[root@sht-sgmhadoopnn-01 compilesoft]# wget http://www-eu.apache.org/dist/hadoop/common/hadoop-2.7.3/hadoop-2.7.3-src.tar.gz
[root@sht-sgmhadoopnn-01 compilesoft]# tar -xzvf hadoop-2.7.3-src.tar.gz
[root@sht-sgmhadoopnn-01 compilesoft]# cd hadoop-2.7.3-src
[root@sht-sgmhadoopnn-01 hadoop-2.7.3-src]# cat BUILDING.txt
Build instructions for Hadoop

----------------------------------------------------------------------------------
Requirements:

* Unix System
* jdk 1.7+
* Maven 3.0 or later
* Findbugs 1.3.9 (if running findbugs)
* ProtocolBuffer 2.5.0
* CMake 2.6 or newer (if compiling native code), must be 3.0 or newer on Mac
* Zlib devel (if compiling native code)
* openssl devel ( if compiling native hadoop-pipes and to get the best hdfs encryption performance )
* linux FUSE (Filesystem in Userspace) version 2.6 or above ( if compiling fuse_dfs )
* Internet connection for first build (to fetch all Maven and Hadoop dependencies)
----------------------------------------------------------------------------------
Installing required packages for clean install of ubuntu 14.04 LTS Desktop:

* Oracle JDK 1.7 (preferred)
  $ sudo apt-get purge openjdk*
  $ sudo apt-get install software-properties-common
  $ sudo add-apt-repository ppa:webupd8team/java
  $ sudo apt-get update
  $ sudo apt-get install oracle-java7-installer
* Maven
  $ sudo apt-get -y install maven
* Native libraries
  $ sudo apt-get -y install build-essential autoconf automake libtool cmake zlib1g-dev pkg-config libssl-dev
* ProtocolBuffer 2.5.0 (required)
  $ sudo apt-get -y install libprotobuf-dev protobuf-compiler

Optional packages:

* Snappy compression
  $ sudo apt-get install snappy libsnappy-dev
* Bzip2
  $ sudo apt-get install bzip2 libbz2-dev
* Jansson (C Library for JSON)
  $ sudo apt-get install libjansson-dev
* Linux FUSE
  $ sudo apt-get install fuse libfuse-dev
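Before kicking off the build on CentOS, it can save time to confirm that the required tools from the list above are present and to record their versions. A minimal sketch (my own helper, assuming the tools, once installed, are on PATH):

```shell
# Print the first version line for each required build tool, or flag it as missing.
# Each call passes the tool name plus its own version flag (java uses -version).
check() {
  tool=$1
  if command -v "$tool" >/dev/null 2>&1; then
    printf '%-6s: %s\n' "$tool" "$("$@" 2>&1 | head -n 1)"
  else
    printf '%-6s: NOT FOUND\n' "$tool"
  fi
}

check java -version
check mvn -version
check protoc --version
check cmake --version
check gcc --version
```

Any `NOT FOUND` line points at a prerequisite still to install in the steps below.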

 

2. Install dependency packages
[root@sht-sgmhadoopnn-01 compilesoft]# yum install svn autoconf automake libtool cmake ncurses-devel openssl-devel gcc*

3. Install the JDK
[root@sht-sgmhadoopnn-01 compilesoft]# vi /etc/profile
export JAVA_HOME=/usr/java/jdk1.7.0_67-cloudera
export PATH=$JAVA_HOME/bin:$PATH

[root@sht-sgmhadoopnn-01 compilesoft]# source /etc/profile
[root@sht-sgmhadoopnn-01 compilesoft]# java -version
java version "1.7.0_67"
Java(TM) SE Runtime Environment (build 1.7.0_67-b01)
Java HotSpot(TM) 64-Bit Server VM (build 24.65-b04, mixed mode)
You have mail in /var/spool/mail/root
[root@sht-sgmhadoopnn-01 compilesoft]#
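A mismatch between the `java` on PATH and the configured `JAVA_HOME` is a common cause of confusing build failures, so a small sanity check is worth running after editing `/etc/profile`. A sketch (the OK/WARNING wording is my own, not Hadoop output):

```shell
# Warn when the java binary resolved from PATH does not live under JAVA_HOME.
java_home_check() {
  resolved=$(command -v java 2>/dev/null)
  case "$resolved" in
    "$JAVA_HOME"/*) echo "OK: java is $resolved" ;;
    *)              echo "WARNING: java=$resolved is not under JAVA_HOME=$JAVA_HOME" ;;
  esac
}

java_home_check
```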


4. Install Maven
[root@sht-sgmhadoopnn-01 compilesoft]# wget http://ftp.cuhk.edu.hk/pub/packages/apache.org/maven/maven-3/3.3.9/binaries/apache-maven-3.3.9-bin.tar.gz -O apache-maven-3.3.9-bin.tar.gz
[root@sht-sgmhadoopnn-01 compilesoft]# tar xvf apache-maven-3.3.9-bin.tar.gz
[root@sht-sgmhadoopnn-01 compilesoft]# vi /etc/profile

export JAVA_HOME=/usr/java/jdk1.7.0_67-cloudera
export MAVEN_HOME=/root/learnproject/compilesoft/apache-maven-3.3.9
# To prevent Java out-of-memory errors during the build, add the following environment variable
export MAVEN_OPTS="-Xmx2048m -XX:MaxPermSize=512m"

export PATH=$MAVEN_HOME/bin:$JAVA_HOME/bin:$PATH

[root@sht-sgmhadoopnn-01 compilesoft]# source /etc/profile
[root@sht-sgmhadoopnn-01 compilesoft]# mvn -version
Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-11T00:41:47+08:00)
Maven home: /root/learnproject/compilesoft/apache-maven-3.3.9
Java version: 1.7.0_67, vendor: Oracle Corporation
Java home: /usr/java/jdk1.7.0_67-cloudera/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "2.6.32-431.el6.x86_64", arch: "amd64", family: "unix"
You have new mail in /var/spool/mail/root
[root@sht-sgmhadoopnn-01 apache-maven-3.3.9]#


5. Build and install protobuf
[root@sht-sgmhadoopnn-01 compilesoft]# wget ftp://ftp.netbsd.org/pub/pkgsrc/distfiles/protobuf-2.5.0.tar.gz -O protobuf-2.5.0.tar.gz
[root@hadoop-01 compilesoft]# tar -zxvf protobuf-2.5.0.tar.gz
[root@hadoop-01 compilesoft]# cd protobuf-2.5.0/
[root@hadoop-01 protobuf-2.5.0]# ./configure
[root@hadoop-01 protobuf-2.5.0]# make
[root@hadoop-01 protobuf-2.5.0]# make install


# Check the protobuf version to verify the install succeeded
[root@hadoop-01 protobuf-2.5.0]# protoc --version
protoc: error while loading shared libraries: libprotobuf.so.8: cannot open shared object file: No such file or directory
[root@hadoop-01 protobuf-2.5.0]# export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib
[root@hadoop-01 protobuf-2.5.0]# protoc --version
libprotoc 2.5.0
[root@hadoop-01 protobuf-2.5.0]#
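The `LD_LIBRARY_PATH` export above only lasts for the current shell session. One common way to make `/usr/local/lib` permanently visible to the dynamic linker is an `ld.so.conf.d` entry (a sketch to run as root; the file name `local-lib.conf` is an arbitrary choice of mine):

```shell
# Register /usr/local/lib with the dynamic linker so that libprotobuf.so.8
# (and, later, libsnappy) resolve without exporting LD_LIBRARY_PATH. Run as root.
echo '/usr/local/lib' > /etc/ld.so.conf.d/local-lib.conf
ldconfig
protoc --version
```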


6. Install snappy
[root@sht-sgmhadoopnn-01 compilesoft]# wget http://pkgs.fedoraproject.org/repo/pkgs/snappy/snappy-1.1.1.tar.gz/8887e3b7253b22a31f5486bca3cbc1c2/snappy-1.1.1.tar.gz
# Run the following commands as root
[root@sht-sgmhadoopnn-01 compilesoft]# tar -zxvf snappy-1.1.1.tar.gz
[root@sht-sgmhadoopnn-01 compilesoft]# cd snappy-1.1.1/
[root@sht-sgmhadoopnn-01 snappy-1.1.1]# ./configure
[root@sht-sgmhadoopnn-01 snappy-1.1.1]# make
[root@sht-sgmhadoopnn-01 snappy-1.1.1]# make install

# List the snappy library files
[root@sht-sgmhadoopnn-01 snappy-1.1.1]# ls -lh /usr/local/lib |grep snappy
-rw-r--r--  1 root root 229K Jun 21 15:46 libsnappy.a
-rwxr-xr-x  1 root root  953 Jun 21 15:46 libsnappy.la
lrwxrwxrwx  1 root root   18 Jun 21 15:46 libsnappy.so -> libsnappy.so.1.2.0
lrwxrwxrwx  1 root root   18 Jun 21 15:46 libsnappy.so.1 -> libsnappy.so.1.2.0
-rwxr-xr-x  1 root root 145K Jun 21 15:46 libsnappy.so.1.2.0
[root@sht-sgmhadoopnn-01 snappy-1.1.1]#


7. Build
[root@sht-sgmhadoopnn-01 compilesoft]# cd hadoop-2.7.3-src

# Full build; to resume after a failure without rebuilding everything,
# drop `clean` and run `mvn package -Pdist,native -DskipTests -Dtar` instead
[root@sht-sgmhadoopnn-01 hadoop-2.7.3-src]# mvn clean package -Pdist,native -DskipTests -Dtar
[INFO] Executing tasks
main:
     [exec] $ tar cf hadoop-2.7.3.tar hadoop-2.7.3
     [exec] $ gzip -f hadoop-2.7.3.tar
     [exec]
     [exec] Hadoop dist tar available at: /root/learnproject/compilesoft/hadoop-2.7.3-src/hadoop-dist/target/hadoop-2.7.3.tar.gz
     [exec]
[INFO] Executed tasks
[INFO]
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-dist ---
[INFO] Building jar: /root/learnproject/compilesoft/hadoop-2.7.3-src/hadoop-dist/target/hadoop-dist-2.7.3-javadoc.jar
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Apache Hadoop Main ................................. SUCCESS [ 14.707 s]
[INFO] Apache Hadoop Build Tools .......................... SUCCESS [  6.832 s]
[INFO] Apache Hadoop Project POM .......................... SUCCESS [ 12.989 s]
[INFO] Apache Hadoop Annotations .......................... SUCCESS [ 14.258 s]
[INFO] Apache Hadoop Assemblies ........................... SUCCESS [  0.411 s]
[INFO] Apache Hadoop Project Dist POM ..................... SUCCESS [  4.814 s]
[INFO] Apache Hadoop Maven Plugins ........................ SUCCESS [ 23.566 s]
[INFO] Apache Hadoop MiniKDC .............................. SUCCESS [02:31 min]
[INFO] Apache Hadoop Auth ................................. SUCCESS [ 29.587 s]
[INFO] Apache Hadoop Auth Examples ........................ SUCCESS [ 13.954 s]
[INFO] Apache Hadoop Common ............................... SUCCESS [03:03 min]
[INFO] Apache Hadoop NFS .................................. SUCCESS [  9.285 s]
[INFO] Apache Hadoop KMS .................................. SUCCESS [ 45.068 s]
[INFO] Apache Hadoop Common Project ....................... SUCCESS [  0.049 s]
[INFO] Apache Hadoop HDFS ................................. SUCCESS [03:49 min]
[INFO] Apache Hadoop HttpFS ............................... SUCCESS [01:08 min]
[INFO] Apache Hadoop HDFS BookKeeper Journal .............. SUCCESS [ 28.935 s]
[INFO] Apache Hadoop HDFS-NFS ............................. SUCCESS [  4.599 s]
[INFO] Apache Hadoop HDFS Project ......................... SUCCESS [  0.044 s]
[INFO] hadoop-yarn ........................................ SUCCESS [  0.043 s]
[INFO] hadoop-yarn-api .................................... SUCCESS [02:49 min]
[INFO] hadoop-yarn-common ................................. SUCCESS [ 40.792 s]
[INFO] hadoop-yarn-server ................................. SUCCESS [  0.041 s]
[INFO] hadoop-yarn-server-common .......................... SUCCESS [ 15.750 s]
[INFO] hadoop-yarn-server-nodemanager ..................... SUCCESS [ 25.311 s]
[INFO] hadoop-yarn-server-web-proxy ....................... SUCCESS [  6.415 s]
[INFO] hadoop-yarn-server-applicationhistoryservice ....... SUCCESS [ 12.274 s]
[INFO] hadoop-yarn-server-resourcemanager ................. SUCCESS [ 27.555 s]
[INFO] hadoop-yarn-server-tests ........................... SUCCESS [  7.751 s]
[INFO] hadoop-yarn-client ................................. SUCCESS [ 11.347 s]
[INFO] hadoop-yarn-server-sharedcachemanager .............. SUCCESS [  5.612 s]
[INFO] hadoop-yarn-applications ........................... SUCCESS [  0.038 s]
[INFO] hadoop-yarn-applications-distributedshell .......... SUCCESS [  4.029 s]
[INFO] hadoop-yarn-applications-unmanaged-am-launcher ..... SUCCESS [  2.611 s]
[INFO] hadoop-yarn-site ................................... SUCCESS [  0.077 s]
[INFO] hadoop-yarn-registry ............................... SUCCESS [  8.045 s]
[INFO] hadoop-yarn-project ................................ SUCCESS [  5.456 s]
[INFO] hadoop-mapreduce-client ............................ SUCCESS [  0.226 s]
[INFO] hadoop-mapreduce-client-core ....................... SUCCESS [ 28.462 s]
[INFO] hadoop-mapreduce-client-common ..................... SUCCESS [ 25.872 s]
[INFO] hadoop-mapreduce-client-shuffle .................... SUCCESS [  6.697 s]
[INFO] hadoop-mapreduce-client-app ........................ SUCCESS [ 14.121 s]
[INFO] hadoop-mapreduce-client-hs ......................... SUCCESS [  9.328 s]
[INFO] hadoop-mapreduce-client-jobclient .................. SUCCESS [ 23.801 s]
[INFO] hadoop-mapreduce-client-hs-plugins ................. SUCCESS [  2.412 s]
[INFO] Apache Hadoop MapReduce Examples ................... SUCCESS [  8.876 s]
[INFO] hadoop-mapreduce ................................... SUCCESS [  4.237 s]
[INFO] Apache Hadoop MapReduce Streaming .................. SUCCESS [ 14.285 s]
[INFO] Apache Hadoop Distributed Copy ..................... SUCCESS [ 19.759 s]
[INFO] Apache Hadoop Archives ............................. SUCCESS [  3.069 s]
[INFO] Apache Hadoop Rumen ................................ SUCCESS [  7.446 s]
[INFO] Apache Hadoop Gridmix .............................. SUCCESS [  5.765 s]
[INFO] Apache Hadoop Data Join ............................ SUCCESS [  3.752 s]
[INFO] Apache Hadoop Ant Tasks ............................ SUCCESS [  2.771 s]
[INFO] Apache Hadoop Extras ............................... SUCCESS [  5.612 s]
[INFO] Apache Hadoop Pipes ................................ SUCCESS [ 10.332 s]
[INFO] Apache Hadoop OpenStack support .................... SUCCESS [  7.131 s]
[INFO] Apache Hadoop Amazon Web Services support .......... SUCCESS [01:32 min]
[INFO] Apache Hadoop Azure support ........................ SUCCESS [ 10.622 s]
[INFO] Apache Hadoop Client ............................... SUCCESS [ 12.540 s]
[INFO] Apache Hadoop Mini-Cluster ......................... SUCCESS [  1.142 s]
[INFO] Apache Hadoop Scheduler Load Simulator ............. SUCCESS [  7.354 s]
[INFO] Apache Hadoop Tools Dist ........................... SUCCESS [ 12.269 s]
[INFO] Apache Hadoop Tools ................................ SUCCESS [  0.035 s]
[INFO] Apache Hadoop Distribution ......................... SUCCESS [ 58.051 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 26:29 min
[INFO] Finished at: 2016-12-24T21:07:09+08:00
[INFO] Final Memory: 214M/740M
[INFO] ------------------------------------------------------------------------
You have mail in /var/spool/mail/root
[root@sht-sgmhadoopnn-01 hadoop-2.7.3-src]#
[root@sht-sgmhadoopnn-01 hadoop-2.7.3-src]# cp /root/learnproject/compilesoft/hadoop-2.7.3-src/hadoop-dist/target/hadoop-2.7.3.tar.gz ../../
You have mail in /var/spool/mail/root
[root@sht-sgmhadoopnn-01 hadoop-2.7.3-src]# cd ../../
[root@sht-sgmhadoopnn-01 learnproject]# ll
total 193152
drwxr-xr-x 5 root root      4096 Dec 24 20:24 compilesoft
-rw-r--r-- 1 root root 197782815 Dec 24 21:16 hadoop-2.7.3.tar.gz
[root@sht-sgmhadoopnn-01 learnproject]#

 
8. Set up an HDFS HA + YARN HA cluster (5 nodes)
References:
http://blog.itpub.net/30089851/viewspace-1994585/
https://github.com/Hackeruncle/Hadoop
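Before walking through the HA setup in the links above, the freshly built tarball has to reach every node. A minimal dry-run sketch (the hostnames `node01`…`node05` are hypothetical placeholders, not the real cluster names):

```shell
# Print the scp/ssh commands that would push the tarball to each node and
# unpack it there. Pipe the output to `sh` (or drop the echos) to actually run it.
deploy_cmds() {
  tarball=$1; shift
  for node in "$@"; do
    echo "scp $tarball root@$node:/root/learnproject/"
    echo "ssh root@$node tar -xzf /root/learnproject/$tarball -C /root/learnproject/"
  done
}

deploy_cmds hadoop-2.7.3.tar.gz node01 node02 node03 node04 node05
```

The dry-run form makes it easy to review the exact commands before touching five machines at once.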

9. After setting up the cluster, verify the version and the supported compression codecs
[root@sht-sgmhadoopnn-01 app]# hadoop version
Hadoop 2.7.3
Subversion Unknown -r Unknown
Compiled by root on 2016-12-24T12:45Z
Compiled with protoc 2.5.0
From source with checksum 2e4ce5f957ea4db193bce3734ff29ff4
This command was run using /root/learnproject/app/hadoop/share/hadoop/common/hadoop-common-2.7.3.jar

[root@sht-sgmhadoopnn-01 app]# hadoop checknative
16/12/25 15:55:43 INFO bzip2.Bzip2Factory: Successfully loaded & initialized native-bzip2 library system-native
16/12/25 15:55:43 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
Native library checking:
hadoop:  true /root/learnproject/app/hadoop/lib/native/libhadoop.so.1.0.0
zlib:    true /lib64/libz.so.1
snappy:  true /usr/local/lib/libsnappy.so.1
lz4:     true revision:99
bzip2:   true /lib64/libbz2.so.1
openssl: true /usr/lib64/libcrypto.so
 
[root@sht-sgmhadoopnn-01 app]# file /root/learnproject/app/hadoop/lib/native/libhadoop.so.1.0.0
/root/learnproject/app/hadoop/lib/native/libhadoop.so.1.0.0: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, not stripped
[root@sht-sgmhadoopnn-01 app]#

Thanks for reading. That is the whole procedure for compiling hadoop-2.7.3 and setting up a cluster environment. Exact behavior can vary with your environment, so verify the steps in practice yourself.
