
GreenPlum 5.10.0 集群部署

2024-04-02 19:04:59 · 薄情痞子

Part 1: System Initialization

1.1 Deployment Environment

No.  IP Address     Hostname     Memory  OS Version       Kernel Version
1    192.168.61.61  gpmaster61   16 GB   CentOS 7.5.1804  3.10.0-862.9.1.el7.x86_64
2    192.168.61.62  gpsegment62  16 GB   CentOS 7.5.1804  3.10.0-862.9.1.el7.x86_64
3    192.168.61.63  gpsegment63  16 GB   CentOS 7.5.1804  3.10.0-862.9.1.el7.x86_64
4    192.168.61.64  gpsegment64  16 GB   CentOS 7.5.1804  3.10.0-862.9.1.el7.x86_64
5    192.168.61.65  gpstandby65  16 GB   CentOS 7.5.1804  3.10.0-862.9.1.el7.x86_64

1.2 Set Hostnames and Sync Time

# 192.168.61.61
hostnamectl set-hostname gpmaster61
ntpdate -u ntp1.aliyun.com

# 192.168.61.62
hostnamectl set-hostname gpsegment62
ntpdate -u ntp1.aliyun.com

# 192.168.61.63
hostnamectl set-hostname gpsegment63
ntpdate -u ntp1.aliyun.com

# 192.168.61.64
hostnamectl set-hostname gpsegment64
ntpdate -u ntp1.aliyun.com

# 192.168.61.65
hostnamectl set-hostname gpstandby65
ntpdate -u ntp1.aliyun.com
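A one-shot ntpdate drifts again after boot; on a cluster it is worth keeping the clocks synchronized continuously, either with ntpd/chronyd or, as a minimal sketch, a root crontab entry (the hourly interval here is an assumption, not from the original steps):

```shell
# root crontab entry: re-sync against ntp1.aliyun.com every hour
0 * * * * /usr/sbin/ntpdate -u ntp1.aliyun.com >/dev/null 2>&1
```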

1.3 Add hosts Entries

cat > /etc/hosts << EOF
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.61.61 gpmaster61
192.168.61.62 gpsegment62
192.168.61.63 gpsegment63
192.168.61.64 gpsegment64
192.168.61.65 gpstandby65
EOF
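Greenplum's utilities fail in confusing ways when any node's hostname is missing from /etc/hosts. A small sketch (the `check_hosts` helper is illustrative, not part of the Greenplum tooling) that flags any node-list entry with no hosts-file line:

```shell
#!/bin/sh
# check_hosts NODES_FILE HOSTS_FILE
# Prints every hostname from NODES_FILE that has no entry in HOSTS_FILE;
# returns non-zero if anything is missing.
check_hosts() {
  missing=0
  while read -r host; do
    [ -z "$host" ] && continue
    grep -qw "$host" "$2" || { echo "missing: $host"; missing=1; }
  done < "$1"
  return $missing
}

# e.g. check_hosts /root/all_nodes /etc/hosts
```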

1.4 Tune Kernel Parameters

cat > /etc/sysctl.conf << EOF
# sysctl settings are defined through files in
# /usr/lib/sysctl.d/, /run/sysctl.d/, and /etc/sysctl.d/.
#
# Vendors settings live in /usr/lib/sysctl.d/.
# To override a whole file, create a new file with the same name in
# /etc/sysctl.d/ and put new settings there. To override
# only specific settings, add a file with a lexically later
# name in /etc/sysctl.d/ and put new settings there.
#
# For more information, see sysctl.conf(5) and sysctl.d(5).

kernel.shmmax = 500000000
kernel.shmmni = 4096
kernel.shmall = 4000000000
kernel.sem = 500 1024000 200 4096
kernel.sysrq = 1
kernel.core_uses_pid = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.msgmni = 2048
net.ipv4.tcp_syncookies = 1
net.ipv4.ip_forward = 0
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.conf.all.arp_filter = 1
net.ipv4.ip_local_port_range = 1025 65535
net.core.netdev_max_backlog = 10000
net.core.rmem_max = 2097152
net.core.wmem_max = 2097152
vm.overcommit_memory = 2
vm.swappiness = 1
kernel.pid_max = 655350
EOF
sysctl -p
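The fixed kernel.shmmax/kernel.shmall values above are the Greenplum 5-era template; later Greenplum documentation derives them from physical memory instead (shmall = half the physical pages, shmmax = shmall × page size). A hedged helper to compute those values, applying that newer guideline here as an assumption:

```shell
#!/bin/sh
# calc_shm PHYS_PAGES PAGE_SIZE -> prints "shmall shmmax"
# following the guideline shmall = PHYS_PAGES/2, shmmax = shmall * PAGE_SIZE
calc_shm() {
  shmall=$(( $1 / 2 ))
  echo "$shmall $(( shmall * $2 ))"
}

# on a live host:
# calc_shm "$(getconf _PHYS_PAGES)" "$(getconf PAGE_SIZE)"
```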

1.5 Raise the Linux Resource Limits

cat > /etc/security/limits.conf << EOF
* soft nofile 65536
* hard nofile 65536
* soft nproc 131072
* hard nproc 131072
EOF
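limits.conf only affects new logins, so these limits should be confirmed with `ulimit -n` and `ulimit -u` in a fresh session before initializing the cluster. As a sketch, a helper (illustrative, not a standard tool) that reads a configured value back out of a limits.conf-style file:

```shell
#!/bin/sh
# limit_value FILE TYPE ITEM -> prints the value configured for "* TYPE ITEM"
limit_value() {
  awk -v t="$2" -v i="$3" '$1 == "*" && $2 == t && $3 == i { print $4 }' "$1"
}

# e.g. limit_value /etc/security/limits.conf soft nofile
```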

1.6 Disable SELinux and the Firewall

setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
systemctl stop firewalld && systemctl disable firewalld

1.7 Create and Mount an XFS Filesystem

# Dedicate a disk to Greenplum, format it as XFS, and adjust the mount options
mkfs.xfs /dev/sdb1
mkdir /greenplum
mount /dev/sdb1 /greenplum

cat >> /etc/fstab << EOF
/dev/sdb1 /greenplum xfs nodev,noatime,inode64,allocsize=16m 0 0
EOF
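A typo in /etc/fstab can leave the host unbootable, so it pays to re-mount once (`umount /greenplum && mount /greenplum`) before the reboot in 1.9. As a sketch, a helper (illustrative only) that reads the options field back for a given mount point:

```shell
#!/bin/sh
# fstab_opts FILE MOUNTPOINT -> prints the mount options field for MOUNTPOINT
fstab_opts() {
  awk -v mp="$2" '$2 == mp { print $4 }' "$1"
}

# e.g. fstab_opts /etc/fstab /greenplum
```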

1.8 Disable THP and Increase the Disk Read-Ahead

# Disable transparent huge pages (THP)
cat /sys/kernel/mm/transparent_hugepage/enabled # check the current THP setting
grubby --update-kernel=ALL --args="transparent_hugepage=never" # set to never on every boot

# Create an init.d script
echo '#!/bin/sh
case $1 in
  start)
    if [ -d /sys/kernel/mm/transparent_hugepage ]; then
      thp_path=/sys/kernel/mm/transparent_hugepage
    elif [ -d /sys/kernel/mm/redhat_transparent_hugepage ]; then
      thp_path=/sys/kernel/mm/redhat_transparent_hugepage
    else
      exit 0
    fi

    echo never > ${thp_path}/enabled
    echo never > ${thp_path}/defrag

    unset thp_path
    ;;
esac' > /etc/init.d/disable-transparent-hugepages

# Register a systemd unit
echo '[Unit]
Description=Disable Transparent Hugepages
After=multi-user.target

[Service]
ExecStart=/etc/init.d/disable-transparent-hugepages start
Type=simple

[Install]
WantedBy=multi-user.target' > /etc/systemd/system/disable-thp.service

# Disk read-ahead (in 512-byte sectors)
/sbin/blockdev --getra /dev/sdb1 # check the current value
/sbin/blockdev --setra 65535 /dev/sdb1 # set the new value

# Create an init.d script
echo '#!/bin/sh
device_name=/dev/sdb1
case $1 in
  start)
    if mount | grep "^${device_name}" > /dev/null; then
      /sbin/blockdev --setra 65535 ${device_name}
    else
      exit 0
    fi

    unset device_name
    ;;
esac' > /etc/init.d/blockdev-setra-sdb

# Register a systemd unit
echo '[Unit]
Description=blockdev --setra 65535 for /dev/sdb1
After=multi-user.target

[Service]
ExecStart=/etc/init.d/blockdev-setra-sdb start
Type=simple

[Install]
WantedBy=multi-user.target' > /etc/systemd/system/blockdev-setra-sdb.service

# Make the scripts executable and enable both services at boot
# (systemd unit files should be 0644, not executable)
chmod 755 /etc/init.d/disable-transparent-hugepages
chmod 755 /etc/init.d/blockdev-setra-sdb
chmod 644 /etc/systemd/system/disable-thp.service
chmod 644 /etc/systemd/system/blockdev-setra-sdb.service
systemctl enable disable-thp blockdev-setra-sdb
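After the reboot in 1.9, /sys/kernel/mm/transparent_hugepage/enabled should show `[never]`; the bracketed token marks the active mode. A sketch of that check (the `thp_disabled` helper is illustrative):

```shell
#!/bin/sh
# thp_disabled STATUS_LINE -> succeeds only when the active THP mode is "never";
# STATUS_LINE is the content of /sys/kernel/mm/transparent_hugepage/enabled
thp_disabled() {
  case "$1" in
    *"[never]"*) return 0 ;;
    *)           return 1 ;;
  esac
}

# live check:
# thp_disabled "$(cat /sys/kernel/mm/transparent_hugepage/enabled)" && echo "THP disabled"
```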

1.9 Reboot to Apply the Configuration

reboot

Part 2: Install Greenplum

2.1 Install Dependency Packages on All Nodes

yum -y install epel-release
yum -y install wget cmake3 git gcc gcc-c++ bison flex libedit-devel zlib zlib-devel perl-devel perl-ExtUtils-Embed python-devel libevent libevent-devel libxml2 libxml2-devel libcurl libcurl-devel bzip2 bzip2-devel net-tools libffi-devel openssl-devel

2.2 Create the Installation Directory

mkdir /greenplum/soft

2.3 Install the Software on the Master Node

./greenplum-db-5.10.0-rhel7-x86_64.bin

If Customer has a currently enforceable, written and separately signed

*****************************************************************************
Do you accept the Pivotal Database license agreement? [yes|no]
*****************************************************************************

yes # accept the license

*****************************************************************************
Provide the installation path for Greenplum Database or press ENTER to 
accept the default installation path: /usr/local/greenplum-db-5.10.0
*****************************************************************************

/greenplum/soft/greenplum-db-5.10.0 # specify the installation directory

*****************************************************************************
Install Greenplum Database into /greenplum/soft/greenplum-db-5.10.0? [yes|no]
*****************************************************************************

yes # confirm the installation

*****************************************************************************
/greenplum/soft/greenplum-db-5.10.0 does not exist.
Create /greenplum/soft/greenplum-db-5.10.0 ? [yes|no]
(Selecting no will exit the installer)
*****************************************************************************

yes # create the installation directory

Extracting product to /greenplum/soft/greenplum-db-5.10.0

*****************************************************************************
Installation complete.
Greenplum Database is installed in /greenplum/soft/greenplum-db-5.10.0

Pivotal Greenplum documentation is available
for download at http://gpdb.docs.pivotal.io
*****************************************************************************

2.4 Create a File Listing All Hosts

cat > all_nodes << EOF
gpmaster61
gpsegment62
gpsegment63
gpsegment64
gpstandby65
EOF

2.5 Set Up Passwordless SSH Between Hosts

source /greenplum/soft/greenplum-db/greenplum_path.sh
gpssh-exkeys -f /root/all_nodes
[STEP 1 of 5] create local ID and authorize on local host

[STEP 2 of 5] keyscan all hosts and update known_hosts file

[STEP 3 of 5] authorize current user on remote hosts
  ... send to gpsegment62
  ***
  *** Enter password for gpsegment62: # host password
  ... send to gpsegment63
  ... send to gpsegment64
  ... send to gpstandby65

[STEP 4 of 5] determine common authentication file content

[STEP 5 of 5] copy authentication files to all remote hosts
  ... finished key exchange with gpsegment62
  ... finished key exchange with gpsegment63
  ... finished key exchange with gpsegment64
  ... finished key exchange with gpstandby65

[INFO] completed successfully

2.6 Check Host Connectivity

gpssh -f /root/all_nodes -e "ls -l"

2.7 Create the Installation User on All Hosts

gpssh -f /root/all_nodes
=> groupadd -g 3000 gpadmin
[gpsegment64]
[ gpmaster61]
[gpstandby65]
[gpsegment62]
[gpsegment63]
=> useradd -u 3000 -g gpadmin -m -s /bin/bash gpadmin
[gpsegment64]
[ gpmaster61]
[gpstandby65]
[gpsegment62]
[gpsegment63]
=> echo gpadmin | passwd  gpadmin --stdin
[gpsegment64] Changing password for user gpadmin.
[gpsegment64] passwd: all authentication tokens updated successfully.
[ gpmaster61] Changing password for user gpadmin.
[ gpmaster61] passwd: all authentication tokens updated successfully.
[gpstandby65] Changing password for user gpadmin.
[gpstandby65] passwd: all authentication tokens updated successfully.
[gpsegment62] Changing password for user gpadmin.
[gpsegment62] passwd: all authentication tokens updated successfully.
[gpsegment63] Changing password for user gpadmin.
[gpsegment63] passwd: all authentication tokens updated successfully.
=> chown -R gpadmin.gpadmin /greenplum
[gpsegment64]
[ gpmaster61]
[gpstandby65]
[gpsegment62]
[gpsegment63]
=> exit

Part 3: Distribute the Greenplum Software to All Nodes

3.1 Switch Users and Initialize the Environment Variables

su - gpadmin
cat >> .bashrc << EOF
export MASTER_DATA_DIRECTORY=/greenplum/data/gpmaster/gpseg-1
source /greenplum/soft/greenplum-db/greenplum_path.sh
EOF
source .bashrc

3.2 Create the Host List File

cat > all_nodes << EOF
gpmaster61
gpsegment62
gpsegment63
gpsegment64
gpstandby65
EOF

3.3 Set Up Passwordless SSH for gpadmin

gpssh-exkeys -f all_nodes
[STEP 1 of 5] create local ID and authorize on local host

[STEP 2 of 5] keyscan all hosts and update known_hosts file

[STEP 3 of 5] authorize current user on remote hosts
  ... send to gpsegment62
  ***
  *** Enter password for gpsegment62: 
  ... send to gpsegment63
  ... send to gpsegment64
  ... send to gpstandby65

[STEP 4 of 5] determine common authentication file content

[STEP 5 of 5] copy authentication files to all remote hosts
  ... finished key exchange with gpsegment62
  ... finished key exchange with gpsegment63
  ... finished key exchange with gpsegment64
  ... finished key exchange with gpstandby65

[INFO] completed successfully

3.4 Distribute the Greenplum Package

gpseginstall -f all_nodes -u gpadmin -p gpadmin

20180801:16:39:20:025448 gpseginstall:gpmaster61:gpadmin-[INFO]:-Installation Info:
link_name greenplum-db
binary_path /greenplum/soft/greenplum-db-5.10.0
binary_dir_location /greenplum/soft
binary_dir_name greenplum-db-5.10.0
20180801:16:39:20:025448 gpseginstall:gpmaster61:gpadmin-[INFO]:-check cluster password access
20180801:16:39:21:025448 gpseginstall:gpmaster61:gpadmin-[INFO]:-de-duplicate hostnames
20180801:16:39:21:025448 gpseginstall:gpmaster61:gpadmin-[INFO]:-master hostname: gpmaster61
20180801:16:39:21:025448 gpseginstall:gpmaster61:gpadmin-[INFO]:-rm -f /greenplum/soft/greenplum-db-5.10.0.tar; rm -f /greenplum/soft/greenplum-db-5.10.0.tar.gz
20180801:16:39:21:025448 gpseginstall:gpmaster61:gpadmin-[INFO]:-cd /greenplum/soft; tar cf greenplum-db-5.10.0.tar greenplum-db-5.10.0
20180801:16:39:26:025448 gpseginstall:gpmaster61:gpadmin-[INFO]:-gzip /greenplum/soft/greenplum-db-5.10.0.tar
20180801:16:40:18:025448 gpseginstall:gpmaster61:gpadmin-[INFO]:-remote command: mkdir -p /greenplum/soft
20180801:16:40:19:025448 gpseginstall:gpmaster61:gpadmin-[INFO]:-remote command: rm -rf /greenplum/soft/greenplum-db-5.10.0
20180801:16:40:20:025448 gpseginstall:gpmaster61:gpadmin-[INFO]:-scp software to remote location
20180801:16:40:34:025448 gpseginstall:gpmaster61:gpadmin-[INFO]:-remote command: gzip -f -d /greenplum/soft/greenplum-db-5.10.0.tar.gz
20180801:16:40:43:025448 gpseginstall:gpmaster61:gpadmin-[INFO]:-md5 check on remote location
20180801:16:40:46:025448 gpseginstall:gpmaster61:gpadmin-[INFO]:-remote command: cd /greenplum/soft; tar xf greenplum-db-5.10.0.tar
20180801:16:40:49:025448 gpseginstall:gpmaster61:gpadmin-[INFO]:-remote command: rm -f /greenplum/soft/greenplum-db-5.10.0.tar
20180801:16:40:49:025448 gpseginstall:gpmaster61:gpadmin-[INFO]:-remote command: cd /greenplum/soft; rm -f greenplum-db; ln -fs greenplum-db-5.10.0 greenplum-db
20180801:16:40:50:025448 gpseginstall:gpmaster61:gpadmin-[INFO]:-rm -f /greenplum/soft/greenplum-db-5.10.0.tar.gz
20180801:16:40:50:025448 gpseginstall:gpmaster61:gpadmin-[INFO]:-version string on master: gpssh version 5.10.0 build commit:a075db4267fa1ca9e11c2c3813e3e058da4608ce
20180801:16:40:50:025448 gpseginstall:gpmaster61:gpadmin-[INFO]:-remote command: . /greenplum/soft/greenplum-db/./greenplum_path.sh; /greenplum/soft/greenplum-db/./bin/gpssh --version
20180801:16:40:51:025448 gpseginstall:gpmaster61:gpadmin-[INFO]:-remote command: . /greenplum/soft/greenplum-db-5.10.0/greenplum_path.sh; /greenplum/soft/greenplum-db-5.10.0/bin/gpssh --version
20180801:16:40:52:025448 gpseginstall:gpmaster61:gpadmin-[INFO]:-SUCCESS -- Requested commands completed

3.5 Create the Master (mdw) and Segment (sdw) Data Directories as gpadmin

cat > seg_nodes <<EOF
gpsegment62
gpsegment63
gpsegment64
EOF
mkdir -p /greenplum/data/gpmaster
gpssh -h gpstandby65 -e 'mkdir -p /greenplum/data/gpmaster'
gpssh -f seg_nodes -e 'mkdir -p /greenplum/data/gpdatap{1..4}'
gpssh -f seg_nodes -e 'mkdir -p /greenplum/data/gpdatam{1..4}'

3.6 Create the Initialization File

cat > gpinitsystem_config << EOF
ARRAY_NAME="ChinaDaas Data Platform"
SEG_PREFIX=gpseg
PORT_BASE=40000
MASTER_MAX_CONNECT=1000
declare -a DATA_DIRECTORY=(/greenplum/data/gpdatap1 /greenplum/data/gpdatap2 /greenplum/data/gpdatap3 /greenplum/data/gpdatap4)
MASTER_HOSTNAME=gpmaster61
MASTER_DIRECTORY=/greenplum/data/gpmaster
MASTER_PORT=5432
TRUSTED_SHELL=ssh
CHECK_POINT_SEGMENTS=8
ENCODING=UNICODE
MIRROR_PORT_BASE=50000
REPLICATION_PORT_BASE=41000
MIRROR_REPLICATION_PORT_BASE=51000
declare -a MIRROR_DATA_DIRECTORY=(/greenplum/data/gpdatam1 /greenplum/data/gpdatam2 /greenplum/data/gpdatam3 /greenplum/data/gpdatam4)
DATABASE_NAME=testdb
MACHINE_LIST_FILE=/home/gpadmin/seg_nodes
EOF
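gpinitsystem expects DATA_DIRECTORY and MIRROR_DATA_DIRECTORY to list the same number of entries, since each defines the segments per host (four here). A quick sketch that counts both straight from the config file (it assumes the gpdatap/gpdatam directory naming used above; `seg_counts` is an illustrative helper):

```shell
#!/bin/sh
# seg_counts CONFIG -> prints "<primary dirs> <mirror dirs>" from the declare lines
seg_counts() {
  p=$(grep '^declare -a DATA_DIRECTORY' "$1" | tr ' ' '\n' | grep -c '/gpdatap')
  m=$(grep '^declare -a MIRROR_DATA_DIRECTORY' "$1" | tr ' ' '\n' | grep -c '/gpdatam')
  echo "$p $m"
}

# e.g. seg_counts ~/gpinitsystem_config   # the two numbers must match
```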

3.7 Initialize the Cluster


-a: run without prompting for confirmation
-c: specify the initialization configuration file
-h: specify the segment host file
-s: specify the standby host and create the standby master

gpinitsystem -a -c gpinitsystem_config

20180801:16:42:41:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Checking configuration parameters, please wait...
20180801:16:42:41:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Reading Greenplum configuration file gpinitsystem_config
20180801:16:42:41:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Locale has not been set in gpinitsystem_config, will set to default value
20180801:16:42:41:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Locale set to en_US.utf8
20180801:16:42:41:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Checking configuration parameters, Completed
20180801:16:42:41:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Commencing multi-home checks, please wait...
20180801:16:42:42:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Configuring build for standard array
20180801:16:42:42:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Commencing multi-home checks, Completed
20180801:16:42:42:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Building primary segment instance array, please wait...
20180801:16:42:48:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Building group mirror array type , please wait...
20180801:16:42:54:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Checking Master host
20180801:16:42:54:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Checking new segment hosts, please wait...
20180801:16:43:15:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Checking new segment hosts, Completed
20180801:16:43:15:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Building the Master instance database, please wait...
20180801:16:43:31:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Starting the Master in admin mode
20180801:16:43:45:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Commencing parallel build of primary segment instances
20180801:16:43:45:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Spawning parallel processes    batch [1], please wait...
20180801:16:43:46:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Waiting for parallel processes batch [1], please wait...
20180801:16:44:31:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:------------------------------------------------
20180801:16:44:31:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Parallel process exit status
20180801:16:44:31:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:------------------------------------------------
20180801:16:44:31:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Total processes marked as completed        = 12
20180801:16:44:31:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Total processes marked as killed           = 0
20180801:16:44:31:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Total processes marked as failed           = 0
20180801:16:44:31:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:------------------------------------------------
20180801:16:44:31:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Commencing parallel build of mirror segment instances
20180801:16:44:31:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Spawning parallel processes    batch [1], please wait...
20180801:16:44:31:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Waiting for parallel processes batch [1], please wait...
20180801:16:44:48:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:------------------------------------------------
20180801:16:44:48:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Parallel process exit status
20180801:16:44:48:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:------------------------------------------------
20180801:16:44:48:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Total processes marked as completed        = 12
20180801:16:44:48:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Total processes marked as killed           = 0
20180801:16:44:48:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Total processes marked as failed           = 0
20180801:16:44:48:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:------------------------------------------------
20180801:16:44:48:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Deleting distributed backout files
20180801:16:44:48:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Removing back out file
20180801:16:44:48:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-No errors generated from parallel processes
20180801:16:44:48:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Restarting the Greenplum instance in production mode
20180801:16:44:48:038105 gpstop:gpmaster61:gpadmin-[INFO]:-Starting gpstop with args: -a -l /home/gpadmin/gpAdminLogs -i -m -d /greenplum/data/gpmaster/gpseg-1
20180801:16:44:48:038105 gpstop:gpmaster61:gpadmin-[INFO]:-Gathering information and validating the environment...
20180801:16:44:48:038105 gpstop:gpmaster61:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
20180801:16:44:48:038105 gpstop:gpmaster61:gpadmin-[INFO]:-Obtaining Segment details from master...
20180801:16:44:48:038105 gpstop:gpmaster61:gpadmin-[INFO]:-Greenplum Version: 'postgres (Greenplum Database) 5.10.0 build commit:a075db4267fa1ca9e11c2c3813e3e058da4608ce'
20180801:16:44:48:038105 gpstop:gpmaster61:gpadmin-[INFO]:-There are 0 connections to the database
20180801:16:44:48:038105 gpstop:gpmaster61:gpadmin-[INFO]:-Commencing Master instance shutdown with mode='immediate'
20180801:16:44:48:038105 gpstop:gpmaster61:gpadmin-[INFO]:-Master host=gpmaster61
20180801:16:44:48:038105 gpstop:gpmaster61:gpadmin-[INFO]:-Commencing Master instance shutdown with mode=immediate
20180801:16:44:48:038105 gpstop:gpmaster61:gpadmin-[INFO]:-Master segment instance directory=/greenplum/data/gpmaster/gpseg-1
20180801:16:44:50:038105 gpstop:gpmaster61:gpadmin-[INFO]:-Attempting forceful termination of any leftover master process
20180801:16:44:50:038105 gpstop:gpmaster61:gpadmin-[INFO]:-Terminating processes for segment /greenplum/data/gpmaster/gpseg-1
20180801:16:44:50:038131 gpstart:gpmaster61:gpadmin-[INFO]:-Starting gpstart with args: -a -l /home/gpadmin/gpAdminLogs -d /greenplum/data/gpmaster/gpseg-1
20180801:16:44:50:038131 gpstart:gpmaster61:gpadmin-[INFO]:-Gathering information and validating the environment...
20180801:16:44:50:038131 gpstart:gpmaster61:gpadmin-[INFO]:-Greenplum Binary Version: 'postgres (Greenplum Database) 5.10.0 build commit:a075db4267fa1ca9e11c2c3813e3e058da4608ce'
20180801:16:44:50:038131 gpstart:gpmaster61:gpadmin-[INFO]:-Greenplum Catalog Version: '301705051'
20180801:16:44:50:038131 gpstart:gpmaster61:gpadmin-[INFO]:-Starting Master instance in admin mode
20180801:16:44:52:038131 gpstart:gpmaster61:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
20180801:16:44:52:038131 gpstart:gpmaster61:gpadmin-[INFO]:-Obtaining Segment details from master...
20180801:16:44:52:038131 gpstart:gpmaster61:gpadmin-[INFO]:-Setting new master era
20180801:16:44:52:038131 gpstart:gpmaster61:gpadmin-[INFO]:-Master Started...
20180801:16:44:52:038131 gpstart:gpmaster61:gpadmin-[INFO]:-Shutting down master
20180801:16:44:54:038131 gpstart:gpmaster61:gpadmin-[INFO]:-Commencing parallel primary and mirror segment instance startup, please wait...
20180801:16:45:35:038131 gpstart:gpmaster61:gpadmin-[INFO]:-Process results...
20180801:16:45:35:038131 gpstart:gpmaster61:gpadmin-[INFO]:-----------------------------------------------------
20180801:16:45:35:038131 gpstart:gpmaster61:gpadmin-[INFO]:-   Successful segment starts                  = 24
20180801:16:45:35:038131 gpstart:gpmaster61:gpadmin-[INFO]:-   Failed segment starts                      = 0
20180801:16:45:35:038131 gpstart:gpmaster61:gpadmin-[INFO]:-   Skipped segment starts (segments are marked down in configuration)   = 0
20180801:16:45:35:038131 gpstart:gpmaster61:gpadmin-[INFO]:-----------------------------------------------------
20180801:16:45:35:038131 gpstart:gpmaster61:gpadmin-[INFO]:-Successfully started 24 of 24 segment instances
20180801:16:45:35:038131 gpstart:gpmaster61:gpadmin-[INFO]:-----------------------------------------------------
20180801:16:45:35:038131 gpstart:gpmaster61:gpadmin-[INFO]:-Starting Master instance gpmaster61 directory /greenplum/data/gpmaster/gpseg-1 
20180801:16:45:36:038131 gpstart:gpmaster61:gpadmin-[INFO]:-Command pg_ctl reports Master gpmaster61 instance active
20180801:16:45:36:038131 gpstart:gpmaster61:gpadmin-[INFO]:-No standby master configured.  skipping...
20180801:16:45:36:038131 gpstart:gpmaster61:gpadmin-[INFO]:-Database successfully started
20180801:16:45:37:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Completed restart of Greenplum instance in production mode
20180801:16:46:14:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Scanning utility log file for any warning messages
20180801:16:46:14:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Log file scan check passed
20180801:16:46:14:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Greenplum Database instance successfully created
20180801:16:46:14:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:----------------------------------------------
20180801:16:46:14:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-To complete the environment configuration, please
20180801:16:46:14:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-update gpadmin .bashrc file with the following
20180801:16:46:14:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-1. Ensure that the greenplum_path.sh file is sourced
20180801:16:46:14:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-2. Add "export MASTER_DATA_DIRECTORY=/greenplum/data/gpmaster/gpseg-1"
20180801:16:46:14:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-   to access the Greenplum scripts for this instance:
20180801:16:46:14:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-   or, use -d /greenplum/data/gpmaster/gpseg-1 option for the Greenplum scripts
20180801:16:46:14:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-   Example gpstate -d /greenplum/data/gpmaster/gpseg-1
20180801:16:46:14:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Script log file = /home/gpadmin/gpAdminLogs/gpinitsystem_20180801.log
20180801:16:46:14:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-To remove instance, run gpdeletesystem utility
20180801:16:46:14:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-To initialize a Standby Master Segment for this Greenplum instance
20180801:16:46:14:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Review options for gpinitstandby
20180801:16:46:14:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-----------------------------------------------
20180801:16:46:14:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-The Master /greenplum/data/gpmaster/gpseg-1/pg_hba.conf post gpinitsystem
20180801:16:46:14:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-has been configured to allow all hosts within this new
20180801:16:46:14:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-array to intercommunicate. Any hosts external to this
20180801:16:46:14:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-new array must be explicitly added to this file
20180801:16:46:14:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Refer to the Greenplum Admin support guide which is
20180801:16:46:14:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-located in the /greenplum/soft/greenplum-db/./docs directory
20180801:16:46:14:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-------------------------------------------------

# Inspect the data segment layout
psql -d testdb -c 'select a.dbid,a.content,a.role,a.port,a.hostname,b.fsname,c.fselocation from gp_segment_configuration a,pg_filespace b,pg_filespace_entry c where a.dbid=c.fsedbid and b.oid=c.fsefsoid order by content;'

If gpinitsystem fails, the backout script it leaves behind removes the data directories, postgres processes, and log files the utility created.

cd ~/gpAdminLogs/
bash backout_gpinitsystem_gpadmin_<timestamp>

3.8 Add Client Access Rules

echo "host     all         gpadmin         0.0.0.0/0       md5" >> $MASTER_DATA_DIRECTORY/pg_hba.conf
gpstop -u

3.9 Add a Standby Master (Optional)

gpinitstandby -a -F pg_system:/greenplum/data/gpmaster/gpseg-1/ -s gpstandby65

20180801:16:50:44:038511 gpinitstandby:gpmaster61:gpadmin-[INFO]:-Validating environment and parameters for standby initialization...
20180801:16:50:44:038511 gpinitstandby:gpmaster61:gpadmin-[INFO]:-Checking for filespace directory /greenplum/data/gpmaster/gpseg-1 on gpstandby65
20180801:16:50:45:038511 gpinitstandby:gpmaster61:gpadmin-[INFO]:------------------------------------------------
20180801:16:50:45:038511 gpinitstandby:gpmaster61:gpadmin-[INFO]:-Greenplum standby master initialization parameters
20180801:16:50:45:038511 gpinitstandby:gpmaster61:gpadmin-[INFO]:------------------------------------------------
20180801:16:50:45:038511 gpinitstandby:gpmaster61:gpadmin-[INFO]:-Greenplum master hostname               = gpmaster61
20180801:16:50:45:038511 gpinitstandby:gpmaster61:gpadmin-[INFO]:-Greenplum master data directory         = /greenplum/data/gpmaster/gpseg-1
20180801:16:50:45:038511 gpinitstandby:gpmaster61:gpadmin-[INFO]:-Greenplum master port                   = 5432
20180801:16:50:45:038511 gpinitstandby:gpmaster61:gpadmin-[INFO]:-Greenplum standby master hostname       = gpstandby65
20180801:16:50:45:038511 gpinitstandby:gpmaster61:gpadmin-[INFO]:-Greenplum standby master port           = 5432
20180801:16:50:45:038511 gpinitstandby:gpmaster61:gpadmin-[INFO]:-Greenplum standby master data directory = /greenplum/data/gpmaster/gpseg-1
20180801:16:50:45:038511 gpinitstandby:gpmaster61:gpadmin-[INFO]:-Greenplum update system catalog         = On
20180801:16:50:45:038511 gpinitstandby:gpmaster61:gpadmin-[INFO]:------------------------------------------------
20180801:16:50:45:038511 gpinitstandby:gpmaster61:gpadmin-[INFO]:- Filespace locations
20180801:16:50:45:038511 gpinitstandby:gpmaster61:gpadmin-[INFO]:------------------------------------------------
20180801:16:50:45:038511 gpinitstandby:gpmaster61:gpadmin-[INFO]:-pg_system -> /greenplum/data/gpmaster/gpseg-1
20180801:16:50:45:038511 gpinitstandby:gpmaster61:gpadmin-[INFO]:-Syncing Greenplum Database extensions to standby
20180801:16:50:45:038511 gpinitstandby:gpmaster61:gpadmin-[INFO]:-The packages on gpstandby65 are consistent.
20180801:16:50:46:038511 gpinitstandby:gpmaster61:gpadmin-[INFO]:-Adding standby master to catalog...
20180801:16:50:46:038511 gpinitstandby:gpmaster61:gpadmin-[INFO]:-Database catalog updated successfully.
20180801:16:50:46:038511 gpinitstandby:gpmaster61:gpadmin-[INFO]:-Updating pg_hba.conf file...
20180801:16:50:48:038511 gpinitstandby:gpmaster61:gpadmin-[INFO]:-pg_hba.conf files updated successfully.
20180801:16:50:51:038511 gpinitstandby:gpmaster61:gpadmin-[INFO]:-Updating filespace flat files...
20180801:16:50:51:038511 gpinitstandby:gpmaster61:gpadmin-[INFO]:-Filespace flat file updated successfully.
20180801:16:50:51:038511 gpinitstandby:gpmaster61:gpadmin-[INFO]:-Starting standby master
20180801:16:50:51:038511 gpinitstandby:gpmaster61:gpadmin-[INFO]:-Checking if standby master is running on host: gpstandby65  in directory: /greenplum/data/gpmaster/gpseg-1
20180801:16:50:53:038511 gpinitstandby:gpmaster61:gpadmin-[INFO]:-Cleaning up pg_hba.conf backup files...
20180801:16:50:54:038511 gpinitstandby:gpmaster61:gpadmin-[INFO]:-Backup files of pg_hba.conf cleaned up successfully.
20180801:16:50:54:038511 gpinitstandby:gpmaster61:gpadmin-[INFO]:-Successfully created standby master on gpstandby65

# View the master and standby entries
psql -d testdb -c 'select * from gp_segment_configuration where content='-1';'

# Sync pg_hba.conf to the gpstandby65 standby, then reload the GPDB configuration
gpscp -h gpstandby65 -v $MASTER_DATA_DIRECTORY/pg_hba.conf =:$MASTER_DATA_DIRECTORY/
gpstop -u

3.10 Check the GPDB Cluster Status

gpstate -e

3.11 Set a Remote-Access Password for gpadmin

psql postgres gpadmin
alter user gpadmin encrypted password 'gpadmin';
\q

3.12 Run Test Queries

psql -h gpmaster61 -p 5432 -d postgres -U gpadmin -c 'select dfhostname, dfspace, dfdevice from gp_toolkit.gp_disk_free order by dfhostname;'
psql -h gpmaster61 -p 5432 -d postgres -U gpadmin -c '\l+'

Part 4: Install Greenplum Command Center (greenplum-cc-web) 4.3

4.1 Create the gpperfmon Database as gpadmin (default user gpmon)

gpperfmon_install --enable --password gpmon --port 5432

20180801:17:04:00:039490 gpperfmon_install:gpmaster61:gpadmin-[INFO]:-createdb gpperfmon >& /dev/null
20180801:17:04:54:039490 gpperfmon_install:gpmaster61:gpadmin-[INFO]:-PGPORT=5432 psql -f /greenplum/soft/greenplum-db/./lib/gpperfmon/gpperfmon.sql gpperfmon >& /dev/null
20180801:17:04:59:039490 gpperfmon_install:gpmaster61:gpadmin-[INFO]:-PGPORT=5432 psql template1 -c "DROP ROLE IF EXISTS gpmon"  >& /dev/null
20180801:17:04:59:039490 gpperfmon_install:gpmaster61:gpadmin-[INFO]:-PGPORT=5432 psql template1 -c "CREATE ROLE gpmon WITH SUPERUSER CREATEDB LOGIN ENCRYPTED PASSWORD 'gpmon'"  >& /dev/null
20180801:17:04:59:039490 gpperfmon_install:gpmaster61:gpadmin-[INFO]:-echo "local    gpperfmon         gpmon         md5" >> /greenplum/data/gpmaster/gpseg-1/pg_hba.conf
20180801:17:04:59:039490 gpperfmon_install:gpmaster61:gpadmin-[INFO]:-echo "host     all         gpmon         127.0.0.1/28    md5" >> /greenplum/data/gpmaster/gpseg-1/pg_hba.conf
20180801:17:04:59:039490 gpperfmon_install:gpmaster61:gpadmin-[INFO]:-echo "host     all         gpmon         ::1/128    md5" >> /greenplum/data/gpmaster/gpseg-1/pg_hba.conf
20180801:17:04:59:039490 gpperfmon_install:gpmaster61:gpadmin-[INFO]:-touch /home/gpadmin/.pgpass >& /dev/null
20180801:17:04:59:039490 gpperfmon_install:gpmaster61:gpadmin-[INFO]:-mv -f /home/gpadmin/.pgpass /home/gpadmin/.pgpass.1533114240 >& /dev/null
20180801:17:04:59:039490 gpperfmon_install:gpmaster61:gpadmin-[INFO]:-echo "*:5432:gpperfmon:gpmon:gpmon" >> /home/gpadmin/.pgpass
20180801:17:04:59:039490 gpperfmon_install:gpmaster61:gpadmin-[INFO]:-cat /home/gpadmin/.pgpass.1533114240 >> /home/gpadmin/.pgpass
20180801:17:04:59:039490 gpperfmon_install:gpmaster61:gpadmin-[INFO]:-chmod 0600 /home/gpadmin/.pgpass >& /dev/null
20180801:17:04:59:039490 gpperfmon_install:gpmaster61:gpadmin-[INFO]:-PGPORT=5432 gpconfig -c gp_enable_gpperfmon -v on >& /dev/null
20180801:17:05:02:039490 gpperfmon_install:gpmaster61:gpadmin-[INFO]:-PGPORT=5432 gpconfig -c gpperfmon_port -v 8888 >& /dev/null
20180801:17:05:05:039490 gpperfmon_install:gpmaster61:gpadmin-[INFO]:-PGPORT=5432 gpconfig -c gp_external_enable_exec -v on --masteronly >& /dev/null
20180801:17:05:06:039490 gpperfmon_install:gpmaster61:gpadmin-[INFO]:-PGPORT=5432 gpconfig -c gpperfmon_log_alert_level -v warning >& /dev/null
20180801:17:05:09:039490 gpperfmon_install:gpmaster61:gpadmin-[INFO]:-gpperfmon will be enabled after a full restart of GPDB

4.2 Change the gpmon Password

psql -d gpperfmon -c "alter user gpmon encrypted password 'gpmon';"

4.3 Add Monitoring Access Rules

echo "host     all         gpmon         0.0.0.0/0    md5" >> $MASTER_DATA_DIRECTORY/pg_hba.conf
gpstop -afr

# Verify that data is being written into the gpperfmon database
psql -d gpperfmon -c 'select * from system_now'
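Note that the `echo` above opens gpmon access from any address (0.0.0.0/0); in production a narrower CIDR is usually preferable. Running the step twice also duplicates the line, so a guarded append is safer. A sketch, with the file path passed in (on the master it is $MASTER_DATA_DIRECTORY/pg_hba.conf):

```shell
# Append a pg_hba.conf rule only if that exact line is not already present.
append_hba_rule() {
    hba=$1
    rule=$2
    grep -qxF "$rule" "$hba" 2>/dev/null || printf '%s\n' "$rule" >> "$hba"
}

# e.g. append_hba_rule "$MASTER_DATA_DIRECTORY/pg_hba.conf" \
#          'host     all         gpmon         0.0.0.0/0    md5'
```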

4.4 Copy the master configuration to the standby

# Copy the master's pg_hba.conf and .pgpass to the standby (requires the gpstandby standby node to be installed)
gpscp -h gpstandby65 -v $MASTER_DATA_DIRECTORY/pg_hba.conf =:$MASTER_DATA_DIRECTORY/
gpscp -h gpstandby65 -v ~/.pgpass =:~/
gpstop -afr

4.5 Install greenplum-cc-web

./gpccinstall-4.3.0

Do you agree to the Pivotal Greenplum Command Center End User License Agreement? Yy/Nn (Default=Y)
y

Where would you like to install Greenplum Command Center? (Default=/usr/local)
/greenplum/soft/

Path not exist, create it? Yy/Nn (Default=Y)
y

What would you like to name this installation of Greenplum Command Center? (Default=gpcc)
<Enter>

What port would you like gpcc webserver to use? (Default=28080)
<Enter>

Would you like to enable kerberos? Yy/Nn (Default=N)
<Enter>

Would you like enable SSL? Yy/Nn (Default=N)
<Enter>

Installation in progress...
2018/08/01 17:54:49 
Successfully installed Greenplum Command Center.

We recommend ssh to standby master before starting GPCC webserver

To start the GPCC webserver on the current host, run gpcc start

To manage Command Center, use the gpcc utility.
Usage:
  gpcc [OPTIONS] <command>

Application Options:
  -v, --version   Show Greenplum Command Center version
      --settings  Print the current configuration settings

Help Options:
  -h, --help      Show this help message

Available commands:
  help        Print list of commands
  krbdisable  Disables kerberos authentication
  krbenable   Enables kerberos authentication
  start       Starts Greenplum Command Center webserver and metrics collection agents [-W]  option to force password prompt for GPDB user gpmon [optional]
  status      Print agent status  with  [-W]  option to force password prompt for GPDB user gpmon [optional]
  stop        Stops Greenplum Command Center webserver and metrics collection agents [-W]  option to force password prompt for GPDB user gpmon [optional]

4.6 Add environment variables

ln -s /greenplum/soft/greenplum-cc-web-4.3.0 /greenplum/soft/greenplum-cc-web
source /greenplum/soft/greenplum-cc-web/gpcc_path.sh

4.7 Start the gpcc web service

gpcc start

4.8 Query the collected data

# Check the server timezone
psql -d gpperfmon -c 'show timezone'

# View the collected monitoring data
psql -d gpperfmon -c 'select * from system_now'
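Right after a full restart the collection agents may need a few seconds before the first rows appear in system_now, so a one-shot check can report a false negative. A small retry wrapper avoids that (a sketch; the psql invocation is the one shown above):

```shell
# Run a command up to $1 times, sleeping 1s between attempts;
# returns 0 on the first success, 1 if every attempt fails.
retry() {
    attempts=$1
    shift
    i=0
    until "$@"; do
        i=$((i + 1))
        [ "$i" -ge "$attempts" ] && return 1
        sleep 1
    done
}

# e.g. retry 10 psql -d gpperfmon -tAc 'select count(*) from system_now'
```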

4.9 Access the web UI

Open http://192.168.61.61:28080 to access the web monitoring console.

Part 5 Cluster Performance Checks

5.1 Verify network performance

gpcheckperf -f all_nodes -r N -d /tmp/
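The -f option of gpcheckperf reads a plain host file with one hostname per line. The all_nodes file used here can be generated from the node table in Part 1 (this assumes all five hosts are tested; drop the master line to benchmark segment hosts only):

```shell
# Write the host list for gpcheckperf (one hostname per line).
cat > /tmp/all_nodes <<'EOF'
gpmaster61
gpsegment62
gpsegment63
gpsegment64
gpstandby65
EOF
```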

5.2 Verify disk I/O and memory bandwidth performance

gpcheckperf -f all_nodes -r ds -d /tmp

5.3 Verify memory bandwidth performance

gpcheckperf -f all_nodes -r s -d /tmp
Title: GreenPlum 5.10.0 Cluster Deployment

Source: https://www.lsjlt.com/news/48755.html (please credit the source when republishing)
