How to Add and Delete Nodes in Oracle 12c R2 to Salvage a Cluster


This article walks through salvaging an Oracle 12c R2 cluster by deleting a failed node and adding it back. It is based on a real incident and may be a useful reference if you face a similar situation.

Database version:

[oracle@jsbp242305 ~]$ sqlplus -V

SQL*Plus: Release 12.2.0.1.0 Production

12.2 currently has a bug: changing the SYS password through SQL*Plus hangs, because internal behavior changed in 12.2 and the update of an internal base table during the password change runs into trouble. I ran a hanganalyze on this before, and as I recall the password-change session was waiting on "row cache lock". Oracle's official advice is to change the SYS password with the orapwd utility, or to apply one-off patch 16002385. That one-off patch in turn depends on the latest RU (27105253). Applying the RU failed outright for me: node 1 crashed beyond recovery while node 2 survived, so the only way left to salvage the cluster was to delete the node and add it back.
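A minimal sketch of the orapwd route, with placeholder file name and password (in a RAC environment the refreshed password file also has to reach the other instance, either by copying it to the same path there or by keeping the password file in ASM):

[oracle@jsbp242305 ~]$ orapwd file=$ORACLE_HOME/dbs/orapworcl1 password=<new_sys_password> force=y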

12.2 is full of pitfalls; when patching, make sure you start from node 2!!

Personal experience, for reference only.
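For context, a 12.2 GI Release Update is applied node by node with opatchauto as root; a rough sketch using the RU number mentioned above (the staging path is hypothetical and the exact options can vary with the OPatch version). Run it on node 2 first and only move to node 1 once the stack is back up:

[root@jsbp242306 ~]# /oracle/app/12.2.0/grid/OPatch/opatchauto apply /tmp/27105253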

################## Deleting the node ##############

IP information; the node to be deleted is node 1 (jsbp242305):

[grid@jsbp242306 ~]$ more /etc/hosts

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4

::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

#10.10.129.41 jsbp242306

#Public IP

10.11.176.75 jsbp242305

10.11.176.76 jsbp242306

#VIP

10.11.176.77 jsbp242305-vip

10.11.176.78 jsbp242306-vip

#SCAN

10.11.176.79 jqhwccdb-scan

#Private IP

2.1.176.75 jsbp242305-priv

2.1.176.76 jsbp242306-priv

Check whether the nodes are unpinned:

[grid@jsbp242306 ~]$ olsnodes -s -t

jsbp242305 Inactive Unpinned

jsbp242306 Active Unpinned

Both are Unpinned, so there is no need to run the crsctl unpin css command.
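Had either node shown Pinned, it would have to be unpinned first; a sketch, run as root from the Grid home (node name per this cluster):

[root@jsbp242306 ~]# /oracle/app/12.2.0/grid/bin/crsctl unpin css -n jsbp242305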

The software home is local rather than shared, so run the following on the node to be deleted:

[grid@jsbp242305 ~]$ /oracle/app/12.2.0/grid/deinstall/deinstall -local

Checking for required files and bootstrapping ...

Please wait ...

Location of logs /oracle/app/oraInventory/logs/

############ ORACLE DECONFIG TOOL START ############

######################### DECONFIG CHECK OPERATION START #########################

## [START] Install check configuration ##

Checking for existence of the Oracle home location /oracle/app/12.2.0/grid

Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Cluster

Oracle Base selected for deinstall is: /oracle/app/grid

Checking for existence of central inventory location /oracle/app/oraInventory

Checking for existence of the Oracle Grid Infrastructure home /oracle/app/12.2.0/grid

The following nodes are part of this cluster: jsbp242305

Checking for sufficient temp space availability on node(s) : 'jsbp242305'

## [END] Install check configuration ##

Traces log file: /oracle/app/oraInventory/logs//crsdc_2018-04-16_10-26-36-AM.log

Network Configuration check config START

Network de-configuration trace file location: /oracle/app/oraInventory/logs/netdc_check2018-04-16_10-26-44-AM.log

Network Configuration check config END

Asm Check Configuration START

ASM de-configuration trace file location: /oracle/app/oraInventory/logs/asmcadc_check2018-04-16_10-26-44-AM.log

Database Check Configuration START

Database de-configuration trace file location: /oracle/app/oraInventory/logs/databasedc_check2018-04-16_10-26-44-AM.log

Oracle Grid Management database was not found in this Grid Infrastructure home

Database Check Configuration END

################ DECONFIG CHECK OPERATION END #########################

########### DECONFIG CHECK OPERATION SUMMARY #######################

Oracle Grid Infrastructure Home is: /oracle/app/12.2.0/grid

The following nodes are part of this cluster: jsbp242305

The cluster node(s) on which the Oracle home deinstallation will be performed are:jsbp242305

Oracle Home selected for deinstall is: /oracle/app/12.2.0/grid

Inventory Location where the Oracle home registered is: /oracle/app/oraInventory

Option -local will not modify any ASM configuration.

Oracle Grid Management database was not found in this Grid Infrastructure home

Do you want to continue (y - yes, n - no)? [n]: y

A log of this session will be written to: '/oracle/app/oraInventory/logs/deinstall_deconfig2018-04-16_10-26-34-AM.out'

Any error messages from this session will be written to: '/oracle/app/oraInventory/logs/deinstall_deconfig2018-04-16_10-26-34-AM.err'

############ DECONFIG CLEAN OPERATION START ########################

Database de-configuration trace file location: /oracle/app/oraInventory/logs/databasedc_clean2018-04-16_10-27-15-AM.log

ASM de-configuration trace file location: /oracle/app/oraInventory/logs/asmcadc_clean2018-04-16_10-27-15-AM.log

ASM Clean Configuration END

Network Configuration clean config START

Network de-configuration trace file location: /oracle/app/oraInventory/logs/netdc_clean2018-04-16_10-27-15-AM.log

Network Configuration clean config END

Run the following command as the root user or the administrator on node "jsbp242305".

/oracle/app/12.2.0/grid/crs/install/rootcrs.sh -force -deconfig -paramfile "/tmp/deinstall2018-04-16_10-26-05AM/response/deinstall_OraGI12Home1.rsp"

Press Enter after you finish running the above commands

<----------------------------------------

Run the command as prompted:

[root@jsbp242305 ~]# /oracle/app/12.2.0/grid/crs/install/rootcrs.sh -force -deconfig -paramfile "/tmp/deinstall2018-04-16_10-26-05AM/response/deinstall_OraGI12Home1.rsp"

Using configuration parameter file: /tmp/deinstall2018-04-16_10-26-05AM/response/deinstall_OraGI12Home1.rsp

The log of current session can be found at:

/oracle/app/oraInventory/logs/crsdeconfig_jsbp242305_2018-04-16_10-27-36AM.log

PRCR-1070 : Failed to check if resource ora.net1.network is registered

CRS-0184 : Cannot communicate with the CRS daemon.

PRCR-1070 : Failed to check if resource ora.helper is registered

CRS-0184 : Cannot communicate with the CRS daemon.

PRCR-1070 : Failed to check if resource ora.ons is registered

CRS-0184 : Cannot communicate with the CRS daemon.

2018/04/16 10:27:48 CLSRSC-180: An error occurred while executing the command '/oracle/app/12.2.0/grid/bin/srvctl config nodeapps'

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'jsbp242305'

CRS-2679: Attempting to clean 'ora.gipcd' on 'jsbp242305'

CRS-2681: Clean of 'ora.gipcd' on 'jsbp242305' succeeded

CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'jsbp242305' has completed

CRS-4133: Oracle High Availability Services has been stopped.

2018/04/16 10:28:11 CLSRSC-4006: Removing Oracle Trace File Analyzer (TFA) Collector.

2018/04/16 10:28:42 CLSRSC-4007: Successfully removed Oracle Trace File Analyzer (TFA) Collector.

2018/04/16 10:28:47 CLSRSC-336: Successfully deconfigured Oracle Clusterware stack on this node

Return to the deinstall session and press Enter:

################ DECONFIG CLEAN OPERATION END #########################

############## DECONFIG CLEAN OPERATION SUMMARY #######################

There is no Oracle Grid Management database to de-configure in this Grid Infrastructure home

Oracle Clusterware is stopped and successfully de-configured on node "jsbp242305"

Oracle Clusterware is stopped and de-configured successfully.

###################################################################

############# ORACLE DECONFIG TOOL END #############

Using properties file /tmp/deinstall2018-04-16_10-26-05AM/response/deinstall_2018-04-16_10-26-34-AM.rsp

Location of logs /oracle/app/oraInventory/logs/

############ ORACLE DEINSTALL TOOL START ############

############## DEINSTALL CHECK OPERATION SUMMARY #######################

A log of this session will be written to: '/oracle/app/oraInventory/logs/deinstall_deconfig2018-04-16_10-26-34-AM.out'

Any error messages from this session will be written to: '/oracle/app/oraInventory/logs/deinstall_deconfig2018-04-16_10-26-34-AM.err'

################ DEINSTALL CLEAN OPERATION START ########################

## [START] Preparing for Deinstall ##

Setting LOCAL_NODE to jsbp242305

Setting CLUSTER_NODES to jsbp242305

Setting CRS_HOME to true

Setting oracle.installer.invPtrLoc to /tmp/deinstall2018-04-16_10-26-05AM/oraInst.loc

Setting oracle.installer.local to true

## [END] Preparing for Deinstall ##

Setting the force flag to false

Setting the force flag to cleanup the Oracle Base

Oracle Universal Installer clean START

Detach Oracle home '/oracle/app/12.2.0/grid' from the central inventory on the local node : Done

Delete directory '/oracle/app/12.2.0/grid' on the local node : Done

Failed to delete the directory '/oracle/app/grid/log/diag/asmcmd/user_root/jsbp242305/trace'. Either user has no permission to delete or it is in use.

The Oracle Base directory '/oracle/app/grid' will not be removed on local node. The directory is not empty.

Oracle Universal Installer cleanup was successful.

Oracle Universal Installer clean END

## [START] Oracle install clean ##

## [END] Oracle install clean ##

################### DEINSTALL CLEAN OPERATION END #########################

################ DEINSTALL CLEAN OPERATION SUMMARY #######################

Successfully detached Oracle home '/oracle/app/12.2.0/grid' from the central inventory on the local node.

Successfully deleted directory '/oracle/app/12.2.0/grid' on the local node.

Oracle Universal Installer cleanup was successful.

Review the permissions and contents of '/oracle/app/grid' on nodes(s) 'jsbp242305'.

If there are no Oracle home(s) associated with '/oracle/app/grid', manually delete '/oracle/app/grid' and its contents.

Oracle deinstall tool successfully cleaned up temporary directories.

###################################################################

############# ORACLE DEINSTALL TOOL END #############

Done.

On node 2 (the node that stays in the cluster), run the following as root from Grid_home/bin:

[root@jsbp242306 ~]# /oracle/app/12.2.0/grid/bin/crsctl delete node -n jsbp242305

CRS-4661: Node jsbp242305 successfully deleted.

Verify that the node was deleted successfully:

[grid@jsbp242306 ~]$ cluvfy stage -post nodedel -n jsbp242305

Verifying Node Removal ...

Verifying CRS Integrity ...PASSED

Verifying Clusterware Version Consistency ...PASSED

Verifying Node Removal ...PASSED

Post-check for node removal was successful.

CVU operation performed: stage -post nodedel

Date: Apr 16, 2018 10:36:21 AM

CVU home: /oracle/app/12.2.0/grid/

User: grid

Check whether the VIP resource of the deleted node still exists:

$ srvctl config vip -node jsbp242305

VIP resource status:

ora.jsbp242305.vip

1 ONLINE INTERMEDIATE jsbp242306 FAILED OVER,STABLE
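For reference, the status above is clusterware resource output; a command of the following form displays it (resource name taken from the listing above):

[grid@jsbp242306 ~]$ crsctl stat res ora.jsbp242305.vip -t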

If the VIP resource still exists, remove it:

$ srvctl stop vip -vip jsbp242305-vip

$ srvctl remove vip -vip jsbp242305-vip

Removing the VIP resource requires root privileges:

[grid@jsbp242306 addnode]$ srvctl remove vip -vip jsbp242305-vip

Please confirm that you intend to remove the VIPs jsbp242305-vip (y/[n]) y

PRKO-2381 : VIP jsbp242305-vip is not removed successfully:

PRCN-2018 : Current user grid is not a privileged user

[grid@jsbp242306 addnode]$ which srvctl

/oracle/app/12.2.0/grid/bin/srvctl

[grid@jsbp242306 addnode]$ logout

[root@jsbp242306 ~]# /oracle/app/12.2.0/grid/bin/srvctl remove vip -vip jsbp242305-vip

Please confirm that you intend to remove the VIPs jsbp242305-vip (y/[n]) y

Check the VIP resource status again:

ora.jsbp242305.vip

1 OFFLINE OFFLINE STABLE

At first I thought that since this node would be added back later there was no need to remove the VIP, so I skipped it. That caused an error later while adding the node, so the VIP does have to be removed.

########## Adding the node ##################

Add the node with addnode.sh.

2. On an existing node, verify that the node to be added is consistent with the cluster:

$ cluvfy stage -pre nodeadd -n jsbp242305 [-fixup] [-verbose]

If the verification fails, you can add the -fixup option to try to fix the issues.

[grid@jsbp242306 ~]$ cluvfy stage -pre nodeadd -n jsbp242305

Verifying Physical Memory ...PASSED

Verifying Available Physical Memory ...PASSED

Verifying Swap Size ...PASSED

Verifying Free Space: jsbp242306:/usr ...PASSED

Verifying Free Space: jsbp242306:/var ...PASSED

Verifying Free Space: jsbp242306:/etc,jsbp242306:/sbin ...PASSED

Verifying Free Space: jsbp242306:/oracle/app/12.2.0/grid ...PASSED

Verifying Free Space: jsbp242306:/tmp ...PASSED

Verifying Free Space: jsbp242305:/usr ...PASSED

Verifying Free Space: jsbp242305:/var ...PASSED

Verifying Free Space: jsbp242305:/etc,jsbp242305:/sbin ...PASSED

Verifying Free Space: jsbp242305:/oracle/app/12.2.0/grid ...PASSED

Verifying Free Space: jsbp242305:/tmp ...PASSED

Verifying User Existence: oracle ...

Verifying Users With Same UID: 1101 ...PASSED

Verifying User Existence: oracle ...PASSED

Verifying User Existence: grid ...

Verifying Users With Same UID: 1100 ...PASSED

Verifying User Existence: grid ...PASSED

Verifying User Existence: root ...

Verifying Users With Same UID: 0 ...PASSED

Verifying User Existence: root ...PASSED

Verifying Group Existence: asmadmin ...PASSED

Verifying Group Existence: asmoper ...PASSED

Verifying Group Existence: asmdba ...PASSED

Verifying Group Existence: oinstall ...PASSED

Verifying Group Membership: oinstall ...PASSED

Verifying Group Membership: asmdba ...PASSED

Verifying Group Membership: asmadmin ...PASSED

Verifying Group Membership: asmoper ...PASSED

Verifying Run Level ...PASSED

Verifying Architecture ...PASSED

Verifying OS Kernel Version ...PASSED

Verifying OS Kernel Parameter: semmsl ...PASSED

Verifying OS Kernel Parameter: semmns ...PASSED

Verifying OS Kernel Parameter: semopm ...PASSED

Verifying OS Kernel Parameter: semmni ...PASSED

Verifying OS Kernel Parameter: shmmax ...PASSED

Verifying OS Kernel Parameter: shmmni ...PASSED

Verifying OS Kernel Parameter: shmall ...PASSED

Verifying OS Kernel Parameter: file-max ...PASSED

Verifying OS Kernel Parameter: ip_local_port_range ...PASSED

Verifying OS Kernel Parameter: rmem_default ...PASSED

Verifying OS Kernel Parameter: rmem_max ...PASSED

Verifying OS Kernel Parameter: wmem_default ...PASSED

Verifying OS Kernel Parameter: wmem_max ...PASSED

Verifying OS Kernel Parameter: aio-max-nr ...PASSED

Verifying OS Kernel Parameter: panic_on_oops ...PASSED

Verifying Package: binutils-2.20.51.0.2 ...PASSED

Verifying Package: compat-libcap1-1.10 ...PASSED

Verifying Package: compat-libstdc++-33-3.2.3 (x86_64) ...PASSED

Verifying Package: libgcc-4.4.7 (x86_64) ...PASSED

Verifying Package: libstdc++-4.4.7 (x86_64) ...PASSED

Verifying Package: libstdc++-devel-4.4.7 (x86_64) ...PASSED

Verifying Package: sysstat-9.0.4 ...PASSED

Verifying Package: gcc-4.4.7 ...PASSED

Verifying Package: gcc-c++-4.4.7 ...PASSED

Verifying Package: ksh ...PASSED

Verifying Package: make-3.81 ...PASSED

Verifying Package: glibc-2.12 (x86_64) ...PASSED

Verifying Package: glibc-devel-2.12 (x86_64) ...PASSED

Verifying Package: libaio-0.3.107 (x86_64) ...PASSED

Verifying Package: libaio-devel-0.3.107 (x86_64) ...PASSED

Verifying Package: nfs-utils-1.2.3-15 ...PASSED

Verifying Package: smartmontools-5.43-1 ...PASSED

Verifying Package: net-tools-1.60-110 ...PASSED

Verifying Users With Same UID: 0 ...PASSED

Verifying Current Group ID ...PASSED

Verifying Root user consistency ...PASSED

Verifying Package: cvuqdisk-1.0.10-1 ...FAILED (PRVG-11550)

Verifying Node Addition ...

Verifying CRS Integrity ...PASSED

Verifying Clusterware Version Consistency ...PASSED

Verifying '/oracle/app/12.2.0/grid' ...PASSED

Verifying Node Addition ...PASSED

Verifying Node Connectivity ...

Verifying Hosts File ...PASSED

Verifying Check that maximum (MTU) size packet goes through subnet ...PASSED

Verifying subnet mask consistency for subnet "2.1.176.0" ...PASSED

Verifying subnet mask consistency for subnet "10.11.176.0" ...PASSED

Verifying Node Connectivity ...PASSED

Verifying Multicast check ...PASSED

Verifying ASM Integrity ...

Verifying Node Connectivity ...

Verifying Hosts File ...PASSED

Verifying Check that maximum (MTU) size packet goes through subnet ...PASSED

Verifying subnet mask consistency for subnet "2.1.176.0" ...PASSED

Verifying subnet mask consistency for subnet "10.11.176.0" ...PASSED

Verifying Node Connectivity ...PASSED

Verifying ASM Integrity ...PASSED

Verifying Device Checks for ASM ...

Verifying ASM device sharedness check ...

Verifying Package: cvuqdisk-1.0.10-1 ...FAILED (PRVG-11550)

Verifying ASM device sharedness check ...FAILED (PRVG-11550)

Verifying Access Control List check ...PASSED

Verifying Device Checks for ASM ...FAILED (PRVG-11550)

Verifying Database home availability ...PASSED

Verifying OCR Integrity ...PASSED

Verifying Time zone consistency ...PASSED

Verifying Network Time Protocol (NTP) ...

Verifying '/etc/ntp.conf' ...PASSED

Verifying '/var/run/ntpd.pid' ...PASSED

Verifying Daemon 'ntpd' ...PASSED

Verifying NTP daemon or service using UDP port 123 ...PASSED

Verifying NTP daemon is synchronized with at least one external time source ...PASSED

Verifying Network Time Protocol (NTP) ...PASSED

Verifying User Not In Group "root": grid ...PASSED

Verifying resolv.conf Integrity ...

Verifying (linux) resolv.conf Integrity ...FAILED (PRVG-13159)

Verifying resolv.conf Integrity ...FAILED (PRVG-13159)

Verifying DNS/NIS name service ...PASSED

Verifying User Equivalence ...PASSED

Verifying /dev/shm mounted as temporary file system ...PASSED

Verifying /boot mount ...PASSED

Verifying zeroconf check ...PASSED

Pre-check for node addition was unsuccessful on all the nodes.

Failures were encountered during execution of CVU verification request "stage -pre nodeadd".

Verifying Package: cvuqdisk-1.0.10-1 ...FAILED

jsbp242305: PRVG-11550 : Package "cvuqdisk" is missing on node "jsbp242305"

Verifying Device Checks for ASM ...FAILED

Verifying ASM device sharedness check ...FAILED

Verifying Package: cvuqdisk-1.0.10-1 ...FAILED

jsbp242305: PRVG-11550 : Package "cvuqdisk" is missing on node "jsbp242305"

Verifying resolv.conf Integrity ...FAILED

jsbp242306: PRVG-13159 : On node "jsbp242306" the file "/etc/resolv.conf" could

not be parsed because the file is empty.

jsbp242306: Check for integrity of file "/etc/resolv.conf" failed

jsbp242305: PRVG-13159 : On node "jsbp242305" the file "/etc/resolv.conf" could

not be parsed because the file is empty.

jsbp242305: Check for integrity of file "/etc/resolv.conf" failed

Verifying (Linux) resolv.conf Integrity ...FAILED

jsbp242306: PRVG-13159 : On node "jsbp242306" the file "/etc/resolv.conf"

could not be parsed because the file is empty.

jsbp242305: PRVG-13159 : On node "jsbp242305" the file "/etc/resolv.conf"

could not be parsed because the file is empty.

CVU operation performed: stage -pre nodeadd

Date: Apr 16, 2018 10:52:03 AM

CVU home: /oracle/app/12.2.0/grid/

User: grid

As the output shows, the only failure that needs attention is the missing cvuqdisk package; install it manually:

[root@jsbp242305 grid]# rpm -ivh cvuqdisk-1.0.10-1.rpm

Preparing... ########################################### [100%]

1:cvuqdisk ########################################### [100%]
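For reference, the cvuqdisk rpm ships under the Grid home on the existing node and can be copied over from there; a sketch with an arbitrary destination directory:

[root@jsbp242306 ~]# scp /oracle/app/12.2.0/grid/cv/rpm/cvuqdisk-1.0.10-1.rpm jsbp242305:/tmp/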

The /etc/resolv.conf failure can be ignored here.

3. To extend the Oracle Grid Infrastructure home to the node being added, navigate to the Grid_home/addnode directory on the existing node and run the addnode.sh script as the user that installed Oracle Clusterware.

On the surviving node, as the grid user, run the script from the Grid_home/addnode directory:

cd /oracle/app/12.2.0/grid/addnode

./addnode.sh    -- interactive mode

./addnode.sh -silent "CLUSTER_NEW_NODES={jsbp242305}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={jsbp242305-vip}"    -- silent mode

[grid@jsbp242306 addnode]$ cd /oracle/app/12.2.0/grid/addnode

[grid@jsbp242306 addnode]$ ./addnode.sh -silent "CLUSTER_NEW_NODES={jsbp242305}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={jsbp242305-vip}"

[FATAL] [INS-43045] CLUSTER_NEW_NODE_ROLES parameter was not specified.

CAUSE: The CLUSTER_NEW_NODE_ROLES parameter was not provided for performing addnode operation.

ACTION: Ensure that CLUSTER_NEW_NODE_ROLES parameter is passed. Refer to installation guide for more information on the syntax of passing CLUSTER_NEW_VIRTUAL_HOSTNAMES parameter.

It fails: the CLUSTER_NEW_NODE_ROLES parameter must be specified:

./addnode.sh -silent "CLUSTER_NEW_NODES={jsbp242305}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={jsbp242305-vip}" "CLUSTER_NEW_NODE_ROLES={HUB}"

Try again:

[grid@jsbp242306 addnode]$ ./addnode.sh -silent "CLUSTER_NEW_NODES={jsbp242305}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={jsbp242305-vip}" "CLUSTER_NEW_NODE_ROLES={HUB}"

[FATAL] [INS-40912] Virtual host name: jsbp242305-vip is assigned to another system on the network.

CAUSE: One or more virtual host names appeared to be assigned to another system on the network.

ACTION: Ensure that the virtual host names assigned to each of the nodes in the cluster are not currently in use, and the IP addresses are registered to the domain name you want to use as the virtual host name.

This error was caused by the VIP not having been removed earlier. After removing the VIP, try again:

It still failed. The error log suggested it was the /etc/resolv.conf issue that was ignored earlier, so rename resolv.conf on both nodes:

[root@jsbp242306 ~]# mv /etc/resolv.conf /etc/resolv.conf.bak

[root@jsbp242305 ~]# mv /etc/resolv.conf /etc/resolv.conf.bak

Run it again:

[grid@jsbp242306 ~]$ cd /oracle/app/12.2.0/grid/addnode

[grid@jsbp242306 addnode]$ ./addnode.sh -silent "CLUSTER_NEW_NODES={jsbp242305}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={jsbp242305-vip}" "CLUSTER_NEW_NODE_ROLES={HUB}"

[WARNING] [INS-40111] The specified Oracle Base location is not empty on following nodes: [jsbp242305].

ACTION: Specify an empty location for Oracle Base.

Prepare Configuration in progress.

Prepare Configuration successful.

.................................................. 7% Done.

Copy Files to Remote Nodes in progress.

.................................................. 12% Done.

.................................................. 17% Done.

..............................

Copy Files to Remote Nodes successful.

You can find the log of this install session at:

/oracle/app/oraInventory/logs/addNodeActions2018-04-16_02-23-44-PM.log

Instantiate files in progress.

Instantiate files successful.

.................................................. 49% Done.

Saving cluster inventory in progress.

.................................................. 83% Done.

Saving cluster inventory successful.

The Cluster Node Addition of /oracle/app/12.2.0/grid was successful.

Please check '/oracle/app/12.2.0/grid/inventory/silentInstall2018-04-16_2-23-43-PM.log' for more details.

Setup Oracle Base in progress.

Setup Oracle Base successful.

.................................................. 90% Done.

Update Inventory in progress.

Update Inventory successful.

.................................................. 97% Done.

As a root user, execute the following script(s):

1. /oracle/app/12.2.0/grid/root.sh

Execute /oracle/app/12.2.0/grid/root.sh on the following nodes:

[jsbp242305]

The scripts can be executed in parallel on all the nodes.

.................................................. 100% Done.

Successfully Setup Software.

As prompted, run the following script as root on the newly added node:

/oracle/app/12.2.0/grid/root.sh

Fix the permissions on $ORACLE_HOME/network/admin/samples:

[grid@jsbp242305 admin]$ chmod 750 samples

6. Run the Grid_home/root.sh script on the newly added node as root and run the subsequent script, as instructed.

As root, run Grid_home/root.sh as instructed.

Note:

If root.sh has already been run above, there is no need to run it again.

Check the newly added node again:

$ cluvfy stage -post nodeadd -n jsbp242305 [-verbose]
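Once the post-addition check passes, the node listing from the start of this post can be rerun to confirm that both nodes show as Active:

[grid@jsbp242306 ~]$ olsnodes -s -t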

Restore the resolv.conf files:

[root@jsbp242306 ~]# mv /etc/resolv.conf.bak /etc/resolv.conf

[root@jsbp242305 ~]# mv /etc/resolv.conf.bak /etc/resolv.conf

Note:

For an administrator-managed Oracle RAC database, you may need to use DBCA to add a database instance on the new node.
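A hedged sketch of that step in silent mode, with placeholder database name, instance name and credentials (parameter names such as -nodeName can differ between releases, so check dbca -help for the exact syntax of your version):

[oracle@jsbp242306 ~]$ dbca -silent -addInstance -gdbName orcl -instanceName orcl1 -nodeName jsbp242305 -sysDBAUserName sys -sysDBAPassword <sys_password>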

Appendix: excerpts from the 12.2 official documentation.
Deleting a node:

7.2.2 Deleting a Cluster Node on Linux and UNIX Systems

Delete a node from a cluster on Linux and UNIX systems.

Note:

  • You can remove the Oracle RAC database instance from the node before removing the node from the cluster but this step is not required. If you do not remove the instance, then the instance is still configured but never runs. Deleting a node from a cluster does not remove a node's configuration information from the cluster. The residual configuration information does not interfere with the operation of the cluster.

See Also:Oracle Real Application Clusters Administration and Deployment Guide for more information about deleting an Oracle RAC database instance

  • If you delete the last node of a cluster that is serviced by GNS, then you must delete the entries for that cluster from GNS.

  • If you have nodes in the cluster that are unpinned, then Oracle Clusterware ignores those nodes after a time and there is no need for you to remove them.

  • If one creates node-specific configuration for a node (such as disabling a service on a specific node, or adding the node to the candidate list for a server pool) that node-specific configuration is not removed when the node is deleted from the cluster. Such node-specific configuration must be removed manually.

  • Voting files are automatically backed up in OCR after any changes you make to the cluster.

  • When you want to delete a Leaf Node from an Oracle Flex Cluster, you need only complete steps 1 through 4 of this procedure.

To delete a node from a cluster:

  1. Ensure that Grid_home correctly specifies the full directory path for the Oracle Clusterware home on each node, where Grid_home is the location of the installed Oracle Clusterware software.

  2. Run the following command as either root or the user that installed Oracle Clusterware to determine whether the node you want to delete is active and whether it is pinned:

$ olsnodes -s -t

If the node is pinned, then run the crsctl unpin css command. Otherwise, proceed to the next step.

  3. On the node that you are deleting, depending on whether you have a shared or local Oracle home, complete one of the following procedures as the user that installed Oracle Clusterware:

    • For a local home, deinstall the Oracle Clusterware home from the node that you want to delete, as follows, by running the following command, where Grid_home is the path defined for the Oracle Clusterware home:

$ Grid_home/deinstall/deinstall -local

Caution:

      • If you do not specify the -local flag, then the command removes the Oracle Grid Infrastructure home from every node in the cluster.

      • If you cut and paste the preceding command, then paste it into a text editor before pasting it to the command line to remove any formatting this document might contain.

    • If you have a shared home, then run the following commands in the following order on the node you want to delete.

Run the following command to deconfigure Oracle Clusterware:

$ Grid_home/crs/install/rootcrs.sh -deconfig -force

Run the following command from the Grid_home/oui/bin directory to detach the Grid home:

$ ./runInstaller -detachHome ORACLE_HOME=Grid_home -silent -local

Manually delete any configuration files, as prompted by the installation utility.

  4. From any node that you are not deleting, run the following command from the Grid_home/bin directory as root to delete the node from the cluster:

# crsctl delete node -n node_to_be_deleted

  5. Run the following CVU command to verify that the specified nodes have been successfully deleted from the cluster:

$ cluvfy stage -post nodedel -n node_list [-verbose]

  6. If you remove a cluster node on which Oracle Clusterware is down, then determine whether the VIP for the deleted node still exists, as follows:

$ srvctl config vip -node deleted_node_name

If the VIP still exists, then delete it, as follows:

$ srvctl stop vip -node deleted_node_name
$ srvctl remove vip -vip deleted_vip_name


Adding a node:

7.2.1 Adding a Cluster Node on Linux and UNIX Systems

There are three methods you can use to add a node to your cluster.

Using Rapid Home Provisioning to Add a Node

If you have a Rapid Home Provisioning Server, then you can use Rapid Home Provisioning to add a node to a cluster with one command, as shown in the following example:

$ rhpctl addnode gihome -client rhpclient -newnodes clientnode2:clientnode2-vip -root

The preceding example adds a node named clientnode2 with VIP clientnode2-vip to the Rapid Home Provisioning Client named rhpclient, using root credentials (login for the node you are adding).

Using Oracle Grid Infrastructure Installer to Add a Node

If you do not want to use Rapid Home Provisioning to add a node to the cluster, then you can use the Oracle Grid Infrastructure installer to accomplish the task.

To add a node to the cluster using the Oracle Grid Infrastructure installer

  1. Run ./gridsetup.sh to start the installer.

  2. On the Select Configuration Option page, select Add more nodes to the cluster.

  3. On the Cluster Node Information page, click Add... to provide information for nodes you want to add.

  4. When the verification process finishes on the Perform Prerequisite Checks page, check the summary and then click Install.

Using addnode.sh to Add Nodes

This procedure assumes that:

  • There is an existing cluster with two nodes named node1 and node2

  • You are adding a node named node3 using a virtual node name, node3-vip, that resolves to an IP address, if you are not using DHCP and Grid Naming Service (GNS)

  • You have successfully installed Oracle Clusterware on node1 and node2 in a local (non-shared) home, where Grid_home represents the successfully installed home

To add a node:

  1. Ensure that you have successfully installed Oracle Clusterware on at least one node in your cluster environment. To perform the following procedure, Grid_home must identify your successfully installed Oracle Clusterware home.

    See Also:

    Oracle Grid Infrastructure Installation and Upgrade Guide for Oracle Clusterware installation instructions

  2. Verify the integrity of the cluster and node3:

    $ cluvfy stage -pre nodeadd -n node3 [-fixup] [-verbose]

    You can specify the -fixup option to attempt to fix the cluster or node if the verification fails.

  3. To extend the Oracle Grid Infrastructure home to the node3, navigate to the Grid_home/addnode directory on node1 and run the addnode.sh script as the user that installed Oracle Clusterware.

    To run addnode.sh in interactive mode, run addnode.sh from Grid_home/addnode.

    You can also run addnode.sh in silent mode for both Oracle Clusterware standard clusters and Oracle Flex Clusters.

    For an Oracle Clusterware standard cluster:

    ./addnode.sh -silent "CLUSTER_NEW_NODES={node3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={node3-vip}"

    If you are adding node3 to an Oracle Flex Cluster, then you can specify the node role on the command line, as follows:

    ./addnode.sh -silent "CLUSTER_NEW_NODES={node3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={node3-vip}" "CLUSTER_NEW_NODE_ROLES={hub}"

    Notes:

    If you are adding node3 to an extended cluster, then you can specify the node role on the command line, as follows:

    ./addnode.sh -silent "CLUSTER_NEW_NODES={node3,node4}" "CLUSTER_NEW_NODE_SITES={site1,site2}"
    • Hub Nodes always have VIPs but Leaf Nodes may not. If you use the preceding syntax to add multiple nodes to the cluster, then you can use syntax similar to the following, where node3 is a Hub Node and node4 is a Leaf Node:

      ./addnode.sh -silent "CLUSTER_NEW_NODES={node3,node4}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={node3-vip}"
        "CLUSTER_NEW_NODE_ROLES={hub,leaf}"
    • When you are adding Leaf nodes, only, you do not need to use the CLUSTER_NEW_VIRTUAL_HOSTNAMES parameter. For example:

      ./addnode.sh -silent "CLUSTER_NEW_NODES={node3,node4}" "CLUSTER_NEW_NODE_ROLES={leaf,leaf}"
  4. If prompted, then run the orainstRoot.sh script as root to populate the /etc/oraInst.loc file with the location of the central inventory. For example:

    # /opt/oracle/oraInventory/orainstRoot.sh
  5. If you have an Oracle RAC or Oracle RAC One Node database configured on the cluster and you have a local Oracle home, then do the following to extend the Oracle database home to node3:

    If you have a shared Oracle home that is shared using Oracle Automatic Storage Management Cluster File System (Oracle ACFS), then do the following to extend the Oracle database home to node3:

    If you have a shared Oracle home on a shared file system that is not Oracle ACFS, then you must first create a mount point for the Oracle RAC database home on the target node, mount and attach the Oracle RAC database home, and update the Oracle Inventory, as follows:

    Note:

    After running addnode.sh, ensure the Grid_home/network/admin/samples directory has permissions set to 750.

    1. Run the srvctl config database -db db_name command on an existing node in the cluster to obtain the mount point information.

    2. Run the following command as root on node3 to create the mount point:

      # mkdir -p mount_point_path
    3. Mount the file system that hosts the Oracle RAC database home.

    4. Run the following command as the user that installed Oracle RAC from the Oracle_home/oui/bin directory on the node you are adding to add the Oracle RAC database home:

      $ ./runInstaller -attachHome ORACLE_HOME="ORACLE_HOME" "CLUSTER_NODES={local_node_name}" LOCAL_NODE="node_name" ORACLE_HOME_NAME="home_name" -cfs

      Navigate to the Oracle_home/addnode directory on node1 and run the addnode.sh script as the user that installed Oracle RAC using the following syntax:

      $ ./addnode.sh -noCopy "CLUSTER_NEW_NODES={node3}"

      Note:

      Use the -noCopy option because the Oracle home on the destination node is already fully populated with software.

    5. Run the Grid_home/root.sh script on node3 as root, where Grid_home is the Oracle Grid Infrastructure home.

    6. Run the following command as the user that installed Oracle RAC from the Oracle_home/oui/bin directory on the node you are adding to add the Oracle RAC database home:

      $ ./runInstaller -attachHome ORACLE_HOME="ORACLE_HOME" "CLUSTER_NODES={node3}"
      LOCAL_NODE="node3" ORACLE_HOME_NAME="home_name" -cfs
    7. Navigate to the Oracle_home/addnode directory on node1 and run the addnode.sh script as the user that installed Oracle RAC using the following syntax:

      $ ./addnode.sh -noCopy "CLUSTER_NEW_NODES={node3}"

      Note:

      Use the -noCopy option because the Oracle home on the destination node is already fully populated with software.

    8. Navigate to the Oracle_home/addnode directory on node1 and run the addnode.sh script as the user that installed Oracle RAC using the following syntax:

      $ ./addnode.sh "CLUSTER_NEW_NODES={node3}"
    9. Run the Oracle_home/root.sh script on node3 as root, where Oracle_home is the Oracle RAC home.

  6. Run the Grid_home/root.sh script on the node3 as root and run the subsequent script, as instructed.

    Note:

    • If you ran the root.sh script in the step 5, then you do not need to run it again.

    • If you have a policy-managed database, then you must ensure that the Oracle home is cloned to the new node before you run the root.sh script.

    • If you have any administrator-managed database instances configured on the nodes which are going to be added to the cluster, then you must extend the Oracle home to the new node before you run the root.sh script.

      Alternatively, remove the administrator-managed database instances using the srvctl remove instance command.

  7. Start the Oracle ACFS resource on the new node by running the following command as root from the Grid_home/bin directory:

    # srvctl start filesystem -device volume_device_name -node node3

    Note:

    Ensure the Oracle ACFS resources, including Oracle ACFS registry resource and Oracle ACFS file system resource where the Oracle home is located, are online on the newly added node.

  8. Run the following CVU command as the user that installed Oracle Clusterware to check cluster integrity. This command verifies that any number of specified nodes has been successfully added to the cluster at the network, shared storage, and clusterware levels:

    $ cluvfy stage -post nodeadd -n node3 [-verbose]

Check whether either a policy-managed or administrator-managed Oracle RAC database is configured to run on node3 (the newly added node). If you configured an administrator-managed Oracle RAC database, you may need to use DBCA to add an instance to the database to run on this newly added node.
