Installing Clusterware on Oracle Enterprise Linux 4: "The location /ocfs/clusterware/ocr, entered for the Oracle Cluster Registry (OCR), is not shared across all the nodes in the cluster"
Problem:
Installing Clusterware on Oracle Enterprise Linux 4: the OCR is not shared across all the nodes in the cluster
Environment: VMware Server 1.0.6 + RH4
Clusterware version: 10201
While installing Clusterware, the Specify Oracle Cluster Registry Location screen reports the following error:
The location /ocfs/clusterware/ocr, entered for the
Oracle Cluster Registry (OCR), is not shared across all
the nodes in the cluster. Specify a shared raw
partition or cluster file system file that is visible by the
same name on all nodes of the cluster.
The /etc/hosts file on both virtual machines is as follows:
# Do not remove the following line, or various programs
# that require network functionality will fail.
#
127.0.0.1 localhost
192.168.1.46 rac1.myrac.com rac1
192.168.1.146 rac1-vip.myrac.com rac1-vip
10.10.10.46 rac1-priv.myrac.com rac1-priv
192.168.1.47 rac2.myrac.com rac2
192.168.1.147 rac2-vip.myrac.com rac2-vip
10.10.10.47 rac2-priv.myrac.com rac2-priv
The public and private addresses are bound to eth0 and eth1 respectively, with eth0's gateway set to 192.168.1.1.
The two machines can reach each other over ssh.
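For reference, a minimal way to sanity-check user equivalence from rac1 is something like the following (a sketch; it assumes the install is done as the oracle user, and the mirror-image checks should be run from rac2):
[oracle@rac1 ~]$ ssh rac2 date
[oracle@rac1 ~]$ ssh rac2-priv date
[oracle@rac1 ~]$ ssh rac1 date
None of these should prompt for a password; if they do, the installer's cluster checks will typically fail.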
Checking the status with mounted.ocfs2 shows the following:
rac1:
[root@rac1 ~]# mounted.ocfs2 -f
Device FS Nodes
/dev/sdb1 ocfs2 rac1
rac2:
[root@rac2 ~]# mounted.ocfs2 -f
Device FS Nodes
/dev/sdb1 ocfs2 rac2
The ocfs2 configuration file is as follows:
[root@rac2 ~]# cat /etc/ocfs2/cluster.conf
node:
ip_port = 7777
ip_address = 192.168.1.46
number = 0
name = rac1
cluster = ocfs2
node:
ip_port = 7777
ip_address = 192.168.1.47
number = 1
name = rac2
cluster = ocfs2
cluster:
node_count = 2
name = ocfs2
I would like to ask whether the ocfs2 configuration is incorrect. I formatted the disk on each side separately, then mounted it. If everything were set up correctly, should mounted.ocfs2 show rac1,rac2 under Nodes?
If it is not correct, how should I fix it? Please help; I have spent the whole afternoon on this error and still cannot tell where it goes wrong.
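For reference, once /etc/ocfs2/cluster.conf is identical on both nodes and the O2CB stack is running, a rough check looks like the following (a sketch; it assumes /ocfs is the mount point used for the clusterware files):
[root@rac1 ~]# service o2cb status              # the O2CB cluster stack should report cluster ocfs2 as online
[root@rac1 ~]# mount -t ocfs2 /dev/sdb1 /ocfs   # mount the same shared partition at the same path on each node
[root@rac1 ~]# mounted.ocfs2 -f                 # on a truly shared device, both rac1 and rac2 appear under Nodes
With a genuinely shared disk, mounted.ocfs2 -f should indeed list rac1, rac2 under Nodes; seeing only the local node on each machine suggests the two VMs are each writing to their own private copy of the disk.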
Solution:
1. First check whether the ocfs disk is mounted on both nodes, using the command # df
If it is mounted on both, go to step 2.
2. The key step: add the following to rac1.vmx and rac2.vmx on the two nodes (disk.locking = "false" lets both VMs open the same virtual disk at the same time, the diskLib settings turn off host-side disk caching, and scsi1.sharedBus = "virtual" puts the disk on a shared SCSI bus):
disk.locking = "false"
diskLib.dataCacheMaxSize = "0"
diskLib.dataCacheMaxReadAheadSize = "0"
diskLib.DataCacheMinReadAheadSize = "0"
diskLib.dataCachePageSize = "4096"
diskLib.maxUnsyncedWrites = "0"
scsi1.sharedBus = "virtual"
scsi1:0.deviceType = "disk"
scsi1:0.redo = ""
scsi1:1.deviceType = "disk"
scsi1:1.redo = ""
scsi1:2.deviceType = "disk"
scsi1:2.redo = ""
scsi1:3.deviceType = "disk"
scsi1:3.redo = ""
.......
.......
Reboot both nodes.
3. Ha, problem solved. (A quick way to confirm the disk is now really shared is sketched below.)
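A minimal post-reboot check, assuming /ocfs is still the mount point on both nodes (the test file name is just an example):
[root@rac1 ~]# touch /ocfs/clusterware/share_test
[root@rac2 ~]# ls -l /ocfs/clusterware/share_test   # the file created on rac1 should be visible on rac2
[root@rac1 ~]# mounted.ocfs2 -f                     # Nodes should now list rac1, rac2 for /dev/sdb1
If both checks pass, re-running the Clusterware installer should accept /ocfs/clusterware/ocr as the OCR location.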