Friday, October 4, 2013

HAIP - Configure Multiple Private Interconnect Interfaces in Linux (11.2)

How to add one more network interface to the private interconnect (11.2)
==========================================================


1) Below is the current setup

a) Two-node RAC with 11.2.0.3 Oracle Clusterware.

b) Node details.

[oracle@rhel11gr2rac1 bin]$ ./olsnodes -n -i -s
rhel11gr2rac1   1       rhel11gr2rac1-vip       Active
rhel11gr2rac2   2       rhel11gr2rac2-vip       Active

c) Private interconnect IPs.

[oracle@rhel11gr2rac1 bin]$ ./olsnodes -l -p
rhel11gr2rac1   10.10.10.20

[oracle@rhel11gr2rac2 bin]$ ./olsnodes -l -p
rhel11gr2rac2   10.10.10.21


2) Below are the new IP addresses we are going to add to the private interconnect. Update the /etc/hosts file on both nodes with the following entries.

10.10.10.30     rhel11gr2rac1-priv2.manzoor.com rhel11gr2rac1-priv2
10.10.10.31     rhel11gr2rac2-priv2.manzoor.com rhel11gr2rac2-priv2
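
(Optional) Before going further, it is worth confirming that the new names resolve on both nodes. A quick check (a sketch; run on each node):

# getent hosts rhel11gr2rac1-priv2
# getent hosts rhel11gr2rac2-priv2

Each command should return the corresponding 10.10.10.30 / 10.10.10.31 entry added above.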


3) Configure the new network interface (eth2) on each node and assign the above IPs to it.


Node 1 -

[root@rhel11gr2rac1 ~]# cd /etc/sysconfig/network-scripts/
[root@rhel11gr2rac1 network-scripts]# ifdown eth2

-- Open the eth2 configuration file and update the necessary details (the eth1 configuration can be used as a reference).

[root@rhel11gr2rac1 network-scripts]# vi ifcfg-eth2
# Intel Corporation 82545EM Gigabit Ethernet Controller (Copper)
DEVICE=eth2
HWADDR=00:0c:29:89:94:4d
ONBOOT=yes
HOTPLUG=no
BOOTPROTO=none
NETMASK=255.255.255.0
IPADDR=10.10.10.30
GATEWAY=10.10.10.0
TYPE=Ethernet
USERCTL=no
IPV6INIT=no
PEERDNS=yes

[root@rhel11gr2rac1 network-scripts]# ifup eth2
[root@rhel11gr2rac1 network-scripts]# ifconfig eth2

eth2      Link encap:Ethernet  HWaddr 00:0C:29:89:94:4D
          inet addr:10.10.10.30  Bcast:10.10.10.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe89:944d/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:375 errors:0 dropped:0 overruns:0 frame:0
          TX packets:215 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:77632 (75.8 KiB)  TX bytes:35114 (34.2 KiB)


Node 2 -

[root@rhel11gr2rac2 ~]# cd /etc/sysconfig/network-scripts/
[root@rhel11gr2rac2 network-scripts]# ifdown eth2
[root@rhel11gr2rac2 network-scripts]# vi ifcfg-eth2

# Intel Corporation 82545EM Gigabit Ethernet Controller (Copper)
DEVICE=eth2
HWADDR=00:0c:29:75:b5:10
ONBOOT=yes
HOTPLUG=no
BOOTPROTO=none
NETMASK=255.255.255.0
IPADDR=10.10.10.31
GATEWAY=10.10.10.0
TYPE=Ethernet
USERCTL=no
IPV6INIT=no
PEERDNS=yes

[root@rhel11gr2rac2 network-scripts]# ifup eth2
[root@rhel11gr2rac2 network-scripts]# ifconfig eth2
eth2      Link encap:Ethernet  HWaddr 00:0C:29:75:B5:10
          inet addr:10.10.10.31  Bcast:10.10.10.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe75:b510/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:489 errors:0 dropped:0 overruns:0 frame:0
          TX packets:186 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:103027 (100.6 KiB)  TX bytes:27884 (27.2 KiB)
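
(Optional) If you also want to confirm that the link is physically detected on the new card before handing it to the clusterware, ethtool can be used; a sketch (run as root on each node):

[root@rhel11gr2rac1 ~]# ethtool eth2 | grep "Link detected"

The expected result is "Link detected: yes".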


4) Follow the steps below to add the new interface to the private network.


a) As of 11.2 Grid Infrastructure, the private network configuration is stored not only in the OCR but also in the gpnp profile. If the private network is not available or its definition is incorrect, the CRSD process will not start and any subsequent changes to the OCR will be impossible. Therefore, care needs to be taken when modifying the private network configuration, and it is important to perform the changes in the correct order. Please also note that manual modification of the gpnp profile is not supported.

b) Take a backup of the profile.xml file on all nodes.

Node 1

[oracle@rhel11gr2rac1 ~]$ cd /grid/11.2/gpnp/rhel11gr2rac1/profiles/peer/
[oracle@rhel11gr2rac1 peer]$ cp profile.xml profile.xml_bkp_4thoct
[oracle@rhel11gr2rac1 peer]$ ls -lrt
total 20
-rw-r--r-- 1 oracle oinstall 1873 Mar 23  2013 profile_orig.xml
-rw-r--r-- 1 oracle oinstall 1880 Mar 23  2013 profile.old
-rw-r--r-- 1 oracle oinstall 1886 Mar 23  2013 profile.xml
-rw-r--r-- 1 oracle oinstall 1886 Oct  3 18:35 pending.xml
-rw-r--r-- 1 oracle oinstall 1886 Oct  3 19:48 profile.xml_bkp_4thoct


Node 2

[oracle@rhel11gr2rac2 peer]$ cd /grid/11.2/gpnp/rhel11gr2rac2/profiles/peer
[oracle@rhel11gr2rac2 peer]$ cp profile.xml profile.xml_bkp_4thoct
[oracle@rhel11gr2rac2 peer]$ ls -lrt
total 20
-rw-r--r-- 1 oracle oinstall 1873 Mar 23  2013 profile_orig.xml
-rw-r--r-- 1 oracle oinstall 1880 Mar 23  2013 profile.old
-rw-r--r-- 1 oracle oinstall 1886 Mar 23  2013 profile.xml
-rw-r--r-- 1 oracle oinstall 1886 Oct  3 18:35 pending.xml
-rw-r--r-- 1 oracle oinstall 1886 Oct  3 19:45 profile.xml_bkp_4thoct
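
(Optional) To view the current gpnp profile contents without touching the files, gpnptool can print the profile to standard output; a sketch (run as the Grid software owner from the Grid home bin directory):

[oracle@rhel11gr2rac1 bin]$ ./gpnptool get

This dumps the profile XML, including the current private network definition. Remember that the profile itself must never be edited manually.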


c) Ensure Oracle Clusterware is up and running on all nodes.

Node 1

[oracle@rhel11gr2rac1 bin]$ ./crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online


Node 2


[oracle@rhel11gr2rac2 bin]$ ./crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
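
(Optional) Instead of checking each node separately, the clusterware state of all nodes can be verified from a single node; a sketch:

[oracle@rhel11gr2rac1 bin]$ ./crsctl check cluster -all

This reports the CRS, CSS and EVM status for every cluster node in one output.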

d) We will use the oifcfg tool to configure the network.

[oracle@rhel11gr2rac1 bin]$ ./oifcfg -h

Name:
        oifcfg - Oracle Interface Configuration Tool.

Usage:  oifcfg iflist [-p [-n]]
        oifcfg setif {-node <nodename> | -global} {<if_name>/<subnet>:<if_type>}...
        oifcfg getif [-node <nodename> | -global] [ -if <if_name>[/<subnet>] [-type <if_type>] ]
        oifcfg delif {{-node <nodename> | -global} [<if_name>[/<subnet>]] [-force] | -force}
        oifcfg [-help]

        <nodename> - name of the host, as known to a communications network
        <if_name>  - name by which the interface is configured in the system
        <subnet>   - subnet address of the interface
        <if_type>  - type of the interface { cluster_interconnect | public }



e) Get the current configuration details.


[oracle@rhel11gr2rac1 bin]$ ./oifcfg getif
eth0  192.168.0.0  global  public
eth1  10.10.10.0   global  cluster_interconnect


f) Add the new cluster interconnect information.


$ oifcfg setif -global <interface>/<subnet>:cluster_interconnect

interface -- eth2
subnet    -- We will add the new interface on the same subnet as the existing interconnect (10.10.10.0).

-- We can use the below command to find the subnet of an interface.

[oracle@rhel11gr2rac1 bin]$ ./oifcfg iflist
eth0  192.168.0.0
eth1  10.10.10.0
eth1  169.254.0.0
eth2  10.10.10.0


-- Our new network interface is eth2, and hence its subnet is 10.10.10.0.

-- Note

i) This can be done with the -global option even if the interface is not yet available, but it cannot be done with the -node option if the interface is not available, as that will lead to node eviction.

ii) If you are adding a second private network rather than replacing the existing one, ensure the MTU size of both interfaces is the same; otherwise instance startup will report an error like the following:


ORA-27504: IPC error creating OSD context
ORA-27300: OS system dependent operation:if MTU failed with status: 0
ORA-27301: OS failure message: Error 0
ORA-27302: failure occurred at: skgxpcini2
ORA-27303: additional information: requested interface lan1:801 has a different MTU (1500) than lan3:801 (9000), which is not supported. Check output from ifconfig command


Check the MTU of the private interfaces.

-- Node 1


[root@rhel11gr2rac1 ~]# ifconfig eth1
eth1      Link encap:Ethernet  HWaddr 00:0C:29:89:94:43
          inet addr:10.10.10.20  Bcast:10.10.10.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe89:9443/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:220044 errors:0 dropped:0 overruns:0 frame:0
          TX packets:186665 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:147597750 (140.7 MiB)  TX bytes:108973996 (103.9 MiB)


[root@rhel11gr2rac1 ~]# ifconfig eth2
eth2      Link encap:Ethernet  HWaddr 00:0C:29:89:94:4D
          inet addr:10.10.10.30  Bcast:10.10.10.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe89:944d/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:410 errors:0 dropped:0 overruns:0 frame:0
          TX packets:215 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:84329 (82.3 KiB)  TX bytes:35114 (34.2 KiB)




[root@rhel11gr2rac2 ~]# ifconfig eth1
eth1      Link encap:Ethernet  HWaddr 00:0C:29:75:B5:06
          inet addr:10.10.10.21  Bcast:10.10.10.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe75:b506/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:187806 errors:0 dropped:0 overruns:0 frame:0
          TX packets:220819 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:109461264 (104.3 MiB)  TX bytes:148275753 (141.4 MiB)



[root@rhel11gr2rac2 ~]# ifconfig eth2
eth2      Link encap:Ethernet  HWaddr 00:0C:29:75:B5:10
          inet addr:10.10.10.31  Bcast:10.10.10.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe75:b510/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:498 errors:0 dropped:0 overruns:0 frame:0
          TX packets:186 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:103963 (101.5 KiB)  TX bytes:27884 (27.2 KiB)

-- All the above interfaces have the same MTU of 1500.
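
(Optional) A quicker way to compare MTUs than reading the full ifconfig output is to read the mtu value from sysfs; a sketch assuming passwordless ssh as root between the nodes:

[root@rhel11gr2rac1 ~]# for node in rhel11gr2rac1 rhel11gr2rac2; do for nic in eth1 eth2; do echo -n "$node $nic mtu: "; ssh $node cat /sys/class/net/$nic/mtu; done; done

All four values should report 1500 here.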

Now add the new interface as shown below.


[oracle@rhel11gr2rac1 bin]$ ./oifcfg setif -global eth2/10.10.10.0:cluster_interconnect

Verify the changes made.

[oracle@rhel11gr2rac1 bin]$ ./oifcfg getif
eth0  192.168.0.0  global  public
eth1  10.10.10.0  global  cluster_interconnect
eth2  10.10.10.0  global  cluster_interconnect

[oracle@rhel11gr2rac2 bin]$ ./oifcfg getif
eth0  192.168.0.0  global  public
eth1  10.10.10.0  global  cluster_interconnect
eth2  10.10.10.0  global  cluster_interconnect
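
(Optional) Since oifcfg setif also records the change in the gpnp profile, you can confirm the new interface is now referenced there; a simple sketch (the grep is only illustrative):

[oracle@rhel11gr2rac1 peer]$ grep eth2 /grid/11.2/gpnp/rhel11gr2rac1/profiles/peer/profile.xml

The interface name should appear in the (machine-generated) XML; as noted earlier, the file itself must not be edited by hand.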


g) Shut down the Clusterware on all nodes (as root).


[root@rhel11gr2rac1 bin]# ./crsctl stop crs
[root@rhel11gr2rac2 bin]# ./crsctl stop crs


h) If you configured the interface with oifcfg before the network card was available, make the changes at the OS level now and check that the network is reachable before bringing up CRS.


Ping test

Node 1

[root@rhel11gr2rac1 bin]# ping rhel11gr2rac1-priv2
PING rhel11gr2rac1-priv2.manzoor.com (10.10.10.30) 56(84) bytes of data.
64 bytes from rhel11gr2rac1-priv2.manzoor.com (10.10.10.30): icmp_seq=1 ttl=64 time=0.042 ms
64 bytes from rhel11gr2rac1-priv2.manzoor.com (10.10.10.30): icmp_seq=2 ttl=64 time=0.038 ms
64 bytes from rhel11gr2rac1-priv2.manzoor.com (10.10.10.30): icmp_seq=3 ttl=64 time=0.040 ms

--- rhel11gr2rac1-priv2.manzoor.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.038/0.040/0.042/0.001 ms

[root@rhel11gr2rac1 bin]# ping rhel11gr2rac2-priv2
PING rhel11gr2rac2-priv2.manzoor.com (10.10.10.31) 56(84) bytes of data.
64 bytes from rhel11gr2rac2-priv2.manzoor.com (10.10.10.31): icmp_seq=1 ttl=64 time=1.77 ms
64 bytes from rhel11gr2rac2-priv2.manzoor.com (10.10.10.31): icmp_seq=2 ttl=64 time=0.333 ms
64 bytes from rhel11gr2rac2-priv2.manzoor.com (10.10.10.31): icmp_seq=3 ttl=64 time=0.292 ms
64 bytes from rhel11gr2rac2-priv2.manzoor.com (10.10.10.31): icmp_seq=4 ttl=64 time=0.300 ms
64 bytes from rhel11gr2rac2-priv2.manzoor.com (10.10.10.31): icmp_seq=5 ttl=64 time=0.299 ms
64 bytes from rhel11gr2rac2-priv2.manzoor.com (10.10.10.31): icmp_seq=6 ttl=64 time=0.463 ms

--- rhel11gr2rac2-priv2.manzoor.com ping statistics ---
6 packets transmitted, 6 received, 0% packet loss, time 4999ms
rtt min/avg/max/mdev = 0.292/0.576/1.772/0.538 ms


Node 2


[root@rhel11gr2rac2 bin]# ping rhel11gr2rac2-priv2
PING rhel11gr2rac2-priv2.manzoor.com (10.10.10.31) 56(84) bytes of data.
64 bytes from rhel11gr2rac2-priv2.manzoor.com (10.10.10.31): icmp_seq=1 ttl=64 time=0.048 ms
64 bytes from rhel11gr2rac2-priv2.manzoor.com (10.10.10.31): icmp_seq=2 ttl=64 time=0.050 ms
64 bytes from rhel11gr2rac2-priv2.manzoor.com (10.10.10.31): icmp_seq=3 ttl=64 time=0.045 ms

--- rhel11gr2rac2-priv2.manzoor.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.045/0.047/0.050/0.008 ms
[root@rhel11gr2rac2 bin]# ping rhel11gr2rac1-priv2
PING rhel11gr2rac1-priv2.manzoor.com (10.10.10.30) 56(84) bytes of data.
64 bytes from rhel11gr2rac1-priv2.manzoor.com (10.10.10.30): icmp_seq=1 ttl=64 time=2.20 ms
64 bytes from rhel11gr2rac1-priv2.manzoor.com (10.10.10.30): icmp_seq=2 ttl=64 time=0.401 ms
64 bytes from rhel11gr2rac1-priv2.manzoor.com (10.10.10.30): icmp_seq=3 ttl=64 time=0.321 ms

--- rhel11gr2rac1-priv2.manzoor.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.321/0.976/2.207/0.871 ms
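
(Optional) Besides manual pings, cluvfy can be used to validate node connectivity across the interfaces; a sketch run as the Grid software owner:

[oracle@rhel11gr2rac1 bin]$ ./cluvfy comp nodecon -n rhel11gr2rac1,rhel11gr2rac2 -verbose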


i) Start the CRS.


[root@rhel11gr2rac1 bin]# ./crsctl start crs
CRS-4123: Oracle High Availability Services has been started.

[root@rhel11gr2rac2 bin]# ./crsctl start crs
CRS-4123: Oracle High Availability Services has been started.


j) Now verify the status

Node 1

[oracle@rhel11gr2rac1 bin]$ ./oifcfg getif
eth0  192.168.0.0  global  public
eth1  10.10.10.0  global  cluster_interconnect
eth2  10.10.10.0  global  cluster_interconnect

[oracle@rhel11gr2rac1 bin]$ ./olsnodes -l -p
rhel11gr2rac1   10.10.10.20,10.10.10.30


-- Both private interconnect IPs are now listed.


[oracle@rhel11gr2rac2 bin]$ ./oifcfg getif
eth0  192.168.0.0  global  public
eth1  10.10.10.0  global  cluster_interconnect
eth2  10.10.10.0  global  cluster_interconnect

[oracle@rhel11gr2rac2 bin]$ ./olsnodes -l -p
rhel11gr2rac2   10.10.10.21,10.10.10.31
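
(Optional) The HAIP facility is implemented as the init resource ora.cluster_interconnect.haip; its state can be checked as follows (a sketch, run as root or the Grid owner):

[root@rhel11gr2rac1 bin]# ./crsctl stat res ora.cluster_interconnect.haip -init

The resource should report STATE=ONLINE on the node.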




========================================================================
For 11.2.0.2+: (HAIP address will show in alert log instead of private IP)
eg.

Cluster communication is configured to use the following interface(s) for this instance
  169.254.86.97
=======================================================================================

From alert log
==============

Private Interface 'eth1:1' configured from GPnP for use as a private interconnect.
  [name='eth1:1', type=1, ip=169.254.48.13, mac=00-0c-29-75-b5-06, net=169.254.0.0/17, mask=255.255.128.0, use=haip:cluster_interconnect/62]
Private Interface 'eth2:1' configured from GPnP for use as a private interconnect.
  [name='eth2:1', type=1, ip=169.254.227.73, mac=00-0c-29-75-b5-10, net=169.254.128.0/17, mask=255.255.128.0, use=haip:cluster_interconnect/62]

.....
Cluster communication is configured to use the following interface(s) for this instance
  169.254.48.13
  169.254.227.73



Note: interconnect communication will use both virtual private IPs; in case of a network failure, as long as at least one private network adapter is functioning, both IPs will remain active.


From the database:


SQL> select * from GV$configured_interconnects where is_public = 'NO';

   INST_ID NAME            IP_ADDRESS       IS_ SOURCE
---------- --------------- ---------------- --- -------------------------------
         2 eth1:1          169.254.48.13    NO
         2 eth2:1          169.254.227.73   NO
         1 eth1:1          169.254.62.58    NO
         1 eth2:1          169.254.250.70   NO


Here each private interface has a virtual IP: on node 1, eth1 has the VIP 169.254.62.58 and eth2 has the VIP 169.254.250.70; likewise, on node 2, eth1 has the VIP 169.254.48.13 and eth2 has the VIP 169.254.227.73.

These VIPs are used for failover: if one network interface goes down, its VIP fails over to the other available interface.

Eg.

If the eth1 interface fails on node 1, its VIP 169.254.62.58 will fail over to eth2. Thus, as long as at least one private network adapter is functioning, both IPs will remain active.
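
(Optional) In addition to GV$CONFIGURED_INTERCONNECTS, the interconnects actually in use by an instance are visible in V$CLUSTER_INTERCONNECTS; a sketch (assumes the ASM/DB environment is already set for sqlplus):

[oracle@rhel11gr2rac1 ~]$ echo "select name, ip_address, is_public, source from v\$cluster_interconnects;" | sqlplus -s / as sysdba

With HAIP enabled, this lists the 169.254.x.x addresses rather than the physical private IPs.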



Testing..

   INST_ID NAME            IP_ADDRESS       IS_ SOURCE
---------- --------------- ---------------- --- -------------------------------
         2 eth1:1          169.254.48.13    NO
         2 eth2:1          169.254.227.73   NO
         1 eth1:1          169.254.62.58    NO
         1 eth2:1          169.254.250.70   NO


Let's bring down the interface eth1 on node 1.

[root@rhel11gr2rac1 ~]# ifdown eth1


Snippet from the node 1 database alert log:

Thu Oct 03 23:38:45 2013
SKGXP: ospid 16542: network interface query failed for IP address 169.254.62.58.
SKGXP: [error 11132]


ifconfig output on node 1:

eth1      Link encap:Ethernet  HWaddr 00:0C:29:89:94:43
          BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:375106 errors:0 dropped:0 overruns:0 frame:0
          TX packets:310254 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:251856284 (240.1 MiB)  TX bytes:186387264 (177.7 MiB)

eth2      Link encap:Ethernet  HWaddr 00:0C:29:89:94:4D
          inet addr:10.10.10.30  Bcast:10.10.10.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe89:944d/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:157698 errors:0 dropped:0 overruns:0 frame:0
          TX packets:139343 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:113436206 (108.1 MiB)  TX bytes:74727012 (71.2 MiB)

eth2:1    Link encap:Ethernet  HWaddr 00:0C:29:89:94:4D
          inet addr:169.254.62.58  Bcast:169.254.127.255  Mask:255.255.128.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

eth2:2    Link encap:Ethernet  HWaddr 00:0C:29:89:94:4D
          inet addr:169.254.250.70  Bcast:169.254.255.255  Mask:255.255.128.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1



-- Since the eth1 interface is down, the VIP 169.254.62.58 has failed over to the eth2 interface (shown as eth2:1).




Now bring down eth1 on node 2 as well.

[root@rhel11gr2rac2 ~]# ifdown eth1


ifconfig output on node 2:

eth1      Link encap:Ethernet  HWaddr 00:0C:29:75:B5:06
          BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:326893 errors:0 dropped:0 overruns:0 frame:0
          TX packets:377493 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:192462174 (183.5 MiB)  TX bytes:259305297 (247.2 MiB)

eth2      Link encap:Ethernet  HWaddr 00:0C:29:75:B5:10
          inet addr:10.10.10.31  Bcast:10.10.10.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe75:b510/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:132279 errors:0 dropped:0 overruns:0 frame:0
          TX packets:165414 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:72715227 (69.3 MiB)  TX bytes:114056247 (108.7 MiB)

eth2:1    Link encap:Ethernet  HWaddr 00:0C:29:75:B5:10
          inet addr:169.254.48.13  Bcast:169.254.127.255  Mask:255.255.128.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

eth2:2    Link encap:Ethernet  HWaddr 00:0C:29:75:B5:10
          inet addr:169.254.227.73  Bcast:169.254.255.255  Mask:255.255.128.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1


-- Since the eth1 interface is down, the VIP 169.254.48.13 has failed over to eth2 (shown as eth2:1).


-- Even though one interface is down on each node, we still have two VIPs on both nodes, served by the remaining network interface.

The oifcfg output is as below.

[root@rhel11gr2rac2 bin]# ./oifcfg iflist -n -p
eth0  192.168.0.0  PRIVATE  255.255.255.0
eth2  10.10.10.0  PRIVATE  255.255.255.0
eth2  169.254.0.0  UNKNOWN  255.255.128.0
eth2  169.254.128.0  UNKNOWN  255.255.128.0


-- Now let's bring eth1 back up on node 2.




[root@rhel11gr2rac2 bin]# ifup eth1


ifconfig output on node 2:


eth1      Link encap:Ethernet  HWaddr 00:0C:29:75:B5:06
          inet addr:10.10.10.21  Bcast:10.10.10.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe75:b506/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:327807 errors:0 dropped:0 overruns:0 frame:0
          TX packets:378590 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:192898043 (183.9 MiB)  TX bytes:260037599 (247.9 MiB)

eth1:1    Link encap:Ethernet  HWaddr 00:0C:29:75:B5:06
          inet addr:169.254.48.13  Bcast:169.254.127.255  Mask:255.255.128.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

eth1:2    Link encap:Ethernet  HWaddr 00:0C:29:75:B5:06
          inet addr:169.254.227.73  Bcast:169.254.255.255  Mask:255.255.128.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

eth2      Link encap:Ethernet  HWaddr 00:0C:29:75:B5:10
          inet addr:10.10.10.31  Bcast:10.10.10.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe75:b510/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:138848 errors:0 dropped:0 overruns:0 frame:0
          TX packets:173925 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:75828753 (72.3 MiB)  TX bytes:120251886 (114.6 MiB)


-- Now both VIPs on node 2 are served by eth1, even though eth2 is up and running; this is because one interface is still down on node 1.


Now bring eth1 back up on node 1 as well.

[root@rhel11gr2rac1 ~]# ifup eth1

ifconfig output on node 1:

eth1      Link encap:Ethernet  HWaddr 00:0C:29:89:94:43
          inet addr:10.10.10.20  Bcast:10.10.10.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe89:9443/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:375296 errors:0 dropped:0 overruns:0 frame:0
          TX packets:310382 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:251983004 (240.3 MiB)  TX bytes:186445931 (177.8 MiB)

eth1:1    Link encap:Ethernet  HWaddr 00:0C:29:89:94:43
          inet addr:169.254.62.58  Bcast:169.254.127.255  Mask:255.255.128.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

eth2      Link encap:Ethernet  HWaddr 00:0C:29:89:94:4D
          inet addr:10.10.10.30  Bcast:10.10.10.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe89:944d/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:186819 errors:0 dropped:0 overruns:0 frame:0
          TX packets:161939 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:134733101 (128.4 MiB)  TX bytes:85910612 (81.9 MiB)

eth2:2    Link encap:Ethernet  HWaddr 00:0C:29:89:94:4D
          inet addr:169.254.250.70  Bcast:169.254.255.255  Mask:255.255.128.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1




ifconfig output on node 2 (after eth1 is up on both nodes):

eth1      Link encap:Ethernet  HWaddr 00:0C:29:75:B5:06
          inet addr:10.10.10.21  Bcast:10.10.10.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe75:b506/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:333637 errors:0 dropped:0 overruns:0 frame:0
          TX packets:386233 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:196228732 (187.1 MiB)  TX bytes:265869284 (253.5 MiB)

eth1:1    Link encap:Ethernet  HWaddr 00:0C:29:75:B5:06
          inet addr:169.254.48.13  Bcast:169.254.127.255  Mask:255.255.128.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

eth2      Link encap:Ethernet  HWaddr 00:0C:29:75:B5:10
          inet addr:10.10.10.31  Bcast:10.10.10.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe75:b510/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:140802 errors:0 dropped:0 overruns:0 frame:0
          TX packets:175889 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:76612446 (73.0 MiB)  TX bytes:121438787 (115.8 MiB)

eth2:1    Link encap:Ethernet  HWaddr 00:0C:29:75:B5:10
          inet addr:169.254.227.73  Bcast:169.254.255.255  Mask:255.255.128.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1



-- As long as at least one interface is healthy, there won't be any impact on the ASM/DB instances.

Now we brought down eth2 on node 1 and eth1 on node 2. Below is the oifcfg output.

node 1

[root@rhel11gr2rac1 bin]# ./oifcfg iflist -n -p
eth0  192.168.0.0  PRIVATE  255.255.255.0
eth1  10.10.10.0  PRIVATE  255.255.255.0
eth1  169.254.0.0  UNKNOWN  255.255.128.0
eth1  169.254.128.0  UNKNOWN  255.255.128.0


Node 2

[root@rhel11gr2rac2 bin]# ./oifcfg iflist -n -p
eth0  192.168.0.0  PRIVATE  255.255.255.0
eth2  10.10.10.0  PRIVATE  255.255.255.0
eth2  169.254.128.0  UNKNOWN  255.255.128.0
eth2  169.254.0.0  UNKNOWN  255.255.128.0


Below is the oifcfg output when both interfaces are up on both nodes.

Node 1


[root@rhel11gr2rac1 bin]# ./oifcfg iflist -n -p
eth0  192.168.0.0  PRIVATE  255.255.255.0
eth1  10.10.10.0  PRIVATE  255.255.255.0
eth1  169.254.0.0  UNKNOWN  255.255.128.0
eth2  10.10.10.0  PRIVATE  255.255.255.0
eth2  169.254.128.0  UNKNOWN  255.255.128.0


Node 2


[root@rhel11gr2rac2 bin]# ./oifcfg iflist -n -p
eth0  192.168.0.0  PRIVATE  255.255.255.0
eth1  10.10.10.0  PRIVATE  255.255.255.0
eth1  169.254.0.0  UNKNOWN  255.255.128.0
eth2  10.10.10.0  PRIVATE  255.255.255.0
eth2  169.254.128.0  UNKNOWN  255.255.128.0





References:
11gR2 Grid Infrastructure Redundant Interconnect and ora.cluster_interconnect.haip (Doc ID 1210883.1)
How to Modify Private Network Information in Oracle Clusterware (Doc ID 283684.1)
