oracle_rac_install_on_hp_device

By Julia Hunter, 2014-12-15 23:41

    1. Hardware Requirements

    HP ProLiant ML580 G5

    Rack-mount; two 2.13 GHz Intel Xeon CPUs; 4 GB RAM; one IDE CD-ROM drive; two 72 GB 10,000 rpm SCSI hard disks; one mouse/keyboard; four 10/100/1000M Ethernet ports; two redundant power supplies and fans.

    HP MSA1000 Model

    Two redundant MSA1000 controllers; two redundant MSA 2/3 hubs; two or four 72 GB 10,000 rpm SCSI hard disks; two redundant power supplies and fans; four fiber channel cables; four FC HBAs.

    2. Hardware Connection

    2.1 Connect the networks

    1) Connect the two public interfaces on one node to one public Ethernet switch;

    Connect the two public interfaces on the other node to the other public Ethernet switch;

    Connect the two public Ethernet switches to each other;

    2) Connect the two private interfaces on one node to the two internal Ethernet switches;

    Connect the two private interfaces on the other node to the two internal Ethernet switches;

    Connect the two internal Ethernet switches to each other;

    2.2 Connect the shared disks

    1) Connect one fiber channel HBA on each node to one shared disk controller;

    2) Connect the other fiber channel HBA on each node to the other shared disk controller;

    3. Oracle Cluster Installation

    3.1 Power on the systems

    1) Power on the shared disk array and wait for initialization to finish;

    2) Power on the two nodes;

    3.2 Configure the networks

    1) Set the names of the two nodes: db1 and db2;

    2) Set the names of the four network interfaces on each node: int1 (public), int2 (public), int3 (internal), int4 (internal);

    3) Combine the two public interfaces (int1, int2) into one virtual public interface (team1) using the HP Network Configuration Utility on each node;

    4) Set the name of the virtual public network interface on each node to team1;

    5) Combine the two private interfaces (int3, int4) into one virtual private interface (team2) using the HP Network Configuration Utility on each node;

    6) Set the name of the virtual private network interface on each node to team2;

    7) In the advanced settings of Network Connections in the Control Panel, make sure team1 is listed first in the bind order on each node;

    8) Likewise, make sure team2 is listed second in the bind order on each node;

    9) Set the IP address of team1 on the first node to 192.100.95.9;

    10) Set the IP address of team2 on the first node to 192.100.90.1;

    11) Set the IP address of team1 on the second node to 192.100.95.10;

    12) Set the IP address of team2 on the second node to 192.100.90.2;

    13) Add the following lines to c:\windows\system32\drivers\etc\hosts on each node:

    192.100.90.1 db1-priv

    192.100.95.9 db1

    192.100.95.7 db1-vip

    192.100.90.2 db2-priv

    192.100.95.10 db2

    192.100.95.8 db2-vip

    3.3 Configure the shared disks

    1) Power off the second node (db2); keep the first node (db1) and the shared disks powered on;

    2) Configure multipathing to combine the two fiber channel paths on the node using HP_MPIO_Basic_DSM_v1.30;

    3) Restart;

    4) Configure the logical drive (RAID 1+0) using the HP Array Configuration Utility. If there are four shared hard disks, create two logical drives (RAID 1+0). Disable the array accelerator setting for each logical drive;

    5) Enable automounting of the shared logical drives using the diskpart command (a sample diskpart session follows this list);

    6) Disable write caching for the shared logical drives in Device Manager (Computer Management under Administrative Tools in the Control Panel);

    7) Power off the first node (db1) and power on the second node (db2);

    8) Configure multipathing to combine the two fiber channel paths on the node using HP_MPIO_Basic_DSM_v1.30;

    9) Restart;

    10) Enable automounting of the shared logical drives using the diskpart command;

    11) Disable write caching for the shared logical drives in Device Manager (Computer Management under Administrative Tools in the Control Panel);

    12) Restart;

    13) Create an extended partition on each shared logical drive;

    14) Create a logical drive on each extended partition;

    Do not assign a drive letter;

    Do not format the logical drive;

    15) Power on the first node (db1);

    16) Remove the drive letters assigned to the logical drives on the shared extended partitions created on the second node;
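
    For steps 5 and 10, a minimal diskpart session looks like the following (a sketch; automount enable tells Windows to automatically mount new basic volumes so the shared logical drives become visible):

    C:\> diskpart

    DISKPART> automount enable

    DISKPART> exit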

    3.4 Cluster Pre-install Check

    1) Unzip the clusterware install file;

    2) Change to UnzipDir\clusterware\cluvfy;

    3) Run runcluvfy stage -pre crsinst -n db1,db2;
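
    For reference, a full pre-installation check run from the unzip directory can look like this (the -verbose flag is optional and not part of the original steps; it prints the result of each individual check):

    cd UnzipDir\clusterware\cluvfy

    runcluvfy stage -pre crsinst -n db1,db2 -verbose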

    3.5 Install Cluster

    1) Run setup under UnzipDir\clusterware;

    2) Add the second node;

    3) Set team1 as public and team2 as private;

    4) Set partition 1 as follows:

    - Put the master OCR here;

    - Put the voting disk here;

    - Format using CFS;

    - Put data here;

    - Use disk label S;

    5) Set partition 2 as follows:

    - Format using CFS;

    - Put data here;

    - Use disk label R;

    6) Install

    7) Run vipca under CRSInstallDir\bin;

    8) Add 192.100.96.7 to team1 of db1;

    Change the IP address of team1 of db2 from 192.100.95.10 to 192.100.96.10;

    Add 192.100.96.8 to team1 of db2;

    Set the following lines in c:\windows\system32\drivers\etc\hosts on each node:

    192.100.96.10 db2

    192.100.96.8 db2-vip

    On db1, run the following commands under CRSInstallDir\bin:

    srvctl stop nodeapps -n db1

    srvctl stop nodeapps -n db2

    oifcfg delif -global team1

    oifcfg setif -node db1 team1/192.100.95.9:public

    oifcfg setif -node db2 team1/192.100.96.10:public

    srvctl modify nodeapps -n db2 -A 192.100.96.8/255.255.255.0/team1

    Restart both nodes and wait for the Oracle services to start;
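
    Before restarting, the new interface and VIP configuration can be double-checked from CRSInstallDir\bin (a quick sketch, assuming the commands are run from that directory):

    oifcfg getif

    srvctl status nodeapps -n db1

    srvctl status nodeapps -n db2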

    4. Oracle Database Installation

    1) Unzip the database install file;

    2) Run setup under UnzipDir\database;

    3) Set the install directory to a local disk;

    4) Select all nodes;

    5) Install the software only;

    5. Install the Patch

    1) Run %ORACLE_HOME%\bin\srvctl stop nodeapps -n db1;

    Run %ORACLE_HOME%\bin\srvctl stop nodeapps -n db2;

    Stop all the Oracle services (an assumed example of stopping them from the command line follows this list);

    2) Unzip the patch install file;

    3) Run setup under UnzipDir\Disk1;

    4) Select CrsHome;

    5) Install the cluster patch;

    6) Run setup under UnzipDir\Disk1;

    7) Select DBHome;

    8) Install the database patch;
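
    For "stop all the Oracle services" in step 1, the clusterware services can also be stopped from a command prompt. The service names below are assumptions based on a typical 10g Windows install; confirm them in the Services panel first:

    rem assumed service names; check the Services panel on each node before running

    net stop OracleCRService

    net stop OracleEVMService

    net stop OracleCSService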

    6. Create the Database

    1) Run dbca;

    2) Select cluster database;

    3) Select create the database;

    4) Select all the nodes;

    5) Set the global database name to ats;

    6) Select CFS as the storage mechanism;

    7) Set the database files to use the shared location s:\oradata;

    8) Set the connection mode to shared server mode and the shared server count to 5 (the equivalent init parameter is sketched after this list);
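
    For reference, the shared server count chosen in dbca corresponds to the SHARED_SERVERS initialization parameter; a minimal sketch of checking or adjusting it later from sqlplus:

    SHOW PARAMETER shared_servers

    -- verify against the spfile dbca actually generates before changing anything

    ALTER SYSTEM SET shared_servers=5 SCOPE=BOTH SID='*';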

    7. Modify Configuration

    1) Modify tnsnames.ora on db1: remove the db2-vip address from the LISTENERS_ATS address list (a sketch of the resulting entry follows this list);

    2) Modify tnsnames.ora on db2: remove the db1-vip address from the LISTENERS_ATS address list;

    3) Restart all nodes;
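
    After the edit in step 1, the LISTENERS_ATS entry in tnsnames.ora on db1 would look roughly like this (a sketch; the port shown is the default 1521 and is an assumption, so keep whatever dbca generated):

    LISTENERS_ATS =

      (ADDRESS_LIST =

        # port 1521 assumed; keep the port dbca generated

        (ADDRESS = (PROTOCOL = TCP)(HOST = db1-vip)(PORT = 1521))

      )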

    8. Create User

    1) Open the website http://db1:1158/em;

    2) Create the user:

    Username: tcc_PROJECT_NAME, for example: tcc_bjal

    Password: tccuser

    Default tablespace: users

    Temporary tablespace: temp

    Role: connect, resource

    System privilege: create synonym
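
    Equivalently, the user can be created from sqlplus instead of the Enterprise Manager web page; a minimal sketch using the example name tcc_bjal:

    CREATE USER tcc_bjal IDENTIFIED BY tccuser DEFAULT TABLESPACE users TEMPORARY TABLESPACE temp;

    GRANT connect, resource TO tcc_bjal;

    GRANT CREATE SYNONYM TO tcc_bjal;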

    9. Run the Script

    1) Run sqlplus:

    sqlplus username/password

    2) Run the script under the database module directory:

    @DB_MODULE_DIRECTORY\create.sql
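
    Putting sections 8 and 9 together, a complete session with the example user might look like this (a sketch; no service name is given here, and DB_MODULE_DIRECTORY is the placeholder from step 2):

    sqlplus tcc_bjal/tccuser

    SQL> @DB_MODULE_DIRECTORY\create.sql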
