Oracle Database 12c introduced Oracle Flex Cluster, which allows applications and the RDBMS to be managed in a common cluster using the Hub-Leaf deployment model. The database tier is composed of 'Hub' servers that have access to shared storage and peer-to-peer communication with the other 'Hub' servers. The application tier, composed of 'Leaf' servers, is loosely coupled: Leaf servers require neither shared storage nor peer-to-peer communication with all nodes in the cluster. A 'Leaf' server maintains a loose communication association with only a single Hub node in the cluster.

This deployment model offers far greater availability and ease of management between application components and the database. Placement, as well as ordered start/stop dependencies between the application and database tiers, can be defined and automated. For example, a database-dependent application can pull up the database, or database startup can trigger the startup of application components on the correct servers in the correct order.

In this article, I will demonstrate:

  • Configuration of a highly available application resource on a leaf node of an Oracle 12.1.0.2 Flex Cluster
  • A hard start and stop dependency of the application resource on the RAC database running in the cluster
  • Failover of the application resource to another leaf node

Since the application resource will connect to the RAC database running on the hub nodes, I have installed Oracle 12.1.0.2 client software on both leaf nodes.
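
A quick way to confirm that the client installation on each leaf node is usable is to run SQL*Plus from the client home. The client home path and connect string below are the same ones used later in the action script, and are assumptions about this particular environment:

[root@host04 ~]# /u01/app/oracle/product/12.1.0/client_1/bin/sqlplus -V
[root@host04 ~]# /u01/app/oracle/product/12.1.0/client_1/bin/sqlplus -L sys/oracle@cluster01-scan.cluster01.example.com:1521/orcl as sysdba

The first command simply reports the client version, while the second attempts a single logon through the SCAN (-L) and fails immediately, rather than re-prompting, if connectivity or credentials are wrong.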

Current Setup:

  • Name of the cluster: Cluster01
  • Oracle Clusterware active version: 12.1.0.2.0
  • Type of cluster: Flex Cluster
  • Hub nodes: host01, host02
  • Leaf nodes: host04, host05
  • Name of RAC database: orcl
  • Type of database: Administrator managed
  • Instances of database: orcl1@host01, orcl2@host02
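
The database details listed above can be cross-checked from any hub node with srvctl; a minimal check, assuming the standard srvctl syntax, is:

[root@host01 ~]# srvctl config database -d orcl
[root@host01 ~]# srvctl status database -d orcl

The config output shows, among other things, whether the database is administrator or policy managed and which instances are configured, while the status output shows where the instances are currently running.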

Demonstration

Examine the cluster to find out the name of the cluster, the Clusterware version, and the role of each node in the cluster.

[root@host02 trace]# olsnodes -c
cluster01

[root@host02 trace]# crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [12.1.0.2.0]

[root@host02 ~]# crsctl get node role status -all
Node 'host01' active role is 'hub'
Node 'host02' active role is 'hub'
Node 'host04' active role is 'leaf'
Node 'host05' active role is 'leaf'
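
The roles reported above are the active roles; the configured role of each node can be compared against them as well (this assumes the same 12.1 crsctl syntax used above):

[root@host02 ~]# crsctl get node role config -all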

Find out the currently defined server pools and server allocations. Currently, both hub nodes are assigned to the ora.orcl server pool, which hosts the RAC database orcl. Both leaf nodes are assigned to the Free pool.

[root@host02 ~]# crsctl status serverpool
NAME=Free
ACTIVE_SERVERS=host04 host05

NAME=Generic
ACTIVE_SERVERS=host01 host02

NAME=ora.orcl
ACTIVE_SERVERS=host01 host02

Find out various server categories defined in the cluster. The built-in category ora.leaf.category is associated with all the leaf nodes in the cluster.

[root@host02 ~]# crsctl status category
NAME=ora.hub.category
ACL=owner:root:rwx,pgrp:root:r-x,other::r--
ACTIVE_CSS_ROLE=hub
EXPRESSION=

NAME=ora.leaf.category
ACL=owner:root:rwx,pgrp:root:r-x,other::r--
ACTIVE_CSS_ROLE=leaf
EXPRESSION=

Create a new server pool called leaf_app_pool to host a highly available application resource on one of the Flex Cluster Leaf Nodes.

[root@host01 ~]# crsctl add serverpool leaf_app_pool -attr "MIN_SIZE=1, MAX_SIZE=1, SERVER_CATEGORY=ora.leaf.category"
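
The attributes just assigned to the new pool (minimum and maximum size, and the server category restriction) can be reviewed with the -p (static attributes) option of crsctl status serverpool; this is only a quick sanity check and is not required for the rest of the demonstration:

[root@host01 ~]# crsctl status serverpool leaf_app_pool -p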

Note that one of the leaf nodes (host05) has been assigned to the newly created server pool leaf_app_pool.

[root@host02 trace]# crsctl status serverpool
NAME=Free
ACTIVE_SERVERS=host04

NAME=Generic
ACTIVE_SERVERS=host01 host02

NAME=leaf_app_pool
ACTIVE_SERVERS=host05

NAME=ora.orcl
ACTIVE_SERVERS=host01 host02

On both leaf nodes (host04 and host05), create the following action script (/home/oracle/action.scr), which Oracle Clusterware will call to start, stop, check, and clean up the application resource called leaf_res. The application resource performs the following simple tasks:

  • When the application resource is started, it will
    • Connect to one of the instances of the RAC database orcl
    • Query the name of the hub node and the instance it gets connected to
    • Spool the output to the file /tmp/leaf_res.out on the node running the application resource
  • When the application resource is stopped or cleaned, the file /tmp/leaf_res.out is deleted
  • When the application resource is checked, a test is performed for the existence of the file /tmp/leaf_res.out

[root ~]# cat /home/oracle/action.scr
#!/bin/bash
# Action script for the leaf_res application resource.
# Oracle Clusterware invokes it with one argument: start | stop | check | clean.

# Connect through the SCAN using the 12.1.0.2 client installed on the leaf nodes
DB_CONNECTION="/u01/app/oracle/product/12.1.0/client_1/bin/sqlplus -s sys/oracle@cluster01-scan.cluster01.example.com:1521/orcl as sysdba"
# Report the leaf node, the hub node and the instance servicing the connection
SQLQUERY1="select SYS_CONTEXT('USERENV','HOST') LEAFNAME, SYS_CONTEXT('USERENV','SERVER_HOST') HUBNAME, instance_name from v\$instance;"

case "$1" in
  'start')
     # Run the query and spool the result to /tmp/leaf_res.out on this node
     $DB_CONNECTION <<eof > /dev/null
       spool /tmp/leaf_res.out
       col LEAFNAME for a20
       col HUBNAME for a20
       col INSTANCE_NAME for a20
       $SQLQUERY1
       spool off
       exit
eof
     exit 0
     ;;

  'stop')
     # Remove the spool file when the resource is stopped
     rm -f '/tmp/leaf_res.out'
     exit 0
     ;;

  'check')
     # The resource is considered online if the spool file exists
     if [ -e /tmp/leaf_res.out ]; then
        exit 0
     else
        exit 1
     fi
     ;;

  'clean')
     # Forcefully remove the spool file during clean-up
     rm -f '/tmp/leaf_res.out'
     exit 0
     ;;
esac

[root@host04 ~]# chmod 755 /home/oracle/action.scr
[root@host05 ~]# chmod 755 /home/oracle/action.scr
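
Before registering the resource with Oracle Clusterware, the action script can be exercised manually on a leaf node to confirm that each entry point behaves as expected; this quick test assumes the database is up and reachable through the SCAN:

[root@host04 ~]# /home/oracle/action.scr start
[root@host04 ~]# /home/oracle/action.scr check ; echo $?
[root@host04 ~]# cat /tmp/leaf_res.out
[root@host04 ~]# /home/oracle/action.scr stop

A check exit status of 0 and a populated /tmp/leaf_res.out indicate that the start and check entry points work; the final stop removes the file again.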

Create an application resource leaf_res which

  • Has the action script /home/oracle/action.scr
  • Will execute on the server allocated to the leaf_app_pool server pool
  • Has a hard start dependency on the RAC database (ora.orcl.db), so that starting the resource leaf_res automatically starts the RAC database (if it is not already up)
  • Has a hard stop dependency on the RAC database (ora.orcl.db), so that stopping the RAC database automatically stops the resource leaf_res (if it is not already down)

[root@host01 ~]# crsctl add resource leaf_res -type cluster_resource -attr  "ACTION_SCRIPT=/home/oracle/action.scr,PLACEMENT=restricted,SERVER_POOLS=leaf_app_pool,START_DEPENDENCIES='hard(global:uniform:ora.orcl.db) pullup:always (global:ora.orcl.db)',STOP_DEPENDENCIES='hard(global:ora.orcl.db)'" 
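
The full attribute set of the newly registered resource, including the placement, server pool, and dependency strings, can be reviewed with the -p (static attributes) option of crsctl status resource:

[root@host01 ~]# crsctl status resource leaf_res -p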

Check the status of the newly created application resource leaf_res. Note that the resource exists but has not been started yet.

[root@host01 ~]# crsctl stat res leaf_res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
leaf_res
      1        OFFLINE OFFLINE                               STABLE
--------------------------------------------------------------------------------

Stop the RAC database orcl to verify the dependency between the application resource leaf_res and the RAC database.

[root@host01 ~]# srvctl stop database -d orcl

[root@host01 ~]# srvctl status database -d orcl
Instance orcl1 is not running on node host01
Instance orcl2 is not running on node host02

Start the leaf_res application resource. Notice that

  • The orcl database is automatically started, as defined in the dependency definitions for the leaf_res resource
  • The application resource leaf_res is started on leaf node host05, which has been assigned to leaf_app_pool

[root@host01 ~]# crsctl start res leaf_res

CRS-2672: Attempting to start 'ora.orcl.db' on 'host01'
CRS-2681: Clean of 'ora.orcl.db' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.orcl.db' on 'host02'
CRS-2676: Start of 'ora.orcl.db' on 'host01' succeeded
CRS-2676: Start of 'ora.orcl.db' on 'host02' succeeded
CRS-2672: Attempting to start 'leaf_res' on 'host05'
CRS-2676: Start of 'leaf_res' on 'host05' succeeded

[root@host02 trace]# srvctl status database -d orcl
Instance orcl1 is running on node host01
Instance orcl2 is running on node host02

[root@host02 trace]# crsctl stat res leaf_res
NAME=leaf_res
TYPE=cluster_resource
TARGET=ONLINE
STATE=ONLINE on host05

Verify that the file /tmp/leaf_res.out has been created on the node (host05) running the application resource leaf_res. The contents of the file indicate that the instance orcl1 running on hub node host01 was accessed to execute the query.

[root@host05 ~]# ls -l /tmp/leaf_res.out 
-rw-r--r-- 1 root root 245 May 13 14:51 /tmp/leaf_res.out

[root@host05 ~]# cat /tmp/leaf_res.out 

LEAFNAME             HUBNAME              INSTANCE_NAME                         
-------------------- -------------------- --------------------
host05.example.com   host01               orcl1                    

To verify the high availability of the application resource leaf_res, shut down Oracle Clusterware on the leaf node (host05) hosting it.

[root@host05 ~]# /u01/app/12.1.0/grid/bin/crsctl stop crs

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'host05'
CRS-2673: Attempting to stop 'ora.crsd' on 'host05'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'host05'
CRS-2673: Attempting to stop 'leaf_res' on 'host05'
CRS-2677: Stop of 'leaf_res' on 'host05' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'host05' has completed
CRS-2677: Stop of 'ora.crsd' on 'host05' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'host05'
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'host05'
CRS-2673: Attempting to stop 'ora.ctssd' on 'host05'
CRS-2673: Attempting to stop 'ora.evmd' on 'host05'
CRS-2673: Attempting to stop 'ora.storage' on 'host05'
CRS-2673: Attempting to stop 'ora.crf' on 'host05'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'host05'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'host05' succeeded
CRS-2677: Stop of 'ora.storage' on 'host05' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'host05' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'host05' succeeded
CRS-2677: Stop of 'ora.crf' on 'host05' succeeded
CRS-2677: Stop of 'ora.evmd' on 'host05' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'host05'
CRS-2677: Stop of 'ora.mdnsd' on 'host05' succeeded
CRS-2677: Stop of 'ora.cssd' on 'host05' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'host05'
CRS-2677: Stop of 'ora.gipcd' on 'host05' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'host05' has completed
CRS-4133: Oracle High Availability Services has been stopped.

Verify that the application resource leaf_res is no longer executing on host05 by checking the non-existence of the file /tmp/leaf_res.out.

[root@host05 ~]# cat /tmp/leaf_res.out 

cat: /tmp/leaf_res.out: No such file or directory

Examine the status of the server pools. Note that the previously unallocated leaf node (host04) has been moved to the leaf_app_pool server pool.

[root@host02 trace]# crsctl status serverpool
NAME=Free
ACTIVE_SERVERS=

NAME=Generic
ACTIVE_SERVERS=host01 host02

NAME=leaf_app_pool
ACTIVE_SERVERS=host04

NAME=ora.orcl
ACTIVE_SERVERS=host01 host02

Notice that the application resource leaf_res has automatically failed over to the surviving leaf node (host04).

[root@host02 trace]# crsctl stat res leaf_res
NAME=leaf_res
TYPE=cluster_resource
TARGET=ONLINE
STATE=ONLINE on host04

[root@host04 ~]# cat /tmp/leaf_res.out 

LEAFNAME             HUBNAME              INSTANCE_NAME                         
-------------------- -------------------- --------------------
host04.example.com   host01               orcl1          
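
Stopping Clusterware on host05 exercised the unplanned failover case. Once Clusterware is running on host05 again, a planned move of the resource between the two leaf nodes could also be tested with a relocate, for example:

[root@host02 ~]# crsctl relocate resource leaf_res -n host05

The -n option names the target node, which must be a member of the leaf_app_pool server pool; if Clusterware reports that dependent resources would have to be acted on, the -f option can be added.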

To verify the hard stop dependency of the application resource leaf_res on the Oracle RAC database orcl, shut down the RAC database. Note that srvctl initially refuses to stop the database because doing so requires stopping leaf_res; once the force option (-f) is specified, stopping the RAC database automatically stops the application resource leaf_res as well.

[root@host02 trace]# srvctl stop database -d orcl
PRCR-1133 : Failed to stop database orcl and its running services
PRCR-1132 : Failed to stop resources using a filter
CRS-2529: Unable to act on 'ora.orcl.db' because that would require stopping or relocating 'leaf_res', but the force option was not specified
CRS-2529: Unable to act on 'ora.orcl.db' because that would require stopping or relocating 'leaf_res', but the force option was not specified

[root@host02 trace]# srvctl stop database -d orcl -f

[root@host02 trace]# srvctl status database -d orcl
Instance orcl1 is not running on node host01
Instance orcl2 is not running on node host02

[root@host02 trace]# crsctl stat res leaf_res
NAME=leaf_res
TYPE=cluster_resource
TARGET=OFFLINE
STATE=OFFLINE

[root@host04 ~]# cat /tmp/leaf_res.out 
cat: /tmp/leaf_res.out: No such file or directory
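
To return the environment to its original state after the demonstration, Clusterware can be restarted on host05 and the database and application resource brought back up; a minimal sequence, using the same Grid home path as before, is:

[root@host05 ~]# /u01/app/12.1.0/grid/bin/crsctl start crs
[root@host01 ~]# srvctl start database -d orcl
[root@host01 ~]# crsctl start res leaf_res

Because of the pullup dependency defined earlier, starting leaf_res on its own would also have pulled the database up.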

Summary

  • Oracle Flex Cluster allows you to manage application components and the database collectively.
  • In an Oracle Flex Cluster,
    • Hub nodes, comprising the database tier, are tightly connected and have direct access to shared storage.
    • Leaf nodes, comprising the application tier, do not require direct access to shared storage, but instead request data through Hub nodes.
    • The entire process of application tier startup, with the correct startup order and placement, together with database/application tier pull-up, can be fully automated by defining start and stop dependencies of the applications on database services or on the databases directly.

About the Author

Anju Garg

Anju Garg is an Oracle ACE with over 14 years of experience in the IT industry in various roles. Since 2010, she has been involved in teaching and has trained more than a hundred DBAs from across the world in core DBA technologies such as RAC, Data Guard, performance tuning, SQL statement tuning, and database administration. She is a regular speaker at Sangam and OTNYathra. She writes articles about Oracle and was one of the reviewers of the book Oracle Problem-Solving and Troubleshooting Handbook, published by Pearson. She is passionate about learning and has a keen interest in RAC and performance tuning. You can learn all about Anju's credentials and read more from her via her technical blog site at http://oracleinaction.com/
