
Monday, June 17, 2013

EPM 11.1.2.1: How to Set Up Essbase Clustering on Unix





Clustering
Essbase clustering is categorized into two types:

Horizontal clustering: This is the usual clustering, where an exact replica of the Essbase server is spread across multiple machines, with every machine in an active state (active-active).

Vertical clustering: Similar to horizontal clustering, except that instead of spreading instances across multiple machines, multiple instances of Essbase are created on the same machine. For comparison, BI EE does not yet support this clustering architecture.


Active-Active Clustering (Horizontal)
  • An active-active Essbase cluster supports read-only operations on the databases and should be used only for reporting.
  • Active-active clusters support high availability and load balancing, and are ideal/recommended for reporting.
  • Active-active clusters do not support data write-back or outline modification, and they do not manage database replication tasks such as synchronizing the changes in one database across all databases in the cluster. They do not support Planning: when Planning is configured to use Essbase in cluster mode as a data source, it cannot launch business rules with Calculation Manager as the rules engine.

Active-Passive Clustering (Vertical)
  • An active-passive Essbase cluster supports failover with write-back to databases.
  • Active-passive clusters do not support load balancing.
  • Essbase failover clusters use the service failover functionality of Oracle Process Manager and Notification Server (OPMN). A single Essbase installation is run in an active-passive deployment; only one host at a time runs the Essbase agent and server processes, and OPMN stops, starts, and monitors the agent process.

Active-active clustering is implemented using Provider Services; active-passive (failover) clustering is implemented using the EPM System Configurator.

An active-passive Essbase cluster can contain only two Essbase servers. To install an additional Essbase server, install an additional instance of Essbase on another machine.
Active-active and active-passive clustering cannot both be used at the same time.

Hyperion products supporting active-passive failover clustering:

  • ERP Integrator
  • Planning
  • Essbase Studio
  • Financial Reporting Studio
  • Web Analysis

Products that do not support active-passive clustering:

  • Integration Services
  • Interactive Reporting
  • FDM
  • Oracle Essbase Analytics Link for Hyperion Financial Management




Failover

Failover is the ability to switch automatically to a redundant standby database, server, or network if the
primary database, server, or network fails or is shut down. A system that is clustered for failover provides
high availability and fault tolerance through server redundancy and fault-tolerant hardware,
such as shared disks.

In an active-passive system, the passive member is activated during the failover operation and 
consumers are directed to it instead of the failed member. You can automate failover by setting up cluster 
services to detect failures and move cluster resources from the failed node to the standby node.

In a load-balanced active-active system, the load balancer serving requests to the active members
 performs the failover. If an active member fails, the load balancer automatically redirects requests for the
 failed member to the surviving active members.

Some active-active scenarios in failover clusters involve different applications running in
 active-passive configuration to enable better use of hardware resources. For example, one node is the
 active server for application A, and another node is the active server for application B, and both 
applications are configured in active-passive mode on both servers. Usually, both nodes are used at the 
same time by different applications, but if one node fails, the applications on the failed node are relocated 
to the remaining node.


High availability

High availability is a system attribute that enables an application to continue to provide services in the presence of failures. It is achieved through removal of single points of failure, with fault-tolerant hardware and server clusters; if one server fails, processing requests are routed to another server.

Single point of failure 


A single point of failure is any component in a system that, if it fails, prevents users from accessing normal functionality.

Load Balancing

Load balancing is the distribution of requests among application servers to guarantee consistent performance under load. A load balancer, which is the only point of entry into the system, directs the requests to individual application servers. Both hardware and software load balancers are available.


High Availability and Load Balancing for EPM System Components


http://docs.oracle.com/cd/E12825_01/epm.111/epm_high_avail/frameset.htm?ch01s04.html

Essbase Server Cluster Configurations

Active-Passive Essbase Clusters

An active-passive Essbase cluster can contain two Essbase servers. To install additional Essbase servers, you must install an additional instance of Essbase. The application must be on a shared drive, and the cluster name must be unique within the deployment environment.
These types of shared drive are supported:
  • SAN storage device with a shared disk file system supported on the installation platform, such as OCFS
  • NAS device over a supported network protocol
Note:
Any networked file system that can communicate with an NAS storage device is supported, but the cluster nodes must be able to access the same shared disk over that file system.
SAN or a fast NAS device is recommended because of shorter I/O latency and failover times.
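
Before configuring the cluster, it is worth confirming that both nodes really do see the same shared location. A minimal sketch, assuming a hypothetical shared mount at /shared/essbase (substitute your actual path):

  # On node 1: drop a marker file on the shared disk
  touch /shared/essbase/.cluster_visibility_test

  # On node 2: the marker should be visible; then clean it up
  ls -l /shared/essbase/.cluster_visibility_test
  rm /shared/essbase/.cluster_visibility_test

If node 2 cannot see the marker, the nodes are not on the same shared file system and failover will not work.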
You set up active-passive Essbase clusters with EPM System Configurator, specifying the Essbase cluster information for each Essbase instance. You define the cluster when you configure the first instance of Essbase; when you configure the second instance, you associate that instance with the cluster.
Note:
For a given physical Essbase server that Administration Services is administering, Administration Services displays only the name of the cluster to which that Essbase server belongs.
For instructions, see “Clustering Essbase Server” in Chapter 4, “Configuring EPM System Products,” of the Oracle Hyperion Enterprise Performance Management System Installation and Configuration Guide.

Essbase Server Clustering


At long last, after many years of customer requests and many unsupported, creative workarounds, Oracle now has an officially supported Essbase clustering method. This is a software-based, active-passive cluster using Oracle Process Manager and Notification Server (OPMN). Because the Essbase agent needs exclusive locking rights on the files associated with applications and databases, only one agent can be active at any given time. What OPMN provides is automatic failover to the other Essbase agent when the active agent fails, giving high availability with write-back. The only capability missing is load balancing.

This functionality was first introduced with EPM System 11.1.2, though that first release had many issues. Oracle recommends implementing Essbase clustering in EPM System 11.1.2.1. In addition, you need to apply OPMN patch 11744008, which resolves some known issues with OPMN. What Essbase clustering still doesn't give you is live backups, but Oracle is supposed to be working on finally making that a feature in future releases.

An active-passive Essbase cluster can contain two Essbase servers. To install additional Essbase servers, you must install an additional instance of Essbase, either on the same server (not really recommended, since the physical hardware remains a single point of failure) or, preferably, on another physical server. The applications must be on a shared drive, and the cluster name must be unique within the deployment environment.


Essbase cluster initial setup occurs on the first instance of Essbase, where you define the Essbase cluster name, the local Essbase instance name, and the instance location, using the EPM System Configurator. This version of Essbase still uses the old variable name ARBORPATH, but the variable is now used to define the location of the application files, not the location of the Essbase system files as in previous versions of Essbase.
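
As a quick sanity check after configuration, you can confirm where ARBORPATH resolves on each node. A minimal sketch, assuming a hypothetical shared location of /shared/essbase/app for illustration:

  # ARBORPATH is set by the Essbase environment scripts; it should point at the shared drive
  echo $ARBORPATH            # expected: /shared/essbase/app (hypothetical)
  ls $ARBORPATH/app          # application directories should be visible from both nodes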

All of this information is stored in the EPM System Registry, which lives in the Shared Services database. When you set up each instance, not only for Essbase but for the entire system, you connect to the Shared Services database so that the same EPM System Registry is in use across the whole system. OPMN also reads the Essbase cluster information from the EPM System Registry and keeps track of the active node there.
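
If you want to see what the Configurator has written, the registry can be dumped with the epmsys_registry utility that ships with EPM System. A minimal sketch (the exact report name and output location may vary by release):

  # Run from the EPM instance bin directory on either node
  cd $EPM_ORACLE_INSTANCE/bin
  ./epmsys_registry.sh report deployment
  # The generated report lists the configured components, including the Essbase cluster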

When you set up the second instance of Essbase and connect to the same EPM System Registry, you are presented with an option to join the previously configured cluster that was set up on the first instance. All information about that cluster is populated automatically and grayed out. Once you complete the setup with the EPM System Configurator, there are still quite a few manual steps that must be taken to update the OPMN configuration files on each Essbase instance. Consult the EPM System High Availability Guide and the EPM System Installation and Configuration Guide for more detail on the manual changes required to complete the setup.

Setup of the Primary Essbase Server:


1. Enter the Middleware Home location.
2. Select “New Installation” and “Choose components individually”.
3. Select the Essbase products to be installed. (Note: Oracle HTTP Server and WebLogic Application Server are selected automatically; these components need to be installed but not configured.)
4. Confirm the products to be installed.
5. Wait for the summary page confirming that the products were installed successfully.
6. Launch the EPM System Configurator.
7. Select “Connect to a previously configured Shared Services database” and enter the database information for Shared Services. If you are using a VIP for the database, click Advanced Options and enter it in the JDBC URL box.
8. Select the products to configure.
9. Enter the SMTP information, if used.
10. Enter a name for this Essbase instance. For the ARBORPATH, enter the location on the shared file system, then click the Set up Cluster button.
11. Enter a cluster name.
12. Uncheck “I wish to receive security updates via My Oracle Support” if you do not want to receive updates, and click Yes to confirm.
13. Review the summary of the configuration.

OPMN UPDATES:

Once Essbase is set up, some post-installation items must be applied for OPMN to function. OPMN is the mechanism Essbase uses for failover. Before making changes to OPMN or starting Essbase, apply Patch 11744008 to OPMN. This patch resolves issues relating to Essbase failover using OPMN.
NOTE: OPMN (OHS) 11.1.1.4 is the release used with EPM 11.1.2.1 (OPMN and EPM are separate products, hence the different release numbers). If you do not apply this patch, Essbase failover will not work.
To apply the patch, download and extract the zip file, and stop all OHS processes before applying the patch.
At the prompt, cd to the /esbuser/Oracle/Middleware/EPMSystem11R1/OPatch directory, then run:

  opatch apply <path to the p11744008 location> -oh /esbuser/Oracle/Middleware/ohs

For example:

  opatch apply /esbuser/Oracle/Middleware/EPMSystem11R1/OPatch/p11744008 -oh /esbuser/Oracle/Middleware/ohs

At the “Is the local system ready for patching? [y | n]” prompt, type y.
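
To confirm the patch registered against the OHS Oracle home, you can list the patch inventory afterwards, using the same paths as above:

  # List the patches recorded for the OHS Oracle home
  opatch lsinventory -oh /esbuser/Oracle/Middleware/ohs
  # Patch 11744008 should appear among the interim patches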
Once the patch has been applied, open the opmn.xml file under $EPM_ORACLE_HOME/user_projects/<unique epm instance name>/config/OPMN/opmn.
On each Essbase server, add and change the following entries in opmn.xml.
Set <ssl enabled=""> to true only if you are using SSL; in that case, the wallet files will need to be updated on both servers with the same information.
<ipaddr remote="essbase_server1.com"/>
<port local="6711" remote="6712"/> <- These ports can be changed if needed.
<ssl enabled="false" wallet-file="/esbuser/Oracle/Middleware/user_projects/<unique epm system name>/config/OPMN/opmn/wallet"/>
The topology section below lists both nodes that make up the cluster. Add these lines after the <ipaddr remote> entry and before </notification-server>:
<topology>
<nodes list="essbase_server1.com:6712,essbase_server2.com:6712"/>
</topology>
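
Putting the pieces together, the <notification-server> section of opmn.xml on the primary node should end up looking roughly like the sketch below (host names and ports are the examples used above, and the enclosing element's attributes are as typically shipped in a default opmn.xml; check your own file):

<notification-server interface="any">
  <ipaddr remote="essbase_server1.com"/>
  <port local="6711" remote="6712"/>
  <ssl enabled="false" wallet-file="/esbuser/Oracle/Middleware/user_projects/<unique epm system name>/config/OPMN/opmn/wallet"/>
  <topology>
    <nodes list="essbase_server1.com:6712,essbase_server2.com:6712"/>
  </topology>
</notification-server>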
If restart-on-death is set to true, OPMN will attempt to restart Essbase on the same node instead of failing over. If you want Essbase to fail over to the second server, set it to false (as shown below). Also, if your line contains a numprocs entry, remove it. As this is the primary server, set the service-weight to 101.
<ias-component id="EssbaseCluster"> <- Points to the cluster name you created
<process-type id="EssbaseAgent" module-id="ESS" service-failover="1" service-weight="101">
<process-set id="AGENT" restart-on-death="false">
Additionally, if you are not using the default ports shown above, check the ports.prop file to make sure it reflects the non-default ports you have configured.
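
If you are not sure where ports.prop lives in your installation, a quick search will find it; a minimal sketch (the location varies by layout):

  find $EPM_ORACLE_HOME/user_projects -name "ports.prop" 2>/dev/null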

In addition to the OPMN configuration changes, a change also needs to be made to the $EPM_ORACLE_HOME/user_projects/<unique epm system instance name>/config/starter/Essbase.properties file:
opmnEntity=EssbaseCluster
This must be set to the cluster name you created.
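
A quick way to verify the property was saved correctly (substitute your instance name for the placeholder):

  grep opmnEntity $EPM_ORACLE_HOME/user_projects/<unique epm system instance name>/config/starter/Essbase.properties
  # expected output: opmnEntity=EssbaseCluster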
After all changes have been made, go to $EPM_ORACLE_HOME/user_projects/<unique epm system instance name>/bin and run start.sh to start both OPMN and Essbase. To verify that Essbase and OPMN are running, go to $EPM_ORACLE_INSTANCE/bin and type:
opmnctl status
You should see EssbaseCluster reported as Alive.
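
For reference, the status output looks roughly like the following (the instance name and pid are illustrative):

  Processes in Instance: EPM_epmsystem1
  ---------------------------------+--------------------+---------+---------
  ias-component                    | process-type       |     pid | status
  ---------------------------------+--------------------+---------+---------
  EssbaseCluster                   | EssbaseAgent       |   12345 | Alive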
Setup of the Secondary Essbase Server:
Set up the second Essbase server; this is the failover node.

1. Launch the EPM System Installer.
2. Enter the Middleware Home location.
3. Choose “New Installation”.
4. Select the products to install.
5. Confirm the products to be installed.
6. Wait for the summary page that shows completion of the installation.
7. Launch the EPM System Configurator.
8. Select “Connect to a previously configured Shared Services database”. Enter the Oracle database information for Shared Services; if using a database VIP, click Advanced Options and enter it in the JDBC URL.
9. Select the products/tasks to configure.
10. Enter the SMTP information, if used.
11. Enter the Essbase instance name (it should be unique) and click Set up Cluster. Note that the ARBORPATH cannot be changed; it is picked up from the Shared Services registry.
12. Choose the cluster name created by the primary Essbase server, “EssbaseCluster”, from the drop-down list.
13. Uncheck “I wish to receive security updates via My Oracle Support” if you do not want to receive updates, and click Yes to confirm.
14. Confirm the products/tasks to be configured.
15. Review the summary of configuration status.


OPMN UPDATES:

Once Essbase is set up on the secondary server, the same post-installation items must be applied for OPMN to function. Before making changes to OPMN or starting Essbase, apply Patch 11744008 to OPMN on this node as well; it resolves issues relating to failover using OPMN.
To apply the patch, download and extract the zip file, and stop all OHS processes before applying the patch.
At the prompt, cd to the /esbuser/Oracle/Middleware/EPMSystem11R1/OPatch directory, then run:

  opatch apply <path to the p11744008 location> -oh /esbuser/Oracle/Middleware/ohs

For example:

  opatch apply /esbuser/Oracle/Middleware/EPMSystem11R1/OPatch/p11744008 -oh /esbuser/Oracle/Middleware/ohs

At the “Is the local system ready for patching? [y | n]” prompt, type y.
Once the patch has been applied, open the opmn.xml file under $EPM_ORACLE_HOME/user_projects/<unique epm instance name>/config/OPMN/opmn.
On this Essbase server, add and change the following entries in opmn.xml.
Set <ssl enabled=""> to true only if you are using SSL; in that case, the wallet files will need to be updated on both servers with the same information.
<ipaddr remote="essbase_server2.com"/>
<port local="6721" remote="6722"/> <- Make sure these are the correct ports you want to use.
<ssl enabled="false" wallet-file/esbuser/Oracle/Middleware/user_projects/<unique epm system name>/config/OPMN/opmn/wallet "/>
The topology section below lists both nodes that make up the cluster. Add these lines after the <ipaddr remote> entry and before </notification-server>:
<topology>
<nodes list="essbase_server1.com:6712,essbase_server2.com:6712"/>
</topology>
If restart-on-death is set to true, OPMN will attempt to restart Essbase on the same node instead of failing over. If you want Essbase to fail over to the second server, set it to false (as shown below). Also, if your line contains a numprocs entry, remove it. As this is the secondary server, set the service-weight to 100.
<ias-component id="EssbaseClusterPLN"> <- Points to the cluster name you created
<process-type id="EssbaseAgent" module-id="ESS" service-failover="1" service-weight="100">
<process-set id="AGENT" restart-on-death="false">
Additionally, if you are not using the default ports shown above, check the ports.prop file to make sure it reflects the non-default ports you have configured.
In addition to the OPMN configuration changes, a change also needs to be made to the $EPM_ORACLE_HOME/user_projects/<unique epm system instance name>/config/starter/Essbase.properties file:
opmnEntity=EssbaseCluster
This must be set to the cluster name you created.
After all changes have been made, go to $EPM_ORACLE_HOME/user_projects/<unique epm system instance name>/bin and run start.sh to start both OPMN and Essbase. To verify that Essbase and OPMN are running, go to $EPM_ORACLE_INSTANCE/bin and type:
opmnctl status
You should see EssbaseCluster reported as Alive. However, you will need to stop the Essbase instance running on this box, as this is the failover box. To stop the Essbase instance (but leave OPMN up and running so that it can communicate with the primary box), run the following:
opmnctl stopproc ias-component=EssbaseCluster
Run opmnctl status again to verify that the Essbase instance is Down.
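
With both nodes configured, one rough way to exercise the failover is sketched below. This is not an official test procedure; the process name and pid are illustrative:

  # On the active (primary) node, simulate a hard failure of the agent
  ps -ef | grep -i essbase        # find the Essbase agent process id
  kill -9 <essbase_agent_pid>     # hypothetical pid from the previous command

  # On the standby node, OPMN should bring the agent up; watch for Alive
  opmnctl status

Note that a clean opmnctl stopproc is a managed shutdown and may not trigger failover; killing the process is closer to a real failure.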
