Aychin's Oracle RDBMS Blog

Only for Advanced level Professionals


Policy Managed Databases and Server Pools

Note: This material is not based on practical tests; it consists of conclusions drawn solely from the Oracle technical documentation.

What are the benefits of Policy-Managed Databases? This concept is best suited for large enterprises that want to consolidate services into one management unit and automate resource sharing between them.

If an organization has 20 servers (hardware resources), it can consolidate them into one logical, automated unit under Oracle Clusterware, and then relocate several databases, RAC or RAC One Node, to this Clusterware environment. To manage this stack of servers effectively, we need to split them into groups. This is where the server pool concept comes into play: we can divide our servers into server pools, each consisting of one or more servers. But not all servers may be needed in the current configuration, which is why there is a default Free pool; all our servers start in this pool (in a fresh configuration) and are then assigned to the server pools that we create.

Let’s create a virtual cluster and use it as an example to describe how this works. Assume we have 10 nodes that we want to consolidate into one cluster:

Nodes: node1, node2, .., node10
We have 1 RAC database and 3 single-instance databases.
They serve 8 applications through dedicated services; 5 of these services are served by our RAC database.
Some of these applications are front-end applications accessed from the web, some are internal applications, and so on.

Before creating pools, consider the following points:

  • One database can be placed in several pools; the database-to-pool relation is one-to-many.
  • One service can be assigned to only one pool; the service-to-pool relation is one-to-one.
  • A RAC One Node database runs in only one server pool, and planned relocations must always stay within this server pool. However, it can also be relocated online to a server from the Free pool.
  • All databases must be policy-managed; administrator-managed databases can be placed only into the Generic pool, each in its own sub-pool.
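
These relations can be sketched with 11.2-style srvctl commands. This is only an illustration: the database name db1, service name svc1, pool names, and the ORACLE_HOME path are hypothetical examples, not objects from this article's cluster.

```shell
# One database, several pools: the database-to-pool relation is one-to-many
srvctl add database -d db1 -o /u01/app/oracle/product/11.2.0/dbhome_1 -g pool1,pool2

# One service, exactly one pool: the service-to-pool relation is one-to-one
srvctl add service -d db1 -s svc1 -g pool1
```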

For our RAC database we have 5 services defined. 3 of them are business-critical because they serve online applications; the other 2 are for internal statistics and reporting. It is more important for us to maintain the best response time for those 3 critical services. We will create one pool, racpool, with two sub-pools, racsp1 and racsp2, for our RAC database; one of the sub-pools will have 2 nodes, the other 1 node. The critical services will be served by racsp1 with two nodes, the less critical ones by racsp2 with one node. These sub-pools must be mutually exclusive, meaning they must not share servers: no server can be part of both sub-pools at the same time. The single-instance databases will need to be converted to RAC One Node, because SINGLE-type instances cannot be policy-managed. A server pool for RAC One Node should have at least 2 nodes, so that the database can be relocated online within this pool for maintenance operations. Let’s create one pool ronpool1 with 1 server for the less important RAC One Node database, and ronpool2 for the 2 more important databases.

ronpool1: 1 node
ronpool2: 3 nodes
racpool:  3 nodes
  racsp1: 2 nodes
  racsp2: 1 node

These are the minimum required numbers of nodes for these pools.
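
A hedged sketch of creating these pools with the MIN_SIZE values above, using the srvctl/crsctl syntax shown later in this post (all pool names are from this example):

```shell
# Top-level pools with their minimum sizes
srvctl add srvpool -g ronpool1 -l 1
srvctl add srvpool -g ronpool2 -l 3
srvctl add srvpool -g racpool  -l 3

# Sub-pools of racpool: srvctl creates them as regular pools first,
# then crsctl assigns the parent (note the automatic "ora." prefix)
srvctl add srvpool -g racsp1 -l 2
srvctl add srvpool -g racsp2 -l 1
crsctl modify serverpool ora.racsp1 -attr "PARENT_POOLS=ora.racpool"
crsctl modify serverpool ora.racsp2 -attr "PARENT_POOLS=ora.racpool"
```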

Now let’s specify properties for these pools. The main properties that can be set through srvctl are:

-g <pool_name>       NAME          Server pool name
-l <min_size>        MIN_SIZE      Minimum size of the server pool (default value is 0)
-u <max_size>        MAX_SIZE      Maximum size of the server pool (default value is -1, unlimited)
-i <importance>      IMPORTANCE    Importance of the server pool (default value is 0)
-n "<server_list>"   SERVER_NAMES  Comma-separated list of candidate server names

From the Oracle documentation, the server pool attributes are:

ACTIVE_SERVERS
  Value: a string of server names in the format "server_name1 server_name2 ..."
  Oracle Clusterware automatically manages this attribute, which contains the space-delimited list of servers that are currently assigned to a server pool.

EXCLUSIVE_POOLS
  Value: String
  This optional attribute indicates whether servers assigned to this server pool are shared with other server pools. A server pool can explicitly state that it is exclusive of any other server pool that has the same value for this attribute. Two or more server pools are mutually exclusive when the sets of servers assigned to them do not have a single server in common. For example, server pools A and B must be exclusive if they both set the value of this attribute to foo_A_B. Top-level server pools are mutually exclusive by default.

IMPORTANCE
  Value: any integer from 0 to 1000
  Relative importance of the server pool, with 0 denoting the lowest level of importance and 1000 the highest. This optional attribute is used to determine how to reconfigure the server pools when a node joins or leaves the cluster. The default value is 0.

MAX_SIZE
  Value: any nonnegative integer, or -1 (no limit)
  The maximum number of servers a server pool can contain. This attribute is optional and is set to -1 (no limit) by default. Note: a value of -1 for this attribute spans the entire cluster.

MIN_SIZE
  Value: any nonnegative integer
  The minimum size of a server pool. If the number of servers contained in a server pool is below the number you specify in this attribute, then Oracle Clusterware automatically moves servers from other pools into this one until that number is met. Note: the value of this optional attribute does not set a hard limit; it governs the priority for server assignment whenever the cluster is reconfigured. The default value is 0.

NAME
  Value: String
  The name of the server pool, which you must specify when you create the server pool. Server pool names must be unique within the domain of names of user-created entities, such as resources, types, and servers. A server pool name can contain any platform-supported characters except the exclamation point (!) and the tilde (~). A server pool name cannot begin with a period nor with "ora".

PARENT_POOLS
  Value: a string of space-delimited server pool names in the format "sp1 sp2 ..."
  Use of this attribute makes it possible to create nested server pools. Server pools listed in this attribute are referred to as parent server pools. A server pool included in a parent server pool is referred to as a child server pool.

SERVER_NAMES
  Value: a string of space-delimited server names in the format "server1 server2 ..."
  A list of candidate node names that may be associated with a server pool. If this optional attribute is empty, Oracle Clusterware assumes that any server may be assigned to any server pool, to the extent allowed by values of other attributes, such as PARENT_POOLS. The server names identified as candidate node names are not validated to confirm that they are currently active cluster members. Cluster administrators can use this attribute to define servers as candidates that have not yet been added to the cluster.

You manage server pools that are managing Oracle RAC or RAC One Node databases with the Server Control (SRVCTL) utility. Use the Oracle Clusterware Control (CRSCTL) utility to manage all other server pools.

Rules of server assignment

Servers are assigned to pools in this order:

  1. First, servers are assigned to the Generic pool.
  2. Then to user-created pools.
  3. All servers not assigned to either are placed in the Free pool.

In more detail: after the Generic server pool is filled according to its requirements (its SERVER_NAMES attribute), servers are first assigned to the top-level user-created pools, in order from higher IMPORTANCE to lower, until each pool's MIN_SIZE requirement is fulfilled. Then servers are assigned to the top-level user-created pools, in the same order, until each pool's MAX_SIZE is reached. Any servers not placed in the Generic pool or a user-created pool end up in the Free pool.
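
This two-pass ordering can be sketched as a small Python simulation. It is only a model of the documented rules, not Oracle code: it fills each pool sequentially rather than one server at a time, so the exact node names can differ from Clusterware's round-robin placement, but the resulting pool sizes match the example later in this post.

```python
def assign(servers, pools):
    """Toy model of Clusterware's two-pass server assignment.

    Each pool is a dict with name, importance, min_size and max_size
    (-1 means unlimited). Returns a mapping pool name -> assigned servers;
    leftover servers go to the Free pool.
    """
    result = {p["name"]: [] for p in pools}
    queue = list(servers)
    by_importance = sorted(pools, key=lambda p: -p["importance"])
    # Pass 1: satisfy MIN_SIZE, higher IMPORTANCE first
    for p in by_importance:
        while queue and len(result[p["name"]]) < p["min_size"]:
            result[p["name"]].append(queue.pop(0))
    # Pass 2: fill up to MAX_SIZE in the same order
    for p in by_importance:
        while queue and (p["max_size"] == -1 or len(result[p["name"]]) < p["max_size"]):
            result[p["name"]].append(queue.pop(0))
    # Whatever remains stays in the Free pool
    result["Free"] = queue
    return result

pools = [
    {"name": "ronpool1", "importance": 5,  "min_size": 1, "max_size": 1},
    {"name": "ronpool2", "importance": 10, "min_size": 3, "max_size": 5},
    {"name": "racpool",  "importance": 10, "min_size": 3, "max_size": 3},
]
placement = assign([f"node{i}" for i in range(1, 11)], pools)
print({name: len(nodes) for name, nodes in placement.items()})
```

With the 10 nodes of this example, the simulation fills ronpool2 to 5, racpool to 3, ronpool1 to 1, and leaves one node in the Free pool.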

Let’s divide our pools by importance:

Pool name:           Importance:
ronpool1             5
ronpool2             10
racpool              10
*racsp1              1
*racsp2              0

ronpool2 and racpool are equally important for our enterprise. ronpool1, with the less important database, has lower importance. The sub-pools within racpool also have different importance: racsp1 is more important to us than racsp2 because of the critical services.

Now let's specify MIN_SIZE for each pool, as we already defined above:

Pool name:           Importance:   Min_Size:
ronpool1             5             1
ronpool2             10            3
racpool              10            3
*racsp1              1             2
*racsp2              0             1

What about MAX_SIZE? Now we need to define the maximum number of servers each pool can have. Here we must decide whether we want to leave some servers in the Free pool or assign all servers to server pools, because by the rules Clusterware will try to fill the server pools up to their MAX_SIZE values. On the other hand, why should these servers sit idle in the Free pool? We could instead use them and benefit from their capacity. According to the rules, we can also set MAX_SIZE and still keep some servers in the Free pool, because we can set the IMPORTANCE attribute of the Free pool. We cannot change any other attribute of the Free pool, but IMPORTANCE we can. If we set the IMPORTANCE of the Free pool greater than or equal to that of our most important pool, servers will stay in the Free pool. Only if a failure occurs will they be assigned to the pool that needs them to maintain its MIN_SIZE requirement, and they will then remain part of that pool.
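
If you do want to keep spare servers in the Free pool, the only attribute you can change on it is IMPORTANCE. A minimal sketch (the value 15 is just this example's choice, high enough to exceed every user-created pool):

```shell
# Raise the Free pool's IMPORTANCE above the most important user pool,
# so idle servers stay in Free until a MIN_SIZE deficit pulls them out
crsctl modify serverpool Free -attr "IMPORTANCE=15"
```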

We have 10 servers; let's assume we set MAX_SIZE like this:

Pool name:           Importance:   Min_Size:  Max_Size:
Free                 0             0          -1
ronpool1             5             1          1
ronpool2             10            3          5
racpool              10            3          3
*racsp1              1             2          2
*racsp2              0             1          1

Then, after server assignment, we will have:

Pool name:           Importance:   Min_Size:  Max_Size:  Nodes:
Free                 0             0          -1         node10
ronpool1             5             1          1          node7
ronpool2             10            3          5          node1, node3, node5, node8, node9
racpool              10            3          3          node2, node4, node6
*racsp1              1             2          2          node2, node4
*racsp2              0             1          1          node6

Remember that the Free pool's IMPORTANCE here is 0.

1. Clusterware sorts the top-level pools by importance (ronpool2 → 10, racpool → 10, ronpool1 → 5) and assigns servers to them one by one until each reaches its MIN_SIZE. It distributes servers among the sub-pools on the same principle.
2. It then goes through the same sorted list trying to satisfy the MAX_SIZE attributes; ronpool2 has this attribute set to 5, which is why servers node8 and node9 are assigned to it.

But if we set the IMPORTANCE of the Free pool to 15, these servers will stay in the Free pool. I prefer not to leave these servers idle, so I will not modify the IMPORTANCE of the Free pool and will instead set MAX_SIZE for the pools:

Pool name:           Importance:   Min_Size:  Max_Size:  Nodes:
Free                 0             0          -1         
ronpool1             5             1          1          node7
ronpool2             10            3          5          node1, node3, node5, node8, node9
racpool              10            3          4          node2, node4, node6, node10
*racsp1              1             2          3          node2, node4, node10
*racsp2              0             1          1          node6

Relocations of servers between pools

Relocation of nodes between pools is performed by Clusterware depending on the attributes of the pools. An administrator can force a relocation by changing the attributes of a pool; this is a manual, indirect relocation.

The general rules, according to the Oracle documentation, are:

If the number of servers in a server pool falls below the value of the MIN_SIZE attribute for the server pool (such as when a server fails), based on values you set for the MIN_SIZE and IMPORTANCE attributes for all server pools, Oracle Clusterware can move servers from other server pools into the server pool whose number of servers has fallen below the value for MIN_SIZE. Oracle Clusterware selects servers from other server pools to move into the deficient server pool that meet the following criteria:

  • For server pools that have a lower IMPORTANCE value than the deficient server pool, Oracle Clusterware can take servers from those server pools even if it means that the number of servers falls below the value for the MIN_SIZE attribute.
  • For server pools with equal or greater IMPORTANCE, Oracle Clusterware only takes servers from those server pools if the number of servers in a server pool is greater than the value of its MIN_SIZE attribute.
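
The donor-selection logic described above can be sketched in Python. The preference order modeled here (surplus pools first, then lower-importance pools even below their MIN_SIZE) is my interpretation of the documented rules and of the node-failure example later in this section; Oracle's actual tie-breaking may differ.

```python
def pick_donor(deficient, pools):
    """Choose which pool gives up a server to a pool below its MIN_SIZE.

    Each pool is a dict with name, importance, min_size and size (its
    current number of servers).
    """
    others = [p for p in pools if p["name"] != deficient["name"] and p["size"] > 0]
    # Equal- or higher-importance pools may donate only surplus servers
    surplus = [p for p in others if p["size"] > p["min_size"]]
    if surplus:
        return min(surplus, key=lambda p: p["importance"])["name"]
    # Lower-importance pools may be dragged below their MIN_SIZE
    lower = [p for p in others if p["importance"] < deficient["importance"]]
    if lower:
        return min(lower, key=lambda p: p["importance"])["name"]
    return None

pools = [
    {"name": "ronpool1", "importance": 5,  "min_size": 1, "size": 1},
    {"name": "ronpool2", "importance": 10, "min_size": 3, "size": 5},
    {"name": "racpool",  "importance": 10, "min_size": 3, "size": 2},
]
# racpool fell below its MIN_SIZE of 3 after losing two nodes
print(pick_donor(pools[2], pools))  # ronpool2: equal importance, but has surplus
```

Note that ronpool1, despite its lower importance, is not chosen here: taking its only server would drag it below MIN_SIZE, and ronpool2 still has surplus servers to give.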

As you can see, Clusterware relies on just these two rules; parameters like server load or compute capacity are not considered. That is why it is advisable to have servers with the same performance characteristics in the configuration. Alternatively, you can use the SERVER_NAMES attribute of the pools (the candidate server list) to tie specific servers to specific server pools. Otherwise a situation can arise where, for example, a server pool contains two powerful servers and, when one of them fails, Clusterware picks a much less powerful server and adds it to this pool.

To make Clusterware consider request response time and other performance characteristics, and to let it propose relocations and other actions such as modifying the CPU_COUNT (Instance Caging) parameter, you need Oracle Quality of Service Management. Oracle QoS works with policy-managed databases; it adds further restrictions on server pool configurations, which you can read about in the Oracle QoS Management documentation.

In our case, if we lose node10, for example, nothing will happen: racpool still meets its MIN_SIZE of 3, and racsp1 its MIN_SIZE of 2. But if we then lose one more node, for example node4, one node from ronpool2 will be reassigned to the racsp1 sub-pool of racpool, and the configuration will look like this:

Pool name:           Importance:   Min_Size:  Max_Size:  Nodes:
Free                 0             0          -1         
ronpool1             5             1          1          node7
ronpool2             10            3          5          node1, node3, node5, node8
racpool              10            3          4          node2, node6, node9
*racsp1              1             2          3          node2, node9
*racsp2              0             1          1          node6

Also, if you have administrator-managed databases, you will have a Generic pool: when you add a database to the Clusterware configuration as admin-managed, a sub-pool is automatically created under the Generic pool for that database. You can have admin-managed and policy-managed databases in one cluster, but the same servers cannot be part of both the Generic pool and user-created pools. Also keep in mind that servers are assigned to the Generic pool first, and only after that to all others. A server will not be taken from the Generic pool for a user-created pool even if that pool has higher importance and an unfulfilled MIN_SIZE attribute.

To take a server from the Generic pool and reassign it to a user-created pool, you need to relocate the admin-managed database that occupies that server, then remove the server from its sub-pool and from the Generic pool. You can do this by updating the SERVER_NAMES attribute of the Generic pool and its sub-pool. After that, the server can be moved to the user-created pool. It will be moved automatically if the attributes of that pool allow it; if not, you can manually increase MIN_SIZE to let Clusterware relocate the server into that pool.
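
A sketch of this sequence, assuming db2 is an admin-managed RAC One Node database currently running on node3 (the database name, node names, and the ora.db2 sub-pool name are hypothetical examples):

```shell
# 1. Move the database off node3 (RAC One Node online relocation)
srvctl relocate database -d db2 -n node5

# 2. Drop node3 from the database's sub-pool and from the Generic pool
crsctl modify serverpool ora.db2 -attr "SERVER_NAMES=node5"
crsctl modify serverpool Generic -attr "SERVER_NAMES=node5"
```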

For RAC databases, when new servers are relocated into their pool, Clusterware automatically adds an undo tablespace and a redo thread as needed.

Exclusive pools

Exclusive pools are pools that cannot share the same servers: for example, if pool1 has node1 and node2, then pool2 cannot contain node1 or node2. By default, all top-level pools are exclusive, but sub-pools are not. For example, if subpool1 has node1, then subpool2 can also have this server; this is true only for sub-pools of the same server pool. We can see this in the sub-pools of the Generic pool. To make sub-pools exclusive, the Oracle documentation recommends setting the EXCLUSIVE_POOLS attribute to the same string on every sub-pool that must be treated as exclusive. This string can be any sequence of characters excluding special ones. For example, to make subpool1 and subpool2 exclusive, we set the EXCLUSIVE_POOLS attribute of both to the same string value, for example "excl1"; then they will not be able to share the same servers.

Some practice

To add server pool:

srvctl add srvpool -g pool1 -l 0 -u 2

To modify the MIN_SIZE, MAX_SIZE, IMPORTANCE, and SERVER_NAMES attributes:

srvctl modify srvpool -g pool1 -l 1 -n node1,node2

To manipulate the Generic pool and its sub-pools, use crsctl:

crsctl modify serverpool Generic -attr "SERVER_NAMES=node1 node2 node3"

To add sub-pools, you can first add regular pools using srvctl and then assign them a parent by modifying the PARENT_POOLS attribute with crsctl:

srvctl add srvpool -g sp1 -l 0 -u 1
srvctl add srvpool -g sp2 -l 0 -u 1

crsctl modify serverpool ora.sp1 -attr "PARENT_POOLS=ora.pool1"
crsctl modify serverpool ora.sp2 -attr "PARENT_POOLS=ora.pool1"

Note: The "ora." prefix is added automatically to all non-default pools; that is why you need to use this prefix when manipulating them with crsctl.


Some problems, how without them? 🙂

There are some problems that cannot be explained other than by Oracle not yet fully supporting all of the offered features. For example, after you assign some pools to a parent pool, you can no longer manipulate the parent pool using srvctl, and the sub-pools are not shown by this tool at all.

srvctl config srvpool -g pool1
PRKO-3160 : Server pool pool1 is internally managed as part of administrator-managed database configuration and therefore cannot be queried directly via srvpool object.

Or, when you try to set the EXCLUSIVE_POOLS attribute to some string to make pools exclusive, you get this message:

crsctl modify serverpool ora.sp1 -attr "EXCLUSIVE_POOLS=excl1"
CRS-2596: Modifications to the 'EXCLUSIVE_POOLS' attribute of server pools are not supported

But these could be problems with my environment or configuration. As noted before, I have not tested all of this. If you have a chance to test this theory, please post your results here and I will correct this paper.

(c) Aychin Gasimov, 05/2013, Munich, Germany