After provisioning a domain with the maximum number of nodes, you can scale nodes down and up while utilising shared storage for the aserver/mserver domain homes and the product binaries directories.
NOTE: Use in conjunction with the limiting SSH validation on standby nodes feature to prevent Myst from performing SSH validations against scaled-down nodes.
Overview of the requirements to scale nodes up and down.
Name | Requirement Type | Requirement Example |
---|---|---|
Node | Physical/Virtual Host | Create the maximum number of nodes. |
Node size | Myst | Create the Platform Model with the maximum size of nodes. |
Enable feature | Myst | See Enable the Feature |
NodeManager | Myst | See NodeManager |
Deployment Plan Distribution | Myst | See Deployment Plan |
oraInventory | Myst | See oraInventory |
oraInventory | Shared storage | /u01/app/oracle/admin/shared/oraInventory/ |
Product binary home (both aserver and mserver) | Shared storage | /u01/app/oracle/product/ |
Domain home (both aserver and mserver) | Shared storage | /u01/app/oracle/admin/shared/ |
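The shared storage rows above assume that the same paths are mounted identically on every node. As a minimal sketch, the mounts might be defined via NFS in `/etc/fstab`; the server name `nfs-server` and the `/export/oracle/*` export paths below are purely illustrative assumptions, not values from Myst:

```
# Sketch only: "nfs-server" and the /export/oracle/* export paths are assumptions.
nfs-server:/export/oracle/product  /u01/app/oracle/product       nfs  defaults,_netdev  0 0
nfs-server:/export/oracle/shared   /u01/app/oracle/admin/shared  nfs  defaults,_netdev  0 0
```

Any shared filesystem visible to all nodes at the same paths would serve equally well.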
The feature is enabled with validation properties such as:

```
validation=off
validation.ssh=false
validation.multi-node.install=false
```
The NodeManager normally runs on each node under the same NodeManager home directory, e.g. `${[rxr.wls.Domain-1].domainAserverHome}/nodemanager`.
We want the NodeManagers' configuration files on shared storage, so we add override Java arguments that start each NodeManager in its own home.
Line breaks have been added for easier viewing; remove them when entering the values into Myst.
For the aserver NodeManager (soa-as):

```
-DDomainsFile=${[rxr.wls.Domain-1].domainAserverHome}/nodemanager/nodemanager.domains
-DNodeManagerHome=${[rxr.wls.Domain-1].domainAserverHome}/nodemanager/soa-as
-DLogFile=${[rxr.wls.Domain-1].domainAserverHome}/nodemanager/soa-as/nodemanager.log
```

For the mserver NodeManager (soa-01):

```
-DDomainsFile=${[rxr.wls.Domain-1].domainMserverHome}/nodemanager/nodemanager.domains
-DNodeManagerHome=${[rxr.wls.Domain-1].domainMserverHome}/nodemanager/soa-01
-DLogFile=${[rxr.wls.Domain-1].domainMserverHome}/nodemanager/soa-01/nodemanager.log
```
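For example, with the line breaks removed, the mserver arguments are entered into Myst as a single string:

```
-DDomainsFile=${[rxr.wls.Domain-1].domainMserverHome}/nodemanager/nodemanager.domains -DNodeManagerHome=${[rxr.wls.Domain-1].domainMserverHome}/nodemanager/soa-01 -DLogFile=${[rxr.wls.Domain-1].domainMserverHome}/nodemanager/soa-01/nodemanager.log
```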
Here is an example of a two-node environment:
```
└── shared_storage
    ├── aserver
    │   └── soa-as
    │       ├── nodemanager.log
    │       ├── nodemanager.log.lck
    │       ├── nodemanager.process.id
    │       ├── nodemanager.process.lck
    │       └── nodemanager.properties
    └── mserver
        ├── soa-01
        │   ├── nodemanager.log
        │   ├── nodemanager.log.lck
        │   ├── nodemanager.process.id
        │   ├── nodemanager.process.lck
        │   └── nodemanager.properties
        └── soa-02
            ├── nodemanager.log
            ├── nodemanager.log.lck
            ├── nodemanager.process.id
            ├── nodemanager.process.lck
            └── nodemanager.properties
```
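A layout like the two-node example above can be sanity-checked with a small script. This is a sketch, not part of Myst: the shared storage root is passed in because `/u01/app/oracle/admin/shared` is site-specific, and the `soa-as`/`soa-01`/`soa-02` names simply mirror the example.

```shell
# check_nm_homes: sketch of a sanity check that each NodeManager home
# under the shared storage root contains a nodemanager.properties file.
# The directory names match the two-node example; adjust to your environment.
check_nm_homes() {
  root="$1"
  missing=0
  for home in "$root/aserver/soa-as" "$root/mserver/soa-01" "$root/mserver/soa-02"; do
    if [ -f "$home/nodemanager.properties" ]; then
      echo "OK: $home"
    else
      echo "MISSING: $home"
      missing=1
    fi
  done
  return $missing
}
```

Running it against the shared storage mount point before scaling up helps confirm each NodeManager will find its configuration.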
Update the Myst deployment plan distribution to `shared`. Myst then distributes JCA adapter plans once instead of distributing them to multiple nodes.
Because the product binaries are installed on shared storage, we want the related oraInventory to be there as well:

```
${oracle.base}/admin/shared/oraInventory
```
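On Linux, the central inventory location is recorded in `oraInst.loc`, so pointing it at the shared path means every node resolves the same inventory. A sketch of the file, using the shared path from the requirements table; the `oinstall` group name is an assumption:

```
# /etc/oraInst.loc (sketch; inst_group is an assumption)
inventory_loc=/u01/app/oracle/admin/shared/oraInventory
inst_group=oinstall
```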
The table below describes the updated actions, which are supported in Myst 6.6.3+.
Name | Standard Feature | Shared Storage Feature |
---|---|---|
install | Installs product binaries on all nodes | Installs product binaries once on the AdminServer node into shared storage |
patch | Runs OPatch on all nodes | Runs on the AdminServer node; OPatch is applied once to the shared storage product binaries |
copy-domain | 1. Packs the aserver domain 2. Unpacks the domain to the mserver nodes | 1. Packs the aserver domain 2. Unpacks the domain to one mserver node defined in Myst |
Note: See the documentation on limiting SSH validation for standby nodes to understand limitations that may affect you when scaling.
Instead of systemd, you could use Myst to start or stop services.