Upgrading Oracle Grid Infrastructure 19c on Exadata

Upgrading Oracle Grid Infrastructure on Exadata doesn't differ from upgrading on non-Exadata RAC. The upgrade will be done in rolling mode.

Note: The steps below should not be used for Exadata Cloud Service or Exadata Cloud at Customer.
1- Download the Grid 19c gold image, clone version 19.10 (19.10.0.0.210119, patch 32372882), and stage the patch file p32372882_1910000GIRU_Linux-x86-64.zip on the first compute node under "/u01/dba/patches/".
[grid@exadb01 ~]$ cd /u01/dba/patches/
[grid@exadb01 ~]$ ls
p32372882_1910000GIRU_Linux-x86-64.zip
Note: According to MOS Doc ID 2542082.1, Oracle recommends using the GI 19.3 base install and then upgrading to the latest RU (19.10 in our example). However, I hit the error below while applying the 19.10 RU (patch 32226239) on top of the 19.3 base install using "/u01/app/19.0.0.0/grid/gridSetup.sh".
[grid@exadb01 gi19c]$ /u01/app/19.0.0.0/grid/gridSetup.sh -applyRU /u01/dba/patches/gi19c/32226239 -executePrereqs -silent -responseFile /u01/dba/patches/gi19c/grid_19c_upgrade.rsp -J-Doracle.install.mgmtDB=false -J-Doracle.install.mgmtDB.CDB=false -J-Doracle.install.crs.enableRemoteGIMR=false
Preparing the home to patch...
Preparing the home to apply the patch failed. For details look at the logs from /u01/app/oraInventory/logs.
The log can be found at: /u01/app/oraInventory/logs/GridSetupActions2021-02-08_10-16-08AM/installerPatchActions_2021-02-08_10-16-08AM.log
Launching Oracle Grid Infrastructure Setup Wizard...

[FATAL] [INS-13019] Some mandatory prerequisites are not met. These prerequisites cannot be ignored.
   CAUSE: The following prerequisites are mandatory and cannot be ignored:
   - ACFS Driver Checks
   ACTION: For detailed information of the failed prerequisites check the log: /u01/app/oraInventory/logs/GridSetupActions2021-02-08_10-16-08AM/gridSetupActions2021-02-08_10-16-08AM.log. Then either from the log file or from installation manual find the appropriate configuration to meet the prerequisites and fix it manually.
Refer to the ACFS driver certification matrix for your kernel version; see the section "ACFS and AFD 19c Supported Platforms" in MOS note ACFS Support On OS Platforms (Certification Matrix) (Doc ID 1369107.1).

It is therefore recommended to use the latest 19c GI gold image clone directly instead of GI 19.3 plus an upgrade to the latest RU.
2- Create the new 19c Grid Infrastructure (GI_HOME) on all compute nodes.
Connect to the first compute node as root and use dcli to create the new 19c GI_HOME on all compute nodes.
[root@exadb01 gi19c]# dcli -g /root/dbs_group -l root mkdir -p /u01/app/19.0.0.0/grid
[root@exadb01 gi19c]# dcli -g /root/dbs_group -l root chown grid:oinstall /u01/app/19.0.0.0/grid
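Before unzipping the clone, it is worth confirming the new home exists with the expected ownership. The helper below is a small sketch, not part of the original procedure; on the cluster you would push the same stat through dcli so every node is checked at once.

```shell
# Sketch: print "owner:group path" for a directory. On this system the
# expected output for the new GI home is "grid:oinstall /u01/app/19.0.0.0/grid".
check_home_owner() { stat -c '%U:%G %n' "$1"; }

# On the cluster (as root on the first compute node):
# dcli -g /root/dbs_group -l root "stat -c '%U:%G %n' /u01/app/19.0.0.0/grid"
```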
3- Install GI 19.10 on the first compute node.
Connect to the first compute node as grid and unzip the cloned home into /u01/app/19.0.0.0/grid.
[grid@exadb01 ~]$ cd /u01/dba/patches/
[grid@exadb01 ~]$ unzip /u01/dba/patches/p32372882_1910000GIRU_Linux-x86-64.zip
[grid@exadb01 patches]$ ls -ltr
total 6496964
-rw-r--r-- 1 grid oinstall        716 Jan 20 13:55 README.txt
-rw-r--r-- 1 grid oinstall 6652882747 Jan 20 14:01 grid-klone-Linux-x86-64-19000210119.zip
[grid@exadb01 patches]$ unzip -q /u01/dba/patches/grid-klone-Linux-x86-64-19000210119.zip -d /u01/app/19.0.0.0/grid
[grid@exadb01 grid]$ /u01/app/19.0.0.0/grid/OPatch/opatch version
OPatch Version: 12.2.0.1.23

OPatch succeeded.
4- Run the cluster verify pre-crsinst stage to confirm all checks pass.

Connect to the first compute node as grid and run runcluvfy.sh from the 19c GI_HOME.
[grid@exadb01 ~]$ cd /u01/app/19.0.0.0/grid
[grid@exadb01 grid]$ ./runcluvfy.sh stage -pre crsinst -upgrade -rolling -src_crshome /u01/app/18.0.0.0/grid -dest_crshome /u01/app/19.0.0.0/grid -dest_version 19.0.0.0.0 -fixup -verbose
. . .
Pre-check for cluster services setup was successful.

Verifying RPM Package Manager database ...INFORMATION
PRVG-11250 : The check "RPM Package Manager database" was not performed because it needs 'root' user privileges.

CVU operation performed:      stage -pre crsinst
Date:                         Feb 11, 2021 10:45:47 AM
CVU home:                     /u01/app/19.0.0.0/grid/
User:                         grid
5- Generate a Response File for Grid Infrastructure Upgrade (Only for Silent Mode)
You may reuse the Grid Infrastructure response file that was used to install the system. Make sure to replace the parameter value "oracle.install.option=CRS_CONFIG" with "oracle.install.option=UPGRADE".
Or you can use the sample file below:
--- Response file sample
[grid@exadb01 ~]$ cd /u01/dba/patches/gi19c
[grid@exadb01 gi19c]$ vi grid_19c_upgrade.rsp

oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v19.0.0
INVENTORY_LOCATION=/u01/app/oraInventory
oracle.install.option=UPGRADE
ORACLE_BASE=/u01/app/grid
oracle.install.crs.config.scanType=LOCAL_SCAN
oracle.install.crs.config.ClusterConfiguration=STANDALONE
oracle.install.crs.config.configureAsExtendedCluster=false
oracle.install.crs.config.clusterName=cluster-clu1                          <<< Cluster name
oracle.install.crs.config.gpnp.configureGNS=false
oracle.install.crs.config.autoConfigureClusterNodeVIP=false
oracle.install.crs.config.gpnp.gnsOption=CREATE_NEW_GNS
oracle.install.crs.config.clusterNodes=exadb01:,exadb02:,exadb03:,exadb04:  <<< List of compute nodes
oracle.install.crs.config.storageOption=FLEX_ASM_STORAGE
oracle.install.crs.config.useIPMI=false
oracle.install.asm.diskGroup.name=+DATA                                     <<< ASM disk group where the OCR is stored
oracle.install.asm.diskGroup.AUSize=0
oracle.install.asm.configureAFD=false
oracle.install.crs.configureRHPS=false
oracle.install.crs.config.ignoreDownNodes=false
oracle.install.config.managementOption=NONE
oracle.install.crs.rootconfig.executeRootScript=false
6- Using Dry-Run Upgrade Mode to Check System Upgrade Readiness.
Use the dry-run upgrade mode of the Oracle Grid Infrastructure installer, gridSetup.sh, to configure the new Grid home and copy the binaries to the rest of the nodes. This step takes some time to complete, but doing it now reduces the downtime required during the actual Grid Infrastructure upgrade.
You can execute dry-run in either Silent Mode or Interactive Mode.
Note: If there is a one-off patch that you need on the GI 19c home after the upgrade, it is recommended to apply it during the dry-run. You can do that by adding the argument "-applyOneOffs <One-off-patch_LOCATION>" when you execute "gridSetup.sh". This applies the one-off patch to the new GI 19c home on all nodes as part of the dry-run.
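As a sketch only, the note above translates to a command along these lines; the patch directory 12345678 is a hypothetical placeholder, not a real patch number:

```shell
# Hypothetical one-off patch location; substitute your actual patch number.
ONEOFF=/u01/dba/patches/gi19c/12345678

# Assemble the dry-run command with the one-off applied in the same pass.
CMD="/u01/app/19.0.0.0/grid/gridSetup.sh -silent \
 -responseFile /u01/dba/patches/gi19c/grid_19c_upgrade.rsp \
 -applyOneOffs $ONEOFF -dryRunForUpgrade"
echo "$CMD"
```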
Note: The installer does not perform an actual upgrade in the dry-run upgrade mode. You can relaunch the installer, without any flag, from any of the cluster nodes to upgrade Oracle Grid Infrastructure if the dry-run is successful.
Ensure that all folders under the central inventory have read and write permissions for the install user (grid). This is required to avoid the error below.
[FATAL] [INS-32050] Install user (grid) does not have sufficient access permissions for the given central inventory.
   CAUSE: User does not have read or write permissions for /u01/app/oraInventory/locks under the central inventory.
   ACTION: Ensure that all the folders under the central inventory has read and write permissions for the install user.
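One way to clear this condition in bulk is sketched below; the helper name is my own, and on a real system you would run it against the inventory path from /etc/oraInst.loc (as the grid user, or as root after a chown -R grid:oinstall if ownership is also wrong):

```shell
# Sketch: add owner read/write (and traverse on directories) to everything
# under the central inventory, then list anything still not writable.
fix_inventory_perms() {
  chmod -R u+rwX "$1"            # u+rwX: read/write, execute only on dirs
  find "$1" ! -writable -print   # empty output means nothing is left to fix
}

# On this system:
# fix_inventory_perms /u01/app/oraInventory
```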
Execute dry-run in silent mode.
[grid@exadb01 ~]$ cd /u01/app/19.0.0.0/grid/
[grid@exadb01 grid]$ ./gridSetup.sh -silent -responseFile /u01/dba/patches/gi19c/grid_19c_upgrade.rsp -ignorePrereqFailure -dryRunForUpgrade
Launching Oracle Grid Infrastructure Setup Wizard...

The response file for this session can be found at:
 /u01/app/19.0.0.0/grid/install/response/grid_2021-02-11_12-18-13PM.rsp

You can find the log of this install session at:
 /u01/app/oraInventory/logs/GridSetupActions2021-02-11_12-18-13PM/gridSetupActions2021-02-11_12-18-13PM.log

As a root user, execute the following script(s):
        1. /u01/app/19.0.0.0/grid/rootupgrade.sh

Execute /u01/app/19.0.0.0/grid/rootupgrade.sh on the following nodes:
[exadb01]

Run the script on the local node.

Successfully Setup Software.
Connect to the first compute node as root and run the script /u01/app/19.0.0.0/grid/rootupgrade.sh.

[root@exadb01 ~]# /u01/app/19.0.0.0/grid/rootupgrade.sh
Check /u01/app/19.0.0.0/grid/install/root_exadb01_2021-02-11_12-35-20-637603768.log for the output of root script
7- Validate that the Oracle 19c GI binary is relinked with RDS on all nodes.

Verify the GI binary is linked with the 'rds' option (this is the default starting with 11.2.0.4, but it may not be in effect on your system). The following command should return 'rds':
[grid@exadb01 grid]$ /u01/app/19.0.0.0/grid/bin/skgxpinfo
rds

--- Repeat the same command on the remaining compute nodes.
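To script this check rather than eyeball it per node, a small helper like the following can be used; the function name is my own sketch, not part of the original procedure:

```shell
# Sketch: return success only when the GI home's skgxpinfo reports "rds".
check_rds() {
  [ "$("$1"/bin/skgxpinfo 2>/dev/null)" = "rds" ]
}

# On this system:
# check_rds /u01/app/19.0.0.0/grid || echo "WARNING: GI home not linked with RDS"
```

If the check fails, the usual remedy (per the MOS documentation for your version) is to relink with the RDS IPC library from $GRID_HOME/rdbms/lib while the stack is down on that node; verify the exact procedure in the relevant MOS note before running it.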
8- Idle timeout setting management.
During the upgrade, some tasks take a while to complete, so we want to prevent SSH and shell idle timeouts from occurring. Check that the idle timeout is at least 14400 seconds.

The idle-timeout command updates the shell and SSH client idle timeouts. The net effect of the two timeouts is the same, so the shorter of the two typically prevails and has the most impact.
[root@exadb01 ~]# /opt/oracle.cellos/host_access_control idle-timeout -l 14400
[2021-02-11 12:48:09 -0500] [INFO] [IMG-SEC-0404] Shell timeout set to 14400

[root@exadb01 ~]# /opt/oracle.cellos/host_access_control idle-timeout -c 14400
[2021-02-11 12:48:21 -0500] [INFO] [IMG-SEC-0403] SSH client idle timeout is set to 14400
[2021-02-11 12:48:21 -0500] [INFO] [IMG-SEC-0A02] SSHD Service restarted. Changes in effect for new connections.

[root@exadb01 ~]# /opt/oracle.cellos/host_access_control idle-timeout -s
[2021-02-11 12:48:42 -0500] [INFO] [IMG-SEC-0402] Shell timeout is set to TMOUT=14400
[2021-02-11 12:48:42 -0500] [INFO] [IMG-SEC-0403] SSH client idle timeout is set to ClientAliveInterval 14400

--- Repeat the same commands on the remaining compute nodes.
9- Investigate Log for Errors (TFA)
Use Oracle Trace File Analyzer to analyze the logs across your cluster and identify recent errors. The analyze command below reports errors found in the chosen time window.
[root@exadb01 ~]# tfactl analyze -last 7d
10- Verify no active ASM rebalance is running.
Query gv$asm_operation to verify no active rebalance is running. A rebalance is running when the result of the following query is not equal to zero.
SYS@+ASM1> select count(*) from gv$asm_operation;

  COUNT(*)
----------
         0
With the preparation complete, proceed with the upgrade itself.

1- Execute gridSetup.sh in silent mode on the first compute node, using the upgrade response file generated by the dry-run.
Response file “/u01/app/19.0.0.0/grid/install/response/grid_2021-02-11_12-18-13PM.rsp“
You may run gridSetup.sh with nohup in case your session times out.
[grid@exadb01 ~]$ nohup /u01/app/19.0.0.0/grid/gridSetup.sh -silent -responseFile /u01/app/19.0.0.0/grid/install/response/grid_2021-02-11_12-18-13PM.rsp -ignorePrereqFailure &
[grid@exadb01 ~]$ tail -f nohup.out
Launching Oracle Grid Infrastructure Setup Wizard...

The response file for this session can be found at:
 /u01/app/19.0.0.0/grid/install/response/grid_2021-02-13_11-30-39PM.rsp

As a root user, execute the following script(s):
        1. /u01/app/19.0.0.0/grid/rootupgrade.sh

Execute /u01/app/19.0.0.0/grid/rootupgrade.sh on the following nodes:
[exadb01, exadb02, exadb03, exadb04]

Run the script on the local node first. After successful completion, you can start the script in parallel on all other nodes, except a node you designate as the last node. When all the nodes except the last node are done successfully, run the script on the last node.

Successfully Setup Software with warning(s).
As install user, execute the following command to complete the configuration.
        /u01/app/19.0.0.0/grid/gridSetup.sh -executeConfigTools -responseFile /u01/app/19.0.0.0/grid/install/response/grid_2021-02-11_12-18-13PM.rsp [-silent]
2- Execute rootupgrade.sh script on each compute node starting with the first compute node.
[root@exadb01 ~]# /u01/app/19.0.0.0/grid/rootupgrade.sh
Check /u01/app/19.0.0.0/grid/install/root_exadb01_2021-02-13_23-39-38-662083651.log for the output of root script

[root@exanadbadm01 ~]# tail -f /u01/app/19.0.0.0/grid/install/root_exanadbadm01.sleeman.local_2021-02-13_23-39-38-662083651.log
Copying coraenv to /usr/local/bin ...
Entries will be added to the /etc/oratab file as needed by Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/19.0.0.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/exadb01/crsconfig/rootcrs_exadb01_2021-02-13_11-39-55PM.log
2021/02/13 23:40:01 CLSRSC-595: Executing upgrade step 1 of 18: 'UpgradeTFA'.
2021/02/13 23:40:01 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
2021/02/13 23:40:01 CLSRSC-595: Executing upgrade step 2 of 18: 'ValidateEnv'.
2021/02/13 23:40:02 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.
2021/02/13 23:40:02 CLSRSC-595: Executing upgrade step 3 of 18: 'GetOldConfig'.
2021/02/13 23:40:02 CLSRSC-464: Starting retrieval of the cluster configuration data
2021/02/13 23:40:07 CLSRSC-692: Checking whether CRS entities are ready for upgrade. This operation may take a few minutes.
2021/02/13 23:41:41 CLSRSC-693: CRS entities validation completed successfully.
2021/02/13 23:41:53 CLSRSC-515: Starting OCR manual backup.
2021/02/13 23:42:06 CLSRSC-516: OCR manual backup successful.
2021/02/13 23:42:12 CLSRSC-486: At this stage of upgrade, the OCR has changed. Any attempt to downgrade the cluster after this point will require a complete cluster outage to restore the OCR.
2021/02/13 23:42:12 CLSRSC-541: To downgrade the cluster:
  1. All nodes that have been upgraded must be downgraded.
2021/02/13 23:42:12 CLSRSC-542:
  2. Before downgrading the last node, the Grid Infrastructure stack on all other cluster nodes must be down.
2021/02/13 23:42:21 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.
2021/02/13 23:42:21 CLSRSC-595: Executing upgrade step 4 of 18: 'GenSiteGUIDs'.
2021/02/13 23:42:24 CLSRSC-595: Executing upgrade step 5 of 18: 'UpgPrechecks'.
2021/02/13 23:42:29 CLSRSC-363: User ignored prerequisites during installation
2021/02/13 23:42:38 CLSRSC-595: Executing upgrade step 6 of 18: 'SetupOSD'.
2021/02/13 23:42:38 CLSRSC-595: Executing upgrade step 7 of 18: 'PreUpgrade'.
2021/02/13 23:43:21 CLSRSC-468: Setting Oracle Clusterware and ASM to rolling migration mode
2021/02/13 23:43:21 CLSRSC-482: Running command: '/u01/app/18.0.0.0/grid/bin/crsctl start rollingupgrade 19.0.0.0.0'
CRS-1131: The cluster was successfully set to rolling upgrade mode.
2021/02/13 23:43:29 CLSRSC-482: Running command: '/u01/app/19.0.0.0/grid/bin/asmca -silent -upgradeNodeASM -nonRolling false -oldCRSHome /u01/app/18.0.0.0/grid -oldCRSVersion 18.0.0.0.0 -firstNode true -startRolling false '
ASM configuration upgraded in local node successfully.
2021/02/13 23:43:41 CLSRSC-469: Successfully set Oracle Clusterware and ASM to rolling migration mode
2021/02/13 23:43:47 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack
2021/02/13 23:44:26 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.
2021/02/13 23:44:29 CLSRSC-595: Executing upgrade step 8 of 18: 'CheckCRSConfig'.
2021/02/13 23:44:30 CLSRSC-595: Executing upgrade step 9 of 18: 'UpgradeOLR'.
2021/02/13 23:44:45 CLSRSC-595: Executing upgrade step 10 of 18: 'ConfigCHMOS'.
2021/02/13 23:45:16 CLSRSC-595: Executing upgrade step 11 of 18: 'UpgradeAFD'.
2021/02/13 23:45:24 CLSRSC-595: Executing upgrade step 12 of 18: 'createOHASD'.
2021/02/13 23:45:33 CLSRSC-595: Executing upgrade step 13 of 18: 'ConfigOHASD'.
2021/02/13 23:45:33 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.service'
2021/02/13 23:46:14 CLSRSC-595: Executing upgrade step 14 of 18: 'InstallACFS'.
2021/02/13 23:46:54 CLSRSC-595: Executing upgrade step 15 of 18: 'InstallKA'.
2021/02/13 23:47:02 CLSRSC-595: Executing upgrade step 16 of 18: 'UpgradeCluster'.
2021/02/13 23:48:18 CLSRSC-343: Successfully started Oracle Clusterware stack
clscfg: EXISTING configuration version 5 detected.
Successfully taken the backup of node specific configuration in OCR.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2021/02/13 23:48:36 CLSRSC-595: Executing upgrade step 17 of 18: 'UpgradeNode'.
2021/02/13 23:48:40 CLSRSC-474: Initiating upgrade of resource types
2021/02/13 23:49:16 CLSRSC-475: Upgrade of resource types successfully initiated.
2021/02/13 23:49:31 CLSRSC-595: Executing upgrade step 18 of 18: 'PostUpgrade'.
2021/02/13 23:49:40 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

>>> Once rootupgrade.sh successfully finishes on the first compute node, proceed to the remaining nodes.
3- Complete the configuration.
After rootupgrade.sh has been successfully executed on all database nodes, run the following command to complete the configuration. Use the command printed at the end of the gridSetup.sh output.
[grid@exadb01 ~]$ /u01/app/19.0.0.0/grid/gridSetup.sh -executeConfigTools -responseFile /u01/app/19.0.0.0/grid/install/response/grid_2021-02-11_12-18-13PM.rsp -silent
Launching Oracle Grid Infrastructure Setup Wizard...

You can find the logs of this session at:
 /u01/app/oraInventory/logs/GridSetupActions2021-02-14_00-03-37AM

You can find the log of this install session at:
 /u01/app/oraInventory/logs/UpdateNodeList2021-02-14_00-03-37AM.log
You can find the log of this install session at:
 /u01/app/oraInventory/logs/UpdateNodeList2021-02-14_00-03-37AM.log
Successfully Configured Software.
4- Add 19c home to /etc/oratab on all nodes.
[grid@exadb01 ~]$ vi /etc/oratab

--- Add
+ASM1:/u01/app/19.0.0.0/grid:N
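Instead of hand-editing oratab on each node (the ASM SID differs per node: +ASM1, +ASM2, and so on), one sketch is to rewrite the existing 18c entry in place so each node keeps its own SID. The sed expression and the dcli usage below are assumptions for illustration, matching the homes used in this walkthrough:

```shell
# Sketch: rewrite any +ASMn line that still points at the 18c home so it
# points at the new 19c home, preserving the per-node SID.
ORATAB_SED='s|^\(+ASM[0-9]*\):/u01/app/18.0.0.0/grid:|\1:/u01/app/19.0.0.0/grid:|'

# Push to every compute node (as root on the first node):
# dcli -g /root/dbs_group -l root "sed -i '$ORATAB_SED' /etc/oratab"
```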
5- Verify cluster status
Perform a check on the status of the Grid Infrastructure post upgrade by executing the following command from one of the compute nodes.
[grid@exadb01 ~]$ . oraenv
[ORACLE_SID]: +ASM1
[grid@exadb01 ~]$ crsctl check cluster -all
**************************************************************
exadb01:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
exadb02:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
exadb03:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
exadb04:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
Check that the running CRS version matches your expectation after the upgrade, and make sure the cluster is in NORMAL mode rather than [UPGRADE FINAL]:
[grid@exadb01 ~]$ crsctl query crs releasepatch
Oracle Clusterware release patch level is [475103839] and the complete list of patches [29340594 32218454 32218663 32240590 32289783 32301133 ] have been applied on the local node. The release patch string is [19.10.0.0.0].

[grid@exadb01 ~]$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [19.0.0.0.0]

[grid@exadb01 ~]$ crsctl query crs softwareversion
Oracle Clusterware version on node [exadb01] is [19.0.0.0.0]

[grid@exadb01 ~]$ crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [19.0.0.0.0]
6- Perform Inventory update.
If you are using Oracle Database 11g Release 2 with Oracle Grid Infrastructure 19c, an inventory update of the 19c Grid home is required, because in 19c the cluster node names are no longer registered in the inventory and tools from older database versions rely on them. Run the following command on the local node when using earlier database releases with 19c GI.
[grid@exadb01 ~]$ /u01/app/19.0.0.0/grid/oui/bin/runInstaller -nowait -waitforcompletion -ignoreSysPrereqs -updateNodeList ORACLE_HOME=/u01/app/19.0.0.0/grid "CLUSTER_NODES={exadb01,exadb02,exadb03,exadb04}" CRS=true LOCAL_NODE=local_node
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 24575 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
7- Disable Diagsnap for Exadata.
Due to Bugs 24900613, 25785073, and 25810099, Diagsnap should be disabled for Exadata.
[grid@exadb01 ~]$ cd /u01/app/19.0.0.0/grid/bin
[grid@exadb01 bin]$ ./oclumon manage -disable diagsnap
Diagsnap is already Disabled on exadb01
Successfully Disabled diagsnap

--- Repeat on the remaining nodes.
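For per-node commands like this one, a small loop over ssh saves logging in to each node by hand. This is a sketch: the helper name, the node list, and passwordless ssh for the grid user are all assumptions.

```shell
# Sketch: run one command on each listed node over ssh.
run_on_nodes() {
  cmd=$1; shift                 # first argument: the command to run
  for node in "$@"; do          # remaining arguments: the node names
    ssh "grid@$node" "$cmd"
  done
}

# Usage on this system:
# run_on_nodes '/u01/app/19.0.0.0/grid/bin/oclumon manage -disable diagsnap' \
#   exadb01 exadb02 exadb03 exadb04
```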
8- Alter Advance ASM Compatible Diskgroup Attribute.
As a highly recommended best practice, advance the COMPATIBLE.ASM and COMPATIBLE.ADVM attributes of your disk groups to the Oracle ASM software version in use; this is also required to create new databases with the password file stored in an ASM disk group and to use ACFS.
[grid@exadb01 bin]$ sqlplus / as sysasm

SQL*Plus: Release 19.0.0.0.0 - Production on Sun Feb 14 00:21:18 2021
Version 19.10.0.0.0

Copyright (c) 1982, 2020, Oracle.  All rights reserved.

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.10.0.0.0

SQL> ALTER DISKGROUP DATA SET ATTRIBUTE 'compatible.asm' = '19.0.0.0.0';

Diskgroup altered.

SQL> ALTER DISKGROUP DATA SET ATTRIBUTE 'compatible.advm' = '19.0.0.0.0';

Diskgroup altered.

SQL> ALTER DISKGROUP RECO SET ATTRIBUTE 'compatible.asm' = '19.0.0.0.0';

Diskgroup altered.

SQL> ALTER DISKGROUP RECO SET ATTRIBUTE 'compatible.advm' = '19.0.0.0.0';

Diskgroup altered.

SQL> SELECT name                                      group_name
  2       , sector_size                               sector_size
  3       , block_size                                block_size
  4       , allocation_unit_size                      allocation_unit_size
  5       , state                                     state
  6       , type                                      type
  7       , compatibility                             asm_comp
  8       , database_compatibility                    db_comp
  9       , total_mb                                  total_mb
 10       , (total_mb - free_mb)                      used_mb
 11       , ROUND((1 - (free_mb / total_mb))*100, 2)  pct_used
 12    FROM v$asm_diskgroup
 13   ORDER BY name
 14  /

DG         Sector  Block   Allocation
Name       Size    Size    Unit Size    State       Type   ASM_COMP     DB_COMP      Total Size (MB) Used Size (MB) Pct. Used
---------- ------- ------- ------------ ----------- ------ ------------ ------------ --------------- -------------- ---------
DATA           512   4,096    4,194,304 MOUNTED     HIGH   19.0.0.0.0   11.2.0.4.0       134,535,168     19,859,280     14.76
RECO           512   4,096    4,194,304 MOUNTED     HIGH   19.0.0.0.0   11.2.0.4.0        33,623,424     12,086,040     35.95
Congratulations! The GI upgrade completed successfully.