The not so AutoUpgrade (Oracle AutoUpgrade in Windows)


First, before anyone starts sending me screenshots of their successful AutoUpgrade runs, or flexing about how their 100+ databases were smoothly migrated to PDBs over a weekend, I am stopping you right there. This is my honest feedback on the experience I had trying the tool in a Windows environment. It is obviously not a rant, but it could have been after 2 lonesome and painful months of sheer struggle with Oracle Support. I had even shared the issues I went through with the community on Twitter, but I just happened to be the lucky one to hit that special bug. There are enough blog posts showing you the glamorous side of AutoUpgrade; let me introduce you to… the Hitchcockian version.


The Plot 

It all started last summer when my client, who ran Oracle Standard Edition databases across his Windows fleet, asked about the available approaches and expected downtime to migrate PeopleSoft DBs from 12c (non-CDB) to the 19c multitenant architecture. At the time, the AutoUpgrade hype was still kicking and I had watched all of Mike Dietrich's recordings. Although most examples were based on Linux, I felt confident it would be a piece of cake on Windows too (never trust the hype) and sold that to my client.

My environment :

Platform: Windows Server 2012
Source non-CDB database (SI): Standard Edition
Target CDB (SI): Standard Edition
Hypervisor: Yes (VMware)
Application: PeopleSoft 9.2

Choosing AutoUpgrade

Besides the glaring fact that Oracle strongly recommends it as the best way to upgrade databases to 19c, I also explored the other available avenues before choosing AutoUpgrade.

According to the Oracle 19c migration white paper, the following methods are available to upgrade or migrate databases to 19c.


In my case, I needed to migrate and convert a 12c non-CDB into a 19c PDB, and since Full Transportable Tablespaces isn't supported on Standard Edition, the remaining options were:

– DBUA ==> Doesn't seem to support non-CDB to PDB conversion (and resuming after a failure is not possible)
– dbupgrade ==> Not automated enough (heck, we're in 2020)
– Data Pump expdp/impdp ==> Gigantic downtime when tested against the ~1TB database with 80k+ tables


The journey into BUG land

With AutoUpgrade selected, I was very excited to run it in a real project as opposed to a Vagrant lab. I started by installing the target 19c software and patching it to 19.7 before creating a CDB with DBCA; all was going as planned.

First Attempt

I decided to do a POC with a production refresh using RMAN duplicate and then upgrade the whole thing, but out of the blue, right at the finish line (95% upgrade completion), AutoUpgrade crashed on me.

The Configuration

– The AutoUpgrade config file is pretty basic, as you can see below, with restoration set to no

# Upgrade 1
upg1.upgrade_node=DEVHOST # sanitized
upg1.target_cdb=CDBTEST # sanitized
upg1.restoration=no   # Standard edition doesn’t support guaranteed restore points
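
For reference, a fuller config file would typically also carry the log directory, the SID, and the source/target homes; the entries below are a sketch with placeholder paths, not my client's actual values:

```
global.autoupg_log_dir=D:\autoupgrade\logs            # placeholder path
upg1.sid=FSDB
upg1.source_home=D:\oracle\product\12.1.0\dbhome_1    # placeholder path
upg1.target_home=D:\oracle\product\19.0.0\dbhome_1    # placeholder path
upg1.upgrade_node=DEVHOST
upg1.target_cdb=CDBTEST
upg1.restoration=no
```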



The following steps were run with neither of the ORACLE_SID and ORACLE_HOME environment variables set (important)
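
On Windows, clearing those variables for the current cmd session before launching the jar looks like this (session-scoped only; system-wide variables are untouched):

```
rem Clears the variables in this cmd session only
C:\> set ORACLE_SID=
C:\> set ORACLE_HOME=
```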

19c JDK version: 1.8.0_201
autoupgrade.jar build.version: 19.9.2 – 2020/08/31


1. Autoupgrade analyze

C:\> java -jar autoupgrade.jar -config UP19_FSDB.cfg -mode analyze

The only reported pre-upgrade errors were due to a non-empty recycle bin and the presence of 2 deprecated parameters; both had fixups, which I decided to perform manually anyway.
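
The manual fixups amount to a purge and a couple of parameter resets. The post doesn't name the 2 deprecated parameters, so the one below is only an illustrative example of a parameter deprecated in 12c:

```
SQL> PURGE DBA_RECYCLEBIN;
-- Example only: the actual deprecated parameters from the analyze report may differ
SQL> ALTER SYSTEM RESET sec_case_sensitive_logon SCOPE=SPFILE;
```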


2. Autoupgrade deploy

The environment was ready for a go, so I launched the deploy phase

C:\> java -jar autoupgrade.jar -config UP19_FSDB.cfg -mode deploy 
... An hour later
upg> lsj 
| 107| FSDB |DBUPGRADE|EXECUTING|RUNNING|20/09/11 12:33|19:25:12|93%Upgraded | 
Errors in database [FSDB] Stage [DBUPGRADE] 
Operation [STOPPED] Status [ERROR] 
Info [ Error: UPG-1400 UPGRADE FAILED [FSUAT] 
Cause: Database upgrade failed with errors

The upgrade phase never finished, but half the catalog had already been upgraded and 2 components became invalid (leaving a hybrid 12c/19c database)




I will break down the errors into 3 categories according to the component they relate to.


The first set of errors stemmed from two missing columns on sys.aud$, but let's first see what exactly happened.

CAUSE: ERROR at Line 6156 in [D:\..\FSDB\FSDB\107\dbupgrade\catupgrdxxfsdb1.log]
        REASON: ORA-00904: "CURRENT_USER": invalid identifier
        ACTION: [MANUAL]
2020-09-16 20:52:19.585 ERROR 
CAUSE: ERROR at Line 6281 in [D:\..\FSDB\FSDB\107\dbupgrade\catupgrdxxfsdb1.log]
        REASON: ORA-00904: "RLS_INFO": invalid identifier
        ACTION: [MANUAL]    

In catupgrdfsdb0.log, the lines show that both the current_user and rls$info columns were added to sys.aud$ at first. But a few minutes later, the same columns magically vanished, which made the 19c cataudit.sql script (called by catalog.sql) fail while recreating the DBA_AUDIT_TRAIL view, which is based on aud$.


How can this be? Columns added to a table simply vanish a few minutes later, with no other DDL run on that table!


The AUD$ table was actually owned by the ADM schema, not SYS (sys.aud$ was just a synonym). But why on earth didn't adding columns through the sys.aud$ synonym raise an error (i.e. ORA-00942)? If we can't run DDL through a synonym, then an exception should be expected.

SQL> select owner, object_name, object_type from dba_objects where object_name='AUD$';

OWNER   OBJECT_NAME   OBJECT_TYPE
-----   -----------   -----------
ADM     AUD$          TABLE
SYS     AUD$          SYNONYM

ADM (SYSADM): this is what we call in PeopleSoft the administrative schema (Owner ID); it contains nearly all of the database objects and is used by the PeopleSoft application to control access to schema objects.


The fix, which took 2 months to figure out, was to add the columns directly to the ADM.AUD$ table before running AutoUpgrade

SQL> alter table adm.aud$ add rls$info clob; 
SQL> alter table adm.aud$ add current_user varchar2(128); 


This second issue was solved very early in the SR, as the error was likely due to the missing XDK component, which the JAVA component depends on. Note that the source production database (12c) didn't have XDK installed, and Java wasn't used either.



The fix: install Oracle XDK and recompile invalid objects before running AutoUpgrade, as shown below

SQL> @?\xdk\admin\initxml.sql

SQL> @?\rdbms\admin\utlrp.sql


The TIME_STAMP function moved to the dbms_registry_sys package some time back, but the internal cdend.sql script still refers to the old package. Even though it is only a failed SELECT statement, it still blocks the upgrade.



The fix: replace the called function with the right package in the 19c catalog script (?\rdbms\admin\cdend.sql) and resume AutoUpgrade

-- Replace the below line 
SELECT dbms_registry.time_stamp('CATALOG') AS timestamp FROM DUAL; 
-- By this one 
SELECT dbms_registry_sys.time_stamp('CATALOG') AS timestamp FROM DUAL; 

Important: don't hesitate to run utlrp.sql after making changes and before starting/resuming AutoUpgrade, as lingering invalid objects can still block the upgrade (see below)

REASON: ORA-04063: package body "SYS.DBMS_REGISTRY_SYS" has errors 
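
A quick, generic way to recompile and then confirm nothing was left invalid before resuming (not specific to this upgrade; run from the affected home):

```
SQL> @?\rdbms\admin\utlrp.sql
-- If this returns rows, fix or recompile them before resuming AutoUpgrade
SQL> select owner, object_name, object_type from dba_objects where status = 'INVALID';
```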


I wanted to share my experience so that peers in the same situation won't have to struggle with support for months to get their databases upgraded on Windows using a tool that is supposed to be an industry standard. Make no mistake, I love being able to leverage native automation for my DBA tasks, but I just can't call something AUTO when you have to apply manual corrections to work around undocumented bugs just because it's Windows. Being a beta tester is never fun, but we can agree that AutoUpgrade has room for improvement.

At last, it always feels good to see a successful output after so many failures.

