
AWS RDS for PostgreSQL – 3 – Creating the RDS PostgreSQL instance


In the last two posts we had a look at DB Parameter Groups and Subnet Groups, as these need to be ready when you want to deploy an RDS PostgreSQL instance on AWS. In this post we’ll use these two building blocks to bring up a highly available PostgreSQL instance with a master instance in one availability zone and a replica in another. This is usually what you want for a production deployment in AWS.

As usual, with the AWS console, creating a new database is quite easy:

In our case we want to go for PostgreSQL:

The latest version as of today is 12.2. We want to go for the production template and we need to provide a name:

We will stick with the default for the master username, as that is usually postgres anyway:

We’ll go with the defaults for the storage section as well:

For production deployments you usually want to go for one or more standby instances. For the scope of this little demo it is not required:

Now it comes to the connectivity and this is where we need the subnet group we created in the last post:

For security I’ve chosen my default security group, which also allows incoming traffic on the standard PostgreSQL port 5432. We’ll keep the default for authentication:

… but we will use the DB Parameter Group we created in the first post:

We’ll keep the defaults for the remaining sections (backup, monitoring, encryption) and will create the RDS PostgreSQL instance:
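If you prefer to script the creation instead of clicking through the console, a rough AWS CLI equivalent could look like the sketch below (the instance identifier, parameter group and instance class are the ones used in this series; the subnet group, security group and password are placeholders you need to adapt):

aws rds create-db-instance \
    --db-instance-identifier dwe-pg \
    --engine postgres \
    --engine-version 12.2 \
    --db-instance-class db.m5.xlarge \
    --master-username postgres \
    --master-user-password '<your-password>' \
    --allocated-storage 20 \
    --db-subnet-group-name <your-subnet-group> \
    --db-parameter-group-name dbi-dwe-pg12 \
    --vpc-security-group-ids <your-security-group-id> \
    --no-multi-az
# wait until the instance is available
aws rds wait db-instance-available --db-instance-identifier dwe-pg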

The deployment will take some minutes and once it is ready you get an overview in the console:

If you are looking for the reference to the DB Parameter Group you can find that in the “Configuration” section:

From now on we can use psql to connect to our PostgreSQL instance:

[ec2-user@ip-10-0-1-51 ~]$ psql -h dwe-pg.xxx.eu-central-1.rds.amazonaws.com -p 5432 -U postgres postgres
psql (12.2)
SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256, compression: off)
Type "help" for help.

postgres=> select version();
                                                version                                                 
--------------------------------------------------------------------------------------------------------
 PostgreSQL 12.2 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.3 20140911 (Red Hat 4.8.3-9), 64-bit
(1 row)

As you can already see, the connection is encrypted. This is because AWS configures PostgreSQL for SSL out of the box, something we already saw when we created the DB Parameter Group:

postgres=> show ssl;
 ssl 
-----
 on
(1 row)

postgres=> show ssl_cert_file ;
              ssl_cert_file              
-----------------------------------------
 /rdsdbdata/rds-metadata/server-cert.pem
(1 row)

postgres=> show ssl_key_file ;
              ssl_key_file              
----------------------------------------
 /rdsdbdata/rds-metadata/server-key.pem
(1 row)
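Since the server already presents a certificate, you can go one step further and verify it on the client side. A small sketch (the URL of the RDS CA bundle is an assumption, check the AWS documentation for the current one):

# download the RDS CA bundle and connect with full certificate verification
wget https://truststore.pki.rds.amazonaws.com/global/global-bundle.pem
psql "host=dwe-pg.xxx.eu-central-1.rds.amazonaws.com port=5432 user=postgres dbname=postgres sslmode=verify-full sslrootcert=global-bundle.pem"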

As we are now using a managed service, do not expect to be able to use PostgreSQL the same way as when you install it yourself:

postgres=> select pg_ls_dir('.');
ERROR:  permission denied for function pg_ls_dir
postgres=> \du+
                                                                      List of roles
         Role name         |                   Attributes                   |                          Member of                           | Description 
---------------------------+------------------------------------------------+--------------------------------------------------------------+-------------
 pg_execute_server_program | Cannot login                                   | {}                                                           | 
 pg_monitor                | Cannot login                                   | {pg_read_all_settings,pg_read_all_stats,pg_stat_scan_tables} | 
 pg_read_all_settings      | Cannot login                                   | {}                                                           | 
 pg_read_all_stats         | Cannot login                                   | {}                                                           | 
 pg_read_server_files      | Cannot login                                   | {}                                                           | 
 pg_signal_backend         | Cannot login                                   | {}                                                           | 
 pg_stat_scan_tables       | Cannot login                                   | {}                                                           | 
 pg_write_server_files     | Cannot login                                   | {}                                                           | 
 postgres                  | Create role, Create DB                        +| {rds_superuser}                                              | 
                           | Password valid until infinity                  |                                                              | 
 rds_ad                    | Cannot login                                   | {}                                                           | 
 rds_iam                   | Cannot login                                   | {}                                                           | 
 rds_password              | Cannot login                                   | {}                                                           | 
 rds_replication           | Cannot login                                   | {}                                                           | 
 rds_superuser             | Cannot login                                   | {pg_monitor,pg_signal_backend,rds_replication,rds_password}  | 
 rdsadmin                  | Superuser, Create role, Create DB, Replication+| {}                                                           | 
                           | Password valid until infinity                  |                                                              | 
 rdsrepladmin              | No inheritance, Cannot login, Replication      | {}                                                           | 

Our “postgres” user is not a real superuser anymore; it is assigned to the “rds_superuser” role, which has limited permissions. The real superuser is “rdsadmin” and we do not have access to this user. A lot of things are not possible, e.g.:

postgres=> select * from pg_hba_file_rules ;
ERROR:  permission denied for view pg_hba_file_rules
postgres=> 
postgres=> select * from pg_read_file('/var/tmp/aa');
ERROR:  permission denied for function pg_read_file

… and this is normal, as all of these would interact with the operating system, which is completely hidden and not accessible at all. Changing parameters at the instance level does not work either:

postgres=> alter system set work_mem='1MB';
ERROR:  must be superuser to execute ALTER SYSTEM command

You will need to do this by adjusting the DB Parameter Group. What surprised me is that you can create a tablespace:

postgres=> create tablespace t1 location '/var/tmp';
CREATE TABLESPACE
postgres=> \db
                      List of tablespaces
    Name    |  Owner   |               Location                
------------+----------+---------------------------------------
 pg_default | rdsadmin | 
 pg_global  | rdsadmin | 
 t1         | postgres | /rdsdbdata/db/base/tablespace/var/tmp
(3 rows)

The location is remapped in the background. The reason is given in the documentation: tablespaces are only supported for compatibility reasons, not for spreading I/O, as you do not know what the storage looks like anyway. Using RDS for PostgreSQL (or any other RDS service) will change the way you work with the database instance. If you want to migrate, you need to check your application carefully; maybe it is using functionality that is not supported in RDS. Setting parameters cannot be done directly, only by using DB Parameter Groups.

In the next post we’ll look at how you can change parameters by modifying the DB Parameter Group.

This article AWS RDS for PostgreSQL – 3 – Creating the RDS PostgreSQL instance first appeared on Blog dbi services.


Setup Oracle XE on Linux Mint – a funny exercise


On my old laptop (an Acer TravelMate with an Intel Celeron N3160 CPU) I wanted to install Oracle XE. The currently available XE version is 18.4. My laptop runs Linux Mint 19.3 (Tricia). This blog describes the steps I had to follow (the steps for Ubuntu would be similar).

REMARK: The following steps were done just for fun and are not supported and not licensable by Oracle. If you follow them, you do so at your own risk 😉

Good instructions on how to install Oracle XE are already available here.

But the first issue not mentioned in the instructions above is that Oracle can no longer be installed on the latest Mint version due to a change in glibc. This has also been described in various blogs about e.g. installing Oracle on Fedora 26 or Fedora 27. The workaround for the problem is to do the following:


cd $ORACLE_HOME/lib/stubs
mkdir BAK
mv libc* BAK/
$ORACLE_HOME/bin/relink all

This brings us to the second issue: Oracle does not provide a mechanism to relink Oracle XE. You can of course relink an Enterprise Edition or a Standard Edition 2 installation, but relinking XE is not possible because lots of archives and object files are not available in an XE release. So how can we install Oracle XE on Linux Mint then? This needs a bit of an unsupported hack, copying archive and object files from an Enterprise Edition installation to XE, but I’ll get to that later.

So here are the steps to install Oracle XE on Linux Mint (unless mentioned otherwise, the steps are done as root, i.e. you may prefix the commands with “sudo” if you do not log in as root directly):

1. Install libaio and alien


root@clemens-TravelMate:~# apt-get update && apt-get upgrade
root@clemens-TravelMate:~# apt-get install libaio*
root@clemens-TravelMate:~# apt-get install alien

2. Download the Oracle rpm from here and convert it to a deb-file


root@clemens-TravelMate:~# cd /opt/distr
root@clemens-TravelMate:/opt/distr# alien --script oracle-database-xe-18c_1.0-2_amd64.rpm

3. Delete the original rpm to save some space


root@clemens-TravelMate:/opt/distr# ls -l oracle-database-xe-18c_1.0-2_amd64.deb
...
root@clemens-TravelMate:/opt/distr# rm oracle-database-xe-18c_1.0-2_amd64.rpm

4. Install the package


root@clemens-TravelMate:/opt/distr# dpkg -i oracle-database-xe-18c_1.0-2_amd64.deb

5. Make sure your host has an IPv4 address in your hosts file


root@clemens-TravelMate:/opt/distr# more /etc/hosts
127.0.0.1 localhost localhost.localdomain
192.168.10.49 clemens-TravelMate.fritz.box clemens-TravelMate

6. Disable the system check in the configuration script


cd /etc/init.d/
cp -p oracle-xe-18c oracle-xe-18c-cfg
vi oracle-xe-18c-cfg

Add the parameter


-J-Doracle.assistants.dbca.validate.ConfigurationParams=false 

in line 288 of the script, so that it finally looks as follows:


    $SU -s /bin/bash  $ORACLE_OWNER -c "(echo '$ORACLE_PASSWORD'; echo '$ORACLE_PASSWORD'; echo '$ORACLE_PASSWORD') | $DBCA -silent -createDatabase -gdbName $ORACLE_SID -templateName $TEMPLATE_NAME -characterSet $CHARSET -createAsContainerDatabase $CREATE_AS_CDB -numberOfPDBs $NUMBER_OF_PDBS -pdbName $PDB_NAME -sid $ORACLE_SID -emConfiguration DBEXPRESS -emExpressPort $EM_EXPRESS_PORT -J-Doracle.assistants.dbca.validate.DBCredentials=false -J-Doracle.assistants.dbca.validate.ConfigurationParams=false -sampleSchema true $SQLSCRIPT_CONSTRUCT $DBFILE_CONSTRUCT $MEMORY_CONSTRUCT"

7. Adjust user oracle, so that it has bash as its default shell


mkdir -p /home/oracle
chown oracle:oinstall /home/oracle
vi /etc/passwd
grep oracle /etc/passwd
oracle:x:54321:54321::/home/oracle:/bin/bash

You may of course add a .bashrc or .bash_profile in /home/oracle.
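A minimal .bash_profile could look like this (a sketch; the paths match the XE home used later in this post):

# /home/oracle/.bash_profile
export ORACLE_BASE=/opt/oracle
export ORACLE_HOME=/opt/oracle/product/18c/dbhomeXE
export ORACLE_SID=XE
export PATH=$ORACLE_HOME/bin:$PATH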

8. Adjust the Oracle make-scripts for Mint/Ubuntu (I took the script from here):


oracle@clemens-TravelMate:~/scripts$ cat omkfix_XE.sh 
#!/bin/sh
# Change the path below to point to your installation
export ORACLE_HOME=/opt/oracle/product/18c/dbhomeXE
# make changes in orld script
sed -i 's/exec gcc "\$@"/exec gcc -no-pie "\$@"/' $ORACLE_HOME/bin/orald
# Take backup before committing changes
cp $ORACLE_HOME/rdbms/lib/ins_rdbms.mk $ORACLE_HOME/rdbms/lib/ins_rdbms.mk.back
cp $ORACLE_HOME/rdbms/lib/env_rdbms.mk $ORACLE_HOME/rdbms/lib/env_rdbms.mk.back
cp $ORACLE_HOME/network/lib/env_network.mk $ORACLE_HOME/network/lib/env_network.mk.back
cp $ORACLE_HOME/srvm/lib/env_srvm.mk $ORACLE_HOME/srvm/lib/env_srvm.mk.back
cp $ORACLE_HOME/crs/lib/env_has.mk $ORACLE_HOME/crs/lib/env_has.mk.back
cp $ORACLE_HOME/odbc/lib/env_odbc.mk $ORACLE_HOME/odbc/lib/env_odbc.mk.back
cp $ORACLE_HOME/precomp/lib/env_precomp.mk $ORACLE_HOME/precomp/lib/env_precomp.mk.back
cp $ORACLE_HOME/ldap/lib/env_ldap.mk $ORACLE_HOME/ldap/lib/env_ldap.mk.back
cp $ORACLE_HOME/ord/im/lib/env_ordim.mk $ORACLE_HOME/ord/im/lib/env_ordim.mk.back
cp $ORACLE_HOME/ctx/lib/env_ctx.mk $ORACLE_HOME/ctx/lib/env_ctx.mk.back
cp $ORACLE_HOME/plsql/lib/env_plsql.mk $ORACLE_HOME/plsql/lib/env_plsql.mk.back
cp $ORACLE_HOME/sqlplus/lib/env_sqlplus.mk $ORACLE_HOME/sqlplus/lib/env_sqlplus.mk.back
cp $ORACLE_HOME/bin/genorasdksh $ORACLE_HOME/bin/genorasdksh.back
#
# make changes changes in .mk files
#
sed -i 's/\$(ORAPWD_LINKLINE)/\$(ORAPWD_LINKLINE) -lnnz18/' $ORACLE_HOME/rdbms/lib/ins_rdbms.mk
sed -i 's/\$(HSOTS_LINKLINE)/\$(HSOTS_LINKLINE) -lagtsh/' $ORACLE_HOME/rdbms/lib/ins_rdbms.mk
sed -i 's/\$(EXTPROC_LINKLINE)/\$(EXTPROC_LINKLINE) -lagtsh/' $ORACLE_HOME/rdbms/lib/ins_rdbms.mk
sed -i 's/\$(OPT) \$(HSOTSMAI)/\$(OPT) -Wl,--no-as-needed \$(HSOTSMAI)/' $ORACLE_HOME/rdbms/lib/env_rdbms.mk
sed -i 's/\$(OPT) \$(HSDEPMAI)/\$(OPT) -Wl,--no-as-needed \$(HSDEPMAI)/' $ORACLE_HOME/rdbms/lib/env_rdbms.mk
sed -i 's/\$(OPT) \$(EXTPMAI)/\$(OPT) -Wl,--no-as-needed \$(EXTPMAI)/' $ORACLE_HOME/rdbms/lib/env_rdbms.mk
sed -i 's/\$(SPOBJS) \$(LLIBDMEXT)/\$(SPOBJS) -Wl,--no-as-needed \$(LLIBDMEXT)/' $ORACLE_HOME/rdbms/lib/env_rdbms.mk
 
sed -i 's/\$(S0MAIN) \$(SSKRMED)/\$(S0MAIN) -Wl,--no-as-needed \$(SSKRMED)/' $ORACLE_HOME/rdbms/lib/env_rdbms.mk
sed -i 's/\$(S0MAIN) \$(SSBBDED)/\$(S0MAIN) -Wl,--no-as-needed \$(SSBBDED)/' $ORACLE_HOME/rdbms/lib/env_rdbms.mk
sed -i 's/\$(S0MAIN) \$(SSKRSED)/\$(S0MAIN) -Wl,--no-as-needed \$(SSKRSED)/' $ORACLE_HOME/rdbms/lib/env_rdbms.mk
sed -i 's/\$(S0MAIN) \$(SKRNPT)/\$(S0MAIN) -Wl,--no-as-needed \$(SKRNPT)/' $ORACLE_HOME/rdbms/lib/env_rdbms.mk
sed -i 's/\$(S0MAIN) \$(SSTRCED)/\$(S0MAIN) -Wl,--no-as-needed \$(SSTRCED)/' $ORACLE_HOME/rdbms/lib/env_rdbms.mk
sed -i 's/\$(S0MAIN) \$(SSTNTED)/\$(S0MAIN) -Wl,--no-as-needed \$(SSTNTED)/' $ORACLE_HOME/rdbms/lib/env_rdbms.mk
sed -i 's/\$(S0MAIN) \$(SSKFEDED)/\$(S0MAIN) -Wl,--no-as-needed \$(SSKFEDED)/' $ORACLE_HOME/rdbms/lib/env_rdbms.mk
 
sed -i 's/\$(S0MAIN) \$(SSKFODED)/\$(S0MAIN) -Wl,--no-as-needed \$(SSKFODED)/' $ORACLE_HOME/rdbms/lib/env_rdbms.mk
sed -i 's/\$(S0MAIN) \$(SSKFNDGED)/\$(S0MAIN) -Wl,--no-as-needed \$(SSKFNDGED)/' $ORACLE_HOME/rdbms/lib/env_rdbms.mk
sed -i 's/\$(S0MAIN) \$(SSKFMUED)/\$(S0MAIN) -Wl,--no-as-needed \$(SSKFMUED)/' $ORACLE_HOME/rdbms/lib/env_rdbms.mk
sed -i 's/\$(S0MAIN) \$(SSKFSAGED)/\$(S0MAIN) -Wl,--no-as-needed \$(SSKFSAGED)/' $ORACLE_HOME/rdbms/lib/env_rdbms.mk
sed -i 's/\$(S0MAIN) \$(DBGVCI)/\$(S0MAIN) -Wl,--no-as-needed \$(DBGVCI)/' $ORACLE_HOME/rdbms/lib/env_rdbms.mk
sed -i 's/\$(S0MAIN) \$(DBGUCI)/\$(S0MAIN) -Wl,--no-as-needed \$(DBGUCI)/' $ORACLE_HOME/rdbms/lib/env_rdbms.mk
sed -i 's/\$(S0MAIN) \$(SSKECED)/\$(S0MAIN) -Wl,--no-as-needed \$(SSKECED)/' $ORACLE_HOME/rdbms/lib/env_rdbms.mk
 
sed -i 's/^\(ORACLE_LINKLINE.*\$(ORACLE_LINKER)\) \($(PL_FLAGS)\)/\1 -Wl,--no-as-needed \2/g' $ORACLE_HOME/rdbms/lib/env_rdbms.mk
sed -i 's/^\(TNSLSNR_LINKLINE.*\$(TNSLSNR_OFILES)\) \(\$(LINKTTLIBS)\)/\1 -Wl,--no-as-needed \2/g' $ORACLE_HOME/network/lib/env_network.mk
sed -i 's/\$LD \$1G/$LD -Wl,--no-as-needed \$LD_RUNTIME/' $ORACLE_HOME/bin/genorasdksh
sed -i 's/\$(GETCRSHOME_OBJ1) \$(OCRLIBS_DEFAULT)/\$(GETCRSHOME_OBJ1) -Wl,--no-as-needed \$(OCRLIBS_DEFAULT)/' $ORACLE_HOME/srvm/lib/env_srvm.mk
sed -i 's/LDDISABLENEWDTAGS=-Wl,--disable-new-dtags/LDDISABLENEWDTAGS=-Wl,--no-as-needed,--disable-new-dtags/' $ORACLE_HOME/rdbms/lib/env_rdbms.mk
sed -i 's/LDDISABLENEWDTAGS=-Wl,--disable-new-dtags/LDDISABLENEWDTAGS=-Wl,--no-as-needed,--disable-new-dtags/' $ORACLE_HOME/crs/lib/env_has.mk;
sed -i 's/LDDISABLENEWDTAGS=-Wl,--disable-new-dtags/LDDISABLENEWDTAGS=-Wl,--no-as-needed,--disable-new-dtags/' $ORACLE_HOME/odbc/lib/env_odbc.mk
sed -i 's/LDDISABLENEWDTAGS=-Wl,--disable-new-dtags/LDDISABLENEWDTAGS=-Wl,--no-as-needed,--disable-new-dtags/' $ORACLE_HOME/precomp/lib/env_precomp.mk
sed -i 's/LDDISABLENEWDTAGS=-Wl,--disable-new-dtags/LDDISABLENEWDTAGS=-Wl,--no-as-needed,--disable-new-dtags/' $ORACLE_HOME/srvm/lib/env_srvm.mk;
sed -i 's/LDDISABLENEWDTAGS=-Wl,--disable-new-dtags/LDDISABLENEWDTAGS=-Wl,--no-as-needed,--disable-new-dtags/' $ORACLE_HOME/network/lib/env_network.mk
sed -i 's/LDDISABLENEWDTAGS=-Wl,--disable-new-dtags/LDDISABLENEWDTAGS=-Wl,--no-as-needed,--disable-new-dtags/' $ORACLE_HOME/ldap/lib/env_ldap.mk
sed -i 's/LDDISABLENEWDTAGS=-Wl,--disable-new-dtags/LDDISABLENEWDTAGS=-Wl,--no-as-needed,--disable-new-dtags/' $ORACLE_HOME/ord/im/lib/env_ordim.mk
sed -i 's/LDDISABLENEWDTAGS=-Wl,--disable-new-dtags/LDDISABLENEWDTAGS=-Wl,--no-as-needed,--disable-new-dtags/' $ORACLE_HOME/ctx/lib/env_ctx.mk
sed -i 's/LDDISABLENEWDTAGS=-Wl,--disable-new-dtags/LDDISABLENEWDTAGS=-Wl,--no-as-needed,--disable-new-dtags/' $ORACLE_HOME/plsql/lib/env_plsql.mk
sed -i 's/LDDISABLENEWDTAGS=-Wl,--disable-new-dtags/LDDISABLENEWDTAGS=-Wl,--no-as-needed,--disable-new-dtags/' $ORACLE_HOME/sqlplus/lib/env_sqlplus.mk
oracle@clemens-TravelMate:~/scripts$ 
oracle@clemens-TravelMate:~/scripts$ chmod +x omkfix_XE.sh
oracle@clemens-TravelMate:~/scripts$ . ./omkfix_XE.sh
oracle@clemens-TravelMate:~/scripts$ 

9. Install an Oracle Enterprise Edition 18.4 in a separate ORACLE_HOME /u01/app/oracle/product/18.0.0/dbhome_1. You may follow the steps to install it here.

REMARK: At this step I also updated the /etc/sysctl.conf with the usual Oracle requirements and activated the parameters with sysctl -p.


vm.swappiness=1
fs.file-max = 6815744
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmmni = 4096
kernel.shmall = 2097152
kernel.shmmax = 8589934592
kernel.sem = 250 32000 100 128
kernel.panic_on_oops = 1
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
net.ipv4.ip_local_port_range = 9000 65500
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.rp_filter = 1
vm.nr_hugepages = 600

10. Copy Object- and archive-files from the Enterprise Edition Oracle-Home to the XE Oracle-Home:


oracle@clemens-TravelMate:~/scripts$ cat cpXE.bash 
OH1=/u01/app/oracle/product/18.0.0/dbhome_1
OH=/opt/oracle/product/18c/dbhomeXE
cp -p $OH1/rdbms/lib/libknlopt.a $OH/rdbms/lib
cp -p $OH1/rdbms/lib/opimai.o $OH/rdbms/lib
cp -p $OH1/rdbms/lib/ssoraed.o $OH/rdbms/lib
cp -p $OH1/rdbms/lib/ttcsoi.o $OH/rdbms/lib
cp -p $OH1/lib/nautab.o $OH/lib
cp -p $OH1/lib/naeet.o $OH/lib
cp -p $OH1/lib/naect.o $OH/lib
cp -p $OH1/lib/naedhs.o $OH/lib
 
cp -p $OH1/lib/*.a $OH/lib
cp -p $OH1/rdbms/lib/*.a $OH/rdbms/lib
oracle@clemens-TravelMate:~/scripts$ bash ./cpXE.bash

11. Relink Oracle


cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk config.o ioracle

REMARK: This is of course not supported and you’ve effectively changed your Oracle XE to an Enterprise Edition version now!!!

12. Configure XE as root
REMARK: Without the relink above the script below would hang at the output “Copying database files”. Actually it would hang during the “startup nomount” of the DB.


root@clemens-TravelMate:/etc/init.d# ./oracle-xe-18c-cfg configure
/bin/df: unrecognized option '--direct'
Try '/bin/df --help' for more information.
Specify a password to be used for database accounts. Oracle recommends that the password entered should be at least 8 characters in length, contain at least 1 uppercase character, 1 lower case character and 1 digit [0-9]. Note that the same password will be used for SYS, SYSTEM and PDBADMIN accounts:
Confirm the password:
Configuring Oracle Listener.
Listener configuration succeeded.
Configuring Oracle Database XE.
Enter SYS user password: 
************
Enter SYSTEM user password: 
**********
Enter PDBADMIN User Password: 
***********
Prepare for db operation
7% complete
Copying database files
29% complete
Creating and starting Oracle instance
30% complete
31% complete
34% complete
38% complete
41% complete
43% complete
Completing Database Creation
47% complete
50% complete
Creating Pluggable Databases
54% complete
71% complete
Executing Post Configuration Actions
93% complete
Running Custom Scripts
100% complete
Database creation complete. For details check the logfiles at:
 /opt/oracle/cfgtoollogs/dbca/XE.
Database Information:
Global Database Name:XE
System Identifier(SID):XE
Look at the log file "/opt/oracle/cfgtoollogs/dbca/XE/XE.log" for further details.
 
Connect to Oracle Database using one of the connect strings:
     Pluggable database: clemens-TravelMate.fritz.box/XEPDB1
     Multitenant container database: clemens-TravelMate.fritz.box
Use https://localhost:5500/em to access Oracle Enterprise Manager for Oracle Database XE
root@clemens-TravelMate:/etc/init.d# 

Done. Now you can use your XE-DB:


oracle@clemens-TravelMate:~$ . oraenv
ORACLE_SID = [oracle] ? XE
The Oracle base has been set to /opt/oracle
oracle@clemens-TravelMate:~$ sqlplus / as sysdba
 
SQL*Plus: Release 18.0.0.0.0 - Production on Mon Apr 6 21:22:46 2020
Version 18.4.0.0.0
 
Copyright (c) 1982, 2018, Oracle.  All rights reserved.
 
Connected to:
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.4.0.0.0
 
SQL> show pdbs
 
    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 XEPDB1                         READ WRITE NO
SQL> 

REMARK: As you can see, the logon banner shows “Enterprise Edition”. I.e. the software installed is no longer Oracle XE and is absolutely not supported and not licensable under XE. The installation may just serve as a simple test and fun exercise to get Oracle working on Linux Mint.

Finally I installed the Swingbench Simple Order Entry schema and ran a test with 100 concurrent OLTP users. It worked without issues.

This article Setup Oracle XE on Linux Mint – a funny exercise first appeared on Blog dbi services.

Cleanup a failed Oracle XE installation on Linux Mint


In a previous blog post I described how to install Oracle XE on a current Linux Mint version (19.3 Tricia at the time of writing). After converting the Oracle-provided rpm to a deb installation file with the tool alien, you can install the Oracle XE software with a simple command:


root@clemens-TravelMate:/opt/distr# dpkg -i oracle-database-xe-18c_1.0-2_amd64.deb

If the installation fails, or the XE database cannot be created with the configuration script later on, you have to be able to remove the software again and clean up the system. However, this is often not that easy with “dpkg -r oracle-database-xe-18c”, because the pre-removal or post-removal script may fail, leaving the deb repository untouched.

To remove the software you may follow the steps below:

REMARK: It is assumed here that there is no other Oracle installation in the oraInventory.

1.) Make sure the Listener and the XE database are stopped

If there is no other way, you may kill the listener and pmon processes with e.g.:


root@clemens-TravelMate:~# kill -9 $(ps -ef | grep tnslsnr | grep -v grep | tr -s " " | cut -d " " -f2)
root@clemens-TravelMate:~# kill -9 $(ps -ef | grep pmon | grep -v grep | tr -s " " | cut -d " " -f2)
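If pkill is available on your system, the same can be done a bit more compactly (a sketch matching the same process names as above):

root@clemens-TravelMate:~# pkill -9 -f tnslsnr
root@clemens-TravelMate:~# pkill -9 -f pmon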

2.) Skip the pre-removal script by just putting an “exit 0” at the beginning of it


root@clemens-TravelMate:~# vi /var/lib/dpkg/info/oracle-database-xe-18c.prerm
...
root@clemens-TravelMate:~# head -2 /var/lib/dpkg/info/oracle-database-xe-18c.prerm
#!/bin/bash
exit 0

3.) Remove the Oracle installation


root@clemens-TravelMate:~# rm /etc/oratab /etc/oraInst.loc
root@clemens-TravelMate:~# rm -rf /opt/oracle/*

Remark: Removing the oraInst.loc ensures that the postrm-script /var/lib/dpkg/info/oracle-database-xe-18c.postrm is skipped.

4.) Remove the deb-package


root@clemens-TravelMate:~# dpkg --purge --force-all oracle-database-xe-18c

5.) Cleanup some other files


root@clemens-TravelMate:~# rm -rf /var/tmp/.oracle
root@clemens-TravelMate:~# rm -rf /opt/ORCLfmap/

Done.

This article Cleanup a failed Oracle XE installation on Linux Mint first appeared on Blog dbi services.

Starting an Oracle Database when a first connection comes in


To save resources I thought about starting an Oracle database automatically when a first connection comes in. I.e. if there are many smaller databases on a server which are not required during specific times, we may shut them down and automatically start them when a connection comes in. The objective was that even the first connection should succeed. Is that possible? Yes, it is. Here’s what I did:

First of all I needed a failed-connection event which triggers the startup of the database. In my case I took the message the listener produces on a connection to a service that is not registered. E.g.:


sqlplus cbleile/@orclpdb1

SQL*Plus: Release 19.0.0.0.0 - Production on Thu Apr 9 14:06:55 2020
Version 19.6.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.

ERROR:
ORA-12514: TNS:listener does not currently know of service requested in connect descriptor

With the default listener logging, the connection above produces a message like the following in the listener.log file:


oracle@oracle-19c6-vagrant:/opt/oracle/diag/tnslsnr/oracle-19c6-vagrant/listener/trace/ [orclcdb (CDB$ROOT)] tail -2 listener.log 
09-APR-2020 14:06:55 * (CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=ORCLPDB1)(CID=(PROGRAM=sqlplus)(HOST=oracle-19c6-vagrant)(USER=oracle))) * (ADDRESS=(PROTOCOL=tcp)(HOST=127.0.0.1)(PORT=11348)) * establish * ORCLPDB1 * 12514
TNS-12514: TNS:listener does not currently know of service requested in connect descriptor

Alternatively you may check the listener’s XML alert log file log.xml in the listener/alert directory.
To keep it simple, I used the listener.log file to check whether such a failed connection came in.

So now we just need a mechanism that checks for such a message in the listener.log and then triggers the database startup.

First I create an application service in my PDB:


alter system set container=pdb1;
exec dbms_service.create_service('APP_PDB1','APP_PDB1');
exec dbms_service.start_service('APP_PDB1');
alter system register;
alter pluggable database save state;

REMARK1: Do not use the default service of a PDB when connecting with the application. ALWAYS create a service for the application.
REMARK2: By using the “save state” I ensure that the service is started automatically on DB startup.
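To verify that the state was actually saved, a quick check could look like this (a sketch; DBA_PDB_SAVED_STATES is the dictionary view that records the saved states):

sqlplus -S / as sysdba <<EOF
select con_name, state from dba_pdb_saved_states;
EOF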

Secondly I created a tnsnames-alias, which retries the connection several times in case it fails initially:


ORCLPDB1_S =
  (DESCRIPTION =
  (CONNECT_TIMEOUT=10)(RETRY_COUNT=30)(RETRY_DELAY=2)
   (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 0.0.0.0)(PORT = 1521))
   )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = APP_PDB1)
    )
  )

The important parameters are:


(RETRY_COUNT=30)(RETRY_DELAY=2)

I.e. in case of an error we wait 2 seconds and try again, up to 30 times. That means we have 60 seconds to start the database when the first connection comes in.

The simple bash script below polls for the failed-connection message and starts the database when such a message appears in the listener.log:


#!/bin/bash
 
# Set the env for orclcdb
export ORAENV_ASK=NO
export ORACLE_SID=orclcdb
. oraenv
 
# Define where the listener log-file is
LISTENER_LOG=/opt/oracle/diag/tnslsnr/oracle-19c6-vagrant/listener/trace/listener.log
 
# create a fifo-file
fifo=/tmp/tmpfifo.$$
mkfifo "${fifo}" || exit 1
 
# tail the listener.log and write to fifo in a background process
tail -F -n0 $LISTENER_LOG >${fifo} &
tailpid=$! # optional
 
# check if a connection to service APP_PDB1 arrived and the listener returns a TNS-12514
# TNS-12514 TNS:listener does not currently know of service requested in connect descriptor
# i.e. go ahead if we detect a line containing "establish * APP_PDB1 * 12514" in the listener.log
grep -i -m 1 "establish \* app_pdb1 \* 12514" "${fifo}"
 
# if we get here a request to connect to service APP_PDB1 came in and the service is not 
# registered at the listener. We conclude then that the DB is down.
 
# Do some cleanup by killing the tail-process and removing the fifo-file
kill "${tailpid}" # optional
rm "${fifo}"
 
# Startup the DB
sqlplus -S / as sysdba <<EOF
startup
exit
EOF

REMARK1: You may check the discussion here on how to poll for a string in a file on Linux.
REMARK2: In production the script above would probably need a trap, e.g. for Ctrl-C, to kill the background tail process and remove the fifo file, as sketched below.
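Such a trap could look like this (a sketch reusing the variable names from the script above):

# cleanup on Ctrl-C or termination
cleanup() {
  kill "${tailpid}" 2>/dev/null
  rm -f "${fifo}"
  exit 1
}
trap cleanup INT TERM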

Test:

The database is down.
In session 1 I start my simple bash script:


oracle@oracle-19c6-vagrant:/home/oracle/tools/test_db_start_whenconnecting/ [orclcdb (CDB$ROOT)] bash ./poll_listener.bash

In session 2 I try to connect:


oracle@oracle-19c6-vagrant:/home/oracle/ [orclcdb (CDB$ROOT)] sqlplus cbleile@orclpdb1_s
 
SQL*Plus: Release 19.0.0.0.0 - Production on Thu Apr 9 14:33:11 2020
Version 19.6.0.0.0
 
Copyright (c) 1982, 2019, Oracle.  All rights reserved.
 
Enter password: 

After entering my password, my connection attempt “hangs”.
In session 1 I can see the following messages:


09-APR-2020 14:33:15 * (CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=APP_PDB1)(CID=(PROGRAM=sqlplus)(HOST=oracle-19c6-vagrant)(USER=oracle))) * (ADDRESS=(PROTOCOL=tcp)(HOST=127.0.0.1)(PORT=11592)) * establish * APP_PDB1 * 12514
ORACLE instance started.
 
Total System Global Area 3724537976 bytes
Fixed Size		    9142392 bytes
Variable Size		 1224736768 bytes
Database Buffers	 2483027968 bytes
Redo Buffers		    7630848 bytes
Database mounted.
Database opened.
./poll_listener.bash: line 38: 19096 Terminated              tail -F -n0 $LISTENER_LOG > ${fifo}
oracle@oracle-19c6-vagrant:/home/oracle/tools/test_db_start_whenconnecting/ [orclcdb (CDB$ROOT)] 

And session 2 automatically connects as the DB is open now:


oracle@oracle-19c6-vagrant:/home/oracle/ [orclcdb (CDB$ROOT)] sqlplus cbleile@orclpdb1_s
 
SQL*Plus: Release 19.0.0.0.0 - Production on Thu Apr 9 14:33:11 2020
Version 19.6.0.0.0
 
Copyright (c) 1982, 2019, Oracle.  All rights reserved.
 
Enter password: 
Last Successful login time: Thu Apr 09 2020 14:31:44 +01:00
 
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.6.0.0.0
 
cbleile@orclcdb@PDB1> 

All subsequent connects to the DB are fast, of course.

With such a mechanism I could even think of starting a virtual machine from a common listener once such a connection arrives. This would allow us to start DB servers in the cloud when a first connection from the application comes in, i.e. we could stop DB VMs in the cloud to save money and start them with a Terraform script (or whatever CLI tool you use to manage your cloud) when a first DB connection arrives, as in the sketch below.
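As an illustration of that idea, the startup step at the end of the script could be replaced by a cloud CLI call, for example with the AWS CLI (the instance id is a placeholder, and the database on the VM would still need to be started, e.g. via dbstart or a systemd unit):

# start the stopped DB server VM and wait until it is running
aws ec2 start-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-running --instance-ids i-0123456789abcdef0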

REMARK: With (Transparent) Application Continuity there are even more possibilities, but that feature requires RAC/RAC One Node or Active Data Guard and is out of scope here.

This article Starting an Oracle Database when a first connection comes in first appeared on Blog dbi services.

AWS RDS for PostgreSQL – 4 – Changing parameters


If you followed the last posts about DB Parameter Groups, Subnet Groups and setting up the RDS instance, you should have a running RDS instance. You should also be aware that changing parameters cannot be done the way you usually do it; instead, you need to change the DB Parameter Group. In this post we’ll look at how you can do that and, especially, what you should avoid.

Looking at our current configuration of the PostgreSQL RDS instance we can see that our DB Parameter Group is “in-sync”:

That means nothing changed or needs to be applied as all parameters of the instance have the same values as specified in the DB Parameter Group. Let’s go and change one of the parameters using the AWS console:

The parameter we’ll change is “autovacuum_naptime“, and we’ll change it from 15 to 20:

Once the changes are saved you’ll notice that the status in the configuration section changes to “applying”:

A few moments later the DB Parameter Group is “in-sync” again and the parameter is applied to the RDS instance:

postgres=> show autovacuum_naptime ;
 autovacuum_naptime 
--------------------
 20s
(1 row)

Using the AWS command line utilities for these tasks is usually much easier, and tasks can be automated. Changing the same parameter on the command line:

$ aws rds modify-db-parameter-group --db-parameter-group-name dbi-dwe-pg12 --parameters="ParameterName=autovacuum_naptime, ParameterValue=10, ApplyMethod=immediate"

You can already spot an important bit in the command: ApplyMethod=immediate. For dynamic parameters you have the choice to apply a new value immediately or “pending-reboot”. What happens if we change a static parameter using the AWS console?

Now we get a status of “pending-reboot”:

Rebooting the instance applies the parameter and the DB Parameter Group is in sync again:

postgres=> show autovacuum_naptime ;
FATAL:  terminating connection due to administrator command
SSL connection has been closed unexpectedly
The connection to the server was lost. Attempting reset: Succeeded.
SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256, compression: off)
postgres=> show autovacuum_naptime ;
 autovacuum_naptime 
--------------------
 10s
(1 row)

postgres=> show max_locks_per_transaction ;
 max_locks_per_transaction 
---------------------------
 128
(1 row)
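For completeness, the same static parameter change on the command line would use ApplyMethod=pending-reboot, followed by a reboot of the instance (a sketch, reusing the static parameter shown above):

$ aws rds modify-db-parameter-group --db-parameter-group-name dbi-dwe-pg12 --parameters="ParameterName=max_locks_per_transaction, ParameterValue=128, ApplyMethod=pending-reboot"
$ aws rds reboot-db-instance --db-instance-identifier dwe-pg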

What you always need to be careful about is the instance type. Currently the instance is running on db.m5.xlarge, which means we should have 16GB of memory available. Checking the current setting of shared_buffers, we see that PostgreSQL uses 3GB of that 16GB for caching:

postgres=> show shared_buffers;
 shared_buffers 
----------------
 3936960kB
(1 row)

postgres=> select 3936960/1024/1024;
 ?column? 
----------
        3
(1 row)

What happens when we set that to 32GB?

Will the instance come up after rebooting and what value will we see for shared_buffers?

We’ve managed to create a configuration that will prevent the RDS instance from starting up. On the one hand this is expected; on the other hand I would have expected some sanity checks in the background that prevent you from doing that. Maybe the reason AWS does not check this is that DB Parameter Groups can be used by several instances, which can all run on different instance types. So be careful when you have a DB Parameter Group that is used by more than one instance and you want to change settings like shared_buffers. Keep in mind that you need to check the instance type, because that defines your amount of memory and CPUs.
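Before touching memory related settings in a shared DB Parameter Group, it is therefore a good idea to check which instance class (and thus how much memory) the instances using it actually have, e.g. (a sketch):

$ aws rds describe-db-instances --db-instance-identifier dwe-pg --query "DBInstances[0].DBInstanceClass"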

By using the command line utilities we can also check the PostgreSQL log file, which confirms why the instance is not able to start up:

dwe@dwe:~/Documents/aws$ aws rds describe-db-log-files --db-instance-identifier dwe-pg
DESCRIBEDBLOGFILES      1586431187000   error/postgres.log      13626
DESCRIBEDBLOGFILES      1586267878000   error/postgresql.log.2020-04-07-13      8632
DESCRIBEDBLOGFILES      1586269968000   error/postgresql.log.2020-04-07-14      6211
DESCRIBEDBLOGFILES      1586426381000   error/postgresql.log.2020-04-09-09      8236
DESCRIBEDBLOGFILES      1586429793000   error/postgresql.log.2020-04-09-10      6096
DESCRIBEDBLOGFILES      1586431182000   error/postgresql.log.2020-04-09-11      4024
dwe@dwe:~/Documents/aws$ aws rds download-db-log-file-portion --db-instance-identifier dwe-pg --log-file-name error/postgres.log | tail -15
2020-04-09 11:19:46 UTC::@:[27607]:LOG:  database system is shut down
2020-04-09 11:19:47.453 GMT [27773] LOG:  skipping missing configuration file "/rdsdbdata/config/recovery.conf"
2020-04-09 11:19:47.453 GMT [27773] LOG:  skipping missing configuration file "/rdsdbdata/config/recovery.conf"
2020-04-09 11:19:47 UTC::@:[27773]:LOG:  database system is shut down
Postgres Shared Memory Value: 35386589184 bytes
2020-04-09 11:19:47.482 GMT [27785] LOG:  skipping missing configuration file "/rdsdbdata/config/recovery.conf"
2020-04-09 11:19:47.482 GMT [27785] LOG:  skipping missing configuration file "/rdsdbdata/config/recovery.conf"
2020-04-09 11:19:47 UTC::@:[27785]:LOG:  starting PostgreSQL 12.2 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.3 20140911 (Red Hat 4.8.3-9), 64-bit
2020-04-09 11:19:47 UTC::@:[27785]:LOG:  listening on IPv4 address "0.0.0.0", port 5432
2020-04-09 11:19:47 UTC::@:[27785]:LOG:  listening on IPv6 address "::", port 5432
2020-04-09 11:19:47 UTC::@:[27785]:LOG:  listening on Unix socket "/tmp/.s.PGSQL.5432"
2020-04-09 11:19:47 UTC::@:[27785]:FATAL:  could not map anonymous shared memory: Cannot allocate memory
2020-04-09 11:19:47 UTC::@:[27785]:HINT:  This error usually means that PostgreSQL's request for a shared memory segment exceeded available memory, swap space, or huge pages. To reduce the request size (currently 35386589184 bytes), reduce PostgreSQL's shared memory usage, perhaps by reducing shared_buffers or max_connections.
2020-04-09 11:19:47 UTC::@:[27785]:LOG:  database system is shut down

A bit strange is that there is no recovery.conf anymore in PostgreSQL 12, but AWS somehow still references it somewhere.

That’s it for changing parameters. In the next post we’ll look at backup and restore.

This article AWS RDS for PostgreSQL – 4 – Changing parameters first appeared on Blog dbi services.

Find the SQL Plan Baseline for a plan operation


By Franck Pachot

If you decide to capture SQL Plan Baselines, you achieve plan stability by being conservative: if the optimizer comes up with a new execution plan, it is loaded into the SQL Plan Management base, but not accepted. One day you may add an index to improve some queries. Then you should check whether there is any SQL Plan Baseline for queries with the same access predicate, because the optimizer will probably find this index attractive and add the new plan to the SPM base, but it will not be used unless you evolve and accept it. Or you may remove the SQL Plan Baselines for these queries now that you know you have provided a very efficient access path.

But how do you find all SQL Plan Baselines that are concerned? Here is an example.

I start with the SCOTT schema where I capture the SQL Plan Baselines for the following queries:


set time on sqlprompt 'SQL> '
host TWO_TASK=//localhost/CDB1A_PDB1.subnet.vcn.oraclevcn.com sqlplus sys/"demo##OracleDB20c" as sysdba @ ?/rdbms/admin/utlsampl.sql
connect scott/tiger@//localhost/CDB1A_PDB1.subnet.vcn.oraclevcn.com
alter session set optimizer_mode=first_rows optimizer_capture_sql_plan_baselines=true;
select * from emp where ename='SCOTT';
select * from emp where ename='SCOTT';

This is a full table scan because I have no index here.
Now I create an index that helps for this kind of query:


alter session set optimizer_mode=first_rows optimizer_capture_sql_plan_baselines=false;
host sleep 1
create index emp_ename on emp(ename);
host sleep 1
select * from emp where ename='SCOTT';

I now have, in addition to the accepted FULL TABLE SCAN baseline, the loaded, but not accepted, plan with INDEX access.
Here is the detailed list of plans:


SQL> select sql_handle,plan_name,created,enabled ENA,accepted ACC,fixed FIX,origin from dba_sql_plan_baselines;

             SQL_HANDLE                         PLAN_NAME            CREATED    ENA    ACC    FIX          ORIGIN
_______________________ _________________________________ __________________ ______ ______ ______ _______________
SQL_62193752b864a1e8    SQL_PLAN_6469raaw698g854d6b671    17-apr-20 19:37    YES    NO     NO     AUTO-CAPTURE
SQL_62193752b864a1e8    SQL_PLAN_6469raaw698g8d8a279cc    17-apr-20 19:37    YES    YES    NO     AUTO-CAPTURE

Full table scan:

SQL> select * from dbms_xplan.display_sql_plan_baseline('SQL_62193752b864a1e8','SQL_PLAN_6469raaw698g8d8a279cc'
);

                                                                  PLAN_TABLE_OUTPUT
___________________________________________________________________________________

--------------------------------------------------------------------------------
SQL handle: SQL_62193752b864a1e8
SQL text: select * from emp where ename='SCOTT'
--------------------------------------------------------------------------------

--------------------------------------------------------------------------------
Plan name: SQL_PLAN_6469raaw698g8d8a279cc         Plan id: 3634526668
Enabled: YES     Fixed: NO      Accepted: YES     Origin: AUTO-CAPTURE
Plan rows: From dictionary
--------------------------------------------------------------------------------

Plan hash value: 3956160932

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     1 |    87 |     2   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| EMP  |     1 |    87 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("ENAME"='SCOTT')

Index access - not accepted

SQL> select * from dbms_xplan.display_sql_plan_baseline('SQL_62193752b864a1e8','SQL_PLAN_6469raaw698g854d6b671'
);

                                                                                   PLAN_TABLE_OUTPUT
____________________________________________________________________________________________________

--------------------------------------------------------------------------------
SQL handle: SQL_62193752b864a1e8
SQL text: select * from emp where ename='SCOTT'
--------------------------------------------------------------------------------

--------------------------------------------------------------------------------
Plan name: SQL_PLAN_6469raaw698g854d6b671         Plan id: 1423357553
Enabled: YES     Fixed: NO      Accepted: NO      Origin: AUTO-CAPTURE
Plan rows: From dictionary
--------------------------------------------------------------------------------

Plan hash value: 2855689319

-------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |           |     1 |    87 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| EMP       |     1 |    87 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN                  | EMP_ENAME |     1 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("ENAME"='SCOTT')

SQL Plan Baseline lookup by plan operation

Now I want to know all queries where a SQL Plan Baseline references this index, because I’ll probably want to delete all plans for these queries, or maybe evolve the index access plan so that it gets accepted.
Here is my query on sys.sqlobj$plan:


select sql_handle,plan_name,created,enabled ENA,accepted ACC,fixed FIX,origin
 ,operation,options,object_name
from (
 -- SPM execution plans
 select signature,category,obj_type,plan_id
 ,operation, options, object_name
 from sys.sqlobj$plan
) natural join (
 -- SQL Plan Baselines
 select signature,category,obj_type,plan_id
 ,name plan_name
 from sys.sqlobj$
 where obj_type=2
) natural join (
 select plan_name
 ,sql_handle,created,enabled,accepted,fixed,origin
 from dba_sql_plan_baselines
)
where operation='INDEX' and object_name like 'EMP_ENAME'
/

This gets the signature and plan identification from sys.sqlobj$plan then joins to sys.sqlobj$ to get the plan name, and finally dba_sql_plan_baselines to get additional information:


             SQL_HANDLE                         PLAN_NAME            CREATED    ENA    ACC    FIX          ORIGIN    OPERATION       OPTIONS    OBJECT_NAME
_______________________ _________________________________ __________________ ______ ______ ______ _______________ ____________ _____________ ______________
SQL_62193752b864a1e8    SQL_PLAN_6469raaw698g854d6b671    17-apr-20 19:37    YES    NO     NO     AUTO-CAPTURE    INDEX        RANGE SCAN    EMP_ENAME

You can see that I like natural joins, but be aware that I do that only when I fully control the columns by defining, in subqueries, the column projections before the join.
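Once the affected baselines are identified, the follow-up mentioned above, dropping the plans or accepting the index plan, can be done with DBMS_SPM. A hedged sketch, reusing the handle from this example and the connection used in the setup above:

sqlplus sys/"demo##OracleDB20c"@//localhost/CDB1A_PDB1.subnet.vcn.oraclevcn.com as sysdba <<EOF
-- drop all captured plans for this statement (returns the number of plans dropped) ...
variable n number
exec :n := dbms_spm.drop_sql_plan_baseline(sql_handle => 'SQL_62193752b864a1e8');
print n
-- ... or, alternatively, evolve/accept the index plan with dbms_spm.evolve_sql_plan_baseline
EOF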

I have the following variant if I want to look up by the outline hints:


select sql_handle,plan_name,created,enabled ENA,accepted ACC,fixed FIX,origin
 ,operation,options,object_name
 ,outline_data
from (
 -- SPM execution plans
 select signature,category,obj_type,plan_id
 ,operation, options, object_name
 ,case when other_xml like '%outline_data%' then extract(xmltype(other_xml),'/*/outline_data').getStringVal() end outline_data
 from sys.sqlobj$plan
) natural join (
 -- SQL Plan Baselines
 select signature,category,obj_type,plan_id
 ,name plan_name
 from sys.sqlobj$
 where obj_type=2
) natural join (
 select plan_name
 ,sql_handle,created,enabled,accepted,fixed,origin
 from dba_sql_plan_baselines
)
where outline_data like '%INDEX%'
/

This is what we find in OTHER_XML, and it is faster to filter here rather than calling dbms_xplan for each plan:


             SQL_HANDLE                         PLAN_NAME            CREATED    ENA    ACC    FIX          ORIGIN       OPERATION                   OPTIONS    OBJECT_NAME                                                                                                                                                                                                                                                                                                                                                                                                                             OUTLINE_DATA
_______________________ _________________________________ __________________ ______ ______ ______ _______________ _______________ _________________________ ______________ ________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________
SQL_62193752b864a1e8    SQL_PLAN_6469raaw698g854d6b671    17-apr-20 19:37    YES    NO     NO     AUTO-CAPTURE    TABLE ACCESS    BY INDEX ROWID BATCHED    EMP            <outline_data><hint><![CDATA[BATCH_TABLE_ACCESS_BY_ROWID(@"SEL$1" "EMP"@"SEL$1")]]></hint><hint><![CDATA[INDEX_RS_ASC(@"SEL$1" "EMP"@"SEL$1" ("EMP"."ENAME"))]]></hint><hint><![CDATA[OUTLINE_LEAF(@"SEL$1")]]></hint><hint><![CDATA[FIRST_ROWS]]></hint><hint><![CDATA[DB_VERSION('20.1.0')]]></hint><hint><![CDATA[OPTIMIZER_FEATURES_ENABLE('20.1.0')]]></hint><hint><![CDATA[IGNORE_OPTIM_EMBEDDED_HINTS]]></hint></outline_data>

Those SYS.SQLOBJ$ tables are the tables where Oracle stores the queries for the SQL Management Base (SQL Profiles, SQL Plan Baselines, SQL Patches, SQL Quarantine).

If you want to find the SQL_ID from a SQL Plan Baseline, I have a query in a previous post:
https://medium.com/@FranckPachot/oracle-dba-sql-plan-baseline-sql-id-and-plan-hash-value-8ffa811a7c68

This article Find the SQL Plan Baseline for a plan operation first appeared on Blog dbi services.

Documentum – DA fail to load if accessed after D2 because of ESAPI


In some specific cases, you might have faced an issue where Documentum Administrator loads properly but then suddenly stops working, and all you did in between was access D2. I don’t remember seeing that before DA 16.x, but maybe it’s just my memory playing tricks on me! When this happens, you will see the usual pop-up in Documentum Administrator asking you to refresh/reload your screen, but without any details about what’s wrong.

 

Fortunately, the logs of Documentum Administrator are more useful, since they will print something like the following (it’s an old log but the same still applies with the latest DA patches as well):

2019-05-18 11:40:12,680 UTC  WARN [[ACTIVE] ExecuteThread: '2' for queue: 'weblogic.kernel.Default (self-tuning)'] IntrusionDetector - [SECURITY FAILURE Anonymous:null@unknown -> /webtop/IntrusionDetector] Invalid input: context=HTTP cookie name: XSS: WebformTag, type(HTTPCookieName)=^[a-zA-Z0-9\-_]{1,50}$, input=https%3A%2F%2Fweblogic_server_01.domain.com%3A8085%2FD2%2Fx3_portal%2Ftheme
org.owasp.esapi.errors.ValidationException: HTTP cookie name: XSS: WebformTag: Invalid input. Please conform to regex ^[a-zA-Z0-9\-_]{1,50}$ with a maximum length of 1024
        at org.owasp.esapi.reference.validation.StringValidationRule.checkWhitelist(StringValidationRule.java:144)
        at org.owasp.esapi.reference.validation.StringValidationRule.checkWhitelist(StringValidationRule.java:160)
        at org.owasp.esapi.reference.validation.StringValidationRule.getValid(StringValidationRule.java:284)
        at org.owasp.esapi.reference.DefaultValidator.getValidInput(DefaultValidator.java:214)
        at org.owasp.esapi.reference.DefaultValidator.getValidInput(DefaultValidator.java:185)
        at com.documentum.web.security.validators.WDKESAPIValidator.getValidCookieName(WDKESAPIValidator.java:136)
        at com.documentum.web.form.WebformTag.getJsessionidCookieName(WebformTag.java:401)
        at com.documentum.web.form.WebformTag.renderExtnJavaScript(WebformTag.java:326)
        at com.documentum.web.form.WebformTag.doStartTag(WebformTag.java:175)
        at jsp_servlet._wdk._system._errormessage.__errormessage._jsp__tag2(__errormessage.java:466)
        at jsp_servlet._wdk._system._errormessage.__errormessage._jsp__tag1(__errormessage.java:431)
        at jsp_servlet._wdk._system._errormessage.__errormessage._jspService(__errormessage.java:152)
        at weblogic.servlet.jsp.JspBase.service(JspBase.java:35)
        at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:286)
        at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:260)
        at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:137)
        at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:350)
        at weblogic.servlet.internal.ServletStubImpl.onAddToMapException(ServletStubImpl.java:489)
        at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:376)
        at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:247)
        at weblogic.servlet.internal.RequestDispatcherImpl.invokeServlet(RequestDispatcherImpl.java:630)
        at weblogic.servlet.internal.RequestDispatcherImpl.include(RequestDispatcherImpl.java:502)
        at com.documentum.web.form.FormProcessor.dispatchURL(FormProcessor.java:2283)
        at com.documentum.web.formext.component.URLDispatchBridge.dispatch(URLDispatchBridge.java:108)
        at com.documentum.web.formext.component.ComponentDispatcher.mapRequestToComponent(ComponentDispatcher.java:505)
        at com.documentum.web.formext.component.ComponentDispatcher.doGet(ComponentDispatcher.java:354)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
        at com.documentum.web.formext.component.ComponentDispatcher.doService(ComponentDispatcher.java:328)
        at com.documentum.web.formext.component.ComponentDispatcher.serviceAsNonController(ComponentDispatcher.java:166)
        at com.documentum.web.formext.component.ComponentDispatcher.service(ComponentDispatcher.java:147)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
        at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:286)
        at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:260)
        at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:137)
        at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:350)
        at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:25)
        at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
        at com.documentum.web.servlet.ResponseHeaderControlFilter.doFilter(ResponseHeaderControlFilter.java:351)
        at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
        at com.documentum.web.servlet.CompressionFilter.doFilter(CompressionFilter.java:96)
        at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
        at com.documentum.web.env.WDKController.processRequest(WDKController.java:239)
        at com.documentum.web.env.WDKController.doFilter(WDKController.java:184)
        at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
        at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.wrapRun(WebAppServletContext.java:3706)
        at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3672)
        at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:328)
        at weblogic.security.service.SecurityManager.runAsForUserCode(SecurityManager.java:197)
        at weblogic.servlet.provider.WlsSecurityProvider.runAsForUserCode(WlsSecurityProvider.java:203)
        at weblogic.servlet.provider.WlsSubjectHandle.run(WlsSubjectHandle.java:71)
        at weblogic.servlet.internal.WebAppServletContext.doSecuredExecute(WebAppServletContext.java:2443)
        at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2291)
        at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2269)
        at weblogic.servlet.internal.ServletRequestImpl.runInternal(ServletRequestImpl.java:1705)
        at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1665)
        at weblogic.servlet.provider.ContainerSupportProviderImpl$WlsRequestExecutor.run(ContainerSupportProviderImpl.java:272)
        at weblogic.invocation.ComponentInvocationContextManager._runAs(ComponentInvocationContextManager.java:352)
        at weblogic.invocation.ComponentInvocationContextManager.runAs(ComponentInvocationContextManager.java:337)
        at weblogic.work.LivePartitionUtility.doRunWorkUnderContext(LivePartitionUtility.java:57)
        at weblogic.work.PartitionUtility.runWorkUnderContext(PartitionUtility.java:41)
        at weblogic.work.SelfTuningWorkManagerImpl.runWorkUnderContext(SelfTuningWorkManagerImpl.java:652)
        at weblogic.work.ExecuteThread.execute(ExecuteThread.java:420)
        at weblogic.work.ExecuteThread.run(ExecuteThread.java:360)
2019-05-18 11:40:12,681 UTC ERROR [[ACTIVE] ExecuteThread: '2' for queue: 'weblogic.kernel.Default (self-tuning)'] com.documentum.web.common.Trace - Encountered error in error message component jsp
com.documentum.web.security.exceptions.SecurityWrapperRuntimeException: HTTP cookie name: XSS: WebformTag: Invalid input. Please conform to regex ^[a-zA-Z0-9\-_]{1,50}$ with a maximum length of 1024
        at com.documentum.web.security.validators.WDKESAPIValidator.getValidCookieName(WDKESAPIValidator.java:141)
        at com.documentum.web.form.WebformTag.getJsessionidCookieName(WebformTag.java:401)
        at com.documentum.web.form.WebformTag.renderExtnJavaScript(WebformTag.java:326)
        at com.documentum.web.form.WebformTag.doStartTag(WebformTag.java:175)
        at jsp_servlet._wdk._system._errormessage.__errormessage._jsp__tag2(__errormessage.java:466)
        at jsp_servlet._wdk._system._errormessage.__errormessage._jsp__tag1(__errormessage.java:431)
        at jsp_servlet._wdk._system._errormessage.__errormessage._jspService(__errormessage.java:152)
        at weblogic.servlet.jsp.JspBase.service(JspBase.java:35)
        at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:286)
        at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:260)
        at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:137)
        at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:350)
        at weblogic.servlet.internal.ServletStubImpl.onAddToMapException(ServletStubImpl.java:489)
        at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:376)
        at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:247)
        at weblogic.servlet.internal.RequestDispatcherImpl.invokeServlet(RequestDispatcherImpl.java:630)
        at weblogic.servlet.internal.RequestDispatcherImpl.include(RequestDispatcherImpl.java:502)
        at com.documentum.web.form.FormProcessor.dispatchURL(FormProcessor.java:2283)
        at com.documentum.web.formext.component.URLDispatchBridge.dispatch(URLDispatchBridge.java:108)
        at com.documentum.web.formext.component.ComponentDispatcher.mapRequestToComponent(ComponentDispatcher.java:505)
        at com.documentum.web.formext.component.ComponentDispatcher.doGet(ComponentDispatcher.java:354)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
        at com.documentum.web.formext.component.ComponentDispatcher.doService(ComponentDispatcher.java:328)
        at com.documentum.web.formext.component.ComponentDispatcher.serviceAsNonController(ComponentDispatcher.java:166)
        at com.documentum.web.formext.component.ComponentDispatcher.service(ComponentDispatcher.java:147)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
        at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:286)
        at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:260)
        at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:137)
        at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:350)
        at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:25)
        at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
        at com.documentum.web.servlet.ResponseHeaderControlFilter.doFilter(ResponseHeaderControlFilter.java:351)
        at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
        at com.documentum.web.servlet.CompressionFilter.doFilter(CompressionFilter.java:96)
        at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
        at com.documentum.web.env.WDKController.processRequest(WDKController.java:239)
        at com.documentum.web.env.WDKController.doFilter(WDKController.java:184)
        at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:78)
        at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.wrapRun(WebAppServletContext.java:3706)
        at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3672)
        at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:328)
        at weblogic.security.service.SecurityManager.runAsForUserCode(SecurityManager.java:197)
        at weblogic.servlet.provider.WlsSecurityProvider.runAsForUserCode(WlsSecurityProvider.java:203)
        at weblogic.servlet.provider.WlsSubjectHandle.run(WlsSubjectHandle.java:71)
        at weblogic.servlet.internal.WebAppServletContext.doSecuredExecute(WebAppServletContext.java:2443)
        at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2291)
        at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2269)
        at weblogic.servlet.internal.ServletRequestImpl.runInternal(ServletRequestImpl.java:1705)
        at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1665)
        at weblogic.servlet.provider.ContainerSupportProviderImpl$WlsRequestExecutor.run(ContainerSupportProviderImpl.java:272)
        at weblogic.invocation.ComponentInvocationContextManager._runAs(ComponentInvocationContextManager.java:352)
        at weblogic.invocation.ComponentInvocationContextManager.runAs(ComponentInvocationContextManager.java:337)
        at weblogic.work.LivePartitionUtility.doRunWorkUnderContext(LivePartitionUtility.java:57)
        at weblogic.work.PartitionUtility.runWorkUnderContext(PartitionUtility.java:41)
        at weblogic.work.SelfTuningWorkManagerImpl.runWorkUnderContext(SelfTuningWorkManagerImpl.java:652)
        at weblogic.work.ExecuteThread.execute(ExecuteThread.java:420)
        at weblogic.work.ExecuteThread.run(ExecuteThread.java:360)

 

So what’s that? Well, the logs are pretty much self-explanatory but, in short, the issue is a mismatch on the ESAPI layer. For several years, Documentum applications (DA, D2, D2-Config, D2-REST, and so on) have all been using the OWASP Enterprise Security API (aka ESAPI). This is an open source library that can be used to apply/enforce security controls on web applications. As you can see above, the issue in this case is that there is a cookie name that doesn’t match the expected regex (regular expression) defined in the ESAPI.properties file of Documentum Administrator:

[weblogic@weblogic_server_01 ~]$ cd $APPLICATIONS
[weblogic@weblogic_server_01 apps]$ 
[weblogic@weblogic_server_01 apps]$ grep "CookieName" da/WEB-INF/classes/ESAPI.properties
Validator.HTTPCookieName=^[a-zA-Z0-9\\-_]{1,50}$
[weblogic@weblogic_server_01 apps]$

 

When accessing DA in a new/clean browser window (e.g. in private mode), DA will set up its own stuff and you will have the following cookies:

Cookie: appname=da;
        wdk_sess_cookie_0=eJwFGztVFUQJkM4lg1NzQysNTQzBJlpYZuCrQEA00NLck1hcxTMoNnKMLI0TexNUwNjMTG39asqdHK0CCs3Lg3NcXXNyC/1Cq4odTHKdAq1tMhyjHdN8vTc1xTK0MD1DUNTS1NcvdSzRDUDAnrIeYA..;
        JSESSIONID_da=Nxs2AYuOOuD2iBZqBA8Vw3UuuUlEjA_EbESx98AthouJ-elAeyWP!-1595571209

 

Then, if you access any application like D2-Config, D2-REST, D2-Smartview or a WebLogic Admin Console for example, each of these applications will add its own cookies. Cookies are usually removed when you leave the application because they aren’t needed anymore, except for the session ones (JSESSIONID). So, after accessing D2-Config and going back to DA, you should see something like this:

Cookie: appname=da;
        wdk_sess_cookie_0=eJwFGztVFUQJkM4lg1NzQysNTQzBJlpYZuCrQEA00NLck1hcxTMoNnKMLI0TexNUwNjMTG39asqdHK0CCs3Lg3NcXXNyC/1Cq4odTHKdAq1tMhyjHdN8vTc1xTK0MD1DUNTS1NcvdSzRDUDAnrIeYA..;
        JSESSIONID_da=Nxs2AYuOOuD2iBZqBA8Vw3UuuUlEjA_EbESx98AthouJ-elAeyWP!-1595571209;
        JSESSIONID_D2Config=RR5rnoSBFAAHOR342UkkiRV2GtCqZ9TyYbwrsDTD9K058yON4SAj!491149461

 

Going back to DA after that doesn’t show any issue because the application-specific cookies aren’t there anymore. The problem only occurs with D2. The reason is that D2 internally uses the Sencha GXT toolkit and a specific cookie is set up by it. This cookie isn’t removed when you leave the application and, since its name doesn’t follow the restriction of the DA ESAPI, DA isn’t working anymore. Loading D2 and then DA afterwards will result in the following cookies:

Cookie: appname=da;
        wdk_sess_cookie_0=eJwFGztVFUQJkM4lg1NzQysNTQzBJlpYZuCrQEA00NLck1hcxTMoNnKMLI0TexNUwNjMTG39asqdHK0CCs3Lg3NcXXNyC/1Cq4odTHKdAq1tMhyjHdN8vTc1xTK0MD1DUNTS1NcvdSzRDUDAnrIeYA..;
        JSESSIONID_da=Nxs2AYuOOuD2iBZqBA8Vw3UuuUlEjA_EbESx98AthouJ-elAeyWP!-1595571209;
        JSESSIONID_D2Config=RR5rnoSBFAAHOR342UkkiRV2GtCqZ9TyYbwrsDTD9K058yON4SAj!491149461
        JSESSIONID_D2=SxeOGoLdAWHtfOhs2D6g-525FcVyUQJFIqUfsJJx5Au6-yjFop3P!78206796;
        https%3A%2F%2Fweblogic_server_01.domain.com%3A8085%2FD2%2Fx3_portal%2Ftheme=%7B%22state%22%3A%7B%22id%22%3A%22s%3Aslate%22%2C%20%22file%22%3A%22s%3Aresources%2Fthemes%2Fslate%2Fcss%2Fxtheme-x3.css%22%7D%7D

 

You can see above the same cookie name as in the DA logs: “https%3A%2F%2Fweblogic_server_01.domain.com%3A8085%2FD2%2Fx3_portal%2Ftheme“. That’s a strange name for a cookie (it’s the encoded URL of the D2 theme) but that’s how it is. Because this cookie isn’t removed when you leave D2 and because DA restricts the length and allowed characters in cookie names, you end up with this issue. A few months ago, we opened the OpenText SR#4366426 to ask whether they could fix that, but the feedback we got so far is that it’s not possible because it’s coming from the Sencha GXT and they apparently have no control over it. Therefore, the only solution I can see is to update the ESAPI regex to allow this cookie. It’s not really a good idea to relax the ESAPI regex but, unfortunately, I don’t see another solution/workaround… Here is an example of how it could potentially be done:

[weblogic@weblogic_server_01 apps]$ grep "CookieName" da/WEB-INF/classes/ESAPI.properties
Validator.HTTPCookieName=^[a-zA-Z0-9\\-_]{1,50}$
[weblogic@weblogic_server_01 apps]$ 
[weblogic@weblogic_server_01 apps]$ sed -i 's,^Validator.HTTPCookieName=\^.*\$,Validator.HTTPCookieName=\^\[a-zA-Z0-9%._:\\\\/\\\\-\]\{1\,100\}\$,' da/WEB-INF/classes/ESAPI.properties
[weblogic@weblogic_server_01 apps]$ 
[weblogic@weblogic_server_01 apps]$ grep "CookieName" da/WEB-INF/classes/ESAPI.properties
Validator.HTTPCookieName=^[a-zA-Z0-9%._:\\/\\-]{1,100}$
[weblogic@weblogic_server_01 apps]$

 

Basically, what I added above allows the additional characters that are part of the cookie name/URL (%, ., : and /) and I also increased the maximum length, because by default it is only 50 characters while the URL is longer. With this new definition, redeploy/update DA and the issue should be gone.
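
A quick way to double-check the new pattern before redeploying is to test the problematic cookie name against both regexes from the shell. This is only a minimal sketch, assuming GNU grep with PCRE support (-P) is available on the WebLogic host:

# Cookie name taken from the DA error above (encoded URL of the D2 theme)
cookie='https%3A%2F%2Fweblogic_server_01.domain.com%3A8085%2FD2%2Fx3_portal%2Ftheme'
# Old DA pattern: rejected because of the '%' / '.' characters and the 50 characters limit
echo "$cookie" | grep -qP '^[a-zA-Z0-9_-]{1,50}$' && echo "old regex: accepted" || echo "old regex: rejected"
# New pattern: additional characters allowed and up to 100 characters
echo "$cookie" | grep -qP '^[a-zA-Z0-9%._:/-]{1,100}$' && echo "new regex: accepted" || echo "new regex: rejected"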

 

What I still don’t understand is why I can’t see the same issue for D2 itself or with an older version of DA (7.3 for example)… Both D2 and DA use the ESAPI but I can only see DA 16.x fail; older versions as well as D2 are working fine without error while their regex is even more strict: “^[a-zA-Z0-9\\-_]{1,32}$“. This is the same pattern as for DA 16.x but it limits the name to 32 characters max, which is even less. There must have been a bug in earlier versions that prevented this property from actually doing anything. It might have been a change on the ESAPI layer as well, but I assume both the DA and D2 ESAPI are in sync? I guess I will try to do some research on that.

 

Normally the issue shouldn’t happen anymore with the 20.4 release because D2 will force the Sencha GXT to use web storage instead of cookies. This should also be applied to the next patches of earlier D2 versions (e.g. 16.5.1 P06, planned for end of April 2020).

 

The post Documentum – DA fail to load if accessed after D2 because of ESAPI appeared first on Blog dbi services.

“Segment Maintenance Online Compress” feature usage


By Franck Pachot

.
On Twitter, Ludovico Caldara mentioned the #licensing #pitfall when using the Online Partition Move with Basic Compression. Those two features are available in Enterprise Edition without an additional option, but when used together (moving online a compressed partition) they enable the usage of the Advanced Compression Option:


And there was a question about the detection of this feature. I’ll show how it is detected. Basically, the ALTER TABLE MOVE PARTITION sets the “fragment was compressed online” flag in TABPART$ or TABSUBPART$ when the segment was compressed during the online move.

I create a partitioned table:


SQL> create table SCOTT.DEMO(id,x) partition by hash(id) partitions 2 as select rownum,lpad('x',100,'x') from xmltable('1 to 1000');

Table created.

I set basic compression, which does not compress anything yet but only applies to future direct loads:


SQL> alter table SCOTT.DEMO modify partition for (42) compress;

Table altered.

I move without the ‘online’ keyword:


SQL> alter table SCOTT.DEMO move partition for (42);

Table altered.

This does not enable the online compression flag (which is 0x2000000):


SQL> select obj#,dataobj#,part#,flags,to_char(flags,'FMXXXXXXXXXXXXX') from SYS.TABPART$ where obj# in ( select object_id from dba_objects where owner='SCOTT' and object_name='DEMO');

      OBJ#   DATAOBJ#      PART#      FLAGS TO_CHAR(FLAGS,
---------- ---------- ---------- ---------- --------------
     75608      75608          1          0 0
     75609      75610          2         18 12

The 0x12 is about the presence of statistics (the MOVE does online statistics gathering in 12c).
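
If you prefer to test a specific bit of the FLAGS column programmatically instead of reading the hexadecimal value, BITAND does the job. Here is a minimal sketch, run as a DBA on this test database, checking the 0x2000000 “fragment was compressed online” bit we are interested in:

sqlplus -s / as sysdba <<'SQL'
-- 0x2000000 = 33554432 = "fragment was compressed online"
select o.subobject_name, p.flags,
       case when bitand(p.flags, to_number('2000000','xxxxxxx')) != 0
            then 'compressed online' else 'not flagged' end as online_compress
from   sys.tabpart$ p
join   dba_objects  o on o.object_id = p.obj#
where  o.owner = 'SCOTT' and o.object_name = 'DEMO';
SQL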


SQL> exec sys.dbms_feature_usage_internal.exec_db_usage_sampling(sysdate)

PL/SQL procedure successfully completed.

SQL> select name,detected_usages,currently_used,feature_info from dba_feature_usage_statistics where name='Segment Maintenance Online Compress';

NAME                                     DETECTED_USAGES CURRE FEATURE_INFO
---------------------------------------- --------------- ----- --------------------------------------------------------------------------------
Segment Maintenance Online Compress                    0 FALSE

Online Move of compressed partition

Now moving online this compressed segment:


SQL> alter table SCOTT.DEMO move partition for (42) online;

Table altered.

This has enabled the 0x2000000 flag:


SQL> select obj#,dataobj#,part#,flags,to_char(flags,'FMXXXXXXXXXXXXX') from SYS.TABPART$ where obj# in ( select object_id from dba_objects where owner='SCOTT' and object_name='DEMO');

      OBJ#   DATAOBJ#      PART#      FLAGS TO_CHAR(FLAGS,
---------- ---------- ---------- ---------- --------------
     75608      75608          1          0 0
     75611      75611          2   33554450 2000012

And, of course, is logged by the feature usage detection:


SQL> exec sys.dbms_feature_usage_internal.exec_db_usage_sampling(sysdate)

PL/SQL procedure successfully completed.

SQL> select name,detected_usages,currently_used,feature_info from dba_feature_usage_statistics where name='Segment Maintenance Online Compress';

NAME                                     DETECTED_USAGES CURRE FEATURE_INFO
---------------------------------------- --------------- ----- --------------------------------------------------------------------------------
Segment Maintenance Online Compress                    1 FALSE Partition Obj# list: 75611:

The FEATURE_INFO mentions the object_id for the concerned partitions (for the last detection only).
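
If you want to map this object number back to its owner, table and partition name, a simple lookup in DBA_OBJECTS is enough. A small sketch using the obj# reported above (note that the obj# changes when the partition is moved again, so it may not exist anymore at the time you check):

sqlplus -s / as sysdba <<'SQL'
select owner, object_name, subobject_name, object_type
from   dba_objects
where  object_id = 75611;
SQL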

No Compress

The only way I know to disable this flag is to uncompress the partition, and this can be done online:


SQL> alter table SCOTT.DEMO move partition for (42) nocompress online;

Table altered.

SQL> select obj#,dataobj#,part#,flags,to_char(flags,'FMXXXXXXXXXXXXX') from SYS.TABPART$ where obj# in ( select object_id from dba_objects where owner='SCOTT' and object_name='DEMO');

      OBJ#   DATAOBJ#      PART#      FLAGS TO_CHAR(FLAGS,
---------- ---------- ---------- ---------- --------------
     75608      75608          1          0 0
     75618      75618          2         18 12

DBMS_REDEFINITION

As a workaround, you can use DBMS_REDEFINITION, which does not trigger the Advanced Compression Option usage. For example, this does not enable any flag:


SYS@CDB$ROOT>
SYS@CDB$ROOT> alter table SCOTT.DEMO rename partition for (24) to PART1;

Table altered.

SYS@CDB$ROOT> create table SCOTT.DEMO_X for exchange with table SCOTT.DEMO;

Table created.

SYS@CDB$ROOT> alter table SCOTT.DEMO_X compress;

Table altered.

SYS@CDB$ROOT> exec dbms_redefinition.start_redef_table(uname=>'SCOTT',orig_table=>'DEMO',int_table=>'DEMO_X',part_name=>'PART1',options_flag=>dbms_redefinition.cons_use_rowid);

PL/SQL procedure successfully completed.

SYS@CDB$ROOT> exec dbms_redefinition.finish_redef_table(uname=>'SCOTT',orig_table=>'DEMO',int_table=>'DEMO_X',part_name=>'PART1');

PL/SQL procedure successfully completed.

SYS@CDB$ROOT> drop table SCOTT.DEMO_X;

Table dropped.

But of course, the difference is that only the blocks that are direct-path inserted into the interim table are compressed, not the online modifications.

Only for partitions?

As far as I know, this is detected only for partitions and subpartitions, i.e. for the online partition move operation that came in 12cR1. Since 12cR2 we can also move a non-partitioned table online and this, as far as I know, is not detected by dba_feature_usage_statistics (a quick way to verify this on a test system is sketched below). But don’t count on it, as this may be considered a bug which may be fixed one day.
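
Here is a minimal sketch to verify that on your own test database (12cR2 or later). The table name DEMO_NP and the compress attribute are only assumptions for the illustration:

sqlplus -s / as sysdba <<'SQL'
-- non-partitioned copy, compressed by the direct-path CTAS
create table SCOTT.DEMO_NP compress as select * from SCOTT.DEMO;
-- online move of a non-partitioned table (12cR2 feature)
alter table SCOTT.DEMO_NP move online;
exec sys.dbms_feature_usage_internal.exec_db_usage_sampling(sysdate)
select name, detected_usages, feature_info
from   dba_feature_usage_statistics
where  name = 'Segment Maintenance Online Compress';
SQL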

The post “Segment Maintenance Online Compress” feature usage appeared first on Blog dbi services.


Documentum – xPlore installer refuses to start because of ‘Multiple launches’


While working with xPlore, you might have faced a message saying that an xPlore installer refused to start because it is not allowed to run multiple instances at the same time. This can happen while running any installer: binaries, patches, dsearch, and so on. However, when checking the processes, there is only one installer running and that’s the one you just started.

 

That’s actually a very old and very persistent issue since it has been there at least since xPlore 1.2 and it’s still the same for the latest version. It is already documented in the OpenText KB7867004. In the KB, only xPlore 1.2 to 1.4 are mentioned and only for an xPlore patch installation. The truth is that it’s still true even for the most recent 16.x version and it’s not limited to patches: you might hit it with all installers as I mentioned before, because they are all built using the same software. I wanted to talk about this issue in this blog because it’s probably something that might happen more and more with the DevOps approach. With ordinary Virtual Machines, you might face it once but then it’s fixed and you won’t see it again. In the DevOps world, if it’s not done properly, it’s something that might repeat a few times until it’s really fixed at the root.

 

So what’s this issue? As mentioned, it actually comes from the installer itself. You probably know that OpenText inherited the “InstallAnywhere” installers from Dell, and EMC before that. The issue usually comes from wrong permissions on the home folder. InstallAnywhere creates hidden files/folders under the home folder (e.g. ~/.com.zerog.registry.xml) which register all the components installed. If it cannot create them, the installer fails to start with the message I mentioned above. This is something that you might be facing while using containers for several reasons (a quick pre-check is sketched after the list below). Here are some potential examples:

  • the home folder of your installation owner (xplore or whatever the name) was left with root permissions
  • xPlore binaries were installed properly in an image but then at the runtime, it wasn’t mounted properly (if it’s a volume)
  • something changed the permissions
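
As a preventive measure, a simple ownership/writability check on the home folder before launching any InstallAnywhere-based installer can save some time. This is only a minimal sketch:

# Pre-check: the installation owner must own and be able to write to its home folder
if [ -O "$HOME" ] && [ -w "$HOME" ]; then
    echo "OK: $HOME is owned and writable by $(whoami)"
else
    echo "KO: fix the ownership/permissions on $HOME before starting the installer"
    ls -ld "$HOME"
fi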

 

To replicate the issue quickly, you can just start the Dsearch installer for example. Here I’m using a silent properties file because there is no GUI:

[xplore@ds-0 ~]$ $XPLORE_HOME/setup/dsearch/dsearchConfig.bin LAX_VM "$JAVA_HOME/bin/java" -f $XPLORE_HOME/setup/FT_Dsearch.properties
Preparing to install
Extracting the installation resources from the installer archive...
Configuring the installer for this system's environment...

Launching installer...

Multiple launches of this installer is not allowed. It will now quit.
[xplore@ds-0 ~]$
[xplore@ds-0 ~]$ ls -ld $HOME
drwx------ 1 root root 4096 Mar 21 15:29 /home/xplore
[xplore@ds-0 ~]$
[xplore@ds-0 ~]$ ls -l $HOME
ls: cannot open directory /home/xplore/: Permission denied
[xplore@ds-0 ~]$

 

If you see the above behavior, you just need to correct the permissions on the home folder. If this is inside a container (Docker or other) or inside a Kubernetes pod for example, you will need to find where exactly the issue is coming from and fix it there, but for that I cannot help you because it depends on the infrastructure you put together. To validate the solution quickly:

[root@ds-0 /]# chown -R xplore:xplore /home/xplore
[root@ds-0 /]#
[root@ds-0 /]# su - xplore
[xplore@ds-0 ~]$
[xplore@ds-0 ~]$ $XPLORE_HOME/setup/dsearch/dsearchConfig.bin LAX_VM "$JAVA_HOME/bin/java" -f $XPLORE_HOME/setup/FT_Dsearch.properties
Preparing to install
Extracting the installation resources from the installer archive...
Configuring the installer for this system's environment...

Launching installer...

[xplore@ds-0 ~]$
[xplore@ds-0 ~]$ grep "\.version" $XPLORE_HOME/installinfo/instances/dsearch/PrimaryDsearch.properties
ess.version=16.4.0000
appserver.version=9.0.1
[xplore@ds-0 ~]$

 

The error message could be clearer, mentioning that there is a permission issue with the home folder but, at least, it’s not failing silently. I guess that’s a good point.

 

The post Documentum – xPlore installer refuses to start because of ‘Multiple launches’ appeared first on Blog dbi services.

Oracle Support: Easy export of SQL Testcase


By Franck Pachot

.
Many people complain about the quality of support, and there are some reasons behind that. But before complaining, be sure that you provide all the information, because one reason for inefficient Service Request handling is the many incomplete tickets the support engineers have to manage. Oracle provides the tools to make this easy for you and for them. Here I’ll show how easy it is to provide a full testcase with DBMS_SQLDIAG. I’m not talking about hours spent identifying the tables involved, the statistics, the parameters,… All that can be done autonomously with a single command as soon as you have the SQL text or SQL_ID.

In my case, I’ve reproduced my problem (very long parse time) with the following:


set linesize 120 pagesize 1000
variable sql clob
exec select sql_text into :sql from dba_hist_sqltext where sql_id='5jyqgq4mmc2jv';
alter session set optimizer_features_enable='18.1.0';
alter session set tracefile_identifier='5jyqgq4mmc2jv';
select value from v$diag_info where name='Default Trace File';
alter session set events 'trace [SQL_Compiler.*]';
exec execute immediate 'explain plan for '||:sql;
alter session set events 'trace [SQL_Compiler.*] off';

I was too lazy to copy the big SQL statement, so I got it directly from AWR. Because it is a parsing problem, I just run an EXPLAIN PLAN. I set OPTIMIZER_FEATURES_ENABLE to my current version because the first workaround in production was to keep the previous version. I ran a “SQL Compiler” trace, aka event 10053, in order to get the timing information (which I described in a previous blog post). But that’s not the topic. Rather than providing those huge traces to Oracle Support, it is better to give an easy-to-reproduce test case.

So this is the only thing I added to get it:


variable c clob
exec DBMS_SQLDIAG.EXPORT_SQL_TESTCASE(directory=>'DATA_PUMP_DIR',sql_text=>:sql,testcase=>:c);

Yes, that’s all. This generates the following files in my DATA_PUMP_DIR directory:

There’s a README, there’s a dump of the objects (I used the default, which exports only metadata and statistics), there’s the statement, the system statistics,… You can play with this or simply import the whole thing with DBMS_SQLDIAG.
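
To locate and package those files for the SR, you can query the OS path behind the directory object and archive the generated oratcb* files. A minimal sketch, assuming the standard DATA_PUMP_DIR directory object used above (the tar file name is just illustrative):

# OS path behind the DATA_PUMP_DIR directory object
DPDIR=$(sqlplus -s / as sysdba <<'SQL'
set heading off feedback off pages 0
select directory_path from dba_directories where directory_name = 'DATA_PUMP_DIR';
SQL
)
DPDIR=$(echo $DPDIR)   # trim surrounding whitespace
ls -l "$DPDIR"/oratcb*
# Archive the testcase files to attach them to the Service Request
( cd "$DPDIR" && tar -czf /var/tmp/oratcb_5jyqgq4mmc2jv.tar.gz oratcb* )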

I just tar’ed this and copied it to another environment (I provisioned a 20c database in the Oracle Cloud for that) and ran the following:


grant DBA to DEMO identified by demo container=current;
connect demo/demo@&_connect_identifier
create or replace directory VARTMPDPDUMP as '/var/tmp/dpdump';
variable c clob
exec DBMS_SQLDIAG.IMPORT_SQL_TESTCASE(directory=>'VARTMPDPDUMP',filename=>'oratcb_0_5jyqgq4mmc2jv_1_018BBEEE0001main.xml');
@ oratcb_0_5jyqgq4mmc2jv_1_01A20CE80001xpls.sql

And that’s all. This imported all the objects and statistics to exactly reproduce my issue. Now that it reproduces everywhere, I can open an SR with a short description and the SQL Testcase files (5 MB here). It is not always easy to reproduce a problem, but if you can reproduce it in your environment, there’s a good chance that you can quickly export what is required to reproduce it in another one.

SQL Testcase Builder is available in any edition. You can use it yourself to reproduce in pre-production a production issue or to provide a testcase to the Oracle Support. Or to send to your preferred troubleshooting consultant: we are doing more and more remote expertise, and reproducing an issue in-house is the most efficient way to analyze a problem.

The post Oracle Support: Easy export of SQL Testcase appeared first on Blog dbi services.

티베로 – The most compatible alternative to Oracle Database


By Franck Pachot

.
Do you remember the time when we were able to buy IBM PC clones, cheaper than the IBM PC but fully compatible? I got the same impression when testing Tibero, the TmaxSoft relational database compatible with the Oracle Database. Many Oracle customers are looking for alternatives to the Oracle Database because of unfriendly commercial and licensing practices, like forcing the usage of expensive options or not counting vCPUs for licensing. Up to now, I was not really impressed by the databases that claim Oracle compatibility. You simply cannot migrate an application from Oracle to another RDBMS without having to change a lot of code. This makes it nearly impossible to move a legacy application where the business logic has been implemented over the years in the database model and stored procedures. Who will take the risk to guarantee the same behavior even after a very expensive UAT? Finally, with less effort, you may optimize your Oracle licenses and stay with the same database software.

Tibero

However, in Asia, some companies have another reason to move out of Oracle. Not because of Oracle itself, but because it is an American company. This is especially true for public government organizations, for which storing data and running critical applications should not depend on a US company. And once they have built their alternative, they may sell it worldwide. In this post I’m looking at Tibero, a database created by a South Korean company, TmaxSoft, with an incredible level of compatibility with Oracle.

I’ll install and run a Tibero database to get an idea about what compatibility means.

Demo trial

After creating a login account on the TmaxSoft TechNet, I’ve requested a demo license on: https://technet.tmaxsoft.com/en/front/common/demoPopup.do

You need to know the host where you will run this as you have to provide the result of `uname -n` to get the license key. That’s a 30-day trial (longer if you don’t restart the instance) that can run everything on this host. I’ve used an Oracle Compute instance running OEL7 for this test. I’ve downloaded the Tibero 6 software installation: tibero6-bin-FS07_CS_1902-linux64–166256-opt.tar.gz from TmaxSoft TechNet > Downloads > Database > Tibero > Tibero 6

For the installation, I followed the instructions from https://store.dimensigon.com/deploy-tibero-database/ that I do not reproduce here. Basically, you need some packages, some sysctl.conf settings for shared memory, some limits.conf settings, a user in the ‘dba’ group,… Very similar to the Oracle prerequisites. Then untar the software: this installs a $TB_HOME of about 1GB.

Database creation

The first difference with Oracle is that you cannot start an instance without a valid license file:


$ $TB_HOME/bin/tb_create_db.sh
  ********************************************************************
* ERROR: Can't open the license file!!
* (1) Check the license file - /home/tibero/tibero6/license/license.xml
  ********************************************************************

I have my trial license file and move it to $TB_HOME/license/license.xml

The creation of the database is ready and there’s a simple tb_create_db.sh for that. First stage is starting the instance (NOMOUNT mode):


$ $TB_HOME/bin/tb_create_db.sh
Listener port = 8629
Tibero 6
TmaxData Corporation Copyright (c) 2008-. All rights reserved.
Tibero instance started up (NOMOUNT mode).

Some information about the sizes and directories is displayed:


+----------------------------- size -------------------------------+
 system size = 100M (next 10M)
 syssub size = 100M (next 10M)
   undo size = 200M (next 10M)
   temp size = 100M (next 10M)
    usr size = 100M (next 10M)
    log size = 50M
+--------------------------- directory ----------------------------+
 system directory = /home/tibero/tibero6/database/t6a
 syssub directory = /home/tibero/tibero6/database/t6a
   undo directory = /home/tibero/tibero6/database/t6a
   temp directory = /home/tibero/tibero6/database/t6a
    log directory = /home/tibero/tibero6/database/t6a
    usr directory = /home/tibero/tibero6/database/t6a

And the creation goes on; it really looks like an Oracle Database:


+========================== newmount sql ==========================+
 create database "t6a"
  user sys identified by tibero
  maxinstances 8
  maxdatafiles 100
  character set MSWIN949
  national character set UTF16
  logfile
  group 1 ('/home/tibero/tibero6/database/t6a/log001.log') size 50M,
  group 2 ('/home/tibero/tibero6/database/t6a/log002.log') size 50M,
  group 3 ('/home/tibero/tibero6/database/t6a/log003.log') size 50M
    maxloggroups 255
    maxlogmembers 8
    noarchivelog
  datafile '/home/tibero/tibero6/database/t6a/system001.dtf' 
    size 100M autoextend on next 10M maxsize unlimited
  SYSSUB 
  datafile '/home/tibero/tibero6/database/t6a/syssub001.dtf' 
    size 10M autoextend on next 10M maxsize unlimited
  default temporary tablespace TEMP
    tempfile '/home/tibero/tibero6/database/t6a/temp001.dtf'
    size 100M autoextend on next 10M maxsize unlimited
    extent management local autoallocate
  undo tablespace UNDO
    datafile '/home/tibero/tibero6/database/t6a/undo001.dtf'
    size 200M
    autoextend on next 10M maxsize unlimited
    extent management local autoallocate
  default tablespace USR
    datafile  '/home/tibero/tibero6/database/t6a/usr001.dtf'
    size 100M autoextend on next 10M maxsize unlimited
    extent management local autoallocate;
+==================================================================+

Database created.
Listener port = 8629
Tibero 6
TmaxData Corporation Copyright (c) 2008-. All rights reserved.
Tibero instance started up (NORMAL mode).

Then the dictionary is loaded (equivalent to catalog/catproc):


/home/tibero/tibero6/bin/tbsvr
Dropping agent table...
Creating text packages table ...
Creating the role DBA...
Creating system users & roles...
Creating example users...
Creating virtual tables(1)...
Creating virtual tables(2)...
Granting public access to _VT_DUAL...
Creating the system generated sequences...
Creating internal dynamic performance views...
Creating outline table...
Creating system tables related to dbms_job...
Creating system tables related to dbms_lock...
Creating system tables related to scheduler...
Creating system tables related to server_alert...
Creating system tables related to tpm...
Creating system tables related to tsn and timestamp...
Creating system tables related to rsrc...
Creating system tables related to workspacemanager...
Creating system tables related to statistics...
Creating system tables related to mview...
Creating system package specifications:
    Running /home/tibero/tibero6/scripts/pkg/pkg_standard.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_standard_extension.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_clobxmlinterface.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_udt_meta.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_seaf.sql...
    Running /home/tibero/tibero6/scripts/pkg/anydata.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_standard.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_db2_standard.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_application_info.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_aq.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_aq_utl.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_aqadm.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_assert.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_crypto.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_db2_translator.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_db_version.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_ddl.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_debug.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_debug_jdwp.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_errlog.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_expression.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_fga.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_flashback.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_geom.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_java.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_job.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_lob.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_lock.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_metadata.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_mssql_translator.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_mview.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_mview_refresh_util.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_mview_util.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_obfuscation.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_output.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_pipe.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_random.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_redefinition.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_redefinition_stats.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_repair.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_result_cache.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_rls.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_rowid.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_rsrc.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_scheduler.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_session.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_space.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_space_admin.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_sph.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_sql.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_sql_analyze.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_sql_translator.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_sqltune.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_stats.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_stats_util.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_system.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_transaction.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_types.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_utility.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_utl_tb.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_verify.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_xmldom.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_xmlgen.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_xmlquery.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_xplan.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dg_cipher.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_htf.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_htp.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_psm_sql_result_cache.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_sys_util.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_tb_utility.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_text.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_tudiconst.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_utl_encode.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_utl_file.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_utl_tcp.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_utl_http.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_utl_url.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_utl_i18n.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_utl_match.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_utl_raw.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_utl_smtp.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_utl_str.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_utl_compress.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_text_japanese_lexer.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_tpm.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_utl_recomp.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_monitor.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_server_alert.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_ctx_ddl.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_odci.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_utl_ref.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_owa_util.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_alert.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_client_internal.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_xslprocessor.sql...
    Running /home/tibero/tibero6/scripts/pkg/uda_wm_concat.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_diutil.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_xmlsave.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_xmlparser.sql...
Creating auxiliary tables used in static views...
Creating system tables related to profile...
Creating internal system tables...
Check TPR status..
Stop TPR
Dropping tables used in TPR...
Creating auxiliary tables used in TPR...
Creating static views...
Creating static view descriptions...
Creating objects for sph:
    Running /home/tibero/tibero6/scripts/iparam_desc_gen.sql...
Creating dynamic performance views...
Creating dynamic performance view descriptions...
Creating package bodies:
    Running /home/tibero/tibero6/scripts/pkg/_pkg_db2_standard.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_aq.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_aq_utl.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_aqadm.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_assert.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_db2_translator.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_errlog.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_metadata.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_mssql_translator.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_mview.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_mview_refresh_util.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_mview_util.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_redefinition_stats.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_rsrc.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_scheduler.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_session.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_sph.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_sql_analyze.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_sql_translator.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_sqltune.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_stats.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_stats_util.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_utility.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_utl_tb.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_verify.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_workspacemanager.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_xmlgen.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_xplan.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dg_cipher.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_htf.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_htp.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_text.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_utl_http.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_utl_url.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_utl_i18n.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_utl_smtp.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_text_japanese_lexer.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_tpm.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_utl_recomp.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_server_alert.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_xslprocessor.tbw...
Running /home/tibero/tibero6/scripts/pkg/_uda_wm_concat.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_xmlparser.tbw...
Creating public synonyms for system packages...
Creating remaining public synonyms for system packages...
Registering dbms_stats job to Job Scheduler...
Creating audit event pacakge...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_audit_event.tbw...
Creating packages for TPR...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_tpr.sql...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_tpr.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_apm.tbw...
Start TPR
Create tudi interface
    Running /home/tibero/tibero6/scripts/odci.sql...
Creating spatial meta tables and views ...
Creating internal system jobs...
Creating Japanese Lexer epa source ...
Creating internal system notice queue ...
Creating sql translator profiles ...
Creating agent table...
Done.
For details, check /home/tibero/tibero6/instance/t6a/log/system_init.log.

From this log, you can already imagine the compatibility of the PL/SQL DBMS_% packages with the Oracle Database: they are all there.
All seems good, and I have a TB_HOME and TB_SID to identify the instance:


**************************************************
* Tibero Database 't6a' is created successfully on Fri Dec  6 17:23:57 GMT 2019.
*     Tibero home directory ($TB_HOME) =
*         /home/tibero/tibero6
*     Tibero service ID ($TB_SID) = t6a
*     Tibero binary path =
*         /home/tibero/tibero6/bin:/home/tibero/tibero6/client/bin
*     Initialization parameter file =
*         /home/tibero/tibero6/config/t6a.tip
*
* Make sure that you always set up environment variables $TB_HOME and
* $TB_SID properly before you run Tibero.
**************************************************

This looks very similar to Oracle Database and here is my ‘init.ora’ equivalent:

I should add _USE_HUGE_PAGE=Y there as I don’t like to see 3GB allocated with 4k pages.
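
Here is a minimal sketch of how this could be added, assuming the parameter simply goes into the tip file shown above ($TB_HOME/config/$TB_SID.tip) and that the instance is restarted afterwards for it to take effect:

# Append the huge page parameter to the Tibero initialization parameter file (tip)
echo "_USE_HUGE_PAGE=Y" >> $TB_HOME/config/$TB_SID.tip
grep HUGE_PAGE $TB_HOME/config/$TB_SID.tip
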
Looking at the instance processes shows many background Worker Processes that have several threads:

Not going into the details there, but DBWR does more than the Oracle Database Writer as it runs threads for writing to the datafiles as well as writing to the redo logs. RCWP is the recovery process (also used by standby databases). PEWP runs the parallel query threads. FGWP runs the foreground (session) threads.

Tibero is similar to Oracle but not equal. Tibero was developed in 2003 with the goal of maximum compatibility with Oracle: SQL, PL/SQL and MVCC compatibility for easy application migration as well as architecture compatibility for easier adoption by DBAs. But it was also built from scratch for modern operating systems and runs processes and threads. I installed it on Linux x86-64 but Tibero is also available for AIX, HP-UX, Solaris and Windows.

Connect

I can connect with the SYS user by attaching to the shared memory when TB_HOME and TB_SID are set to my local instance:


SQL> Disconnected.
[SID=t6a u@h:w]$ TB_HOME=~/tibero6 TB_ID=t6a tbsql sys/tibero

tbSQL 6

TmaxData Corporation Copyright (c) 2008-. All rights reserved.

Connected to Tibero.

I can also connect through the listener (the port was mentioned at database creation):


[SID=t6a u@h:w]$ TB_HOME= TB_ID= tbsql sys/tibero@localhost:8629/t6a

tbSQL 6

TmaxData Corporation Copyright (c) 2008-. All rights reserved.

Connected to Tibero.

Again it is similar to Oracle (like ezconnect or full connection string) but not exactly the same:

Similar but not a clone

The first time I looked at Tibero, I was really surprised how far it goes with the compatibility with the Oracle Database. I’ll probably write more blog posts about it, but even complex PL/SQL packages can run without any change. Then comes the question: is it only an API compatibility or is this software a clone of Oracle? I’ve even heard rumours that some source code must have leaked in order to reach such compatibility. I want to make it clear here: I’m 100% convinced that this database engine was written from scratch, inspired by the Oracle architecture and features, and implementing the same language, dictionary packages and views, but with completely different code and internal design. When we troubleshoot Oracle we are used to seeing the C function stacks in trace dumps. Let’s have a look at the C functions here.

I’ll strace the pread64 call while running a query in order to see the stack behind. I get the PID to trace:


select client_pid,pid,wthr_id,os_thr_id from v$session where sid in (select sid from v$mystat);

The process for my session is: tbsvr_FGWP000 -t NORMAL -SVR_SID t6a and the PID is the Linux PID (OS_THR_ID is the thread).
I run strace (compiled with libunwind to show the call stack):


strace -k -e trace=pread64 -y -p 7075


Here is the first call stack for the first pread64() call:


pread64(49, "\4\0\0\0\2\0\200\0\261]\2\0\0\0\1\0\7\0\0\0\263\0\0\0l\2\0\0\377\377\377\377"..., 8192, 16384) = 8192
 > /usr/lib64/libpthread-2.17.so(__pread_nocancel+0x2a) [0xefc3]
 > /home/tibero/tibero6/bin/tbsvr(read_dev_ne+0x2b2) [0x14d8cd2]
 > /home/tibero/tibero6/bin/tbsvr(read_dev+0x94) [0x14d96e4]
 > /home/tibero/tibero6/bin/tbsvr(buf_read1_internal+0x2f8) [0x14da158]
 > /home/tibero/tibero6/bin/tbsvr(tcbh_read_blks_internal+0x5d8) [0x14ccf98]
 > /home/tibero/tibero6/bin/tbsvr(tcbh_read_blk_internal+0x1d) [0x14cd2dd]
 > /home/tibero/tibero6/bin/tbsvr(tcbuf_pin_read_locked+0x39c) [0x14ec99c]
 > /home/tibero/tibero6/bin/tbsvr(tcbuf_get+0x198a) [0x14f3c9a]
 > /home/tibero/tibero6/bin/tbsvr(ts_alloc_units_internal+0x256) [0x17b0a56]
 > /home/tibero/tibero6/bin/tbsvr(ts_alloc_units_from_df+0x406) [0x17b2396]
 > /home/tibero/tibero6/bin/tbsvr(ts_alloc_ext_internal+0x2ff) [0x17b5b1f]
 > /home/tibero/tibero6/bin/tbsvr(tx_sgmt_create+0x1cb) [0x1752ffb]
 > /home/tibero/tibero6/bin/tbsvr(ddl_create_dsgmt+0xc0) [0x769260]
 > /home/tibero/tibero6/bin/tbsvr(_ddl_ctbl_internal+0x155d) [0x7f86dd]
 > /home/tibero/tibero6/bin/tbsvr(ddl_create_table+0xf9) [0x7f9dd9]
 > /home/tibero/tibero6/bin/tbsvr(ddl_execute+0xf04) [0x44aa54]
 > /home/tibero/tibero6/bin/tbsvr(ddl_process_internal+0xf6a) [0x44ed0a]
 > /home/tibero/tibero6/bin/tbsvr(tbsvr_sql_process+0x4082) [0x1def92]
 > /home/tibero/tibero6/bin/tbsvr(tbsvr_msg_sql_common+0x1518) [0x1ca718]
 > /home/tibero/tibero6/bin/tbsvr(tbsvr_handle_msg_internal+0x2225) [0x1847c5]
 > /home/tibero/tibero6/bin/tbsvr(tbsvr_wthr_request_from_cl_conn+0x70a) [0x187eea]
 > /home/tibero/tibero6/bin/tbsvr(wthr_get_new_cli_con+0xc94) [0x18fea4]
 > /home/tibero/tibero6/bin/tbsvr(thread_main_chk_bitmask+0x18d) [0x1966ed]
 > /home/tibero/tibero6/bin/tbsvr(svr_wthr_main_internal+0x1393) [0x1ab2b3]
 > /home/tibero/tibero6/bin/tbsvr(wthr_init+0x80) [0xa2a5b0]
 > /usr/lib64/libpthread-2.17.so(start_thread+0xc5) [0x7ea5]
 > /usr/lib64/libc-2.17.so(clone+0x6d) [0xfe8cd]

I don’t think there is anything in common with the Oracle software code or layer architecture here, except some well-known terms (segment, extent, buffer get, buffer pin,…).

I also show a data block dump here just to get an idea:


SQL> select dbms_rowid.rowid_absolute_fno(rowid),dbms_rowid.rowid_block_number(rowid) from demo where rownum=1;

DBMS_ROWID.ROWID_ABSOLUTE_FNO(ROWID) DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID)
------------------------------------ ------------------------------------
                                   2                                 2908

SQL> alter system dump datafile 2 block 2908;

The dump is in /home/tibero/tibero6/instance/t6a/dump/tracedump/tb_dump_7029_73_31900660.trc:


**Dump start at 2020-04-19 14:54:46
DUMP of BLOCK file #2 block #2908

**Dump start at 2020-04-19 14:54:46
data block Dump[dba=02_00002908(8391516),tsn=0000.0067cf33,type=13,seqno =1]
--------------------------------------------------------------
 sgmt_id=3220  cleanout_tsn=0000.00000000  btxcnt=2
 l1dba=02_00002903(8391511), offset_in_l1=5
 btx      xid                undo           fl  tsn/credit
 00  0000.00.0000  00_00000000.00000.00000  I  0000.00000000
 01  0000.00.0000  00_00000000.00000.00000  I  0000.00000000
--------------------------------------------------------------
Data block dump:
  dlhdr_size=16  freespace=7792  freepos=7892  symtab_offset=0  rowcnt=4
Row piece dump:
 rp 0 8114:  [74] flag=--H-FL--  itlidx=255    colcnt=11
  col 0: [6]
   0000: 05 55 53 45 52 32                               .USER2
  col 1: [6]
   0000: 05 49 5F 43 46 31                               .I_CF1
  col 2: [1]
   0000: 00                                              .
  col 3: [4]
   0000: 03 C2 9F CA                                     ....
  col 4: [6]
   0000: 05 49 4E 44 45 58                               .INDEX
  col 5: [2]
   0000: 01 80                                           ..
  col 6: [9]
   0000: 08 78 78 01 05 13 29 15 00                      .xx...)..
  col 7: [9]
   0000: 08 78 78 01 05 13 29 15 00                      .xx...)..
  col 8: [20]
   0000: 13 32 30 32 30 2D 30 31 2D 30 35 3A 31 39 3A 34 .2020-01-05:19:4
   0010: 31 3A 32 31                                     1:21
  col 9: [6]
   0000: 05 56 41 4C 49 44                               .VALID
  col 10: [2]
   0000: 01 4E                                           .N

Even if there are obvious differences in the implementation, this really looks similar to an Oracle block format with ITL list in the block header and row pieces with flags.

If you are looking for a compatible alternative to the Oracle Database, you have probably found some databases that try to accept the same SQL and PL/SQL syntax. But this is not sufficient to run an application with minimal changes. Here, with Tibero, I was really surprised to see how closely it copies the Oracle syntax, behavior and features. The dictionary views are similar, with some differences because the implementation is different. Tibero also has an equivalent of ASM and RAC. You can expect other blog posts about it, so do not hesitate to follow the RSS or Twitter feed.

The post 티베로 – The most compatible alternative to Oracle Database appeared first on Blog dbi services.

AWS Certified Database Specialty


Here is my feedback after preparing and passing the AWS Database Specialty certification. There are tips about the exam but also some thoughts that came to my mind during the preparation, when I had to shift my mindset from a multi-purpose database system to purpose-built database services.

Exam Availability

This exam was in beta between last December/January and was then planned for production starting April 6, 2020. I initially planned to take the exam on that first day but, because of the COVID situation and then family reasons, I had to reschedule it two times. Pearson VUE is really good for that: free rescheduling/cancellation, and the ability to take the exam at home. This last point was a bit stressful for me, so here is my little feedback about taking the exam from home:

  • Wi-Fi: an ethernet cable is always more reliable. When working remotely, it happens that I have to re-connect or re-start the router (or ask kids to do it as they work from home as well and they know how internet access is important) but you can’t leave the room or talk during the exam.
  • Room: it must be closed, and nobody enters for 3 hours. A Post-It on the door is a nice reminder for kids. I also asked them to be quiet (and this without playing Fortnite because I want full bandwidth). When working, I can put headsets to concentrate, but that’s not allowed for the exam.
  • Clean desk: no paper, no second monitor,… that is not a problem. The problem is: I work in a room that is messy. For online conferences, the webcam is framed correctly to hide this. But for the exam, you have to take pictures of the room. But no worry, they are not there to judge your home and the stack of laundry to iron that is just behind 😉
  • I sometimes look around or put my hand on my mouth when concentrating 🤔. That’s forbidden: they watch you by webcam and open a chat conversation to tell you to avoid that.

The best is that, even if the full score is sent a few days later, the PASS status is immediately visible. I find that extremely good during this lockdown period: I hope that my enthusiasm when passing the exam will give inspiration to kids. Achievement is a good reward and motivation to go further in life.

Preparation resources

So, I passed the exam at the beginning of its availability and I did that on purpose. When you wait, you find a lot of “brain dumps” (aka illegal leaks) of questions all over the internet when you are looking for information, and I hate that. This lowers the value of the certification. I really don’t get the point. If someone just wants the diploma, he can just design one with Photoshop. It is illegal, but so is using dumps. The quality, price, and recognition of certifications suffer when people are cheating.

So, there are very few resources available. Nothing yet on https://linuxacademy.com/, which has a good reputation for AWS certification preparation.

Note that I’ve also followed the Exam Readiness: AWS Certified Database – Specialty and it was for me a complete waste of time.

The best I’ve found is reading the FAQs referenced at https://aws.amazon.com/certification/certification-prep/ > AWS Specialty Certifications > Database – Specialty > Read AWS whitepapers and FAQs > STUDY TIP: Focus on the following FAQs

I’ve read Vladimir Mukhin’s feedback: https://medium.com/@vlad_13843/aws-certified-database-specialty-unofficial-exam-guide-4e38951481f5, a good list of topics but too many links to documents and videos for me. I think that if I need more than 2 or 3 days to prepare for this kind of exam, then the exam is not for me. I don’t want to get a certification based only on recently learned knowledge.

For this Database specialty, you need:

  • A strong background in databases (you have to include NoSQL in the database catalog) from past experience.
  • A good understanding of IT concepts: network, security, encryption, high-availability, disaster recovery (they have funny names in this cloud but the concepts are the same).
  • And of course, know the AWS services, that’s the part that you can learn for the certification (I already had a good idea about Aurora and DynamoDB internal architecture).

But the best skill for these certification exams in general (those with questions and no hands-on) is logic. I took the same approach as with my Oracle certification exams before: read the question and all answers, think about what they want you to answer, think about what you would answer. Good if that matches. Then re-read everything to find the words in the question which make one answer possible or not possible. For example, many questions start with “An online gaming company…” and this is the usual example for DynamoDB. When you see Disaster Recovery, you can eliminate Multi-AZ and focus on multi-Region. There are many questions where multiple answers are possible, but they ask you for the best one, and the question should mention which criteria matter: cost, availability,… And remember that the people who write the questions are proud of their product. If a new Auto Scaling option has recently been added to a service, there is a good chance that they want to be sure that you know about it. So, in addition to the mentioned criteria, I implicitly add the marketing one.

Let me give another trick I’m used to with Oracle exams and which seems to work there as well. When you have answers with a shape like:

  • Enable A and run X
  • Enable A and run Y
  • Enable A and run Z
  • Enable B and run Y

There is a good chance that “Enable A and run Y” is the answer they expect. Of course, these tips, like the previous ones, are not absolute truths. I use them only when, after thinking with logic, I hesitate between two answers. But stay calm: they don’t put answers there to trick you but just to validate your skills.

Tips

Actually, the best I’ve read before the exam was:

On those exams, you can mark questions for review. I do not mark those where I’m sure about the answer: I don’t want to review them even when I have plenty of time. When you go back to those, you are in the mood for finding mistakes, and there’s a risk that you start doubting and change a good answer to a wrong one. Trust yourself: when you know the answer from the beginning, then it is the right answer.
For the questions where I have a doubt, I don’t waste time and mark them for review. Sometimes, another question later rings a bell and helps with an earlier one. I usually end up with about 30% of the questions marked. I mark too many at the beginning (like not trusting myself, or being stressed by the time), and maybe at the end (when I see I have enough time to review a lot later). But I never leave a question without an answer. At the end, I review and unmark when done.

Important: being at full concentration level for 3 hours is hard. This means that there is a higher chance that your first answer is better than during the review. Change your answer only when you have new elements (like you thought about it when at another question) or because you know you didn’t spend enough time on the first pass.

Understand AWS

That’s probably the most important if, like me, you started on databases with Oracle or other RDBMS. You can be scared when looking at the multiplication of services in AWS. Each of them has a reason to exist, and if you understand it then it will be easier to remember what they do and how they do it. And it is all in the name: Amazon Web Services.

AW[S] as Service: The right tool for the right job

The ‘S’ in AWS stands for ‘Service’ and it is actually the idea of microservices. You think of a database as the integration of CPU, memory and storage, and you know how database vendors try to avoid any latency between those components: memory is a shared segment attached to your session process, and when I/O is required, preference goes to direct I/O to bypass the filesystem services for performance and durability reasons. With AWS, all layers are different services. Whether you run your database yourself on EC2 or it is managed with RDS, it will involve many different services: EBS mount, Aurora clustered storage, backups on S3,… In the case of Aurora, even the shared memory buffers are running separately and the database writer is behind the network layer. And in addition to those many layers, you can add some ElastiCache in front, replicate with DMS, monitor with CloudWatch, audit with CloudTrail,… The philosophy is the opposite of a multi-purpose database: you build your system from many blocks.

As an illustration, look at this architecture best practice for something as simple as WordPress:


We are in a completely different world than what made MySQL popular: the LAMP full-stack bundle for web services to keep the infrastructure simple. Paradoxically, MySQL (and the MySQL upper layer in Aurora) is the most used database engine in RDS.

A[W]S as Web: scale, scale, scale

The ‘W’ in AWS stands for Web. The services are accessible through the internet, which means: worldwide network with unreliable latency. When you start watching presentations about AWS and especially the NoSQL services like DynamoDB you hear things like: “SQL does not scale”, “Joins does not scale”,… They even illustrate this by mentioning that they moved out of Oracle because they had performance issues with it, and they accepted the lack of consistency because they had availability issues with it… But think about this: how many customers and how many transactions did they have before they decided this move? It seems that the ‘old monolith’ database scaled very well with their growing business during years. When I read those kinds of messages, I stay away and just remember what is needed to get the right answer at the exam. If you are a gaming company (that’s not a bank…) and want to store user scores (this fits key-value) accessible worldwide with millisecond latency (this is physically incompatible with two-phase commit) and have a very simple use case (always access the top score per user) but a growing business with impossible capacity planning (this fits auto-scaling), then DynamoDB is the expected solution.

I think that the “scale” message from cloud vendors addressing startup companies is not really about the number of transactions or the storage size. It is more about scaling an organization, where a small team of young people grows into a multinational. That’s the problem Amazon faced with their Oracle databases: hundreds or thousands of databases for many (micro-)services, incompatible with very short dev release schedules and a growing ops infrastructure. This is what didn’t scale: the organization, not the transactions per second. But that’s not for the certification exam. For the exam you need to know about reserved, provisioned, on-demand, auto-scaling, and serverless capacity to fit what you know about the capacity required over time.

[A]WS as Amazon: a marketplace, from books to IT resources

The ‘A’ is for Amazon, an online marketplace for books, and then for pretty much everything, in a large variety and an incredible quantity sold on Black Friday. Today, “cloud” is the selling message for startup companies, and Amazon takes a lot of examples from their own business. But don’t forget that, at that time, their startup company grew on on-premises infrastructure. That’s the big difference between you and them: for Amazon, the AWS public cloud services are running on their own datacenters. They sell some of their network/compute/storage capacity to amortize their CapEx. You go there for the opposite reason: close your datacenter and run your IT on OpEx only.

Those considerations go beyond the exam scope but thinking about this helped me in two ways:

  • Every training or certification provided by a vendor contains some marketing messages (maybe not even on purpose but they are written by people within the company culture) and it is important to keep this in mind to focus on the expected answers.
  • Think about use cases like an online retail company would: “Suggesting similar products” -> Graph -> AWS Neptune. “Store the buyer’s cart” -> key/value -> DynamoDB. “Access the catalog worldwide” -> low latency eventually consistent reads -> Regions, Global Tables, Read Replicas.

If you plan to also take the Cloud Practitioner exam, I recommend passing it first, because for the Database Specialty you need to know about VPC, EC2, CloudFormation,… I’m saying that but I did the opposite, with the idea that starting with the most difficult one releases all the stress for the second exam 😉

When looking more deeply at NoSQL for this exam, I had very interesting discussions, and you may be interested in these Twitter conversations:

Cet article AWS Certified Database Specialty est apparu en premier sur Blog dbi services.

Patching ODA from 18.3 to 18.8


Introduction

19c will soon be available for your ODAs. But you may not be ready for it yet. Here is how to patch your ODA from 18.3 to 18.8, the very latest 18c release. This patch was applied here on X7-2M hardware.

Download the patch files

The patch number is 30518425. This patch is composed of 2 zip files that you will copy onto your ODA.

Check free space on disk

Applying a patch requires several GB of free space, so check the free space on /, /opt and /u01 before starting. These filesystems should have about 20% free space. If needed, the /u01 and /opt filesystems can be extended online, as they are based on Linux VG/LV. For example, if you need an additional 20GB in /opt:

lvextend -L +20G /dev/mapper/VolGroupSys-LogVolOpt
resize2fs /dev/mapper/VolGroupSys-LogVolOpt
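
To see where you stand before extending anything, the same df -h command used later for the cleanup check can be run on the three filesystems:

df -h / /opt /u01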

Check processes

It’s also recommended to check what’s running on your ODA before patching; you’ll do the same check once patching is complete:

ps -ef | grep pmon
oracle     863     1  0 Feb22 ?        00:00:06 ora_pmon_UBTMUR
oracle    8014     1  0  2019 ?        00:03:06 ora_pmon_TSTDEV
oracle    9901     1  0 Feb22 ?        00:00:11 ora_pmon_DEVUT2
grid     14044     1  0  2019 ?        00:22:39 asm_pmon_+ASM1
grid     17118     1  0  2019 ?        00:18:18 apx_pmon_+APX1
oracle   22087 19584  0 11:02 pts/0    00:00:00 grep pmon

ps -ef | grep tnslsnr
grid     15667     1  0  2019 ?        01:43:48 /u01/app/18.0.0.0/grid/bin/tnslsnr ASMNET1LSNR_ASM -no_crs_notify -inherit
grid     15720     1  0  2019 ?        02:26:42 /u01/app/18.0.0.0/grid/bin/tnslsnr LISTENER -no_crs_notify -inherit
oracle   22269 19584  0 11:02 pts/0    00:00:00 grep tnslsnr
grid     26884     1  0  2019 ?        00:01:24 /u01/app/18.0.0.0/grid/bin/tnslsnr LISTENER1523 -inherit
grid     94369     1  0  2019 ?        00:01:16 /u01/app/18.0.0.0/grid/bin/tnslsnr LISTENER1522 -inherit

Check current version in use

Start to check current version on all components:

odacli describe-component

System Version
---------------
18.3.0.0.0

Component                                Installed Version    Available Version
---------------------------------------- -------------------- --------------------
OAK                                       18.3.0.0.0            up-to-date
GI                                        18.3.0.0.180717       up-to-date
DB {
[ OraDB12201_home1 ]                      12.2.0.1.180717       up-to-date
[ OraDB11204_home1 ]                      11.2.0.4.180717       up-to-date
}
DCSAGENT                                  18.3.0.0.0            up-to-date
ILOM                                      4.0.4.21.r126801      up-to-date
BIOS                                      41040100              up-to-date
OS                                        6.10                  up-to-date
FIRMWARECONTROLLER                        QDV1RE14              up-to-date
FIRMWAREDISK                              0121                  0112

Some component versions could be higher than those available at the time of the previous patch/deployment. In this example, the starting version is 18.3.0.0.0: this version has a straight path to 18.8.0.0.0.

The duration of the patch may vary depending on the components to be patched. I strongly advise you to check the target version for each component in the documentation; for an ODA X7-2M, here is the URL.

For this ODA, and compared to the currently deployed component versions, no OS update is embedded in the patch, which will shorten the patching time.

Preparing the patch

Copy the patch files on disk in a temp directory. Then unzip the files and update the repository:

cd /u01/tmp
unzip p30518425_188000_Linux-x86-64_1of2.zip
unzip p30518425_188000_Linux-x86-64_2of2.zip
rm -rf p30518425_188000_Linux-x86-64_*
odacli update-repository -f /u01/tmp/oda-sm-18.8.0.0.0-200209-server1of2.zip
odacli update-repository -f /u01/tmp/oda-sm-18.8.0.0.0-200209-server2of2.zip
odacli list-jobs | head -n 3;  odacli list-jobs | tail -n 3
ID                                       Description                      Created                             Status
---------------------------------------- -------------------------------- ----------------------------------- ---------
7127c3ca-8fb9-4ac9-810d-b7e1aa0e32c5     Repository Update                February 24, 2020 01:13:53 PM CET   Success
5e294f03-3fa8-48ae-b193-219659bec4de     Repository Update                February 24, 2020 01:14:09 PM CET   Success

Patch the dcs components

Patching the dcs components is easy. Now it’s a 3-step process:

/opt/oracle/dcs/bin/odacli update-dcsagent -v 18.8.0.0.0
/opt/oracle/dcs/bin/odacli update-dcsadmin -v 18.8.0.0.0
/opt/oracle/dcs/bin/odacli update-dcscomponents -v 18.8.0.0.0
{
  "jobId" : "f36c44b3-4eb8-4a43-a323-d28a9836de74",
  "status" : "Success",
  "message" : null,
  "reports" : null,
  "createTimestamp" : "February 24, 2020 14:04:01 PM CET",
  "description" : "Job completed and is not part of Agent job list",
  "updatedTime" : "February 24, 2020 14:04:01 PM CET"
}
odacli list-jobs | tail -n 3
b8d474ab-3b5e-4860-a9f4-3d73497d6d4c     DcsAgent patching                                                           February 24, 2020 1:58:54 PM CET    Success
286b36a2-c1fc-48b9-8c99-197e24b6c8ba     DcsAdmin patching                                                           February 24, 2020 2:01:52 PM CET    Success

Note that the last command (update-dcscomponents) does not appear as a job in the list.

Check proposed version to patch to

Now the describe-component should propose the real available versions bundled in the patch:

odacli describe-component
System Version
---------------
18.8.0.0.0
 
Component                                Installed Version    Available Version
---------------------------------------- -------------------- --------------------
OAK                                       18.3.0.0.0            18.8.0.0.0
GI                                        18.3.0.0.180717       18.8.0.0.191015
DB {
[ OraDB12201_home1 ]                      12.2.0.1.180717       12.2.0.1.191015
[ OraDB11204_home1 ]                      11.2.0.4.180717       11.2.0.4.191015
}
DCSAGENT                                  18.8.0.0.0            up-to-date
ILOM                                      4.0.4.21.r126801      4.0.4.47.r131913
BIOS                                      41040100              41060600
OS                                        6.10                  up-to-date
FIRMWARECONTROLLER                        QDV1RE14              qdv1rf30
FIRMWAREDISK                              0121                  up-to-date

The OS and FIRMWAREDISK components don’t need to be patched.

Pre-patching report

Let’s check if patching has the green light:

odacli create-prepatchreport -s -v 18.8.0.0.0
odacli describe-prepatchreport -i 12d61cda-1cef-40b9-ad7d-8e087007da23

 
Patch pre-check report
------------------------------------------------------------------------
                 Job ID:  12d61cda-1cef-40b9-ad7d-8e087007da23
            Description:  Patch pre-checks for [OS, ILOM, GI]
                 Status:  SUCCESS
                Created:  February 24, 2020 2:05:41 PM CET
                 Result:  All pre-checks succeeded
 

 
Node Name
---------------
dbiora07
 
Pre-Check                      Status   Comments
------------------------------ -------- --------------------------------------
__OS__
Validate supported versions     Success   Validated minimum supported versions
Validate patching tag           Success   Validated patching tag: 18.8.0.0.0
Is patch location available     Success   Patch location is available
Verify OS patch                 Success   There are no packages available for
                                          an update
 
__ILOM__
Validate supported versions     Success   Validated minimum supported versions
Validate patching tag           Success   Validated patching tag: 18.8.0.0.0
Is patch location available     Success   Patch location is available
Checking Ilom patch Version     Success   Successfully verified the versions
Patch location validation       Success   Successfully validated location
 
__GI__
Validate supported GI versions  Success   Validated minimum supported versions
Validate available space        Success   Validated free space under /u01
Verify DB Home versions         Success   Verified DB Home versions
Validate patching locks         Success   Validated patching locks

ODA is ready.

Patching infrastructure and GI

First, the Trace File Analyzer should be stopped, then the update-server command can be run:

/etc/init.d/init.tfa stop
odacli update-server -v 18.8.0.0.0
odacli describe-job -i 4d6aab0e-18c4-4bbd-8c16-a39c8a14f992
 Job details
----------------------------------------------------------------
                     ID:  4d6aab0e-18c4-4bbd-8c16-a39c8a14f992
            Description:  Server Patching
                 Status:  Success
                Created:  February 24, 2020 2:15:32 PM CET
                Message:


Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Patch location validation                February 24, 2020 2:15:39 PM CET    February 24, 2020 2:15:39 PM CET    Success
dcs-controller upgrade                   February 24, 2020 2:15:39 PM CET    February 24, 2020 2:15:42 PM CET    Success
Patch location validation                February 24, 2020 2:15:42 PM CET    February 24, 2020 2:15:42 PM CET    Success
dcs-cli upgrade                          February 24, 2020 2:15:42 PM CET    February 24, 2020 2:15:43 PM CET    Success
Creating repositories using yum          February 24, 2020 2:15:43 PM CET    February 24, 2020 2:15:53 PM CET    Success
Creating repositories using yum          February 24, 2020 2:15:53 PM CET    February 24, 2020 2:15:53 PM CET    Success
Creating repositories using yum          February 24, 2020 2:15:53 PM CET    February 24, 2020 2:15:53 PM CET    Success
Creating repositories using yum          February 24, 2020 2:15:53 PM CET    February 24, 2020 2:15:53 PM CET    Success
Creating repositories using yum          February 24, 2020 2:15:53 PM CET    February 24, 2020 2:15:53 PM CET    Success
Creating repositories using yum          February 24, 2020 2:15:53 PM CET    February 24, 2020 2:15:53 PM CET    Success
Creating repositories using yum          February 24, 2020 2:15:53 PM CET    February 24, 2020 2:15:53 PM CET    Success
Updating YumPluginVersionLock rpm        February 24, 2020 2:15:53 PM CET    February 24, 2020 2:15:54 PM CET    Success
Applying OS Patches                      February 24, 2020 2:15:54 PM CET    February 24, 2020 2:26:44 PM CET    Success
Creating repositories using yum          February 24, 2020 2:26:44 PM CET    February 24, 2020 2:26:45 PM CET    Success
Applying HMP Patches                     February 24, 2020 2:26:45 PM CET    February 24, 2020 2:27:35 PM CET    Success
Patch location validation                February 24, 2020 2:27:35 PM CET    February 24, 2020 2:27:35 PM CET    Success
oda-hw-mgmt upgrade                      February 24, 2020 2:27:35 PM CET    February 24, 2020 2:28:04 PM CET    Success
OSS Patching                             February 24, 2020 2:28:04 PM CET    February 24, 2020 2:28:04 PM CET    Success
Applying Firmware Disk Patches           February 24, 2020 2:28:05 PM CET    February 24, 2020 2:28:11 PM CET    Success
Applying Firmware Expander Patches       February 24, 2020 2:28:11 PM CET    February 24, 2020 2:28:16 PM CET    Success
Applying Firmware Controller Patches     February 24, 2020 2:28:16 PM CET    February 24, 2020 2:29:02 PM CET    Success
Checking Ilom patch Version              February 24, 2020 2:29:03 PM CET    February 24, 2020 2:29:05 PM CET    Success
Patch location validation                February 24, 2020 2:29:05 PM CET    February 24, 2020 2:29:06 PM CET    Success
Save password in Wallet                  February 24, 2020 2:29:07 PM CET    February 24, 2020 2:29:07 PM CET    Success
Apply Ilom patch                         February 24, 2020 2:29:07 PM CET    February 24, 2020 2:42:35 PM CET    Success
Copying Flash Bios to Temp location      February 24, 2020 2:42:35 PM CET    February 24, 2020 2:42:35 PM CET    Success
Starting the clusterware                 February 24, 2020 2:42:35 PM CET    February 24, 2020 2:44:53 PM CET    Success
clusterware patch verification           February 24, 2020 2:55:27 PM CET    February 24, 2020 2:55:47 PM CET    Success
Patch location validation                February 24, 2020 2:55:47 PM CET    February 24, 2020 2:56:37 PM CET    Success
Opatch updation                          February 24, 2020 2:57:32 PM CET    February 24, 2020 2:57:36 PM CET    Success
Patch conflict check                     February 24, 2020 2:57:36 PM CET    February 24, 2020 2:58:27 PM CET    Success
clusterware upgrade                      February 24, 2020 2:58:27 PM CET    February 24, 2020 3:21:18 PM CET    Success
Updating GiHome version                  February 24, 2020 3:21:18 PM CET    February 24, 2020 3:21:53 PM CET    Success
Update System version                    February 24, 2020 3:22:26 PM CET    February 24, 2020 3:22:27 PM CET    Success
preRebootNode Actions                    February 24, 2020 3:22:27 PM CET    February 24, 2020 3:23:14 PM CET    Success
Reboot Ilom                              February 24, 2020 3:23:14 PM CET    February 24, 2020 3:23:14 PM CET    Success

The server reboots 5 minutes after the patch ends. On my X7-2M this operation lasted 1h15.

Let’s check the component’s versions:

odacli describe-component
System Version
---------------
18.8.0.0.0
 
Component                                Installed Version    Available Version
---------------------------------------- -------------------- --------------------
OAK                                       18.8.0.0.0            up-to-date
GI                                        18.8.0.0.191015       up-to-date
DB {
[ OraDB12201_home1 ]                      12.2.0.1.180717       12.2.0.1.191015
[ OraDB11204_home1 ]                      11.2.0.4.180717       11.2.0.4.191015
}
DCSAGENT                                  18.8.0.0.0            up-to-date
ILOM                                      4.0.4.21.r126801      4.0.4.47.r131913
BIOS                                      41040100              41060600
OS                                        6.10                  up-to-date
FIRMWARECONTROLLER                        QDV1RE14              qdv1rf30
FIRMWAREDISK                              0121                  up-to-date

Neither the ILOM nor the BIOS has been updated. This is a bug.

Solve the ILOM and BIOS not patched

An additional procedure is needed (provided by MOS): the clusterware needs to be stopped, then the ILOM/BIOS patched manually:

/u01/app/18.0.0.0/grid/bin/crsctl stop crs
ipmiflash -v write ILOM-4_0_4_47_r131913-ORACLE_SERVER_X7-2.pkg force script config delaybios warning=0
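
Depending on how the server comes back after the flash, the clusterware may not restart by itself. In that case it can be started again manually from the same grid home (a sketch, assuming the stack is expected to run on this node):

/u01/app/18.0.0.0/grid/bin/crsctl start crs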

Versions should now be fine:

odacli describe-component

System Version
---------------
18.8.0.0.0

Component                                Installed Version    Available Version
---------------------------------------- -------------------- --------------------
OAK                                       18.8.0.0.0            up-to-date
GI                                        18.8.0.0.191015       up-to-date
DB {
[ OraDB12201_home1 ]                      12.2.0.1.180717       12.2.0.1.191015
[ OraDB11204_home1 ]                      11.2.0.4.180717       11.2.0.4.191015
}
DCSAGENT                                  18.8.0.0.0            up-to-date
ILOM                                      4.0.4.47.r131913      up-to-date
BIOS                                      41060600              up-to-date
OS                                        6.10                  up-to-date
FIRMWARECONTROLLER                        QDV1RE14              qdv1rf30
FIRMWAREDISK                              0121                  up-to-date

Patching the storage

Patching of the storage is much faster than patching the “server”:

odacli update-storage -v 18.8.0.0.0

odacli describe-job -i a97deb0d-2e0b-42d9-8b56-33af68e23f15
 
Job details
----------------------------------------------------------------
                     ID:  a97deb0d-2e0b-42d9-8b56-33af68e23f15
            Description:  Storage Firmware Patching
                 Status:  Success
                Created:  February 24, 2020 3:33:10 PM CET
                Message:
 
Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Applying Firmware Disk Patches           February 24, 2020 3:33:11 PM CET    February 24, 2020 3:33:19 PM CET    Success
Applying Firmware Controller Patches     February 24, 2020 3:33:19 PM CET    February 24, 2020 3:40:53 PM CET    Success
preRebootNode Actions                    February 24, 2020 3:40:53 PM CET    February 24, 2020 3:40:54 PM CET    Success
Reboot Ilom                              February 24, 2020 3:40:54 PM CET    February 24, 2020 3:40:54 PM CET    Success

Another auto reboot is done after this step.

Patching the dbhomes

The time needed for patching the dbhomes depends on the number of dbhomes and the number of databases. In this example, 2 dbhomes are deployed:

odacli list-dbhomes


ID                                       Name                 DB Version                               Home Location                                 Status                 
---------------------------------------- -------------------- ---------------------------------------- --------------------------------------------- ----------
8a2f98f8-2010-4d26-a4b3-2bd5ad8f0b98     OraDB12201_home1     12.2.0.1.180717                          /u01/app/oracle/product/12.2.0.1/dbhome_1     Configured
8a494efd-e745-4fe9-ace7-2369a36924ff     OraDB11204_home1     11.2.0.4.180717                          /u01/app/oracle/product/11.2.0.4/dbhome_1     Configured

odacli update-dbhome -i 8a494efd-e745-4fe9-ace7-2369a36924ff -v 18.8.0.0.0 
odacli describe-job -i 7c5589d7-564a-4d8b-b69a-1dc50162a4c6

Job details
----------------------------------------------------------------
                     ID:  7c5589d7-564a-4d8b-b69a-1dc50162a4c6
            Description:  DB Home Patching: Home Id is 8a2f98f8-2010-4d26-a4b3-2bd5ad8f0b98
                 Status:  Success
                Created:  February 25, 2020 9:18:49 AM CET
                Message:  WARNING::Failed to run the datapatch as db TSTY_RP7 is not running##WARNING::Failed to run the datapatch as db EXPY_RP7 is not registered with
clusterware##WARNING::Failed to run datapatch on db DEVM12_RP7Failed to run Utlrp script##WARNING::Failed t
 
Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Validating dbHome available space        February 25, 2020 9:18:59 AM CET    February 25, 2020 9:18:59 AM CET    Success
clusterware patch verification           February 25, 2020 9:19:21 AM CET    February 25, 2020 9:19:31 AM CET    Success
Patch location validation                February 25, 2020 9:19:31 AM CET    February 25, 2020 9:19:31 AM CET    Success
Opatch updation                          February 25, 2020 9:19:31 AM CET    February 25, 2020 9:19:32 AM CET    Success
Patch conflict check                     February 25, 2020 9:19:32 AM CET    February 25, 2020 9:19:32 AM CET    Success
db upgrade                               February 25, 2020 9:19:32 AM CET    February 25, 2020 9:19:32 AM CET    Success
SqlPatch upgrade                         February 25, 2020 9:19:32 AM CET    February 25, 2020 9:19:35 AM CET    Success
SqlPatch upgrade                         February 25, 2020 9:19:35 AM CET    February 25, 2020 9:20:18 AM CET    Success
SqlPatch upgrade                         February 25, 2020 9:20:18 AM CET    February 25, 2020 9:20:55 AM CET    Success
SqlPatch upgrade                         February 25, 2020 9:20:55 AM CET    February 25, 2020 9:20:55 AM CET    Success
SqlPatch upgrade                         February 25, 2020 9:20:55 AM CET    February 25, 2020 9:20:56 AM CET    Success
SqlPatch upgrade                         February 25, 2020 9:20:56 AM CET    February 25, 2020 9:21:33 AM CET    Success
SqlPatch upgrade                         February 25, 2020 9:21:33 AM CET    February 25, 2020 9:21:40 AM CET    Success
SqlPatch upgrade                         February 25, 2020 9:21:40 AM CET    February 25, 2020 9:21:46 AM CET    Success
SqlPatch upgrade                         February 25, 2020 9:21:46 AM CET    February 25, 2020 9:21:57 AM CET    Success
SqlPatch upgrade                         February 25, 2020 9:21:57 AM CET    February 25, 2020 9:22:32 AM CET    Success
SqlPatch upgrade                         February 25, 2020 9:22:32 AM CET    February 25, 2020 9:23:12 AM CET    Success
SqlPatch upgrade                         February 25, 2020 9:23:12 AM CET    February 25, 2020 9:23:53 AM CET    Success
Update System version                    February 25, 2020 9:23:53 AM CET    February 25, 2020 9:23:53 AM CET    Success
updating the Database version            February 25, 2020 9:24:03 AM CET    February 25, 2020 9:24:08 AM CET    Success
updating the Database version            February 25, 2020 9:24:08 AM CET    February 25, 2020 9:24:14 AM CET    Success
updating the Database version            February 25, 2020 9:24:14 AM CET    February 25, 2020 9:24:18 AM CET    Success
updating the Database version            February 25, 2020 9:24:18 AM CET    February 25, 2020 9:24:23 AM CET    Success
updating the Database version            February 25, 2020 9:24:23 AM CET    February 25, 2020 9:24:28 AM CET    Success
updating the Database version            February 25, 2020 9:24:28 AM CET    February 25, 2020 9:24:33 AM CET    Success
updating the Database version            February 25, 2020 9:24:33 AM CET    February 25, 2020 9:24:38 AM CET    Success
updating the Database version            February 25, 2020 9:24:38 AM CET    February 25, 2020 9:24:45 AM CET    Success
updating the Database version            February 25, 2020 9:24:45 AM CET    February 25, 2020 9:24:51 AM CET    Success
updating the Database version            February 25, 2020 9:24:51 AM CET    February 25, 2020 9:24:56 AM CET    Success
updating the Database version            February 25, 2020 9:24:56 AM CET    February 25, 2020 9:25:02 AM CET    Success
updating the Database version            February 25, 2020 9:25:02 AM CET    February 25, 2020 9:25:08 AM CET    Success

odacli update-dbhome -i 8a2f98f8-2010-4d26-a4b3-2bd5ad8f0b98 -v 18.8.0.0.0 
odacli describe-job -i fbed248a-1d0d-4972-afd2-8b43ac8ad514

 Job details
----------------------------------------------------------------
                     ID:  fbed248a-1d0d-4972-afd2-8b43ac8ad514
            Description:  DB Home Patching: Home Id is 8a494efd-e745-4fe9-ace7-2369a36924ff
                 Status:  Success
                Created:  February 25, 2020 9:39:54 AM CET
                Message:  WARNING::Failed to run the datapatch as db CUR7_RP7 is not registered with clusterware##WARNING::Failed to run the datapatch as db SRS7_RP7 is not registered with clusterware##WARNING::Failed to run the datapatch as db CUX7_RP7 is not regi
 
Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Validating dbHome available space        February 25, 2020 9:40:05 AM CET    February 25, 2020 9:40:05 AM CET    Success
clusterware patch verification           February 25, 2020 9:40:07 AM CET    February 25, 2020 9:40:12 AM CET    Success
Patch location validation                February 25, 2020 9:40:12 AM CET    February 25, 2020 9:40:17 AM CET    Success
Opatch updation                          February 25, 2020 9:40:49 AM CET    February 25, 2020 9:40:51 AM CET    Success
Patch conflict check                     February 25, 2020 9:40:51 AM CET    February 25, 2020 9:41:07 AM CET    Success
db upgrade                               February 25, 2020 9:41:07 AM CET    February 25, 2020 9:43:27 AM CET    Success
SqlPatch upgrade                         February 25, 2020 9:43:27 AM CET    February 25, 2020 9:43:27 AM CET    Success
SqlPatch upgrade                         February 25, 2020 9:43:27 AM CET    February 25, 2020 9:43:27 AM CET    Success
SqlPatch upgrade                         February 25, 2020 9:43:27 AM CET    February 25, 2020 9:43:27 AM CET    Success
SqlPatch upgrade                         February 25, 2020 9:43:27 AM CET    February 25, 2020 9:43:28 AM CET    Success
SqlPatch upgrade                         February 25, 2020 9:43:28 AM CET    February 25, 2020 9:43:28 AM CET    Success
SqlPatch upgrade                         February 25, 2020 9:43:28 AM CET    February 25, 2020 9:43:28 AM CET    Success
SqlPatch upgrade                         February 25, 2020 9:43:28 AM CET    February 25, 2020 9:43:28 AM CET    Success
SqlPatch upgrade                         February 25, 2020 9:43:28 AM CET    February 25, 2020 9:43:29 AM CET    Success
SqlPatch upgrade                         February 25, 2020 9:43:29 AM CET    February 25, 2020 9:43:29 AM CET    Success
SqlPatch upgrade                         February 25, 2020 9:43:29 AM CET    February 25, 2020 9:43:29 AM CET    Success
SqlPatch upgrade                         February 25, 2020 9:43:29 AM CET    February 25, 2020 9:43:29 AM CET    Success
SqlPatch upgrade                         February 25, 2020 9:43:29 AM CET    February 25, 2020 9:43:30 AM CET    Success
SqlPatch upgrade                         February 25, 2020 9:43:30 AM CET    February 25, 2020 9:43:30 AM CET    Success
SqlPatch upgrade                         February 25, 2020 9:43:30 AM CET    February 25, 2020 9:43:30 AM CET    Success
SqlPatch upgrade                         February 25, 2020 9:43:30 AM CET    February 25, 2020 9:43:33 AM CET    Success
Update System version                    February 25, 2020 9:43:33 AM CET    February 25, 2020 9:43:33 AM CET    Success
updating the Database version            February 25, 2020 9:43:35 AM CET    February 25, 2020 9:43:37 AM CET    Success
updating the Database version            February 25, 2020 9:43:37 AM CET    February 25, 2020 9:43:39 AM CET    Success
updating the Database version            February 25, 2020 9:43:39 AM CET    February 25, 2020 9:43:42 AM CET    Success
updating the Database version            February 25, 2020 9:43:42 AM CET    February 25, 2020 9:43:45 AM CET    Success
updating the Database version            February 25, 2020 9:43:45 AM CET    February 25, 2020 9:43:48 AM CET    Success
updating the Database version            February 25, 2020 9:43:48 AM CET    February 25, 2020 9:43:50 AM CET    Success
updating the Database version            February 25, 2020 9:43:50 AM CET    February 25, 2020 9:43:52 AM CET    Success
updating the Database version            February 25, 2020 9:43:52 AM CET    February 25, 2020 9:43:54 AM CET    Success
updating the Database version            February 25, 2020 9:43:54 AM CET    February 25, 2020 9:43:56 AM CET    Success
updating the Database version            February 25, 2020 9:43:56 AM CET    February 25, 2020 9:43:58 AM CET    Success
updating the Database version            February 25, 2020 9:43:58 AM CET    February 25, 2020 9:44:01 AM CET    Success
updating the Database version            February 25, 2020 9:44:01 AM CET    February 25, 2020 9:44:04 AM CET    Success
updating the Database version            February 25, 2020 9:44:04 AM CET    February 25, 2020 9:44:06 AM CET    Success
updating the Database version            February 25, 2020 9:44:06 AM CET    February 25, 2020 9:44:08 AM CET    Success
updating the Database version            February 25, 2020 9:44:08 AM CET    February 25, 2020 9:44:10 AM CET    Success

odacli list-dbhomes
ID                                       Name                 DB Version                               Home Location                                 Status
---------------------------------------- -------------------- ---------------------------------------- --------------------------------------------- ----------
8a2f98f8-2010-4d26-a4b3-2bd5ad8f0b98     OraDB12201_home1     12.2.0.1.191015                          /u01/app/oracle/product/12.2.0.1/dbhome_1     Configured
8a494efd-e745-4fe9-ace7-2369a36924ff     OraDB11204_home1     11.2.0.4.191015                          /u01/app/oracle/product/11.2.0.4/dbhome_1     Configured

The 2 dbhomes are updated. Failures on SqlPatch upgrade should be analyzed, but remember that datapatch cannot be run against standby databases as their dictionary is not available.

Let’s check on primary 12c (and later) databases if everything is OK:

su - oracle
. oraenv <<< DEVC12
sqlplus / as sysdba
set serverout on
exec dbms_qopatch.get_sqlpatch_status;
...
Patch Id : 28163133
        Action : APPLY
        Action Time : 27-MAY-2019 16:02:11
        Description : DATABASE JUL 2018 RELEASE UPDATE 12.2.0.1.180717
        Logfile :
/u01/app/oracle/cfgtoollogs/sqlpatch/28163133/22313390/28163133_apply_DEVC12_201
9May27_16_02_00.log
        Status : SUCCESS
 
Patch Id : 30138470
        Action : APPLY
        Action Time : 24-FEB-2020 17:10:35
        Description : DATABASE OCT 2019 RELEASE UPDATE 12.2.0.1.191015
        Logfile :
/u01/app/oracle/cfgtoollogs/sqlpatch/30138470/23136382/30138470_apply_DEVC12_202
0Feb24_17_09_56.log
        Status : SUCCESS

PL/SQL procedure successfully completed.
 exit

Let’s check if everything is also OK on the 11g databases:

su - oracle
. oraenv <<< DEVC11
sqlplus / as sysdba
select * from dba_registry_history;
ACTION_TIME
---------------------------------------------------------------------------
ACTION                         NAMESPACE
------------------------------ ------------------------------
VERSION                                ID BUNDLE_SERIES
------------------------------ ---------- ------------------------------
COMMENTS
--------------------------------------------------------------------------------
...
17-MAY-19 11.18.48.769476 AM
APPLY                          SERVER
11.2.0.4                           180717 PSU
PSU 11.2.0.4.180717
 
25-FEB-20 09.57.40.969359 AM
APPLY                          SERVER
11.2.0.4                           191015 PSU
PSU 11.2.0.4.191015
 Patch Id : 26635944
        Action : APPLY
        Action Time : 21-NOV-2017 15:53:49
        Description : OJVM RELEASE UPDATE: 12.2.0.1.171017 (26635944)
        Logfile :
/u01/app/oracle/cfgtoollogs/sqlpatch/26635944/21607957/26635944_apply_G100652_CD
BROOT_2017Nov21_15_53_12.log
        Status : SUCCESS

Optional: update the dbclones

If you now create a new dbhome, it will be based on the previous dbclone. So you may need to provision a new dbclone to avoid that. If you need the latest dbclone for 18c, patch number is 27604558:

cd /u01/tmp
unzip p27604558_188000_Linux-x86-64.zip
odacli update-repository -f /u01/tmp/odacli-dcs-18.8.0.0.0-191226-DB-18.8.0.0.zip

Now you are able to create a new dbhome from this dbclone:

odacli create-dbhome -v 18.8.0.0.191015
odacli list-dbhomes
ID                                       Name                 DB Version                               Home Location                                 Status
---------------------------------------- -------------------- ---------------------------------------- --------------------------------------------- ----------
8a2f98f8-2010-4d26-a4b3-2bd5ad8f0b98     OraDB12201_home1     12.2.0.1.191015                          /u01/app/oracle/product/12.2.0.1/dbhome_1     Configured
8a494efd-e745-4fe9-ace7-2369a36924ff     OraDB11204_home1     11.2.0.4.191015                          /u01/app/oracle/product/11.2.0.4/dbhome_1     Configured
395451e5-12b9-4851-b331-dd3e650e6d11     OraDB18000_home1     18.8.0.0.191015                          /u01/app/oracle/product/18.0.0.0/dbhome_1     Configured

Final checks

Let’s get the final versions:

odacli describe-component
System Version
---------------
18.8.0.0.0
 
Component                                Installed Version    Available Version
---------------------------------------- -------------------- --------------------
OAK                                       18.8.0.0.0            up-to-date
GI                                        18.8.0.0.191015       up-to-date
DB {
[ OraDB12201_home1 ]                      12.2.0.1.191015       up-to-date
[ OraDB11204_home1 ]                      11.2.0.4.191015       up-to-date
[ OraDB18000_home1 ]                      18.8.0.0.191015       up-to-date
}
DCSAGENT                                  18.8.0.0.0            up-to-date
ILOM                                      4.0.4.47.r131913      up-to-date
BIOS                                      41060600              up-to-date
OS                                        6.10                  up-to-date
FIRMWARECONTROLLER                        QDV1RF30              up-to-date
FIRMWAREDISK                              0121                  up-to-date

Looks good. Please also check the running processes and compare them to the initial status.
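
For example, with the same commands as before patching:

ps -ef | grep pmon
ps -ef | grep tnslsnr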

Cleanse the old patches

It’s now possible to clean up the old patches from the repository, as they will never be used again. For this ODA, the history was:

Deploy = 12.2.1.2.0 => Patch 12.2.1.4.0 => Patch 18.3.0.0.0 => Patch 18.8.0.0.0

Check the filesystems usage before and after cleansing:

df -h /opt
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroupSys-LogVolOpt
                       79G   61G   15G  81% /opt

df -h /u01
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroupSys-LogVolU01
                      197G  116G   71G  63% /u01

odacli cleanup-patchrepo -cl -comp db,gi -v 12.2.1.2.0
odacli cleanup-patchrepo -cl -comp db,gi -v 12.2.1.4.0
odacli cleanup-patchrepo -cl -comp db,gi -v 18.3.0.0.0


df -h /opt
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroupSys-LogVolOpt
                       79G   39G   37G  51% /opt
df -h /u01
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroupSys-LogVolU01
                      197G  116G   71G  63% /u01

Conclusion

Your ODA is now on the latest 18c version. The next upgrade will be a more serious one: the OS will jump to Linux 7 and the Oracle stack to 19.6. Old X4-2 ODAs will be stuck at 18.8. And remember that 19.5 is not a production release.

Cet article Patching ODA from 18.3 to 18.8 est apparu en premier sur Blog dbi services.

How to run Avamar backup on SQL Agent Job with PowerShell


At one of our customers we use Avamar as the backup and restore solution. I was asked by this customer to find a way to run Avamar backups for a group of databases on a specified instance. In fact, we are currently trying a database clone solution on some instances: cloned databases must not be backed up, but the rest of the databases must be.

After some discussions with my colleague, I decided to use a PowerShell script which will be called in a SQL Agent job step.
This PowerShell script will select the databases to back up and use the avsql command to run the backups. avsql is the command line interface for Avamar and SQL Server; it has plenty of functionalities, including operations like browse, backup or restore of an instance.

Prerequisites

There are some prerequisites to use this command line.
The first one is to create an avsql command file where we have to define some parameters for the command we would like to execute:

  • operation to perform
  • id of the user used to run the operation
  • user password (encrypted)
  • Avamar server
  • client where to run the operation
  • retention 60 days

Here is a picture of my file:
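
Based on the parameters listed above, the content is roughly like the following sketch. The exact flag names are assumptions from memory and should be checked against the avsql documentation; the password is the encrypted string generated for the backup user, and the server/client values are placeholders:

# flag names below are assumptions - verify against the avsql documentation
--operation=backup
--id=MyBackupUser
--password=<encrypted password>
--server=avamar-server.mydomain.com
--client=/clients/yourserver
--expires=60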

The second one is to create the user mentioned above.
For that, open your Avamar Administrator console, go to Administration, Account Management and search for the client where you want to create the user. In the User tab, right-click and add a new user. Select the role “Back up/Restore User”, give a name to your user and enter a password. Once done, the user is created:

Job creation

Once done, we can connect to the SQL Server instance and create a SQL Agent job.
I don’t want to spend too much time on the job creation itself, as there is no added value in detailing it:

The step will just have to run a PowerShell script which will be executed by the SQL Server Agent Service Account:

The interesting part is more in the script that I copy here:

$instance = 'yourserver\instancename'
$DatabasesList = invoke-sqlcmd -query "select distinct db_name(database_id) as name from sys.master_files where physical_name not like '%SqlClone%'" -serverinstance $instance -QueryTimeout 1000 -ConnectionTimeout 1000 -SuppressProviderContextWarning
Foreach ($Database in $DatabasesList)
{
#prepare database name for Avamar backup
$dbname = $Database.name
$db = "$instance/$dbname"
#prepare command line for Avamar backup
$BackupCmd = "avsql --flagfile=C:\ProgramData\Bis\Avamar\avsql_backup.cmd --brtype=full --label=Daily_Full_Clone_Server """ + $db + """"
#Run backup command via Avamar
powershell.exe -command "Start-Process 'cmd' -verb runas -ArgumentList '/c $BackupCmd'"
}

The trickiest part was to find the right way to execute the avsql command.
Indeed, this command must be executed with elevation via a cmd process.
This part of the script, powershell.exe -command "Start-Process 'cmd' -verb runas -ArgumentList '/c $BackupCmd'", shows how I finally managed it.
I executed a PowerShell command which runs a command prompt with elevation, with my Avamar command line as argument.
Once scheduled, our job will create Avamar full backups for the selected databases.
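
One thing to keep in mind: Start-Process returns immediately, so the loop fires all backup commands without waiting for the previous one to finish. If you prefer the backups to run one after the other, a -Wait switch can be added (a sketch of the modified line, assuming sequential execution fits your backup window):

# run the elevated cmd and wait for the avsql backup to complete before moving to the next database
powershell.exe -command "Start-Process 'cmd' -verb runas -ArgumentList '/c $BackupCmd' -Wait"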

I hope this can help Avamar users 😉

Cet article How to run Avamar backup on SQL Agent Job with PowerShell est apparu en premier sur Blog dbi services.

How Windows Admin Center can backup on Azure


To be honest, I had never used Windows Admin Center before, but after some hours of use I’m already convinced. Just as SQL Server Management Studio is a great tool to manage our SQL Server instances with a CMS (Centralized Management Server), Windows Admin Center is a great tool to centralize the management of our physical, virtual, on-premises… Windows Servers. This browser-based tool is also a hybrid tool which is able to integrate your on-premises servers with Azure in a few clicks.
In this blog post I will show you how to back up an on-premises Virtual Machine to your Azure subscription via Windows Admin Center.

I will not take too much time to describe the installation of Windows Admin Center as it’s an easy task. You just have to download the msi package and follow the instructions. Just take care not to use Internet Explorer to open the URL to connect to your Windows Admin Center, as it’s not supported; better to use an Edge browser.
Once opened, it’s time to add your servers. Select the Add button at the top left of the screen and type your server name. As I didn’t provide administrator credentials, I can add the server but I will need to provide them later for the connection. A warning alerts us that single sign-on is available, but we first need to set up Kerberos constrained delegation, which I still have to do:

Once done our server appears in the Server list.

Just click on the name to manage this server; here you will have to enter your administrator credentials in order to connect:

You now have an overview of your server with some information like name, domain, OS and version, RAM… even the failover cluster it is part of. We also have lots of interesting tools in the left sidebar to manage our server, with some Azure functionalities and more common ones like events, firewall, processes…

I will focus on the Azure Backup possibility. Indeed, in case of a corruption or crash of my local Virtual Machine, I don’t have any way to restore it or to quickly recover it. In the real world this functionality can provide an alternative to an existing on-premises backup solution in case of accidental or malicious deletions or corruptions.
Those backups can also be used to recreate the Virtual Machine directly on Azure.
To set up this functionality click on Azure Backup and we can start to configure it:

To use Azure services with Windows Admin Center we have to register it with Azure and complete the following steps:

  • connect to our Azure account with an active subscription
  • the target Windows Servers that we want to back up must have Internet access to Azure
  • connect our Windows Admin Center gateway to Azure:

Once done, you are also connected with your Azure account on Windows Admin Center:

Back on our overview screen, we still don’t have backup protection for our server, but now Windows Admin Center is “connected” with our Azure account:

And now clicking on Azure Backup lets us protect our Windows Server.
Our subscription is selected and we can set up our backup:

  • a Vault name and a new Resource Group are created automatically
  • we can choose the location where the backup will be done, here I selected the new Switzerland North region
  • we can select what we want to protect, here the server system state and disk c:\
  • we can choose when files/folders backup and system state backup will be done and the retention of those backups
  • enter an encryption passphrase used to encrypt/decrypt backup

Once we applied our configuration:

  • a new Azure Resource Group is created if it does not already exist
  • a new Azure Recovery Services Vault is created
  • the Microsoft Azure Recovery Services (MARS) Agent is installed and registered to the Vault (a quick check is shown after this list)
  • a backup and retention schedule is created with the options selected before and associated with the Windows Server
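
To verify on the server itself that the MARS agent is there and running, a quick PowerShell check can be done. This is only a sketch: the service name obengine is an assumption based on the usual MARS agent installation and should be confirmed on your system.

# check the Microsoft Azure Recovery Services agent service (service name assumed to be obengine)
Get-Service -Name obengine | Select-Object Status, Name, DisplayName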

An overview of your Azure Backup is displayed. For the moment no backups have been executed, and we can start an ad hoc one by clicking on “Backup Now”:

Our backup is now running:

When the backup is finished, we have a Recovery Point for our Windows Server:


This was a first round to set up Windows Admin Center with Azure and run a first backup.
In the next blog post I will try to use my Azure backup to restore files and folders and see if we can do more tricky things.
See you soon 😉

 

Cet article How Windows Admin Center can backup on Azure est apparu en premier sur Blog dbi services.


ODA 19.6 is available


Introduction

It’s been 1 year now that 19c has been available on-premises. Finally, 19c is now available on ODA too, in a real production release. Let’s have a glance at what’s inside.

Is 19.6 available for my old ODA?

The 19.6 release, like most recent releases, is the same for all ODAs. But the oldest ODAs will not support it. The first-gen ODA (now 8 years old), X3-2 (7 years old) and X4-2 (6 years old) won’t support this very latest version. The X4-2 just gets stuck at 18.8, as this 19.6 release and later ones will not be compatible with that hardware. All other ODAs, from X5-2 to X8-2 in S/M/L/HA flavours, are OK to run this release.

What is the benefit of 19.6 over 18.8?

You need a 19.x ODA release to be able to run 19c databases. Why do you need 19c over an 18c database? Or why not wait for 20c? Because 19c is the long-term release of the Oracle database, meaning that you will have standard support until 2023 and extended support until 2026. Both 11gR2 and 12cR1 need extended support today, 12cR2’s standard support will end this year with no extended support available, and 18c’s standard support will end next year with no extended support available, because they are short-term releases. Patching to 19.6 makes sense, definitely.

Am I able to run older databases with 19.6?

Yes, for sure! Oracle has always been conservative on ODA, letting users run quite old databases on modern hardware. 19.6 supports 19c databases, as well as 18c, 12cR2, 12cR1 and 11gR2 databases. But you’ll need extended support if you’re still using 12cR1 and 11gR2 databases.

Does 19.6 make my 19c migration easier?

Yes, definitely. ODA has benefited from a nice odacli update-database feature for a while now, and it is now replaced by odacli move-database. By moving a database to a new home, you can upgrade it to a new version without using another tool.
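
I won’t detail the syntax here; the safest reference is the built-in help on a freshly patched ODA, for example:

odacli list-dbhomes
odacli move-database -h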

What are the main changes over 18.8?

If you were already using an older release, you won’t feel a big difference at first, but ODA 19.6 is provided with Linux 7 instead of Linux 6. This is the main reason we had to wait all this time to get this new version. This Linux is more modern, more secure and surely more capable on the newest servers.

Is it possible to upgrade to 19.6 from my current release?

If your ODA is already running the latest 18.8, it’s OK for a patch. If you’re still using older 18c releases, like 18.3, 18.5 or 18.7, you will need to upgrade to 18.8 first. Please take a look at my blog post.

If you’re coming from an older version, like 12.2 or 12.1, it will be a long journey to get to 19.6, simply because you’ll first have to upgrade to 18.3, then to 18.8, and finally to 19.6. Applying 3 or more patches is quite a big amount of work, and you may encounter more problems than when applying fewer patches.

But remember, I already advised to always consider reimaging as a different way of patching. So don’t hesitate to start from scratch, and make a complete refresh, especially if you’re still running these old versions.

What about coming from 19.5?

I hope you’re not running this version, because you cannot upgrade it to 19.6. Applying 19.6 requires a complete reimaging.

Is it complex to patch?

Patching is not complex: the operations to apply the patch are limited compared to what you would have to do on a classic server. But preparing the patch, and debugging if something goes wrong, is complex. If you have never patched an ODA before, it can be challenging. It is even more challenging because of the major OS upgrade to Linux 7. And the documentation gives a hint of what can happen when you read the part “Recovering from a Failed Operating System Upgrade”. If your ODA system has been tuned over the past years, it will clearly be tough to patch. Once again, consider reimaging before going straight to the patch. Reimaging is always successful if everything has been backed up before.

What’s new for Standard Edition 2 users?

As you may know, RAC is no longer provided with 19c SE2, but you now have a free High Availability feature in this version that brings automatic failover to the other node in case of failure. A kind of RAC One Node feature, assuming you’re using an HA ODA, so one of these four: X5-2, X6-2HA, X7-2HA or X8-2HA.

Conclusion

19c was missing on ODA, and that was quite annoying: how could we recommend this platform if 19c was not available in 2020? Now we need to deploy 19.6 on new ODAs, and reimage or patch existing ODAs, to see if this release meets all our (high) expectations. Stay tuned for my first feedback.

Cet article ODA 19.6 is available est apparu en premier sur Blog dbi services.

AWS Aurora IO:XactSync is not a PostgreSQL wait event


By Franck Pachot

.
In AWS RDS you can run two flavors of the PostgreSQL managed service: the real PostgreSQL engine, compiled from the community sources and running on EBS storage mounted by the database EC2 instance, and Aurora, which is proprietary and AWS Cloud only, where the upper layer has been taken from the community PostgreSQL. The storage layer in Aurora is completely different.

In PostgreSQL, as in most RDBMS except for exclusive fast load operations, the user session backend process writes to shared memory buffers. The write to disk, by the session process or a background writer process, is asynchronous. Because the buffer writes are usually random, they would add latency that would slow the response time if synced immediately. By doing them asynchronously, the user doesn’t wait on them. And doing them at regular intervals (checkpoints), they can be re-ordered to reduce the seek time and avoid writing the same buffer multiple times. As you don’t want to lose your committed changes or see the uncommitted ones in case of instance failure, the memory buffer changes are protected by the WAL (Write-Ahead Logging). Even when the session process is performing the write, there’s no need to sync that to the storage until the end of the transaction. The only place where the end-user must wait for a physical write sync is at commit because we must ensure that the WAL went to durable storage before acknowledging the commit. But those writes being sequential, this log-at-commit should be fast. A well-designed application should not have more than one commit per user interaction and then a few milliseconds is not perceptible. That’s the idea: write to memory only, without high latency system calls (visible through wait events), and it is only at commit that we can wait for the latency of a physical write, usually on local storage because the goal is to protect from memory loss only, and fast disks because the capacity is under control (only the WAL between two checkpoints is required).
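
As a reminder, on a community PostgreSQL (or RDS PostgreSQL) you can quickly check the parameters that control this behaviour, the WAL sync at commit and the checkpoint frequency:

postgres=> show synchronous_commit;
postgres=> show wal_sync_method;
postgres=> show checkpoint_timeout;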

Aurora is completely different. For High Availability reasons, the storage is not local but distributed to multiple data centers (3 Availability Zones in the database cluster region). And in order to stay highly available even in case of a full AZ failure, the blocks are mirrored within each AZ. This means that each buffer write would actually be written to 6 copies on remote storage. That would make the backend, and the writer, too busy to complete the checkpoints. For this reason, AWS decided to offload this work to the Aurora storage instances. The database instances ship only the WAL (redo log). They apply it locally to maintain their shared buffers, but that stays in memory. All checkpointing and recovery is done by the storage instances (which contain some PostgreSQL code for that). In addition to High Availability, Aurora can expose the storage to some read replicas for performance reasons (scale out the reads). And in order to reduce the time to recover, an instance can be opened immediately even when there's still some redo to apply to recover to the point of failure. As the Aurora storage tier is autonomous for this, the redo is applied on the fly when a block is read.

For my Oracle readers, here is how I summarized this once:

A consequence of this Aurora design is that the session backend process sends the changes (WAL) directly and waits for the sync acknowledgment on COMMIT. This obviously adds latency because everything goes over the network: a trade-off between performance and availability. In this blog I'm showing an example of row-by-row inserts running in RDS PostgreSQL and in Aurora PostgreSQL. And a new design for writes means a new wait event: XactSync is not a PostgreSQL wait event but an Aurora-only one, encountered when syncing at commit.

It is usually a bad idea to design an application with too frequent commits. When we have a lot of rows to change, it is better to do it in one transaction, or in several transactions with intermediate commits, rather than committing each row. This recommendation is even more important in Aurora (and that's the main goal of this post). I'll run my example with different sizes of intermediate commits.

Aurora instance

I have created an Aurora PostgreSQL 11.6 database running on db.r5.large in my lab account. From the default configuration, here are the settings I changed: I made it publicly accessible (that's a temporary lab), and I disabled automatic backups, encryption, Performance Insights, enhanced monitoring and deletion protection, as I don't need them here. A few screenshots from that:

I have created a ‘demo’ database and set an easy master password.
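
For reference, here is a minimal AWS CLI sketch of an equivalent setup (the identifiers and the password below are placeholders, not the exact ones I used):

# Create the Aurora PostgreSQL 11.6 cluster and one db.r5.large instance in it
aws rds create-db-cluster \
  --db-cluster-identifier aur-pg \
  --engine aurora-postgresql --engine-version 11.6 \
  --master-username postgres --master-user-password '********' \
  --database-name demo
aws rds create-db-instance \
  --db-instance-identifier aur-pg-instance-1 \
  --db-cluster-identifier aur-pg \
  --engine aurora-postgresql \
  --db-instance-class db.r5.large \
  --publicly-accessible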

Create table

I’ll create a demo table and a my_insert procedure where the first parameter is the number of rows to insert and the second parameter is the commit count (I commit only after this number of rows is inserted):


cat > /tmp/cre.sql<<'SQL'
drop table if exists demo;
create table demo (id int primary key,n int);
create or replace procedure my_insert(rows_per_call int,rows_per_xact int) as $$
declare
 row_counter int :=0;
begin for i in 1..rows_per_call loop
   insert into demo values (i,i);
   row_counter:=row_counter+1;
   if row_counter>=rows_per_xact then commit; row_counter:=0; end if;
 end loop; commit; end;
$$ language plpgsql;
\timing on
truncate table demo;
\x
select * from pg_stat_database where datname='demo';
select * from pg_stat_database where datname='demo' \gset
call my_insert(1e6::int,1000);
select *,tup_inserted-:tup_inserted"+ tup_inserted",xact_commit-:xact_commit"+ xact_commit" from pg_stat_database where datname='demo';
select * from pg_stat_database where datname='demo';
select * from pg_stat_wal_receiver;
select * from pg_stat_bgwriter;
create extension pg_stat_statements;
select version();
SQL

I saved that into /tmp/cre.sql and ran it:


db=aur-pg.cluster-crtkf6muowez.us-east-1.rds.amazonaws.com
PGPASSWORD=FranckPachot psql -p 5432 -h $db -U postgres demo -e < /tmp/cre.sql

Here is the result:


select * from pg_stat_database where datname='demo';
-[ RECORD 1 ]--+------------------------------
datid          | 16400
datname        | demo
numbackends    | 1
xact_commit    | 4
xact_rollback  | 0
blks_read      | 184
blks_hit       | 1295
tup_returned   | 773
tup_fetched    | 697
tup_inserted   | 23
tup_updated    | 0
tup_deleted    | 0
conflicts      | 0
temp_files     | 0
temp_bytes     | 0
deadlocks      | 0
blk_read_time  | 270.854
blk_write_time | 0
stats_reset    | 2020-04-26 15:03:30.796362+00

Time: 176.913 ms
select * from pg_stat_database where datname='demo'
Time: 110.739 ms
call my_insert(1e6::int,1000);
CALL
Time: 17780.118 ms (00:17.780)
select *,tup_inserted-23"+ tup_inserted",xact_commit-4"+ xact_commit" from pg_stat_database where datname='demo';
-[ RECORD 1 ]--+------------------------------
datid          | 16400
datname        | demo
numbackends    | 1
xact_commit    | 1017
xact_rollback  | 0
blks_read      | 7394
blks_hit       | 2173956
tup_returned   | 5123
tup_fetched    | 839
tup_inserted   | 1000023
tup_updated    | 2
tup_deleted    | 0
conflicts      | 0
temp_files     | 0
temp_bytes     | 0
deadlocks      | 0
blk_read_time  | 317.762
blk_write_time | 0
stats_reset    | 2020-04-26 15:03:30.796362+00
+ tup_inserted | 1000000
+ xact_commit  | 1013

That validates the statistics I gather: one million inserts with a commit every 1000 rows. And the important point: 17.7 seconds to insert those 1,000,000 rows on Aurora.

Run on RDS Aurora

Ok, 17 seconds is not enough for me. In the following tests I'll insert 10 million rows. And let's start by doing that in one big transaction:


cat > /tmp/run.sql <<'SQL'
truncate table demo;
vacuum demo;
\x
select * from pg_stat_database where datname='demo' \gset
\timing on
call my_insert(1e7::int,10000000);
\timing off
select tup_inserted-:tup_inserted"+ tup_inserted",xact_commit-:xact_commit"+ xact_commit" from pg_stat_database where datname='demo';
SQL

I ran this on my Aurora instance:


[opc@b aws]$ PGPASSWORD=FranckPachot psql -p 5432 -h $db -U postgres demo -e < /tmp/run.sql
truncate table demo;
TRUNCATE TABLE
vacuum demo;
VACUUM
Expanded display is on.
select * from pg_stat_database where datname='demo'
Timing is on.
call my_insert(1e7::int,10000000);
CALL
Time: 133366.305 ms (02:13.366)
Timing is off.
select tup_inserted-35098070"+ tup_inserted",xact_commit-28249"+ xact_commit" from pg_stat_database where datname='demo';
-[ RECORD 1 ]--+---------
+ tup_inserted | 10000000
+ xact_commit  | 62

This runs the call for 10 million rows with a commit every 10 million rows, which means all in one transaction. The database stats show very few commits (there may be some irrelevant background jobs running at the same time). Elapsed time for the inserts: 2:13 minutes.

I did not enable Performance Insights, so I'll run a quick-and-dirty active session sampling:


while true ; do echo "select pg_sleep(0.1);select state,to_char(clock_timestamp(),'hh24:mi:ss'),coalesce(wait_event_type,'-null-'),wait_event,query from pg_stat_activity where application_name='psql' and pid != pg_backend_pid();" ; done | PGPASSWORD=FranckPachot psql -p 5432 -h $db -U postgres demo | awk -F "|" '/^ *active/{print > "/tmp/ash.txt";$2="";c[$0]=c[$0]+1}END{for (i in c){printf "%8d %s\n", c[i],i}}' | sort -n & PGPASSWORD=FranckPachot psql -p 5432 -h $db -U postgres demo -e < /tmp/run.sql ; pkill psql

This runs, in the background, a query on PG_STAT_ACTIVITY piped to an AWK script that counts the number of samples per wait event. No wait event means not waiting (probably on CPU) and I label it as '-null-'.
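
For readability, here is the same sampling idea written as a small script rather than a one-liner (a sketch, assuming $db and PGPASSWORD are set as in the rest of this post):

# Sample pg_stat_activity roughly every 0.1s and count wait events for the other psql session
samples=/tmp/ash_samples.txt
: > "$samples"
for i in $(seq 1 600); do   # ~600 samples; adjust to the duration of the run being observed
  psql -p 5432 -h "$db" -U postgres demo -A -t -F '|' -c "
    select coalesce(wait_event_type,'-null-'), coalesce(wait_event,''), query
      from pg_stat_activity
     where state='active' and application_name='psql' and pid != pg_backend_pid();
  " >> "$samples"
  sleep 0.1
done
# Aggregate: number of samples per (wait_event_type, wait_event, query)
sort "$samples" | uniq -c | sort -n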

Here is the result for the same run: 2:15, which shows that there's no overhead from my sampling running in another session with plenty of CPU available:


Expanded display is on.
select * from pg_stat_database where datname='demo'
Timing is on.
call my_insert(1e7::int,10000000);
CALL
Time: 135397.016 ms (02:15.397)
Timing is off.
select tup_inserted-45098070"+ tup_inserted",xact_commit-28338"+ xact_commit" from pg_stat_database where datname='demo';
-[ RECORD 1 ]--+----
+ tup_inserted | 0
+ xact_commit  | 819

[opc@b aws]$
       1  active    IO         XactSync     call my_insert(1e7::int,10000000);
      30  active    IO         WALWrite     call my_insert(1e7::int,10000000);
     352  active    -null-                  call my_insert(1e7::int,10000000);
[opc@b aws]$

Most of the time (92% of the pg_stat_activity samples) my session is not on a wait event, which means it is running on CPU. 8% of the samples are on "WALWrite" and only one sample is on "XactSync".

This "XactSync" wait event is something that you cannot find in the PostgreSQL documentation, but it is in the Aurora documentation:
IO:XactSync – In this wait event, a session is issuing a COMMIT or ROLLBACK, requiring the current transaction’s changes to be persisted. Aurora is waiting for Aurora storage to acknowledge persistence.

The WALWrite samples are the session process sending the WAL but not waiting for the acknowledgment (this is my guess from the few wait samples, as I've not seen this internal Aurora behaviour documented).

I'll now run with different commit sizes, starting from the maximum (all in one transaction) down to a commit every 100 rows. I'm copying only the interesting parts: the elapsed time and the wait event sample counts:


call my_insert(1e7::int,1e7::int);
CALL
Time: 135868.757 ms (02:15.869)

       1  active    -null-                  vacuum demo;
      44  active    IO         WALWrite     call my_insert(1e7::int,1e7::int);
     346  active    -null-                  call my_insert(1e7::int,1e7::int);

call my_insert(1e7::int,1e6::int);
CALL
Time: 136080.102 ms (02:16.080)

       3  active    IO         XactSync     call my_insert(1e7::int,1e6::int);
      38  active    IO         WALWrite     call my_insert(1e7::int,1e6::int);
     349  active    -null-                  call my_insert(1e7::int,1e6::int);

call my_insert(1e7::int,1e5::int);
CALL
Time: 133373.694 ms (02:13.374)


      30  active    IO         XactSync     call my_insert(1e7::int,1e5::int);
      35  active    IO         WALWrite     call my_insert(1e7::int,1e5::int);
     315  active    -null-                  call my_insert(1e7::int,1e5::int);

call my_insert(1e7::int,1e4::int);
CALL
Time: 141820.013 ms (02:21.820)

      32  active    IO         WALWrite     call my_insert(1e7::int,1e4::int);
      91  active    IO         XactSync     call my_insert(1e7::int,1e4::int);
     283  active    -null-                  call my_insert(1e7::int,1e4::int);

call my_insert(1e7::int,1e3::int);
CALL
Time: 177596.155 ms (02:57.596)

       1  active    -null-                  select tup_inserted-95098070"+ tup_inserted",xact_commit-33758"+ xact_commit" from pg_stat_database where datname='demo';
      32  active    IO         WALWrite     call my_insert(1e7::int,1e3::int);
     250  active    IO         XactSync     call my_insert(1e7::int,1e3::int);
     252  active    -null-                  call my_insert(1e7::int,1e3::int);

call my_insert(1e7::int,1e2::int);
CALL
Time: 373504.413 ms (06:13.504)

      24  active    IO         WALWrite     call my_insert(1e7::int,1e2::int);
     249  active    -null-                  call my_insert(1e7::int,1e2::int);
     776  active    IO         XactSync     call my_insert(1e7::int,1e2::int);

Even if the frequency of XactSync increases with the number of commits, the elapsed time is nearly the same as long as the commit size stays above ten thousand rows. Then it quickly increases, and committing every 100 rows brings the elapsed time up to 6 minutes, with 74% of the time waiting on XactSync.

This wait event being specific to the way the Aurora foreground process sends the redo to the storage through the network, I'll run the same test with a normal PostgreSQL (RDS) running on mounted block storage (EBS).

I'm adding here some CloudWatch metrics gathered during those runs, starting at 11:30 with the large commit sizes, down to the row-by-row commit still running at 12:00.

As you can see, I've also added the ones for the next test, with RDS PostgreSQL running the same workload between 12:15 and 12:30, which is detailed in the next post: https://blog.dbi-services.com/aws-aurora-vs-rds-postgresql/. Interpreting those results is beyond the scope of this blog post. We have seen that in the frequent-commit run most of the time is spent on the IO:XactSync wait event. However, CloudWatch shows 50% CPU utilization, which means, for r5.large, one thread always on CPU. This is not exactly what I expect from a wait event. But, quoting Kevin Closson, everything is a CPU problem, isn't it?

The article AWS Aurora IO:XactSync is not a PostgreSQL wait event appeared first on the dbi services blog.

AWS Aurora vs. RDS PostgreSQL on frequent commits


This post is the second part of https://blog.dbi-services.com/aws-aurora-xactsync-batch-commit/ where I ran row-by-row inserts on AWS Aurora with different sizes of intermediate commits. Unsurprisingly, the commit-each-row anti-pattern has a negative effect on performance. And I mentioned that this is even worse in Aurora, where the session process sends the WAL directly to the network storage and waits, at commit, until it is acknowledged by at least 4 out of the 6 replicas. An Aurora-specific wait event is sampled on these waits: XactSync. At the end of that post I added some CloudWatch statistics about the same workload running in RDS, but with the EBS-based PostgreSQL rather than the Aurora engine. The storage is then an EBS volume mounted on the EC2 instance. Here is the result, in this post, with Single-AZ storage, and a second run where the volume is synchronized in Multi-AZ.

TL;DR: I have put all that into a small graph showing the elapsed time for all runs from the previous and current post. Please keep in mind that this is not a benchmark, just one test with one specific workload. I've put everything needed to reproduce it below. Remember that the procedure my_insert(1e7,1e2) inserts 10 million rows with a commit every 100 rows.

Before explaining the results, here is the detail. I'll keep it short as it is the same as in the last post: the Aurora PostgreSQL engine exposes the same SQL API, so I changed only the endpoint in the $db environment variable.
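
Concretely, only the endpoint changes between the two runs (the hostnames below are placeholders):

db=aur-pg.cluster-xxxxxxxxxxxx.us-east-1.rds.amazonaws.com   # Aurora PostgreSQL run
#db=rds-pg.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com          # RDS PostgreSQL run
PGPASSWORD=FranckPachot psql -p 5432 -h "$db" -U postgres demo -e < /tmp/run.sql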

RDS PostgreSQL

I have created a database on RDS with the real PostgreSQL engine, the same way as I did in the previous post with Aurora:

In order to run on the same EC2 shape, I selected the "Memory Optimized classes" db.r5.large, and in this test I disabled Multi-AZ.
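
As a side note, the equivalent AWS CLI call would look like this (a sketch: the identifier, password and storage size are placeholders; the 1000 provisioned IOPS matches what is mentioned later in this post):

aws rds create-db-instance \
  --db-instance-identifier rds-pg \
  --engine postgres --engine-version 11.6 \
  --db-instance-class db.r5.large \
  --master-username postgres --master-user-password '********' \
  --allocated-storage 100 --storage-type io1 --iops 1000 \
  --no-multi-az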

Here is the single run in one transaction:


[opc@b aws]$ PGPASSWORD=FranckPachot psql -p 5432 -h $db -U postgres demo -e < /tmp/run.sql
truncate table demo;
TRUNCATE TABLE
vacuum demo;
VACUUM
Expanded display is on.
select * from pg_stat_database where datname='demo'
Timing is on.
call my_insert(1e7::int,10000000);
CALL
Time: 55908.947 ms (00:55.909)
Timing is off.
select tup_inserted-1000070"+ tup_inserted",xact_commit-1063"+ xact_commit" from pg_stat_database where datname='demo';
-[ RECORD 1 ]--+---------
+ tup_inserted | 10000000
+ xact_commit  | 14

Here it is with my quick active session history sampling:


[opc@b aws]$ while true ; do echo "select pg_sleep(0.1);select state,to_char(clock_timestamp(),'hh24:mi:ss'),coalesce(wait_event_type,'-null-'),wait_event,query from pg_stat_activity where application_name='psql' and pid != pg_backend_pid();" ; done | PGPASSWORD=FranckPachot psql -p 5432 -h $db -U postgres demo | awk -F "|" '/^ *active/{$2="";c[$0]=c[$0]+1}END{for (i in c){printf "%8d %s\n", c[i],i}}' | sort -n & PGPASSWORD=FranckPachot psql -p 5432 -h $db -U postgres demo -e < /tmp/run.sql ; pkill psql
[1] 29583
truncate table demo;
TRUNCATE TABLE
vacuum demo;
VACUUM
Expanded display is on.
select * from pg_stat_database where datname='demo'
Timing is on.
call my_insert(1e7::int,10000000);
CALL
Time: 56175.302 ms (00:56.175)
Timing is off.
select tup_inserted-11000070"+ tup_inserted",xact_commit-1102"+ xact_commit" from pg_stat_database where datname='demo';
-[ RECORD 1 ]--+---------
+ tup_inserted | 10000000
+ xact_commit  | 346

       3  active    IO         DataFileExtend   call my_insert(1e7::int,10000000);
     160  active    -null-                  call my_insert(1e7::int,10000000);
[opc@b aws]$

No wait samples except for the data file extension.

Now running the loop with decreasing commit size:


call my_insert(1e7::int,1e7::int);
CALL
Time: 50659.978 ms (00:50.660)

       2  active    IO         DataFileExtend   call my_insert(1e7::int,1e7::int);
     139  active    -null-                  call my_insert(1e7::int,1e7::int);

call my_insert(1e7::int,1e6::int);
CALL
Time: 51760.251 ms (00:51.760)

       2  active    IO         DataFileExtend   call my_insert(1e7::int,1e6::int);
     154  active    -null-                  call my_insert(1e7::int,1e6::int);

call my_insert(1e7::int,1e5::int);
CALL
Time: 50694.917 ms (00:50.695)

       3  active    IO         DataFileExtend   call my_insert(1e7::int,1e5::int);
     139  active    -null-                  call my_insert(1e7::int,1e5::int);

call my_insert(1e7::int,1e4::int);
CALL
Time: 52569.108 ms (00:52.569)

       1  active    IO         WALWrite     call my_insert(1e7::int,1e4::int);
       3  active    IO         DataFileExtend   call my_insert(1e7::int,1e4::int);
     151  active    -null-                  call my_insert(1e7::int,1e4::int);

call my_insert(1e7::int,1e3::int);
CALL
Time: 60799.896 ms (01:00.800)

       4  active    IO         DataFileExtend   call my_insert(1e7::int,1e3::int);
       5  active    IO         WALWrite     call my_insert(1e7::int,1e3::int);
     186  active    -null-                  call my_insert(1e7::int,1e3::int);

call my_insert(1e7::int,1e2::int);
CALL
Time: 108391.218 ms (01:48.391)

       1  active    IO         WALWrite     call my_insert(1e7::int,1e2::int);
       1  active    LWLock     WALWriteLock   call my_insert(1e7::int,1e2::int);
       1  active    -null-                  vacuum demo;
       2  active    IO         DataFileExtend   call my_insert(1e7::int,1e2::int);
     315  active    -null-                  call my_insert(1e7::int,1e2::int);

When the commit frequency increases we start to see WALWrite samples, but still not a lot. Please remember that this is sampling: WALWrite occurs often, especially with frequent commits, but the low-latency storage (SSD with 1000 provisioned IOPS here) makes it infrequently sampled.

The CloudWatch statistics are in the previous post so that it is easier to compare.

Multi AZ

I've created the same instance except that I've enabled "Multi-AZ deployment", where the storage is synchronously replicated to another data centre. This adds a lot of reliability: failover within a few minutes with no data loss. And a bit of performance, as backups can be offloaded to the replica. But of course, being synchronous, it increases the write latency. Here is the result of the same run with multiple commit sizes:


call my_insert(1e7::int,1e7::int);
CALL
Time: 55902.530 ms (00:55.903)

       1  active    -null-                  select * from pg_stat_database where datname='demo'
       3  active    IO         DataFileExtend   call my_insert(1e7::int,1e7::int);
     158  active    -null-                  call my_insert(1e7::int,1e7::int);

call my_insert(1e7::int,1e6::int);
CALL
Time: 64711.467 ms (01:04.711)

       3  active    Lock       relation     truncate table demo;
       6  active    IO         DataFileExtend   call my_insert(1e7::int,1e6::int);
     176  active    -null-                  call my_insert(1e7::int,1e6::int);

call my_insert(1e7::int,1e5::int);
CALL
Time: 55480.628 ms (00:55.481)

       1  active    IO         WALWrite     call my_insert(1e7::int,1e5::int);
       5  active    IO         DataFileExtend   call my_insert(1e7::int,1e5::int);
     155  active    -null-                  call my_insert(1e7::int,1e5::int);

call my_insert(1e7::int,1e4::int);
CALL
Time: 69491.522 ms (01:09.492)

       1  active    IO         DataFileImmediateSync   truncate table demo;
       1  active    IO         WALWrite     call my_insert(1e7::int,1e4::int);
       6  active    IO         DataFileExtend   call my_insert(1e7::int,1e4::int);
     192  active    -null-                  call my_insert(1e7::int,1e4::int);

call my_insert(1e7::int,1e3::int);
CALL
Time: 82964.170 ms (01:22.964)

       2  active    IO         DataFileExtend   call my_insert(1e7::int,1e3::int);
       2  active    IO         WALWrite     call my_insert(1e7::int,1e3::int);
     233  active    -null-                  call my_insert(1e7::int,1e3::int);

call my_insert(1e7::int,1e2::int);
CALL
Time: 188313.372 ms (03:08.313)

       1  active    -null-                  truncate table demo;
       2  active    IO         WALWrite     call my_insert(1e7::int,1e2::int);
       6  active    IO         DataFileExtend   call my_insert(1e7::int,1e2::int);
     562  active    -null-                  call my_insert(1e7::int,1e2::int);

The performance overhead is really small, so Multi-AZ is a good recommendation for production if the cost allows it.
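
Note that Multi-AZ can also be enabled on an existing instance, for example with the AWS CLI (a sketch with a placeholder instance identifier):

aws rds modify-db-instance \
  --db-instance-identifier rds-pg \
  --multi-az \
  --apply-immediately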

Reliability vs. Performance

Unsurprisingly, writing the redo log to remote storage increases the latency. We see a small difference between Single and Multi-AZ, and a big difference with Aurora, which replicates to three AZs. There are 5 pillars in the AWS Well-Architected Framework: Operational Excellence, Security, Reliability, Performance Efficiency and Cost Optimization. I like the 5 axes, but calling them "pillars" suggests that they all have the same height and weight, which is not the case. When you go Multi-AZ you increase the reliability, but the cost as well. When going to Aurora you push the reliability even further, but write performance decreases as well. This is no surprise: Aurora is single-master. You can scale reads with multiple replicas, but the write endpoint goes to one single instance.

And finally a larger conclusion. You can get better agility with microservices (Aurora is one database platform service built on multiple infrastructure services: compute, buffer cache, storage). You can get better scalability with replicas, reliability with synchronization, and better performance with high-performance disks and a low-latency network. But in 2020, as 40 years ago, your best investment is in application design. In this example, once you avoid row-by-row commits and are ready to commit in bulk, you are free to choose any option for your availability and cost requirements. If you don't opt for a scalable code design, just re-using a crude row-by-row API, you will get mediocre performance which will constrain your availability and scalability options.

The article AWS Aurora vs. RDS PostgreSQL on frequent commits appeared first on the dbi services blog.

Migrating From Oracle Non-CDB 19c to Oracle 20c


With Oracle 20c, the non-multitenant architecture is no longer supported, so databases will have to be migrated to a container database to use Oracle 20c. There are several methods to transform a non-CDB database into a pluggable one:
- Data Pump
- Full Transportable Tablespaces (a minimal export sketch is shown right after this list)
- Plugging the non-CDB database, upgrading the plugged database and then converting it
- Upgrading the non-CDB database, then plugging it into the container and converting it (but I am not sure this method will work with Oracle 20c, as there is no non-CDB architecture anymore)
We can find useful information about these methods in the Oracle documentation and on Mike Dietrich's blog.
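
As announced in the list above, a Full Transportable export could look like this (a minimal sketch; the directory and file names are placeholders, and the import side into the 20c CDB is not shown):

expdp system@prod19 full=y transportable=always \
      directory=DUMP_DIR dumpfile=prod19_ftex.dmp logfile=prod19_ftex.log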

In this blog I am going to use the plug-in method to migrate a non-CDB Oracle 19c database, prod19,

********* dbi services Ltd. *********
STATUS                 : OPEN
DB_UNIQUE_NAME         : prod19
OPEN_MODE              : READ WRITE
LOG_MODE               : ARCHIVELOG
DATABASE_ROLE          : PRIMARY
FLASHBACK_ON           : NO
FORCE_LOGGING          : NO
VERSION                : 19.0.0.0.0
CDB Enabled            : NO
*************************************

into an Oracle 20c container database prod20

********* dbi services Ltd. *********

STATUS                 : OPEN
DB_UNIQUE_NAME         : prod20
OPEN_MODE              : READ WRITE
LOG_MODE               : ARCHIVELOG
DATABASE_ROLE          : PRIMARY
FLASHBACK_ON           : YES
FORCE_LOGGING          : YES
VERSION                : 20.0.0.0.0
CDB Enabled            : YES
List PDB(s)  READ ONLY : PDB$SEED
List PDB(s) READ WRITE : PDB1
*************************************

The first step is to open the source database in READ ONLY mode and then generate the metadata XML file of the non-CDB prod19 database using the dbms_pdb.describe procedure.

SQL> exec DBMS_PDB.DESCRIBE('/home/oracle/upgrade/prod19.xml');

Procedure PL/SQL terminee avec succes.

SQL>

The generated XML file is used to plug the non-CDB database into the container prod20. But before plugging the database, I run the following script to detect potential errors:

set serveroutput on
DECLARE
  compatible CONSTANT VARCHAR2(3) := CASE DBMS_PDB.CHECK_PLUG_COMPATIBILITY(
      pdb_descr_file => '/home/oracle/upgrade/prod19.xml',
      pdb_name       => 'prod19')
    WHEN TRUE THEN 'YES' ELSE 'NO'
  END;
BEGIN
  DBMS_OUTPUT.PUT_LINE('Is the future PDB compatible?  ==>  ' || compatible);
END;
/

When querying PDB_PLUG_IN_VIOLATIONS, I can see the following error:

PROD19	 ERROR	   PDB's version does not match CDB's version: PDB's
		   version 19.0.0.0.0. CDB's version 20.0.0.0.0.

But, as explained by Mike Dietrich in his blog, I can ignore this error and then plug prod19 into the CDB prod20:

SQL> create pluggable database prod19 using '/home/oracle/upgrade/prod19.xml' file_name_convert=('/u01/app/oracle/oradata/PROD19/','/u01/app/oracle/oradata/PROD20/prod19/');

Base de donnees pluggable creee.

At this stage the database prod19 is plugged into the container prod20, but it still needs to be upgraded to Oracle 20c.

SQL> show pdbs

    CON_ID CON_NAME   OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED   READ ONLY  NO
3 PDB1   READ WRITE NO
4 PROD19   MOUNTED
SQL> alter session set container=prod19 ;

Session modifiee.

SQL> startup upgrade
Base de donnees pluggable ouverte.

And we can now upgrade the PDB prod19 using dbupgrade

oracle@oraadserverupgde:/home/oracle/ [prod20 (CDB$ROOT)] dbupgrade -l /home/oracle/logs -c prod19

Some minutes later, the upgrade is finished. Below is some truncated output:

...
...
------------------------------------------------------
Phases [0-106]         End Time:[2020_05_01 17:13:42]
Container Lists Inclusion:[PROD19] Exclusion:[NONE]
------------------------------------------------------

Grand Total Time: 1326s [PROD19]

 LOG FILES: (/home/oracle/upgrade/log//catupgrdprod19*.log)

Upgrade Summary Report Located in:
/home/oracle/upgrade/log//upg_summary.log

     Time: 1411s For PDB(s)

Grand Total Time: 1411s

 LOG FILES: (/home/oracle/upgrade/log//catupgrd*.log)


Grand Total Upgrade Time:    [0d:0h:23m:31s]
oracle@oraadserverupgde:/home/oracle/ [prod20 (CDB$ROOT)]

A quick check of the log files to see if everything went fine during the upgrade:

oracle@oraadserverupgde:/home/oracle/upgrade/log/ [prod19] cat upg_summary.log

Oracle Database Release 20 Post-Upgrade Status Tool    05-01-2020 17:13:2
Container Database: PROD20
[CON_ID: 4 => PROD19]

Component                               Current         Full     Elapsed Time
Name                                    Status          Version  HH:MM:SS

Oracle Server                          UPGRADED      20.2.0.0.0  00:11:26
JServer JAVA Virtual Machine           UPGRADED      20.2.0.0.0  00:02:07
Oracle XDK                             UPGRADED      20.2.0.0.0  00:00:33
Oracle Database Java Packages          UPGRADED      20.2.0.0.0  00:00:05
Oracle Text                            UPGRADED      20.2.0.0.0  00:01:02
Oracle Workspace Manager               UPGRADED      20.2.0.0.0  00:00:41
Oracle Real Application Clusters       UPGRADED      20.2.0.0.0  00:00:00
Oracle XML Database                    UPGRADED      20.2.0.0.0  00:01:57
Oracle Multimedia                      UPGRADED      20.2.0.0.0  00:00:39
LOCATOR                                UPGRADED      20.2.0.0.0  00:01:11
Datapatch                                                        00:00:30
Final Actions                                                    00:00:45
Post Upgrade                                                     00:00:06

Total Upgrade Time: 00:21:14 [CON_ID: 4 => PROD19]

Database time zone version is 32. It is older than current release time
zone version 34. Time zone upgrade is needed using the DBMS_DST package.

Grand Total Upgrade Time:    [0d:0h:23m:31s]
oracle@oraadserverupgde:/home/oracle/upgrade/log/ [prod19]

After the upgrade, we still have to convert prod19 into a proper pluggable database with the noncdb_to_pdb.sql script.

SQL> show pdbs

    CON_ID CON_NAME			  OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
	 2 PDB$SEED			  READ ONLY  NO
	 3 PDB1 			  READ WRITE NO
	 4 PROD19			  MOUNTED
SQL> alter session set container=prod19;

Session modifiee.

SQL> @?/rdbms/admin/noncdb_to_pdb.sql

After the noncdb_to_pdb script has run successfully, the PDB prod19 can now be opened in READ WRITE mode:

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           READ WRITE NO
         4 PROD19                         READ WRITE NO

The article Migrating From Oracle Non-CDB 19c to Oracle 20c appeared first on the dbi services blog.

Documentum Upgrade – Corrupt Lockbox with CS 16.4


A couple of months ago, I was working on an upgrade project from Documentum 7.x to 16.4. During this upgrade I faced a few issues, so I will try to write some blogs about all that in the coming weeks, starting today with an alleged "corrupt" Lockbox. This upgrade was part of a migration from Virtual Machines to containers (using Kubernetes) as well, therefore the upgrade was done on a staging environment built specifically for that, using the source version. The cloning process was done properly and then the upgrade started.

 

Upgrading the binaries was done without error but there were some INFO messages that I found interesting:

[dmadmin@stg-cs ~]$ cd $DCTM_BINARY/logs/
[dmadmin@stg-cs logs]$ grep -i "lock" install.log
08:35:27,490  INFO [main] com.documentum.install.shared.actions.DiActionExtractSystemResourceTarget - Extracting java system resource upgradeLockbox.sh to $DM_HOME/bin/upgradeLockbox.sh
08:35:27,586  INFO [main] com.documentum.install.shared.actions.DiActionSetPermissionTarget - performing chmod 6750 $DM_HOME/bin/upgradeLockbox.sh
08:35:33,897  INFO [main] - Did not find existing lockbox file, don't need to upgrade existing lockbox!
[dmadmin@stg-cs logs]$

 

Since it was just an INFO message, I kept a note about it to check later and proceeded with the patching and the repository upgrade. The patching went smoothly as well, but that wasn't the case for the repository upgrade, which failed right at the beginning:

[dmadmin@stg-cs logs]$ cd $DM_HOME/install/logs/
[dmadmin@stg-cs logs]$ cat install.log
11:07:01,522  INFO [main] com.documentum.install.shared.installanywhere.actions.InitializeSharedLibrary - The product name is: UniversalServerConfigurator
11:07:01,522  INFO [main] com.documentum.install.shared.installanywhere.actions.InitializeSharedLibrary - The product version is: 16.4.0000.0248
11:07:01,523  INFO [main] -
11:07:01,554  INFO [main] com.documentum.install.shared.installanywhere.actions.InitializeSharedLibrary - Done InitializeSharedLibrary ...
11:07:01,578  INFO [main] com.documentum.install.server.installanywhere.actions.DiWAServerInformation - Setting CONFIGURE_DOCBROKER value to TRUE for SERVER
11:07:01,578  INFO [main] com.documentum.install.server.installanywhere.actions.DiWAServerInformation - Setting CONFIGURE_DOCBASE value to TRUE for SERVER
11:07:02,581  INFO [main] com.documentum.install.server.installanywhere.actions.DiWAServerCheckEnvrionmentVariable - The installer was started using the dm_launch_server_config_program.sh script.
11:07:02,581  INFO [main] com.documentum.install.server.installanywhere.actions.DiWAServerCheckEnvrionmentVariable - The installer will determine the value of environment variable DOCUMENTUM.
11:07:05,581  INFO [main] com.documentum.install.server.installanywhere.actions.DiWAServerCheckEnvrionmentVariable - The installer will determine the value of environment variable PATH.
11:07:08,603  INFO [main] com.documentum.install.server.installanywhere.actions.DiWASilentConfigurationInstallationValidation - Start to validate docbase parameters.
11:07:08,610  INFO [main] com.documentum.install.server.installanywhere.actions.DiWAServerPatchExistingDocbaseAction - The installer will obtain all the DOCBASE on the machine.
11:07:10,618  INFO [main] com.documentum.install.server.installanywhere.actions.DiWAServerDocAppFolder - The installer will obtain all the DocApps which could be installed for the repository.
11:07:10,620  INFO [main] com.documentum.install.server.installanywhere.actions.DiWAServerLoadDocBaseComponentInfo - The installer will gather information about the component GR_REPO.
11:07:13,624  INFO [main] com.documentum.install.server.installanywhere.actions.DiWAServerCheckKeystoreStatusForOld - The installer will check old AEK key status.
11:07:15,173  INFO [main] com.documentum.install.server.installanywhere.actions.DiWAServerEnableLockBoxValidation - The installer will validate AEK/Lockbox fileds.
11:07:15,180  INFO [main] com.documentum.install.server.installanywhere.actions.DiWAServerCreateKeyStore - Upgrade docbase use keep aek unchanged in lockbox, will not re-create it
11:07:16,685 ERROR [main] com.documentum.install.server.installanywhere.actions.DiWAServerValidateLockboxPassphrase - Check AEK key passphrase failed
com.documentum.install.shared.common.error.DiException: Check AEK key passphrase failed
        at com.documentum.install.server.installanywhere.actions.DiWAServerValidateLockboxPassphrase.executeValidation(DiWAServerValidateLockboxPassphrase.java:133)
        at com.documentum.install.server.installanywhere.actions.DiWAServerValidateLockboxPassphrase.setup(DiWAServerValidateLockboxPassphrase.java:115)
        at com.documentum.install.shared.installanywhere.actions.InstallWizardAction.install(InstallWizardAction.java:73)
        at com.zerog.ia.installer.actions.CustomAction.installSelf(Unknown Source)
        ...
[dmadmin@stg-cs logs]$

 

At first, I thought that I had provided the wrong passphrases for the AEK or the Lockbox. Therefore, I tried to reset the server fingerprint to validate the Lockbox passphrase:

[dmadmin@stg-cs logs]$ dm_crypto_manage_lockbox -lockbox lockbox.lb -lockboxpassphrase ${lockbox_passphrase} -resetfingerprint
Lockbox lockbox.lb
Lockbox Path $DOCUMENTUM/dba/secure/lockbox.lb
The Lockbox is corrupt and failed to load.Lockbox open failed ▒pY
** Operation failed **
[dmadmin@stg-cs logs]$

 

After some investigation, I saw that I could actually work with the lockbox while using the CS 7.x binaries (the initial version, already using the lockbox), but not while using the CS 16.4 binaries. Just setting $DM_HOME to one or the other path (and reloading the environment) completely changed the behavior. Since it was therefore linked to the upgrade, I wondered whether the "INFO" message above regarding the lockbox upgrade was actually an issue. Digging deeper, I saw two new parameters in the 16.4 upgrade silent properties:

SERVER.LOCKBOX_FILE_NAME1=lockbox.lb
SERVER.LOCKBOX_PASSPHRASE.PASSWORD1=xxx

 

These two parameters weren't available, as far as I know, with CS 7.3; they only appeared with CS 16.4. When upgrading the binaries from 7.x to 16.4, you obviously don't have the 16.4 binaries installed yet, and therefore you don't have access to the 16.4 silent properties files (unless you dig them out of the compressed archive), so I used the one from 7.x, which was therefore without these two parameters. Is this the issue? Well, not really… Obviously, if you specify these two parameters, then the lockbox will be upgraded with the binaries (I didn't test it, hopefully it works…). However, it shouldn't be mandatory: the repository upgrade should also perform the required lockbox upgrade. The thing here is that we already have to provide the AEK & Lockbox passphrases in the repository upgrade, so to avoid duplicating things unnecessarily, I wanted to do that on the repository upgrade directly, where it belongs, from my point of view. Therefore, I assumed that this wasn't really the issue here (I will come back to that later in this blog…).

 

Why is the repository upgrade failing then? Well, as you can see, there is not enough information so far. I found that the passphrases I used were indeed correct, but it still failed because of the so-called "corrupt" Lockbox. What I did first was to back up the lockbox and try to upgrade it myself:

[dmadmin@stg-cs logs]$ cd $DM_HOME/bin/
[dmadmin@stg-cs bin]$
[dmadmin@stg-cs bin]$ ./upgradeLockbox.sh lockbox.lb ${lockbox_passphrase}
Lockbox lockbox.lb
Lockbox Path $DOCUMENTUM/dba/secure/lockbox.lb
Renamed $DOCUMENTUM/dba/secure/lockbox.lb to $DOCUMENTUM/dba/secure/lockbox.lb.bak.2020-2-14.11.33.17
Renamed $DOCUMENTUM/dba/secure/lockbox.lb.FCD to $DOCUMENTUM/dba/secure/lockbox.lb.FCD.bak.2020-2-14.11.33.17
Creating initial Lockbox 4.0 file...
Reading old Lockbox file into buffer... $DOCUMENTUM/dba/secure/lockbox.lb.bak.2020-2-14.11.33.17
Setting pointer to Custom SSV providers...
Importing data from old Lockbox file into new Lockbox 4.0 handle...

Done!
[dmadmin@stg-cs bin]$ 
[dmadmin@stg-cs bin]$ diff $DOCUMENTUM/dba/secure/lockbox.lb $DOCUMENTUM/dba/secure/lockbox.lb.bak.2020-2-14.11.33.17
1c1
< 4.000000|...
\ No newline at end of file
---
> 3.100000|...
\ No newline at end of file
[dmadmin@stg-cs bin]$

 

As you can see above, the upgrade of the lockbox itself works fine and it does change the version from 3.1 (used by CS 7.x) to 4.0 (used by CS 16.4). Running the repository upgrade again didn't produce any error anymore, so it was able to proceed and complete without any problem once the lockbox had been manually upgraded. However, this is only a workaround, so I wanted to get to the bottom of the issue. Therefore, with my second repository, I restored the initial lockbox, reloaded it into memory with the CS 7.x binaries and enabled the DEBUG logs on the repository upgrade installer. Obviously, the DEBUG log was quite huge so I won't put everything in this blog, but the relevant section is the following one:

[dmadmin@stg-cs bin]$ cd $DM_HOME/install/logs/
[dmadmin@stg-cs logs]$ cat install.log
12:02:28,957 DEBUG [main]  - ###################The variable is: LOG_IS_READY, value is: true
12:02:28,957 DEBUG [main]  - ###################The variable is: FORMATED_PRODUCT_VERSION_NUMBER, value is: 16.4.0000.0248
12:02:28,958  INFO [main] com.documentum.install.shared.installanywhere.actions.InitializeSharedLibrary - The product name is: UniversalServerConfigurator
12:02:28,958  INFO [main] com.documentum.install.shared.installanywhere.actions.InitializeSharedLibrary - The product version is: 16.4.0000.0248
12:02:28,958  INFO [main]  -
12:02:28,959 DEBUG [main]  - ###################The variable is: DCTM_INSTALLER_TEMP_DIR, value is: /tmp/731965.tmp
...
12:02:41,075 DEBUG [main] com.documentum.install.server.installanywhere.actions.DiWAServerCheckKeystoreStatusForOld - *******************Start action com.documentum.install.server.installanywhere.actions.DiWAServerCheckKeystoreStatusForOld***********************
12:02:41,075  INFO [main] com.documentum.install.server.installanywhere.actions.DiWAServerCheckKeystoreStatusForOld - The installer will check old AEK key status.
12:02:41,079 DEBUG [main]  - Before running the following command, the class path is /tmp/install.dir.13366/InstallerData:/tmp/install.dir.13366/InstallerData/installer.zip:$DM_HOME/dctm-server.jar:$DOCUMENTUM/dctm.jar:$DOCUMENTUM/config:$DM_HOME/bin:$DM_HOME/dctm-server.jar:$DOCUMENTUM/dctm.jar:$DOCUMENTUM/config:$DM_HOME/bin:
12:02:41,079 DEBUG [main]  - Before running the following command, the path is /usr/xpg4/bin:$JAVA_HOME/bin:$DM_HOME/bin:$DOCUMENTUM/dba:$ORACLE_HOME/bin:$JAVA_HOME/bin:$DM_HOME/bin:$DOCUMENTUM/dba:$ORACLE_HOME/bin:$DM_HOME/bin:$ORACLE_HOME/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/dmadmin/.local/bin:/home/dmadmin/bin:/bin:/usr/bin:/sbin:/usr/sbin:/usr/local/bin
12:02:42,569 DEBUG [main]  - Command line is /bin/sh -c "$DM_HOME/bin/dm_crypto_create -check -noprompt -lockbox lockbox.lb -keyname CSaek >"$DOCUMENTUM/temp/installer/installlogs/tempKeyStoreOutput201016218bfd-9e6c-3441-a81e-697068322a7391246586e34ef995978.out" 2>&1" and start in /tmp/731965.tmp. The return code of this command is 3
12:02:42,569 DEBUG [main]  - After running the above command, the class path is /tmp/install.dir.13366/InstallerData:/tmp/install.dir.13366/InstallerData/installer.zip:$DM_HOME/dctm-server.jar:$DOCUMENTUM/dctm.jar:$DOCUMENTUM/config:$DM_HOME/bin:$DM_HOME/dctm-server.jar:$DOCUMENTUM/dctm.jar:$DOCUMENTUM/config:$DM_HOME/bin:
12:02:42,569 DEBUG [main]  - After running the above command, the path is /usr/xpg4/bin:$JAVA_HOME/bin:$DM_HOME/bin:$DOCUMENTUM/dba:$ORACLE_HOME/bin:$JAVA_HOME/bin:$DM_HOME/bin:$DOCUMENTUM/dba:$ORACLE_HOME/bin:$DM_HOME/bin:$ORACLE_HOME/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/dmadmin/.local/bin:/home/dmadmin/bin:/bin:/usr/bin:/sbin:/usr/sbin:/usr/local/bin
12:02:42,629 DEBUG [main]  - ###################The variable is: SERVER.OLD_KEYSTORE_STATUS, value is: NOT_EXIST
12:02:42,629 DEBUG [main]  - ###################The variable is: SERVER.KEYSTORE_STATUS, value is: NOT_EXIST
12:02:42,629 DEBUG [main] com.documentum.install.server.installanywhere.actions.DiWAServerCheckKeystoreStatusForOld - *******************************end of action********************************
12:02:42,633 DEBUG [main] com.documentum.install.server.installanywhere.actions.DiWAServerLoadValidAEKs - Start to resolve variable
12:02:42,633 DEBUG [main] com.documentum.install.server.installanywhere.actions.DiWAServerLoadValidAEKs - Start to check condition
12:02:42,633 DEBUG [main] com.documentum.install.server.installanywhere.actions.DiWAServerLoadValidAEKs - Start to setup
12:02:42,633 DEBUG [main] com.documentum.install.server.installanywhere.actions.DiWAServerLoadValidAEKs - *******************Start action com.documentum.install.server.installanywhere.actions.DiWAServerLoadValidAEKs***********************
12:02:42,633 DEBUG [main]  - ###################The variable is: COMMON.EXIST_AEK_FILES, value is: lockbox.lb
12:02:42,633 DEBUG [main] com.documentum.install.server.installanywhere.actions.DiWAServerLoadValidAEKs - *******************************end of action********************************
12:02:42,634 DEBUG [main] com.documentum.install.server.installanywhere.actions.DiWAServerEnableLockBoxValidation - Start to resolve variable
12:02:42,634 DEBUG [main] com.documentum.install.server.installanywhere.actions.DiWAServerEnableLockBoxValidation - Start to check condition
12:02:42,634 DEBUG [main] com.documentum.install.server.installanywhere.actions.DiWAServerEnableLockBoxValidation - Start to setup
12:02:42,634 DEBUG [main] com.documentum.install.server.installanywhere.actions.DiWAServerEnableLockBoxValidation - *******************Start action com.documentum.install.server.installanywhere.actions.DiWAServerEnableLockBoxValidation***********************
12:02:42,634  INFO [main] com.documentum.install.server.installanywhere.actions.DiWAServerEnableLockBoxValidation - The installer will validate AEK/Lockbox fileds.
12:02:42,637 DEBUG [main] com.documentum.install.server.installanywhere.actions.DiWAServerEnableLockBoxValidation - *******************************end of action********************************
12:02:42,638 DEBUG [main] com.documentum.install.server.installanywhere.actions.DiWAServerCreateKeyStore - Start to resolve variable
12:02:42,639 DEBUG [main] com.documentum.install.server.installanywhere.actions.DiWAServerCreateKeyStore - Start to check condition
12:02:42,639 DEBUG [main] com.documentum.install.server.installanywhere.actions.DiWAServerCreateKeyStore - Start to setup
12:02:42,639 DEBUG [main] com.documentum.install.server.installanywhere.actions.DiWAServerCreateKeyStore - *******************Start action com.documentum.install.server.installanywhere.actions.DiWAServerCreateKeyStore***********************
12:02:42,639 DEBUG [main]  - ###################The variable is: SERVER.KEYSTORE_FILE, value is: $DOCUMENTUM/dba/secure/CSaek
12:02:42,642  INFO [main] com.documentum.install.server.installanywhere.actions.DiWAServerCreateKeyStore - Upgrade docbase use keep aek unchanged in lockbox, will not re-create it
12:02:42,642 DEBUG [main] com.documentum.install.server.installanywhere.actions.DiWAServerCreateKeyStore - *******************************end of action********************************
12:02:42,643 DEBUG [main] com.documentum.install.server.installanywhere.actions.DiWAServerValidateLockboxPassphrase - Start to resolve variable
12:02:42,644 DEBUG [main] com.documentum.install.server.installanywhere.actions.DiWAServerValidateLockboxPassphrase - Start to check condition
12:02:42,644 DEBUG [main] com.documentum.install.server.installanywhere.actions.DiWAServerValidateLockboxPassphrase - Start to setup
12:02:42,644 DEBUG [main] com.documentum.install.server.installanywhere.actions.DiWAServerValidateLockboxPassphrase - *******************Start action com.documentum.install.server.installanywhere.actions.DiWAServerValidateLockboxPassphrase***********************
12:02:42,644 DEBUG [main]  - Before running the following command, the class path is /tmp/install.dir.13366/InstallerData:/tmp/install.dir.13366/InstallerData/installer.zip:$DM_HOME/dctm-server.jar:$DOCUMENTUM/dctm.jar:$DOCUMENTUM/config:$DM_HOME/bin:$DM_HOME/dctm-server.jar:$DOCUMENTUM/dctm.jar:$DOCUMENTUM/config:$DM_HOME/bin:
12:02:42,644 DEBUG [main]  - Before running the following command, the path is /usr/xpg4/bin:$JAVA_HOME/bin:$DM_HOME/bin:$DOCUMENTUM/dba:$ORACLE_HOME/bin:$JAVA_HOME/bin:$DM_HOME/bin:$DOCUMENTUM/dba:$ORACLE_HOME/bin:$DM_HOME/bin:$ORACLE_HOME/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/dmadmin/.local/bin:/home/dmadmin/bin:/bin:/usr/bin:/sbin:/usr/sbin:/usr/local/bin
12:02:44,112 DEBUG [main]  - Command line is /bin/sh -c "dm_crypto_create -check -noprompt -passphrase ****** -lockbox lockbox.lb -keyname CSaek > /tmp/731965.tmp/dm_crypto_create_check362671.log" and start in $DM_HOME/bin. The return code of this command is 3
12:02:44,113 DEBUG [main]  - After running the above command, the class path is /tmp/install.dir.13366/InstallerData:/tmp/install.dir.13366/InstallerData/installer.zip:$DM_HOME/dctm-server.jar:$DOCUMENTUM/dctm.jar:$DOCUMENTUM/config:$DM_HOME/bin:$DM_HOME/dctm-server.jar:$DOCUMENTUM/dctm.jar:$DOCUMENTUM/config:$DM_HOME/bin:
12:02:44,113 DEBUG [main]  - After running the above command, the path is /usr/xpg4/bin:$JAVA_HOME/bin:$DM_HOME/bin:$DOCUMENTUM/dba:$ORACLE_HOME/bin:$JAVA_HOME/bin:$DM_HOME/bin:$DOCUMENTUM/dba:$ORACLE_HOME/bin:$DM_HOME/bin:$ORACLE_HOME/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/dmadmin/.local/bin:/home/dmadmin/bin:/bin:/usr/bin:/sbin:/usr/sbin:/usr/local/bin
12:02:44,165 DEBUG [main] com.documentum.install.shared.common.error.DiException - Check AEK key passphrase failed
12:02:44,166 ERROR [main] com.documentum.install.server.installanywhere.actions.DiWAServerValidateLockboxPassphrase - Check AEK key passphrase failed
com.documentum.install.shared.common.error.DiException: Check AEK key passphrase failed
        at com.documentum.install.server.installanywhere.actions.DiWAServerValidateLockboxPassphrase.executeValidation(DiWAServerValidateLockboxPassphrase.java:133)
        at com.documentum.install.server.installanywhere.actions.DiWAServerValidateLockboxPassphrase.setup(DiWAServerValidateLockboxPassphrase.java:115)
        at com.documentum.install.shared.installanywhere.actions.InstallWizardAction.install(InstallWizardAction.java:73)
        at com.zerog.ia.installer.actions.CustomAction.installSelf(Unknown Source)
        ...
[dmadmin@stg-cs logs]$

 

As you can see above, the first strange thing is that the installer gets a "return code of this command is 3" while trying to check the "old AEK key status". If you look at the command printed in the DEBUG logs, you can see this: $DM_HOME/bin/dm_crypto_create -check -noprompt -lockbox lockbox.lb -keyname CSaek. This is actually correct if you are using the default AEK passphrase, but it's not if you are using a custom passphrase… If you look a little bit lower in the DEBUG logs, you can see the correct command used later to check the AEK: dm_crypto_create -check -noprompt -passphrase ****** -lockbox lockbox.lb -keyname CSaek. Maybe it's just a logging topic where the passphrase was completely removed from the first command while it was replaced by "******" in the second one? Anyway, I found that interesting to note…
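
Side by side, the two forms taken from the DEBUG log above make the difference obvious (${AEK_PASSPHRASE} is just a placeholder for the real custom passphrase):

# What the installer runs for the "old AEK key status" check (no passphrase,
# so it can only succeed with the default AEK passphrase):
dm_crypto_create -check -noprompt -lockbox lockbox.lb -keyname CSaek
# What it runs later to validate the lockbox passphrase (works with a custom passphrase):
dm_crypto_create -check -noprompt -passphrase "${AEK_PASSPHRASE}" -lockbox lockbox.lb -keyname CSaek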

 

Since I do not have access to the source code, I needed OpenText to confirm whether this was potentially the issue, or whether it was the first thing I mentioned earlier, related to the properties in the CS upgrade silent properties file. I opened the OpenText Service Request #4436375 and, after two months, I got the feedback that the lockbox upgrade can only be done during the CS binaries upgrade. This doesn't make any sense to me. As I said previously, the lockbox is related to a repository; it has nothing to do with the binaries. In addition, you already need to provide the lockbox details during the repository upgrade, so why do this in the binaries upgrade, which requires you to enter the passwords in yet another silent properties file? An enhancement request would have been possible but, as you probably know, Documentum 16.7 removed the support for the Lockbox, so this only impacts Documentum 16.4 and will therefore not change anymore. I guess I will stick with my workaround in our custom silent script package, so that the lockbox is upgraded, if needed (present in 7.x and the target is 16.4), just before running the repository upgrade. This is all done automatically in our Ansible playbooks, so that we can execute one single command and have the upgrade done from A to Z, successfully.
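
For reference, the workaround itself boils down to something like this, executed right before launching the repository upgrade (a sketch; ${lockbox_passphrase} is assumed to hold the lockbox passphrase, as earlier in this post):

# Upgrade the lockbox to the 4.0 format only if one is actually in place
if [ -f "$DOCUMENTUM/dba/secure/lockbox.lb" ]; then
  "$DM_HOME/bin/upgradeLockbox.sh" lockbox.lb "${lockbox_passphrase}"
fi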

The article Documentum Upgrade – Corrupt Lockbox with CS 16.4 appeared first on the dbi services blog.
