
Automatic start/stop for CONTROL-M v9


In this post, I would like to share how to implement an automatic start and stop for CONTROL-M/Server, CONTROL-M/Agent and CONTROL-M/EM, each running on its own Oracle Linux 7 server, by adding a few updates to the existing BMC procedure and scripts.
CONTROL-M is running version 9 and uses the internal PostgreSQL database.
The user created for the CONTROL-M/Server and CONTROL-M/Agent environments is ctmuser; the user created for the CONTROL-M/EM environment is emuser.

BMC CONTROL-M Workload Automation documentation

CONTROL-M/Server

With ctmuser, update the existing rc.ctmuser file provided with the BMC distribution by adding the start_ctm part (/home/ctmuser is the home directory of the installation). This part was missing from the existing script and guarantees a proper startup of the CONTROL-M Server. The rc.ctmuser script content will then be as follows.

    ctmsrv% cat /home/ctmuser/ctm_server/data/rc.ctmuser

    # start database at boot
    su - ctmuser -c dbversion
    if [ $? -eq 0 ] ; then
      echo "SQL Server is already running"
    else
      if [ -f /home/ctmuser/ctm_server/scripts/startdb ]; then
        echo "Starting SQL server for CONTROL-M"
        su - ctmuser -c startdb
        echo "Sleeping for 20"
        sleep 20
      fi
    fi

    # start CONTROL-M Server at boot
    if [ -f /home/ctmuser/ctm_server/scripts/start_ctm ]; then
      echo "Starting CONTROL-M Server"
      su - ctmuser -c /home/ctmuser/ctm_server/scripts/start_ctm &
      sleep 5
    fi

    # start CONTROL-M Configuration Agent at boot
    if [ -f /home/ctmuser/ctm_server/scripts/start_ca ]; then
      echo "Starting CONTROL-M Server Configuration Agent"
      su - ctmuser -c /home/ctmuser/ctm_server/scripts/start_ca &
      sleep 5
    fi

    exit 0

    ctmsrv%

 

The existing BMC procedure does not include any script to be run when the service or the physical server is shut down. Processes will then simply be killed, which does not guarantee a proper stop of the application and, most importantly, of the internal PostgreSQL database.
With ctmuser, create a new rc file for the stop phase. Let’s name the file rc.stop.ctmuser and place it in the same /home/ctmuser/ctm_server/data directory. The file content will be the following.

    ctmsrv% cat /home/ctmuser/ctm_server/data/rc.stop.ctmuser
    # stop CONTROL-M Configuration Agent at poweroff/reboot
    if [ -f /home/ctmuser/ctm_server/scripts/shut_ca ]; then
      echo "Stopping CONTROL-M Server Configuration Agent"
      su - ctmuser -c /home/ctmuser/ctm_server/scripts/shut_ca &
      sleep 5
    fi

    # stop CONTROL-M Server at poweroff/reboot
    if [ -f /home/ctmuser/ctm_server/scripts/shut_ctm ]; then
      echo "Stopping CONTROL-M Server"
      su - ctmuser -c /home/ctmuser/ctm_server/scripts/shut_ctm &
      sleep 5
    fi

    # stop database at poweroff/reboot
    if [ -f /home/ctmuser/ctm_server/scripts/shutdb ]; then
      echo "Stopping SQL server for CONTROL-M"
      su - ctmuser -c shutdb
      echo "Sleeping for 20"
      sleep 20
    fi


    exit 0

 

With root privileges, create the service file as below. We add the ExecStop part, which is not in the existing BMC procedure.

    [root@ctmsrv system]# cat /etc/systemd/system/ctmserver.service
    [Unit]
    Description=CONTROL-M Server
    After=systemd-user-sessions.service multi-user.target network.target
    [Service]
    ExecStart=/bin/sh -c /home/ctmuser/ctm_server/data/rc.ctmuser
    ExecStop=/bin/sh -c /home/ctmuser/ctm_server/data/rc.stop.ctmuser
    Type=forking
    RemainAfterExit=yes
    [Install]
    WantedBy=multi-user.target

 

Give appropriate file permissions and enable the service. A system reboot will be necessary.

    [root@ctmsrv system]# chmod 644 ctmserver.service
    [root@ctmsrv system]# systemctl daemon-reload
    [root@ctmsrv system]# systemctl enable ctmserver.service
    Created symlink from /etc/systemd/system/multi-user.target.wants/ctmserver.service
      to /etc/systemd/system/ctmserver.service.
    [root@ctmsrv system]# systemctl reboot

 

A functional test can be performed by running service ctmserver stop/start with root privileges and monitoring the results of the following commands (a quick sketch of the equivalent systemctl calls follows the list):

  • watch ‘ps -ef | grep ctm’
  • ctm_menu
  • service ctmserver status
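As a minimal sketch (assuming the service name and paths used above; on Oracle Linux 7 the service command maps to systemctl anyway):

    # as root: stop/start the service and watch the CONTROL-M processes disappear/reappear
    systemctl stop ctmserver.service
    systemctl status ctmserver.service
    systemctl start ctmserver.service
    watch 'ps -ef | grep ctm'
    # application-level check from the ctmuser session
    su - ctmuser -c ctm_menu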

 CONTROL-M/Agent

For the Agent we simply apply the existing BMC procedure.
With root privileges, create the service file as described below.

    [root@ctmagtli1 system]# cat /etc/systemd/system/ctmag.service
    [Unit]
    Description=CONTROL-M Agent
    [Service]
    Type=forking
    RemainAfterExit=yes
    ExecStart=/home/ctmuser/ctmagent/ctm/scripts/rc.agent_user start
    ExecStop=/home/ctmuser/ctmagent/ctm/scripts/rc.agent_user stop
    [Install]
    WantedBy=multi-user.target

 

Give appropriate file permissions and enable the service. A system reboot will be necessary.

    [root@ctmagtli1 system]# chmod 644 ctmag.service
    [root@ctmagtli1 system]# systemctl daemon-reload
    [root@ctmagtli1 system]# systemctl enable ctmag.service
    Created symlink from /etc/systemd/system/multi-user.target.wants/ctmag.service
      to /etc/systemd/system/ctmag.service.
    [root@ctmagtli1 system]# systemctl reboot

 

A functional test can be performed by running service ctmag stop/start with root privileges and monitoring the results of:

  • watch ‘ps -ef | grep ctm’
  • /home/ctmuser/ctmagent/ctm/scripts/ag_diag_comm
  • service ctmag status

CONTROL-M/EM

For EM we simply apply the existing BMC procedure.
With root privileges, create the service file as described below.

    [root@emserver system]# cat /etc/systemd/system/EM.service
    [Unit]
    Description=CONTROL-M/EM
    After=systemd-user-sessions.service multi-user.target network.target
    [Service]
    User=emuser
    ExecStart=/bin/sh -c /home/emuser/bin/start_server;/home/emuser/bin/start_ns_daemon;/home/emuser/bin/start_cms;/home/emuser/bin/start_config_agent
    Type=forking
    RemainAfterExit=yes
    ExecStop=/bin/sh -c /home/emuser/bin/stop_config_agent;/home/emuser/bin/stop_cms;/home/emuser/bin/stop_ns_daemon;/home/emuser/bin/stop_server
    [Install]
    WantedBy=multi-user.target

 

Give appropriate file permissions and enable the service. A system reboot will be necessary.

    [root@emserver system]# chmod 644 EM.service
    [root@emserver system]# systemctl daemon-reload
    [root@emserver system]# systemctl enable EM.service
    Created symlink from /etc/systemd/system/multi-user.target.wants/EM.service
      to /etc/systemd/system/EM.service.
    [root@emserver system]# systemctl reboot

 

A functional test can be performed by running service EM stop/start with root privileges and monitoring the results of:

  • watch ‘ps -ef | grep em’
  • root_menu
  • service EM status

 

 

The article Automatic start/stop for CONTROL-M v9 appeared first on Blog dbi services.


ORACLE 11g to 12c RMAN catalog migration


This is a small demo of migrating an 11g catalog (RCAT11G) to a new 12c catalog (RCAT12C).

The demo database environments were easily managed thanks to the dbi services DMK tool.

oracle@vmreforadg01:/home/oracle/ [RCAT11G] sqh
SQL*Plus: Release 11.2.0.4.0 

oracle@vmtestoradg1:/home/oracle/ [RCAT12C] sqh
SQL*Plus: Release 12.2.0.1.0

 

Current configuration

Displaying the list of databases registered in the RCAT11g catalog.

SQL> select instance_name from v$instance;

INSTANCE_NAME
----------------
RCAT11G

SQL> select * from rcat.rc_database;

    DB_KEY  DBINC_KEY       DBID NAME     RESETLOGS_CHANGE# RESETLOGS
---------- ---------- ---------- -------- ----------------- ---------
        41         42 3287252358 DB2TEST1                 1 05-JAN-18
         1          2   65811618 DB1TEST1                 1 05-JAN-18

 

Displaying the list of databases registered in the RCAT12c catalog.

SQL> select instance_name from v$instance;

INSTANCE_NAME
----------------
RCAT12C

SQL>  select * from rcat.rc_database;

no rows selected

 

Verifying the existing backup metadata in the RCAT11g catalog.

SQL> select db_name, name from rcat.rc_datafile;

DB_NAME                        NAME
------------------------------ ------------------------------------------------------------
DB2TEST1                       /u01/oradata/DB2TEST11G/system01DB2TEST11G.dbf
DB2TEST1                       /u01/oradata/DB2TEST11G/sysaux01DB2TEST11G.dbf
DB2TEST1                       /u01/oradata/DB2TEST11G/undotbs01DB2TEST11G.dbf
DB2TEST1                       /u01/oradata/DB2TEST11G/users01DB2TEST11G.dbf
DB1TEST1                       /u01/oradata/DB1TEST11G/system01DB1TEST11G.dbf
DB1TEST1                       /u01/oradata/DB1TEST11G/sysaux01DB1TEST11G.dbf
DB1TEST1                       /u01/oradata/DB1TEST11G/undotbs01DB1TEST11G.dbf
DB1TEST1                       /u01/oradata/DB1TEST11G/users01DB1TEST11G.dbf

8 rows selected.

 

Migrating RCAT11g to RCAT12c

Importing RCAT11g catalog data into RCAT12c.

oracle@vmtestoradg1:/home/oracle/ [RCAT12C] rman catalog rcat/manager
Recovery Manager: Release 12.2.0.1.0

RMAN> import catalog rcat/manager@RCAT11G;

Starting import catalog at 05-JAN-2018 13:39:56
connected to source recovery catalog database
PL/SQL package RCAT.DBMS_RCVCAT version 11.02.00.04 in IMPCAT database is too old
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of import catalog command at 01/05/2018 13:39:56
RMAN-06429: IMPCAT database is not compatible with this version of RMAN

 

When using IMPORT CATALOG, the version of the source recovery catalog schema must be equal to the current version of the destination recovery catalog schema. We therefore first need to upgrade the RCAT11g catalog schema.

oracle@vmtestoradg1:/home/oracle/ [RCAT12C] sqlplus sys/manager@RCAT11G as sysdba
SQL*Plus: Release 12.2.0.1.0

SQL> @/oracle/u01/app/oracle/product/12.2.0/db_1_0/rdbms/admin/dbmsrmansys.sql

PL/SQL procedure successfully completed.


PL/SQL procedure successfully completed.


PL/SQL procedure successfully completed.


Grant succeeded.


Grant succeeded.


Grant succeeded.


Grant succeeded.


Grant succeeded.


PL/SQL procedure successfully completed.
oracle@vmtestoradg1:/home/oracle/ [RCAT12C] rman target sys/manager catalog rcat/manager@RCAT11G
Recovery Manager: Release 12.2.0.1.0

connected to target database: RCAT12C (DBID=426487514)
connected to recovery catalog database

RMAN> upgrade catalog;

recovery catalog owner is RCAT
enter UPGRADE CATALOG command again to confirm catalog upgrade

RMAN> upgrade catalog;

recovery catalog upgraded to version 12.02.00.01
DBMS_RCVMAN package upgraded to version 12.02.00.01
DBMS_RCVCAT package upgraded to version 12.02.00.01.

 

Verifying the new version of the RCAT11g catalog.

oracle@vmreforadg01:/u00/app/oracle/network/admin/ [RCAT11G] sqlplus rcat/manager
SQL*Plus: Release 11.2.0.4.0

SQL> select * from rcver;

VERSION
------------
12.02.00.01

 

Importing the RCAT11g catalog data into the RCAT12c catalog.

oracle@vmtestoradg1:/u01/app/oracle/network/admin/ [RCAT12C] rman catalog rcat/manager
Recovery Manager: Release 12.2.0.1.0

connected to recovery catalog database

RMAN> list db_unique_name all;


RMAN> import catalog rcat/manager@RCAT11G;

Starting import catalog at 05-JAN-2018 15:21:48
connected to source recovery catalog database
import validation complete
database unregistered from the source recovery catalog
Finished import catalog at 05-JAN-2018 15:21:52

RMAN> list db_unique_name all;


List of Databases
DB Key  DB Name  DB ID            Database Role    Db_unique_name
------- ------- ----------------- ---------------  ------------------
2       DB1TEST1 65811618         PRIMARY          DB1TEST11G
42      DB2TEST1 3287252358       PRIMARY          DB2TEST11G

 

Verifying new configuration

Displaying the list of databases registered in the RCAT11g catalog.

oracle@vmreforadg01:/u00/app/oracle/network/admin/ [RCAT11G] sqh
SQL*Plus: Release 11.2.0.4.0

SQL> select * from rcat.rc_database;

no rows selected

SQL> select db_name, name from rcat.rc_datafile;

no rows selected

SQL> select DB_KEY, DB_ID, START_TIME, COMPLETION_TIME from RCAT.RC_BACKUP_SET;

no rows selected

 

Displaying the list of databases registered in the RCAT12c catalog.

oracle@vmtestoradg1:/u01/app/oracle/network/admin/ [RCAT12C] sqh
SQL*Plus: Release 12.2.0.1.0

SQL> select * from rcat.rc_database;

    DB_KEY  DBINC_KEY       DBID NAME                          RESETLOGS_CHANGE# RESETLOGS FINAL_CHANGE#
---------- ---------- ---------- ----------------------------- ----------------- --------- -------------
        42         43 3287252358 DB2TEST1                                      1 05-JAN-18
         2          3   65811618 DB1TEST1                                      1 05-JAN-18

SQL> select db_name, name from rcat.rc_datafile;

DB_NAME                        NAME
------------------------------ ------------------------------------------------------------
DB2TEST1                       /u01/oradata/DB2TEST11G/users01DB2TEST11G.dbf
DB2TEST1                       /u01/oradata/DB2TEST11G/system01DB2TEST11G.dbf
DB1TEST1                       /u01/oradata/DB1TEST11G/system01DB1TEST11G.dbf
DB1TEST1                       /u01/oradata/DB1TEST11G/sysaux01DB1TEST11G.dbf
DB2TEST1                       /u01/oradata/DB2TEST11G/undotbs01DB2TEST11G.dbf
DB2TEST1                       /u01/oradata/DB2TEST11G/sysaux01DB2TEST11G.dbf
DB1TEST1                       /u01/oradata/DB1TEST11G/undotbs01DB1TEST11G.dbf
DB1TEST1                       /u01/oradata/DB1TEST11G/users01DB1TEST11G.dbf

8 rows selected.

SQL> select DB_KEY, DB_ID, START_TIME, COMPLETION_TIME from RCAT.RC_BACKUP_SET;

    DB_KEY      DB_ID START_TIME          COMPLETION_TIME
---------- ---------- ------------------- -------------------
        42 3287252358 05/01/2018 11:32:00 05/01/2018 11:32:00
        42 3287252358 05/01/2018 11:32:02 05/01/2018 11:32:06
        42 3287252358 05/01/2018 11:32:09 05/01/2018 11:32:10
        42 3287252358 05/01/2018 11:32:12 05/01/2018 11:32:12
        42 3287252358 05/01/2018 15:33:37 05/01/2018 15:33:37
        42 3287252358 05/01/2018 15:33:40 05/01/2018 15:33:45
        42 3287252358 05/01/2018 15:33:47 05/01/2018 15:33:48
        42 3287252358 05/01/2018 15:33:50 05/01/2018 15:33:50
         2   65811618 05/01/2018 11:29:36 05/01/2018 11:29:36
         2   65811618 05/01/2018 11:29:38 05/01/2018 11:29:43
         2   65811618 05/01/2018 11:29:45 05/01/2018 11:29:46
         2   65811618 05/01/2018 11:29:48 05/01/2018 11:29:48
         2   65811618 05/01/2018 15:31:17 05/01/2018 15:31:17
         2   65811618 05/01/2018 15:31:19 05/01/2018 15:31:24
         2   65811618 05/01/2018 15:31:26 05/01/2018 15:31:27
         2   65811618 05/01/2018 15:31:29 05/01/2018 15:31:29
         2   65811618 05/01/2018 15:44:47 05/01/2018 15:44:47
         2   65811618 05/01/2018 15:44:49 05/01/2018 15:44:52
         2   65811618 05/01/2018 15:44:56 05/01/2018 15:44:57
         2   65811618 05/01/2018 15:45:00 05/01/2018 15:45:00
         2   65811618 05/01/2018 15:46:53 05/01/2018 15:46:53
         2   65811618 05/01/2018 15:46:55 05/01/2018 15:47:00
         2   65811618 05/01/2018 15:47:02 05/01/2018 15:47:03
         2   65811618 05/01/2018 15:47:05 05/01/2018 15:47:05

24 rows selected.

 

Checking that new backup metadata is recorded in the RCAT12c catalog

Generating a new backup.

oracle@vmreforadg02:/u00/app/oracle/network/admin/ [DB2TEST11G] export NLS_DATE_FORMAT="DD/MM/YYYY HH24:MI:SS"
oracle@vmreforadg02:/u00/app/oracle/network/admin/ [DB2TEST11G] rmanh
Recovery Manager: Release 11.2.0.4.0

RMAN> connect target /

connected to target database: DB2TEST1 (DBID=3287252358)

RMAN> connect catalog rcat/manager@rcat12c

connected to recovery catalog database

RMAN> backup database plus archivelog;

Starting backup at 05/01/2018 15:51:14
current log archived
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=32 device type=DISK
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=32 RECID=1 STAMP=964611118
input archived log thread=1 sequence=33 RECID=2 STAMP=964611131
input archived log thread=1 sequence=34 RECID=3 STAMP=964625616
input archived log thread=1 sequence=35 RECID=4 STAMP=964625629
input archived log thread=1 sequence=36 RECID=5 STAMP=964626674
channel ORA_DISK_1: starting piece 1 at 05/01/2018 15:51:16
channel ORA_DISK_1: finished piece 1 at 05/01/2018 15:51:17
piece handle=/u90/fast_recovery_area/DB2TEST11G/backupset/2018_01_05/o1_mf_annnn_TAG20180105T155116_f4z474b0_.bkp tag=TAG20180105T155116 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 05/01/2018 15:51:17

Starting backup at 05/01/2018 15:51:17
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=/u01/oradata/DB2TEST11G/system01DB2TEST11G.dbf
input datafile file number=00003 name=/u01/oradata/DB2TEST11G/undotbs01DB2TEST11G.dbf
input datafile file number=00002 name=/u01/oradata/DB2TEST11G/sysaux01DB2TEST11G.dbf
input datafile file number=00004 name=/u01/oradata/DB2TEST11G/users01DB2TEST11G.dbf
channel ORA_DISK_1: starting piece 1 at 05/01/2018 15:51:18
channel ORA_DISK_1: finished piece 1 at 05/01/2018 15:51:25
piece handle=/u90/fast_recovery_area/DB2TEST11G/backupset/2018_01_05/o1_mf_nnndf_TAG20180105T155117_f4z476gv_.bkp tag=TAG20180105T155117 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:07
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current control file in backup set
including current SPFILE in backup set
channel ORA_DISK_1: starting piece 1 at 05/01/2018 15:51:26
channel ORA_DISK_1: finished piece 1 at 05/01/2018 15:51:27
piece handle=/u90/fast_recovery_area/DB2TEST11G/backupset/2018_01_05/o1_mf_ncsnf_TAG20180105T155117_f4z47glg_.bkp tag=TAG20180105T155117 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 05/01/2018 15:51:27

Starting backup at 05/01/2018 15:51:27
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=37 RECID=6 STAMP=964626687
channel ORA_DISK_1: starting piece 1 at 05/01/2018 15:51:28
channel ORA_DISK_1: finished piece 1 at 05/01/2018 15:51:29
piece handle=/u90/fast_recovery_area/DB2TEST11G/backupset/2018_01_05/o1_mf_annnn_TAG20180105T155128_f4z47jn7_.bkp tag=TAG20180105T155128 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 05/01/2018 15:51:29

 

Verifying the backup sets in the new RCAT12c catalog.

oracle@vmtestoradg1:/u01/app/oracle/network/admin/ [RCAT12C] sqh
SQL*Plus: Release 12.2.0.1.0

SQL> select sysdate from dual;

SYSDATE
-------------------
05/01/2018 15:53:16

SQL> select DB_KEY, DB_ID, START_TIME, COMPLETION_TIME from RCAT.RC_BACKUP_SET;

    DB_KEY      DB_ID START_TIME          COMPLETION_TIME
---------- ---------- ------------------- -------------------
        42 3287252358 05/01/2018 11:32:00 05/01/2018 11:32:00
        42 3287252358 05/01/2018 11:32:02 05/01/2018 11:32:06
        42 3287252358 05/01/2018 11:32:09 05/01/2018 11:32:10
        42 3287252358 05/01/2018 11:32:12 05/01/2018 11:32:12
        42 3287252358 05/01/2018 15:33:37 05/01/2018 15:33:37
        42 3287252358 05/01/2018 15:33:40 05/01/2018 15:33:45
        42 3287252358 05/01/2018 15:33:47 05/01/2018 15:33:48
        42 3287252358 05/01/2018 15:33:50 05/01/2018 15:33:50
        42 3287252358 05/01/2018 15:51:16 05/01/2018 15:51:16
        42 3287252358 05/01/2018 15:51:18 05/01/2018 15:51:21
        42 3287252358 05/01/2018 15:51:25 05/01/2018 15:51:26
        42 3287252358 05/01/2018 15:51:28 05/01/2018 15:51:28
         2   65811618 05/01/2018 11:29:36 05/01/2018 11:29:36
         2   65811618 05/01/2018 11:29:38 05/01/2018 11:29:43
         2   65811618 05/01/2018 11:29:45 05/01/2018 11:29:46
         2   65811618 05/01/2018 11:29:48 05/01/2018 11:29:48
         2   65811618 05/01/2018 15:31:17 05/01/2018 15:31:17
         2   65811618 05/01/2018 15:31:19 05/01/2018 15:31:24
         2   65811618 05/01/2018 15:31:26 05/01/2018 15:31:27
         2   65811618 05/01/2018 15:31:29 05/01/2018 15:31:29
         2   65811618 05/01/2018 15:44:47 05/01/2018 15:44:47
         2   65811618 05/01/2018 15:44:49 05/01/2018 15:44:52
         2   65811618 05/01/2018 15:44:56 05/01/2018 15:44:57
         2   65811618 05/01/2018 15:45:00 05/01/2018 15:45:00
         2   65811618 05/01/2018 15:46:53 05/01/2018 15:46:53
         2   65811618 05/01/2018 15:46:55 05/01/2018 15:47:00
         2   65811618 05/01/2018 15:47:02 05/01/2018 15:47:03
         2   65811618 05/01/2018 15:47:05 05/01/2018 15:47:05

28 rows selected.

 

 

 

The article ORACLE 11g to 12c RMAN catalog migration appeared first on Blog dbi services.

RMAN debugging during catalog import


In this post I would like to share how I have been able to troubleshoot and solve a catalog import issue using the RMAN debug function.

As we can see, the error message provided by RMAN is not very helpful.

oracle@vmtestoradg1:/home/oracle/ [RCAT12C] rman catalog rcat/manager
Recovery Manager: Release 12.2.0.1.0

connected to recovery catalog database

RMAN> import catalog rcat/manager@RCAT11G;

Starting import catalog at 05-JAN-2018 14:11:45
connected to source recovery catalog database
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of import catalog command at 01/05/2018 14:11:46
RMAN-06004: ORACLE error from recovery catalog database: ORA-00933: SQL command not properly ended

RMAN>

 

Instead of digging into the various log files for more information, we can simply use the RMAN debug function.

RMAN> debug on;
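As a side note and a minimal sketch (the file names are just examples, not from this session), the debug output can also be written to a trace file by passing the debug and trace options on the RMAN command line, which makes it easier to search afterwards:

    rman catalog rcat/manager debug trace=/tmp/rman_import.trc log=/tmp/rman_import.log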

 

By running the import command again, we can extract the useful information below about the statement RMAN is complaining about.

DBGSQL:           CREATE DATABASE LINK
DBGSQL:            RCAT11G.IT.DBI-SERVICES.COM
DBGSQL:            CONNECT
DBGSQL:              TO
DBGSQL:            "RCAT"
DBGSQL:            IDENTIFIED BY
DBGSQL:            "manager"
DBGSQL:            USING
DBGSQL:            'RCAT11G'
DBGSQL:
DBGSQL:              sqlcode = 933
DBGSQL:           error: ORA-00933: SQL command not properly ended (krmkosqlerr)

 

Let’s run the statement in a SQL*Plus session to understand the failure.

SQL> conn rcat/manager
Connected.
SQL> CREATE DATABASE LINK RCAT11G.IT.DBI-SERVICES.COM
CONNECT TO "RCAT" IDENTIFIED BY "manager"
USING 'RCAT11G';  2    3
CREATE DATABASE LINK RCAT11G.IT.DBI-SERVICES.COM
                                   *
ERROR at line 1:
ORA-00933: SQL command not properly ended

 

We can quickly see that the ‘-‘ character coming from the domain name is not valid in the database link name.

Let’s temporarily update needed parameters. An instance restart will be needed.

SQL> alter system set db_domain='dbiservices' scope=spfile;

System altered.

SQL> alter system set service_names='dbiservices' scope=both;

System altered.

SQL> alter database rename GLOBAL_NAME to "RCAT11G.dbiservices";

Database altered.

 

And our next import will be successful.

RMAN> import catalog rcat/manager@RCAT11G;

Starting import catalog at 05-JAN-2018 15:21:48
connected to source recovery catalog database
import validation complete
database unregistered from the source recovery catalog
Finished import catalog at 05-JAN-2018 15:21:52

RMAN>

 

We can now restore DB_DOMAIN, SERVICE_NAMES and GLOBAL_NAME to their previous values, followed by a restart of the catalog instance.
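A minimal sketch of that restore step, with the previous values shown as placeholders since they are not listed in this post:

    -- put back the values noted before the change (placeholders to be replaced)
    alter system set db_domain='<previous_db_domain>' scope=spfile;
    alter system set service_names='<previous_service_names>' scope=both;
    alter database rename global_name to "RCAT11G.<previous_db_domain>";
    -- restart the catalog instance so that the new db_domain is taken into account
    shutdown immediate
    startup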

 

The article RMAN debugging during catalog import appeared first on Blog dbi services.

Spectre/Meltdown on Oracle Public Cloud UEK – PIO


The Spectre and Meltdown mitigations are now in the latest Oracle UEK kernel, after updating it with ‘yum update':

[opc@PTI ~]$ rpm -q --changelog kernel-uek
| awk '/CVE-2017-5715|CVE-2017-5753|CVE-2017-5754/{print $NF}' | sort | uniq -c
43 {CVE-2017-5715}
16 {CVE-2017-5753}
71 {CVE-2017-5754}

As I did in the previous post on AWS, I’ve run quick tests on the Oracle Public Cloud.

Physical reads

I’ve run some SLOB physical I/O reads with all the patches enabled, as well as with KPTI disabled, and with KPTI, IBRS and IBPB disabled.
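As a hedged sketch (not shown in the original tests, and the exact file names may differ between kernel builds), on this generation of UEK the state of the mitigations can be checked through debugfs, and they are disabled for a test run via kernel boot parameters:

    # check the current state (1 = enabled), as exposed by early 2018 UEK/RHEL kernels
    cat /sys/kernel/debug/x86/pti_enabled
    cat /sys/kernel/debug/x86/ibrs_enabled
    cat /sys/kernel/debug/x86/ibpb_enabled
    # for the 'nopti' and 'noibrs noibpb' runs, those options are added to the
    # kernel command line (GRUB_CMDLINE_LINUX) before rebooting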

And I was quite surprised by the result:


DB Time(s) : 1.0 DB CPU(s) : 0.4 Read IO requests : 23,335.6 nopti
DB Time(s) : 1.0 DB CPU(s) : 0.4 Read IO requests : 23,420.3 nopti
DB Time(s) : 1.0 DB CPU(s) : 0.4 Read IO requests : 24,857.6
DB Time(s) : 1.0 DB CPU(s) : 0.4 Read IO requests : 25,332.1


DB Time(s) : 2.0 DB CPU(s) : 0.7 Read IO requests : 39,857.7 nopti
DB Time(s) : 2.0 DB CPU(s) : 0.7 Read IO requests : 40,088.4 nopti
DB Time(s) : 2.0 DB CPU(s) : 0.7 Read IO requests : 40,627.0
DB Time(s) : 2.0 DB CPU(s) : 0.7 Read IO requests : 40,707.5


DB Time(s) : 4.0 DB CPU(s) : 0.9 Read IO requests : 47,491.4 nopti
DB Time(s) : 4.0 DB CPU(s) : 0.9 Read IO requests : 47,491.4 nopti
DB Time(s) : 4.0 DB CPU(s) : 0.9 Read IO requests : 49,438.2
DB Time(s) : 4.0 DB CPU(s) : 0.9 Read IO requests : 49,764.5


DB Time(s) : 8.0 DB CPU(s) : 1.2 Read IO requests : 54,227.9 nopti
DB Time(s) : 8.0 DB CPU(s) : 1.2 Read IO requests : 54,582.9 nopti
DB Time(s) : 8.0 DB CPU(s) : 1.3 Read IO requests : 57,288.6
DB Time(s) : 8.0 DB CPU(s) : 1.4 Read IO requests : 57,057.2

Yes. In all the tests that I’ve done, the IOPS is higher with KPTI enabled than when booting the kernel with the nopti option. Here is a graph with those numbers:

CaptureOPCPIO001

I did those tests on the Oracle Cloud because I know that we have very fast I/O here, in hundreds of microseconds, probably all cached in the storage:

Top 10 Foreground Events by Total Wait Time
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Total Wait Avg % DB Wait
Event Waits Time (sec) Wait time Class
------------------------------ ----------- ---------- --------- ------ --------
db file parallel read 196,921 288.8 1.47ms 48.0 User I/O
db file sequential read 581,073 216.3 372.31us 36.0 User I/O
DB CPU 210.5 35.0
 
% of Total Waits
----------------------------------------------- Waits
Total 1ms
Event Waits <8us <16us <32us <64us <128u <256u =512 Event to 32m <512 <1ms <2ms <4ms <8ms <16ms =32m
------------------------- ------ ----- ----- ----- ----- ----- ----- ----- ----- ------------------------- ------ ----- ----- ----- ----- ----- ----- ----- -----
db file parallel read 196.9K .0 1.0 99.0 db file parallel read 194.9K 1.0 15.4 74.7 8.5 .3 .1 .0 .0
db file sequential read 581.2K 17.3 69.5 13.3 db file sequential read 77.2K 86.7 10.7 2.3 .2 .1 .0 .0 .0
 

So what?

I expected to have higher IOPS when disabling the page table isolation, because of the overhead of context switches. And it is the opposite here. Maybe this is because I have a very small SGA (because my goal is to have only physical reads). Note also that, as far as I know, only my guest OS has been patched for Meltdown and Spectre. We will see if the numbers are different after the next Oracle Cloud maintenance.

 

The article Spectre/Meltdown on Oracle Public Cloud UEK – PIO appeared first on Blog dbi services.

Spectre and Meltdown on Oracle Public Cloud UEK


In the last post I published the strange results I got when testing physical I/O with the latest Spectre and Meltdown patches. Here is the logical I/O, with SLOB cached reads.

Logical reads

I’ve run some SLOB cached reads with the latest patches, as well as with only KPTI disabled, and with KPTI, IBRS and IBPB disabled.
I am on the Oracle Public Cloud DBaaS with 4 OCPUs.

DB Time(s) : 1.0 DB CPU(s) : 1.0 Logical read (blocks) : 670,001.2
DB Time(s) : 1.0 DB CPU(s) : 1.0 Logical read (blocks) : 671,145.4
DB Time(s) : 1.0 DB CPU(s) : 1.0 Logical read (blocks) : 672,464.0
DB Time(s) : 1.0 DB CPU(s) : 1.0 Logical read (blocks) : 685,706.7 nopti
DB Time(s) : 1.0 DB CPU(s) : 1.0 Logical read (blocks) : 689,291.3 nopti
DB Time(s) : 1.0 DB CPU(s) : 1.0 Logical read (blocks) : 689,386.4 nopti
DB Time(s) : 1.0 DB CPU(s) : 1.0 Logical read (blocks) : 699,301.3 nopti noibrs noibpb
DB Time(s) : 1.0 DB CPU(s) : 1.0 Logical read (blocks) : 704,773.3 nopti noibrs noibpb
DB Time(s) : 1.0 DB CPU(s) : 1.0 Logical read (blocks) : 704,908.2 nopti noibrs noibpb

This is what I expected: when disabling the mitigation for Meltdown (PTI), and for some of the Spectre mitigations (IBRS and IBPB), I get slightly better performance – about 5%. This is with only one SLOB session.
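Roughly, comparing the best run of each configuration above (my own back-of-the-envelope figure):

    (704,908.2 - 672,464.0) / 672,464.0 ≈ 4.8% more logical reads per second without the mitigations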

However, with 2 sessions I have something completely different:

DB Time(s) : 2.0 DB CPU(s) : 2.0 Logical read (blocks) : 1,235,637.8 nopti noibrs noibpb
DB Time(s) : 2.0 DB CPU(s) : 2.0 Logical read (blocks) : 1,237,689.6 nopti
DB Time(s) : 2.0 DB CPU(s) : 2.0 Logical read (blocks) : 1,243,464.3 nopti noibrs noibpb
DB Time(s) : 2.0 DB CPU(s) : 2.0 Logical read (blocks) : 1,247,257.4 nopti
DB Time(s) : 2.0 DB CPU(s) : 2.0 Logical read (blocks) : 1,247,257.4 nopti noibrs noibpb
DB Time(s) : 2.0 DB CPU(s) : 2.0 Logical read (blocks) : 1,251,485.1
DB Time(s) : 2.0 DB CPU(s) : 2.0 Logical read (blocks) : 1,253,477.0
DB Time(s) : 2.0 DB CPU(s) : 2.0 Logical read (blocks) : 1,271,986.7

This is not a saturation situation here. My VM shape is 4 OCPUs, which is supposed to be the equivalent of 4 hyperthreaded cores.

And this figure is even worse with 4 sessions (all cores used) and more:

DB Time(s) : 4.0 DB CPU(s) : 4.0 Logical read (blocks) : 2,268,272.3 nopti noibrs noibpb
DB Time(s) : 4.0 DB CPU(s) : 4.0 Logical read (blocks): 2,415,044.8


DB Time(s) : 6.0 DB CPU(s) : 6.0 Logical read (blocks) : 3,353,985.7 nopti noibrs noibpb
DB Time(s) : 6.0 DB CPU(s) : 6.0 Logical read (blocks): 3,540,736.5


DB Time(s) : 8.0 DB CPU(s) : 7.9 Logical read (blocks) : 4,365,752.3 nopti noibrs noibpb
DB Time(s) : 8.0 DB CPU(s) : 7.9 Logical read (blocks): 4,519,340.7

The graph from those is here:
CaptureOPCLIO001

If I compare with the Oracle PaaS I tested last year (https://blog.dbi-services.com/oracle-public-cloud-liops-with-4-ocpu-in-paas/), which was on Intel(R) Xeon(R) CPU E5-2690 v2 @ 3.00GHz, you can also see a nice improvement here on Intel(R) Xeon(R) CPU E5-2699C v4 @ 2.20GHz.

This test was on 4.1.12-112.14.10.el7uek.x86_64 and Oracle Linux has now released a new update: 4.1.12-112.14.11.el7uek
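As a side note (a trivial check, not part of the original test), the CPU model and the running kernel can be verified from the guest:

    grep -m1 'model name' /proc/cpuinfo
    uname -r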

 

The article Spectre and Meltdown on Oracle Public Cloud UEK appeared first on Blog dbi services.

Moving tables ONLINE on filegroup with constraints and LOB data


Let’s start this new week by going back to a discussion with one of my customers a couple of days ago about moving several tables into different filegroups. Some of these tables contained LOB data. Let’s add another customer requirement to the game: moving all of them ONLINE to avoid impacting data availability during the migration process. The concerned tables had schema constraints such as primary keys and foreign keys, as well as non-clustered indexes. So, a pretty common schema we may deal with daily at customer shops.

Firstly, the discussion didn’t focus on moving non-clustered indexes to a different filegroup (pretty well known to my customer) but on how to move constraints online without integrity issues. The main reason came from different pointers my customer found on the internet, where such constraints are first dropped and then recreated (by using the MOVE TO clause), and that’s why he was not very confident about moving them without introducing integrity issues.

Let’s illustrate this scenario with the following demonstration. I will use a dbo.bigTransactionHistory2 table that I want to move ONLINE from the PRIMARY to the FG1 filegroup. There is a primary key constraint on the TransactionID column as well as a foreign key on the ProductID column that refers to the ProductID column of the dbo.bigProduct table.

EXEC sp_helpconstraint 'dbo.bigTransactionHistory2';

blog 125 - 1 - bigTransactionHistory2 PK FK

Here a picture of indexes existing on the dbo.bigTransactionHistory2 table:

EXEC sp_helpindex 'dbo.bigTransactionHistory2';

blog 125 - 2 - bigTransactionHistory2 indexes

Let’s say that the pk_bigTransactionHistory_TransactionID unique clustered index is tied to the primary key constraint.

Let’s start with the first approach, based on the MOVE TO clause of DROP CONSTRAINT.

ALTER TABLE dbo.bigTransactionHistory2 DROP CONSTRAINT pk_bigTransactionHistory_TransactionID WITH (MOVE TO FG1, ONLINE = ON);

--> No constraint to avoid duplicates

ALTER TABLE dbo.bigTransactionHistory2 ADD CONSTRAINT pk_bigTransactionHistory_TransactionID PRIMARY KEY(TransactionDate, TransactionID)
WITH (ONLINE = ON);

By looking further at the script, we may quickly figure out that this approach may introduce duplicate entries between the drop constraint step (which also moves the table to the FG1 filegroup) and the create constraint step.

We might address this issue by encapsulating the above commands within a transaction. But obviously this method has a cost: we have a good chance of creating a long blocking scenario – depending on the amount of data – leading temporarily to data unavailability. The second drawback concerns performance. Indeed, we first drop the primary key constraint, meaning we drop the underlying clustered index structure in the background. Going this way implies rebuilding the related non-clustered indexes to update their leaf level with row IDs, and rebuilding them again when re-adding the primary key constraint in the second step.

From my point of view there is a better way to go if we want all the steps to be performed efficiently and ONLINE, with the guarantee that the constraints continue to be enforced during the whole moving process.

Firstly, let’s move the primary key by using a one-step command. The same applies to UNIQUE constraints. In fact, moving such a constraint only requires rebuilding the corresponding index with the DROP_EXISTING and ONLINE options, which preserves the constraint functionality. In this case, my non-clustered indexes are not touched by the operation because we don’t have to update their leaf level as with the previous method.

CREATE UNIQUE CLUSTERED INDEX pk_bigTransactionHistory_TransactionID
ON dbo.bigTransactionHistory2 ( [TransactionDate] ASC, [TransactionID] ASC )
WITH (ONLINE = ON, DROP_EXISTING = ON)
ON [FG1];

In addition, the good news is that if we try to introduce a duplicate key while the index is rebuilding on the FG1 filegroup, we will face the following error as expected:

Msg 2627, Level 14, State 1, Line 3
Violation of PRIMARY KEY constraint 'pk_bigTransactionHistory_TransactionID'.
Cannot insert duplicate key in object 'dbo.bigTransactionHistory2'. The duplicate key value is (Jan 1 2005 12:00AM, 1).

So now we may safely move the additional structures represented by the non-clustered indexes. We just have to execute the following command to move the corresponding physical structure ONLINE:

CREATE INDEX [idx_bigTransactionHistory2_ProductID]
ON dbo.bigTransactionHistory2 ( ProductID ) 
WITH (DROP_EXISTING = ON, ONLINE = ON)
ON [FG1]

 

Let’s continue with the second scenario, which consists in moving a table with LOB data ONLINE to a different filegroup. Moving such data may be more complex than we may expect. The good news is that SQL Server 2012 introduced ONLINE operation capabilities for such indexes, and my customer runs SQL Server 2014.

For the demonstration let’s go back to the previous demo and introduce a new [other infos] column with VARCHAR(MAX) data. Here is the new definition of the dbo.bigTransactionHistory2 table:

CREATE TABLE [dbo].[bigTransactionHistory2](
	[TransactionID] [bigint] NOT NULL,
	[ProductID] [int] NOT NULL,
	[TransactionDate] [datetime] NOT NULL,
	[Quantity] [int] NULL,
	[ActualCost] [money] NULL,
	[other infos] [varchar](max) NULL,
 CONSTRAINT [pk_bigTransactionHistory_TransactionID] PRIMARY KEY CLUSTERED 
(
	[TransactionID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY],
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO

Let’s take a look at the table’s underlying structure:

SELECT 
	OBJECT_NAME(p.object_id) AS table_name,
	p.index_id,
	p.rows,
	au.type_desc AS alloc_unit_type,
	au.used_pages,
	fg.name AS fg_name
FROM 
	sys.partitions as p
JOIN 
	sys.allocation_units AS au on p.hobt_id = au.container_id
JOIN	
	sys.filegroups AS fg on fg.data_space_id = au.data_space_id
WHERE
	p.object_id = OBJECT_ID('bigTransactionHistory2')
ORDER BY
	table_name, index_id, alloc_unit_type

 

blog 125 - 3 - bigTransactionHistory2 with LOB

A new LOB_DATA allocation unit type is there and indicates the table contains LOB data for all the index structures. At this stage, we may think that the previous way of moving the unique clustered index online is sufficient, but it is not, according to the output below:

CREATE UNIQUE CLUSTERED INDEX pk_bigTransactionHistory_TransactionID
ON dbo.bigTransactionHistory2 ( [TransactionID] )
WITH (ONLINE = ON, DROP_EXISTING = ON)
ON [FG1];

blog 125 - 4 - bigTransactionHistory2 move LOB data

In fact, only data in the IN_ROW_DATA allocation units moved from the PRIMARY to the FG1 filegroup. In this context, moving LOB data is a non-trivial operation and I had to use a solution based on one proposed here by Kimberly L. Tripp from SQLskills (definitely one of my favorite sources for tricky scenarios). So partitioning is the way to go. Following the SQLskills solution, I created a temporary partition function and scheme as shown below:

SELECT MAX([TransactionID])
FROM dbo.bigTransactionHistory2
-- 6910883
GO


CREATE PARTITION FUNCTION pf_bigTransaction_history2_temp (BIGINT)
AS RANGE RIGHT FOR VALUES (6920000)
GO

CREATE PARTITION SCHEME ps_bigTransaction_history2_temp
AS PARTITION pf_bigTransaction_history2_temp
TO ( [FG1], [PRIMARY] )
GO

Applying the scheme to the dbo.bigTransactionHistory2 table allows us to move all data (IN_ROW_DATA and LOB_DATA) from the PRIMARY to the FG1 filegroup as shown below:

CREATE UNIQUE CLUSTERED INDEX pk_bigTransactionHistory_TransactionID
ON dbo.bigTransactionHistory2 ( [TransactionID] ASC )
WITH (ONLINE = ON, DROP_EXISTING = ON)
ON ps_bigTransaction_history2_temp ([TransactionID])

Looking quickly at the storage configuration confirms that this time all data moved to the right FG1 filegroup.

blog 125 - 5 - bigTransactionHistory2 partitioning

Let’s finally remove the temporary partitioning configuration from the table (remember that all operations are performed ONLINE)

CREATE UNIQUE CLUSTERED INDEX pk_bigTransactionHistory_TransactionID
ON dbo.bigTransactionHistory2 ( [TransactionID] ASC )
WITH (ONLINE = ON, DROP_EXISTING = ON)
ON [FG1]

-- Remove underlying partition configuration
DROP PARTITION SCHEME ps_bigTransaction_history2_temp;
DROP PARTITION FUNCTION pf_bigTransaction_history2_temp;
GO

blog 125 - 6 - bigTransactionHistory2 last config

Finally, you can apply the same method to all non-clustered indexes that contain LOB data, as sketched below.
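The index name and the INCLUDE column below are hypothetical (the demo index idx_bigTransactionHistory2_ProductID does not carry the LOB column); this is only a sketch of the same two-step move for a LOB-carrying non-clustered index, to be run before dropping the temporary partition scheme:

    -- step 1: rebuild the (hypothetical) LOB-carrying non-clustered index onto the temporary partition scheme
    CREATE NONCLUSTERED INDEX idx_bigTransactionHistory2_ProductID_incl
    ON dbo.bigTransactionHistory2 ( ProductID )
    INCLUDE ( [other infos] )
    WITH (DROP_EXISTING = ON, ONLINE = ON)
    ON ps_bigTransaction_history2_temp ([TransactionID]);

    -- step 2: rebuild it back onto FG1 to remove the partitioning
    CREATE NONCLUSTERED INDEX idx_bigTransactionHistory2_ProductID_incl
    ON dbo.bigTransactionHistory2 ( ProductID )
    INCLUDE ( [other infos] )
    WITH (DROP_EXISTING = ON, ONLINE = ON)
    ON [FG1];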

Cheers
The article Moving tables ONLINE on filegroup with constraints and LOB data appeared first on Blog dbi services.

Oracle Application Container: a Swiss Use case


Here we want to start a business in Switzerland in 3 different areas and make it easy to open a new market area as soon as the business requires it. We are going to use the Application Container feature in order to:

  • Have a dedicated PDB for each market with shared and local metadata and data
  • Roll out data model and code changes frequently, adding features in a central manner with one command

We first have to create a master application container; you can have many master application containers within a CDB. We will also create a seed PDB from that master application container. It is not mandatory: its role is to keep a synchronized copy of the master and to speed up the provisioning of new pluggable databases within the master container.

SQL> create pluggable database B2C_WEB_CON as application container admin user pdbadmin identified by secret roles = (DBA) ;

Pluggable database B2C_WEB_CON created.

SQL> alter pluggable database B2C_WEB_CON open;

Pluggable database B2C_WEB_CON altered.

SQL> alter session set container = B2C_WEB_CON ;

Session altered.

SQL> create pluggable database as seed admin user pdbadmin identified by oracle roles=(DBA)  ;

Pluggable database AS created.

SQL> alter pluggable database B2C_WEB_CON$SEED open;

Pluggable database B2C_WEB_CON$SEED altered.

SQL> select PDB_ID, PDB_NAME, STATUS, IS_PROXY_PDB, APPLICATION_ROOT, APPLICATION_PDB, APPLICATION_SEED, APPLICATION_ROOT_CON_ID from dba_pdbs order by 1;
  PDB_ID PDB_NAME           STATUS   IS_PROXY_PDB   APPLICATION_ROOT   APPLICATION_PDB   APPLICATION_SEED     APPLICATION_ROOT_CON_ID
       4 B2C_WEB_CON        NORMAL   NO             YES                NO                NO
       5 B2C_WEB_CON$SEED   NORMAL   NO             NO                 YES               YES                                        4

 

I will now create an application from scratch. An application is a set of commands executed in the master application container, on which a version tag is applied. In other words, Oracle records what happens in the master container and applies a version flag to those commands in order to replay them in future PDBs.
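The capture is opened with a begin install command matching the end install '1.0' shown at the end of the listing (the listing below only shows the schema DDL, so the opening command is sketched here; it also assumes the TBS_B2C_WEB tablespace already exists):

    alter pluggable database application B2C_WEB_APP begin install '1.0';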

SQL> connect sys/oracle@//localhost:1521/B2C_WEB_CON as sysdba
Connected.

SQL> create user USR_B2C_WEB identified by secret quota unlimited on TBS_B2C_WEB ;

User USR_B2C_WEB created.

SQL> alter user USR_B2C_WEB default tablespace TBS_B2C_WEB ;

User USR_B2C_WEB altered.

SQL> grant create session, resource to USR_B2C_WEB ;

Grant succeeded.

SQL> alter session set current_schema = USR_B2C_WEB ;

Session altered.

SQL> create table customers ( customer_id number, name varchar2(50), address varchar2(50) ) ;

Table CUSTOMERS created.

SQL> create table orders ( order_id number, customer_id number, order_date date ) ;

Table ORDERS created.

SQL> create table order_details ( order_detail_id number, order_id number, product_id number, quantity number ) ;

Table ORDER_DETAILS created.

SQL> create table products ( product_id number, name varchar2(50) ) ;

Table PRODUCTS created.

SQL> alter pluggable database application B2C_WEB_APP end install '1.0';

Pluggable database APPLICATION altered.

SQL> select * from dba_applications;
APP_NAME                                 APP_ID APP_VERSION   APP_STATUS   APP_IMPLICIT   APP_CAPTURE_SERVICE   APP_CAPTURE_MODULE
APP$62EA42BE47360FA8E0537A38A8C0A0F3          2 1.0           NORMAL       Y              SYS$USERS             java@VM122 (TNS V1-V3)
B2C_WEB_APP                                   3 1.0           NORMAL       N              b2c_web_con           java@VM122 (TNS V1-V3)

 

We can now synchronize the application tables of our B2C_WEB_APP application from the master to the seed in order to speed up the next pluggable database creations.

SQL> connect sys/oracle@//localhost:1521/B2C_WEB_CON$SEED as sysdba

Session altered.

SQL> alter pluggable database application B2C_WEB_APP sync ;

 

Then, go back to the master and create a pluggable database for each market with the latest B2C_WEB_APP application release, which is currently 1.0.

SQL> connect pdbadmin/secret@//localhost:1521/B2C_WEB_CON
Connected.

SQL> create pluggable database B2C_WEB_APP_VD admin user pdbadmin identified by secret roles=(DBA) ;

Pluggable database B2C_WEB_APP_VD created.

SQL> create pluggable database B2C_WEB_APP_GE admin user pdbadmin identified by secret roles=(DBA) ;

Pluggable database B2C_WEB_APP_GE created.

SQL> create pluggable database B2C_WEB_APP_ZH admin user pdbadmin identified by secret roles=(DBA) ;

Pluggable database B2C_WEB_APP_ZH created.

 

We open them and save their state so that they are opened again at the next CDB restart.

SQL> connect sys/oracle@//localhost:1521/B2C_WEB_CON as sysdba
Connected.

SQL> alter pluggable database all open ;

Pluggable database ALL altered.

SQL> alter pluggable database save state ;

Pluggable database SAVE altered.

 

Let’s generate some business activity on each pluggable database corresponding to different Swiss markets

SQL> connect usr_b2c_web/secret@//localhost:1521/B2C_WEB_APP_VD
Connected.
SQL> insert into usr_b2c_web.products select rownum, 'product_VD_00'||rownum from dual connect by level <= 5 ;

5 rows inserted.

SQL> commit ;

Commit complete.

SQL> connect usr_b2c_web/secret@//localhost:1521/B2C_WEB_APP_GE
Connected.

SQL> insert into usr_b2c_web.products select rownum, 'product_GE_00'||rownum from dual connect by level <= 5 ;

5 rows inserted.

SQL> connect usr_b2c_web/secret@//localhost:1521/B2C_WEB_APP_ZH
Connected.

SQL> insert into usr_b2c_web.products select rownum, 'product_ZH_00'||rownum from dual connect by level <= 5 ;

5 rows inserted.

SQL> commit;

Commit complete.

 

Now we can check from the master container that the data is located according to its market.

SQL> connect usr_b2c_web/secret@//localhost:1521/B2C_WEB_CON
Connected.

SQL> select con$name, p.*
  2  from containers ( PRODUCTS ) p ;
CON$NAME           PRODUCT_ID NAME               CON_ID
B2C_WEB_APP_ZH              1 product_ZH_001          7
B2C_WEB_APP_ZH              2 product_ZH_002          7
B2C_WEB_APP_ZH              3 product_ZH_003          7
B2C_WEB_APP_ZH              4 product_ZH_004          7
B2C_WEB_APP_ZH              5 product_ZH_005          7
B2C_WEB_APP_GE              1 product_GE_001          6
B2C_WEB_APP_GE              2 product_GE_002          6
B2C_WEB_APP_GE              3 product_GE_003          6
B2C_WEB_APP_GE              4 product_GE_004          6
B2C_WEB_APP_GE              5 product_GE_005          6
B2C_WEB_APP_VD              1 product_VD_001          3
B2C_WEB_APP_VD              2 product_VD_002          3
B2C_WEB_APP_VD              3 product_VD_003          3
B2C_WEB_APP_VD              4 product_VD_004          3
B2C_WEB_APP_VD              5 product_VD_005          3

 

We have different products for each of our markets. Now we would like to upgrade the data model and add some code (a procedure) to provide a basic feature: adding a customer.

SQL> connect pdbadmin/secret@//localhost:1521/B2C_WEB_CON
Connected.

SQL> alter pluggable database application B2C_WEB_APP begin upgrade '1.0' to '1.1';

Pluggable database APPLICATION altered.

SQL> alter session set current_schema = USR_B2C_WEB ;

Session altered.

SQL> alter table customers drop ( address ) ;

Table CUSTOMERS altered.

SQL> alter table customers add ( email varchar2(35) ) ;

Table CUSTOMERS altered.

SQL> alter table products add ( price number (8, 2) ) ;

Table PRODUCTS altered.

SQL> create sequence customer_seq ;

Sequence CUSTOMER_SEQ created.

SQL> create procedure customer_add ( name in varchar2, email in varchar2 ) as
  2  begin
  3    insert into customers values ( customer_seq.nextval, name, email ) ;
  4    commit ;
  5  end;
  6  /

Procedure CUSTOMER_ADD compiled

SQL> alter pluggable database application B2C_WEB_APP end upgrade to '1.1';

Pluggable database APPLICATION altered.

 

Let’s push release 1.1 to production, one market after the other.

SQL> connect pdbadmin/oracle@//localhost:1521/B2C_WEB_APP_VD
Connected.

SQL> alter pluggable database application B2C_WEB_APP sync ;

Pluggable database APPLICATION altered.

SQL> connect pdbadmin/oracle@//localhost:1521/B2C_WEB_APP_GE
Connected.

SQL> alter pluggable database application B2C_WEB_APP sync ;

Pluggable database APPLICATION altered.

SQL> connect pdbadmin/oracle@//localhost:1521/B2C_WEB_APP_ZH
Connected.

SQL> alter pluggable database application B2C_WEB_APP sync ;

Pluggable database APPLICATION altered.

 

Some business activity happens and new customers appear, created with the feature we deployed in release 1.1 of our B2C_WEB_APP application.

SQL> connect usr_b2c_web/secret@//localhost:1521/B2C_WEB_APP_VD
Connected.

SQL> exec customer_add ('Scotty Hertz', 'SCOTTY@GMAIL.COM') ;

PL/SQL procedure successfully completed.

SQL> exec customer_add ('Scotty Hertz', 'SCOTTY@GMAIL.COM') ;

PL/SQL procedure successfully completed.

SQL> exec customer_add ('Smith Watson', 'SMITH@YAHOO.COM') ;

PL/SQL procedure successfully completed.

SQL> exec customer_add ('John Curt', 'JOHN@GMAIL.COM') ;

PL/SQL procedure successfully completed.

SQL> exec customer_add ('Dalton X', 'DALTON@AOL.COM') ;

PL/SQL procedure successfully completed.

SQL> connect usr_b2c_web/secret@//localhost:1521/B2C_WEB_APP_GE
Connected.

SQL> exec customer_add ('Scotty Hertz', 'SCOTTY@GMAIL.COM') ;

PL/SQL procedure successfully completed.

SQL> exec customer_add ('Sandeep John', 'SANDEEP@YAHOO.COM') ;

PL/SQL procedure successfully completed.

SQL> exec customer_add ('Smith Curt', 'SMITH@YAHOO.COM') ;

PL/SQL procedure successfully completed.

SQL> exec customer_add ('Orlondo Watson', 'ORLONDO@GMAIL.COM') ;

PL/SQL procedure successfully completed.

SQL> connect usr_b2c_web/secret@//localhost:1521/B2C_WEB_APP_ZH
Connected.

SQL> exec customer_add ('Maria Smith', 'MARIA@GMAIL.COM') ;

PL/SQL procedure successfully completed.

SQL> exec customer_add ('Smith Scotty', 'SMITH@YAHOO.COM') ;

PL/SQL procedure successfully completed.

 

Now, let’s have a look at the new customer data from the master container.

SQL> connect usr_b2c_web/secret@//localhost:1521/B2C_WEB_CON
Connected.

SQL> select con$name, c.*
  2  from containers ( CUSTOMERS ) c ;
CON$NAME           CUSTOMER_ID NAME             EMAIL                 CON_ID
B2C_WEB_APP_VD               1 Scotty Hertz     SCOTTY@GMAIL.COM           3
B2C_WEB_APP_VD               2 Scotty Hertz     SCOTTY@GMAIL.COM           3
B2C_WEB_APP_VD               3 Smith Watson     SMITH@YAHOO.COM            3
B2C_WEB_APP_VD               4 John Curt        JOHN@GMAIL.COM             3
B2C_WEB_APP_VD               5 Dalton X         DALTON@AOL.COM             3
B2C_WEB_APP_GE               1 Scotty Hertz     SCOTTY@GMAIL.COM           6
B2C_WEB_APP_GE               2 Sandeep John     SANDEEP@YAHOO.COM          6
B2C_WEB_APP_GE               3 Smith Curt       SMITH@YAHOO.COM            6
B2C_WEB_APP_GE               4 Orlondo Watson   ORLONDO@GMAIL.COM          6
B2C_WEB_APP_ZH               1 Maria Smith      MARIA@GMAIL.COM            7
B2C_WEB_APP_ZH               2 Smith Scotty     SMITH@YAHOO.COM            7


11 rows selected.

 

As we don’t like the email format (it’s ugly in the web interface), we are now going to release a “data” patch on top of release 1.1 in order to format the customers’ emails in a proper manner.

SQL> connect pdbadmin/secret@//localhost:1521/B2C_WEB_CON
Connected.

SQL> alter pluggable database application B2C_WEB_APP begin patch 1 minimum version '1.1' ;

Pluggable database APPLICATION altered.

SQL> update usr_b2c_web.customers set email = trim(lower(email)) ;

0 rows updated.

SQL> alter pluggable database application B2C_WEB_APP end patch 1 ;

Pluggable database APPLICATION altered.

SQL> select * from dba_app_patches;
APP_NAME        PATCH_NUMBER PATCH_MIN_VERSION   PATCH_STATUS   PATCH_COMMENT
B2C_WEB_APP                1 1.1                 INSTALLED

 

As we now have a patch ready to clean up the email format, we are ready to deploy it on each market.

SQL> connect pdbadmin/oracle@//localhost:1521/B2C_WEB_APP_VD
Connected.

SQL> alter pluggable database application B2C_WEB_APP sync;

Pluggable database APPLICATION altered.

SQL> connect pdbadmin/oracle@//localhost:1521/B2C_WEB_APP_GE
Connected.

SQL> alter pluggable database application B2C_WEB_APP sync;

Pluggable database APPLICATION altered.

SQL> connect pdbadmin/oracle@//localhost:1521/B2C_WEB_APP_ZH
Connected.

SQL> alter pluggable database application B2C_WEB_APP sync;

Pluggable database APPLICATION altered.

 

Let’s check if the data patch has been applied successfully and the email format is now OK for all markets

SQL> connect usr_b2c_web/secret@//localhost:1521/B2C_WEB_CON
Connected.

SQL> select con$name, c.*
  2  from containers ( CUSTOMERS ) c ;
CON$NAME           CUSTOMER_ID NAME             EMAIL                 CON_ID
B2C_WEB_APP_VD               1 Scotty Hertz     scotty@gmail.com           3
B2C_WEB_APP_VD               2 Scotty Hertz     scotty@gmail.com           3
B2C_WEB_APP_VD               3 Smith Watson     smith@yahoo.com            3
B2C_WEB_APP_VD               4 John Curt        john@gmail.com             3
B2C_WEB_APP_VD               5 Dalton X         dalton@aol.com             3
B2C_WEB_APP_ZH               1 Maria Smith      maria@gmail.com            7
B2C_WEB_APP_ZH               2 Smith Scotty     smith@yahoo.com            7
B2C_WEB_APP_GE               1 Scotty Hertz     scotty@gmail.com           6
B2C_WEB_APP_GE               2 Sandeep John     sandeep@yahoo.com          6
B2C_WEB_APP_GE               3 Smith Curt       smith@yahoo.com            6
B2C_WEB_APP_GE               4 Orlondo Watson   orlondo@gmail.com          6


11 rows selected.

 

A new feature has now been requested by the business. We need an upgrade of the application to add a settings table containing the USR_B2C_WEB application’s parameters, which must be shared across all application PDBs. Each market also wants to be able to add its own parameters without impacting the existing ones or the other markets.
We are going to set the “SHARING” attribute of that table to “EXTENDED DATA”, which makes it possible to mix shared data stored in the master with local data stored in each PDB within the same table (deeper explanation and the other sharing modes here).

SQL> connect pdbadmin/secret@//localhost:1521/B2C_WEB_CON
Connected.

SQL> alter pluggable database application B2C_WEB_APP begin upgrade '1.1' to '1.2';

Pluggable database APPLICATION altered.

SQL> create table usr_b2c_web.settings sharing = extended data ( name varchar2(50), value varchar2(50) );

Table USR_B2C_WEB.SETTINGS created.

SQL> insert into usr_b2c_web.settings values ( 'compagny_name', 'wisdom IT' ) ;

1 row inserted.

SQL> insert into usr_b2c_web.settings values ( 'head_quarter_address', 'street village 34, 3819 Happiness, Switzerland' ) ;

1 row inserted.

SQL> commit ;

Commit complete.

SQL> alter pluggable database application B2C_WEB_APP end upgrade to '1.2';

Pluggable database APPLICATION altered.

 

Upgrade all markets to 1.2 and add a local “market_name” parameter for each market

SQL> connect pdbadmin/oracle@//localhost:1521/B2C_WEB_APP_VD
Connected.

SQL> alter pluggable database application B2C_WEB_APP sync;

Pluggable database APPLICATION altered.

SQL> insert into usr_b2c_web.settings values ( 'market_name', 'VAUD' ) ;

1 row inserted.

SQL> commit ;

Commit complete.

SQL> connect pdbadmin/oracle@//localhost:1521/B2C_WEB_APP_GE
Connected.

SQL> alter pluggable database application B2C_WEB_APP sync;

Pluggable database APPLICATION altered.

SQL> insert into usr_b2c_web.settings values ( 'market_name', 'GENEVA' ) ;

1 row inserted.

SQL> commit ;

Commit complete.

SQL> connect pdbadmin/oracle@//localhost:1521/B2C_WEB_APP_ZH
Connected.

SQL> alter pluggable database application B2C_WEB_APP sync;

Pluggable database APPLICATION altered.

SQL> insert into usr_b2c_web.settings values ( 'market_name', 'ZURICH' ) ;

1 row inserted.

SQL> commit ;

Commit complete.

 

Now we check that the shared parameters are available in all markets and that each of them has its own dedicated “market_name” value.

SQL> connect usr_b2c_web/secret@//localhost:1521/B2C_WEB_CON
Connected.

SQL> select con$name, c.*
  2  from containers ( SETTINGS ) c ;

CON$NAME         NAME                   VALUE                                              CON_ID
B2C_WEB_CON      compagny_name          wisdom IT                                               4
B2C_WEB_CON      head_quarter_address   street village 34, 3819 Happiness, Switzerland          4
B2C_WEB_APP_GE   compagny_name          wisdom IT                                               6
B2C_WEB_APP_GE   head_quarter_address   street village 34, 3819 Happiness, Switzerland          6
B2C_WEB_APP_GE   market_name            GENEVA                                                  6
B2C_WEB_APP_VD   compagny_name          wisdom IT                                               3
B2C_WEB_APP_VD   head_quarter_address   street village 34, 3819 Happiness, Switzerland          3
B2C_WEB_APP_VD   market_name            VAUD                                                    3
B2C_WEB_APP_ZH   compagny_name          wisdom IT                                               7
B2C_WEB_APP_ZH   head_quarter_address   street village 34, 3819 Happiness, Switzerland          7
B2C_WEB_APP_ZH   market_name            ZURICH                                                  7

 

Looks all good.

Now the business needs to extend its activity to a new market area of Switzerland. We are therefore going to add a new pluggable database for that market. This market will immediately benefit from the latest application release.

SQL> alter session set container = B2C_WEB_CON ;

Session altered.

SQL> create pluggable database B2C_WEB_APP_ZG admin user pdbadmin identified by secret roles=(DBA) ;

Pluggable database B2C_WEB_APP_ZG created.

SQL> alter pluggable database B2C_WEB_APP_ZG open;

Pluggable database B2C_WEB_APP_ZG altered.

 

Let’s check with the parameters table whether all data has been synchronized

SQL> select con$name, c.*
  2  from containers ( SETTINGS ) c ;
CON$NAME         NAME                   VALUE                                              CON_ID
B2C_WEB_APP_VD   compagny_name          wisdom IT                                               3
B2C_WEB_APP_VD   head_quarter_address   street village 34, 3819 Happiness, Switzerland          3
B2C_WEB_APP_VD   market_name            VAUD                                                    3
B2C_WEB_CON      compagny_name          wisdom IT                                               4
B2C_WEB_CON      head_quarter_address   street village 34, 3819 Happiness, Switzerland          4
B2C_WEB_APP_GE   compagny_name          wisdom IT                                               6
B2C_WEB_APP_GE   head_quarter_address   street village 34, 3819 Happiness, Switzerland          6
B2C_WEB_APP_GE   market_name            GENEVA                                                  6
B2C_WEB_APP_ZH   compagny_name          wisdom IT                                               7
B2C_WEB_APP_ZH   head_quarter_address   street village 34, 3819 Happiness, Switzerland          7
B2C_WEB_APP_ZH   market_name            ZURICH                                                  7
B2C_WEB_APP_ZG   compagny_name          wisdom IT                                               5
B2C_WEB_APP_ZG   head_quarter_address   street village 34, 3819 Happiness, Switzerland          5

 

I hope this post helps you understand how to implement Application Containers in real life. Please do not hesitate to contact us if you have any questions or require further information.

 

This article Oracle Application Container: a Swiss Use case appeared first on the dbi services Blog.

Alfresco DevCon 2018 – Day 1 – ADF, ADF and… ADF!


Here we are, the Alfresco DevCon 2018 day-1 is over (well, except for the social party)! It has already been 2 years since I attended my last Alfresco event (BeeCon 2016, first of its name, organized by the Order of the Bee) because I wasn’t able to attend the second BeeCon (2017): it happened on the exact dates of our internal dbi xChange event. Yesterday was the DevCon 2018 day-0 with the Hackathon, the full day training and the ACSCE/APSCE Certification preparation, but today was really the first day of sessions.

DevCon2018_Logo

 

The day-1 started, as usual, with a Keynote from Thomas DeMeo, who presented interesting information regarding the global direction of the Alfresco products and the Roadmap (of course) for the coming year, as well as some use cases where Alfresco was successfully used in very interesting projects, including some involving AWS.

DevCon2018_Roadmap

 

The second part of the keynote was presented by Brian Remmington, who explained the future of the Alfresco Digital Platform. In the coming months/years, Alfresco will include/refactor/work on the following points for its Digital Platform:

  • Improve SSO solutions. Kerberos is already working very well with Alfresco but they intend to also add SAML2, OAuth, aso… This is a very good thing!
  • Merging the Identity management for the ACS and APS into one single unit
  • Adding an API Gateway in front of ACS and APS to always talk to the same component and targeting in the background both the ACS and APS. It will also allow Alfresco to change the backend APIs, if needed (to align them for example), without the API Gateway noticing it. This is a very good thing too from a developer perspective since you will be sure that your code will not break if Alfresco rename something for example!
  • Merging the search on the ACS and APS into one single search index
  • We already knew it but it was confirmed that Alfresco [will probably drop the default installer and instead] will provide docker/kubernetes means for you to deploy Alfresco easily and quickly using these new technologies
  • Finishing the merge/refactor of other ACS/APS services into common units for both products so that work done once isn’t duplicated. This will concern the Search (Insight?) Service, the Transformation Service, the Form Service and a new Function Service (basically code functions shared between ACS and APS!).

All this looks promising, like really.

 

Then, starting at 10am, there were four streams running in parallel, so there was definitely something interesting for everybody. I didn’t mention it before but DevCon isn’t just a name… It means that the sessions are really technical; we are far from the (boring) business presentations that you can find at all other competitors’ events… I did a full morning on ADF. Mario Romano and Ole Hejlskov were presenting ADF Basics and Beyond.

For those of you who don’t know it yet, ADF (Alfresco Application Development Framework) is the last piece of the Digital Platform that Alfresco has been bringing recently. It is a very interesting new framework that allows you to create your own UI to use in front of the ACS/APS. There are at the moment more than 100 Angular components that you can use, extend, compose and configure to build the UI that will match your use case. Alfresco Share still provides way more features than ADF but I must say that I’m pretty impressed by what you can achieve in ADF with very little: it looks like it is going in the right direction.

ADF 2.0 has been released recently (November 2017) and it is based on three main pillars: the latest version of Angular (Angular 5), a powerful JavaScript API (that talks in the background with the ACS/APS/AGS APIs) and the Yeoman generator + Angular CLI for fast deployments.

ADF provides 3 extension points for you to customize a component:

  • html extension points => adding html to customize the look&feel or the behavior
  • event listeners => adding behaviors on events for example
  • config properties => each component has properties that will customize it

One of the goals of ADF is the ability to upgrade your application without any effort. The Angular components will be updated in the future but ADF was designed (and Alfresco’s effort is going) in such a way that even if you use these components in your ADF application, an upgrade of your application won’t hurt at all. If you want to learn more about ADF, then I suggest the Alfresco Tech Talk Live that took place in December as well as the Alfresco Office Hours.

 

After this first introduction session to ADF, Eugenio Romano went deeper and showed how to play with ADF 2.0: installing it, deploying a first application and then customizing the main pieces like the theme, the document list, the actions, the search, aso… There were some really interesting examples and I’m really looking forward to seeing the tutorials and documentation popping up on the Alfresco website about these new ADF 2.0 features and components.

 

To conclude the morning, Denys Vuika presented a session about how to use and extend the Alfresco Content App (ACA). The ACA is the new ADF 2.0 application provided by Alfresco. It is a demo/sample application whose purpose is to be lightweight so it is as fast as possible. You can then customize it as you want and play with the ADF so that this sample application matches your needs. One of the demos Denys presented is how you can change the default previewer for certain types of files (.txt, .js, .xml for example). In ADF, that’s like 5 lines of code (of course you need to have another previewer of your own but that’s not ADF stuff) and then he had an awesome preview for .js files with syntax highlighting right inside the Alfresco preview as well as tooltips on names giving, apparently, the description of variables and functions. This kind of small feature, done so easily, looks quite promising.

 

I already wrote a lot on ADF today so I will stop my blog here but I did attend a lot of other very interesting sessions on the afternoon. I might talk about that tomorrow.

 

 

 

This article Alfresco DevCon 2018 – Day 1 – ADF, ADF and… ADF! appeared first on the dbi services Blog.


Alfresco DevCon 2018 – Day 2 – Big files, Solr Sharding and Minecraft, again!


Today is the second day of the Alfresco DevCon 2018 and therefore yes, it is already over, unfortunately. In this blog, I will continue my previous one with the sessions I attended on the afternoon of the day-1 as well as on the day-2. There were too many interesting sessions and I don’t really have the time to talk about all of them… But if you are interested, all the sessions were recorded (as always), so wait a little bit and check out the DevCon website, the Alfresco Community or the Alfresco YouTube channel and I’m sure you will find all the recordings as soon as they are available.

 

So on the afternoon of the day-1, I started with a presentation from Jeff Potts (you all know him) about how to move gigantic files (several gigabytes) in (upload) and out (download) of Alfresco. He basically presented a use case where the users had to manage big files and put them all in Alfresco with the least headache possible. On paper, Alfresco can handle any file no matter the size because the only limit is what the File System of the Alfresco Server supports. However, when you start working with 10 or 20 GB files, you can sometimes have issues like exceptions, timeouts, network outages, aso… It might not be frequent but it can happen for a variety of reasons (not always linked to Alfresco). The use case here was to simplify the import into Alfresco and make it faster. Jeff tested a lot of possible solutions like using the Desktop Sync, CMIS, FTP, the resumable upload Share add-on, aso…

In the end, a pure simple (1 stream) upload/download will always be limited by the network. So he tried to work on improving this part and used the Resilio Sync software (formerly BitTorrent Sync). This tool can be used to stream a file to the Alfresco Server, BitTorrent style (P2P). But the main problem of this solution is that P2P is only as good as the number of users having this specific file available on their workstation… Depending on the use case, it might increase the performance but it wasn’t ideal.

In the end, Jeff came across the “GridFTP” protocol. This is an extension of FTP for grid computing whose purpose is to make file transfers more reliable and faster using multiple simultaneous TCP streams. There are several implementations of GridFTP, like the Globus Toolkit. Basically, the solution in this case was to use Globus to transfer the big files from the user’s workstation to a dedicated File System which is mounted on the Alfresco Server. Then, using the Alfresco Bulk FileSystem Import Tool (BFSIT), it is really fast to import documents into Alfresco as soon as they are on the File System of the Alfresco Server. For the download, it is just the opposite (using the BFSET)…

For files smaller than 512MB, this solution is probably slower than the default Alfresco upload/download actions but for bigger files (or groups of files), it becomes very interesting. Jeff did some tests and basically, for one or several files with a total size of 3 or 4GB, the transfer using Globus followed by the import into Alfresco was 50 to 60% faster than the default Alfresco upload/download.

 

Later, Jose Portillo shared Solr Sharding best practices. Sharding is the action of splitting your indexes into Shards (parts of an index) to speed up searches and indexing (horizontal scaling). The Shards can be stored on a single Solr Server or dispatched over several. Doing this basically increases the search speed because the search is executed on all Shards. For the indexing of a single node, there is no big difference, but a full reindex gets much faster because several nodes are indexed at the same time across the Shards…

A single Shard can work well (according to the Alfresco Benchmark) with up to 50M documents. Therefore, using Shards is mainly for big repositories, but it doesn’t mean that there are no use cases where it would be interesting for smaller repositories: there are! If you want to increase your search/index performance, then start creating Shards much earlier.

For the Solr Sharding, there are two registration options:

  • Manual Sharding => You need to manually configure the IPs/Host where the Shards are located in the Alfresco properties files
  • Dynamic Sharding => Easier to set up and Alfresco automatically provides information regarding the Shards in the Admin interface for easy management

There are several methods of Shardings which are summarized here:

  • MOD_ACL_ID (ACL v1) => Sharding based on ACL. If all documents have the same ACL (same site for example), then they will all be on the same Shard, which might not be very useful…
  • ACL_ID (ACL v2) => Same as v1 except that it uses the murmur hash of the ACL ID and not its modulus
  • DB_ID (DB ID) => Default in Solr6. Nodes are evenly distributed on the Shards based on their DB ID
  • DB_ID_RANGE (DB ID Range) => You can define the DB ID range for which nodes will go to which Shard (E.g.: 1 to 10M => Shard-0 / 10M to 20M => Shard-1 / aso…)
  • DATE (Date and Time) => Assign date for each Shards based on the month. It is possible to group some months together and assign a group per Shard
  • PROPERTY (Metadata) => The value of some property is hashed and this hash is used for the assignment to a Shard so all nodes with the same value are in the same Shard
  • EXPLICIT (?) => This is an all-new method that isn’t in the documentation yet… Since there isn’t any information about this except in the source code, I asked Jose to provide me with some information about what it does. He’ll look at the source code and I will update this blog post as soon as I receive some information!

Unfortunately, the Solr Sharding has only been available starting with Alfresco Content Services 5.1 (Solr 4) and only using the ACL v1 method. New methods were then added with the Alfresco Search Services (Solr 6). The availability of the methods vs. the Alfresco/Solr versions has been summarized in Jose’s presentation:

DevCon2018_ShardingMethodsAvailability

Jose also shared a comparison matrix of the different methods to choose the right one for each use case:

DevCon2018_ShardingMethodsFeatures

Some other best practices regarding the Solr Sharding:

  • Replicate the Shards to increase the response time; it also provides High Availability so… no reason not to!
  • Backup the Shards using the provided Web Service so Alfresco can do it for you for one or several Shards
  • Use DB_ID_RANGE if you want to be able to add Shards without having to perform a full reindex, this is the only way
  • If you need another method than DB_ID_RANGE, then plan carefully the number of Shards to be created. You might want to overshard to take into account the future growth
  • Keep in mind that each Shard will pull the changes from Alfresco every 15s and it all goes to the DB… It might create some load there and therefore be sure that your DB can handle that
  • As far as I know, at the moment, the Sharding does not support Solr in SSL. Solr should anyway be protected from external accesses because it is only used internally by Alfresco, so this is an ugly point so far but it’s not too bad. Sharding is pretty new, so it will probably support SSL at some point in the future
  • Tune properly Solr and don’t forget the Application Server request header size
    • Solr4 => Tomcat => maxHttpHeaderSize=…
    • Solr6 => Jetty => solr.jetty.request.header.size=…

 

The day-2 started with a session from John Newton, who presented the impact of emerging technologies on content. As usual, John’s presentation had a funny theme incorporated in the slides and this time it was Star Wars.

DevCon2018_StarWars

 

After that, I attended the Hack-a-thon showcase, presented/introduced by Axel Faust. In the Alfresco world, Hack-a-thons are:

  • Held since 2012
  • Open-minded and all about collaboration. Therefore, the output of any project is open source and available for the community. It’s not about money!
  • Always the source of great add-ons and ideas
  • 2 times per year
    • During conferences (day-0)
    • Virtual Hack-a-thon (36h ‘follow-the-sun’ principle)

A few of the 16 teams that participated in the Hack-a-thon presented the result of their Hack-a-thon day and there were really interesting results for ACS, ACS on AWS, APS, aso…

Apart from that, I also attended all the lightning talks of this day-2 as well as presentations on PostgreSQL and Solr HA/Backup solutions and best practices. The presentations about PostgreSQL and Solr were interesting, especially for newcomers, because they really explained what should be done to have a highly available and resilient Alfresco environment.

 

There were too many lightning talks to mention them all but, as always, some were quite interesting and I just need to mention the talk about the ContentCraft plugin (from Roy Wetherall). There cannot be an Alfresco event (be it a Virtual Hack-a-thon, BeeCon or DevCon now) without an Alfresco integration into Minecraft. Every year, Roy keeps adding new stuff to his plugin… I remember that years ago, Roy was already able to create a building in Minecraft where the height represented the number of folders stored in Alfresco and the depth the number of documents inside, if my memory is correct (this has changed now: it represents the number of sub-folders). This year, Roy presented the new version and it’s even more incredible! Now, if you are in front of one of the building’s doors, you can see the name and creator of the folder on a ‘Minecraft sign’. Then you can walk into the building and there is a corridor. On both sides, there are rooms which represent the sub-folders. Again, there are ‘Minecraft signs’ there with the name and creator of the sub-folders. Until then, it’s just the same thing again so that’s cool, but it gets even better!

If you walk into a room, you will see ‘Minecraft bookshelves’ and ‘Minecraft chests’. Bookshelves are just there for decoration but if you open the chests, then you will see, represented by ‘Minecraft books’, all your Alfresco documents stored in this sub-folder! Then if you open a book, you will see the content of this Alfresco document! And even crazier: if you update the content of the book in Minecraft and save it, the document stored in Alfresco will reflect this change! This is way too funny :D.

It’s all done using CMIS so there is nothing magical… Yet it really makes you wonder if there are any limits to what Alfresco can do ;).

 

If I dare to say: long live Alfresco! And see you around again for the next DevCon.

 

 

This article Alfresco DevCon 2018 – Day 2 – Big files, Solr Sharding and Minecraft, again! appeared first on the dbi services Blog.

SQL Server on Linux and logging


In the Windows world, SQL Server logs information both into the SQL Server error log and the Application log. Both automatically timestamp all recorded events. Unlike the SQL Server error log, the Windows application log provides an overall picture of the events that occur globally on the Windows operating system. Thus, when facing issues, taking a look at such event logs – by using either the Windows event viewer or the Get-EventLog PowerShell cmdlet – may be very helpful to figure out whether they are only SQL Server-scoped or whether you have to correlate them with other operating system issues.

But what about SQL Server on Linux? Obviously, we may use the same logging technologies. As on Windows, SQL Server logs information both in the SQL Server error log located in /var/opt/mssql/log/ and in the Linux logs. Because SQL Server is only supported on Linux distributions that all include systemd (RHEL 7.3+, SLES V12 SP2+ or Ubuntu 16.04+), we have to go through the journalctl command to browse the messages related to the SQL Server instance.

systemd-journald is a system service that collects and stores logging data based on logging information that is received from a variety of sources – Kernel and user log messages. All can be viewed through the journalctl command.

Let’s say that the journalctl command is very powerful and I don’t aim to cover all the possibilities. My intention is only to dig into some examples in the context of SQL Server. Conceptually, this is not so different from what we usually do on Windows systems for basic stuff.

Firstly, let’s say we may use a couple of options to filter the records we want to display. Probably the most intuitive way to start with the journalctl command is to use the time interval parameters --since and --until as follows:

[root@sqllinux ~] journalctl --since "2018-01-16 12:00:00" --until "2018-01-16 23:30:00"

Here is a sample of the corresponding output:

blog 126 - 1 - journalctl with time interval filter

All log messages are displayed, including the kernel ones. But rather than using time interval filters, we may prefer to use the -b parameter to show all log messages since the last system boot, for instance:

[root@sqllinux ~] journalctl -b

The corresponding output:

blog 126 - 2 - journalctl with last boot filter

You may use different commands to get the last system boot time, such as uptime or who -b. I’m in favour of last reboot because it provides the last reboot date rather than the uptime of the system.

Furthermore, one interesting point is that if you want to get log messages from older system boots (and not only the last one), you have to configure systemd-journald accordingly to enable log persistence. By default, the journal is volatile and logs are cleared after each system reboot. You may get this information directly from the systemd-journald configuration file (#Storage=auto by default):

[root@sqllinux ~] cat /etc/systemd/journald.conf
…
[Journal]
#Storage=auto
#Compress=yes
#Seal=yes
…

I remember a customer case where I had to diagnose a database integrity check job, scheduled every Sunday, that failed randomly. We finally figured out that the root cause was a system reboot after an automatic update. But the tricky part was that not all system reboots led to a failure of the DBCC CHECKDB command and, according to the information from the Windows log, we understood it depended mostly on the DBCC CHECKDB execution time, which sometimes exceeded the time scheduled for the system reboot. So, in this case, going back to the previous reboots (before the last one) was helpful for us. Let’s say that for some Linux distributions this persistence is not the default option; my colleague Daniel Westermann from the dbi services open source team explained it well in his blog post, as well as how to change the default behavior.

So, after applying the correct setup, if you want to display log messages after a pre-defined boot time you may first identify the different system boot times logged into the journal as follows:

[root@sqllinux ~] journalctl --list-boots
-1 576f0fb259f4433083c05329614d749e Tue 2018-01-16 15:41:15 CET—Wed 2018-01-17 20:30:41 CET
 0 ea3ec7019f8446959cfad0bba517a47e Wed 2018-01-17 20:33:30 CET—Wed 2018-01-17 20:37:05 CET

Then you may rewind the journal until the corresponding offset:

[root@sqllinux ~] journalctl -b -1 
-- Logs begin at Tue 2018-01-16 15:41:15 CET, end at Wed 2018-01-17 20:37:40 CET. --
Jan 16 15:41:15 localhost.localdomain systemd-journal[105]: Runtime journal is using 8.0M (max allowed 188.7M, trying to leave 283.1M free
 of 1.8G available → current limit 188.7M).
….

Let’s go ahead with filtering by unit (mssql-server unit). This is likely the most useful way for DBAs to display only SQL Server related records with a combination of the aforementioned options (time interval or last boot(s) parameters). In the following example, I want to display SQL Server related records since a system boot that occurred on 18 January 2018 20:39 (I may also deal with interval time filters)

[root@sqllinux ~] journalctl -b -1 -u mssql-server.service 
-- Logs begin at Tue 2018-01-16 15:41:15 CET, end at Wed 2018-01-17 20:39:55 CET. --
Jan 16 15:41:17 sqllinux.dbi-services.test systemd[1]: [/usr/lib/systemd/system/mssql-server.service:21] Unknown lvalue 'TasksMax' in sect
ion 'Service'
Jan 16 20:47:15 sqllinux.dbi-services.test systemd[1]: Started Microsoft SQL Server Database Engine.
Jan 16 20:47:15 sqllinux.dbi-services.test systemd[1]: Starting Microsoft SQL Server Database Engine
...
Jan 16 20:47:22 sqllinux.dbi-services.test sqlservr[1119]: 2018-01-16 20:47:22.35 Server      Microsoft SQL Server 2017 (RTM-CU2) (KB40525
74) - 14.0.3008.27 (X64)
…

You may also want to get only the errors concerning your SQL Server instance. If you have already used syslog in the past, you will be comfortable with systemd-journald, which implements the standard syslog message levels and priorities. Indeed, each message has its own priority, as shown below. The counterparts in the Windows event log are the event types (warning, error, critical etc …). On Linux, priorities are identified by a number – 6 corresponds to info messages and 3 to error messages. Here is a log message’s anatomy with the priority value:

[root@sqllinux ~] journalctl -b -1 -u mssql-server.service -n 1 -o verbose
-- Logs begin at Tue 2018-01-16 15:41:15 CET, end at Wed 2018-01-17 20:48:36 CET. --
Wed 2018-01-17 20:30:38.937388 CET [s=5903eef6a5fd45e584ce03a4ae329ac3;i=88d;b=576f0fb259f4433083c05329614d749e;m=13e34d5fcf;t=562fde1d9a1
    PRIORITY=6
    _UID=0
    _GID=0
    _BOOT_ID=576f0fb259f4433083c05329614d749e
    _MACHINE_ID=70f4e4633f754037916dfb35844b4b16
    SYSLOG_FACILITY=3
    SYSLOG_IDENTIFIER=systemd
    CODE_FILE=src/core/job.c
    CODE_FUNCTION=job_log_status_message
    RESULT=done
    _TRANSPORT=journal
    _PID=1
    _COMM=systemd
    _EXE=/usr/lib/systemd/systemd
    _CAP_EFFECTIVE=1fffffffff
    _SYSTEMD_CGROUP=/
    CODE_LINE=784
    MESSAGE_ID=9d1aaa27d60140bd96365438aad20286
    _HOSTNAME=sqllinux.dbi-services.test
    _CMDLINE=/usr/lib/systemd/systemd --switched-root --system --deserialize 21
    _SELINUX_CONTEXT=system_u:system_r:init_t:s0
    UNIT=mssql-server.service
    MESSAGE=Stopped Microsoft SQL Server Database Engine.
    _SOURCE_REALTIME_TIMESTAMP=1516217438937388

 

So, if you want to restrict the output further to only warning, error or critical messages (from a daemon point of view), you may add the -p option with a range of priorities from 2 (critical) to 4 (warning) as shown below:

[root@sqllinux ~] journalctl -p 2..4 -u mssql-server.service
-- Logs begin at Tue 2018-01-16 15:41:15 CET, end at Wed 2018-01-17 21:44:04 CET. --
Jan 16 15:41:17 sqllinux.dbi-services.test systemd[1]: [/usr/lib/systemd/system/mssql-server.service:21] Unknown lvalue 'TasksMax' in sect
-- Reboot --
Jan 17 15:27:42 sqllinux.dbi-services.test systemd[1]: [/usr/lib/systemd/system/mssql-server.service:21] Unknown lvalue 'TasksMax' in sect
lines 1-4/4 (END)

Ultimately, filtering by message will probably be the most natural way to find log messages. Let’s say that in this case there are no built-in parameters or options provided by the journalctl command, so grep will be your friend for sure. In the following example, a classic customer case where we want to count the number of failed logins during a specific period, I use a combination of the journalctl, grep and wc commands:

[root@sqllinux ~] journalctl -u mssql-server.service --since "2018-01-17 12:00:00" --until "2018-01-17 23:00:00"  | grep "Login failed" | wc -l
31

Finally, the journalctl command offers real-time capabilities to follow log messages through the -f option. For very specific cases it might be useful. In the example below, I use it to follow the SQL Server related log messages:

[root@sqllinux ~] journalctl -u mssql-server.service -f
-- Logs begin at Tue 2018-01-16 15:41:15 CET. --
Jan 17 21:52:57 sqllinux.dbi-services.test sqlservr[1121]: 2018-01-17 21:52:57.97 Logon       Error: 18456, Severity: 14, State: 8.
Jan 17 21:52:57 sqllinux.dbi-services.test sqlservr[1121]: 2018-01-17 21:52:57.97 Logon       Login failed for user 'sa'. Reason: Password did not match that for the login provided. [CLIENT: 192.168.40.30]

 

Another topic I wanted to introduce is centralized logging management. Nowadays, plenty of third-party tools like Splunk – or built-in Microsoft tools such as SCOM – may address this need both in the Windows and the Linux worlds. I also remember a special customer case where we went through the built-in Windows event forwarding mechanism. In the Linux world, you may benefit from plenty of open source tools and you may also rely on built-in Linux tools such as systemd-journal-remote, systemd-journal-upload and systemd-journal-gateway. I will probably go further into these tools in the future but this time let’s use an older tool, rsyslog, which implements the basic syslog protocol and extends it with additional features. In this blog post I used a CentOS 7 distro that comes with rsyslog. The good news is that it also includes by default the imjournal module (which provides access to the systemd journal). This module reads the log from /run/log/journal and then writes out to /var/log/messages, /var/log/maillog, /var/log/secure or others depending on the record type. Log records may be sent over the TCP or UDP protocols and securing capabilities are also provided (by using TLS and certificates for instance).

Just out of curiosity, I decided to implement a very simple log message forwarding scenario to centralize only the SQL Server log messages. Basically, I only had to set up some parameters in /etc/rsyslog.conf on both sides (sender and receiver servers) as well as apply some firewall rules to allow the traffic on port 514. In addition, I used the TCP protocol because this is probably the simplest way to send log messages (the corresponding modules are already loaded). Here is an illustration of my scenario:

blog 126 - 3 - rsyslog architecture

Here are the configuration settings of my log message sender. You may notice that I used expression-based filters to send only the messages related to my SQL Server instance:

[root@sqllinux ~] cat /etc/rsyslog.conf
#### MODULES ####

# The imjournal module bellow is now used as a message source instead of imuxsock.
$ModLoad imuxsock # provides support for local system logging (e.g. via logger command)
$ModLoad imjournal # provides access to the systemd journal
#$ModLoad imklog # reads kernel messages (the same are read from journald)
#$ModLoad immark  # provides --MARK-- message capability
…
# remote host is: name/ip:port, e.g. 192.168.0.1:514, port optional
#*.* @@remote-host:514
if $programname == 'sqlservr' then @@192.168.40.21:514 
…

On the receiver side, I configured the rsyslog daemon to accept messages coming over the TCP protocol on port 514. Here is a sample (only the interesting part) of the configuration file:

[root@sqllinux2 ~] cat /etc/rsyslog.conf
…
# Provides TCP syslog reception
$ModLoad imtcp
$InputTCPServerRun 514

 

Finally, I ran a simple test to check if the log message forwarding process works correctly by using the following T-SQL command from my SQLLINUX instance …

RAISERROR('test syslog from SQLLINUX instance', 10, 1) WITH LOG

 

… and after jumping to the receiver side (SQLLINUX2) I used the tail command to check if my message was sent correctly:

[root@sqllinux2 ~] tail -f /var/log/messages
…
Jan 18 21:50:40 sqllinux sqlservr: 2018-01-18 21:50:40.66 spid57      test syslog
Jan 18 21:51:03 sqllinux sqlservr: 2018-01-18 21:51:03.75 spid57      test syslog 1 2 3
Jan 18 21:52:08 sqllinux sqlservr: 2018-01-18 21:52:08.74 spid52      Using 'dbghelp.dll' version '4.0.5'
Jan 18 21:56:31 sqllinux sqlservr: 2018-01-18 21:56:31.13 spid57      test syslog from SQLLINUX instance

Well done!
In this blog post, we’ve seen how SQL Server deals with the Linux logging system and how we may use the journalctl command to find information for troubleshooting. Moving from Windows to Linux in this field remains straightforward, with finally the same basics. Obviously, Linux is a command-line oriented operating system, so you will not escape using the command line :-)

 

This article SQL Server on Linux and logging appeared first on the dbi services Blog.

Unplug an Encrypted PDB (ORA-46680: master keys of the container database must be exported)


In the Oracle Database Cloud DBaaS you provision a multitenant database where the tablespaces are encrypted. This means that when you unplug/plug the pluggable databases, you also need to export/import the encryption keys. You cannot just copy the wallet because the wallet contains all the CDB keys. Usually, you can be guided by the error messages, but this one needs a little explanation and an example.

Here I’ll unplug PDB6 from CDB1 and plug it into CDB2

[oracle@VM122 blogs]$ connect /@CDB1 as sysdba
SQLcl: Release 17.4.0 Production on Fri Jan 19 22:22:44 2018
Copyright (c) 1982, 2018, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
 
22:22:46 SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
------ ---------- ------------ ----------
2 PDB$SEED READ ONLY NO
3 PDB1 READ WRITE NO
5 PDB6 READ WRITE NO

Here are the master keys:

SQL> select con_id,tag,substr(key_id,1,6)||'...' "KEY_ID...",creator,key_use,keystore_type,origin,creator_pdbname,activating_pdbname from v$encryption_keys;
 
CON_ID TAG KEY_ID... CREATOR KEY_USE KEYSTORE_TYPE ORIGIN CREATOR_PDBNAME ACTIVATING_PDBNAME
------ --- --------- ------- ------- ------------- ------ --------------- ------------------
1 cdb1 AcyH+Z... SYS TDE IN PDB SOFTWARE KEYSTORE LOCAL CDB$ROOT CDB$ROOT
3 pdb6 Adnhnu... SYS TDE IN PDB SOFTWARE KEYSTORE LOCAL PDB6 PDB6

Export keys and Unplug PDB

Let’s try to unplug PDB6:
22:22:51 SQL> alter pluggable database PDB6 close immediate;
Pluggable database PDB6 altered.
 
22:23:06 SQL> alter pluggable database PDB6 unplug into '/var/tmp/PDB6.xml';
 
Error starting at line : 1 in command -
alter pluggable database PDB6 unplug into '/var/tmp/PDB6.xml'
Error report -
ORA-46680: master keys of the container database must be exported

This message is not clear. You don’t export the container database (CDB) key. You have to export the PDB ones.

Then, I have to open the PDB, switch to it, and export the key:

SQL> alter session set container=PDB6;
Session altered.
 
SQL> administer key management set keystore open identified by "k3yCDB1";
Key MANAGEMENT succeeded.
 
SQL> administer key management
2 export encryption keys with secret "this is my secret password for the export"
3 to '/var/tmp/PDB6.p12'
4 identified by "k3yCDB1"
5 /
 
Key MANAGEMENT succeeded.

Note that I opened the keystore with a password. If you use an autologin wallet, you have to close it, in the CDB$ROOT, and open it with the password.
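For reference, here is a minimal sketch of that switch, run in CDB$ROOT with the same keystore password as above (depending on the version, you may also need to move the auto-login cwallet.sso file aside first):

SQL> administer key management set keystore close;
SQL> administer key management set keystore open identified by "k3yCDB1";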

Now I can unplug the database:

SQL> alter pluggable database PDB6 close immediate;
Pluggable database PDB6 altered.
 
SQL> alter pluggable database PDB6 unplug into '/var/tmp/PDB6.xml';
Pluggable database PDB6 altered.

Plug PDB and Import keys

I’ll plug it in CDB2:

SQL> connect /@CDB2 as sysdba
Connected.
SQL> create pluggable database PDB6 using '/var/tmp/PDB6.xml' file_name_convert=('/CDB1/PDB6/','/CDB2/PDB6/');
Pluggable database PDB6 created.

When I open it, I get a warning:

18:05:45 SQL> alter pluggable database PDB6 open;
ORA-24344: success with compilation error
24344. 00000 - "success with compilation error"
*Cause: A sql/plsql compilation error occurred.
*Action: Return OCI_SUCCESS_WITH_INFO along with the error code
 
Pluggable database PDB6 altered.

The PDB is opened in restricted mode and then I have to import the wallet:

SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
------ -------- ---- ---- ----------
2 PDB$SEED READ ONLY NO
6 PDB6 READ WRITE YES
 
SQL> select name,cause,type,status,message,action from pdb_plug_in_violations;
 
NAME CAUSE TYPE STATUS MESSAGE ACTION
---- ----- ---- ------ ------- ------
PDB6 Wallet Key Needed ERROR PENDING PDB needs to import keys from source. Import keys from source.

Then I open the destination CDB wallet and import the PDB keys into it:

SQL> alter session set container=PDB6;
Session altered.
 
SQL> administer key management set keystore open identified by "k3yCDB2";
Key MANAGEMENT succeeded.
 
SQL> administer key management
2 import encryption keys with secret "this is my secret password for the export"
3 from '/var/tmp/PDB6.p12'
4 identified by "k3yCDB2"
5 with backup
6 /
 
Key MANAGEMENT succeeded.

Now the PDB can be opened for all sessions

SQL> alter session set container=CDB$ROOT;
Session altered.
 
SQL> alter pluggable database PDB6 close;
Pluggable database PDB6 altered.
 
SQL> alter pluggable database PDB6 open;
Pluggable database PDB6 altered.

Here is a confirmation that the PDB has the same key as in the origin CDB:

SQL> select con_id,tag,substr(key_id,1,6)||'...' "KEY_ID...",creator,key_use,keystore_type,origin,creator_pdbname,activating_pdbname from v$encryption_keys;
 
CON_ID TAG KEY_ID... CREATOR KEY_USE KEYSTORE_TYPE ORIGIN CREATOR_PDBNAME ACTIVATING_PDBNAME
------ --- --------- ------- ------- ------------- ------ --------------- ------------------
1 cdb2 AdTdo9... SYS TDE IN PDB SOFTWARE KEYSTORE LOCAL CDB$ROOT CDB$ROOT
4 pdb1 Adnhnu... SYS TDE IN PDB SOFTWARE KEYSTORE LOCAL PDB6 PDB6

 

This article Unplug an Encrypted PDB (ORA-46680: master keys of the container database must be exported) appeared first on the dbi services Blog.

Explain Plan format


The DBMS_XPLAN format accepts a lot of options, which are not all documented. Here is a small recap of available information.

The minimum that is displayed is the Plan Line Id, the Operation, and the Object Name. You can add columns and/or sections with options, such as ‘rows’, optionally starting with a ‘+’ like ‘+rows’. Some options group several additional pieces of information, such as ‘typical’, which is also the default, or ‘basic’, ‘all’, ‘advanced’. You can choose one of them and remove some columns with ‘-‘, such as ‘typical -rows -bytes -cost -plan_hash -predicate -remote -parallel -partition -note’. Finally, from a cursor executed with plan statistics, you can show all execution statistics with ‘allstats’, and the last execution statistics with ‘allstats last’. Subsets of ‘allstats’ are ‘rowstats’, ‘memstats’, ‘iostats’, ‘buffstats’.

Of course, the column/section is displayed only if the information is present.

This blog post shows what is displayed by which option, as of 12cR2, and probably with some missing combinations.
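As a quick reminder of where these format strings go, here is a minimal sketch (the format values are just examples):

SQL> explain plan for select * from dual;
SQL> select * from table(dbms_xplan.display(format=>'basic +rows +cost +note'));
SQL> select * from table(dbms_xplan.display(format=>'typical -bytes +outline +alias'));

The same format argument is accepted by DBMS_XPLAN.DISPLAY_CURSOR for cursors in the library cache.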

+plan_hash, or BASIC


PLAN_TABLE_OUTPUT
-----------------
Plan hash value: 1338588353

Plan hash value: is displayed by ‘basic +plan_hash’ or ‘typical’ or ‘all’ or ‘advanced’

+rows +bytes +cost +partition +parallel, or TYPICAL


-----------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop | TQ/Ins |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 287 | 19516 | 5 (20)| 00:00:01 | | | | | |
| 1 | PX COORDINATOR | | | | | | | | | | |
| 2 | PX SEND QC (ORDER) | :TQ10002 | 287 | 19516 | 5 (20)| 00:00:01 | | | Q1,02 | P->S | QC (ORDER) |
| 3 | SORT ORDER BY | | 287 | 19516 | 5 (20)| 00:00:01 | | | Q1,02 | PCWP | |
| 4 | PX RECEIVE | | 287 | 19516 | 4 (0)| 00:00:01 | | | Q1,02 | PCWP | |
| 5 | PX SEND RANGE | :TQ10001 | 287 | 19516 | 4 (0)| 00:00:01 | | | Q1,01 | P->P | RANGE |
|* 6 | HASH JOIN | | 287 | 19516 | 4 (0)| 00:00:01 | | | Q1,01 | PCWP | |
| 7 | PX BLOCK ITERATOR | | 14 | 532 | 2 (0)| 00:00:01 | 1 | 1 | Q1,01 | PCWC | |
| 8 | TABLE ACCESS FULL | EMP | 14 | 532 | 2 (0)| 00:00:01 | 1 | 1 | Q1,01 | PCWP | |
| 9 | BUFFER SORT | | | | | | | | Q1,01 | PCWC | |
| 10 | PX RECEIVE | | 82 | 2460 | 2 (0)| 00:00:01 | | | Q1,01 | PCWP | |
| 11 | PX SEND BROADCAST| :TQ10000 | 82 | 2460 | 2 (0)| 00:00:01 | | | | S->P | BROADCAST |
| 12 | REMOTE | DEPT | 82 | 2460 | 2 (0)| 00:00:01 | | | LOOPB~ | R->S | |
-----------------------------------------------------------------------------------------------------------------------------------

Rows or E-Rows: is displayed by ‘basic +rows’ or ‘typical’ or ‘all’ or ‘advanced’
Bytes or E-Bytes: is displayed by ‘basic +bytes’ or ‘typical’ or ‘all’ or ‘advanced’
Cost: is displayed by ‘basic +cost’ or ‘typical’ or ‘all’ or ‘advanced’
TmpSpc or E-Temp: is displayed by ‘basic +bytes’ or ‘typical’ or ‘all’ or ‘advanced’
Time or E-Time: is displayed by ‘typical’ or ‘all’ or ‘advanced’
Pstart/Pstop: is displayed by ‘basic +partition’ or ‘typical’ or ‘all’ or ‘advanced’
TQ/Ins, IN-OUT, PQ Distrib: is displayed by ‘basic +parallel’ or ‘typical’ or ‘all’ or ‘advanced’

The ‘A-’ and ‘E-’ prefixes are used when displaying execution statistics, to differentiate estimations from actual numbers

+alias


Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
 
1 - SEL$58A6D7F6
8 - SEL$58A6D7F6 / EMP@SEL$1
12 - SEL$58A6D7F6 / DEPT@SEL$1

Query Block Name / Object Alias: is displayed by ‘basic +alias’ or ‘typical +alias’ or ‘all’ or ‘advanced’

+outline


Outline Data
-------------
 
/*+
BEGIN_OUTLINE_DATA
PQ_DISTRIBUTE(@"SEL$58A6D7F6" "DEPT"@"SEL$1" NONE BROADCAST)
USE_HASH(@"SEL$58A6D7F6" "DEPT"@"SEL$1")
LEADING(@"SEL$58A6D7F6" "EMP"@"SEL$1" "DEPT"@"SEL$1")
FULL(@"SEL$58A6D7F6" "DEPT"@"SEL$1")
FULL(@"SEL$58A6D7F6" "EMP"@"SEL$1")
OUTLINE(@"SEL$1")
OUTLINE(@"SEL$2")
MERGE(@"SEL$1" >"SEL$2")
OUTLINE_LEAF(@"SEL$58A6D7F6")
ALL_ROWS
DB_VERSION('12.2.0.1')
OPTIMIZER_FEATURES_ENABLE('12.2.0.1')
IGNORE_OPTIM_EMBEDDED_HINTS
END_OUTLINE_DATA
*/

Outline Data: is displayed by ‘basic +outline’ or ‘typical +outline’ or ‘all +outline’ or ‘advanced’

+peeked_binds


Peeked Binds (identified by position):
--------------------------------------
 
1 - :X (VARCHAR2(30), CSID=873): 'x'

Peeked Binds: is displayed by ‘basic +peeked_binds’ or ‘typical +peeked_binds’ or ‘all +outline’ or ‘advanced’

+predicate


Predicate Information (identified by operation id):
---------------------------------------------------
 
6 - access("EMP"."DEPTNO"="DEPT"."DEPTNO")

Predicate Information: is displayed by ‘basic +predicate’ or ‘typical’ or ‘all’ or ‘advanced’

+column


Column Projection Information (identified by operation id):
-----------------------------------------------------------
 
1 - INTERNAL_FUNCTION("DEPT"."DEPTNO")[22], "EMP"."EMPNO"[NUMBER,22], "EMP"."ENAME"[VARCHAR2,10],
"EMP"."JOB"[VARCHAR2,9], "EMP"."MGR"[NUMBER,22], "EMP"."HIREDATE"[DATE,7], "EMP"."SAL"[NUMBER,22],
"EMP"."COMM"[NUMBER,22], "DEPT"."DNAME"[VARCHAR2,14], "DEPT"."LOC"[VARCHAR2,13]
2 - (#keys=0) INTERNAL_FUNCTION("DEPT"."DEPTNO")[22], "EMP"."EMPNO"[NUMBER,22], "EMP"."ENAME"[VARCHAR2,10],
"EMP"."JOB"[VARCHAR2,9], "EMP"."MGR"[NUMBER,22], "EMP"."HIREDATE"[DATE,7], "EMP"."SAL"[NUMBER,22],
"EMP"."COMM"[NUMBER,22], "DEPT"."DNAME"[VARCHAR2,14], "DEPT"."LOC"[VARCHAR2,13]
3 - (#keys=1) INTERNAL_FUNCTION("DEPT"."DEPTNO")[22], "EMP"."EMPNO"[NUMBER,22], "EMP"."ENAME"[VARCHAR2,10],
"EMP"."JOB"[VARCHAR2,9], "EMP"."MGR"[NUMBER,22], "EMP"."HIREDATE"[DATE,7], "EMP"."SAL"[NUMBER,22],
"EMP"."COMM"[NUMBER,22], "DEPT"."DNAME"[VARCHAR2,14], "DEPT"."LOC"[VARCHAR2,13]
4 - INTERNAL_FUNCTION("DEPT"."DEPTNO")[22], "EMP"."EMPNO"[NUMBER,22], "EMP"."ENAME"[VARCHAR2,10],
"EMP"."JOB"[VARCHAR2,9], "EMP"."MGR"[NUMBER,22], "EMP"."HIREDATE"[DATE,7], "EMP"."SAL"[NUMBER,22],
"EMP"."COMM"[NUMBER,22], "DEPT"."DNAME"[VARCHAR2,14], "DEPT"."LOC"[VARCHAR2,13]

Column Projection Information: is displayed by ‘basic +projection’ or ‘typical +projection’ or ‘all’ or ‘advanced’

+remote


Remote SQL Information (identified by operation id):
----------------------------------------------------
 
12 - SELECT "DEPTNO","DNAME","LOC" FROM "DEPT" "DEPT" (accessing 'LOOPBACK' )

Remote SQL Information: is displayed by ‘basic +remote’ or ‘typical’ or ‘all’ or ‘advanced’

+metrics


Sql Plan Directive information:
-------------------------------
 
Used directive ids:
9695481911885124390

Sql Plan Directive information: is displayed by ‘+metrics’

+note

The Note section can show information about SQL Profiles, SQL Patch, SQL Plan Baseline, Outlines, Dynamic Sampling, Degree of Parallelism, Parallel Query, Parallel DML, Create Index Size, Cardinality Feedback, Rely Constraints used for transformation, Sub-Optimal XML, Adaptive Plan, GTT private statistics,…


Note
-----
- Degree of Parallelism is 2 because of table property
- dynamic statistics used: dynamic sampling (level=2)
- 1 Sql Plan Directive used for this statement
- this is an adaptive plan (rows marked '-' are inactive)

Note: is displayed by ‘basic +note’ or ‘typical’ or ‘all’ or ‘advanced’

+adaptive


---------------------------------------------------------------------------------------
| Id | Operation | Name |Starts|E-Rows| A-Rows|
---------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 0 |
| 1 | HASH UNIQUE | | 1 | 1 | 0 |
| * 2 | HASH JOIN SEMI | | 1 | 1 | 0 |
|- 3 | NESTED LOOPS SEMI | | 1 | 1 | 7 |
|- 4 | STATISTICS COLLECTOR | | 1 | | 7 |
| * 5 | TABLE ACCESS FULL | DEPARTMENTS | 1 | 1 | 7 |
|- * 6 | TABLE ACCESS BY INDEX ROWID BATCHED| EMPLOYEES | 0 | 1 | 0 |
|- * 7 | INDEX RANGE SCAN | EMP_DEP_IX | 0 | 10 | 0 |
| * 8 | TABLE ACCESS FULL | EMPLOYEES | 1 | 1 | 1 |
---------------------------------------------------------------------------------------

Inactive branches of adaptive plan: is displayed by ‘+adaptive’

+report


Reoptimized plan:
-----------------
This cursor is marked for automatic reoptimization. The plan that is
expected to be chosen on the next execution is displayed below.

Reoptimized plan: is displayed by ‘+report’

ALLSTATS


---------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | O/1/M |
---------------------------------------------------------------------------------------------------------------------------

Starts: is displayed by ‘basic +rowstats’, ‘basic +allstats’
A-Rows: is displayed by ‘basic +rowstats’, ‘basic +allstats’
A-Time: is displayed by ‘typical +rowstats’, ‘basic +allstats’
Buffers, Reads, Writes: is displayed by ‘basic +buffstats’, ‘basic +iostats’, ‘basic +allstats’
OMem, 1Mem, Used-Mem, O/1/M, Used-Mem: is displayed by ‘basic +memstats’, ‘basic +allstats’
Max-Tmp,Used-Tmp is displayed by ‘basic +memstats’, ‘typical +allstats’

With summed stats, O/1/M and Max-Tmp are used for the headers. With last stats, Used-Mem and Used-Tmp.
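Note that these runtime columns are only populated when rowsource statistics have been collected for the cursor, for example with the gather_plan_statistics hint or statistics_level=all. A minimal sketch to display the last execution with them:

SQL> select /*+ gather_plan_statistics */ count(*) from dual;
SQL> select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));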

 

This article Explain Plan format appeared first on the dbi services Blog.

Testing Oracle SQL online


Want to test some DDL, a query, or check an execution plan? You need only a browser. And you can copy-paste, or simply link, your test-case in a forum, an e-mail, or a tweet. Here is a small list (expected to grow from your comments) of free online services which can run an Oracle Database: SQL Fiddle, Rextester, db<>fiddle and Oracle Live SQL.

SQL Fiddle

SQL Fiddle lets you build a schema and run DDL on the following databases:

  • Oracle 11gR2
  • Microsoft SQL Server 2014
  • MySQL 5.6
  • Postgres 9.6 and 9.3
  • SQLLite (WebSQL and SQL.js)

For an Oracle user, Oracle 11gR2 is not very useful as it is a version from 2010. But there’s a simple reason for that: it is the latest free version – the Oracle XE edition. And a free online service can run only free software. Now that Oracle plans to release an XE version every year, this should get better soon.

Example: http://sqlfiddle.com/#!4/42960/1/0

CaptureSQLfiddle

Rextester

Rextester is a service to compile code online, in a lot of languages and also the following databases:

  • Oracle 11gR2
  • Microsoft SQL Server 2014
  • MySQL 5.7
  • PostgreSQL 9.6

Example: http://rextester.com/QCYJF41984

Rextester also has an API where you can run a query and get a JSON answer:

$ curl -s --request POST --data 'LanguageChoice=35&Program=select * from dual' http://rextester.com/rundotnet/api
{"Warnings":null,"Errors":null,"Result":"\u003ctable class=\"sqloutput\"\u003e\u003ctbody\u003e\u003ctr\u003e\u003cth\u003e\u0026nbsp;\u0026nbsp;\u003c/th\u003e\r\n\u003cth\u003eDUMMY\u003c/th\u003e\r\n\u003c/tr\u003e\r\n\u003ctr\u003e\u003ctd\u003e1\u003c/td\u003e\r\n\u003ctd\u003eX\u003c/td\u003e\r\n\u003c/tr\u003e\r\n\u003c/tbody\u003e\u003c/table\u003e\r\n","Stats":"absolute service time: 1,37 sec","Files":null}

The answer has the result as an HTML table:

$ curl -s --request POST --data 'LanguageChoice=35&Program=select * from dual' http://rextester.com/rundotnet/api | jq -r .Result
<table class="sqloutput"><tbody><tr><th>&nbsp;&nbsp;</th>
<th>DUMMY</th>
</tr>
<tr><td>1</td>
<td>X</td>
</tr>
</tbody></table>

Here is my SELECT * FROM DUAL:

$ curl -s --request POST --data 'LanguageChoice=35&Program=select * from dual' http://rextester.com/rundotnet/api | jq -r .Result | lynx -dump -stdin
DUMMY
1 X

Capturerextester

db<>fiddle

db<>fiddle has a very nice interface, easy to link and easy to paste to StackOverflow (click on ‘markdown’)

  • Oracle 11gR2
  • SQL Server 2014 2016 2017, and even 2017 Linux version.
  • MariaDB 10.2
  • SQLite 3.8
  • PostgreSQL 8.4 9.4 9.6 10

Example: http://dbfiddle.uk/?rdbms=oracle_11.2&fiddle=948a067dd17780ca65b01243751c2cb0

Capturedbfiddle

Oracle Live SQL

Finally, you can also run on the latest release of Oracle, with a service provided by Oracle itself: Live SQL.

  • Oracle 12cR2 (an early build from October 2016)

Example: https://livesql.oracle.com/apex/livesql/s/f6ydueahcslf66dlynagw9s3w

CaptureLiveSQL

 

This article Testing Oracle SQL online appeared first on the dbi services Blog.

Result Cache: when *not* to use it


I recently encountered a case where the result cache was incorrectly used, leading to high contention when the application encountered a peak of load. It was not a surprise when I saw that the function was called with an ‘ID’ as argument, which may have thousands of values in this system. I mentioned to the software vendor that the result cache must be used only for functions frequently called with the same arguments, not for random values, even if each value has 2 or 3 identical calls. And, to detail this, I looked at the Oracle Documentation to link to the part which explains when the result cache can be used and when it should be avoided.

But I’ve found nothing relevant. This is another(*) case where the Oracle Documentation is completely useless. Without explaining how a feature works, you completely fail to get this feature used properly. Most people will not take the risk to use it, and a few will use it in the wrong place, before blacklisting this feature definitively.

(*) By another case, I’m thinking about Kamil Stawiarski’s presentation about Pragma UDF and the lack of useful documentation about it.

Oracle documentation

So this is what I’ve found in the Database Performance Tuning Guide about the Benefits of Using the Server Result Cache:

  1. The benefits of using the server result cache depend on the application
  2. OLAP applications can benefit significantly from its use.
  3. Good candidates for caching are queries that access a high number of rows but return a small number, such as those in a data warehouse.

So, this is vague (‘depends’, ‘can benefit’, ‘good candidates’) and doesn’t help to decide when it can be used.
The ‘access a high number of rows but return a small number’ is an indication of why cache hits can be beneficial. However, there is no mention of the most important things, which are:

  • The cached result is invalidated by any DML on the tables the result relies on.
  • A cache miss, when the result has been invalidated, is expensive
  • A cache miss, when the result is not in the result cache yet, is expensive
  • The ‘expensive’ here is a scalability issue: not detected in unit tests, but big contention when the load increases

Real things to know

The first thing to know is that the Result Cache memory is protected by a latch:

SQL> select addr,name,gets,misses,sleeps,spin_gets,wait_time from v$latch where name like 'Result Cache%';
 
ADDR NAME GETS MISSES SLEEPS SPIN_GETS WAIT_TIME
---------------- ------------------------- ---------- ---------- ---------- ---------- ----------
00000000600477D0 Result Cache: RC Latch 2 0 0 0 0
0000000060047870 Result Cache: SO Latch 0 0 0 0 0
0000000060047910 Result Cache: MB Latch 0 0 0 0 0

This latch has no children:

SQL> select * from v$latch_children where name like '%Result Cache%';
 
no rows selected

Only one latch to protect the whole result cache: concurrent sessions – even for different functions – have to serialize their access on the same latch.

This latch is acquired in exclusive mode when the session has to write to the result cache (cache miss, invalidation,…) or, since 11gR2, in shared mode when reading only. This has been explained by Alex Fatkulin: http://afatkulin.blogspot.ch/2012/05/result-cache-latch-in-11gr2-shared-mode.html.

This means that, whatever the Oracle Documentation says, the benefit of result cache comes only at cache hit: when the result of the function is already there, and has not been invalidated. If you call the same function with always the same parameter, frequently, and with no changes in the related tables, then we are in the good case.

But if there was a modification of one of the tables, even of some rows that have nothing to do with the result, then you will have an overhead: an exclusive latch get. And if you call the function with new values for the arguments, that’s also a cache miss which has to get this exclusive latch. And if you have multiple sessions experiencing a cache miss, then they will spin on CPU to get the exclusive latch. This can be disastrous with a large number of sessions. I have seen this kind of contention for hours with connection pools set to 100 sessions, when the function was frequently called with different values.
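Before going down to the latches, a quick way to see which case you are in on a running system is to look at the result cache statistics and objects. A simple sketch (standard V$ views, nothing specific to this demo):

SQL> select name,value from v$result_cache_statistics;
SQL> select type,status,count(*) from v$result_cache_objects group by type,status;

A find count much higher than the create count, and few invalid objects, is the good case; many creations and invalidations mean that you are paying the exclusive latch gets demonstrated below.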

To show it, I create a demo table (just to have a dependency) and a result_cache function:

SQL> create table DEMO as select rownum n from xmltable('1 to 1000');
Table created.
 
SQL> create or replace function F(n number) return number result_cache as begin for i in (select * from DEMO where DEMO.n=F.n) loop return i.n; end loop; end;
2 /
Function created.

I have just restarted the instance and my latch statistics are reset:

SQL> select addr,name,gets,misses,sleeps,spin_gets,wait_time from v$latch where name like 'Result Cache%';
 
ADDR NAME GETS MISSES SLEEPS SPIN_GETS WAIT_TIME
---------------- ------------------------- ---------- ---------- ---------- ---------- ----------
00000000600477D0 Result Cache: RC Latch 2 0 0 0 0
0000000060047870 Result Cache: SO Latch 0 0 0 0 0
0000000060047910 Result Cache: MB Latch 0 0 0 0 0

Result Cache Hit

This will call the function always with the same argument, and no change in the table it relies on:
SQL> declare n number; begin for i in 1..1e3 loop n:=n+f(1); end loop; end;
2 /
PL/SQL procedure successfully completed.

So, the first call is a cache miss and the 999 next calls are cache hits. This is the perfect case for Result Cache.

SQL> select addr,name,gets,misses,sleeps,spin_gets,wait_time from v$latch where name like 'Result Cache%';
 
ADDR NAME GETS MISSES SLEEPS SPIN_GETS WAIT_TIME
---------------- ------------------------- ---------- ---------- ---------- ---------- ----------
00000000600477D0 Result Cache: RC Latch 1009 0 0 0 0
0000000060047870 Result Cache: SO Latch 1 0 0 0 0
0000000060047910 Result Cache: MB Latch 0 0 0 0 0

So, that’s about 1000 latch gets. With cache hits you get the latch once per execution, and this is a shared latch, so no contention here.
You want to check that it is a shared latch? Just set a breakpoint with gdb on the ksl_get_shared_latch function (up to 12.1 because 12.2 uses ksl_get_shared_latch_int) and print the arguments (as explained by Stefan Koehler and Frits Hoogland):

As my RC latch is at address 00000000600477D0 I set a breakpoint on ksl_get_shared_latch where the first argument is 0x600477d0 and display the other arguments:

break ksl_get_shared_latch
condition 1 $rdi == 0x600477d0
commands
silent
printf "ksl_get_shared_latch laddr:%x, willing:%d, where:%d, why:%d, mode:%d\n", $rdi, $rsi, $rdx, $rcx, $r8
c
end

Then one call with cache hit displays:

ksl_get_shared_latch laddr:600477d0, willing:1, where:1, why:5358, mode:8

Mode 8 is shared: many concurrent sessions can do the same without waiting. Shared is scalable: cache hits are scalable.

Cache miss – result not in cache

Here each call will have a different value for the argument, so that they are all cache misses (except the first one):

SQL> declare n number; begin for i in 1..1e3 loop n:=n+f(i); end loop; end;
2 /
PL/SQL procedure successfully completed.

Now the ‘RC latch’ statistics have increased further:

SQL> select addr,name,gets,misses,sleeps,spin_gets,wait_time from v$latch where name like 'Result Cache%';
 
ADDR NAME GETS MISSES SLEEPS SPIN_GETS WAIT_TIME
---------------- ------------------------- ---------- ---------- ---------- ---------- ----------
00000000600477D0 Result Cache: RC Latch 6005 0 0 0 0
0000000060047870 Result Cache: SO Latch 1 0 0 0 0
0000000060047910 Result Cache: MB Latch 0 0 0 0 0

This is about 5000 additional latch gets, which means 5 per execution. And, because it writes, you can expect them to be exclusive.

Here is my gdb script output when I call the function with a value that is not already in cache:

ksl_get_shared_latch laddr:600477d0, willing:1, where:1, why:5358, mode:8
ksl_get_shared_latch laddr:600477d0, willing:1, where:1, why:5347, mode:16
ksl_get_shared_latch laddr:600477d0, willing:1, where:1, why:5358, mode:16
ksl_get_shared_latch laddr:600477d0, willing:1, where:1, why:5374, mode:16

Mode 16 is exclusive. And we have 3 of them in addition to the shared one. You can imagine what happens when several sessions are running this: spin and wait, all sessions on the same resource.

Cache miss – result in cache but invalid

I run the same again, where all values are in cache now:

SQL> declare n number; begin for i in 1..1e3 loop n:=n+f(i); end loop; end;
2 /
PL/SQL procedure successfully completed.

So this is only 1000 additional gets:

SQL> select addr,name,gets,misses,sleeps,spin_gets,wait_time from v$latch where name like 'Result Cache%';
 
ADDR NAME GETS MISSES SLEEPS SPIN_GETS WAIT_TIME
---------------- ------------------------- ---------- ---------- ---------- ---------- ----------
00000000600477D0 Result Cache: RC Latch 7005 0 0 0 0
0000000060047870 Result Cache: SO Latch 1 0 0 0 0
0000000060047910 Result Cache: MB Latch 0 0 0 0 0

The function depends on DEMO table, and I do some modifications on it:

SQL> insert into DEMO values (0)
1 row created.
SQL> commit;
Commit complete.

This has invalidated all previous results. A new run will be all cache misses:

SQL> declare n number; begin for i in 1..1e3 loop n:=n+f(i); end loop; end;
2 /
PL/SQL procedure successfully completed.

And this is 5000 additional gets:

SQL> select addr,name,gets,misses,sleeps,spin_gets,wait_time from v$latch where name like 'Result Cache%';
 
ADDR NAME GETS MISSES SLEEPS SPIN_GETS WAIT_TIME
---------------- ------------------------- ---------- ---------- ---------- ---------- ----------
00000000600477D0 Result Cache: RC Latch 12007 0 0 0 0
0000000060047870 Result Cache: SO Latch 1 0 0 0 0
0000000060047910 Result Cache: MB Latch 0 0 0 0 0

So what?

The important thing to know is that each cache miss requires exclusive access to the Result Cache, multiple times. Those must be avoided. The Result Cache is good for a static set of results. It is not a short-term cache to work around an application design where the function is called two or three times with the same values. This is, unfortunately, not explained in the Oracle Documentation. But it becomes obvious when we look at the implementation, or when we load test it with multiple sessions. The consequence can be this kind of high contention lasting minutes or hours:

Top 5 Timed Events Avg %Total
~~~~~~~~~~~~~~~~~~ wait Call
Event Waits Time (s) (ms) Time
----------------------------------------- ------------ ----------- ------ ------
latch free 858,094 1,598,387 1863 78.8
enq: RC - Result Cache: Contention 192,855 259,563 1346 12.8

Without either knowledge of the implementation or relevant load tests, the risk is that a developer relies on his good unit-testing results and implements Result Cache in each function. The consequence will be seen too late, in production, at peak load. If this happens to you, you can disable the result cache (DBMS_RESULT_CACHE.BYPASS(TRUE);) but the risk is performance degradation in the 'good cases'. Or you can recompile the procedures without RESULT_CACHE, but then you may bring new contention on the library cache.
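
If you suspect this kind of contention, here is a minimal sketch to check the situation and, if needed, flip the switch mentioned above (run as a DBA user; the exact counter names may vary slightly between versions):

sqlplus -s / as sysdba <<'SQL'
-- hit/miss counters: many 'Create Count Success' relative to 'Find Count' means frequent cache misses
select name,value from v$result_cache_statistics;
-- emergency switch: bypass the result cache instance-wide
exec dbms_result_cache.bypass(true);
SQL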

 

Cet article Result Cache: when *not* to use it est apparu en premier sur Blog dbi services.

Migrate Windows VM with more than 4 Disks from VMware to OVM


Suppose you got an OVA image created on VMware and the VM contains more than 4 Disks
and you have to migrate this machine from VMware to OVM.

As first step you import the OVA into the OVM in the usual way:

Bildschirmfoto 2018-02-02 um 09.43.12

You see the that the appliance was imported successfully, we have 5 disks:

Bildschirmfoto 2018-02-02 um 09.45.25

Now you create your VM from the imported appliance:

Bildschirmfoto 2018-02-02 um 09.47.34

So far so good, lets have a look on our newly created VM:

Bildschirmfoto 2018-02-02 um 09.51.22

All seems good, but if you edit the machine (we want to add a network and give a boot order for the system) you will be surprised:

Bildschirmfoto 2018-02-02 um 10.11.35

Oops, we lost a disk. What happened? You didn't make a mistake, it's a restriction in OVM:
http://www.oracle.com/us/technologies/virtualization/ovm3-supported-config-max-459314.pdf states that a VM can have a maximum of 4 IDE disks, and if you import your Windows VM it is considered a Xen HVM domain type, so you can only attach 4 disks to the VM.

And now, how can we solve the problem? Let's first check whether the system boots:

Bildschirmfoto 2018-02-02 um 10.25.32

OK, the system is up. What next? We uninstall all the VMware utilities:

Bildschirmfoto 2018-02-02 um 10.35.40

For the next step we download the Oracle VM Server for x86 Windows PV Drivers – 3.4.2.0.0 for Microsoft Windows x64 (64-bit) from https://edelivery.oracle.com and install them on our Windows Box:

Bildschirmfoto 2018-02-02 um 11.04.28

After a system restart, all disks except the C: drive are gone:

Bildschirmfoto 2018-02-02 um 11.08.51

We shut down the Windows box and switch the VM to the Xen HVM PV Drivers domain type:

Bildschirmfoto 2018-02-02 um 12.07.39

After that we can add our lost disk without any problems:

Bildschirmfoto 2018-02-02 um 12.09.00

OK, let's restart the system and see what happens:

Bildschirmfoto 2018-02-02 um 12.16.48

OK, all disks are there; we can now bring them online:

Bildschirmfoto 2018-02-02 um 12.59.29

After a reboot we can see that our drives are used correctly:

ARC3: Archival started
LGWR: STARTING ARCH PROCESSES COMPLETE
Thread 1 opened at log sequence 4743
Current log# 3 seq# 4743 mem# 0: I:\ORADATA\TESTIDM1\REDO03.LOG
Successful open of redo thread 1

Cet article Migrate Windows VM with more than 4 Disks from VMware to OVM est apparu en premier sur Blog dbi services.


Introducing SQL Server on Docker Swarm orchestration


SQL Server 2017 is available on multiple platforms: Windows, Linux and Docker. The latter provides containerization features with no lengthy setup and no special prerequisites before running your SQL Server databases, which is probably the key to its adoption by developers.

It was my case as a developer for our DMK management kit, which provides our customers with a SQL Server database maintenance solution on all editions from SQL Server 2005 to SQL Server 2017 (including Linux). In the context of our DMK, we have to develop for different versions of SQL Server, including cumulative updates and service packs that may provide new database maintenance features, and it may be challenging when we often have to release a new development or fix and to perform unit tests on different SQL Server versions or platforms. At this stage you may certainly claim that virtualization already addresses those requirements, and you're right, because we have used a lot of provisioned virtual machines on Hyper-V so far.

The obvious question is: why switch to Docker container technologies? Well, for different reasons. Firstly, sharing my SQL Server containers and test databases with my team is pretty straightforward. We may use a Docker registry and the docker push / pull commands. Then, provisioning a new SQL Server instance is quicker with containers than with virtual machines and generally leads to a lower CPU / memory / disk footprint on my laptop. I talked a little bit about it in the last SQL Pass meetup in Geneva, by the way.

But in this blog post I would like to take another step with Docker and go beyond the development area. As DBAs, we may have to deal with container management in production in the near future (unless it is already done for you :) ) and we need to get a more global picture of the Docker ecosystem. I remember a discussion with an attendee during my SQL Server Docker and Microservices session at the last TugaIT 2017 Lisbon who told me Docker and containers are only for developers and not suitable for production. At the time of this discussion, I had to admit he was not entirely wrong. Firstly, let's say that, as with virtualization before, container-based application adoption will probably take time. This is at least what I conclude from my experience and from what I notice around me, even if DevOps and microservices architectures seem to improve the situation. This is probably because production environments introduce other challenges and key factors than those we have in the development area, such as service availability, patching or upgrading, monitoring and alerting, performance…. At the same time, Docker and, more generally speaking, container technologies are constantly maturing, as well as the tools to manage such infrastructures, and in the production area, as you know, we prefer to be safe and there is no room for unstable and non-established products that may compromise the core business.

So, we may wonder what the part of the DBAs is in all of this. Well, regardless of the underlying infrastructure, we still have the same responsibilities: to provide configuration, to ensure databases are backed up, to manage and maintain data including performance, to prevent security threats and finally to guarantee data availability. In fact, looking back to the last decade, we already faced exactly the same situation with the emergence of the virtualization paradigm, when we had to install our SQL Server instances in such infrastructures. I still remember some reluctance and heated discussions from DBAs.

From my side, I always keep in mind high availability and performance because they are the main concerns of my customers when it comes to production environments. So, I was curious to dig further into container technologies in this area, starting with how to deal with the different orchestration tools. The main leaders on the market are probably Docker Swarm, Kubernetes, Mesosphere, CoreOS fleet (recently acquired by RedHat), RedHat OpenShift, Amazon ECS and Azure Container Services.

In this first blog, I decided to write about the Docker Swarm orchestrator, probably because I was already comfortable with native Docker commands and Docker Swarm offers an additional set of docker commands. When going into details, the interesting point is that I discovered plenty of other concepts which led me to realize I was reaching another world … a production world. This time it is not just about pulling / pushing containers for sure :) Before you keep reading this blog post, it is important to clarify that it is not intended to teach how to implement Docker Swarm. The Docker web site is well designed for that. My intention is just to highlight some important key features I think DBAs should be aware of before starting to manage container infrastructures.

Firstly, implementing a Docker Swarm requires being familiar with some key architecture concepts. Fortunately, most of them are easy to understand if you are already comfortable with SQL Server and cluster-based architectures including SQL FCIs or availability groups.

Let’s have a look at the main components:

  • Nodes: A node is just an instance of docker engine participating in swarm
  • Manager nodes: They are firstly designed to dispatch units of work (called tasks, tied to containers) to worker nodes according to your service definition
  • Worker nodes: Receive and execute tasks dispatched from manager nodes

Here was my first implementation of my Docker lab infrastructure:

blog 127 - 0 - swarm architecture lab

It was composed of 3 docker nodes, one of which acted as both worker and manager. Obviously, this is not an ideal scenario to implement in production because this architecture lacks a fault-tolerant design. But anyway, that was enough to start with my basic container labs.

I use Docker Server version 17.12.0-ce and, as shown below, swarm mode is enabled.

$sudo docker info
Containers: 13
 Running: 4
 Paused: 0
 Stopped: 9
Images: 37
Server Version: 17.12.0-ce
Storage Driver: overlay2
 Backing Filesystem: xfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: active
 NodeID: 7a9v6uv6jur5cf8x3bi6ggpiz
 Is Manager: true
 ClusterID: 63pdntf40nzav9barmsnk91hb
 Managers: 1
 Nodes: 3
 Orchestration:
  Task History Retention Limit: 5
…

$ sudo docker node ls
ID                            HOSTNAME                      STATUS              AVAILABILITY        MANAGER STATUS
s6pu7x3htoxjqvg9vilkoffj1     sqllinux2.dbi-services.test   Ready               Active
ptcay2nq4uprqb8732u8k451a     sqllinux3.dbi-services.test   Ready               Active
7a9v6uv6jur5cf8x3bi6ggpiz *   sqllinux.dbi-services.test    Ready               Active              Leader

Here is the IP address of the manager (sqllinux node) used during the swarm initialization with the --advertise-addr parameter. To put it simply, this is the address used by other nodes to connect to this node during the joining phase.
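
For reference, here is a minimal sketch of how such a swarm can be initialized and joined (the worker token below is a placeholder, printed by the init command):

# on the manager node, advertise the address the other nodes will use
sudo docker swarm init --advertise-addr 192.168.40.20
# on each worker node, join the swarm through the manager's port 2377
sudo docker swarm join --token <worker-token> 192.168.40.20:2377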

There are plenty of options to configure and change the behavior of the swarm. Some of them concern resource and placement management, and as a DBA it makes sense to know how such infrastructures behave with your database environments regarding these settings. Maybe in a next blog post.

The IP address of the manager is reachable from the host operating system:
$ ip addr show eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:15:5d:00:13:4b brd ff:ff:ff:ff:ff:ff
    inet 192.168.40.20/24 brd 192.168.40.255 scope global eth0

 

I also opened the required ports on each node (a sketch of the corresponding firewall-cmd commands follows the list):

  • TCP port 2376 for secure docker client communication (Docker machine)
  • TCP port 2377 for cluster management communications
  • TCP and UDP port 7946 for communication among nodes
  • UDP port 4789 for overlay network traffic (container ingress networking)
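
A minimal sketch of opening those ports with firewalld, to be run on each node, could be:

sudo firewall-cmd --permanent --add-port=2376/tcp --add-port=2377/tcp
sudo firewall-cmd --permanent --add-port=7946/tcp --add-port=7946/udp --add-port=4789/udp
sudo firewall-cmd --reload

Here is the resulting configuration on one of my nodes (it also includes a few application ports):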
$ sudo firewall-cmd --list-all
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: eth0
  sources:
  services: dhcpv6-client ssh nfs mountd rpc-bind
  ports: 2376/tcp 2377/tcp 7946/tcp 7946/udp 4789/udp 80/tcp 1433/tcp 8080/tcp
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:

 

After initializing the swarm, the next step consists in deploying containers on it. The point here is that swarm mode changes the game a little bit because you have to deal with service or stack (collection of services) deployment rather than container deployment. It does not mean you cannot deploy containers directly, but you won't benefit from the swarm features in that case.

Let’s continue with stack / service model on docker. Because a picture is often worth a thousand words I put an overview of relationship between stacks, services and tasks.

blog 127 - 1 - swarm stack service task relationship

You may find the definition of each component in the Docker documentation but let's make a brief summary of the important things: services are the primary location of interactions with the swarm and include the definition of the tasks to execute on the manager or worker nodes. Then tasks carry Docker containers and the commands to run inside them. Maybe the most important thing to keep in mind here: /!\ Once a task is assigned to a node, it cannot move to another node. It can only run on the assigned node or fail /!\. This is a different behavior from virtualized architectures, where you may use different features to manually move one virtual machine from one host to another (vMotion, DRS for VMware …). Finally, a stack is just a collection of services (1-N) that make up an application on a specific environment.

From a user perspective, you may deploy a service directly or go through a stack definition if you have to deal with an application composed of several services and relationships between them. For the latter, you may guess that this model is pretty suitable for microservices architectures. These concepts may seem obscure but with practice they become clearer.

But just before introducing the service deployment models, one aspect we did not cover so far is the storage layout. Docker has long been considered as designed for stateless applications, and storage persistence as a weakness in the database world. Furthermore, from a container perspective, it is always recommended to isolate the data from the container to retain the benefits of adopting containerization. Data management should be separate from the container lifecycle. Docker has managed to overcome this issue by providing different ways to persist data outside containers since version 1.9, including the capability to share volumes between containers on the same host (aka data volumes). But thinking about production environments, customers will certainly deploy docker clusters, rendering these options useless and the containers non-portable as well. In my context, I want to be able to share data containers on different hosts, and the good news is that Docker provides distributed filesystem capabilities. I picked NFS for convenience but other solutions exist, like Ceph or GlusterFS for instance. A direct mapping between my host directory and the directory inside my SQL Server container, over distributed storage based on an NFS share, seems to work well in my case. From a SQL Server perspective this is not an issue as long as you deploy the service with a maximum of one replica at a time, to avoid data corruption. My updated architecture is as follows:

blog 127 - 2 - swarm architecture lab with nfs

Here is the configuration from one node concerning the mount point based on the NFS share. Database files will be stored in /u01/sql2 in my case.

$ cat /etc/fstab
/dev/mapper/cl-root     /                       xfs     defaults        0 0
UUID=eccbc689-88c6-4e5a-ad91-6b47b60557f6 /boot                   xfs     defaults        0 0
/dev/mapper/cl-swap     swap                    swap    defaults        0 0
/dev/sdb1       /u99    xfs     defaults        0 0
192.168.40.14:/u01      /u01    nfs     nfsvers=4.2,timeo=14,intr       0 0

$ sudo showmount -e 192.168.40.14
Export list for 192.168.40.14:
/u01 192.168.40.22,192.168.40.21,192.168.40.20

 

My storage is in place, so let's continue with network considerations. As with virtualization products, you have different options to configure the network:

  • Bridge: Allows internal communication between containers on the same host
  • docker_gwbridge : Network created when the swarm is installed and it is dedicated for the communication between nodes
  • Ingress: All nodes participate by default in the ingress routing mesh. I will probably introduce this feature in my next blog post but let's say that the routing mesh enables each node in the swarm to accept connections on published ports for any service running in the swarm, even if there's no task running on the node
  • Overlay: The manager node automatically extends the overlay network to nodes that run service tasks to allow communication between host containers. Not available if you deploy containers directly

Let's add that Docker includes an embedded DNS server which provides DNS resolution among containers connected to the same user-defined network. A pretty useful feature when you deploy applications with dependent services!

So, I created 2 isolated networks. One is dedicated for back-end server’s communication (backend-server) and the other one for the front-end server’s communication (frontend-server).

$sudo docker network create \
  --driver overlay \
  --subnet 172.20.0.0/16 \
  --gateway 172.20.0.1 \
  backend-server

$sudo docker network create \
  --driver overlay \
  --subnet 172.19.0.0/16 \
  --gateway 172.19.0.1 \
  frontend-server

$ sudo docker network ls
NETWORK ID	NAME			DRIVER		SCOPE
oab2ck3lsj2o	backend-server      	overlay		swarm
1372e2d1c92f   	bridge			bridge       	local
aeb179876301  	docker_gwbridge     	bridge       	local
qmlsfg6vjdsb	frontend-server     	overlay  	swarm
8f834d49873e  	host			host		local
2dz9wi4npgjw  	ingress             	overlay         swarm

We are finally able to deploy our first service based on the SQL Server on Linux image:

$ sudo docker service create \
   --name "sql2" \
   --mount 'type=bind,src=/u01/sql2,dst=/var/opt/mssql' \
   --replicas 1 \
   --network backend-server \
   --env "ACCEPT_EULA=Y" --env "MSSQL_SA_PASSWORD=P@$$w0rd1" \
   --publish published=1500,target=1433 \
   microsoft/mssql-server-linux:2017-latest

Important settings are:

  • --name "sql2" = Name of the service to deploy
  • --replicas 1 = We tell the manager to deploy only one replica of the SQL Server container at a time on the docker workers
  • --mount 'type=bind,src=…,dst=…' = Here we define the data persistence strategy. It maps the /u01/sql2 directory on the host to the /var/opt/mssql directory within the container. If we shut down or remove the container, the data is persisted. If the container moves to another docker node, the data is still available thanks to the distributed storage over NFS.
  • --network backend-server = we attach the sql2 service to the backend-server user network
  • microsoft/mssql-server-linux:2017-latest = The container base image used in this case (latest image available for SQL Server 2017 on Linux)

After deploying the sql2 service, let's have a look, from the manager, at the services installed. We get interesting output including the service name, the replication mode and the listening port as well. You may notice the replication mode is set to replicated. In this service model, the swarm distributes a specific number of replicas among the nodes. In my context I capped the maximum number of tasks to 1, as discussed previously.

$ sudo docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE                                      PORTS
ih4e2acqm2dm        registry            replicated          1/1                 registry:2                                 *:5000->5000/tcp
bqx1u9lc8dni        sql2                replicated          1/1                 microsoft/mssql-server-linux:2017-latest   *:1433->1433/tcp

Maybe you have noticed one additional service: registry. Depending on the context, when you deploy services the corresponding base images must be available on all the nodes. You may use images stored in the public docker registry, or use a private one if you deploy internal images.

Let's dig further by looking at the tasks associated with the sql2 service. We get other useful information such as the desired state, the current state and the node where the task is running.

$ sudo docker service ps sql2
ID                  NAME                IMAGE                                      NODE                          DESIRED STATE       CURRENT STATE            ERROR               PORTS
zybtgztgavsd        sql2.1              microsoft/mssql-server-linux:2017-latest   sqllinux3.dbi-services.test   Running             Running 11 minutes ago

In the previous example I deployed a service that concerned only my SQL Server instance. For some scenarios this is OK, but generally speaking a back-end service doesn't come alone in the container world and it is often part of a more global application service architecture. This is where stack deployment comes into play.

As stated in the Docker documentation, stacks are a convenient way to automatically deploy multiple services that are linked to each other, without needing to define each one separately. Stack files include environment variables, deployment tags, the number of services and dependencies, the number of tasks to deploy, related environment-specific configuration etc… If you have already dealt with docker-compose files to deploy containers and dependencies, you will be comfortable with stack files. The stack file is nothing more than a docker-compose file adjusted for stack deployments. I used one to deploy the example voting application here. This application is composed of 5 services including Python, NodeJS, a Java worker, a Redis cache and of course SQL Server.
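
As a minimal sketch (the file name and password are assumptions, and only the database service section is shown, simplified from the sql2 service created above; the other services of the voting application are declared the same way), such a stack file and its deployment could look like this:

cat > myapp-stack.yml <<'EOF'
version: "3.3"
services:
  db:
    image: microsoft/mssql-server-linux:2017-latest
    environment:
      ACCEPT_EULA: "Y"
      MSSQL_SA_PASSWORD: "P@ssw0rd1"
    volumes:
      - /u01/sql2:/var/opt/mssql
    networks:
      - backend-server
    ports:
      - "1433:1433"
    deploy:
      replicas: 1
networks:
  backend-server:
    external: true
EOF
sudo docker stack deploy -c myapp-stack.yml myapp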

Here is the result on my lab environment. My SQL Server instance is just one service among those that compose the stack related to my application. Once again you may use docker commands to get a picture of the stack hierarchy.

$ sudo docker stack ls
NAME                SERVICES
myapp               5

$ sudo docker stack services myapp
ID                  NAME                MODE                REPLICAS            IMAGE                                               PORTS
fo2g822czblu        myapp_worker        replicated          1/1                 127.0.0.1:5000/examplevotingapp_worker:latest
o4wj3gn5sqd2        myapp_result-app    replicated          1/1                 127.0.0.1:5000/examplevotingapp_result-app:latest   *:8081->80/tcp
q13e25byovdr        myapp_db            replicated          1/1                 microsoft/mssql-server-linux:2017-latest            *:1433->1433/tcp
rugcve5o6i7g        myapp_redis         replicated          1/1                 redis:alpine                                        *:30000->6379/tcp
tybmrowq258s        myapp_voting-app    replicated          1/1                 127.0.0.1:5000/examplevotingapp_voting-app:latest   *:8080->80/tcp

 

So, let's finish with the following question: what is the role of DBAs in such infrastructure as code? I don't pretend to hold the truth but here is my opinion:

From an installation and configuration perspective, database images (from official editors) are often released without any standards or best practices. I believe very strongly (and it seems I'm aligned with the dbi services philosophy on this point) that the responsibility of the DBA team here is to prepare, build and provide well-configured images, as well as the related deployment files (at least the database service section(s)), adapted to their context – simple containers or more complex environments with built-in high availability for instance.
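
As an illustration of this idea, here is a minimal sketch of building such a standardized image and sharing it through the private registry seen above (the Dockerfile content, the scripts/ directory and the image tag are assumptions for this example):

cat > Dockerfile <<'EOF'
# derive a standardized image from the official one
FROM microsoft/mssql-server-linux:2017-latest
# pre-accept the EULA and pin the edition (to adapt to your licensing)
ENV ACCEPT_EULA=Y MSSQL_PID=Developer
# embed the team's configuration and maintenance scripts (hypothetical path)
COPY scripts/ /opt/dbi/scripts/
EOF
sudo docker build -t 127.0.0.1:5000/mssql-dbi:2017-latest .
sudo docker push 127.0.0.1:5000/mssql-dbi:2017-latest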

In addition, from a management perspective, containers will not really change the game concerning the DBA's daily job. DBAs are still responsible for the core data business regardless of the underlying infrastructure the database systems are running on.

In this blog post we just scratched the surface of the Docker Swarm principles, and over time I will try to cover other important aspects DBAs may have to be aware of with such infrastructures.

See you!

Cet article Introducing SQL Server on Docker Swarm orchestration est apparu en premier sur Blog dbi services.

Multitenant, PDB, ‘save state’, services and standby databases


Creating – and using – your own services has always been the recommendation. You can connect to a database without a service name, through the instance SID, but this is not what you should do. Each database registers its db_unique_name as a service, and you can use it to connect, but it is always better to create your own application service(s). In multitenant, each PDB registers its name as a service, but the recommendation is still there: create your own services, and connect with your services.
I'll show in this blog post what happens if you use the PDB name as a service and the standby database registers to the same listener as the primary database. Of course, you can work around the non-unique service names by registering to different listeners. But this just hides the problem. The main reason to use services is to be independent from physical attributes, so being forced to assign a specific TCP/IP port is not better than using an instance SID.

I have the primary (CDB1) and standby (CDB2) databases registered to the default local listener:

LSNRCTL for Linux: Version 12.2.0.1.0 - Production on 03-FEB-2018 23:11:23
 
Copyright (c) 1991, 2016, Oracle. All rights reserved.
 
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=0.0.0.0)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 12.2.0.1.0 - Production
Start Date 02-FEB-2018 09:32:30
Uptime 1 days 13 hr. 38 min. 52 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /u01/app/oracle/product/12.2.0/dbhome_1/network/admin/listener.ora
Listener Log File /u01/app/oracle/diag/tnslsnr/VM122/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=0.0.0.0)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=VM122)(PORT=5501))(Presentation=HTTP)(Session=RAW))
Services Summary...
Service "627f7512a0452fd4e0537a38a8c055c0" has 2 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Instance "CDB2", status READY, has 1 handler(s) for this service...
Service "CDB1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "CDB1XDB" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "CDB1_CFG" has 2 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Instance "CDB2", status READY, has 1 handler(s) for this service...
Service "CDB1_DGB" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "CDB1_DGMGRL" has 1 instance(s).
Instance "CDB1", status UNKNOWN, has 1 handler(s) for this service...
Service "CDB2" has 1 instance(s).
Instance "CDB2", status READY, has 1 handler(s) for this service...
Service "CDB2XDB" has 1 instance(s).
Instance "CDB2", status READY, has 1 handler(s) for this service...
Service "CDB2_DGB" has 1 instance(s).
Instance "CDB2", status READY, has 1 handler(s) for this service...
Service "CDB2_DGMGRL" has 1 instance(s).
Instance "CDB2", status UNKNOWN, has 1 handler(s) for this service...
Service "pdb1" has 2 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Instance "CDB2", status READY, has 1 handler(s) for this service...
The command completed successfully

Look at service 'pdb1', which is the name for my PDB. Connecting to //localhost:1521/PDB1 can connect you randomly to CDB1 (the primary database) or CDB2 (the standby database).

Here is an example, connecting several times to the PDB1 service:

[oracle@VM122 ~]$ for i in {1..5} ; do sqlplus -L -s sys/oracle@//localhost/pdb1 as sysdba <<< 'select name,open_mode,instance_name from v$instance , v$database;'; done
 
NAME OPEN_MODE INSTANCE_NAME
--------- -------------------- ----------------
CDB1 READ WRITE CDB1
 
NAME OPEN_MODE INSTANCE_NAME
--------- -------------------- ----------------
CDB1 READ ONLY WITH APPLY CDB2
 
NAME OPEN_MODE INSTANCE_NAME
--------- -------------------- ----------------
CDB1 READ WRITE CDB1
 
NAME OPEN_MODE INSTANCE_NAME
--------- -------------------- ----------------
CDB1 READ ONLY WITH APPLY CDB2
 
NAME OPEN_MODE INSTANCE_NAME
--------- -------------------- ----------------
CDB1 READ WRITE CDB1

I was connected at random to CDB1 or CDB2.

As an administrator, you know the instance names and you can connect to the one you want with: //localhost:1521/PDB1/CDB1 or //localhost:1521/PDB1/CDB2:

[oracle@VM122 ~]$ for i in {1..3} ; do sqlplus -L -s sys/oracle@//localhost/pdb1/CDB1 as sysdba <<< 'select name,open_mode,instance_name from v$instance , v$database;'; done
 
NAME OPEN_MODE INSTANCE_NAME
--------- -------------------- ----------------
CDB1 READ WRITE CDB1
 
NAME OPEN_MODE INSTANCE_NAME
--------- -------------------- ----------------
CDB1 READ WRITE CDB1
 
NAME OPEN_MODE INSTANCE_NAME
--------- -------------------- ----------------
CDB1 READ WRITE CDB1
 
[oracle@VM122 ~]$ for i in {1..3} ; do sqlplus -L -s sys/oracle@//localhost/pdb1/CDB2 as sysdba <<< 'select name,open_mode,instance_name from v$instance , v$database;'; done
 
NAME OPEN_MODE INSTANCE_NAME
--------- -------------------- ----------------
CDB1 READ ONLY WITH APPLY CDB2
 
NAME OPEN_MODE INSTANCE_NAME
--------- -------------------- ----------------
CDB1 READ ONLY WITH APPLY CDB2
 
NAME OPEN_MODE INSTANCE_NAME
--------- -------------------- ----------------
CDB1 READ ONLY WITH APPLY CDB2

Of course this is not what you want. And we must not start or stop the default services. For the application, the best you can do is to create your service. And if you want to be able to connect to the Active Data Guard standby, which is opened in read-only, then you can create a ‘read-write’ service and a ‘read-only’ service that you start depending on the role.

Create and Start a read-write service on the primary

This example supposes that you have only Oracle Database software installed. If you are in RAC, with the resources managed by Grid Infrastructure, or simply with Oracle Restart, creating a service is easy with srvctl, and you add it to a PDB with ‘-pdb’ and also with a role to start it automatically in the primary or in the standby. But without it, you use dbms_service:

SQL> connect /@CDB1 as sysdba
Connected.
 
SQL> alter session set container=pdb1;
Session altered.
 
SQL> exec dbms_service.create_service(service_name=>'pdb1_RW',network_name=>'pdb1_RW');
PL/SQL procedure successfully completed.
 
SQL> exec dbms_service.start_service(service_name=>'pdb1_RW');
PL/SQL procedure successfully completed.
 
SQL> alter session set container=cdb$root;
Session altered.

The service is created, stored in SERVICE$ visible with DBA_SERVICES:

SQL> select name,name_hash,network_name,creation_date,pdb from cdb_services order by con_id,service_id;
NAME NAME_HASH NETWORK_NAME CREATION_DATE PDB
---- --------- ------------ ------------- ---
pdb1_RW 3128030313 pdb1_RW 03-FEB-18 PDB1
pdb1 1888881990 pdb1 11-JAN-18 PDB1

Save state

I have created and started the PDB1_RW service. However, if I restart the database, the service will not start automatically. How do you ensure that the PDB1 pluggable database starts automatically when you open the CDB? You ‘save state’ when it is opened. It is the same for the services you create. You need to ‘save state’ when they are opened.


SQL> alter pluggable database all save state;
Pluggable database ALL altered.

The information is stored in PDB_SVC_STATE$, and I’m not aware of a dictionary view on it:

SQL> select name,name_hash,network_name,creation_date,con_id from v$active_services order by con_id,service_id;
 
NAME NAME_HASH NETWORK_NAME CREATION_DATE CON_ID
---- --------- ------------ ------------- ------
pdb1_RW 3128030313 pdb1_RW 03-FEB-18 4
pdb1 1888881990 pdb1 11-JAN-18 4
 
SQL> select * from containers(pdb_svc_state$);
 
INST_ID INST_NAME PDB_GUID PDB_UID SVC_HASH SPARE1 SPARE2 SPARE3 SPARE4 SPARE5 SPARE6 CON_ID
------- --------- -------- ------- -------- ------ ------ ------ ------ ------ ------ ------
1 CDB1 627F7512A0452FD4E0537A38A8C055C0 2872139986 3128030313 1

The name is not in this table, you have to join with v$services using(name_hash):

SQL> select name,name_hash,network_name,creation_date,con_id from v$active_services order by con_id,service_id;
 
NAME NAME_HASH NETWORK_NAME CREATION_DATE CON_ID
---- --------- ------------ ------------- ------
SYS$BACKGROUND 165959219 26-JAN-17 1
SYS$USERS 3427055676 26-JAN-17 1
CDB1_CFG 1053205690 CDB1_CFG 24-JAN-18 1
CDB1_DGB 184049617 CDB1_DGB 24-JAN-18 1
CDB1XDB 1202503288 CDB1XDB 11-JAN-18 1
CDB1 1837598021 CDB1 11-JAN-18 1
pdb1 1888881990 pdb1 11-JAN-18 4
pdb1_RW 3128030313 pdb1_RW 03-FEB-18 4
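
Here is a minimal sketch of that join, run from CDB$ROOT, matching V$SERVICES.NAME_HASH with the SVC_HASH column shown above:

sqlplus -s / as sysdba <<'SQL'
select p.inst_name, p.con_id, s.name, s.network_name
from containers(pdb_svc_state$) p
join v$services s on s.name_hash = p.svc_hash;
SQL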

So, in addition to storing the PDB state in PDBSTATE$, visible with dba_pdb_saved_states, the service state is also stored. Note that they are at different levels. PDBSTATE$ is a data link: stored on CDB$ROOT only (because the data must be read before opening the PDB), but PDB_SVC_STATE$ is a local table in the PDB, as the services can be started only when the PDB is opened.

This new service is immediately registered on CDB1:

Service "pdb1" has 2 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Instance "CDB2", status READY, has 1 handler(s) for this service...
Service "pdb1_RW" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
The command completed successfully

Create and Start a read-only service for the standby

If you try to do the same on the standby for a PDB1_RO service, you cannot because service information has to be stored in the dictionary:

SQL> exec dbms_service.create_service(service_name=>'pdb1_RO',network_name=>'pdb1_RO');
 
Error starting at line : 56 File @ /media/sf_share/122/blogs/pdb_svc_standby.sql
In command -
BEGIN dbms_service.create_service(service_name=>'pdb1_RO',network_name=>'pdb1_RO'); END;
Error report -
ORA-16000: database or pluggable database open for read-only access

So, the read-only service has to be created on the primary:

SQL> connect /@CDB1 as sysdba
Connected.
SQL> alter session set container=pdb1;
Session altered.
 
SQL> exec dbms_service.create_service(service_name=>'pdb1_RO',network_name=>'pdb1_RO');
 
SQL> select name,name_hash,network_name,creation_date,pdb from cdb_services order by con_id,service_id;
NAME NAME_HASH NETWORK_NAME CREATION_DATE PDB
---- --------- ------------ ------------- ---
pdb1_RW 3128030313 pdb1_RW 03-FEB-18 PDB1
pdb1_RO 1562179816 pdb1_RO 03-FEB-18 PDB1
pdb1 1888881990 pdb1 11-JAN-18 PDB1

The SERVICE$ dictionary table is replicated to the standby, so I can start the service on the standby:

SQL> connect /@CDB2 as sysdba
Connected.
SQL> alter session set container=pdb1;
Session altered.
 
SQL> exec dbms_service.start_service(service_name=>'pdb1_RO');
PL/SQL procedure successfully completed.

Here is what is registered to the listener:

Service "pdb1" has 2 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Instance "CDB2", status READY, has 1 handler(s) for this service...
Service "pdb1_RO" has 1 instance(s).
Instance "CDB2", status READY, has 1 handler(s) for this service...
Service "pdb1_RW" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
The command completed successfully

Now, the PDB_RO connects to the standby and PDB_RW to the primary. Perfect.

No ‘save state’ on the standby

At this point, you would like to have the PDB_RO started when PDB1 is opened on the standby, but ‘save state’ is impossible on a read-only database:

SQL> alter session set container=cdb$root;
Session altered.
 
SQL> alter pluggable database all save state;
 
Error starting at line : 84 File @ /media/sf_share/122/blogs/pdb_svc_standby.sql
In command -
alter pluggable database all save state
Error report -
ORA-16000: database or pluggable database open for read-only access

You can’t manage the state (open the PDB, start the services) in the standby database.

The primary ‘save state’ is replicated in standby

For the moment, everything is ok with my services:

Service "pdb1_RO" has 1 instance(s).
Instance "CDB2", status READY, has 1 handler(s) for this service...
Service "pdb1_RW" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...

If I restart the primary CDB1, everything is ok again because I saved the state of the PDB and the service. But what happens when the standby CDB2 restarts?


SQL> connect /@CDB2 as sysdba
Connected.
SQL> startup force;
...
SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
4 PDB1 MOUNTED

The PDB is not opened: the ‘saved state’ for PDB is not read in the standby.
However, when I open the PDB, it seems that the ‘saved state’ for service is applied, and this one is replicated from the primary:

SQL> alter pluggable database PDB1 open;
Pluggable database altered.
SQL> host lsnrctl status
...
Service "pdb1" has 2 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Instance "CDB2", status READY, has 1 handler(s) for this service...
Service "pdb1_RW" has 2 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Instance "CDB2", status READY, has 1 handler(s) for this service...
The command completed successfully

My PDB1_RW is registered for both, connections will connect at random to the primary or the standby, and then the transactions will fail half of the time. It will be the same in case of switchover. This is not correct.

Save state instances=()

What I would like is the possibility to save state for a specific DB_UNIQUE_NAME, like with pluggable ‘spfile’ parameters. But this is not possible. What is possible is to mention an instance but you can use it only for the primary instance where you save the state (or you get ORA-65110: Invalid instance name specified) and anyway, this will not be correct after a switchover.

So what?

Be careful with services and ensure that the services used by the application are registered only for the correct instance. Be sure that this persists when the instances are restarted. For this you must link a service name to a database role. This cannot be done correctly with 'save state'. You can use startup triggers, or better, Grid Infrastructure service resources.
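
With Grid Infrastructure or Oracle Restart, a minimal sketch for this configuration could be the following (using the names of this lab; the equivalent is added on the other site with its own db_unique_name):

# read-write service, started by the agent only when the database runs as primary
srvctl add service -db CDB1 -service pdb1_RW -pdb PDB1 -role PRIMARY
# read-only service, started by the agent only when the database runs as physical standby
srvctl add service -db CDB1 -service pdb1_RO -pdb PDB1 -role PHYSICAL_STANDBY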

Do not connect to the default service with the PDB name: you cannot remove it and cannot stop it, so you may have the same name for different instances in a Data Guard configuration. You can register the standby instances to different local listeners to avoid the confusion, but you may still register to the same SCAN listener.
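
For example, here is a minimal sketch of pointing the standby to its own listener (the dedicated listener on port 1522 is an assumption here):

sqlplus -s /@CDB2 as sysdba <<'SQL'
alter system set local_listener='(ADDRESS=(PROTOCOL=tcp)(HOST=VM122)(PORT=1522))' scope=both;
alter system register;
SQL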

Create your own services, start them depending on the database role, and do not use ‘save state’ in a physical standby configuration.

 

Cet article Multitenant, PDB, ‘save state’, services and standby databases est apparu en premier sur Blog dbi services.

JAN18: Database 11gR2 PSU, 12cR1 ProactiveBP, 12cR2 RU


If you want to apply the latest patches (and you should), you can go to the My Oracle Support Recommended Patch Advisor. But sometimes it is not up-to-date. For example, for 12.1.0.2 only the PSU is displayed and not the Proactive Bundle Patch, which is highly recommended. And across releases, the names have changed and can be misleading: PSU for 11.2.0.4 (no Proactive Bundle Patch except for Engineered Systems). 12.1.0.2 can have SPU, PSU, or Proactive BP, but the last one is highly recommended, especially now that it includes the adaptive statistics patches. 12.2.0.1 introduces the new RUR and RU, the latter being the recommended one.

To get things clear, there’s also the Master Note for Database Proactive Patch Program, with reference to one note per release. This blog post is my master note to link directly to the recommended updates for Oracle Database.

Master Note for Database Proactive Patch Program (Doc ID 756671.1)
https://support.oracle.com/epmos/faces/DocContentDisplay?id=756671.1

11.2.0.4 – PSU

Database 11.2.0.4 Proactive Patch Information (Doc ID 2285559.1)
https://support.oracle.com/epmos/faces/DocContentDisplay?id=2285559.1
Paragraph -> 11.2.0.4 Database Patch Set Update

Latest as of Q1 2018 -> 16-Jan-2018 11.2.0.4.180116 (Jan 2018) Database Patch Set Update (DB PSU) 26925576 (Windows: 26925576)

12.1.0.2  – ProactiveBP

Database 12.1.0.2 Proactive Patch Information (Doc ID 2285558.1)
https://support.oracle.com/epmos/faces/DocContentDisplay?id=2285558.1
Paragraph -> 12.1.0.2 Database Proactive Bundle Patches (DBBP)

Latest as of Q1 2018 -> 16-Jan-2018 12.1.0.2.180116 Database Proactive Bundle Patch (Jan 2018) 12.1.0.2.180116 27010930

12.2.0.1 – RU

Database 12.2.0.1 Proactive Patch Information (Doc ID 2285557.1)
https://support.oracle.com/epmos/faces/DocContentDisplay?id=2285557.1
Paragraph -> 12.2.0.1 Database Release Update (Update)

Latest as of Q1 2018 -> 16-Jan-2018 12.2.0.1.180116 (Jan 2018) Database Release Update 27105253 (Windows: 12.2.0.1.180116 WIN DB BP 27162931)
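
As a reminder, here is a minimal sketch of applying such a Release Update on a single-instance 12.2 home (the home path and the downloaded zip name are examples to adapt; stop the instances and listeners of this home before opatch apply):

export ORACLE_HOME=/u01/app/oracle/product/12.2.0/dbhome_1
unzip -d /tmp p27105253_*.zip
cd /tmp/27105253
$ORACLE_HOME/OPatch/opatch apply
# once the databases are restarted, apply the SQL changes
$ORACLE_HOME/OPatch/datapatch -verbose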
 

Don’t forget SQL Developer

In the 12c Oracle Home SQL Developer is installed, but you should update it to the latest version.
Download the following from http://www.oracle.com/technetwork/developer-tools/sql-developer/downloads/index.html
– The SQL Developer zip for ‘Other Platforms’, currently 17.4
– The SQLcl zip for ‘All Platforms’, currently 17.4

On the server, remove or rename the original directory:
mv $ORACLE_HOME/sqldeveloper $ORACLE_HOME/sqldeveloper.orig

Unzip what you have downloaded:
unzip -d $ORACLE_HOME/ sqldeveloper-*-no-jre.zip
unzip -d $ORACLE_HOME/sqldeveloper sqlcl-*-no-jre.zip

I suggest having a login.sql which sets the beautiful ansiconsole format for SQLcl:

echo "set sqlformat ansiconsole" > $ORACLE_HOME/sqldeveloper/sqlcl/login.sql

On 12.2 you can run SQLcl just with 'sql' (and the same arguments as sqlplus: / as sysdba or /nolog,…) because this is what is defined in $ORACLE_HOME/bin.
However, it sets the current working directory and I prefer to keep the current one as it is probably where I want to run scripts from.

Then I add the following aliases in .bashrc

alias sqlcl='JAVA_HOME=$ORACLE_HOME/jdk SQLPATH=$ORACLE_HOME/sqldeveloper/sqlcl bash $ORACLE_HOME/sqldeveloper/sqlcl/bin/sql'
alias sqldev='$ORACLE_HOME/sqldeveloper/sqldeveloper.sh'

When running SQL Developer for the first time you can automatically create a '/ as sysdba' connection (but remember that connecting like this is not a good practice) and a connection for each user declared in the database: right-click on Connections and choose Create Local Connections.

 

Cet article JAN18: Database 11gR2 PSU, 12cR1 ProactiveBP, 12cR2 RU est apparu en premier sur Blog dbi services.

12cR2 PDB archive


In 12.1 we had the possibility to unplug a PDB by closing it and generating a .xml file that describes the PDB metadata required to plug the datafiles into another CDB.
In 12.2 we got an additional possibility to have this .xml file zipped together with the datafiles, for an easy transport. But that was not working for ASM files.
The latest Release Update, Oct 2017, includes the patch that fixes this issue and is the occasion to show the PDB archive.

Here is Oracle 12.2.0.1 with Oct 2017 (https://updates.oracle.com/download/26737266.html) applied (needs latest OPatch https://updates.oracle.com/download/6880880.html)
With a PDB1 pluggable database:

[oracle@VM106 ~]$ rman target /
 
Recovery Manager: Release 12.2.0.1.0 - Production on Wed Oct 18 16:16:41 2017
 
Copyright (c) 1982, 2017, Oracle and/or its affiliates. All rights reserved.
 
connected to target database: CDB1 (DBID=920040307)
 
RMAN> report schema;
 
using target database control file instead of recovery catalog
Report of database schema for database with db_unique_name CDB1
 
List of Permanent Datafiles
===========================
File Size(MB) Tablespace RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1 810 SYSTEM YES /acfs/oradata/CDB1/datafile/o1_mf_system_dmrbv534_.dbf
3 540 SYSAUX NO /acfs/oradata/CDB1/datafile/o1_mf_sysaux_dmrbxvds_.dbf
4 70 UNDOTBS1 YES /acfs/oradata/CDB1/datafile/o1_mf_undotbs1_dmrbz8mm_.dbf
5 250 PDB$SEED:SYSTEM NO /acfs/oradata/CDB1/datafile/o1_mf_system_dmrc52tm_.dbf
6 330 PDB$SEED:SYSAUX NO /acfs/oradata/CDB1/datafile/o1_mf_sysaux_dmrc52t9_.dbf
7 5 USERS NO /acfs/oradata/CDB1/datafile/o1_mf_users_dygrpz79_.dbf
8 100 PDB$SEED:UNDOTBS1 NO /acfs/oradata/CDB1/datafile/o1_mf_undotbs1_dmrc52x0_.dbf
21 250 PDB1:SYSTEM YES /acfs/oradata/CDB1/5BD3ED9D73B079D2E0536A4EA8C0967B/datafile/o1_mf_system_dygrqqq2_.dbf
22 350 PDB1:SYSAUX NO /acfs/oradata/CDB1/5BD3ED9D73B079D2E0536A4EA8C0967B/datafile/o1_mf_sysaux_dygrqqs8_.dbf
23 100 PDB1:UNDOTBS1 YES +ASM1/CDB1/5BD3ED9D73B079D2E0536A4EA8C0967B/DATAFILE/undotbs1.257.957719779
 
List of Temporary Files
=======================
File Size(MB) Tablespace Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1 33 TEMP 32767 /acfs/oradata/CDB1/datafile/o1_mf_temp_dmrc4wlh_.tmp
2 64 PDB$SEED:TEMP 32767 /acfs/oradata/CDB1/pdbseed/temp012017-06-10_19-17-38-745-PM.dbf
3 64 PDB1:TEMP 32767 /acfs/oradata/CDB1/5BD3ED9D73B079D2E0536A4EA8C0967B/datafile/o1_mf_temp_dygrqqsh_.dbf

I have moved one file to ASM to show that it is now handled correctly.

The pluggable database is closed, we can unplug it. Nothing changes with the unplug syntax except the extension of the file. If the file mentioned is a .pdb instead of a .xml then it is a PDB archive:

RMAN> alter pluggable database PDB1 unplug into '/var/tmp/PDB1.pdb';
 
RMAN> alter pluggable database PDB1 close;
 
Statement processed
 
RMAN> alter pluggable database PDB1 unplug into '/var/tmp/PDB1.pdb'
2> ;
 
Statement processed
 
RMAN> exit

Actually it is just a zip file with the datafiles, without the full path:

[oracle@VM106 ~]$ unzip -t /var/tmp/PDB1.pdb
Archive: /var/tmp/PDB1.pdb
testing: o1_mf_system_dygrqqq2_.dbf OK
testing: o1_mf_sysaux_dygrqqs8_.dbf OK
testing: undotbs1.257.957719779 OK
testing: /var/tmp/PDB1.xml OK
No errors detected in compressed data of /var/tmp/PDB1.pdb.

You can see that the ASM file is not different from the others.

I drop the pluggable database

RMAN> drop pluggable database PDB1 including datafiles;
 
using target database control file instead of recovery catalog
Statement processed
 

And plug back the PDB1, as PDB2, using the zip file:

RMAN> create pluggable database PDB2 using '/var/tmp/PDB1.pdb';
 
Statement processed
 
RMAN> report schema;
 
Report of database schema for database with db_unique_name CDB1
 
List of Permanent Datafiles
===========================
File Size(MB) Tablespace RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1 810 SYSTEM YES /acfs/oradata/CDB1/datafile/o1_mf_system_dmrbv534_.dbf
3 540 SYSAUX NO /acfs/oradata/CDB1/datafile/o1_mf_sysaux_dmrbxvds_.dbf
4 70 UNDOTBS1 YES /acfs/oradata/CDB1/datafile/o1_mf_undotbs1_dmrbz8mm_.dbf
5 250 PDB$SEED:SYSTEM NO /acfs/oradata/CDB1/datafile/o1_mf_system_dmrc52tm_.dbf
6 330 PDB$SEED:SYSAUX NO /acfs/oradata/CDB1/datafile/o1_mf_sysaux_dmrc52t9_.dbf
7 5 USERS NO /acfs/oradata/CDB1/datafile/o1_mf_users_dygrpz79_.dbf
8 100 PDB$SEED:UNDOTBS1 NO /acfs/oradata/CDB1/datafile/o1_mf_undotbs1_dmrc52x0_.dbf
24 250 PDB2:SYSTEM NO /acfs/oradata/CDB1/5BD3ED9D73B079D2E0536A4EA8C0967B/datafile/o1_mf_system_dygwt1lh_.dbf
25 350 PDB2:SYSAUX NO /acfs/oradata/CDB1/5BD3ED9D73B079D2E0536A4EA8C0967B/datafile/o1_mf_sysaux_dygwt1lm_.dbf
26 100 PDB2:UNDOTBS1 NO /acfs/oradata/CDB1/5BD3ED9D73B079D2E0536A4EA8C0967B/datafile/o1_mf_undotbs1_dygwt1lo_.dbf
 
List of Temporary Files
=======================
File Size(MB) Tablespace Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1 33 TEMP 32767 /acfs/oradata/CDB1/datafile/o1_mf_temp_dmrc4wlh_.tmp
2 64 PDB$SEED:TEMP 32767 /acfs/oradata/CDB1/pdbseed/temp012017-06-10_19-17-38-745-PM.dbf
4 64 PDB2:TEMP 32767 /acfs/oradata/CDB1/5BD3ED9D73B079D2E0536A4EA8C0967B/datafile/o1_mf_temp_dygwt1lp_.dbf

Here all files are there, created in the db_create_file_dest.

File name convert

When you create a pluggable database and you are not in OMF, you need to add a FILE_NAME_CONVERT to convert the source file names to destination file names. When the files are referenced by a .xml file, the .xml file references the path to the files as they were in the source database. If you move them, you can update the .xml file, or you can use SOURCE_FILE_NAME_CONVERT to mention the new place. With a .pdb archive, the .xml inside contains the original path, but this is not what will be used. The path of the .pdb itself is used, as if the files were unzipped at that place.
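
For example, here is a minimal sketch with a .xml manifest when the datafiles have been moved (the destination paths are hypothetical):

sqlplus -s / as sysdba <<'SQL'
create pluggable database PDB3 using '/var/tmp/PDB1.xml'
  source_file_name_convert=('/acfs/oradata/CDB1','/u02/oradata/CDB1')
  file_name_convert=('/u02/oradata/CDB1','/u03/oradata/PDB3');
SQL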

If you use Oracle Managed Files, you don't care about the file names, and then you don't need all those file name converts.

 

Cet article 12cR2 PDB archive est apparu en premier sur Blog dbi services.

Server process name in Postgres and Oracle


Every database analysis should start with system load analysis. If the host is in CPU starvation, then looking at other statistics can be pointless. With 'top' on Linux, or an equivalent such as Process Explorer on Windows, you see the processes (and threads). If the name of the process is meaningful, you already have a clue about the active sessions. Postgres goes further by showing the operation (which SQL command), the state (running or waiting), and the identification of the client.

Postgres

By default ‘top’ displays the program name (like ‘comm’ in /proc or in ‘ps’ format), which will be ‘postgres’ for all PostgreSQL processes. But you can also display the command line with ‘c’ in interactive mode, or directly starting with ‘top -c’, which is the same as the /proc/$pid/cmdline or ‘cmd’ or ‘args’ in ‘ps’ format.


top -c
 
Tasks: 263 total, 13 running, 250 sleeping, 0 stopped, 0 zombie
%Cpu(s): 24.4 us, 5.0 sy, 0.0 ni, 68.5 id, 0.9 wa, 0.0 hi, 1.2 si, 0.0 st
KiB Mem : 4044424 total, 558000 free, 2731380 used, 755044 buff/cache
KiB Swap: 421884 total, 418904 free, 2980 used. 2107088 avail Mem
 
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
20347 postgres 20 0 394760 11660 8696 S 7.6 0.3 0:00.49 postgres: demo demo 192.168.56.125(37664) DELETE
20365 postgres 20 0 393816 11448 8736 S 6.9 0.3 0:00.37 postgres: demo demo 192.168.56.125(37669) idle
20346 postgres 20 0 393800 11440 8736 S 6.6 0.3 0:00.37 postgres: demo demo 192.168.56.125(37663) UPDATE
20356 postgres 20 0 396056 12480 8736 S 6.6 0.3 0:00.42 postgres: demo demo 192.168.56.125(37667) INSERT
20357 postgres 20 0 393768 11396 8736 S 6.6 0.3 0:00.40 postgres: demo demo 192.168.56.125(37668) DELETE waiting
20366 postgres 20 0 394728 11652 8736 S 6.6 0.3 0:00.35 postgres: demo demo 192.168.56.125(37670) UPDATE
20387 postgres 20 0 394088 11420 8720 S 6.6 0.3 0:00.41 postgres: demo demo 192.168.56.125(37676) UPDATE
20336 postgres 20 0 395032 12436 8736 S 6.3 0.3 0:00.37 postgres: demo demo 192.168.56.125(37661) UPDATE
20320 postgres 20 0 395032 12468 8736 R 5.9 0.3 0:00.33 postgres: demo demo 192.168.56.125(37658) DROP TABLE
20348 postgres 20 0 395016 12360 8736 R 5.9 0.3 0:00.33 postgres: demo demo 192.168.56.125(37665) VACUUM
20371 postgres 20 0 396008 12708 8736 R 5.9 0.3 0:00.40 postgres: demo demo 192.168.56.125(37673) INSERT
20321 postgres 20 0 396040 12516 8736 D 5.6 0.3 0:00.31 postgres: demo demo 192.168.56.125(37659) INSERT
20333 postgres 20 0 395016 11920 8700 R 5.6 0.3 0:00.36 postgres: demo demo 192.168.56.125(37660) UPDATE
20368 postgres 20 0 393768 11396 8736 R 5.6 0.3 0:00.43 postgres: demo demo 192.168.56.125(37671) UPDATE
20372 postgres 20 0 393768 11396 8736 R 5.6 0.3 0:00.36 postgres: demo demo 192.168.56.125(37674) INSERT
20340 postgres 20 0 394728 11700 8736 S 5.3 0.3 0:00.40 postgres: demo demo 192.168.56.125(37662) idle
20355 postgres 20 0 394120 11628 8672 S 5.3 0.3 0:00.32 postgres: demo demo 192.168.56.125(37666) DELETE waiting
20389 postgres 20 0 395016 12196 8724 R 5.3 0.3 0:00.37 postgres: demo demo 192.168.56.125(37677) UPDATE
20370 postgres 20 0 393768 11392 8736 S 4.6 0.3 0:00.34 postgres: demo demo 192.168.56.125(37672) DELETE
20376 postgres 20 0 393816 11436 8736 S 4.6 0.3 0:00.37 postgres: demo demo 192.168.56.125(37675) DELETE waiting
20243 postgres 20 0 392364 5124 3696 S 1.0 0.1 0:00.06 postgres: wal writer process

This is very useful information. Postgres changes the process title when it executes a statement. In this example:

  • ‘postgres:’ is the name of the process
  • ‘demo demo’ are the database name and the user name
  • ‘192.168.56.125(37664)’ are the IP address and port of the client.
  • DELETE, UPDATE… are the commands. They are more or less the command name returned in the feedback after the command completes
  • ‘idle’ is for sessions not currently running a statement
  • ‘waiting’ is added when the session is waiting on a blocking session (enqueued on a lock, for example)
  • ‘wal writer process’ is a background process

This is especially useful because we have, in the same sample, the Postgres session state (idle, waiting, or running an operation) together with the Linux process state (S when sleeping, R when runnable or running, D when in uninterruptible sleep, usually I/O, …).
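
The same information is also visible from inside the database in pg_stat_activity. Here is a minimal query sketch, assuming PostgreSQL 9.6 or later (where the wait_event columns replaced the old waiting flag):


-- one row per server process, with the same client, state and command information
select pid, datname, usename, client_addr, client_port,
       state, wait_event_type, wait_event, query
from pg_stat_activity
order by pid;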

Oracle

With Oracle, you can use ASH to sample the session states, but being able to see them at the OS level would be great. It would also be a safeguard when we need to kill a process.
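
As a reminder, a minimal ASH sketch could look like the following (ASH requires the Diagnostics Pack option; the 5-minute window is an arbitrary example):


-- session state samples from the last 5 minutes
select sample_time, session_id, session_state, event, sql_id, program, machine
from v$active_session_history
where sample_time > sysdate - 5/24/60
order by sample_time;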

But the Oracle process names do not change while running. They are set at connection time.

The background processes mention the Oracle process name and the Instance name:

[oracle@VM122 ~]$ ps -u oracle -o pid,comm,cmd,args | head
 
PID COMMAND CMD COMMAND
1873 ora_pmon_cdb2 ora_pmon_CDB2 ora_pmon_CDB2
1875 ora_clmn_cdb2 ora_clmn_CDB2 ora_clmn_CDB2
1877 ora_psp0_cdb2 ora_psp0_CDB2 ora_psp0_CDB2
1880 ora_vktm_cdb2 ora_vktm_CDB2 ora_vktm_CDB2
1884 ora_gen0_cdb2 ora_gen0_CDB2 ora_gen0_CDB2

The foreground processes mention the instance name and the connection type: LOCAL=YES for bequeath connections, LOCAL=NO for remote connections through the listener.


[oracle@VM122 ~]$ ps -u oracle -o pid,comm,cmd,args | grep -E "[ ]oracle_|[ ]PID"
 
PID COMMAND CMD COMMAND
21429 oracle_21429_cd oracleCDB2 (LOCAL=NO) oracleCDB2 (LOCAL=NO)
21431 oracle_21431_cd oracleCDB2 (LOCAL=NO) oracleCDB2 (LOCAL=NO)
21451 oracle_21451_cd oracleCDB2 (LOCAL=NO) oracleCDB2 (LOCAL=NO)
21517 oracle_21517_cd oracleCDB1 (LOCAL=NO) oracleCDB1 (LOCAL=NO)

You need to join V$PROCESS with V$SESSION on (V$PROCESS.ADDR=V$SESSION.PADDR) to find the state, the operation, and the client information.
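
As a minimal sketch (the column selection is mine, not from the original post), the join looks like this:


-- map the OS process id (SPID) to the session state and client information
select p.spid, s.sid, s.serial#, s.username, s.status, s.program, s.machine,
       s.sql_id, s.event, s.state
from v$process p
join v$session s on s.paddr = p.addr
where s.type='USER'
order by p.spid;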

Just for fun, you can change the program name (ARGV0) and the arguments (ARGS).

Local connections can have the name changed in the BEQueath connection string:


sqlplus -s system/oracle@"(ADDRESS=(PROTOCOL=BEQ)(PROGRAM=$ORACLE_HOME/bin/oracle)(ARGV0=postgres)(ARGS='(DESCRIPTION=(LOCAL=MAYBE)(ADDRESS=(PROTOCOL=BEQ)))')(ENVS='ORACLE_HOME=$ORACLE_HOME,ORACLE_SID=CDB1'))" <<< "host ps -u oracle -o pid,comm,cmd,args | grep -E '[ ]oracle_|[ ]PID'"
 
PID COMMAND CMD COMMAND
21155 oracle_21155_cd oracleCDB2 (LOCAL=NO) oracleCDB2 (LOCAL=NO)
21176 oracle_21176_cd oracleCDB2 (LOCAL=NO) oracleCDB2 (LOCAL=NO)
21429 oracle_21429_cd oracleCDB2 (LOCAL=NO) oracleCDB2 (LOCAL=NO)
21431 oracle_21431_cd oracleCDB2 (LOCAL=NO) oracleCDB2 (LOCAL=NO)
21451 oracle_21451_cd oracleCDB2 (LOCAL=NO) oracleCDB2 (LOCAL=NO)
21517 oracle_21517_cd oracleCDB1 (LOCAL=NO) oracleCDB1 (LOCAL=NO)
22593 oracle_22593_cd postgres (DESCRIPTION=(LOCA postgres (DESCRIPTION=(LOCAL=MAYBE)(ADDRESS=(PROTOCOL=BEQ)))

Remote connections can have the name changed through static registration, by adding an ARGV0 value on the listener side:


LISTENER=(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=0.0.0.0)(PORT=1521)))
SID_LIST_LISTENER=(SID_LIST=
(SID_DESC=(GLOBAL_DBNAME=MYAPP)(ARGV0=myapp)(SID_NAME=CDB1)(ORACLE_HOME=/u01/app/oracle/product/12.2.0/dbhome_1))
(SID_DESC=(GLOBAL_DBNAME=CDB1_DGMGRL)(SID_NAME=CDB1)(ORACLE_HOME=/u01/app/oracle/product/12.2.0/dbhome_1))
(SID_DESC=(GLOBAL_DBNAME=CDB2_DGMGRL)(SID_NAME=CDB2)(ORACLE_HOME=/u01/app/oracle/product/12.2.0/dbhome_1))
)

After reloading the listener with this configuration, (ARGV0=myapp) identifies the connections coming through the MYAPP service:

[oracle@VM122 ~]$ sqlplus -s system/oracle@//localhost/MYAPP <<< "host ps -u oracle -o pid,comm,cmd,args | grep -E '[ ]oracle_|[ ]PID'"
PID COMMAND CMD COMMAND
21155 oracle_21155_cd oracleCDB2 (LOCAL=NO) oracleCDB2 (LOCAL=NO)
21176 oracle_21176_cd oracleCDB2 (LOCAL=NO) oracleCDB2 (LOCAL=NO)
21429 oracle_21429_cd oracleCDB2 (LOCAL=NO) oracleCDB2 (LOCAL=NO)
21431 oracle_21431_cd oracleCDB2 (LOCAL=NO) oracleCDB2 (LOCAL=NO)
21451 oracle_21451_cd oracleCDB2 (LOCAL=NO) oracleCDB2 (LOCAL=NO)
21517 oracle_21517_cd oracleCDB1 (LOCAL=NO) oracleCDB1 (LOCAL=NO)
24261 oracle_24261_cd myapp (LOCAL=NO) myapp (LOCAL=NO)

However, I would not recommend changing the defaults. This can be very confusing for people expecting the ora_xxxx_SID and oracleSID process names.

 

The article Server process name in Postgres and Oracle appeared first on the dbi services Blog.
