
Does pg_upgrade in check mode raise a failure when the old cluster is running?


Today I had the pleasure of having Bruce Momjian in my session about PostgreSQL Upgrade Best Practices at the IT Tage 2017 in Frankfurt. While browsing through the various options you have for upgrading, there was one slide where I claimed that the old cluster needs to be down before you run pg_upgrade in check mode, as you will hit a (non-critical) failure message otherwise. Let’s see if that really is the case or whether I did something wrong…

To start with, let’s initialize a new 9.6.2 cluster:

postgres@pgbox:/home/postgres/ [PG962] initdb --version
initdb (PostgreSQL) 9.6.2 dbi services build
postgres@pgbox:/home/postgres/ [PG962] initdb -D /tmp/aaa
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locales
  COLLATE:  en_US.UTF-8
  CTYPE:    en_US.UTF-8
  MESSAGES: en_US.UTF-8
  MONETARY: de_CH.UTF-8
  NUMERIC:  de_CH.UTF-8
  TIME:     de_CH.UTF-8
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

creating directory /tmp/aaa ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok

WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.

Success. You can now start the database server using:

    pg_ctl -D /tmp/aaa -l logfile start

Let’s start it:

postgres@pgbox:/home/postgres/ [PG962] pg_ctl -D /tmp/aaa -l logfile start
postgres@pgbox:/home/postgres/ [PG962] psql -c "select version()" postgres
                                                           version                                                           
-----------------------------------------------------------------------------------------------------------------------------
 PostgreSQL 9.6.2 dbi services build on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-11), 64-bit
(1 row)

Time: 0.861 ms

To be able to upgrade we’ll need a new cluster, so:

postgres@pgbox:/home/postgres/ [PG10] initdb --version
initdb (PostgreSQL) 10.0 dbi services build
postgres@pgbox:/home/postgres/ [PG10] initdb -D /tmp/bbb
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locales
  COLLATE:  en_US.UTF-8
  CTYPE:    en_US.UTF-8
  MESSAGES: en_US.UTF-8
  MONETARY: de_CH.UTF-8
  NUMERIC:  de_CH.UTF-8
  TIME:     de_CH.UTF-8
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

creating directory /tmp/bbb ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok

WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.

Success. You can now start the database server using:

    pg_ctl -D /tmp/bbb -l logfile start

We will not start this one, but just run pg_upgrade in check mode using the new binaries:

postgres@pgbox:/home/postgres/ [PG10] pg_upgrade --version
pg_upgrade (PostgreSQL) 10.0 dbi services build
postgres@pgbox:/home/postgres/ [PG10] export PGDATAOLD=/tmp/aaa
postgres@pgbox:/home/postgres/ [PG10] export PGDATANEW=/tmp/bbb
postgres@pgbox:/home/postgres/ [PG10] export PGBINOLD=/u01/app/postgres/product/96/db_2/bin/
postgres@pgbox:/home/postgres/ [PG10] export PGBINNEW=/u01/app/postgres/product/10/db_0/bin/
postgres@pgbox:/home/postgres/ [PG10] pg_upgrade -c

*failure*
Consult the last few lines of "pg_upgrade_server.log" for
...

… and here we go. From the log:

postgres@pgbox:/home/postgres/ [PG10] cat pg_upgrade_server.log

-----------------------------------------------------------------
  pg_upgrade run on Tue Dec 12 21:23:43 2017
-----------------------------------------------------------------

command: "/u01/app/postgres/product/96/db_2/bin/pg_ctl" -w -l "pg_upgrade_server.log" -D "/tmp/aaa" -o "-p 50432 -c autovacuum=off -c autovacuum_freeze_max_age=2000000000  -c listen_addresses='' -c unix_socket_permissions=0700" start >> "pg_upgrade_server.log" 2>&1
pg_ctl: another server might be running; trying to start server anyway
waiting for server to start....FATAL:  lock file "postmaster.pid" already exists
HINT:  Is another postmaster (PID 2194) running in data directory "/tmp/aaa"?
 stopped waiting
pg_ctl: could not start server
Examine the log output.
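
As claimed on the slide, shutting the old cluster down first avoids this message. A minimal sketch of that sequence, reusing the environment variables exported above:

# stop the old 9.6 cluster before running the check
$PGBINOLD/pg_ctl -D $PGDATAOLD stop -m fast
# the check should now complete and report that the clusters are compatible
pg_upgrade -c
# start the old cluster again if you are not upgrading right away
$PGBINOLD/pg_ctl -D $PGDATAOLD -l logfile start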

So, @Bruce: Something to improve :)
Again: It was a pleasure to have you there and I hope we’ll meet again at one of the conferences in 2018.

 



Oracle Docker image from the Docker Store


Did you notice that each time you want to download the Oracle binaries from the OTN website (for non-production usage) you have to click on “Accept License Agreement”? This is because you have to agree to the OTN license agreement, and that makes it hard to distribute an automated way to build an environment.
The only exception I have seen was the sealed envelopes provided for the RACattack:

In both cases, there must be a physical action involved to agree legally with the license terms.

Docker

There is now a new possibility where you click on ‘Agree’ only once, in the Docker Store, and can then download (aka ‘pull’) a container image containing the binary distribution. You just go to: https://store.docker.com/images/oracle-database-enterprise-edition, log in (you can create an account in two minutes with a username, e-mail address, and password) and accept the license agreement:

Once this is done, you will be able to pull the Oracle containers from the command line, after a ‘docker login’.

It may not be easy to use Docker on your laptop, especially if you are on Windows and don’t want to enable Hyper-V. Here is how I run it on a VirtualBox VM running Oracle Enterprise Linux. You may wonder what the point is of running containers within a VM, but I think you don’t have the choice here. The Docker processes run within the host, which means that you need an OS that is supported (and Oracle Enterprise Linux is the best suited to run Oracle Database). It also means that you need to adapt the kernel parameters, shm, limits, have the oracle user, etc. Better to do that in a VM dedicated to Oracle Database.
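
As a side note, a minimal sketch of how those prerequisites are usually handled on Oracle Enterprise Linux — the preinstall RPM does most of the work; the exact package name for 12cR2 is an assumption to check against your repositories:

# the preinstall package creates the oracle user and groups and sets
# kernel parameters (shm, semaphores, ...) and resource limits
yum -y install oracle-database-server-12cR2-preinstall
# quick check of what it configured
sysctl kernel.shmmax kernel.shmall
su - oracle -c "ulimit -n -u"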

Then you may wonder what the point is of running in a container, given that you have to do all those installation prerequisites anyway, and installing Oracle is just two more commands (runInstaller and dbca). Well, it seems that the main reason is that it’s cool. In my opinion, any database user (from developer to administrator) should have installed an Oracle Database at least once with the DBCA GUI, because it is a good way to understand what a database, a listener, a datafile, the character set,… are. But let’s be cool and pull it instead of installing it.

Docker evolves quickly, so I remove old releases just in case:

[root@localhost oracle]# yum -y remove docker-ce docker docker-common docker-selinux docker-engine
Loaded plugins: ulninfo
No Match for argument: docker-ce
No Match for argument: docker
No Match for argument: docker-common
No Match for argument: docker-selinux
No Match for argument: docker-engine
No Packages marked for removal

I’m on the latest OEL7:

[oracle@localhost ~]$ cat /etc/oracle-release
Oracle Linux Server release 7.4
 
[root@localhost oracle]# yum upgrade
Loaded plugins: ulninfo
No packages marked for update
 
[oracle@localhost ~]$ uname -a
Linux localhost.localdomain 4.1.12-103.10.1.el7uek.x86_64 #2 SMP Tue Dec 5 15:42:37 PST 2017 x86_64 x86_64 x86_64 GNU/Linux

I don’t want Docker to fill my / filesystem, and those images with an Oracle Database are big, so I create a link from /var/lib/docker to /u01:

[root@localhost oracle]# mkdir /u01/docker
[root@localhost oracle]# ln -s /u01/docker /var/lib/docker
[root@localhost oracle]# ls -ld /var/lib/docker
lrwxrwxrwx. 1 root root 11 Dec 10 15:48 /var/lib/docker -> /u01/docker
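
As an alternative to the symbolic link, once Docker is installed (next step) you can instead point the daemon at another directory via its configuration file — a sketch using the same /u01/docker location:

# /etc/docker/daemon.json: tell the daemon where to store images and containers
cat > /etc/docker/daemon.json <<'EOF'
{
  "data-root": "/u01/docker"
}
EOF
systemctl restart docker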

Installing Docker (Community Edition):

[root@localhost oracle]# yum -y install docker-ce
Loaded plugins: ulninfo
docker-ce-stable | 2.9 kB 00:00:00
ol7_UEKR4 | 1.2 kB 00:00:00
ol7_addons | 1.2 kB 00:00:00
ol7_latest | 1.4 kB 00:00:00
docker-ce-stable/x86_64/primary_db | 10 kB 00:00:00
(1/7): ol7_UEKR4/x86_64/updateinfo | 135 kB 00:00:00
(2/7): ol7_addons/x86_64/updateinfo | 40 kB 00:00:00
(3/7): ol7_addons/x86_64/primary | 78 kB 00:00:00
(4/7): ol7_latest/x86_64/group | 681 kB 00:00:00
(5/7): ol7_latest/x86_64/updateinfo | 1.6 MB 00:00:02
(6/7): ol7_UEKR4/x86_64/primary | 25 MB 00:00:25
(7/7): ol7_latest/x86_64/primary | 31 MB 00:00:26
ol7_UEKR4 506/506
ol7_addons 251/251
ol7_latest 23517/23517
Resolving Dependencies
--> Running transaction check
---> Package docker-ce.x86_64 0:17.09.1.ce-1.el7.centos will be installed
--> Processing Dependency: container-selinux >= 2.9 for package: docker-ce-17.09.1.ce-1.el7.centos.x86_64
--> Running transaction check
---> Package container-selinux.noarch 2:2.21-1.el7 will be installed
--> Finished Dependency Resolution
 
Dependencies Resolved
 
==============================================================================================================================================================================================
Package Arch Version Repository Size
==============================================================================================================================================================================================
Installing:
docker-ce x86_64 17.09.1.ce-1.el7.centos docker-ce-stable 21 M
Installing for dependencies:
container-selinux noarch 2:2.21-1.el7 ol7_addons 28 k
 
Transaction Summary
==============================================================================================================================================================================================
Install 1 Package (+1 Dependent package)
 
Total download size: 21 M
Installed size: 76 M
Is this ok [y/d/N]: y
Downloading packages:
(1/2): container-selinux-2.21-1.el7.noarch.rpm | 28 kB 00:00:01
(2/2): docker-ce-17.09.1.ce-1.el7.centos.x86_64.rpm | 21 MB 00:00:07
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total 2.7 MB/s | 21 MB 00:00:07
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : 2:container-selinux-2.21-1.el7.noarch 1/2
Installing : docker-ce-17.09.1.ce-1.el7.centos.x86_64 2/2
Verifying : docker-ce-17.09.1.ce-1.el7.centos.x86_64 1/2
Verifying : 2:container-selinux-2.21-1.el7.noarch 2/2
 
Installed:
docker-ce.x86_64 0:17.09.1.ce-1.el7.centos
 
Dependency Installed:
container-selinux.noarch 2:2.21-1.el7
 
Complete!

Starting Docker:

[root@localhost oracle]# systemctl start docker
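
If you want it to come back automatically after a reboot of the VM, you can also enable the service:

systemctl enable docker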

I have to log in with my credentials. This is the way to connect with the agreement I accepted online:

[root@localhost oracle]# docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username: franck.pachot
Password:
Login Succeeded

Then I pull the container image provided by Oracle. The Oracle software is quite large when including all features, so I choose the ‘slim’ one:

[root@localhost oracle]# docker pull store/oracle/database-enterprise:12.2.0.1-slim
12.2.0.1-slim: Pulling from store/oracle/database-enterprise
4ce27fe12c04: Pull complete
9d3556e8e792: Pull complete
fc60a1a28025: Pull complete
0c32e4ed872e: Pull complete
be0a1f1e8dfd: Pull complete
Digest: sha256:dbd87ae4cc3425dea7ba3d3f34e062cbd0afa89aed2c3f3d47ceb5213cc0359a
Status: Downloaded newer image for store/oracle/database-enterprise:12.2.0.1-slim
[root@localhost oracle]#

Here is the image:

[root@localhost oracle]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
store/oracle/database-enterprise 12.2.0.1-slim 27c9559d36ec 3 months ago 2.08GB

To run a database, you just have to run the container. In order to connect to it, you need to forward the 1521 port:

[root@localhost oracle]# docker run -p 0.0.0.0:9001:1521 store/oracle/database-enterprise:12.2.0.1-slim
Setup Oracle Database
Oracle Database 12.2.0.1 Setup
Sun Dec 10 19:09:14 UTC 2017
 
Check parameters ......
log file is : /home/oracle/setup/log/paramChk.log
paramChk.sh is done at 0 sec
 
untar DB bits ......
log file is : /home/oracle/setup/log/untarDB.log

Ok. This takes some time. The ORACLE_HOME is unzipped, then the database is created…

I’ll not describe further. Just go to the Setup Instructions in https://store.docker.com/images/oracle-database-enterprise-edition where everything is clearly explained.
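
Just to illustrate what the connection looks like from the VM once the database is up, here is a sketch using the forwarded port. The service name and SYS password below are the defaults documented on that setup page at the time of writing — treat both as assumptions and verify them there (and an Oracle client, e.g. Instant Client, is assumed to be installed in the VM):

# connect through the forwarded port (9001 on the VM maps to 1521 in the container);
# ORCLCDB.localdomain and Oradoc_db1 are the documented defaults -- verify on the setup page
sqlplus "sys/Oradoc_db1@//localhost:9001/ORCLCDB.localdomain as sysdba"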

In my opinion, it is good to try it and think about it. Docker was created to containerize an application with its process(es) and memory. A database is a different beast. The database is persistent, so you should store it in an external volume (see the sketch below), because you don’t want to create a new empty database each time you start it. There are also the configuration files, which should be persistent: should they belong to the container or be external? And the logs under ORACLE_BASE_DIAG? Do you want to keep them, purge them, or just let the container grow, which can happen very quickly if you fill the Recovery Area? Finally, do you want to run a container in a Virtual Machine, this container running Oracle Database 12c, which is a Container Database (CDB), itself containing containers (PDBs)? Personally, I’m very skeptical about the usage of Docker for an Oracle Database. But I also gave it an early try some years ago (read here). And you may see things differently in your context. The very good point is that Oracle now provides an easy way to test whether it helps you to run a database quickly or not, with a way to get the binaries without an ‘accept’ click for each download.
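
To make the persistence point concrete, here is a sketch of how the same image could be run with the database files kept in an external volume. The in-container path (/ORCL) is my reading of the image documentation, so it is an assumption to verify on the Setup Instructions page:

# keep the database files on the host so a new container can reuse them
mkdir -p /u01/oradata
docker run -d --name orcl \
  -p 0.0.0.0:9001:1521 \
  -v /u01/oradata:/ORCL \
  store/oracle/database-enterprise:12.2.0.1-slim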

 


Automate OVM deployment for a production ready Oracle RAC 12.2 architecture – (part 02)


In this post we are going to deploy a R.A.C system ready to run a production load with near-zero knowledge of R.A.C, Oracle cluster or Oracle database.

We are going to use the “Deploy Cluster Tool”, which is provided by Oracle to deploy the many kinds of database architectures you may need, like Oracle single instance, Oracle Restart or Oracle R.A.C. This tool lets you choose whether you want an Enterprise Edition or a Standard Edition and whether you want an Oracle Release 11g or 12c.

For this demonstration we are going to deploy a R.A.C 12cR2 in Standard Edition.

What you need at this stage

  • An OVM infrastructure as described in this post. In this infrastructure we have
    • 2 virtual machines called rac001 and rac002 with the required network cabling and disk configuration to run R.A.C
    • The 2 VMs are created from the Oracle template, which includes everything needed to deploy whichever configuration you need
  • A copy of the latest release of the “Deploy Cluster Tool” available here

The most important part here is to edit 2 configuration files to describe the configuration we want:

  • deployRacProd_SE_RAC_netconfig.ini: network parameters needed for the deployment
  • deployRacProd_SE_RAC_params.ini: parameters related to database memory, name, ASM disk groups, user UIDs and so on (an illustrative excerpt is shown right after this list)
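
To give an idea of what the params.ini contains, here is a short illustrative excerpt limited to parameters that the tool echoes back later in this post (a sketch, not the complete file):

# deployRacProd_SE_RAC_params.ini (excerpt)
BUILD_RAC_DATABASE=no
BUILD_SI_DATABASE=no
RACOWNER=oracle
CLONE_SKIP_MEMORYCHECKS=yes
RACASMGROUP_ATTRIBUTES="'compatible.asm'='12.2.0.1.0', 'compatible.rdbms'='12.2.0.1.0'"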

This is the content of the network configuration used for this infrastructure:

-bash-4.1# egrep -v "^$|^#" deployRacProd_SE_RAC_netconfig.ini
NODE1=rac001
NODE1IP=192.168.179.210
NODE1PRIV=rac001-priv
NODE1PRIVIP=192.168.3.210
NODE1VIP=rac001-vip
NODE1VIPIP=192.168.179.211
NODE2=rac002
NODE2IP=192.168.179.212
NODE2PRIV=rac002-priv
NODE2PRIVIP=192.168.3.212
NODE2VIP=rac002-vip
NODE2VIPIP=192.168.179.213
PUBADAP=eth1
PUBMASK=255.255.255.0
PUBGW=192.168.179.1
PRIVADAP=eth2
PRIVMASK=255.255.255.0
RACCLUSTERNAME=cluprod01
DOMAINNAME=
DNSIP=""
NETCONFIG_DEV=/dev/xvdc
SCANNAME=cluprod01-scan
SCANIP=192.168.179.205
FLEX_CLUSTER=yes
FLEX_ASM=yes
ASMADAP=eth3
ASMMASK=255.255.255.0
NODE1ASMIP=192.168.58.210
NODE2ASMIP=192.168.58.212

Let’s start from the OVM Manager server, go into the “Deploy Cluster Tool” directory and initiate the first stage of the deployment:

-bash-4.1# cd /root/deploycluster3
-bash-4.1# ./deploycluster.py -u admin -M rac00? -P deployRacProd_SE_RAC_params.ini -N deployRacProd_SE_RAC_netconfig.ini
Oracle DB/RAC OneCommand (v3.0.5) for Oracle VM - deploy cluster - (c) 2011-2017 Oracle Corporation
 (com: 29100:v3.0.4, lib: 231275:v3.0.5, var: 1800:v3.0.5) - v2.6.5 - ovmm (x86_64)
Invoked as root at Mon Dec 18 14:19:48 2017  (size: 43900, mtime: Tue Feb 28 01:03:00 2017)
Using: ./deploycluster.py -u admin -M rac00? -P deployRacProd_SE_RAC_params.ini -N deployRacProd_SE_RAC_netconfig.ini

INFO: Login password to Oracle VM Manager not supplied on command line or environment (DEPLOYCLUSTER_MGR_PASSWORD), prompting...
Password:

INFO: Attempting to connect to Oracle VM Manager...

Oracle VM Manager Core WS-API Shell 3.4.2.1384 (20160914_1384)

Copyright (C) 2007, 2016 Oracle. All rights reserved.
See the LICENSE file for redistribution information.


Connecting to https://localhost:7002/...

INFO: Oracle VM Client CONNECTED to Oracle VM Manager (3.4.4.1709) UUID (0004fb00000100001f20e914973507f6)

INFO: Inspecting /root/deploycluster3/deployRacProd_SE_RAC_netconfig.ini for number of nodes defined....
INFO: Detected 2 nodes in: /root/deploycluster3/deployRacProd_SE_RAC_netconfig.ini

INFO: Located a total of (2) VMs;
      2 VMs with a simple name of: ['rac001', 'rac002']

INFO: Detected a RAC deployment...

INFO: Starting all (2) VMs...

INFO: VM with a simple name of "rac001" is in a Stopped state, attempting to start it.................................OK.

INFO: VM with a simple name of "rac002" is in a Stopped state, attempting to start it.................................OK.

INFO: Verifying that all (2) VMs are in Running state and pass prerequisite checks.....

INFO: Detected that all (2) VMs specified on command line have (9) common shared disks between them (ASM_MIN_DISKS=5)

INFO: The (2) VMs passed basic sanity checks and in Running state, sending cluster details as follows:
      netconfig.ini (Network setup): /root/deploycluster3/deployRacProd_SE_RAC_netconfig.ini
      params.ini (Overall build options): /root/deploycluster3/deployRacProd_SE_RAC_params.ini
      buildcluster: yes

INFO: Starting to send configuration details to all (2) VM(s).................................................................
INFO: Sending to VM with a simple name of "rac001"...........................................................................................................................................................................................................................................................
INFO: Sending to VM with a simple name of "rac002"..............................................................................................................................................................

INFO: Configuration details sent to (2) VMs...
      Check log (default location /u01/racovm/buildcluster.log) on build VM (rac001)...

INFO: deploycluster.py completed successfully at 14:21:28 in 100.4 seconds (0h:01m:40s)
Logfile at: /root/deploycluster3/deploycluster23.log

 

At this stage we have 2 nodes with the required network configuration, such as host names and IP addresses. The deployment script has also pushed the configuration files mentioned previously into the VMs.

So we connect to the first VM, rac001:

-bash-4.1# ssh root@192.168.179.210
Warning: Permanently added '192.168.179.210' (RSA) to the list of known hosts.
root@192.168.179.210's password:
Last login: Mon Dec 11 10:31:03 2017
[root@rac001 ~]#

Then we go to the deployment directory, which is part of the template, and we can execute the deployment:

[root@rac001 racovm]# ./buildcluster.sh -s
Invoking on rac001 as root...
   Oracle DB/RAC 12c/11gR2 OneCommand (v2.1.9) for Oracle VM - (c) 2010-2017 Oracle Corporation
   Cksum: [2551004249 619800 racovm.sh] at Mon Dec 18 09:06:43 EST 2017
   Kernel: 4.1.12-103.3.8.el7uek.x86_64 (x86_64) [1 processor(s)] 2993 MB | xen
   Kit Version: 12.2.0.1.170814 (RAC Mode, 2 nodes, Enterprise Edition)
   Step(s): buildcluster

INFO (node:rac001): Skipping confirmation, flag (-s) supplied on command line
2017-12-18 09:06:43:[buildcluster:Start:rac001] Building 12cR2 RAC Cluster

INFO (node:rac001): No database created due to (BUILD_RAC_DATABASE=no) & (BUILD_SI_DATABASE=no) setting in params.ini
2017-12-18 09:06:45:[setsshroot:Start:rac001] SSH Setup for the root user...
..
INFO (node:rac001): Passwordless SSH for the root user already configured, skipping...
2017-12-18 09:06:46:[setsshroot:Done :rac001] SSH Setup for the root user completed successfully
2017-12-18 09:06:46:[setsshroot:Time :rac001] Completed successfully in 1 seconds (0h:00m:01s)
2017-12-18 09:06:46:[copykit:Start:rac001] Copy kit files to remote nodes
Kit files: buildsingle.sh buildcluster.sh netconfig.sh netconfig.ini common.sh cleanlocal.sh diskconfig.sh racovm.sh ssh params.ini doall.sh  netconfig GetSystemTimeZone.class kitversion.txt mcast

INFO (node:rac001): Copied kit to remote node rac002 as root user
2017-12-18 09:06:48:[copykit:Done :rac001] Copy kit files to (1) remote nodes
2017-12-18 09:06:48:[copykit:Time :rac001] Completed successfully in 2 seconds (0h:00m:02s)
2017-12-18 09:06:48:[usrsgrps:Start:rac001] Verifying Oracle users & groups on all nodes (create/modify mode)..
..
2017-12-18 09:06:51:[usrsgrpslocal:Start:rac001] Verifying Oracle users & groups (create/modify mode)..
2017-12-18 09:06:51:[usrsgrpslocal:Start:rac002] Verifying Oracle users & groups (create/modify mode)..

INFO (node:rac001): The (oracle) user as specified in DBOWNER/RACOWNER is defined as follows:
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba),54323(oper),54324(backupdba),54325(dgdba),54326(kmdba),54330(racdba)

2017-12-18 09:06:51:[usrsgrpslocal:Done :rac001] Verifying Oracle users & groups (create/modify mode)..
2017-12-18 09:06:51:[usrsgrpslocal:Time :rac001] Completed successfully in 1 seconds (0h:00m:01s)

INFO (node:rac002): The (oracle) user as specified in DBOWNER/RACOWNER is defined as follows:
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba),54323(oper),54324(backupdba),54325(dgdba),54326(kmdba),54330(racdba)

2017-12-18 09:06:51:[usrsgrpslocal:Done :rac002] Verifying Oracle users & groups (create/modify mode)..
2017-12-18 09:06:51:[usrsgrpslocal:Time :rac002] Completed successfully in 0 seconds (0h:00m:00s)
....
INFO (node:rac001): Passwordless SSH for the Oracle user (oracle) already configured to all nodes; not re-setting users passwords
2017-12-18 09:06:55:[usrsgrps:Done :rac001] Verifying Oracle users & groups on all nodes (create/modify mode)..
2017-12-18 09:06:55:[usrsgrps:Time :rac001] Completed successfully in 7 seconds (0h:00m:07s)

INFO (node:rac001): Parameters loaded from params.ini...
  Users & Groups:
   Role Separation: no  Running as: root
   OInstall    : oinstall       GID: 54321
   RAC Owner   : oracle         UID: 54321
    DB OSDBA   : dba            GID: 54322
    DB OSOPER  :                GID:
    DB OSBACKUP: dba            GID:
    DB OSDGDBA : dba            GID:
    DB OSKMDBA : dba            GID:
    DB OSRAC   : dba            GID:
   Grid Owner  : oracle         UID: 54321
    GI OSDBA   : dba            GID: 54322
    GI OSOPER  :                GID:
    GI OSASM   : dba            GID: 54322
  Software Locations:
   Operating Mode: RAC                   Database Edition: STD
   Flex Cluster: yes      Flex ASM: yes
   Central Inventory: /u01/app/oraInventory
   Grid Home: /u01/app/12.2.0/grid  (Detected: 12cR2, Enterprise Edition)
   Grid Name: OraGrid12c
   RAC Home : /u01/app/oracle/product/12.2.0/dbhome_1  (Detected: 12cR2, Enterprise Edition)
   RAC Name : OraRAC12c
   RAC Base : /u01/app/oracle
   DB/RAC OVM kit : /u01/racovm
   Attach RAC Home: yes   GI Home: yes  Relink Homes: no   On OS Change: yes
   Addnode Copy: no
  Database & Storage:
   Database : no         DBName: ORCL  SIDName: ORCL  DG: DGDATA   Listener Port: 1521
   Policy Managed: no
   DBExpress: no         DBExpress port: 5500
   Grid Management DB: no   GIMR diskgroup name:
   Separate GIMR diskgroup: no
   Cluster Storage: ASM
   ASM Discovery String: /dev/xvd[k-s]1
   ASM diskgroup: dgocrvoting      Redundancy: EXTERNAL   Allocation Unit (au_size): 4
      Disks     : /dev/xvdk1 /dev/xvdl1 /dev/xvdm1 /dev/xvdn1 /dev/xvdo1
   Recovery DG  : DGFRA            Redundancy: EXTERNAL
      Disks     : /dev/xvdr1 /dev/xvds1
      Attributes: 'compatible.asm'='12.1.0.0.0', 'compatible.rdbms'='12.1.0.0.0'
   Extra DG #1  : DGDATA           Redundancy: EXTERNAL
      Disks     : /dev/xvdp1 /dev/xvdq1
      Attributes: 'compatible.asm'='12.1.0.0.0', 'compatible.rdbms'='12.1.0.0.0'
   Persistent disknames: yes  Stamp: yes  Partition: yes  Align: yes  GPT: no Permissions: 660
   ACFS Filesystem: no

Network information loaded from netconfig.ini...
  Default Gateway: 192.168.179.1  Domain:
  DNS:
  Public NIC : eth1  Mask: 255.255.255.0
  Private NIC: eth2  Mask: 255.255.255.0
  ASM NIC    : eth3  Mask: 255.255.0.0
  SCAN Name: cluprod01-scan  SCAN IP: 192.168.179.205  Scan Port: 1521
  Cluster Name: cluprod01
  Nodes & IP Addresses (2 of 2 nodes)
  Node  1: PubIP : 192.168.179.210 PubName : rac001
     (Hub) VIPIP : 192.168.179.211 VIPName : rac001-vip
           PrivIP: 192.168.3.210   PrivName: rac001-priv
           ASMIP : 192.168.58.210
  Node  2: PubIP : 192.168.179.212 PubName : rac002
     (Hub) VIPIP : 192.168.179.213 VIPName : rac002-vip
           PrivIP: 192.168.3.212   PrivName: rac002-priv
           ASMIP : 192.168.58.212
Running on rac001 as root...
   Oracle DB/RAC 12c/11gR2 OneCommand (v2.1.9) for Oracle VM - (c) 2010-2017 Oracle Corporation
   Cksum: [2551004249 619800 racovm.sh] at Mon Dec 18 09:06:55 EST 2017
   Kernel: 4.1.12-103.3.8.el7uek.x86_64 (x86_64) [1 processor(s)] 2993 MB | xen
   Kit Version: 12.2.0.1.170814 (RAC Mode, 2 nodes, Enterprise Edition)
2017-12-18 09:06:56:[printparams:Time :rac001] Completed successfully in 1 seconds (0h:00m:01s)
2017-12-18 09:06:56:[setsshora:Start:rac001] SSH Setup for the Oracle user(s)...
..
INFO (node:rac001): Passwordless SSH for the oracle user already configured, skipping...
2017-12-18 09:06:57:[setsshora:Done :rac001] SSH Setup for the oracle user completed successfully
2017-12-18 09:06:57:[setsshora:Time :rac001] Completed successfully in 1 seconds (0h:00m:01s)
2017-12-18 09:06:57:[diskconfig:Start:rac001] Storage Setup
2017-12-18 09:06:58:[diskconfig:Start:rac001] Running in configuration mode (local & remote nodes)
.
2017-12-18 09:06:58:[diskconfig:Disks:rac001] Verifying disks exist, are free and with no overlapping partitions (localhost)...
/dev/xvdk./dev/xvdl./dev/xvdm./dev/xvdn./dev/xvdo./dev/xvdr./dev/xvds./dev/xvdp./dev/xvdq............................OK
2017-12-18 09:07:02:[diskconfig:Disks:rac001] Checking contents of disks (localhost)...
/dev/xvdk1/dev/xvdl1/dev/xvdm1/dev/xvdn1/dev/xvdo1/dev/xvdr1/dev/xvds1/dev/xvdp1/dev/xvdq1.
2017-12-18 09:07:02:[diskconfig:Remote:rac001] Assuming persistent disk names on remote nodes with stamping (existence check)...
/dev/xvdk./dev/xvdl./dev/xvdm./dev/xvdn./dev/xvdo......../dev/xvdr./dev/xvds...../dev/xvdp./dev/xvdq........OK
2017-12-18 09:07:23:[diskconfig:Remote:rac001] Verify disks are free on remote nodes...
rac002....................OK
2017-12-18 09:07:52:[diskconfig:Disks:rac001] Checking contents of disks (remote nodes)...
rac002.......OK
2017-12-18 09:07:54:[diskconfig:Disks:rac001] Setting disk permissions for next startup (all nodes)...
.....OK
2017-12-18 09:07:56:[diskconfig:ClearPartTables:rac001] Clearing partition tables...
./dev/xvdk./dev/xvdl./dev/xvdm./dev/xvdn./dev/xvdo./dev/xvdr./dev/xvds./dev/xvdp./dev/xvdq.....................OK
2017-12-18 09:08:03:[diskconfig:CreatePartitions:rac001] Creating 'msdos' partitions on disks (as needed)...
./dev/xvdk./dev/xvdl./dev/xvdm./dev/xvdn./dev/xvdo./dev/xvdr./dev/xvds./dev/xvdp./dev/xvdq.....................OK
2017-12-18 09:08:13:[diskconfig:CleanPartitions:rac001] Cleaning new partitions...
./dev/xvdk1./dev/xvdl1./dev/xvdm1./dev/xvdn1./dev/xvdo1./dev/xvdr1./dev/xvds1./dev/xvdp1./dev/xvdq1...OK
2017-12-18 09:08:13:[diskconfig:Done :rac001] Done configuring and checking disks on all nodes
2017-12-18 09:08:13:[diskconfig:Done :rac001] Storage Setup
2017-12-18 09:08:13:[diskconfig:Time :rac001] Completed successfully in 76 seconds (0h:01m:16s)
2017-12-18 09:08:15:[clearremotelogs:Time :rac001] Completed successfully in 2 seconds (0h:00m:02s)
2017-12-18 09:08:15:[check:Start:rac001] Pre-install checks on all nodes
..

INFO (node:rac001): Check found that all (2) nodes have the following (25586399 26609817 26609966) patches applied to the Grid Infrastructure Home (/u01/app/12.2.0/grid), the following (25811364 26609817 26609966) patches applied to the RAC Home (/u01/app/oracle/product/12.2.0/dbhome_1)
.2017-12-18 09:08:20:[checklocal:Start:rac001] Pre-install checks
2017-12-18 09:08:21:[checklocal:Start:rac002] Pre-install checks
2017-12-18 09:08:22:[usrsgrpslocal:Start:rac001] Verifying Oracle users & groups (check only mode)..

INFO (node:rac001): The (oracle) user as specified in DBOWNER/RACOWNER is defined as follows:
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba),54323(oper),54324(backupdba),54325(dgdba),54326(kmdba),54330(racdba)

2017-12-18 09:08:22:[usrsgrpslocal:Done :rac001] Verifying Oracle users & groups (check only mode)..
2017-12-18 09:08:22:[usrsgrpslocal:Start:rac002] Verifying Oracle users & groups (check only mode)..

INFO (node:rac002): The (oracle) user as specified in DBOWNER/RACOWNER is defined as follows:
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba),54323(oper),54324(backupdba),54325(dgdba),54326(kmdba),54330(racdba)

2017-12-18 09:08:22:[usrsgrpslocal:Done :rac002] Verifying Oracle users & groups (check only mode)..

INFO (node:rac001): Node forming new RAC cluster; Kernel: 4.1.12-103.3.8.el7uek.x86_64 (x86_64) [1 processor(s)] 2993 MB | xen

WARNING (node:rac001): Not performing any memory checks due to (CLONE_SKIP_MEMORYCHECKS=yes) in params.ini.

INFO (node:rac001): Running disk checks on all nodes, persistent disk names (/u01/racovm/diskconfig.sh -n 2 -D 1 -s)
2017-12-18 09:08:23:[diskconfig:Start:rac001] Running in dry-run mode (local & remote nodes, level 1), no stamping, partitioning or OS configuration files will be modified...(assuming persistent disk names)

INFO (node:rac002): Node forming new RAC cluster; Kernel: 4.1.12-103.3.8.el7uek.x86_64 (x86_64) [1 processor(s)] 2993 MB | xen

WARNING (node:rac002): Not performing any memory checks due to (CLONE_SKIP_MEMORYCHECKS=yes) in params.ini.

INFO (node:rac002): Running network checks...
......
2017-12-18 09:08:24:[diskconfig:Disks:rac001] Verifying disks exist, are free and with no overlapping partitions (localhost)...
/dev/xvdk./dev/xvdl./dev/xvdm./dev/xvdn./dev/xvdo./dev/xvdr./dev/xvds./dev/xvdp./dev/xvdq.............................OK
2017-12-18 09:08:29:[diskconfig:Disks:rac001] Checking existence of automatically renamed disks (localhost)...
/dev/xvdk1./dev/xvdl1./dev/xvdm1./dev/xvdn1./dev/xvdo1./dev/xvdr1./dev/xvds1./dev/xvdp1./dev/xvdq1.
2017-12-18 09:08:30:[diskconfig:Disks:rac001] Checking permissions of disks (localhost)...
/dev/xvdk1/dev/xvdl1/dev/xvdm1/dev/xvdn1/dev/xvdo1/dev/xvdr1/dev/xvds1/dev/xvdp1/dev/xvdq1
2017-12-18 09:08:30:[diskconfig:Disks:rac001] Checking contents of disks (localhost)...
/dev/xvdk1/dev/xvdl1/dev/xvdm1/dev/xvdn1/dev/xvdo1/dev/xvdr1/dev/xvds1/dev/xvdp1/dev/xvdq1..
2017-12-18 09:08:31:[diskconfig:Remote:rac001] Assuming persistent disk names on remote nodes with NO stamping (existence check)...
rac002........OK
2017-12-18 09:08:37:[diskconfig:Remote:rac001] Verify disks are free on remote nodes...
rac002........
INFO (node:rac001): Waiting for all checklocal operations to complete on all nodes (At 09:08:50, elapsed: 0h:00m:31s, 2) nodes remaining, all background pid(s): 13222 13365)...
...............
INFO (node:rac002): Check completed successfully
2017-12-18 09:09:07:[checklocal:Done :rac002] Pre-install checks
2017-12-18 09:09:07:[checklocal:Time :rac002] Completed successfully in 46 seconds (0h:00m:46s)
.......OK
2017-12-18 09:09:11:[diskconfig:Remote:rac001] Checking existence of automatically renamed disks (remote nodes)...
rac002...
2017-12-18 09:09:17:[diskconfig:Remote:rac001] Checking permissions of disks (remote nodes)...
rac002....
2017-12-18 09:09:21:[diskconfig:Disks:rac001] Checking contents of disks (remote nodes)...
rac002.......OK
2017-12-18 09:09:26:[diskconfig:Done :rac001] Dry-run (local & remote, level 1) completed successfully, most likely normal run will too
..
INFO (node:rac001): Running multicast check on 230.0.1.0 port 42050 for 2 nodes...

INFO (node:rac001): All nodes can multicast to all other nodes on interface eth2 multicast address 230.0.1.0 port 42050...

INFO (node:rac001): Running network checks...
....................
INFO (node:rac001): Check completed successfully
2017-12-18 09:10:11:[checklocal:Done :rac001] Pre-install checks
2017-12-18 09:10:11:[checklocal:Time :rac001] Completed successfully in 111 seconds (0h:01m:51s)

INFO (node:rac001): All checklocal operations completed on all (2) node(s) at: 09:10:12
2017-12-18 09:10:12:[check:Done :rac001] Pre-install checks on all nodes
2017-12-18 09:10:13:[check:Time :rac001] Completed successfully in 117 seconds (0h:01m:57s)
2017-12-18 09:10:13:[creategrid:Start:rac001] Creating 12cR2 Grid Infrastructure
..
2017-12-18 09:10:16:[preparelocal:Start:rac001] Preparing node for Oracle installation

INFO (node:rac001): Resetting permissions on Oracle Homes... May take a while...
2017-12-18 09:10:17:[preparelocal:Start:rac002] Preparing node for Oracle installation

INFO (node:rac002): Resetting permissions on Oracle Homes... May take a while...

INFO (node:rac001): Configured size of /dev/shm is (see output below):
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           1.5G     0  1.5G   0% /dev/shm
2017-12-18 09:10:27:[preparelocal:Done :rac001] Preparing node for Oracle installation
2017-12-18 09:10:27:[preparelocal:Time :rac001] Completed successfully in 11 seconds (0h:00m:11s)

INFO (node:rac002): Configured size of /dev/shm is (see output below):
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           1.5G     0  1.5G   0% /dev/shm
2017-12-18 09:10:31:[preparelocal:Done :rac002] Preparing node for Oracle installation
2017-12-18 09:10:31:[preparelocal:Time :rac002] Completed successfully in 14 seconds (0h:00m:14s)
2017-12-18 09:10:32:[prepare:Time :rac001] Completed successfully in 19 seconds (0h:00m:19s)
....
2017-12-18 09:10:40:[giclonelocal:Start:rac001] Attaching 12cR2 Grid Infrastructure Home

INFO (node:rac001): Running on: rac001 as root: /bin/chown -HRf oracle:oinstall /u01/app/12.2.0/grid 2>/dev/null
2017-12-18 09:10:41:[giattachlocal:Start:rac001] Attaching Grid Infratructure Home on node rac001

INFO (node:rac001): Running on: rac001 as oracle: /u01/app/12.2.0/grid/oui/bin/runInstaller -silent -ignoreSysPrereqs -waitforcompletion -attachHome INVENTORY_LOCATION='/u01/app/oraInventory' ORACLE_HOME='/u01/app/12.2.0/grid' ORACLE_HOME_NAME='OraGrid12c' ORACLE_BASE='/u01/app/oracle'   CRS=TRUE -local
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 4095 MB    Passed
2017-12-18 09:10:41:[giclonelocal:Start:rac002] Attaching 12cR2 Grid Infrastructure Home

INFO (node:rac002): Running on: rac002 as root: /bin/chown -HRf oracle:oinstall /u01/app/12.2.0/grid 2>/dev/null
2017-12-18 09:10:42:[giattachlocal:Start:rac002] Attaching Grid Infratructure Home on node rac002

INFO (node:rac002): Running on: rac002 as oracle: /u01/app/12.2.0/grid/oui/bin/runInstaller -silent -ignoreSysPrereqs -waitforcompletion -attachHome INVENTORY_LOCATION='/u01/app/oraInventory' ORACLE_HOME='/u01/app/12.2.0/grid' ORACLE_HOME_NAME='OraGrid12c' ORACLE_BASE='/u01/app/oracle'   CRS=TRUE -local
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 4095 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory pointer is located at /etc/oraInst.loc

INFO (node:rac001): Waiting for all giclonelocal operations to complete on all nodes (At 09:11:06, elapsed: 0h:00m:31s, 2) nodes remaining, all background pid(s): 18135 18141)...
Please execute the '/u01/app/oraInventory/orainstRoot.sh' script at the end of the session.
'AttachHome' was successful.

INFO (node:rac001): Running on: rac001 as root: /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
2017-12-18 09:11:08:[giattachlocal:Done :rac001] Attaching Grid Infratructure Home on node rac001
2017-12-18 09:11:08:[giattachlocal:Time :rac001] Completed successfully in 27 seconds (0h:00m:27s)
2017-12-18 09:11:09:[girootlocal:Start:rac001] Running root.sh on Grid Infrastructure home

INFO (node:rac001): Running on: rac001 as root: /u01/app/12.2.0/grid/root.sh -silent
Check /u01/app/12.2.0/grid/install/root_rac001_2017-12-18_09-11-09-287116939.log for the output of root script
2017-12-18 09:11:09:[girootlocal:Done :rac001] Running root.sh on Grid Infrastructure home
2017-12-18 09:11:09:[girootlocal:Time :rac001] Completed successfully in 0 seconds (0h:00m:00s)

INFO (node:rac001): Resetting permissions on Oracle Home (/u01/app/12.2.0/grid)...
Please execute the '/u01/app/oraInventory/orainstRoot.sh' script at the end of the session.
'AttachHome' was successful.

INFO (node:rac002): Running on: rac002 as root: /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
2017-12-18 09:11:10:[giattachlocal:Done :rac002] Attaching Grid Infratructure Home on node rac002
2017-12-18 09:11:10:[giattachlocal:Time :rac002] Completed successfully in 28 seconds (0h:00m:28s)
2017-12-18 09:11:10:[girootlocal:Start:rac002] Running root.sh on Grid Infrastructure home

INFO (node:rac002): Running on: rac002 as root: /u01/app/12.2.0/grid/root.sh -silent
Check /u01/app/12.2.0/grid/install/root_rac002_2017-12-18_09-11-10-934545273.log for the output of root script
2017-12-18 09:11:11:[girootlocal:Done :rac002] Running root.sh on Grid Infrastructure home
2017-12-18 09:11:11:[girootlocal:Time :rac002] Completed successfully in 1 seconds (0h:00m:01s)

INFO (node:rac002): Resetting permissions on Oracle Home (/u01/app/12.2.0/grid)...
2017-12-18 09:11:11:[giclonelocal:Done :rac001] Attaching 12cR2 Grid Infrastructure Home
2017-12-18 09:11:11:[giclonelocal:Time :rac001] Completed successfully in 33 seconds (0h:00m:33s)
2017-12-18 09:11:13:[giclonelocal:Done :rac002] Attaching 12cR2 Grid Infrastructure Home
2017-12-18 09:11:13:[giclonelocal:Time :rac002] Completed successfully in 34 seconds (0h:00m:34s)

INFO (node:rac001): All giclonelocal operations completed on all (2) node(s) at: 09:11:14
2017-12-18 09:11:14:[giclone:Time :rac001] Completed successfully in 42 seconds (0h:00m:42s)
....
2017-12-18 09:11:18:[girootcrslocal:Start:rac001] Running rootcrs.pl

INFO (node:rac001): rootcrs.pl log location is: /u01/app/oracle/crsdata/rac001/crsconfig/rootcrs_rac001_<timestamp>.log

INFO (node:rac001): Running on: rac001 as root: /u01/app/12.2.0/grid/perl/bin/perl -I/u01/app/12.2.0/grid/perl/lib -I/u01/app/12.2.0/grid/crs/install /u01/app/12.2.0/grid/crs/install/rootcrs.pl -auto
Using configuration parameter file: /u01/app/12.2.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/oracle/crsdata/rac001/crsconfig/rootcrs_rac001_2017-12-18_09-11-19AM.log
2017/12/18 09:11:30 CLSRSC-594: Executing installation step 1 of 19: 'SetupTFA'.
2017/12/18 09:11:30 CLSRSC-4001: Installing Oracle Trace File Analyzer (TFA) Collector.
2017/12/18 09:12:08 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
2017/12/18 09:12:08 CLSRSC-594: Executing installation step 2 of 19: 'ValidateEnv'.
2017/12/18 09:12:19 CLSRSC-594: Executing installation step 3 of 19: 'CheckFirstNode'.
2017/12/18 09:12:25 CLSRSC-594: Executing installation step 4 of 19: 'GenSiteGUIDs'.
2017/12/18 09:12:28 CLSRSC-594: Executing installation step 5 of 19: 'SaveParamFile'.
2017/12/18 09:12:40 CLSRSC-594: Executing installation step 6 of 19: 'SetupOSD'.
2017/12/18 09:13:23 CLSRSC-594: Executing installation step 7 of 19: 'CheckCRSConfig'.
2017/12/18 09:13:23 CLSRSC-594: Executing installation step 8 of 19: 'SetupLocalGPNP'.
2017/12/18 09:14:06 CLSRSC-594: Executing installation step 9 of 19: 'ConfigOLR'.
2017/12/18 09:14:19 CLSRSC-594: Executing installation step 10 of 19: 'ConfigCHMOS'.
2017/12/18 09:14:19 CLSRSC-594: Executing installation step 11 of 19: 'CreateOHASD'.
2017/12/18 09:14:27 CLSRSC-594: Executing installation step 12 of 19: 'ConfigOHASD'.
2017/12/18 09:14:43 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
2017/12/18 09:15:48 CLSRSC-594: Executing installation step 13 of 19: 'InstallAFD'.
2017/12/18 09:16:56 CLSRSC-594: Executing installation step 14 of 19: 'InstallACFS'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac001'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac001' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2017/12/18 09:18:14 CLSRSC-594: Executing installation step 15 of 19: 'InstallKA'.
2017/12/18 09:18:23 CLSRSC-594: Executing installation step 16 of 19: 'InitConfig'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac001'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac001' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-2672: Attempting to start 'ora.driver.afd' on 'rac001'
CRS-2672: Attempting to start 'ora.evmd' on 'rac001'
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac001'
CRS-2676: Start of 'ora.driver.afd' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac001'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac001' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'rac001' succeeded
CRS-2676: Start of 'ora.evmd' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac001'
CRS-2676: Start of 'ora.gpnpd' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rac001'
CRS-2676: Start of 'ora.gipcd' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac001'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac001'
CRS-2676: Start of 'ora.diskmon' on 'rac001' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac001' succeeded

Disk label(s) created successfully. Check /u01/app/oracle/cfgtoollogs/asmca/asmca-171218AM091955.log for details.
Disk groups created successfully. Check /u01/app/oracle/cfgtoollogs/asmca/asmca-171218AM091955.log for details.


2017/12/18 09:24:06 CLSRSC-482: Running command: '/u01/app/12.2.0/grid/bin/ocrconfig -upgrade oracle oinstall'
CRS-2672: Attempting to start 'ora.crf' on 'rac001'
CRS-2672: Attempting to start 'ora.storage' on 'rac001'
CRS-2676: Start of 'ora.storage' on 'rac001' succeeded
CRS-2676: Start of 'ora.crf' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac001'
CRS-2676: Start of 'ora.crsd' on 'rac001' succeeded
CRS-4256: Updating the profile
Successful addition of voting disk e0321712cd544fa6bf438b5849f11155.
Successfully replaced voting disk group with +dgocrvoting.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   e0321712cd544fa6bf438b5849f11155 (AFD:DGOCRVOTING1) [DGOCRVOTING]
Located 1 voting disk(s).
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac001'
CRS-2673: Attempting to stop 'ora.crsd' on 'rac001'
CRS-2677: Stop of 'ora.crsd' on 'rac001' succeeded
CRS-2673: Attempting to stop 'ora.storage' on 'rac001'
CRS-2673: Attempting to stop 'ora.crf' on 'rac001'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac001'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac001'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac001'
CRS-2677: Stop of 'ora.drivers.acfs' on 'rac001' succeeded
CRS-2677: Stop of 'ora.storage' on 'rac001' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac001'
CRS-2677: Stop of 'ora.crf' on 'rac001' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'rac001' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'rac001' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac001' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac001'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac001' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac001'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac001'
CRS-2677: Stop of 'ora.evmd' on 'rac001' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac001' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac001'
CRS-2677: Stop of 'ora.cssd' on 'rac001' succeeded
CRS-2673: Attempting to stop 'ora.driver.afd' on 'rac001'
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac001'
CRS-2677: Stop of 'ora.driver.afd' on 'rac001' succeeded
CRS-2677: Stop of 'ora.gipcd' on 'rac001' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac001' has completed
CRS-4133: Oracle High Availability Services has been stopped.
2017/12/18 09:26:52 CLSRSC-594: Executing installation step 17 of 19: 'StartCluster'.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac001'
CRS-2672: Attempting to start 'ora.evmd' on 'rac001'
CRS-2676: Start of 'ora.mdnsd' on 'rac001' succeeded
CRS-2676: Start of 'ora.evmd' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac001'
CRS-2676: Start of 'ora.gpnpd' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rac001'
CRS-2676: Start of 'ora.gipcd' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac001'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac001'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac001'
CRS-2676: Start of 'ora.diskmon' on 'rac001' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'rac001'
CRS-2672: Attempting to start 'ora.ctssd' on 'rac001'
CRS-2676: Start of 'ora.ctssd' on 'rac001' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac001'
CRS-2676: Start of 'ora.asm' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'rac001'
CRS-2676: Start of 'ora.storage' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'rac001'
CRS-2676: Start of 'ora.crf' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac001'
CRS-2676: Start of 'ora.crsd' on 'rac001' succeeded
CRS-6023: Starting Oracle Cluster Ready Services-managed resources
CRS-6017: Processing resource auto-start for servers: rac001
CRS-6016: Resource auto-start has completed for server rac001
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2017/12/18 09:30:15 CLSRSC-343: Successfully started Oracle Clusterware stack
2017/12/18 09:30:15 CLSRSC-594: Executing installation step 18 of 19: 'ConfigNode'.
CRS-2672: Attempting to start 'ora.ASMNET1LSNR_ASM.lsnr' on 'rac001'
CRS-2676: Start of 'ora.ASMNET1LSNR_ASM.lsnr' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac001'
CRS-2676: Start of 'ora.asm' on 'rac001' succeeded
CRS-2672: Attempting to start 'ora.DGOCRVOTING.dg' on 'rac001'
CRS-2676: Start of 'ora.DGOCRVOTING.dg' on 'rac001' succeeded
2017/12/18 09:32:38 CLSRSC-594: Executing installation step 19 of 19: 'PostConfig'.
2017/12/18 09:34:22 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
2017-12-18 09:34:26:[girootcrslocal:Done :rac001] Running rootcrs.pl
2017-12-18 09:34:27:[girootcrslocal:Time :rac001] Completed successfully in 1388 seconds (0h:23m:08s)
2017-12-18 09:34:49:[girootcrslocal:Start:rac002] Running rootcrs.pl

INFO (node:rac002): rootcrs.pl log location is: /u01/app/oracle/crsdata/rac002/crsconfig/rootcrs_rac002_<timestamp>.log

INFO (node:rac002): Running on: rac002 as root: /u01/app/12.2.0/grid/perl/bin/perl -I/u01/app/12.2.0/grid/perl/lib -I/u01/app/12.2.0/grid/crs/install /u01/app/12.2.0/grid/crs/install/rootcrs.pl -auto
Using configuration parameter file: /u01/app/12.2.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/oracle/crsdata/rac002/crsconfig/rootcrs_rac002_2017-12-18_09-34-50AM.log
2017/12/18 09:35:03 CLSRSC-594: Executing installation step 1 of 19: 'SetupTFA'.
2017/12/18 09:35:04 CLSRSC-4001: Installing Oracle Trace File Analyzer (TFA) Collector.

INFO (node:rac001): Waiting for all girootcrslocal operations to complete on all nodes (At 09:35:18, elapsed: 0h:00m:31s, 1) node remaining, all background pid(s): 7263)...
2017/12/18 09:35:44 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
2017/12/18 09:35:44 CLSRSC-594: Executing installation step 2 of 19: 'ValidateEnv'.
.2017/12/18 09:35:51 CLSRSC-594: Executing installation step 3 of 19: 'CheckFirstNode'.
2017/12/18 09:35:56 CLSRSC-594: Executing installation step 4 of 19: 'GenSiteGUIDs'.
2017/12/18 09:35:56 CLSRSC-594: Executing installation step 5 of 19: 'SaveParamFile'.
2017/12/18 09:36:02 CLSRSC-594: Executing installation step 6 of 19: 'SetupOSD'.
..2017/12/18 09:36:59 CLSRSC-594: Executing installation step 7 of 19: 'CheckCRSConfig'.
2017/12/18 09:37:01 CLSRSC-594: Executing installation step 8 of 19: 'SetupLocalGPNP'.
2017/12/18 09:37:08 CLSRSC-594: Executing installation step 9 of 19: 'ConfigOLR'.
2017/12/18 09:37:16 CLSRSC-594: Executing installation step 10 of 19: 'ConfigCHMOS'.
2017/12/18 09:37:17 CLSRSC-594: Executing installation step 11 of 19: 'CreateOHASD'.
.2017/12/18 09:37:23 CLSRSC-594: Executing installation step 12 of 19: 'ConfigOHASD'.
2017/12/18 09:37:40 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
..
INFO (node:rac001): Waiting for all girootcrslocal operations to complete on all nodes (At 09:38:21, elapsed: 0h:03m:34s, 1) node remaining, all background pid(s): 7263)...
.2017/12/18 09:39:00 CLSRSC-594: Executing installation step 13 of 19: 'InstallAFD'.
...2017/12/18 09:40:24 CLSRSC-594: Executing installation step 14 of 19: 'InstallACFS'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac002'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac002' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
..
INFO (node:rac001): Waiting for all girootcrslocal operations to complete on all nodes (At 09:41:25, elapsed: 0h:06m:38s, 1) node remaining, all background pid(s): 7263)...
2017/12/18 09:41:40 CLSRSC-594: Executing installation step 15 of 19: 'InstallKA'.
2017/12/18 09:41:42 CLSRSC-594: Executing installation step 16 of 19: 'InitConfig'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac002'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac002' has completed
CRS-4133: Oracle High Availability Services has been stopped.
.CRS-4123: Oracle High Availability Services has been started.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac002'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac002'
CRS-2677: Stop of 'ora.drivers.acfs' on 'rac002' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac002' has completed
CRS-4133: Oracle High Availability Services has been stopped.
2017/12/18 09:42:04 CLSRSC-594: Executing installation step 17 of 19: 'StartCluster'.
.....
INFO (node:rac001): Waiting for all girootcrslocal operations to complete on all nodes (At 09:44:27, elapsed: 0h:09m:40s, 1) node remaining, all background pid(s): 7263)...
.CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac002'
CRS-2672: Attempting to start 'ora.evmd' on 'rac002'
CRS-2676: Start of 'ora.mdnsd' on 'rac002' succeeded
CRS-2676: Start of 'ora.evmd' on 'rac002' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac002'
CRS-2676: Start of 'ora.gpnpd' on 'rac002' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rac002'
CRS-2676: Start of 'ora.gipcd' on 'rac002' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac002'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac002' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac002'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac002'
CRS-2676: Start of 'ora.diskmon' on 'rac002' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac002' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'rac002'
CRS-2672: Attempting to start 'ora.ctssd' on 'rac002'
CRS-2676: Start of 'ora.ctssd' on 'rac002' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'rac002'
CRS-2676: Start of 'ora.crf' on 'rac002' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac002'
CRS-2676: Start of 'ora.crsd' on 'rac002' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'rac002' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac002'
CRS-2676: Start of 'ora.asm' on 'rac002' succeeded
CRS-6017: Processing resource auto-start for servers: rac002
CRS-2672: Attempting to start 'ora.net1.network' on 'rac002'
CRS-2672: Attempting to start 'ora.ASMNET1LSNR_ASM.lsnr' on 'rac002'
CRS-2676: Start of 'ora.net1.network' on 'rac002' succeeded
CRS-2672: Attempting to start 'ora.ons' on 'rac002'
CRS-2676: Start of 'ora.ons' on 'rac002' succeeded
CRS-2676: Start of 'ora.ASMNET1LSNR_ASM.lsnr' on 'rac002' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac002'
CRS-2676: Start of 'ora.asm' on 'rac002' succeeded
CRS-6016: Resource auto-start has completed for server rac002
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2017/12/18 09:45:22 CLSRSC-343: Successfully started Oracle Clusterware stack
2017/12/18 09:45:22 CLSRSC-594: Executing installation step 18 of 19: 'ConfigNode'.
.2017/12/18 09:45:49 CLSRSC-594: Executing installation step 19 of 19: 'PostConfig'.
..2017/12/18 09:46:38 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
2017-12-18 09:46:40:[girootcrslocal:Done :rac002] Running rootcrs.pl
2017-12-18 09:46:40:[girootcrslocal:Time :rac002] Completed successfully in 711 seconds (0h:11m:51s)

INFO (node:rac001): All girootcrslocal operations completed on all (2) node(s) at: 09:46:42
2017-12-18 09:46:42:[girootcrs:Time :rac001] Completed successfully in 2128 seconds (0h:35m:28s)
2017-12-18 09:46:42:[giassist:Start:rac001] Running RAC Home assistants (netca, asmca)

INFO (node:rac001): Creating the node Listener using NETCA... (09:46:44)

INFO (node:rac001): Running on: rac001 as oracle: export ORACLE_BASE=/u01/app/oracle; export ORACLE_HOME=/u01/app/12.2.0/grid; /u01/app/12.2.0/grid/bin/netca /orahome /u01/app/12.2.0/grid /instype typical /inscomp client,oraclenet,javavm,server,ano /insprtcl tcp /cfg local /authadp NO_VALUE /responseFile /u01/app/12.2.0/grid/network/install/netca_typ.rsp /silent /orahnam OraGrid12c

Parsing command line arguments:
    Parameter "orahome" = /u01/app/12.2.0/grid
    Parameter "instype" = typical
    Parameter "inscomp" = client,oraclenet,javavm,server,ano
    Parameter "insprtcl" = tcp
    Parameter "cfg" = local
    Parameter "authadp" = NO_VALUE
    Parameter "responsefile" = /u01/app/12.2.0/grid/network/install/netca_typ.rsp
    Parameter "silent" = true
    Parameter "orahnam" = OraGrid12c
Done parsing command line arguments.
Oracle Net Services Configuration:
Profile configuration complete.
Profile configuration complete.
Listener "LISTENER" already exists.
Oracle Net Services configuration successful. The exit code is 0

INFO (node:rac001): Running on: rac001 as oracle: export ORACLE_BASE=/u01/app/oracle; export ORACLE_HOME=/u01/app/12.2.0/grid; /u01/app/12.2.0/grid/bin/asmca -silent -postConfigureASM

Post configuration completed successfully


INFO (node:rac001): Setting initial diskgroup name dgocrvoting's attributes as defined in RACASMGROUP_ATTRIBUTES ('compatible.asm'='12.2.0.1.0', 'compatible.rdbms'='12.2.0.1.0')...

INFO (node:rac001): Running SQL on: rac001 as oracle user using SID: +ASM1 at: 09:48:38: alter diskgroup dgocrvoting set attribute 'compatible.asm'='12.2.0.1.0';

Diskgroup altered.

INFO (node:rac001): Running SQL on: rac001 as oracle user using SID: +ASM1 at: 09:48:40: alter diskgroup dgocrvoting set attribute  'compatible.rdbms'='12.2.0.1.0';

Diskgroup altered.
2017-12-18 09:48:44:[creatediskgroups:Start:rac001] Creating additional diskgroups

INFO (node:rac001): Creating Recovery diskgroup (DGFRA) at: 09:50:31...

INFO (node:rac001): Running SQL on: rac001 as oracle user using SID: +ASM1: create diskgroup "DGFRA" EXTERNAL redundancy disk 'AFD:RECO1','AFD:RECO2' attribute 'compatible.asm'='12.1.0.0.0', 'compatible.rdbms'='12.1.0.0.0';

Diskgroup created.

Elapsed: 00:00:09.34

INFO (node:rac001): Creating Extra diskgroup (DGDATA) at: 09:52:22...

INFO (node:rac001): Running SQL on: rac001 as oracle user using SID: +ASM1: create diskgroup "DGDATA" EXTERNAL redundancy disk 'AFD:DGDATA1','AFD:DGDATA2' attribute 'compatible.asm'='12.1.0.0.0', 'compatible.rdbms'='12.1.0.0.0';

Diskgroup created.

Elapsed: 00:00:10.48

INFO (node:rac001): Successfully created the following ASM diskgroups (DGFRA DGDATA), setting them for automount on startup and attempting to mount on all nodes...

INFO (node:rac001): Running SQL on: rac001 as oracle user using SID: +ASM1 at: 09:52:35: alter system set asm_diskgroups='DGDATA', 'DGFRA';

System altered.

INFO (node:rac001): Successfully set the ASM diskgroups (DGDATA DGFRA) to automount on startup

INFO (node:rac001): Attempting to mount diskgroups on nodes running ASM: rac001 rac002

INFO (node:rac001): Running SQL on: rac002 as oracle user using SID: +ASM2 at: 09:52:38: alter diskgroup "DGFRA" mount;

Diskgroup altered.

INFO (node:rac001): Running SQL on: rac002 as oracle user using SID: +ASM2 at: 09:52:39: alter diskgroup "DGDATA" mount;

Diskgroup altered.

INFO (node:rac001): Successfully mounted the created (DGFRA DGDATA) ASM diskgroups on all nodes running an ASM instance (rac001 rac002)
2017-12-18 09:52:41:[creatediskgroups:Done :rac001] Creating additional diskgroups
2017-12-18 09:52:41:[creatediskgroups:Time :rac001] Completed successfully in 237 seconds (0h:03m:57s)

WARNING (node:rac001): Management Database not created due to CLONE_GRID_MANAGEMENT_DB=no. Note that starting with release 12.1.0.2 and higher, the Management Database (GIMR) is required for a fully supported environment
2017-12-18 09:52:41:[giassist:Done :rac001] Running RAC Home assistants (netca, asmca)
2017-12-18 09:52:41:[giassist:Time :rac001] Completed successfully in 359 seconds (0h:05m:59s)
2017-12-18 09:52:41:[creategrid:Done :rac001] Creating 12cR2 Grid Infrastructure
2017-12-18 09:52:41:[creategrid:Time :rac001] Completed successfully in 2548 seconds (0h:42m:28s)

INFO (node:rac001): Skipping CVU post crsinst checks, due to CLONE_SKIP_CVU_POSTCRS=yes
2017-12-18 09:52:41:[cvupostcrs:Time :rac001] Completed successfully in 0 seconds (0h:00m:00s)
2017-12-18 09:52:41:[racclone:Start:rac001] Cloning 12cR2 RAC Home on all nodes
..

INFO (node:rac001): Changing Database Edition to: 'Standard Edition'; The Oracle binary (/u01/app/oracle/product/12.2.0/dbhome_1/bin/oracle) is linked as (Enterprise Edition Release), however Database Edition set to (Standard Edition) in params.ini
2017-12-18 09:53:04:[racclonelocal:Start:rac001] Cloning 12cR2 RAC Home

INFO (node:rac001): Running on: rac001 as root: /bin/chown -HRf oracle:oinstall /u01/app/oracle/product/12.2.0/dbhome_1 2>/dev/null

INFO (node:rac001): Running on: rac001 as oracle: /u01/app/oracle/product/12.2.0/dbhome_1/perl/bin/perl /u01/app/oracle/product/12.2.0/dbhome_1/clone/bin/clone.pl -silent ORACLE_BASE='/u01/app/oracle' ORACLE_HOME='/u01/app/oracle/product/12.2.0/dbhome_1' ORACLE_HOME_NAME='OraRAC12c' INVENTORY_LOCATION='/u01/app/oraInventory' OSDBA_GROUP=dba OSOPER_GROUP= OSKMDBA_GROUP=dba OSDGDBA_GROUP=dba OSBACKUPDBA_GROUP=dba OSRACDBA_GROUP=dba oracle_install_db_InstallEdition=STD 'CLUSTER_NODES={rac001,rac002}' "LOCAL_NODE=rac001"  '-ignoreSysPrereqs'
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 500 MB.   Actual 6740 MB    Passed
Checking swap space: must be greater than 500 MB.   Actual 4059 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2017-12-18_09-53-12AM. Please wait ...
INFO (node:rac001): Waiting for all racclonelocal operations to complete on all nodes (At 09:53:14, elapsed: 0h:00m:31s, 2) nodes remaining, all background pid(s): 12271 12277)...

INFO (node:rac002): Changing Database Edition to: 'Standard Edition'; The Oracle binary (/u01/app/oracle/product/12.2.0/dbhome_1/bin/oracle) is linked as (Enterprise Edition Release), however Database Edition set to (Standard Edition) in params.ini
2017-12-18 09:53:15:[racclonelocal:Start:rac002] Cloning 12cR2 RAC Home

INFO (node:rac002): Running on: rac002 as root: /bin/chown -HRf oracle:oinstall /u01/app/oracle/product/12.2.0/dbhome_1 2>/dev/null

INFO (node:rac002): Running on: rac002 as oracle: /u01/app/oracle/product/12.2.0/dbhome_1/perl/bin/perl /u01/app/oracle/product/12.2.0/dbhome_1/clone/bin/clone.pl -silent ORACLE_BASE='/u01/app/oracle' ORACLE_HOME='/u01/app/oracle/product/12.2.0/dbhome_1' ORACLE_HOME_NAME='OraRAC12c' INVENTORY_LOCATION='/u01/app/oraInventory' OSDBA_GROUP=dba OSOPER_GROUP= OSKMDBA_GROUP=dba OSDGDBA_GROUP=dba OSBACKUPDBA_GROUP=dba OSRACDBA_GROUP=dba oracle_install_db_InstallEdition=STD 'CLUSTER_NODES={rac001,rac002}' "LOCAL_NODE=rac002"  '-ignoreSysPrereqs'
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 500 MB.   Actual 6757 MB    Passed
Checking swap space: must be greater than 500 MB.   Actual 4082 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2017-12-18_09-53-42AM. Please wait ....You can find the log of this install session at:
 /u01/app/oraInventory/logs/cloneActions2017-12-18_09-53-12AM.log
..................................................   5% Done.
..................................................   10% Done.
..................................................   15% Done.
..................................................   20% Done.
..................................................   25% Done.
..................................................   30% Done.
..................................................   35% Done.
..................................................   40% Done.
..................................................   45% Done.
..................................................   50% Done.
..................................................   55% Done.
..................................................   60% Done.
..................................................   65% Done.
..................................................   70% Done.
..................................................   75% Done.
..................................................   80% Done.
..................................................   85% Done.
..........
Copy files in progress.
.
Copy files successful.

Link binaries in progress.
You can find the log of this install session at:
 /u01/app/oraInventory/logs/cloneActions2017-12-18_09-53-42AM.log
...................................................   5% Done.
..................................................   10% Done.
..................................................   15% Done.
..................................................   20% Done.
..................................................   25% Done.
..................................................   30% Done.
..................................................   35% Done.
..................................................   40% Done.
..................................................   45% Done.
..................................................   50% Done.
..................................................   55% Done.
..................................................   60% Done.
..................................................   65% Done.
..................................................   70% Done.
..................................................   75% Done.
..................................................   80% Done.
..................................................   85% Done.
..........
Copy files in progress.

Copy files successful.

Link binaries in progress.
...
INFO (node:rac001): Waiting for all racclonelocal operations to complete on all nodes (At 09:56:19, elapsed: 0h:03m:35s, 2) nodes remaining, all background pid(s): 12271 12277)...
..
Link binaries successful.

Setup files in progress.
.
Setup files successful.

Setup Inventory in progress.
.
Setup Inventory successful.

Finish Setup successful.
The cloning of OraRAC12c was successful.
Please check '/u01/app/oraInventory/logs/cloneActions2017-12-18_09-53-12AM.log' for more details.

Setup Oracle Base in progress.

Setup Oracle Base successful.
..................................................   95% Done.

As a root user, execute the following script(s):
        1. /u01/app/oracle/product/12.2.0/dbhome_1/root.sh

Execute /u01/app/oracle/product/12.2.0/dbhome_1/root.sh on the following nodes:
[rac001]


..................................................   100% Done.

INFO (node:rac001): Relinking the oracle binary to disable Database Enterprise Edition options (09:58:45)...
..
INFO (node:rac001): Waiting for all racclonelocal operations to complete on all nodes (At 09:59:25, elapsed: 0h:06m:42s, 2) nodes remaining, all background pid(s): 12271 12277)...
..2017-12-18 10:00:28:[racrootlocal:Start:rac001] Running root.sh on RAC Home
Check /u01/app/oracle/product/12.2.0/dbhome_1/install/root_rac001_2017-12-18_10-00-28-680953042.log for the output of root script
2017-12-18 10:00:29:[racrootlocal:Done :rac001] Running root.sh on RAC Home
2017-12-18 10:00:29:[racrootlocal:Time :rac001] Completed successfully in 1 seconds (0h:00m:01s)

INFO (node:rac001): Resetting permissions on Oracle Home (/u01/app/oracle/product/12.2.0/dbhome_1)...
2017-12-18 10:00:29:[racclonelocal:Done :rac001] Cloning 12cR2 RAC Home
2017-12-18 10:00:29:[racclonelocal:Time :rac001] Completed successfully in 464 seconds (0h:07m:44s)

Link binaries successful.

Setup files in progress.

Setup files successful.

Setup Inventory in progress.
.
Setup Inventory successful.

Finish Setup successful.
The cloning of OraRAC12c was successful.
Please check '/u01/app/oraInventory/logs/cloneActions2017-12-18_09-53-42AM.log' for more details.

Setup Oracle Base in progress.

Setup Oracle Base successful.
..................................................   95% Done.

As a root user, execute the following script(s):
        1. /u01/app/oracle/product/12.2.0/dbhome_1/root.sh

Execute /u01/app/oracle/product/12.2.0/dbhome_1/root.sh on the following nodes:
[rac002]


..................................................   100% Done.

INFO (node:rac002): Relinking the oracle binary to disable Database Enterprise Edition options (10:01:17)...
..2017-12-18 10:02:21:[racrootlocal:Start:rac002] Running root.sh on RAC Home
Check /u01/app/oracle/product/12.2.0/dbhome_1/install/root_rac002_2017-12-18_10-02-21-660060386.log for the output of root script
2017-12-18 10:02:22:[racrootlocal:Done :rac002] Running root.sh on RAC Home
2017-12-18 10:02:22:[racrootlocal:Time :rac002] Completed successfully in 1 seconds (0h:00m:01s)

INFO (node:rac002): Resetting permissions on Oracle Home (/u01/app/oracle/product/12.2.0/dbhome_1)...
2017-12-18 10:02:22:[racclonelocal:Done :rac002] Cloning 12cR2 RAC Home
2017-12-18 10:02:22:[racclonelocal:Time :rac002] Completed successfully in 576 seconds (0h:09m:36s)

INFO (node:rac001): All racclonelocal operations completed on all (2) node(s) at: 10:02:24
2017-12-18 10:02:24:[racclone:Done :rac001] Cloning 12cR2 RAC Home on all nodes
2017-12-18 10:02:24:[racclone:Time :rac001] Completed successfully in 583 seconds (0h:09m:43s)

INFO (node:rac002): Disabling passwordless ssh access for root user (from remote nodes)
2017-12-18 10:02:28:[rmsshrootlocal:Time :rac002] Completed successfully in 0 seconds (0h:00m:00s)

INFO (node:rac001): Disabling passwordless ssh access for root user (from remote nodes)
2017-12-18 10:02:31:[rmsshrootlocal:Time :rac001] Completed successfully in 0 seconds (0h:00m:00s)
2017-12-18 10:02:31:[rmsshroot:Time :rac001] Completed successfully in 7 seconds (0h:00m:07s)

INFO (node:rac001): Current cluster state (10:02:31)...

INFO (node:rac001): Running on: rac001 as root: /u01/app/12.2.0/grid/bin/olsnodes -n -s -t
rac001  1       Active  Hub     Unpinned
rac002  2       Active  Hub     Unpinned
Oracle Clusterware active version on the cluster is [12.2.0.1.0]
Oracle Clusterware version on node [rac001] is [12.2.0.1.0]
CRS Administrator List: oracle root
Cluster is running in "flex" mode
CRS-41008: Cluster class is 'Standalone Cluster'
ASM Flex mode enabled: ASM instance count: 3
ASM is running on rac001,rac002

INFO (node:rac001): Running on: rac001 as root: /u01/app/12.2.0/grid/bin/crsctl status resource -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.DGDATA.dg
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.DGFRA.dg
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.DGOCRVOTING.dg
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.net1.network
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.ons
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.proxy_advm
               OFFLINE OFFLINE      rac001                   STABLE
               OFFLINE OFFLINE      rac002                   STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac001                   STABLE
ora.asm
      1        ONLINE  ONLINE       rac001                   Started,STABLE
      2        ONLINE  ONLINE       rac002                   Started,STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       rac001                   STABLE
ora.qosmserver
      1        OFFLINE OFFLINE                               STABLE
ora.rac001.vip
      1        ONLINE  ONLINE       rac001                   STABLE
ora.rac002.vip
      1        ONLINE  ONLINE       rac002                   STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       rac001                   STABLE
--------------------------------------------------------------------------------

INFO (node:rac001): For an explanation on resources in OFFLINE state, see Note:1068835.1
2017-12-18 10:02:40:[clusterstate:Time :rac001] Completed successfully in 9 seconds (0h:00m:09s)
2017-12-18 10:02:40:[buildcluster:Done :rac001] Building 12cR2 RAC Cluster
2017-12-18 10:02:40:[buildcluster:Time :rac001] Completed successfully in 3357 seconds (0h:55m:57s)

INFO (node:rac001): This entire build was logged in logfile: /u01/racovm/buildcluster3.log

 

We are now going to multiplex both networks, the cluster heartbeat and ASM, because the current configuration is not HA. We also need only 2 Flex ASM instances for my 2-node RAC:

[root@rac001 ~]# /u01/app/12.2.0/grid/bin/srvctl modify asm -count 2

[root@rac002 ~]# /u01/app/12.2.0/grid/bin/srvctl config listener -asmlistener
Name: ASMNET1LSNR_ASM
Type: ASM Listener
Owner: oracle
Subnet: 192.168.3.0
Home: <CRS home>
End points: TCP:1525
Listener is disabled.
Listener is individually enabled on nodes:
Listener is individually disabled on nodes:

/u01/app/12.2.0/grid/bin/srvctl add listener -asmlistener -l ASMNET2LSNR_ASM -subnet 192.168.58.0
/u01/app/12.2.0/grid/bin/srvctl start listener -l ASMNET2LSNR_ASM


[root@rac002 ~]# /u01/app/12.2.0/grid/bin/srvctl config listener -asmlistener
Name: ASMNET1LSNR_ASM
Type: ASM Listener
Owner: oracle
Subnet: 192.168.3.0
Home: <CRS home>
End points: TCP:1525
Listener is enabled.
Listener is individually enabled on nodes:
Listener is individually disabled on nodes:
Name: ASMNET2LSNR_ASM
Type: ASM Listener
Owner: oracle
Subnet: 192.168.58.0
Home: <CRS home>
End points: TCP:1526
Listener is enabled.
Listener is individually enabled on nodes:
Listener is individually disabled on nodes:



[root@rac001 racovm]# /u01/app/12.2.0/grid/bin/oifcfg getif
eth1  192.168.179.0  global  public
eth2  192.168.3.0    global  cluster_interconnect
eth3  192.168.58.0   global  asm


[root@rac001 racovm]# /u01/app/12.2.0/grid/bin/oifcfg setif -global eth2/192.168.3.0:cluster_interconnect,asm
[root@rac001 racovm]# /u01/app/12.2.0/grid/bin/oifcfg setif -global eth3/192.168.58.0:cluster_interconnect,asm


[root@rac001 racovm]# /u01/app/12.2.0/grid/bin/oifcfg getif
eth1  192.168.179.0  global  public
eth2  192.168.3.0    global  cluster_interconnect,asm
eth3  192.168.58.0   global  cluster_interconnect,asm
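
To verify which interfaces the running instances actually register for the private interconnect after this change (a quick check that is not part of the original deployment output; note that a restart of the stack may be needed before the new configuration is fully picked up), a query like the following can be run as sysdba:

SQL> select inst_id, name, ip_address, is_public, source from gv$cluster_interconnects order by inst_id, name;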

 

Let's check the overall cluster state:

[root@rac002 ~]# /u01/app/12.2.0/grid/bin/crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.ASMNET2LSNR_ASM.lsnr
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.DGDATA.dg
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.DGFRA.dg
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.DGOCRVOTING.dg
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.net1.network
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.ons
               ONLINE  ONLINE       rac001                   STABLE
               ONLINE  ONLINE       rac002                   STABLE
ora.proxy_advm
               OFFLINE OFFLINE      rac001                   STABLE
               OFFLINE OFFLINE      rac002                   STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac002                   STABLE
ora.asm
      1        ONLINE  ONLINE       rac001                   Started,STABLE
      2        ONLINE  ONLINE       rac002                   Started,STABLE
ora.cvu
      1        ONLINE  ONLINE       rac002                   STABLE
ora.qosmserver
      1        OFFLINE OFFLINE                               STABLE
ora.rac001.vip
      1        ONLINE  ONLINE       rac001                   STABLE
ora.rac002.vip
      1        ONLINE  ONLINE       rac002                   STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       rac002                   STABLE
--------------------------------------------------------------------------------

The cluster is now up, functional and resilient to network failure. We could have chosen to create a database as part of the deployment process because the “Deploy Cluster Tool” permits us to do that. Nevertheless, in this demonstration, I chose to execute the database creation manually on top of this deployment.
So we add a new database to the cluster:

 

[oracle@rac001 ~]$ dbca -silent -ignorePreReqs \
> -createDatabase \
> -gdbName app01 \
> -nodelist rac001,rac002 \
> -templateName General_Purpose.dbc \
> -characterSet AL32UTF8 \
> -createAsContainerDatabase false \
> -databaseConfigType RAC \
> -databaseType MULTIPURPOSE \
> -dvConfiguration false \
> -emConfiguration NONE \
> -enableArchive true \
> -memoryMgmtType AUTO_SGA \
> -memoryPercentage 75 \
> -nationalCharacterSet AL16UTF16 \
> -adminManaged \
> -storageType ASM \
> -diskGroupName DGDATA \
> -recoveryGroupName DGFRA \
> -sysPassword 0rAcle-Sys \
> -systemPassword 0rAcle-System \
> -useOMF true

Copying database files
1% complete
2% complete
15% complete
27% complete
Creating and starting Oracle instance
29% complete
32% complete
36% complete
40% complete
41% complete
43% complete
45% complete
Creating cluster database views
47% complete
63% complete
Completing Database Creation
64% complete
65% complete
68% complete
71% complete
72% complete
Executing Post Configuration Actions
100% complete
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/app01/app01.log" for further details.

[oracle@rac001 ~]$ export ORACLE_SID=app011
[oracle@rac001 ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.2.0.1.0 Production on Wed Dec 20 08:54:03 2017

Copyright (c) 1982, 2016, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Standard Edition Release 12.2.0.1.0 - 64bit Production

SQL> select host_name from gv$instance ;

HOST_NAME
----------------------------------------------------------------
rac001
rac002

 

We now have a RAC 12.2 Standard Edition ready to run our critical applications, with the latest patch level for OS and Oracle, including all best practices and requirements.
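
As an additional sanity check (a small sketch, not part of the deployment log; adapt the database name if yours differs), the clusterware registration of the new database can be verified with srvctl:

[oracle@rac001 ~]$ srvctl config database -d app01
[oracle@rac001 ~]$ srvctl status database -d app01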

So with this post we have a demonstration of how a good underlying OVM infrastructure makes your life simpler when deploying various kinds of Oracle database infrastructure. This automation can also easily be done for other technologies like PostgreSQL databases, “Big Data” technologies like Hortonworks, or any application.

I hope it may help and please do not hesitate to contact us if you have any questions or require further information.

 

Cet article Automate OVM deployment for a production ready Oracle RAC 12.2 architecture – (part 02) est apparu en premier sur Blog dbi services.

Oracle 12.2 Dataguard : PDB Flashback on the Primary

$
0
0

The other day I was discussing with a colleague about database flashback for a pluggable database in a Data Guard environment. I did some tests and I present the results in this blog.
Below is our broker configuration. Oracle 12.2 is used.

DGMGRL> show configuration;
Configuration - MYCONT_DR
Protection Mode: MaxPerformance
Members:
MYCONT_SITE - Primary database
MYCONT_SITE1 - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS (status updated 35 seconds ago)
DGMGRL>

The primary database has the flashback database set to YES.

SQL> select db_unique_name,open_mode,flashback_on from v$database;
DB_UNIQUE_NAME OPEN_MODE FLASHBACK_ON
------------------------------ -------------------- ------------------
MYCONT_SITE READ WRITE YES
.
SQL> select name,open_mode from v$pdbs;
NAME OPEN_MODE
-------------------- ----------
PDB$SEED READ ONLY
PDB1 READ WRITE
PDB2 READ WRITE

Same for the standby database

SQL> select db_unique_name,open_mode,flashback_on from v$database;
DB_UNIQUE_NAME OPEN_MODE FLASHBACK_ON
------------------------------ -------------------- ------------------
MYCONT_SITE1 READ ONLY WITH APPLY YES
.
SQL> select name,open_mode from v$pdbs;
NAME OPEN_MODE
-------------------- ----------
PDB$SEED READ ONLY
PDB1 READ ONLY
PDB2 READ ONLY

For the tests we are going to do a flashback of the primary PDB1.
Let's connect to PDB1:

10:15:59 SQL> alter session set container=pdb1;
Session altered.
.
10:16:15 SQL> show con_name;
CON_NAME
------------------------------
PDB1
10:16:22 SQL>

And let's create a table named article with some data for reference:

10:16:22 SQL> create table article (idart number);
Table created.
.
10:18:12 SQL> insert into article values (1);
1 row created.
10:18:31 SQL> insert into article values (2);
1 row created.
.
10:18:34 SQL> select * from article;
IDART
----------
1
2
.
10:18:46 SQL> commit;

Now let's flash back the primary PDB1 to a point in time before the creation of the table article.

10:28:12 SQL> show con_name
CON_NAME
------------------------------
PDB1
.
10:28:16 SQL> shut immediate;
Pluggable Database closed.
.
10:28:28 SQL> select name,open_mode from v$pdbs;
NAME OPEN_MODE
-------------------- ----------
PDB1 MOUNTED
10:28:54 SQL>
.
10:28:54 SQL> FLASHBACK PLUGGABLE DATABASE PDB1 TO TIMESTAMP TO_TIMESTAMP('2017-12-20 10:16:00', 'YYYY-MM-DD HH24:MI:SS');
Flashback complete.
10:30:14 SQL>

Now let's open PDB1 with the resetlogs option:

10:31:08 SQL> alter pluggable database PDB1 open resetlogs;
Pluggable database altered.
10:32:15 SQL>

And let’s query the table article. As expected the table is no longer present

10:32:15 SQL> select name,open_mode from v$pdbs;
NAME OPEN_MODE
-------------------- ----------
PDB1 READ WRITE
.
10:32:59 SQL> select * from article;
select * from article
*
ERROR at line 1:
ORA-00942: table or view does not exist
10:33:06 SQL>

Now if we check the status of our Data Guard configuration in the broker, we have errors:

12/20/2017 10:23:07 DGMGRL> show configuration;
Configuration - MYCONT_DR
Protection Mode: MaxPerformance
Members:
MYCONT_SITE - Primary database
MYCONT_SITE1 - Physical standby database
Error: ORA-16810: multiple errors or warnings detected for the member
Fast-Start Failover: DISABLED
Configuration Status:
ERROR (status updated 48 seconds ago)
12/20/2017 10:34:40 DGMGRL>

The status of the Primary database is fine

12/20/2017 10:34:40 DGMGRL> show database 'MYCONT_SITE';
Database - MYCONT_SITE
Role: PRIMARY
Intended State: TRANSPORT-ON
Instance(s):
MYCONT
Database Status:
SUCCESS

But the standby status returns some errors:
12/20/2017 10:35:11 DGMGRL> show database 'MYCONT_SITE1';
Database - MYCONT_SITE1
Role: PHYSICAL STANDBY
Intended State: APPLY-ON
Transport Lag: 0 seconds (computed 1 second ago)
Apply Lag: 3 minutes 10 seconds (computed 1 second ago)
Average Apply Rate: 7.00 KByte/s
Real Time Query: OFF
Instance(s):
MYCONT
Database Error(s):
ORA-16766: Redo Apply is stopped
Database Warning(s):
ORA-16853: apply lag has exceeded specified threshold
Database Status:
ERROR
12/20/2017 10:35:15 DGMGRL>

And if we check the alert log of the standby database we can find the following errors:

(3):Recovery of pluggable database PDB1 aborted due to pluggable database open resetlog marker.
(3):To continue recovery, restore all data files for this PDB to checkpoint SCN lower than 2518041, or timestamp before 12/20/2017 10:16:01, and restart recovery
MRP0: Background Media Recovery terminated with error 39874
2017-12-20T10:32:05.565085+01:00
Errors in file /u01/app/oracle/diag/rdbms/mycont_site1/MYCONT/trace/MYCONT_mrp0_1590.trc:
ORA-39874: Pluggable Database PDB1 recovery halted
ORA-39873: Restore all data files to a checkpoint SCN lower than 2518041.
Managed Standby Recovery not using Real Time Apply
Recovery interrupted!
Recovered data files to a consistent state at change 2520607
2017-12-20T10:32:05.612394+01:00
Errors in file /u01/app/oracle/diag/rdbms/mycont_site1/MYCONT/trace/MYCONT_mrp0_1590.trc:
ORA-39874: Pluggable Database PDB1 recovery halted
ORA-39873: Restore all data files to a checkpoint SCN lower than 2518041.
2017-12-20T10:32:05.612511+01:00
MRP0: Background Media Recovery process shutdown (MYCONT)

On the primary PDB, we can query the current INCARNATION_SCN in the v$pdb_incarnation view. And we can see that the current incarnation SCN is the same as the one specified in the standby alert log: 2518041.

11:08:11 SQL> show con_name
CON_NAME
------------------------------
PDB1
11:08:56 SQL> select status,INCARNATION_SCN from v$pdb_incarnation;
STATUS INCARNATION_SCN
------- ---------------
CURRENT 2518041
PARENT 2201909
PARENT 1396169
11:08:59 SQL>

And then, as specified in the alert log, we have to flash back the standby PDB to an SCN lower than 2518041.
First let's stop the redo apply on the standby:

12/20/2017 11:13:14 DGMGRL> edit database 'MYCONT_SITE1' set state='APPLY-OFF';
Succeeded.
12/20/2017 11:13:59 DGMGRL>

And then let's flash back to 2518039 (i.e. 2518041 - 2) for example.
Let's shut down the standby container MYCONT and start it up in mount state:

11:18:42 SQL> shut immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
.
11:19:10 SQL> startup mount
ORACLE instance started.
Total System Global Area 956301312 bytes
Fixed Size 8799656 bytes
Variable Size 348129880 bytes
Database Buffers 595591168 bytes
Redo Buffers 3780608 bytes
Database mounted.
11:19:50 SQL>
.
11:19:50 SQL> select name,open_mode from v$pdbs;
NAME OPEN_MODE
-------------------- ----------
PDB$SEED MOUNTED
PDB1 MOUNTED
PDB2 MOUNTED

Now let’s flashback PDB1 on the standby

11:20:19 SQL> flashback pluggable database PDB1 to SCN 2518039;
Flashback complete.
11:20:40 SQL>

The last step is to re-enable the redo apply for the standby container:

12/20/2017 11:13:59 DGMGRL> edit database 'MYCONT_SITE1' set state='APPLY-ON';
Succeeded.
12/20/2017 11:23:08 DGMGRL>

And then we can verify that the configuration is now fine

12/20/2017 11:25:05 DGMGRL> show configuration;
Configuration - MYCONT_DR
Protection Mode: MaxPerformance
Members:
MYCONT_SITE - Primary database
MYCONT_SITE1 - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS (status updated 1 second ago)
12/20/2017 11:25:07 DGMGRL>

Conclusion
In this article we saw that flashback in a Data Guard environment works the same way for a container database as for a non-container database. The only difference is the SCN we must consider to flash back the standby after a pluggable database flashback: this SCN should be queried from v$pdb_incarnation and not from v$database as we usually do for a non-container database.
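
As a minimal recap sketch (using the objects from this demo; the &scn substitution variable is just a placeholder for the value you determine):

-- On the primary, from within the flashed-back PDB:
SQL> alter session set container=PDB1;
SQL> select status, incarnation_scn from v$pdb_incarnation;
-- On the standby (redo apply stopped, CDB in mount state), flash back to an SCN
-- lower than the CURRENT incarnation SCN returned above, then re-enable apply:
SQL> flashback pluggable database PDB1 to SCN &scn_lower_than_current_incarnation;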

 

Cet article Oracle 12.2 Dataguard : PDB Flashback on the Primary est apparu en premier sur Blog dbi services.

Online datafile move in a 12c dataguard environment

$
0
0

Oracle 12c introduces online datafile move. One question we might ask is: what about moving a datafile online in a Data Guard environment? In this blog we will do some tests.
Below is our configuration; we are using Oracle 12.2.

DGMGRL> show configuration;
Configuration - MYCONT_DR
Protection Mode: MaxPerformance
Members:
MYCONT_SITE - Primary database
MYCONT_SITE1 - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS (status updated 8 seconds ago)
DGMGRL>

The StandbyFileManagement property is set to auto for both primary and standby database.

DGMGRL> show database 'MYCONT_SITE' StandbyFileManagement;
StandbyFileManagement = 'auto'
DGMGRL> show database 'MYCONT_SITE1' StandbyFileManagement;
StandbyFileManagement = 'auto'
DGMGRL>

Below are the datafiles of the pluggable database PDB1 on both the primary and the standby:

SQL> show con_name;
CON_NAME
------------------------------
PDB1
SQL> select name from v$datafile;
NAME
-------------------------------------------------------
/u01/app/oracle/oradata/MYCONT/PDB1/system01.dbf
/u01/app/oracle/oradata/MYCONT/PDB1/sysaux01.dbf
/u01/app/oracle/oradata/MYCONT/PDB1/undotbs01.dbf
/u01/app/oracle/oradata/MYCONT/PDB1/users01.dbf

Now let’s move for example /u01/app/oracle/oradata/MYCONT/PDB1/users01.dbf to a new location on the primary PDB1

SQL> alter database move datafile '/u01/app/oracle/oradata/MYCONT/PDB1/users01.dbf' to '/u01/app/oracle/oradata/MYCONT/PDB1/newloc/users01.dbf';
Database altered.
SQL>

We can verify the new location on the primary

SQL> select name from v$datafile;
NAME
-------------------------------------------------------
/u01/app/oracle/oradata/MYCONT/PDB1/system01.dbf
/u01/app/oracle/oradata/MYCONT/PDB1/sysaux01.dbf
/u01/app/oracle/oradata/MYCONT/PDB1/undotbs01.dbf
/u01/app/oracle/oradata/MYCONT/PDB1/newloc/users01.dbf
SQL>

As StandbyFileManagement is set to auto for both databases, we might think that the datafile is also moved on the standby, so let's check:

SQL> select name from v$datafile;
NAME
-------------------------------------------------------
/u01/app/oracle/oradata/MYCONT/PDB1/system01.dbf
/u01/app/oracle/oradata/MYCONT/PDB1/sysaux01.dbf
/u01/app/oracle/oradata/MYCONT/PDB1/undotbs01.dbf
/u01/app/oracle/oradata/MYCONT/PDB1/users01.dbf

The answer is no.
OK, but is my Data Guard configuration still working? Let's query the broker:

DGMGRL> show configuration;
Configuration - MYCONT_DR
Protection Mode: MaxPerformance
Members:
MYCONT_SITE - Primary database
MYCONT_SITE1 - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS (status updated 0 seconds ago)

Yes, the configuration is fine.
OK, now can I move my datafile online to the new location on the standby server? Let's try:

SQL> alter database move datafile '/u01/app/oracle/oradata/MYCONT/PDB1/users01.dbf' to '/u01/app/oracle/oradata/MYCONT/PDB1/newloc/users01.dbf';
Database altered.
SQL>

And we can verify that the datafile was moved:

SQL> select name from v$datafile;
NAME
-------------------------------------------------------
/u01/app/oracle/oradata/MYCONT/PDB1/system01.dbf
/u01/app/oracle/oradata/MYCONT/PDB1/sysaux01.dbf
/u01/app/oracle/oradata/MYCONT/PDB1/undotbs01.dbf
/u01/app/oracle/oradata/MYCONT/PDB1/newloc/users01.dbf
SQL>

And we can also verify that my Data Guard configuration is still fine:

DGMGRL> show configuration;
Configuration - MYCONT_DR
Protection Mode: MaxPerformance
Members:
MYCONT_SITE - Primary database
MYCONT_SITE1 - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS (status updated 46 seconds ago)
DGMGRL>

Conclusion
We can see that:
1- The StandbyFileManagement property does not concern online datafile move
2- Moving a datafile online on the primary does not move the datafile on the standby
3- An online datafile move can be done on the standby database

 

Cet article Online datafile move in a 12c dataguard environment est apparu en premier sur Blog dbi services.

Oracle Licensing (R)evolution

$
0
0

In 2015 I wrote a blog named “All you need to know about Oracle Database licensing with VMware”. This blog generated lots of comments and hopefully helped some DBAs and IT Managers understand the potential issues they can face in case of an Oracle LMS audit.

In the last 2 years I gained some new experience related to Oracle LMS audits and I'd like to share it with you to provide up-to-date information. What is written in this blog is nothing more than a summary of facts encountered with Swiss Oracle customers. The situation described in this blog is related to one of our customers, SITel (Service de l'Informatique et des Télécommunication) of Fribourg in Switzerland, and has also been encountered by some of our other customers. I cannot guarantee that Oracle applies the same strategy worldwide. Please note that this article has been written with the approval of all customers named in this blog.

What didn’t change?

Oracle on VMware not certified: Oracle still does not certify any of its products on VMware virtualized environments, as explained in the My Oracle Support Note ID 249212.1. In summary it's supported but not certified, meaning that Oracle does not guarantee the proper working of these products on a VMware architecture, but will support you in case of incidents not related to the VMware infrastructure. In cases where VMware could be involved, Oracle support may ask you to reproduce the incident on a certified environment. VMware on its side has its own Oracle support policy available here.

Oracle audit reputation: As mentioned on clearlicensing.org, Oracle didn't improve its audit reputation. Indeed, according to this website, Oracle was voted worst vendor regarding the audit process (cf. graph below).

clearlicensing.org survey: Who is the least helpful vendor in terms of audits? (aggressive behavior, focused on short-term revenue)

Soft and hard partitioning policy: VMware is still a soft partitioning technology according to Oracle. The distinction between hard and soft partitioning is explained in this Oracle pdf file. In addition, VMware still explains to its customers that “DRS Host Affinity rules can be used to appropriately restrict the movement of virtual machines within the cluster. DRS Host Affinity is a vSphere feature that enables you to ensure that your Oracle applications are restricted to move only between a subset of the hosts—that is, not all hardware in the cluster is “available” to the Oracle software” (cf. UNDERSTANDING ORACLE CERTIFICATION, SUPPORT AND LICENSING FOR VMWARE ENVIRONMENTS). That is certainly true, but Oracle does not recognize this feature as making VMware a hard partitioning solution, and it won't be taken into consideration during an LMS audit.

What changed?

Oracle Edition: As you know, Standard Edition One and Standard Edition are stopped, and if you want to upgrade to 12.1.0.2 (12c patchset 1) then you have to go to Standard Edition Two. You can find a good blog from Franck Pachot related to the installation of Standard Edition Two here, as well as the tests related to the 16-thread limitation here. Many questions related to Oracle Standard Edition Two and the differences between Standard Edition One and Standard Edition Two are answered in this Oracle pdf.

Oracle LMS Auditors: In Switzerland we now have a company (registered on 15.02.2017), named Seven Eighths Schweiz GmbH, doing Oracle audits. The JPE program is definitively stopped.

Oracle commercial approach for VMware customers: That is perhaps the most important thing that inspired me for the title of this blog. Up to the middle of last year, Oracle customers using VMware to host their Oracle products had to be licensed for all processors where Oracle products are installed and/or running, but Oracle basically took into consideration all processors where Oracle products could be running. This has been well illustrated by the following famous image created by House of Brick.

House Of Brick Oracle Parking

Meaning that since VMware 5.1, all the physical hosts managed by a vCenter Server instance have to be licensed, whatever the Oracle footprint on the virtual servers. This decision has probably been taken by Oracle due to the new features coming with VMware vSphere 5.1, particularly vSphere vMotion (zero-downtime migration) without the need for shared storage configurations. Of course, with VMware 6.x, according to this same principle, all the vSphere environments of the company have to be licensed. Since last year some customers came to me requesting my advice on a proposal that had been made by Oracle. Basically Oracle proposed to validate their Oracle/VMware infrastructure and license only the processors that are really used to run Oracle products, under these two conditions:

  1. Totally isolating Oracle products in their VMware infrastructure
  2. Oracle requires a “certain volume of business” (can be in terms of Oracle Cloud Credit) to validate the customer infrastructure schema

It is important to take into consideration that Oracle validated these infrastructures only for some specific versions of VMware, usually the current customer’s VMware version and the next one.

What does “totally isolating Oracle products” mean?

Customers sent me the infrastructure examples that were provided to them by Oracle. You can find them below. (Note that the customer gave me approval to publish these pictures.)

Oracle VMware VLAN and storage configuration

 

vCenter, VLAN and storage configuration

What you can see on these infrastructure schemas is:

  1. A dedicated vCenter Server Instance with dedicated physical hosts
  2. A dedicated VLAN
  3. A Storage isolation through LUN Masking, Zoning or any approved restriction

What does a “certain volume of business” mean?

In the cases we encountered, Oracle asked for a “certain volume of business” to validate the Oracle infrastructure on VMware. This volume of business has in some cases been realized by acquiring Oracle Cloud Credits. I cannot provide an exact budget for this compensation since it seems to vary according to the customer infrastructure. In the cases we encountered with our customers, we spoke of amounts above 40,000 US$ (CHF 40'000).

Conclusion

It seems that Oracle is no longer so restrictive regarding the licensing of Oracle on VMware infrastructure. VMware is still not certified with any Oracle products, but at least Oracle customers can speak with Oracle salespeople in order to avoid having to license all the processors of their vSphere environments. According to our experience, this discussion can lead, in some circumstances, to the acquisition of some other Oracle products or Cloud Credits to get the infrastructure validated by Oracle for specific VMware versions.

The other alternative to VMware is Oracle VM Server. Although the support is chargeable, Oracle VM itself is free. Unfortunately, even if Oracle made some great improvements over the last years, Oracle VM does not have the same functionalities as VMware, but it can fit perfectly depending on your needs. You can find the documentation to configure it as hard partitioning, with CPU pinning, here, since Oracle VM Server has to be configured specifically in order to be considered a hard partitioning technology.

I do hope that you find this blog helpful and would like to thank SITel for its precious collaboration in reviewing this blog.

 

Cet article Oracle Licensing (R)evolution est apparu en premier sur Blog dbi services.

12cR2 Subquery Elimination

$
0
0

More and more we see crazy queries generated by ORM frameworks or BI query generators. They are built to be easily generated rather than to be optimal. Then, the optimizer has to implement more and more transformations to get an efficient execution plan. Here is a new one that appeared in Oracle 12cR2: Subquery Elimination when the subquery does not filter any rows.

A semi-join is a join where we do not need to match all rows, but only one. We write it with an EXISTS subquery or with an =ANY or =SOME one, which are equivalent.
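
For example, the following two writings of the same semi-join are equivalent (a simple illustration on the SCOTT.EMP table used below, not an excerpt from the test case):

-- semi-join written with an EXISTS subquery
select * from EMP e where exists (select 1 from EMP s where s.ename = e.ename);
-- same semi-join written with =ANY (IN is rewritten to =ANY internally, as visible in the unparsed query later in this post)
select * from EMP where ename = any (select ename from EMP);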

12.1

Here is the behaviour in 12.1.0.2 when the subquery does not filter any row because it reads the same table as the outer one, without any predicate:
SQL> select * from dbms_xplan.display_cursor(null,null,'allstats last +outline');
PLAN_TABLE_OUTPUT
-----------------
SQL_ID az1jcra46h5ua, child number 1
-------------------------------------
select * from EMP where ename in (select ename from EMP)
 
Plan hash value: 977554918
 
----------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 14 |00:00:00.01 | 13 | | | |
|* 1 | HASH JOIN SEMI | | 1 | 14 | 14 |00:00:00.01 | 13 | 1098K| 1098K| 889K (0)|
| 2 | TABLE ACCESS FULL| EMP | 1 | 14 | 14 |00:00:00.01 | 7 | | | |
| 3 | TABLE ACCESS FULL| EMP | 1 | 14 | 14 |00:00:00.01 | 6 | | | |
----------------------------------------------------------------------------------------------------------------

We read the same table twice, join all rows and finally return exactly the same rows as those coming from the first full scan. This is not efficient.

12.2

Here is the same query in 12.2 where we can see that the table is scanned only once because the optimizer knows that the subquery does not filter anything:

SQL> select * from dbms_xplan.display_cursor(null,null,'allstats last +outline');
PLAN_TABLE_OUTPUT
SQL_ID az1jcra46h5ua, child number 0
-------------------------------------
select * from EMP where ename in (select ename from EMP)
 
Plan hash value: 3956160932
 
------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 14 |00:00:00.01 | 9 |
|* 1 | TABLE ACCESS FULL| EMP | 1 | 14 | 14 |00:00:00.01 | 9 |
------------------------------------------------------------------------------------

This is mentioned in the outline hints with ELIMINATE_SQ:

Outline Data
-------------
 
/*+
BEGIN_OUTLINE_DATA
IGNORE_OPTIM_EMBEDDED_HINTS
OPTIMIZER_FEATURES_ENABLE('12.2.0.1')
DB_VERSION('12.2.0.1')
ALL_ROWS
OUTLINE_LEAF(@"SEL$D0DB0F9A")
ELIMINATE_SQ(@"SEL$2")
OUTLINE(@"SEL$1")
OUTLINE(@"SEL$2")
FULL(@"SEL$D0DB0F9A" "EMP"@"SEL$1")
END_OUTLINE_DATA
*/

This feature can be disabled with the following parameter:

SQL> alter session set "_optimizer_eliminate_subquery"=false;
Session altered.

Or with the following hint:

SQL> select * from EMP where ename in (select /*+ NO_ELIMINATE_SQ */ ename from EMP);

Finally, here is what you can see in the CBO (10053) trace:

SQE: Trying SQ elimination.
SQE: Trying SQ elimination.
SQE: Query valid for SQ elimination:******* UNPARSED QUERY IS *******
SELECT "EMP"."EMPNO" "EMPNO","EMP"."ENAME" "ENAME","EMP"."JOB" "JOB","EMP"."MGR" "MGR","EMP"."HIREDATE" "HIREDATE","EMP"."SAL" "SAL","EMP"."COMM" "COMM","EMP"."DEPTNO" "DEPTNO" FROM "SCOTT"."EMP" "EMP" WHERE "EMP"."ENAME"=ANY (SELECT "EMP"."ENAME" "ENAME" FROM "SCOTT"."EMP" "EMP")
Registered qb: SEL$D0DB0F9A 0xe7035778 (SUBQUERY ELIMINATE SEL$1; SEL$2)
---------------------
QUERY BLOCK SIGNATURE
---------------------
signature (): qb_name=SEL$D0DB0F9A nbfros=1 flg=0
fro(0): flg=0 objn=73253 hint_alias="EMP"@"SEL$1"
 
SQE: Query after SQ elimination:******* UNPARSED QUERY IS *******
SELECT "EMP"."EMPNO" "EMPNO","EMP"."ENAME" "ENAME","EMP"."JOB" "JOB","EMP"."MGR" "MGR","EMP"."HIREDATE" "HIREDATE","EMP"."SAL" "SAL","EMP"."COMM" "COMM","EMP"."DEPTNO" "DEPTNO" FROM "SCOTT"."EMP" "EMP" WHERE 0=0 AND "EMP"."ENAME" IS NOT NULL

This example is simple and it is obvious that the SQL should be re-written. But with large generated queries, on complex views, this is the kind of thing that can be seen in the resulting query and this transformation will help to avoid unnecessary work.

 

Cet article 12cR2 Subquery Elimination est apparu en premier sur Blog dbi services.

DOAG 2017 – Middleware: Automation and Docker container streams

$
0
0


At the end of last month I participated as a presenter at DOAG 2017. It was interesting to see the growing interest in automatic deployment tools.
I presented the way Fusion Middleware cloning can be used with such an automation tool to lower the time needed to install an environment.
Fusion Middleware cloning is based on a master installation where all patches, configuration and system components are installed and configured.
This gives a real time reduction when several identical installations are required (TEST, CA, PROD), and the usage of such an automation tool guarantees that all environments are totally identical.
This automation was done using Ansible.

Oracle Fusion Middleware cloning is no longer supported in 12.2.1.3, so we decided to move to full installation, patching, WebLogic domain creation and configuration using Ansible scripts.
This is again a gain of time compared to manual installation, even if those are silent installations. Using Ansible playbooks guarantees that all installations are fully identical and avoids human errors in repetitive tasks.

Another interesting stream at the DOAG was the Docker one. There were a few presentations about the usage of Docker to run WebLogic instances:

Container im Schwarm – Weblogic und Docker Swarm
Docker Container für das Deployment eines WebLogic Clusters
Docker und Oracle’s JDK Grundlagen für Entwickler
Docker for developers

Docker is now supported by Oracle for WebLogic deployments. There are even Oracle Docker images that can be downloaded.

The presentation quality increases year after year. This is a new challenge to be prepared for next year: DOAG 2018.

 

 

 

Cet article DOAG 2017 – Middleware: Automation and Docker container streams est apparu en premier sur Blog dbi services.


Drop PDB including datafiles may keep files open

$
0
0

I like that Oracle drops the datafiles from the operating system when we remove them from the database (with drop tablespace or drop pluggable database) because I don't like to have orphan files remaining in the filesystem. However, to ensure that space is reclaimed, we must be sure that Oracle did not leave a process with the file opened. Linux allows you to remove an open file, but then it removes only the directory entry. The consequence is that we no longer see the file, but the space is not reclaimable until the last process closes its handle.
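
This behaviour is easy to reproduce outside of Oracle (a quick sketch on any Linux box; the file names are arbitrary):

# create a file and keep it open with a background reader
dd if=/dev/zero of=/tmp/bigfile bs=1M count=100
tail -f /tmp/bigfile &
# remove it: the directory entry disappears but the space is not freed
rm /tmp/bigfile
df -h /tmp            # space is still used
ls -l /proc/$!/fd     # the deleted file is still referenced by the tail process
kill %1               # closing the handle finally releases the space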
Here is a case where I had an issue in 12.2: plugging a PDB is done in parallel, and the parallel processes keep the files open even if we drop the pluggable database.

I have 1.2 GB free on my filesystem:
SQL> host df -h /u02/oradata
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/ol-root 3.1G 1.9G 1.2G 61% /

Plug

I plug a PDB from a PDB archive:
SQL> create pluggable database PDB0
2 using '/u01/app/temp/PDB0.pdb'
3 file_name_convert=('/u01/app/temp','/u02/oradata')
4 /
 
Pluggable database PDB0 created.

Open

In my example the PDB was from an older PSU level. I open it:
SQL> alter pluggable database PDB0 open;
ORA-24344: success with compilation error
Pluggable database PDB0 altered.

I got a warning and the PDB is in restricted session mode:
SQL> show pdbs
CON_ID CON_NAME OPEN MODE RESTRICTED
------ ---------- ------------ ----------
2 PDB$SEED READ ONLY NO
3 PDB0 READ WRITE YES
4 PDB1 READ WRITE NO

The reason is that a PSU installed in the CDB$ROOT was not there when the PDB was unplugged:

SQL> select status,message from pdb_plug_in_violations;
STATUS MESSAGE
------ ------------------------------------------------------------------------------------------------------------
PENDING DBRU bundle patch 171017 (DATABASE RELEASE UPDATE 12.2.0.1.171017): Installed in the CDB but not in the PDB.

I can run datapatch, but let’s say that I realize it’s the wrong PDB archive and I want to drop what I’ve imported. Then I expect to reclaim the space in order to be able to import the right one again.
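
For reference, had I wanted to keep this PDB, the fix would simply have been to run datapatch against it (a sketch; the -pdbs option limits the run to the listed containers and is available in recent datapatch versions):

$ORACLE_HOME/OPatch/datapatch -verbose -pdbs PDB0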

Processes

Before closing the PDB, here are all the processes having one of the datafiles open:
SQL> host for i in $(fuser /u02/oradata/*) ; do ps --no-headers -p $i ; done
/u02/oradata/sysaux01.dbf:
/u02/oradata/system01.dbf:
/u02/oradata/temp01.dbf:
/u02/oradata/undotbs01.dbf:
/u02/oradata/users01.dbf:
6053 ? 00:00:00 ora_dbw0_cdb1
6122 ? 00:00:01 ora_p003_cdb1
6594 ? 00:00:01 oracle_6594_cdb
6053 ? 00:00:00 ora_dbw0_cdb1
6120 ? 00:00:01 ora_p002_cdb1
6594 ? 00:00:01 oracle_6594_cdb
6053 ? 00:00:00 ora_dbw0_cdb1
6594 ? 00:00:01 oracle_6594_cdb
6053 ? 00:00:00 ora_dbw0_cdb1
6118 ? 00:00:00 ora_p001_cdb1
6594 ? 00:00:01 oracle_6594_cdb
6053 ? 00:00:00 ora_dbw0_cdb1
6116 ? 00:00:00 ora_p000_cdb1
6594 ? 00:00:01 oracle_6594_cdb

There is my session's shadow process, the DBWR, and the Pnnn parallel processes that did the copy of the datafiles during the plug.
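
If you want to map those operating-system PIDs back to Oracle processes and sessions, a join between v$process and v$session does the job (a sketch; adapt the PID list to your fuser output):

SQL> select p.spid, p.pname, s.sid, s.program from v$process p left join v$session s on s.paddr = p.addr where p.spid in ('6053','6116','6118','6120','6122','6594');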

Close

So, as I want to drop it, I first close it:
SQL> alter pluggable database PDB0 close;
 
Pluggable database PDB0 altered.

Close means that all files are closed. Is it true? Actually not:
SQL> host for i in $(fuser /u02/oradata/*) ; do ps --no-headers -p $i ; done
/u02/oradata/sysaux01.dbf:
/u02/oradata/system01.dbf:
/u02/oradata/undotbs01.dbf:
/u02/oradata/users01.dbf:
6122 ? 00:00:01 ora_p003_cdb1
6120 ? 00:00:01 ora_p002_cdb1
6118 ? 00:00:00 ora_p001_cdb1
6116 ? 00:00:00 ora_p000_cdb1

The parallel processes still have the datafiles open. This is probably a bug and I'll open an SR referencing this blog post.

So I want to reclaim space:
SQL> host df -h /u02/oradata
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/ol-root 3.1G 2.6G 527M 84% /

I expect this 527 MB of available space to go back to 1.2 GB once I drop the PDB.

Drop including datafiles

I cannot drop the PDB and keep the datafiles, or I'll get ORA-65179: cannot keep datafiles for a pluggable database that is not unplugged.
I don't want to unplug it but just drop it, so I must mention the 'including datafiles' clause:

SQL> drop pluggable database PDB0 including datafiles;
 
Pluggable database PDB0 dropped.

Unfortunately, the space is not reclaimed:
SQL> host df -h /u02/oradata
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/ol-root 3.1G 2.6G 527M 84% /

As you have seen that the Pnnn processes were still there after the close, you know the reason. Linux has removed the directory entry but the file data is still there in the filesystem until the processes close their handles (or the processes are killed). You can see them with lsof or from the /proc filesystem:

SQL> host find /proc/*/fd -ls 2>/dev/null | grep deleted
79174 0 lrwx------ 1 oracle oinstall 64 Dec 25 21:20 /proc/6116/fd/257 -> /u02/oradata/users01.dbf\ (deleted)
79195 0 lrwx------ 1 oracle oinstall 64 Dec 25 21:20 /proc/6118/fd/257 -> /u02/oradata/undotbs01.dbf\ (deleted)
79216 0 lrwx------ 1 oracle oinstall 64 Dec 25 21:20 /proc/6120/fd/257 -> /u02/oradata/system01.dbf\ (deleted)
79237 0 lrwx------ 1 oracle oinstall 64 Dec 25 21:20 /proc/6122/fd/257 -> /u02/oradata/sysaux01.dbf\ (deleted)

On a running CDB I cannot kill the background processes because they may be doing something useful. Until I can restart the instance, the only way to reclaim the space is to write an empty file over those handles. I cannot use the file name, which has been deleted, but I can use the /proc link:

SQL> host find /proc/*/fd -ls 2>/dev/null | grep -E " [/]u02[/]oradata[/].* [(]deleted[)]" | awk '{print $11}' | while read f ; do : > $f ; done

And finally space is reclaimed:
SQL> host df -h /u02/oradata
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/ol-root 3.1G 1.9G 1.2G 61% /

So what?

I encountered this issue with a PDB plug, but I'm quite sure we can encounter it in other situations where the parallel processes had opened the PDB datafiles. You can imagine the consequence on a multitenant CDB for DBaaS where you have hundreds of PDBs and constantly create and drop them, probably in an automated way. Space not reclaimed means that at some point the provisioning will fail. I hope this bug will be filed and fixed. Closing a PDB should take care that all processes close the files. The safe way is to include something like I did, emptying the deleted files, with the proper verification that the old files belong to a dropped PDB.
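
Here is a commented sketch of such a cleanup (to be adapted and tested; it only lists the handles, and the truncation itself is left commented out until you have verified that the files really belonged to a dropped PDB):

# list file-descriptor links that still point to deleted files under /u02/oradata
find /proc/*/fd -ls 2>/dev/null | grep '/u02/oradata/' | grep '(deleted)' | awk '{print $11}' |
while read -r fd
do
  target=$(readlink "$fd")   # e.g. "/u02/oradata/users01.dbf (deleted)"
  echo "handle $fd -> $target"
  # uncomment only after checking that the file belonged to a dropped PDB:
  # : > "$fd"
done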

 

Cet article Drop PDB including datafiles may keep files open est apparu en premier sur Blog dbi services.

12c Multitenant internals: PDB replay DDL for common users


In multitenant, you can create common users, roles, and profiles. You create them in CDB$ROOT, with the CONTAINER=ALL clause (which is optional because it is the only possible value when connected to CDB$ROOT), and they are visible to all containers. As the goal of multitenant is to avoid duplicating common metadata to all containers, you may think that they are visible through those magic metadata links. But that’s actually wrong: they are replicated with a very simple mechanism: the DDL for common objects is replayed into each user PDB.

I’m connected to CDB2’s CDB$ROOT and I have two pluggable databases:

SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
------ -------- ---- ---- ----------
2 PDB$SEED READ ONLY NO
3 PDB1 READ WRITE NO
4 PDB2 MOUNTED

PDB1 is opened and PDB2 is closed.

PDB_SYNC$

In this example, I’ll query PDB_SYNC$ which is the table where Oracle stores all DDL for common users, roles, or profiles in order to be able to replay it later:

SQL> select con_id,scnwrp,scnbas,ctime,name,auxname1,auxname2,opcode,replay#,sqlstmt from containers(pdb_sync$) where con_id=1 and bitand(flags,8)!=8 order by con_id,replay#;
 
CON_ID SCNWRP SCNBAS CTIME NAME AUXNAME1 AUXNAME2 OPCODE REPLAY# SQLSTMT
------ ------ ------ ----- ---- -------- -------- ------ ------- -------
1 0 852610 26-jan-17 02:57:26 CTXSYS SYS 5 1 alter user CTXSYS account unlock identified by *^@
1 0 853177 26-jan-17 02:57:34 CTXSYS SYS 5 2 alter user CTXSYS password expire account lock^@
1 0 1405359 26-jan-17 03:31:31 SYSTEM SYS 5 3 alter user system account lock password expire^@
1 0 1408693 23-dec-17 11:34:43 SYS SYS 5 4 alter user sys account unlock identified by *^@
1 0 1408703 23-dec-17 11:34:43 SYSTEM SYS 5 5 alter user system account unlock identified by *^@
1 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 5

I excluded the bitand(flags,8)=8 because it concerns application containers. I query with the container() to show the con_id but this is for con_id=1 which is the CDB$ROOT.

You can see some DDL for CTXSYS recorded on January 26th, which is the day this release (12.2.0.1) was built. I used a template with datafiles to create the CDB with DBCA. And you see some DDL to unlock SYS and SYSTEM on December 23rd when I created the database. You can also see that they are ordered in sequence with REPLAY#.

More interesting is the OPCODE=-1 row, named PDB$LASTREPLAY, which holds the last value of REPLAY#. This means that on this container, CDB$ROOT, all statements with REPLAY#<=5 were run.

With a similar query, I query the opened PDBs:

SQL> select con_id,scnwrp,scnbas,ctime,name,auxname1,auxname2,opcode,replay#,sqlstmt from containers(pdb_sync$) where con_id>1 and bitand(flags,8)!=8 order by con_id,replay#;
 
CON_ID SCNWRP SCNBAS CTIME NAME AUXNAME1 AUXNAME2 OPCODE REPLAY# SQLSTMT
------ ------ ------ ----- ---- -------- -------- ------ ------- -------
3 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 5

There is only one row here in CON_ID=3, which is PDB1: the PDB$LASTREPLAY row mentioning that all statements up to REPLAY#=5 have also been run in this container.
I don’t see PDB2 (CON_ID=4) here because the container() clause cannot query closed containers.

CONTAINER=ALL DDL

I’ll run some common DDL to create a profile, a role and a user:

SQL> create profile C##PROFILE1 limit inactive_account_time 15 container=all;
Profile C##PROFILE1 created.
 
SQL> create role C##ROLE1 container=all;
Role C##ROLE1 created.
 
SQL> create user C##USER1 identified by oracle container=all;
User C##USER1 created.
 
SQL> alter user C##USER1 profile C##PROFILE1;
User C##USER1 altered.
 
SQL> grant C##ROLE1 to C##USER1 container=all;
Grant succeeded.

The C## prefix is mandatory to isolate the common user namespace. You can change it with the common_user_prefix parameter. You can even set it to the empty string, but then you risk namespace collisions when you plug a PDB between CDBs having different common users, roles or profiles.
CONTAINER=ALL is the default and the only possibility when connected to CDB$ROOT, so it is optional. I recommend mentioning it explicitly in order to avoid problems when running the same DDL in CDB$ROOT and in PDBs.
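
By the way, a quick way to check the prefix in place on your CDB (a trivial sketch):

SQL> show parameter common_user_prefix

When the value is empty, the default C## prefix applies.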

All those DDL have been recorded into PDB_SYNC$ and the REPLAY# has been increased:

SQL> select con_id,scnwrp,scnbas,ctime,name,auxname1,auxname2,opcode,replay#,sqlstmt from containers(pdb_sync$) where con_id=1 and bitand(flags,8)!=8 order by con_id,replay#;
 
CON_ID SCNWRP SCNBAS CTIME NAME AUXNAME1 AUXNAME2 OPCODE REPLAY# SQLSTMT
------ ------ ------ ----- ---- -------- -------- ------ ------- -------
1 0 852610 26-jan-17 02:57:26 CTXSYS SYS 5 1 alter user CTXSYS account unlock identified by *^@
1 0 853177 26-jan-17 02:57:34 CTXSYS SYS 5 2 alter user CTXSYS password expire account lock^@
1 0 1405359 26-jan-17 03:31:31 SYSTEM SYS 5 3 alter user system account lock password expire^@
1 0 1408693 23-dec-17 11:34:43 SYS SYS 5 4 alter user sys account unlock identified by *^@
1 0 1408703 23-dec-17 11:34:43 SYSTEM SYS 5 5 alter user system account unlock identified by *^@
1 0 1466615 29-dec-17 09:26:56 C##PROFILE1 SYS 7 6 create profile C##PROFILE1 limit inactive_account_time 15 container=all^@
1 0 1466641 29-dec-17 09:26:57 C##ROLE1 SYS 3 7 create role C##ROLE1 container=all^@
1 0 1466748 29-dec-17 09:26:58 C##USER1 SYS 1 8 create user C##USER1 identified by * container=all^@
1 0 1466812 29-dec-17 09:26:59 C##USER1 SYS 5 9 alter user C##USER1 profile C##PROFILE1^@
1 0 1466853 29-dec-17 09:26:59 C##USER1 C##ROLE1 SYS 10 10 grant C##ROLE1 to C##USER1 container=all^@
1 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 10

PDB1 (CON_ID=3) was opened read write, and then has been synchronized (the DDL has been run in the container to create the same profile, role and user) and the PDB$LASTREPLAY has been updated in this container to show that all has been done:

SQL> select con_id,scnwrp,scnbas,ctime,name,auxname1,auxname2,opcode,replay#,sqlstmt from containers(pdb_sync$) where con_id>1 and bitand(flags,8)!=8 order by con_id,opcode,replay#;
 
CON_ID SCNWRP SCNBAS CTIME NAME AUXNAME1 AUXNAME2 OPCODE REPLAY# SQLSTMT
------ ------ ------ ----- ---- -------- -------- ------ ------- -------
3 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 10

MOUNTED or READ ONLY

I open PDB2 read only because I want to see what is in PDB_SYNC$ there. But READ ONLY means that the DDL cannot be run, because no write is allowed in the local dictionary.

SQL> alter pluggable database PDB2 open read only;
Pluggable database PDB2 altered.

Running the same query as above, I can see that PDB2 (CON_ID=4) is synchronized only up to the statements with REPLAY#=5 because my DDL was not replicated there.
SQL> select con_id,scnwrp,scnbas,ctime,name,auxname1,auxname2,opcode,replay#,sqlstmt from containers(pdb_sync$) where con_id>1 and bitand(flags,8)!=8 order by con_id,opcode,replay#;
 
CON_ID SCNWRP SCNBAS CTIME NAME AUXNAME1 AUXNAME2 OPCODE REPLAY# SQLSTMT
------ ------ ------ ----- ---- -------- -------- ------ ------- -------
3 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 10
4 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 5

Sync at OPEN

When I open PDB2 in read write mode, the DDL can be synchronized:

SQL> alter pluggable database PDB2 open read write force;
Pluggable database PDB2 altered.

At open, the DDL from REPLAY#>5 has been replayed and once opened the PDB is in sync with CDB$ROOT:

SQL> select con_id,scnwrp,scnbas,ctime,name,auxname1,auxname2,opcode,replay#,sqlstmt from containers(pdb_sync$) where con_id>1 and bitand(flags,8)!=8 order by con_id,opcode,replay#;
 
CON_ID SCNWRP SCNBAS CTIME NAME AUXNAME1 AUXNAME2 OPCODE REPLAY# SQLSTMT
------ ------ ------ ----- ---- -------- -------- ------ ------- -------
3 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 10
4 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 10

So what?

The common users, roles and profiles are not stored only in CDB$ROOT to be shared, but rather replicated to all PDBs. The DDL is replicated synchronously to all opened pluggable databases in read write, and stored into the CDB$ROOT PDB_SYNC$ table to be replayed later when non-synced PDBs are opened. I’ll show in the next post what happens when the DDL is in error.

Note that even when all pluggable databases are opened read write, the DDL statements are stored (and purged only later, once replayed on all PDBs) because they are needed when you create a new PDB and open it. The PDB$SEED has REPLAY#=0 for PDB$LASTREPLAY, which means that all statements will be replayed there.

This is 12.2.0.1, where this mechanism applies only to common users, roles and profiles having DDL in CDB$ROOT. With application containers, more than that is recorded: all DML and DDL run between the ‘begin install/upgrade/patch’ and ‘end install/upgrade/patch’ in the application root. Then, the statements can be replayed into the application PDBs with a simple SYNC command. In the future release (18c) we expect to have that application root behavior ported to CDB$ROOT so that we don’t have to run catupgrd.sql in all containers. Then the PDB will probably be patched or upgraded when opened.

 

Cet article 12c Multitenant internals: PDB replay DDL for common users est apparu en premier sur Blog dbi services.

12c Multitenant internals: PDB_PLUG_IN_VIOLATIONS


In the previous post https://blog.dbi-services.com/12c-multitenant-internals-pdb-replay-ddl-for-common-users I’ve run some DDL on a common user to show how it is replayed later for PDBs that were not opened at that time. But what happens when one of the DDL statements fails on one PDB?

PDB$LASTREPLAY

In the last post, the C##USER1 common user was created and all pluggable databases (PDB1 with con_id=3 and PDB2 with con_id=4) were opened and synchronized:

SQL> select con_id,scnwrp,scnbas,ctime,name,auxname1,auxname2,opcode,replay#,sqlstmt from containers(pdb_sync$) where opcode=-1 and bitand(flags,8)!=8 order by con_id,replay#;
 
CON_ID SCNWRP SCNBAS CTIME NAME AUXNAME1 AUXNAME2 OPCODE REPLAY# SQLSTMT
------ ------ ------ ----- ---- -------- -------- ------ ------- -------
1 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 10
3 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 10
4 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 10

When REPLAY# in the PDB is equal to the CDB$ROOT one, this means that there are no additional statements to replicate on the PDB.

I have PDB1 opened read write and PDB2 in read only:

SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
------ -------- ---- ---- ----------
2 PDB$SEED READ ONLY NO
3 PDB1 READ WRITE NO
4 PDB2 READ ONLY NO

For the demo my user’s default tablespace is SYSTEM:

SQL> select con_id,username,common,default_tablespace from cdb_users where username='C##USER1' order by 1;
 
CON_ID USERNAME COMMON DEFAULT_TABLESPACE
------ -------- ------ ------------------
1 C##USER1 YES SYSTEM
3 C##USER1 YES SYSTEM
4 C##USER1 YES SYSTEM

Failure in opened containers

I want to change the default tablespace for C##USER1 and I have a USERS tablespace in CDB$ROOT (but not in the PDBs):

SQL> alter user C##USER1 default tablespace USERS;
 
Error starting at line : 50 File @ common-users-pdb-sync.sql
In command -
alter user C##USER1 default tablespace USERS
Error report -
ORA-65048: error encountered when processing the current DDL statement in pluggable database PDB1
ORA-00959: tablespace 'USERS' does not exist

As we have seen in the last post, the DDL is executed on all containers that are opened read write. Here it is fine on CDB$ROOT but fails on PDB1.

Then I create the USERS tablespace in PDB1:

SQL> alter session set container=PDB1;
Session altered.
SQL> create tablespace USERS datafile '/u01/oradata/CDB2/PDB1/users.dbf' size 5M;
Tablespace USERS created.
SQL> alter session set container=CDB$ROOT;
Session altered.

And now, the statement is successful in CDB$ROOT, replicated on PDB1:

SQL> alter user C##USER1 default tablespace USERS;
User C##USER1 altered.

This is nice: the statement either succeeds in all opened containers or fails. When it is successful, the statement is recorded in PDB_SYNC$:

SQL> select con_id,scnwrp,scnbas,ctime,name,auxname1,auxname2,opcode,replay#,sqlstmt from containers(pdb_sync$) where con_id=1 and bitand(flags,8)!=8 order by con_id,replay#;
 
CON_ID SCNWRP SCNBAS CTIME NAME AUXNAME1 AUXNAME2 OPCODE REPLAY# SQLSTMT
------ ------ ------ ----- ---- -------- -------- ------ ------- -------
1 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 11
1 0 852610 26-jan-17 02:57:26 CTXSYS SYS 5 1 alter user CTXSYS account unlock identified by *^@
1 0 853177 26-jan-17 02:57:34 CTXSYS SYS 5 2 alter user CTXSYS password expire account lock^@
1 0 1405359 26-jan-17 03:31:31 SYSTEM SYS 5 3 alter user system account lock password expire^@
1 0 1408693 23-dec-17 11:34:43 SYS SYS 5 4 alter user sys account unlock identified by *^@
1 0 1408703 23-dec-17 11:34:43 SYSTEM SYS 5 5 alter user system account unlock identified by *^@
1 0 1466615 29-dec-17 09:26:56 C##PROFILE1 SYS 7 6 create profile C##PROFILE1 limit inactive_account_time 15 container=all^@
1 0 1466641 29-dec-17 09:26:57 C##ROLE1 SYS 3 7 create role C##ROLE1 container=all^@
1 0 1466748 29-dec-17 09:26:58 C##USER1 SYS 1 8 create user C##USER1 identified by * container=all^@
1 0 1466812 29-dec-17 09:26:59 C##USER1 SYS 5 9 alter user C##USER1 profile C##PROFILE1^@
1 0 1466853 29-dec-17 09:26:59 C##USER1 C##ROLE1 SYS 10 10 grant C##ROLE1 to C##USER1 container=all^@
1 0 1467010 29-dec-17 09:27:01 C##USER1 SYS 5 11 alter user C##USER1 default tablespace USERS^@

Failure in replay at open for closed containers

But PDB2 is not synchronized because it was not opened read write:

SQL> select con_id,scnwrp,scnbas,ctime,name,auxname1,auxname2,opcode,replay#,sqlstmt from containers(pdb_sync$) where con_id>1 and bitand(flags,8)!=8 order by con_id,replay#;
 
CON_ID SCNWRP SCNBAS CTIME NAME AUXNAME1 AUXNAME2 OPCODE REPLAY# SQLSTMT
------ ------ ------ ----- ---- -------- -------- ------ ------- -------
3 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 11
4 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 10

But I don’t have a USERS tablespace in PDB2, so the replay will fail:

SQL> alter pluggable database PDB2 close;
Pluggable database PDB2 altered.
 
SQL> alter pluggable database PDB2 open;
ORA-24344: success with compilation error
Pluggable database PDB2 altered.

This is a warning only. The SQLcl feedback is a bit misleading, mentioning a compilation error because this is where we used to see warnings, but the SQL*Plus message is clearer:

SQL> alter pluggable database PDB2 open;
Warning: PDB altered with errors.

PDB2 cannot be left closed, because you need to create the missing tablespace there. But it cannot be opened to everyone either, because it is not in sync with CDB$ROOT. So what happens is that the PDB is opened in restricted mode:

SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
------ -------- ---- ---- ----------
2 PDB$SEED READ ONLY NO
3 PDB1 READ WRITE NO
4 PDB2 READ WRITE YES

Note that if you look at PDB_SYNC$ in the PDB at that time, PDB$LASTREPLAY has already increased to REPLAY#=11, but you also see rows for the statement that still has to be run. You have to connect to the PDB because containers() does not query containers opened in restricted session:

SQL> alter session set container=PDB2;
Session altered.
 
SQL> select con_id,scnwrp,scnbas,ctime,name,auxname1,auxname2,opcode,replay#,sqlstmt from containers(pdb_sync$) where con_id>1 and bitand(flags,8)!=8 order by con_id,replay#;
 
CON_ID SCNWRP SCNBAS CTIME NAME AUXNAME1 AUXNAME2 OPCODE REPLAY# SQLSTMT
------ ------ ------ ----- ---- -------- -------- ------ ------- -------
4 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 11
4 0 1469022 29-dec-17 09:27:02 C##USER1 SYS 5 11 alter user C##USER1 default tablespace USERS^@
 
SQL> alter session set container=CDB$ROOT;
Session altered.

Actually, the attempt to sync has inserted the statements and pushed the last replay indicator. Now, the PDB has all the information needed to sync without going back to CDB$ROOT. The DDL was not replayed, but it has been stored locally. When the sync is successful, the statements are removed from the local PDB_SYNC$, leaving only the PDB$LASTREPLAY indicator.

PDB_PLUG_IN_VIOLATIONS

More info about the warning is stored in PDB_ALERT$, which you query through PDB_PLUG_IN_VIOLATIONS (the strange name is reminiscent of the TRANSPORT_SET_VIOLATIONS view used by DBMS_TTS):

SQL> select name, cause, type, status,action,message,time from pdb_plug_in_violations;
 
NAME CAUSE TYPE STATUS ACTION MESSAGE TIME
---- ----- ---- ------ ------ ------- -------------------------------
PDB2 Sync Failure ERROR PENDING Sync PDB failed with ORA-959 during 'alter user C##USER1 default tablespace USERS' 29-DEC-17 09.27.03.266780000 PM

Here you have the statement that failed and the error number, but no recommended ACTION. However, ORA-959 is “tablespace ‘%s’ does not exist” which gives a clue about the problem encountered.

As the PDB is opened, a DBA (with the RESTRICTED SESSION privilege) can add the tablespace:

SQL> alter session set container=PDB2;
Session altered.
 
SQL> create tablespace USERS datafile '/u01/oradata/CDB2/PDB2/users.dbf' size 5M;
Tablespace USERS created.

But you cannot simply disable restricted session:

SQL> alter system disable restricted session;
 
Error starting at line : 74 File @ common-users-pdb-sync.sql
In command -
alter system disable restricted session
Error report -
ORA-65144: ALTER SYSTEM DISABLE RESTRICTED SESSION is not permitted
65144. 00000 - "ALTER SYSTEM DISABLE RESTRICTED SESSION is not permitted"
*Cause: An attempt was made to disable a restricted session while an unresolved error existed in PDB_PLUG_IN_VIOLATIONS.
*Action: Resolve all of the errors before trying to disable a restricted session.

One solution is to close and open the PDB to get the DDL replay:
SQL> alter session set container=CDB$ROOT;
Session altered.
SQL> alter pluggable database PDB2 close;
Pluggable database PDB2 altered.
SQL> alter pluggable database PDB2 open;
Pluggable database PDB2 altered.

The other solution is to call DBMS_PDB.SYNC_PDB and disable restricted mode:

SQL> exec dbms_pdb.sync_pdb; commit;
PL/SQL procedure successfully completed.
 
SQL> alter system disable restricted session;
System DISABLE altered.

In both cases, no warning here, and no restricted mode:

SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
------ -------- ---- ---- ----------
2 PDB$SEED READ ONLY NO
3 PDB1 READ WRITE NO
4 PDB2 READ WRITE NO

and the PDB_PLUG_IN_VIOLATIONS is updated to flag the issue as resolved:

SQL> select name, cause, type, status,action,message,time from pdb_plug_in_violations;
 
NAME CAUSE TYPE STATUS ACTION MESSAGE TIME
---- ----- ---- ------ ------ ------- -------------------------------
PDB2 Sync Failure ERROR RESOLVED Sync PDB failed with ORA-959 during 'alter user C##USER1 default tablespace USERS' 29-DEC-17 09.27.04.093659000 PM

At that time, the local PDB_SYNC$ table in PDB2 contains only the PDB$LASTREPLAY row, with the same value as in the CDB$ROOT table. The rows with the statements have been deleted once the DDL has been successfully replayed:

SQL> select con_id,scnwrp,scnbas,ctime,name,auxname1,auxname2,opcode,replay#,sqlstmt from containers(pdb_sync$) where con_id>1 and bitand(flags,8)!=8 order by con_id,replay#;
 
CON_ID SCNWRP SCNBAS CTIME NAME AUXNAME1 AUXNAME2 OPCODE REPLAY# SQLSTMT
------ ------ ------ ----- ---- -------- -------- ------ ------- -------
4 0 0 26-jan-17 01:53:02 PDB$LASTREPLAY -1 11

So what?

The mechanism is simple: record what is done in CDB$ROOT, replicate it to the PDBs when possible (opened read write), try to replay it, and mark the last replay step. For containers that were not writeable, at open time the DDL is replicated to the PDBs that lag behind CDB$ROOT and the replay step is updated. Then the DDL is replayed. When successful, the statement is removed from the replicated DDL. When it fails, you get a warning, a message in PDB_PLUG_IN_VIOLATIONS, and the PDB is opened in restricted session mode to let you solve the problem.
If you can fix the issue so that the DDL to be replayed is successful, then you can just sync and disable restricted session, or simply close and re-open the PDB.
If you can’t fix it, I suppose you need to hack the statements in the local PDB_SYNC$, with Oracle Support agreement of course, and make sure that you arrive at a state which is consistent with the other containers, especially CDB$ROOT.
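
Before touching anything, a first step is to look at what is pending locally. This is just a sketch of the kind of query I would run, connected to the problematic PDB, to list the statements stored for replay (they disappear once successfully replayed):

SQL> alter session set container=PDB2;
SQL> select replay#, opcode, name, sqlstmt from pdb_sync$ where opcode != -1 order by replay#;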

 

Cet article 12c Multitenant internals: PDB_PLUG_IN_VIOLATIONS est apparu en premier sur Blog dbi services.

Experiencing update statistics on a big table with circuitous ways


This is my first blog of the new year, and my first one in a while by the way. Indeed, last year I put all my energy into realigning my Linux skills with Microsoft’s new strategy that opens SQL Server to the open source world. But at the same time, I did a lot of interesting work at customer sites and I decided to write about one of them to begin this new year 2018.

blog 124 - 0 - stats

In this blog post, I will highlight what I consider a distinctive approach to optimize an update statistics operation for one particular, big table. I already had to manage such maintenance tasks in one of my previous jobs as a DBA and I continue to learn more about them, but from a different position now. The fact is that, as a consultant, I usually try to provide my customer the best approach regarding both the issue and the context. In reality, from my experience, the latter is much more important than you may think, and sometimes we have to consider other ways to get the expected outcome. I think this is not necessarily a bad thing, because following a different path (not the best one) may reveal other interesting options we can add to our final recipe.

Let’s go back to my customer case and set the context: one big database (1.7TB) on SQL Server 2014 SP2 and an update statistics operation that is part of a global database maintenance strategy and takes a very long time (roughly 30h in the best-case scenario). We identified that the biggest part of the execution time is related to one big, non-partitioned table (let’s say dbo.bigtable) with the following figures: 148 million rows / 222GB in size / 119 columns / 94 statistics / 14 indexes. Regarding the two last figures, the majority of statistics were in fact auto-generated by the SQL Server engine over time and from different application release campaigns. Furthermore, it is worth mentioning that the DBA had to use a customized sample value (between 10 and 50 percent) to minimize the impact of the update statistics operation on this particular table, given the table size and the configured maintenance window timeframe.
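
To give an idea of how we got the statistics figures, here is a simplified sketch of the kind of inventory query we used (dbo.bigtable being the anonymized table name):

-- count auto-created vs other statistics on the table
SELECT
    t.name AS table_name,
    SUM(CASE WHEN st.auto_created = 1 THEN 1 ELSE 0 END) AS auto_created_stats,
    SUM(CASE WHEN st.auto_created = 0 THEN 1 ELSE 0 END) AS other_stats,
    COUNT(*) AS total_stats
FROM sys.stats AS st
JOIN sys.tables AS t ON t.object_id = st.object_id
WHERE t.name = N'bigtable'
GROUP BY t.name;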

 

My first and obvious approach

My first approach consisted in warning the database administrator about the number of statistics on this table (probably a lot of them are not in use anymore?) as well as the development team about the badly designed table. In addition, the number of rows in this table may also indicate that it contains a lot of archive data and we could reduce the global size by using archiving processes (why not the built-in partitioning features and incremental statistics, since we’re running Enterprise Edition?). However, reviewing the model was not an option for my customer because it would require a lot of work, and the DBA wasn’t keen on the idea of archiving data (business constraints) or removing auto-generated statistics on this specific table. So, what I considered a good approach did not get adopted, and the human aspect was definitely a big part of it. In a nutshell, at this stage we were at a standstill…

 

When a high-performance storage comes into rescue …

In parallel, my customer was considering replacing their old storage with a new one, and Pure Storage was in the loop. Pure Storage is one of the flash storage providers on the market and the good news is that I was already in touch with @scrimi78 (Systems Engineer at Pure Storage) in Switzerland. During this project, we had a lot of interesting discussions and interactions about Pure Storage products and I appreciated his availability to provide technical documentation and explanations. At the same time, they generously lent us a Pure Storage array to play with snapshot volumes that would be used for database refreshes between production and dev environments.

In the context of my customer, we were already aware of the poor performance capabilities of the old storage, and the replacement by a Pure Storage //M20 model was very beneficial for the database environments, as shown by the following figures:

I only put the IO-related statistics of the production environment from the last year, and we can notice a significant drop in average IO time after moving the production database files to the new storage layout. Figures are by year and month.

blog 124 - 1 - wait stats

We noticed the same from file IO statistics figures about the concerned database.

blog 124 - 3 - file io stats

Very impressive, isn’t it? But what about our update statistics here? We naturally observed a significant drop in execution time, down to 6 hours (an 80% improvement), because generally speaking this is an IO-bound operation, and especially so in our case. The yellow columns represent operations we had to stop manually to avoid impacting the current application workload (> 35 hours of execution time). You may also notice we changed the sample value to 100 percent after installing the Pure Storage //M20 model, compared to previous runs with a sample value of 50 percent.

blog 124 - 3 - update stats execution time

 

Changing the initial way to update statistics for this table …

The point here is that we now know we can rely on the storage performance to update statistics efficiently, so why not push the limit of the storage by changing the way our update statistics operation runs – by default basically sequential, one statistic at a time. So, we decided to write a custom script to carry out the update operation in parallel to boost the overall execution performance. Since SQL Server 2014 SP1 CU6 we can benefit from improved support for parallel statistics updates by using trace flag 7471, which changes the locking behavior so that the SQL Server engine no longer acquires an X lock on the UPDSTATS resource of the table. The script consists in creating a pool of parallel SQL jobs, each updating one particular statistic on a single table. I put it below if you want to use it; in the meantime, let’s say it is also possible to go through an interesting, smart alternative solution based on Service Broker capabilities here. What is certain is that we will integrate one or the other version – with some adjustments – into our DMK management kit tool. Here is my proposal based on concurrent SQL jobs (feel free to comment):

SET NOCOUNT ON;

DECLARE 
	@database_name sysname = N'AdventureWorks2012', -- Target database
	@table_name sysname = N'bigTransactionHistory', -- Target table
	@schema_name sysname = N'dbo',                  -- Target schema
	@batch_upd_stats INT = 3,                       -- Number of simultaneous jobs
	@stat_sample INT = 0, 					        -- 0 = default sample rate value | 100 = FULLSCAN | Otherwise WITH SAMPLE @stat_sample PERCENT
	@debug TINYINT = 1								-- 0 = execute silently | 1 = debug (print and execute) | 99 = verbose (no execution)
	

-- @stats_sample variable table
-- Will contains statistics with custom sampling rate value
-- Otherwise will use the default sample rate value from SQL Server
-- You may also use a user table for more flexibility; that table will
-- then be used from this script
DECLARE @stats_sample TABLE
(
	column_name sysname,
	stats_name sysname,
	sample_stat tinyint
);

INSERT @stats_sample VALUES ('TransactionID', '_WA_Sys_00000001_4CC05EF3', 100),
						    ('ProductID', '_WA_Sys_00000002_4CC05EF3', 100);



-- working parameters
DECLARE
	@nb_jobs_running INT = 0,
	@count INT = 0,
	@i INT,
	@j INT,
	@max INT,
	@sqlcmd NVARCHAR(MAX) = N'',
	@job_name_pattern sysname,
	@start_date DATETIME2(0),
	@end_date DATETIME2(0),
	@stats_name sysname,
	@sample_stat int
	;


-- Creation of update stat jobs
IF OBJECT_ID('tempdb..#update_stats_tt', 'U') IS NOT NULL
	DROP TABLE #update_stats_tt;

SELECT 
	id = IDENTITY(INT, 1,1),
	s.name AS [schema_name],
	t.name AS table_name,
	st.name AS stats_name,
	sample_stat,
	'USE [msdb]
	
	DECLARE @jobId BINARY(16);
	
	EXEC  msdb.dbo.sp_add_job @job_name=N''UPDATE_STATS_' + @schema_name + '_' + @table_name + '_' + st.name + ''', 
			@enabled=1, 
			@notify_level_eventlog=0, 
			@notify_level_email=2, 
			@notify_level_netsend=2, 
			@notify_level_page=2, 
			@delete_level=0, 
			@category_name=N''[Uncategorized (Local)]'', 
			@owner_login_name=N''sa'', @job_id = @jobId OUTPUT
	--select @jobId
	
	EXEC msdb.dbo.sp_add_jobserver @job_name=N''UPDATE_STATS_' + @schema_name + '_' + @table_name + '_' + st.name + ''', @server_name = N''' + @@SERVERNAME + '''
	
	EXEC msdb.dbo.sp_add_jobstep @job_name=N''UPDATE_STATS_' + @schema_name + '_' + @table_name + '_' + st.name + ''', @step_name=N''UPDATE STATS'', 
		@step_id=1, 
		@cmdexec_success_code=0, 
		@on_success_action=1, 
		@on_fail_action=2, 
		@retry_attempts=0, 
		@retry_interval=0, 
		@os_run_priority=0, @subsystem=N''TSQL'', 
		@command=N''UPDATE STATISTICS ' + QUOTENAME(s.name) + '.' + QUOTENAME(t.name) + ' ' + st.name + CASE COALESCE(sample_stat, @stat_sample, 0)
																											WHEN 0 THEN ' '
																											WHEN 100 THEN ' WITH FULLSCAN'
																											ELSE ' WITH SAMPLE ' + CAST(COALESCE(sample_stat, @stat_sample, 0) AS VARCHAR(15)) + ' PERCENT'
																										END + ''', 
		@database_name=N''' + @database_name + ''', 
		@flags=0

	EXEC msdb.dbo.sp_update_job @job_name=N''UPDATE_STATS_' + @schema_name + '_' + @table_name + '_' + st.name + ''', 
			@enabled=1, 
			@start_step_id=1, 
			@notify_level_eventlog=0, 
			@notify_level_email=2, 
			@notify_level_netsend=2, 
			@notify_level_page=2, 
			@delete_level=0, 
			@description=N'''', 
			@category_name=N''[Uncategorized (Local)]'', 
			@owner_login_name=N''sa'', 
			@notify_email_operator_name=N'''', 
			@notify_netsend_operator_name=N'''', 
			@notify_page_operator_name=N''''
	' AS upd_stats_cmd
INTO  #update_stats_tt
FROM 
	sys.stats AS st
JOIN 
	sys.tables AS t ON st.object_id = t.object_id
JOIN
	sys.schemas AS s ON s.schema_id = t.schema_id
LEFT JOIN 
	@stats_sample AS ss ON ss.stats_name = st.name
WHERE 
	t.name =  @table_name
	AND s.name = @schema_name
ORDER BY 
	stats_id;

IF @debug = 99
	SELECT * FROM #update_stats_tt;

-- Enable trace flag 7471 to allow concurrent UPDATE STATISTICS commands on the same table
PRINT '--> Enable trace flag 7471 during update stats operation';

SET @sqlcmd = N'DBCC TRACEON(7471, -1);' + CHAR(13)
EXEC sp_executesql @sqlcmd;

PRINT '-----------------------------------';

SET @start_date = CURRENT_TIMESTAMP;

SET @max = (SELECT MAX(id) FROM #update_stats_tt);
SET @i = 0;

IF @debug = 99
	SELECT 'DEBUG -->  @max (nb stats) : ' + CAST(@max AS VARCHAR(15));

-- Entering to the loop ...
WHILE (@i <= @max OR @nb_jobs_running <> 0)
BEGIN

	SET @j = @i + 1;

	IF @debug = 99
	BEGIN
		SELECT 'DEBUG -->  @i : ' + CAST(@i AS VARCHAR(15));
		SELECT 'DEBUG --  @j = @i + 1 : ' + CAST(@j AS VARCHAR(15));
	END

	-- Computing number of update stats jobs to create
	-- regarding both the configured max of simultaneous jobs and the currently running jobs
	SET @count = @batch_upd_stats - @nb_jobs_running;

	IF @debug = 99
		SELECT 'DEBUG --  @count : ' + CAST(@count AS VARCHAR(15));

	-- Here we go ... creating update stats sql_jobs
	WHILE (@j <= @i + @count)
	BEGIN

		SET @sqlcmd = '';
		SET @stats_name = NULL;
		SET @sample_stat = NULL;
		SET @sqlcmd = NULL;

		SELECT 
			@stats_name = stats_name,
			@sample_stat = sample_stat,
			@sqlcmd = upd_stats_cmd + CHAR(13) + '---------------------' + CHAR(13)
		FROM 
			#update_stats_tt
		WHERE 
			id = @j;

		IF @debug = 99
		BEGIN
			SELECT 'DEBUG --  @j loop : ' + CAST(@j AS VARCHAR(15));
			SELECT @stats_name, @sample_stat
		END

		IF @debug = 1
			PRINT 'UPDATE STATISTICS ' + QUOTENAME(@schema_name) + '.' + QUOTENAME(@table_name) + ' ' + @stats_name + CASE COALESCE(@sample_stat, @stat_sample, 0)
																															WHEN 0 THEN ' '
																															WHEN 100 THEN ' WITH FULLSCAN'
																															ELSE ' WITH SAMPLE ' + CAST(COALESCE(@sample_stat, @stat_sample, 0) AS VARCHAR(15)) + ' PERCENT'
																														  END + '';
		IF @debug IN (0,1) 
		BEGIN
			PRINT '--> Create SQL job UPDATE_STATS_' + @schema_name + '_' + @table_name + '_' + @stats_name + '';
			
			EXEC sp_executesql @sqlcmd;
		END

		SET @j += 1;

	END

	-- We need to rewind by 1 to target the next stat to update
	SET @j -= 1;

	PRINT '-----------------------------------';

	-- Start all related update stats jobs 
	SET @sqlcmd = N'';

	SELECT @sqlcmd += 'EXEC msdb.dbo.sp_start_job @job_name = ''UPDATE_STATS_' + @schema_name + '_' + @table_name + '_' + stats_name + ''';' + CHAR(13)
	FROM #update_stats_tt
	WHERE id BETWEEN (@i + 1) AND (@i + @count);

	IF @debug = 1
		PRINT @sqlcmd;
	
	IF @debug IN (0,1)
	BEGIN
		PRINT '--> Starting UPDATE_STATS_' + @schema_name + '_' + @table_name + ' jobs';
		EXEC sp_executesql @sqlcmd;
		PRINT '-----------------------------------';
	END

	-- Waiting 10 seconds before checking running jobs
	WAITFOR DELAY '00:00:10';

	-- Construct the job name pattern used in the next steps - check running jobs and delete terminated jobs
	SET @job_name_pattern = 'UPDATE_STATS_' + @schema_name + '_' + @table_name + '_';
	
	IF @debug = 99
		SELECT 'DEBUG - @job_name_pattern = ' + @job_name_pattern

	SELECT 
		@nb_jobs_running = COUNT(*)
	FROM 
		msdb.dbo.sysjobactivity ja (NOLOCK)
	LEFT JOIN 
		msdb.dbo.sysjobhistory jh (NOLOCK) ON ja.job_history_id = jh.instance_id
	JOIN 
		msdb.dbo.sysjobs j  (NOLOCK) ON ja.job_id = j.job_id 
	JOIN 
		msdb.dbo.sysjobsteps js  (NOLOCK) ON ja.job_id = js.job_id AND ISNULL(ja.last_executed_step_id,0)+1 = js.step_id
	WHERE 
		ja.session_id = (
							SELECT TOP 1 session_id 
							FROM msdb.dbo.syssessions  (NOLOCK) 
							ORDER BY agent_start_date DESC)
		AND start_execution_date is not null
		AND stop_execution_date is NULL
		AND j.name LIKE @job_name_pattern + '%';

	IF @debug = 99
		SELECT 'DEBUG --  @nb_jobs_running : ' + CAST(@nb_jobs_running AS VARCHAR(15));
	
	IF @nb_jobs_running = @batch_upd_stats
		PRINT '--> All SQL jobs are running. Waiting for 5s ...';

	-- Waiting until at least one job is terminated ...
	WHILE (@nb_jobs_running = @batch_upd_stats)
	BEGIN

		-- Count nb of running jobs only
		SELECT 
			@nb_jobs_running = COUNT(*)
		FROM 
			msdb.dbo.sysjobactivity ja (NOLOCK)
		LEFT JOIN 
			msdb.dbo.sysjobhistory jh (NOLOCK) ON ja.job_history_id = jh.instance_id
		JOIN 
			msdb.dbo.sysjobs j  (NOLOCK) ON ja.job_id = j.job_id 
		JOIN 
			msdb.dbo.sysjobsteps js  (NOLOCK) ON ja.job_id = js.job_id AND ISNULL(ja.last_executed_step_id,0)+1 = js.step_id
		WHERE ja.session_id = (
								SELECT TOP 1 session_id 
								FROM msdb.dbo.syssessions  (NOLOCK) 
								ORDER BY agent_start_date DESC)
			AND start_execution_date is not null
			AND stop_execution_date is NULL
			AND j.name LIKE @job_name_pattern + '%'

		WAITFOR DELAY '00:00:05';

	END

	PRINT '-----------------------------------';

	-- Delete terminated SQL jobs 
	SET @sqlcmd = '';

	SELECT 
		@sqlcmd += 'EXEC msdb.dbo.sp_delete_job  @job_name = ''' + j.name + ''';' + CHAR(13)
	FROM 
		msdb.dbo.sysjobactivity ja (NOLOCK)
	LEFT JOIN 
		msdb.dbo.sysjobhistory jh (NOLOCK) ON ja.job_history_id = jh.instance_id
	JOIN 
		msdb.dbo.sysjobs j  (NOLOCK) ON ja.job_id = j.job_id 
	WHERE 
		j.name LIKE @job_name_pattern + '%'
		AND start_execution_date is not null
		AND stop_execution_date is not null

	IF @debug = 1
		PRINT @sqlcmd;

	IF @debug IN (0,1)
	BEGIN
		PRINT '--> Removing terminated UPDATE_STATS_' + @schema_name + '_' + @table_name + '_XXX jobs'
		EXEC sp_executesql @sqlcmd;
		PRINT '-----------------------------------';
	END

	SET @i = @j;

	IF @debug = 99
		SELECT 'DEBUG --  @i = @j : ' + CAST(@j AS VARCHAR(15));
	
END

-- Disable trace flag 7471 (only used as part of the update stats maintenance)
PRINT '-----------------------------------';
PRINT '--> Disable trace flag 7471';
SET @sqlcmd = N'DBCC TRACEOFF(7471, -1);' + CHAR(13)
EXEC sp_executesql @sqlcmd;

SET @end_date = CURRENT_TIMESTAMP;

-- Display execution time in seconds
SELECT DATEDIFF(SECOND, @start_date, @end_date) AS duration_S;
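
As a quick sanity check after a run (this is not part of the script itself), I also verify that trace flag 7471 did not stay enabled at the global level:

-- check the global status of trace flag 7471 once the script has disabled it
DBCC TRACESTATUS(7471, -1);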

 

We initially ran the test on a QA environment with 4 VCPUs and 22GB of RAM – pretty close to the production environment. We noticed that when we increased the number of parallel jobs above 3, we encountered RESOURCE_SEMAPHORE waits. This is because of the memory grants required for each update statistics command. Unfortunately, there was no chance here to increase the amount of memory to push the limit further, but we still noticed an improvement factor of 1.5 on average (still with a sample of 100 percent).
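
To see the phenomenon while the jobs are running, a query like this one, run from another session, shows the requests that are still waiting for their memory grant (a simple sketch):

-- sessions waiting for a memory grant (RESOURCE_SEMAPHORE)
SELECT session_id, requested_memory_kb, granted_memory_kb, wait_time_ms
FROM sys.dm_exec_query_memory_grants
WHERE grant_time IS NULL;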

blog 124 - 5 - update stats execution time

At this point I asked myself if we could rely only on the storage performance to update statistics. After all, we managed to reduce the execution time below the maximum maintenance window timeframe – fixed to 8 hours in our context.

Analyzing further the data distribution

Relying on the storage performance and the new SQL Server capabilities was a good thing for us, but I kept in mind that updating 94 statistics was probably not a smart idea because I was convinced that a big part of them was pretty useless. There is no easy way to verify it because we had a mix of stored procedures and ad-hoc statements from different applications that refer to this database (let’s say we also have super users who run queries directly from SSMS). So I decided to put the question differently: if we may not remove some auto-generated statistics, do we necessarily have to update all of them with FULLSCAN for this specific table? What about the data distribution of the columns covered by auto-generated statistics? In the context of dbo.bigtable, and regarding the number of rows, we may easily run into cardinality estimation issues if the data distribution is not correctly represented, especially in case of skewed data. Nevertheless, manually analyzing the histogram steps of each statistic may be cumbersome (a single manual check is sketched right after the quoted code below), and we decided to go through the stored procedures provided by Kimberly Tripp here. However, the version we got did not support analyzing columns not involved in an index, as mentioned below:

-- Considering for v2
--	, @slowanalyze	char(5) = 'FALSE'		
-- No index that exists with this column as the 
-- high-order element...analyze anyway. 
-- NOTE: This might require MULTIPLE table scans. 
-- I would NOT RECOMMEND THIS. I'm not even sure why I'm allowing this...
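
For reference, this is what a single manual check looks like (a sketch with a hypothetical statistic name); multiplied by 94 statistics, it quickly becomes cumbersome:

-- look at the histogram of one statistic on the big table
DBCC SHOW_STATISTICS ('dbo.bigtable', '_WA_Sys_00000001_00000000') WITH HISTOGRAM;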

 

We had to update a little bit of code to fix it, but don’t get me wrong, Kimberly’s recommendation still makes sense because in the background SQL Server orders the data of the concerned column to get a picture of the data distribution. Without any index on this column, analyzing the data distribution may be a very time- and resource-consuming task (including tempdb for sorting data), especially when the table becomes big in size, as illustrated by the following sample of code executed by SQL Server while updating statistics.

SELECT StatMan([SC0]) 
FROM 
(
	SELECT TOP 100 PERCENT [TransactionID] AS [SC0] 
	FROM [dbo].[bigTransactionHistory] WITH (READUNCOMMITTED)  
	ORDER BY [SC0] 
) AS _MS_UPDSTATS_TBL  OPTION (MAXDOP 16)

This was the case for my dbo.bigtable and I had to cut the operation into small pieces spread over several maintenance window timeframes.

So, we had to add to the sp_SQLskills_AnalyzeColumnSkew stored procedure the creation of a temporary index on the concerned column before analyzing the data. This compromise allowed us to gain a lot of time and was validated on the QA environment.
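
The idea of the change, in its simplest form, looks like this (a sketch with a hypothetical column and index name, not the exact code we added to the procedure):

-- throwaway index so that the skew analysis does not have to sort the whole table each time
CREATE NONCLUSTERED INDEX ix_tmp_skew_col1 ON dbo.bigtable (col1);

-- ... run the skew analysis (sp_SQLskills_AnalyzeColumnSkew) on col1 here ...

DROP INDEX ix_tmp_skew_col1 ON dbo.bigtable;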

We tried different combinations of parameters and finally ended up using the following ones (@difference = 10000, @numofsteps = 5), based on our experience with query performance and cardinality estimation issues on this specific table. We got an interesting output, as shown below:

blog 124 - 6 - update stats execution time

The number of columns with skewed data is low compared to those with uniformly distributed data. The [Partial skew] value means we only got differences from the [Nb steps (factor)] column => 2.5 by default.

That was pretty good news, because for columns with no skewed data we could consider updating them either with the default sampling rate used by SQL Server (a nonlinear algorithm under the control of the optimizer) or by specifying a custom sampling rate value to make sure we do not scan too much data. In my context, mixing parallel jobs and a default sampling rate for column statistics with no skewed data seems to be good enough (no query plan regression at this moment), but we will probably have to change it in the future. Anyway, we managed to reduce the execution time to one hour, as shown below:

blog 124 - 7 - update stats execution time

For columns with skewed data we are still keeping the FULLSCAN option, and we plan to investigate filtered statistics to further enhance cardinality estimations in the future.
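
For the record, a filtered statistic is nothing more than this (a sketch with a hypothetical column and predicate, still to be validated against our real skewed values):

-- dedicated histogram for the skewed range only
CREATE STATISTICS st_bigtable_col1_skew
ON dbo.bigtable (col1)
WHERE col1 >= 1000000
WITH FULLSCAN;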

The bottom line of this story is that I would probably never have gone through all the aforementioned options if the customer had accepted my first proposal (who knows?). Technically and humanly speaking, it was a good learning experience. Obviously, you may think it was not the best or the simplest approach, and you would be right. Indeed, there are drawbacks here, such as the overhead and complexity of writing custom scripts and maintaining individual statistics over time. But from my experience in the consulting world, things are not often black or white, and we also have to compose with plenty of variations in the customer’s context to achieve not necessarily what we consider the best, but a satisfactory outcome for the business.

Happy new year 2018!

 

Cet article Experiencing update statistics on a big table with circuitous ways est apparu en premier sur Blog dbi services.

GoldenGate 12.2.0.2 installation in a Grid infrastructure12cR2


This post is a demonstration of how to deploy a fully supported Oracle GoldenGate instance as a cluster resource.

We are going to use the Oracle Grid Infrastructure Agents, called XAG, to integrate the GoldenGate instance within our cluster as a cluster-managed resource. This tool is provided by Oracle in the binaries with release 12c and later, but it is recommended to download and use the latest standalone version (available here).

XAG provides an interface, “agctl”, which enables us to interact with GoldenGate resources the same way “srvctl” does for Oracle database resources within the Oracle cluster. This tool supports various products that can be integrated into the Oracle cluster, like Tomcat, MySQL and so on.

Here is what can be done with XAG:

[oracle@rac001 ~]$ agctl
Manages Apache Tomcat, Apache Webserver, Goldengate, JDE Enterprise Server, MySQL Server, Peoplesoft App Server, Peoplesoft Batch Server, Peoplesoft PIA Server, Siebel Gateway, Siebel Server, WebLogic Administration Server as Oracle Clusterware Resources

Usage: agctl <verb> <object> [<options>]
   verbs: add|check|config|disable|enable|modify|query|relocate|remove|start|status|stop
   objects: apache_tomcat|apache_webserver|goldengate|jde_enterprise_server|mysql_server|peoplesoft_app_server|peoplesoft_batch_server|peoplesoft_pia_server|siebel_gateway|siebel_server|weblogic_admin_server
For detailed help on each verb and object and its options use:
   agctl <verb> --help or
   agctl <verb> <object> --help

We are going to deploy this GoldenGate HA configuration on top of our existing 12.2 Grid Infrastructure. For this demo I chose to deploy the whole GoldenGate instance and its binaries to a dedicated ACFS mount point, to be sure my GoldenGate resources have access to their data regardless of the node where the GoldenGate instance is running. It could also be DBFS or NFS.

Step 1 – Create an ACFS mount point called /acfsgg

[oracle@rac001 Disk1]$ /u01/app/12.2.0/grid/bin/asmcmd volcreate -G DGFRA -s 4G --width 1M --column 8 ACFSGG

[root@rac001 ~]# /sbin/mkfs -t acfs /dev/asm/acfsgg-215
[root@rac001 ~]# /u01/app/12.2.0/grid/bin/srvctl add filesystem -d /dev/asm/acfsgg-215 -m /acfsgg -u oracle -fstype ACFS -autostart ALWAYS
[root@rac001 ~]# /u01/app/12.2.0/grid/bin/srvctl start filesystem -d /dev/asm/acfsgg-215
[root@rac001 ~]# chown oracle:oinstall /acfsgg
[root@rac001 ~]# chmod 775 /acfsgg

[oracle@rac001 ~]$ ./execall "df -hT /acfsgg"
rac001: Filesystem          Type  Size  Used Avail Use% Mounted on
rac001: /dev/asm/acfsgg-215 acfs  4.0G   85M  4.0G   3% /acfsgg
rac002: Filesystem          Type  Size  Used Avail Use% Mounted on
rac002: /dev/asm/acfsgg-215 acfs  4.0G   85M  4.0G   3% /acfsgg

 

Step 2 – install GoldenGate binaries in /acfsgg

cd /u01/install/
unzip 122022_fbo_ggs_Linux_x64_shiphome.zip
cd fbo_ggs_Linux_x64_shiphome/Disk1

[oracle@rac001 Disk1]$ ./runInstaller -silent -nowait -showProgress \
INSTALL_OPTION=ORA12c \
SOFTWARE_LOCATION=/acfsgg \
START_MANAGER=false \
MANAGER_PORT= \
DATABASE_LOCATION= \
INVENTORY_LOCATION=/u01/app/oraInventory \
UNIX_GROUP_NAME=oinstall

[...]
Finish Setup successful.
The installation of Oracle GoldenGate Core was successful.
Please check '/u01/app/oraInventory/logs/silentInstall2018-01-04_06-37-02AM.log' for more details.
Successfully Setup Software.

 

Step 3 – install the last version of XAG (recommended by Oracle) in all nodes

# creation of the XAG home on all nodes as root
[root@rac001 ~]# mkdir /u01/app/xag
[root@rac001 ~]# chown oracle. /u01/app/xag

# back as oracle
[oracle@rac001 ~]$ cd /u01/install/
[oracle@rac001 install]$ unzip xagpack.zip
[oracle@rac001 xag]$ cd xag

[oracle@rac001 xag]$ export XAG_HOME=/u01/app/xag

[oracle@rac001 xag]$ ./xagsetup.sh --install --directory $XAG_HOME --all_nodes
Installing Oracle Grid Infrastructure Agents on: rac001
Installing Oracle Grid Infrastructure Agents on: rac002
Done.

[oracle@rac001 xag]$ /u01/app/12.2.0/grid/bin/agctl query releaseversion
The Oracle Grid Infrastructure Agents release version is 7.1.0

[oracle@rac001 xag]$ $XAG_HOME/bin/agctl query releaseversion
The Oracle Grid Infrastructure Agents release version is 8.1.0

 

Step 4 – configure GoldenGate instance and the manager

[oracle@rac002 acfsgg]$ export ORACLE_HOME=/u01/app/oracle/product/12.2.0/dbhome_1
[oracle@rac002 acfsgg]$ export LD_LIBRARY_PATH=$ORACLE_HOME/lib
[oracle@rac002 acfsgg]$ ./ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 12.2.0.2.2 OGGCORE_12.2.0.2.0_PLATFORMS_170630.0419_FBO
Linux, x64, 64bit (optimized), Oracle 12c on Jun 30 2017 16:12:28
Operating system character set identified as UTF-8.

Copyright (C) 1995, 2017, Oracle and/or its affiliates. All rights reserved.


GGSCI (rac001) 1> CREATE SUBDIRS

Creating subdirectories under current directory /acfsgg

Parameter files                /acfsgg/dirprm: created
Report files                   /acfsgg/dirrpt: created
Checkpoint files               /acfsgg/dirchk: created
Process status files           /acfsgg/dirpcs: created
SQL script files               /acfsgg/dirsql: created
Database definitions files     /acfsgg/dirdef: created
Extract data files             /acfsgg/dirdat: created
Temporary files                /acfsgg/dirtmp: created
Credential store files         /acfsgg/dircrd: created
Masterkey wallet files         /acfsgg/dirwlt: created
Dump files                     /acfsgg/dirdmp: created

GGSCI (rac001) 2> edit params mgr

PORT 7809
AUTORESTART ER *, RETRIES 3, WAITMINUTES 3, RESETMINUTES 10
AUTOSTART ER *
PURGEOLDEXTRACTS ./dirdat/*, USECHECKPOINTS
LAGCRITICALMINUTES 5
LAGREPORTMINUTES 60
LAGINFOMINUTES 0

 

Step 5 – Add the GoldenGate resource to the cluster

[root@rac001 ~]# export XAG_HOME=/u01/app/xag
[root@rac001 ~]# $XAG_HOME/bin/agctl add goldengate GGAPP01 --gg_home /acfsgg --instance_type source --oracle_home /u01/app/oracle/product/12.2.0/dbhome_1 --ip 192.168.179.15 --network 1 --user oracle --filesystems ora.dgfra.acfsgg.acfs
Calling POSIX::isdigit() is deprecated at /u01/app/xag/agcommon.pm line 809.

## all parameters in above command are required except the FS specification

[oracle@rac001 ~]$ $XAG_HOME/bin/agctl start goldengate GGAPP01

[oracle@rac001 ~]$ $XAG_HOME/bin/agctl status goldengate GGAPP01
Goldengate  instance 'GGAPP01' is running on rac001

[oracle@rac001 ~]$ /u01/app/12.2.0/grid/bin/crsctl stat res -t
[...]
Cluster Resources
--------------------------------------------------------------------------------
[...]
xag.GGAPP01-vip.vip
      1        ONLINE  ONLINE       rac001                   STABLE
xag.GGAPP01.goldengate
      1        ONLINE  ONLINE       rac001                   STABLE
--------------------------------------------------------------------------------

At this stage we have an operational GoldenGate fully managed by the cluster and ready to be configured for replication.

In case of failure of the node running the Manager, the cluster will restart the Manager process on the first remaining available node. As soon as the Manager starts, it will automatically restart all Extract and Replicat processes, as we instructed it to in the Manager configuration file (AUTOSTART parameter). The restart of Extract and Replicat processes is also done by the Manager and not by the cluster (so far with this release).

We can also register Extract and Replicat processes with the cluster resource, but in this case they will only be monitored by the cluster, which will update their states to ONLINE, OFFLINE, INTERMEDIATE or UNKNOWN depending on the scenario.
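
For example, assuming an Extract EXT1 and a Replicat REP1 are already configured, their registration could look like this (a sketch based on the agctl options as I understand them, to be checked against the XAG documentation):

[oracle@rac001 ~]$ $XAG_HOME/bin/agctl modify goldengate GGAPP01 --monitor_extracts EXT1 --monitor_replicats REP1
[oracle@rac001 ~]$ $XAG_HOME/bin/agctl config goldengate GGAPP01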

I hope this helps, and please do not hesitate to contact us if you have any questions or require further information.

 

Cet article GoldenGate 12.2.0.2 installation in a Grid infrastructure12cR2 est apparu en premier sur Blog dbi services.

SQL Server 2017: TRIM not only a replacement of RTRIM and LTRIM


Last month, I presented at the IT-Tage 2017 in Frankfurt am Main a session giving an overview of SQL Server 2017.
During my session I did a demo of one of the new string T-SQL commands: TRIM.
My demo was very simple, but I also got 2 very good questions.

Before I write about these questions, I will show you what TRIM is.
This function is a replacement for RTRIM and LTRIM, deleting all space characters before and after a string.

SELECT Name, RTRIM(LTRIM(Name)) as RLTrim_name, TRIM(Name) as Trim_name 
FROM [GoT].[dbo].[character-deaths2] Where Name like '%Aegon Targaryen%'

trim01

As you can see in the result of the query, TRIM() does the same as RTRIM(LTRIM()).

But TRIM also has a second usage: you can give it a set of characters to be deleted, and this use case is very useful.

I made an example with this query to delete all A, n & e from my previous query:

SELECT Name, TRIM('A,n,e' FROM Name)  as Trim_name, TRIM('A,n,e, ' FROM Name)  as Trim_name2
FROM [GoT].[dbo].[character-deaths2] Where Name like '%Aegon Targaryen%'

trim02

In the result of the Trim_name column, I don’t have what I expected… The problem is that all name values begin or end with several space characters.
If you look at the Trim_name2 column and its associated query, you will see that I added a space character to the list of characters.
The first interesting question was:

Can I use the char(xx) to define the character to be deleted?

The ASCII code for a Space is 32 (20 in Hex). I test my query with char(32) like this:

SELECT Name, TRIM(char(32) FROM Name)  as Trim_name 
FROM [GoT].[dbo].[character-deaths2] Where Name like '%Aegon Targaryen%'

I run the query…

trim03
Yes, it works! I can see that my result has no space characters.
Now, I try with another character: char(65), which is A

SELECT Name, TRIM(char(32) FROM Name) as Trim_name , TRIM(char(65) FROM Name)  as Trim_name2 
FROM [GoT].[dbo].[character-deaths2] Where Name like '%Aegon Targaryen%'

trim04

The result is what I expected with a deleted A at the beginning of the string.
My last test is with both characters together in the same Trim function like this:

SELECT Name, TRIM(char(32),char(65) FROM Name) as Trim_name 
FROM [GoT].[dbo].[character-deaths2] Where Name like '%Aegon Targaryen%'

trim05

As you can see, I get an error. Incorrect syntax near the keyword ‘FROM’.

This is perhaps not the right way, so I try another syntax with ‘char(32),char(65)':

SELECT Name, TRIM('char(32),char(65)' FROM Name) as Trim_name

FROM [GoT].[dbo].[character-deaths2] Where Name like '%Aegon Targaryen%'

trim06

The result is very strange. The character A at the beginning of the string is deleted but not the space character…

This does not work! :oops:

Let’s go to the second question
The second question was:

Can I use a variable?

My first test with a variable is very simple, with the same character set that I use in my demo:

DECLARE @characters nvarchar(10);
SET @characters= 'A,n,e, '
SELECT Name, TRIM(@characters FROM Name)  as Trim_name FROM [GoT].[dbo].[character-deaths2] Where Name like '%Aegon Targaryen%'

trim07
It’s working fine.
Now, I will try with the ASCII code.

First I try with one character:

DECLARE @characters nvarchar(10);
SET @characters= char(32)
SELECT Name, TRIM(@characters FROM Name)  as Trim_name FROM [GoT].[dbo].[character-deaths2] Where Name like '%Aegon Targaryen%'

trim11

I try with several characters.

DECLARE @characters nvarchar(10);
SET @characters= 'char(32),char(65)'
SELECT Name, TRIM(@characters FROM Name)  as Trim_name FROM [GoT].[dbo].[character-deaths2] Where Name like '%Aegon Targaryen%'

trim08
As you can see, the variable with several characters in ASCII code does not work.

Summary

To finish, I summarize all my tests in a little report:

Direct characters set: ok
Space character in ASCII code: ok
One character in ASCII code: ok
Multiple characters in ASCII code: nok
Variable with several characters: ok
Variable with one character in ASCII code: ok
Variable with multiple characters in ASCII code: nok

 

 

Cet article SQL Server 2017: TRIM not only a replacement of RTRIM and LTRIM est apparu en premier sur Blog dbi services.

GDPR compliant by installing software editors tools?


GDPR goals and scope

In a few months (25 May 2018) the EU General Data Protection Regulation (GDPR) will be in force and will replace the Data Protection Directive 95/46/EC. Its goals are to harmonize data privacy laws across Europe, to protect and empower all EU citizens’ data privacy and to reshape the way organizations across the region approach data privacy. You can find all information related to the GDPR on https://www.eugdpr.org and the official PDF of the Regulation (EU) 2016/679 on https://gdpr-info.eu/

GDPR 25th of May 2018

GDPR

 

The biggest change of the GDPR compared to the Data Protection Directive 95/46/EC is perhaps the extended jurisdiction, as it applies to all companies processing the personal data of data subjects residing in the Union, regardless of the company’s location. But other key points such as penalties, consent, breach notification, right to access, right to be forgotten, data portability, privacy by design and data protection officers have been added to this Regulation.

Consequences of non compliance with GDPR

From 25 May 2018 on, non-compliant organizations will face heavy penalties in terms of fines and reputation. Indeed, according to the Regulation, non-compliant organizations can be fined up to 4% of annual global turnover or €20 million for breaching the GDPR. However, there is no minimum fine or even an automatic fine in case of violation. To decide whether to impose a fine and its amount, the following items can be taken into consideration: the nature, severity and duration of the violation, the number of persons impacted, the impact on those persons, the measures taken to mitigate the damage, whether it is a first or subsequent breach, and finally the cooperation with the authority to remedy the violation. There is currently no clear procedure for a foreign authority to collect a fine imposed on a Swiss company without a presence in the EU.

What about Switzerland, UK and other countries ?

The impact of the GDPR is worldwide since it doesn’t only affect EU organizations but all companies processing and holding the personal data of data subjects residing in the European Union, regardless of the company’s location. This means that all countries (even Switzerland and the UK) are affected. You can find the official information regarding the GDPR provided by the Swiss Confederation on the following link and the current Federal Act on Data Protection here (note: the Swiss Data Protection Act, which is currently under revision, will incorporate key regulations similar to the GDPR). You can find the status of the UK Data Protection Bill on the following link.

GDPR articles and usage by software editors

Personal data definition

According to the GDPR the personal data definition is the following:
Any information related to a natural person or ‘Data Subject’, that can be used to directly or indirectly identify the person. It can be anything from a name, a photo, an email address, bank details, posts on social networking websites, medical information, or a computer IP address.

GDPR Content

To protect these personal data, the GDPR has 99 articles divided into 11 chapters. Two of these articles, article 25 (Data protection by design and by default) and article 32 (Security of processing), are usually put forward by software vendors: Oracle, Microsoft, IBM. These vendors usually use these articles to promote encryption and anonymization tools, which can make sense depending on the personal data hosted, as the GDPR requires an adapted approach depending on the nature and the scope of the personal data impacted. Indeed, encryption at rest may be appropriate depending on the circumstances, but it is not mandated by the GDPR in every instance.

Conclusion

In other words, technical tools can help you comply with a small subset of the Regulation, but the GDPR is mostly about processes related to personal data identification and treatment. You will probably have a better understanding of what I mean by having a look at the two checklists (one for data controllers and one for data processors) provided by the ICO (UK Information Commissioner’s Office) on the following link. After having completed the checklist you will get an overall rating, a list of suggested actions and guidance.

 

The article GDPR compliant by installing software editors tools? first appeared on the dbi services blog.


Keep your orapw password file secure


This is a small demo I did when I found a database password file (orapw) lying around in /tmp with -rw-rw-rw- permissions, to show why this is a bad idea. People think that the orapw file only contains hashes to validate a given password, and forget that it can be used to connect to a remote database without a password.

I can easily imagine why the orapw file was there in /tmp. To build a standby database, you need to copy the password file to the standby server. If you don’t have direct access to the oracle user, but only sudo access for ‘security reasons’, you can’t scp easily. So you copy the file to /tmp, make it readable by all users, and scp it with your own user.

In this demo I don’t even have access to the host. I only have access to connect to a PDB with the SCOTT user, created with utlsampl.sql, with one additional privilege: read access on $ORACLE_HOME/dbs:

SQL> connect sys/oracle@//192.168.56.122/PDB1 as sysdba

Connected.

SQL> create or replace directory DBS as '/u01/app/oracle/product/12.2.0/dbhome_1/dbs';

Directory DBS created.

SQL> grant read on directory DBS to SCOTT;

Grant succeeded.
People tend to grant many privileges, and think that read access on a directory which is supposed to contain only configuration files is harmless. A quick way to check who has been granted access to directory objects is sketched just below; then let’s see what you can do from another server.
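This is a sketch only, to be run by a DBA; it simply joins the standard dictionary views to list who can read or write which directory objects:

SELECT d.directory_name, d.directory_path, p.grantee, p.privilege
FROM   dba_directories d
JOIN   dba_tab_privs   p ON p.table_name = d.directory_name
ORDER  BY d.directory_name, p.grantee;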

Get the orapw file from a remote connection

I connect with SCOTT which can read from ORACLE_HOME/dbs:

SQL> connect scott/tiger@//192.168.56.122/PDB1

Connected.

SQL> show user

USER is "SCOTT"

SQL> select * from all_directories;

OWNER DIRECTORY_NAME DIRECTORY_PATH ORIGIN_CON_ID

----- -------------- -------------- -------------

SYS DBS /u01/app/oracle/product/12.2.0/dbhome_1/dbs 4
I create a table to read this file (other possibilities: utl_file, external tables, …):

SQL> create table DEMO ( b blob );

Table DEMO created.

SQL> insert into demo values ( bfilename('DBS','orapwCDB1') );

1 row inserted.
I’m on another server with the same version of Oracle Database software installed.

I use sqlcl to retrieve the server file to my client:

sqlcl -s scott/tiger@//192.168.56.120/PDB1 <<EOF | xxd -p -r > $ORACLE_HOME/dbs/orapwCDB1
set pages 0 lin 17000 long 1000000000 longc 16384
select * from DEMO;
exit
EOF
This (documented by Laurent Schneider) uses sqlcl to display the BLOB column as hexadecimal code and xxd (installed with vim-common) to revert it to binary.

So, on my server I have a copy of the database password file for the database I want to steal:

[oracle@VM122 ~]$ strings /u01/app/oracle/product/12.2.0/dbhome_1/dbs/orapwCDB1

ORACLE Remote Password file

X)l)|

SYSDG

+933k\

SYSBACKUP

f ts6 $9

SYSKM
Pull

A nice feature of 12c is the ability to pull backups from a service. With this, it is the destination that connects to the source (I have diagrams to explain it here). It is an easy alternative to RMAN DUPLICATE (see MOS Doc ID 2283978.1 Creating a Physical Standby database using RMAN restore from service). And one difference is that you don’t have to provide the password:

I prepare a small init.ora and directory for the datafiles

echo "db_name=CDB1" > $ORACLE_HOME/dbs/initCDB1.ora

mkdir -p /u01/oradata/CDB1
I’m still on my server with the copy of the remote orapw file and a network access to the source database and I just restore it, without the need for a password:

RMAN> connect target /

connected to target database (not started)
I start a local instance:

RMAN> startup nomount force

Oracle instance started

Total System Global Area 859832320 bytes

Fixed Size 8798552 bytes

Variable Size 784338600 bytes

Database Buffers 58720256 bytes

Redo Buffers 7974912 bytes
I restore the controlfile:

RMAN> restore controlfile from service '//192.168.56.122/CDB1';

Starting restore at 05-JAN-18

using target database control file instead of recovery catalog

allocated channel: ORA_DISK_1

channel ORA_DISK_1: SID=262 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore

channel ORA_DISK_1: using network backup set from service //192.168.56.122/CDB1

channel ORA_DISK_1: restoring control file

channel ORA_DISK_1: restore complete, elapsed time: 00:00:02

output file name=/u01/oradata/CDB1/control01.ctl

output file name=/u01/fast_recovery_area/CDB1/control02.ctl

Finished restore at 05-JAN-18
That’s the interesting part, because it has to connect, at least as SYSOPER, to the source database, but I didn’t provide any password.

I mount this controlfile locally:

RMAN> alter database mount;

Statement processed

released channel: ORA_DISK_1
And now it is easy to pull the whole database (the CDB with all its PDBs) to my local server:

RMAN> restore database from service '//192.168.56.122/CDB1';

Starting restore at 05-JAN-18

Starting implicit crosscheck backup at 05-JAN-18

allocated channel: ORA_DISK_1

channel ORA_DISK_1: SID=262 device type=DISK

Crosschecked 6 objects

Finished implicit crosscheck backup at 05-JAN-18

Starting implicit crosscheck copy at 05-JAN-18

using channel ORA_DISK_1

Finished implicit crosscheck copy at 05-JAN-18

searching for all files in the recovery area

cataloging files...

cataloging done

List of Cataloged Files

=======================

File Name: /u01/fast_recovery_area/CDB1/autobackup/2018_01_04/o1_mf_s_964524009_f4vzyt59_.bkp

File Name: /u01/fast_recovery_area/CDB1/archivelog/2018_01_04/o1_mf_1_15_f4w5vv19_.arc

File Name: /u01/fast_recovery_area/CDB1/archivelog/2018_01_04/o1_mf_1_16_f4wmm0t8_.arc

File Name: /u01/fast_recovery_area/CDB1/archivelog/2018_01_04/o1_mf_1_14_f4vzjdl1_.arc

using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile backup set restore

channel ORA_DISK_1: using network backup set from service //192.168.56.122/CDB1

channel ORA_DISK_1: specifying datafile(s) to restore from backup set

channel ORA_DISK_1: restoring datafile 00001 to /u01/oradata/CDB1/system01.dbf

channel ORA_DISK_1: restore complete, elapsed time: 00:00:16

channel ORA_DISK_1: starting datafile backup set restore

...
So what?

This is not an issue and is totally expected. In a Data Guard configuration, the primary and standby databases have to communicate with each other and therefore need passwordless authentication. This is done with the password file, and this is the reason why you need to copy it rather than just create another one with the same passwords.

So, there is more in the file than just a hash of the password (which is required to validate a password); it probably also includes a key (randomly generated when you create the password file) used for passwordless authentication.

So be careful and do not give read access to the orapw files. You must secure them in the same way as an ssh key or an encryption wallet, and this includes:

  • Do not leave a copy of the orapw file in a shared location
  • Be careful with grants on directories, even in READ
  • Do not grant CREATE ANY DIRECTORY except for a PDB with PATH_PREFIX lockdown
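On the first point, a minimal sketch (assuming the default file name orapw$ORACLE_SID under $ORACLE_HOME/dbs) to keep the file private and to spot world-readable copies left behind:

chmod 600 $ORACLE_HOME/dbs/orapw$ORACLE_SID
ls -l $ORACLE_HOME/dbs/orapw$ORACLE_SID
# look for world-readable password file copies lying around, e.g. in /tmp
find /tmp -name 'orapw*' -perm -004 -ls 2>/dev/null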
 

The article Keep your orapw password file secure first appeared on the dbi services blog.

Running the Oracle Client on a Raspberry Pi


What are the possibilities to use a Raspberry Pi computer as an Oracle client?

IMG_1653

Among other things, I’ll show in this blog a possibility to run the fat/thick Oracle Client on a Raspberry Pi!

REMARK: All examples below were made with a Raspberry Pi 3 and the OS Raspbian, which can be downloaded from

https://www.raspberrypi.org/downloads

First of all what’s possible with Java and Thin Clients?
Running the Java programs sqldeveloper or its command-line counterpart sqlcl is of course possible on a Raspberry Pi:

1.) sqldeveloper

The prerequisite for running sqldeveloper 17.4 (the current version as of writing this blog) is an installed JDK 1.8. As I had that installed by default, I could run sqldeveloper as documented. I.e.


pi@raspberrypi:~ $ sudo apt list --installed | grep jdk
oracle-java8-jdk/stable,now 8u65 armhf [installed]
pi@raspberrypi:~ $ cd Oracle
pi@raspberrypi:~/Oracle $ unzip sqldeveloper-17.4.0.355.2349-no-jre.zip
...
pi@raspberrypi:~/Oracle $ cd sqldeveloper/
pi@raspberrypi:~/Oracle/sqldeveloper $ ./sqldeveloper.sh
 
Oracle SQL Developer
Copyright (c) 1997, 2017, Oracle and/or its affiliates. All rights reserved.

sqldeveloper

2.) sqlcl

Installing sqlcl is as easy as installing sqldeveloper:


pi@raspberrypi:~/Oracle $ unzip sqlcl-17.4.0.354.2224-no-jre.zip
...
pi@raspberrypi:~/Oracle $ alias sqlcl='/home/pi/Oracle/sqlcl/bin/sql'
pi@raspberrypi:~/Oracle $ sqlcl cbleile/cbleile@192.168.178.65:1521/prem122.localdomain
 
SQLcl: Release 17.4.0 Production on Tue Jan 09 14:28:56 2018
 
Copyright (c) 1982, 2018, Oracle. All rights reserved.
 
Last Successful login time: Thu Jan 04 2018 22:15:36 +01:00
 
Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
 
SQL> set sqlformat ansiconsole
SQL> select table_name, tablespace_name from tabs;
TABLE_NAME TABLESPACE_NAME
T1 USERS
 
SQL>

3.) Running Java-Code using the JDBC Thin driver

Running Java-Code with access to Oracle is easy as well. Just download the JDBC Thin Driver ojdbc8.jar and put it somewhere on the Pi. In the example below I actually do use the ojdbc8.jar from sqlcl:


pi@raspberrypi:~/Oracle/Java $ more Conn.java
import java.sql.*;

class Conn {
  public static void main (String[] args) throws Exception {
    Class.forName("oracle.jdbc.OracleDriver");

    Connection conn = DriverManager.getConnection("jdbc:oracle:thin:@//192.168.178.65:1521/prem122.localdomain", "cbleile", "cbleile");
    // @//machineName:port/SID, userid, password
    try {
      Statement stmt = conn.createStatement();
      try {
        ResultSet rset = stmt.executeQuery("select BANNER from SYS.V_$VERSION where BANNER like '%Enterprise Edition%'");
        try {
          while (rset.next())
            System.out.println("Connected to " + rset.getString(1));
        } finally {
          try { rset.close(); } catch (Exception ignore) {}
        }
      } finally {
        try { stmt.close(); } catch (Exception ignore) {}
      }
    } finally {
      try { conn.close(); } catch (Exception ignore) {}
    }
  }
}
pi@raspberrypi:~/Oracle/Java $ javac Conn.java
pi@raspberrypi:~/Oracle/Java $ java -cp ../sqlcl/lib/ojdbc8.jar:. Conn
Connected to Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
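For reference, a more idiomatic variant of the same program using try-with-resources (Java 7+). It is a sketch only: it assumes the same connect string and credentials as above and was not part of the original test on the Pi:

import java.sql.*;

class Conn2 {
  public static void main(String[] args) throws Exception {
    String url = "jdbc:oracle:thin:@//192.168.178.65:1521/prem122.localdomain";
    // the driver is auto-registered with JDBC 4, so no Class.forName is needed
    try (Connection conn = DriverManager.getConnection(url, "cbleile", "cbleile");
         Statement stmt = conn.createStatement();
         ResultSet rset = stmt.executeQuery(
             "select BANNER from SYS.V_$VERSION where BANNER like '%Enterprise Edition%'")) {
      while (rset.next())
        System.out.println("Connected to " + rset.getString(1));
    }
  }
}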

4.) Running a fat Client on the Raspberry Pi

Is it actually possible to run the normal Oracle thick/fat client on the Pi? As the Oracle client binaries are not available for the ARM processor it seems impossible, but by emulating the x86 platform you actually can do it.

The easiest way to run x86 code on a Raspberry Pi is to buy the product ExaGear Desktop from Eltechs ( https://eltechs.com ) for around 20 Euros (they usually sell it for 16 Euros).
REMARK: You can actually also install the QEMU image from https://github.com/AlbrechtL/RPi-QEMU-x86-wine , but that takes far more effort.

What is ExaGear? ExaGear is an emulator (i.e. a virtual machine) which emulates an x86 Debian Linux on your Raspberry Pi. After downloading and unzipping ExaGear, it is easily installed on the Pi with just


$ sudo ./install-exagear.sh

Afterwards you can start it with the command exagear:


pi@raspberrypi:~ $ uname -a
Linux raspberrypi 4.9.59-v7+ #1047 SMP Sun Oct 29 12:19:23 GMT 2017 armv7l GNU/Linux
pi@raspberrypi:~ $ exagear
Starting /bin/bash in the guest image /opt/exagear/images/debian-8
pi@raspberrypi:~ $ uname -a
Linux raspberrypi 4.9.59-v7+ #1047 SMP Sun Oct 29 12:19:23 GMT 2017 i686 GNU/Linux
pi@raspberrypi:~ $ arch
i686

I am now inside the x86 world. I.e. I can use this guest shell as if it were running on an x86 machine.
First I update the repositories:


pi@raspberrypi:~ $ sudo apt-get update
...

The next step is to download the Oracle 32-bit 12.2 client software to the Raspberry Pi. To be able to install the Oracle software, a couple of libraries and programs need to be installed inside exagear:


pi@raspberrypi:~/Oracle/Downloads/client32 $ sudo apt-get install libxrender1 libxtst6 libxi6 libaio1 make gcc gawk

To avoid some errors I also had to create 3 symbolic links:


pi@raspberrypi:~/Oracle/Downloads/client32 $ sudo ln -s /usr/lib/i386-linux-gnu/libpthread_nonshared.a /usr/lib/libpthread_nonshared.a
pi@raspberrypi:~/Oracle/Downloads/client32 $ sudo ln -s /usr/lib/i386-linux-gnu/libc_nonshared.a /usr/lib/libc_nonshared.a
pi@raspberrypi:~/Oracle/Downloads/client32 $ sudo ln -s /usr/bin/awk /bin/awk

At that point I could install the Oracle Client software as usual:


pi@raspberrypi:~/Oracle/Downloads/client32 $ ./runInstaller
Starting Oracle Universal Installer...
 
Checking Temp space: must be greater than 415 MB. Actual 1522 MB Passed
Checking swap space: must be greater than 150 MB. Actual 828 MB Passed
Checking monitor: must be configured to display at least 256 colors
>>> Could not execute /usr/bin/xdpyinfo Failed <<<<
 
Some requirement checks failed. You must fulfill these requirements before
 
continuing with the installation,
 
Continue? (y/n) [n] y

xdpyinfo can be installed by installing the x11-utils on Debian, but it’s actually not necessary, so just continue by answering “y” at the prompt.

The rest is a normal Oracle client installation. Here are some screenshots:

[Screenshots of the runInstaller steps]

I created a small script to set the environment and was then able to run the client software:


pi@raspberrypi:~ $ more oraclient.sh
#!/bin/bash
 
export ORACLE_HOME=/home/pi/Oracle/app/pi/product/12.2.0/client_1
export PATH=$ORACLE_HOME/bin:$PATH
pi@raspberrypi:~ $ . ./oraclient.sh
pi@raspberrypi:~ $ sqlplus cbleile/cbleile@192.168.178.65:1521/prem122.localdomain
 
SQL*Plus: Release 12.2.0.1.0 Production on Tue Jan 9 16:04:36 2018
 
Copyright (c) 1982, 2016, Oracle. All rights reserved.
 
Last Successful login time: Thu Jan 04 2018 22:33:09 +01:00
 
Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
 
SQL> select count(*) from all_objects;
 
COUNT(*)
----------
19640
 
SQL>

Everything worked as expected. I actually haven’t found any issue with the available Oracle programs:


pi@raspberrypi:~ $ ls $ORACLE_HOME/bin
adapters expdp lcsscan orapki.bat statusnc
adrci expdpO linkshlib oraxml symfind
adrciO extjob lmsgen oraxsl sysresv
aqxmlctl extjobo loadjava orion tkprof
aqxmlctl.pl extproc loadpsp osdbagrp tkprofO
bndlchk extprocO loadpspO osh tnsping
coraenv genagtsh lxchknlb ott tnsping0
dbfs_client genclntsh lxegen owm trcasst
dbgeu_run_action.pl genclntst lxinst platform_common trcroute
dbhome genezi mkstore plshprof trcroute0
dbshut geneziO mkstore.bat plshprofO uidrvci
dbstart gennfgt ncomp proc uidrvciO
deploync gennttab netmgr rconfig umu
dg4pwd genoccish oerr relink unzip
dg4pwdO genorasdksh oerr.pl rman wrap
dgmgrl gensyslib ojvmjava rmanO wrc
diagsetup imp ojvmtc roohctl wrcO
diskmon.bin impO orabase schema xml
dropjava impdp orabaseconfig skgxpinfo xmlwf
echodo impdpO orabasehome sqlldr zip
eusm kgmgr oraenv sqlldrO
exp kgmgrO orajaxb sqlplus
expO lbuilder orapki srvctl
pi@raspberrypi:~ $

Just for fun I started a Data Guard observer on my Raspberry Pi and let it perform a fast-start failover followed by reinstating the previous primary DB:


pi@raspberrypi:~ $ dgmgrl
DGMGRL for Linux: Release 12.2.0.1.0 - Production on Tue Jan 9 17:08:31 2018
 
Copyright (c) 1982, 2017, Oracle and/or its affiliates. All rights reserved.
 
Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys/manager@tismeds1
Connected to "TISMEDS1"
Connected as SYSDBA.
DGMGRL> show configuration;
 
Configuration - TISMED
 
Protection Mode: MaxAvailability
Members:
TISMEDS1 - Primary database
TISMEDS2 - Physical standby database
 
Fast-Start Failover: DISABLED
 
Configuration Status:
SUCCESS (status updated 49 seconds ago)
 
DGMGRL> ENABLE FAST_START FAILOVER ;
Enabled.
DGMGRL> start observer;
[W000 01/09 17:09:16.20] FSFO target standby is TISMEDS2
[W000 01/09 17:09:20.18] Observer trace level is set to USER
[W000 01/09 17:09:20.19] Try to connect to the primary.
[W000 01/09 17:09:20.20] Try to connect to the primary TISMEDS1.
[W000 01/09 17:09:20.28] The standby TISMEDS2 is ready to be a FSFO target
[W000 01/09 17:09:22.29] Connection to the primary restored!
[W000 01/09 17:09:24.35] Disconnecting from database TISMEDS1.
[W000 01/09 17:10:32.84] Primary database cannot be reached.
[W000 01/09 17:10:49.29] Primary database cannot be reached.
[W000 01/09 17:10:49.30] Fast-Start Failover threshold has expired.
[W000 01/09 17:10:49.31] Try to connect to the standby.
[W000 01/09 17:10:49.32] Making a last connection attempt to primary database before proceeding with Fast-Start Failover.
[W000 01/09 17:10:49.33] Check if the standby is ready for failover.
[S002 01/09 17:10:50.03] Fast-Start Failover started...
 
17:10:50.04 Tuesday, January 09, 2018
Initiating Fast-Start Failover to database "TISMEDS2"...
[S002 01/09 17:10:50.05] Initiating Fast-start Failover.
Performing failover NOW, please wait...
Unable to connect to database using TISMEDS1
ORA-12543: TNS:destination host unreachable
 
Failover succeeded, new primary is "TISMEDS2"
17:10:52.79 Tuesday, January 09, 2018
[S002 01/09 17:10:52.80] Fast-Start Failover finished...
[W000 01/09 17:10:52.81] Failover succeeded. Restart pinging.
[W000 01/09 17:10:52.88] Primary database has changed to TISMEDS2.
[W000 01/09 17:10:52.91] Try to connect to the primary.
[W000 01/09 17:10:52.92] Try to connect to the primary TISMEDS2.
[W000 01/09 17:10:54.33] The standby TISMEDS1 needs to be reinstated
[W000 01/09 17:10:54.34] Try to connect to the new standby TISMEDS1.
[W000 01/09 17:10:54.35] Connection to the primary restored!
Unable to connect to database using TISMEDS1
ORA-12543: TNS:destination host unreachable
 
[W000 01/09 17:10:56.36] Disconnecting from database TISMEDS2.
[W000 01/09 17:11:24.84] Try to connect to the new standby TISMEDS1.
Unable to connect to database using TISMEDS1
ORA-12543: TNS:destination host unreachable
 
[W000 01/09 17:11:54.85] Try to connect to the new standby TISMEDS1.
Unable to connect to database using TISMEDS1
ORA-12541: TNS:no listener
 
[W000 01/09 17:12:24.17] Try to connect to the new standby TISMEDS1.
Unable to connect to database using TISMEDS1
ORA-12541: TNS:no listener
 
[W000 01/09 17:12:54.54] Try to connect to the new standby TISMEDS1.
[W000 01/09 17:12:57.58] Connection to the new standby restored!
[W000 01/09 17:12:57.63] Failed to ping the new standby.
[W000 01/09 17:12:58.64] Try to connect to the new standby TISMEDS1.
[W000 01/09 17:13:00.65] Connection to the new standby restored!
[W000 01/09 17:13:32.32] Try to connect to the primary TISMEDS2.
[W000 01/09 17:13:34.36] Connection to the primary restored!
[W000 01/09 17:13:35.37] Wait for new primary to be ready to reinstate.
[W000 01/09 17:13:36.38] New primary is now ready to reinstate.
[W000 01/09 17:13:36.38] Issuing REINSTATE command.
 
17:13:36.39 Tuesday, January 09, 2018
Initiating reinstatement for database "TISMEDS1"...
Reinstating database "TISMEDS1", please wait...
[W000 01/09 17:13:54.64] The standby TISMEDS1 is ready to be a FSFO target
Reinstatement of database "TISMEDS1" succeeded
17:13:56.24 Tuesday, January 09, 2018
[W000 01/09 17:13:56.65] Successfully reinstated database TISMEDS1.
[W000 01/09 17:13:57.70] The reinstatement of standby TISMEDS1 was just done

Summary: Is it possible to run an Oracle client on the Raspberry Pi? Yes, it is! Running native Java applications using JDBC Thin connections is not a problem at all. Running a fat Oracle Client is possible as well using x86 emulation software. Is this supported by Oracle? I assume that, like with any other non-Oracle VM solution, you would have to reproduce possible issues on a bare-metal x86 platform to be able to open a Service Request.

Anyway, if you plan to run an Oracle 18c XE DB at home (see e.g. here
https://ora-00001.blogspot.de/2017/10/oracle-xe-12c-becomes-oracle-xe-18c.html )
then you might consider running your client on a Raspberry Pi ;-)

 

The article Running the Oracle Client on a Raspberry Pi first appeared on the dbi services blog.

Spectre and Meltdown, Oracle Database, AWS, SLOB


Last year, I measured the CPU performance for an Oracle Database on several types of AWS instances. Just by curiosity, I’ve run the same test (SLOB cached reads) now that Amazon has applied all Spectre and Meltdown mitigation patches.

I must admit that I wanted to test this on the Oracle Cloud first. I updated an IaaS instance to the latest kernel, but the Oracle Unbreakable Enterprise Kernel does not include the Meltdown fix yet, and booting on the Red Hat Compatible Kernel quickly goes to a kernel panic, not finding the root LVM.

This is not a benchmark you can rely on to estimate the CPU usage overhead on your application. This test is not doing system calls (so the KPTI fix should have its minimal impact here). If your application is bound by system calls (network roundtrips, physical reads) the consequences can be worse. But in that case, you have a design problem that was just masked by hardware that is optimized, but insecure: a processor speculatively running code before checking whether it should.

Figures from last year: M4.xlarge: 4vCPU, 16GB RAM

M4 is hyper-threaded so with 2 Oracle processor licenses we can use 4 vCPU.
Here I was on Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz, 2 cores with 2 threads each.
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 1.0 13.1 0.00 5.46
DB CPU(s): 1.0 13.1 0.00 5.46
Logical read (blocks): 874,326.7 11,420,189.2

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 2.0 27.3 0.00 9.24
DB CPU(s): 2.0 27.2 0.00 9.22
Logical read (blocks): 1,540,116.9 21,047,307.6

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 4.0 54.6 0.00 14.46
DB CPU(s): 4.0 54.3 0.00 14.39
Logical read (blocks): 1,779,361.3 24,326,538.0

Jan. 2018 with Spectre and Meltdown mitigation:

Same CPU now with the latest RedHat kernel.

[ec2-user@ip-172-31-15-31 ~]$ cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-3.10.0-693.11.6.el7.x86_64 root=UUID=3e11801e-5277-4d87-be4c-0a9a61fbc3da ro console=ttyS0,115200n8 console=tty0 net.ifnames=0 crashkernel=auto LANG=en_US.UTF-8

Here is the LIOPS result for the same runs.
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 1.0 13.7 0.00 4.69
DB CPU(s): 1.0 13.7 0.00 4.69
Logical read (blocks): 808,954.0 11,048,988.1

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 2.0 27.3 0.00 8.00
DB CPU(s): 2.0 27.1 0.00 7.96
Logical read (blocks): 1,343,662.0 18,351,369.1

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 4.0 42.9 0.00 13.49
DB CPU(s): 4.0 42.5 0.00 13.37
Logical read (blocks): 1,684,204.6 18,106,823.6

Jan. 2018, with Spectre and Meltdown patches, but disabled IBRS, IBPB, KPTI

The RedHat kernel has options to disable Indirect Branch Restricted Speculation, Indirect Branch Prediction Barriers and Kernel Page Table Isolation

[ec2-user@ip-172-31-15-31 ~]$ cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-3.10.0-693.11.6.el7.x86_64 root=UUID=3e11801e-5277-4d87-be4c-0a9a61fbc3da ro console=ttyS0,115200n8 console=tty0 net.ifnames=0 crashkernel=auto LANG=en_US.UTF-8 nopti noibrs noibpb
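As an aside (this is an assumption about the RHEL 7 backported kernel, to be verified on your own system before relying on it), the same mitigations can also be inspected, and toggled at runtime, through debugfs rather than boot parameters:

# 1 = mitigation enabled, 0 = disabled (root required)
cat /sys/kernel/debug/x86/pti_enabled
cat /sys/kernel/debug/x86/ibrs_enabled
cat /sys/kernel/debug/x86/ibpb_enabled
# e.g. disable KPTI without a reboot
echo 0 > /sys/kernel/debug/x86/pti_enabled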

Here are the same runs after rebooting with nopti noibrs noibpb kernel options:
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 1.0 30.1 0.00 4.86
DB CPU(s): 1.0 29.8 0.00 4.80
Logical read (blocks): 861,138.5 25,937,061.0

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 2.0 27.3 0.00 8.00
DB CPU(s): 2.0 27.0 0.00 7.92
Logical read (blocks): 1,493,336.8 20,395,790.6

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 4.0 42.9 0.00 13.49
DB CPU(s): 4.0 42.4 0.00 13.34
Logical read (blocks): 1,760,218.4 18,911,346.0
Read IO requests: 33.5 360.2

Then with only KPTI disabled, but all Spectre mitigation enabled

Here only the page table isolation is disabled.

[ec2-user@ip-172-31-15-31 ~]$ cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-3.10.0-693.11.6.el7.x86_64 root=UUID=3e11801e-5277-4d87-be4c-0a9a61fbc3da ro console=ttyS0,115200n8 console=tty0 net.ifnames=0 crashkernel=auto LANG=en_US.UTF-8 nopti

Here are the same runs with only the nopti kernel option:
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 1.0 30.1 0.00 3.91
DB CPU(s): 1.0 29.8 0.00 3.87
Logical read (blocks): 873,451.2 26,303,984.2

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 2.0 23.1 0.00 7.60
DB CPU(s): 2.0 22.9 0.00 7.54
Logical read (blocks): 1,502,151.4 17,360,883.8

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 4.0 42.9 0.00 12.64
DB CPU(s): 4.0 42.4 0.00 12.50
Logical read (blocks): 1,764,293.0 18,954,682.3

Large pages

The previous tests were using small pages. I did a quick test with KPTI enabled and SGA using large pages:
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 1.0 30.1 0.00 4.85
DB CPU(s): 1.0 30.1 0.00 4.85
Logical read (blocks): 854,682.1 27,672,906.8

Here is the same but with KPTI disabled:
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 1.0 30.1 0.00 4.85
DB CPU(s): 1.0 30.1 0.00 4.85
Logical read (blocks): 920,129.9 27,672,906.8

So what?

This is just a test on a synthetic workload, nothing similar to a production database situation. However, those cached SLOB runs are doing what an optimized database application should do most of the time: read blocks from the buffer cache. At least this test is much better than the graphs without explanations, or the ‘SELECT 1’ tests, that I have seen these days on social media.

Some interesting food for thought in those numbers, by the way.

Now vs. last year: between 5% and 12% degradation, which is what people have generally reported these days. That looks high, but usually when we do database performance troubleshooting we are there to address queries with x10 to x100 CPU usage doing unnecessary work because of bad design or a suboptimal execution plan.

If you disable KPTI: the degradation is less than 1%, so that’s an easy way to get the same performance back if you are sure that you control all software running, at least temporarily before some database tuning is done.

If you disable KPTI, IBRS and IBPB: not better than disabling only KPTI. I have no explanation for that… It makes me wonder whether this predictive branching is always a good idea.

In all cases, if you are not allocating the SGA with large pages, then you should (a generic configuration sketch follows). The KPTI degradation is lower with large pages, which makes sense as the page table is smaller. And if you are not yet using large pages, the benefit will probably balance the KPTI degradation.
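For reference, a generic sketch of a large pages setup on Linux. The figures are assumptions sized for roughly a 12GB SGA, not values taken from this test instance:

# /etc/sysctl.conf : reserve 2MB huge pages for the SGA (6200 x 2MB, about 12.4GB)
vm.nr_hugepages = 6200
# /etc/security/limits.conf : allow the oracle user to lock that memory (values in KB)
oracle soft memlock 12800000
oracle hard memlock 12800000
# init.ora / spfile : fail at startup rather than silently falling back to small pages
use_large_pages=ONLY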

This is not a benchmark and your application may see a higher degradation if doing a lot of system calls. If you upgrade from an old kernel, you may even see an improvement thanks to other features compensating the mitigation ones.

 

The article Spectre and Meltdown, Oracle Database, AWS, SLOB first appeared on the dbi services blog.

Automatic Data Optimization Part I : Compression Tiering


Nowadays, data volumes are increasing more and more, and some of the challenges we face are how to reduce storage costs and how to improve performance. With Oracle 12c, the Automatic Data Optimization (ADO) feature can help us.
In this first blog we will see how we can use ADO to compress data under predefined conditions.
ADO is part of Information Lifecycle Management (ILM). Note that ADO requires the Advanced Compression Option.
In this article we are using an Oracle 12.1.0.2 non-CDB database.
First let’s create the user we will use for the demonstration

SQL> create user app identified by app default tablespace users temporary tablespace temp;
User created.
.
SQL> grant create session,create table,alter tablespace,select any dictionary,unlimited tablespace to app;
Grant succeeded.

Now, with this user, let's create a table and insert some data
SQL> show user
USER is "APP"
SQL> create table article(idart number,designation varchar2(20));
Table created.
.
SQL> select count(*) from article;
COUNT(*)
----------
1048576

What we will demonstrate is how data in the table ARTICLE can be automatically compressed if it has not been modified within the last 30 days, for example. It's just one example; there are many other possible conditions.
First let’s verify that the compression attribute of the table article is disabled
SQL> show user
USER is "APP"
SQL> SELECT compression, compress_for FROM user_tables WHERE table_name = 'ARTICLE';
COMPRESS COMPRESS_FOR
-------- ------------------------------
DISABLED

We can also confirm that there is no compression in the table using this Oracle script.
[oracle@serverora1 ]$ cat comp_art.sql
SELECT CASE compression_type
WHEN 1 THEN 'No Compression'
WHEN 2 THEN 'Advanced compression level'
WHEN 4 THEN 'Hybrid Columnar Compression for Query High'
WHEN 8 THEN 'Hybrid Columnar Compression for Query Low'
WHEN 16 THEN 'Hybrid Columnar Compression for Archive High'
WHEN 32 THEN 'Hybrid Columnar Compression for Archive Low'
WHEN 64 THEN 'Compressed row'
WHEN 128 THEN 'High compression level for LOB operations'
WHEN 256 THEN 'Medium compression level for LOB operations'
WHEN 512 THEN 'Low compression level for LOB operations'
WHEN 1000 THEN 'Minimum required number of LOBs in the object for which LOB compression ratio is to be estimated'
WHEN 4096 THEN 'Basic compression level'
WHEN 5000 THEN 'Maximum number of LOBs used to compute the LOB compression ratio'
WHEN 1000000 THEN 'Minimum required number of rows in the object for which HCC ratio is to be estimated'
WHEN -1 THEN 'To indicate the use of all the rows in the object to estimate HCC ratio'
WHEN 1 THEN 'Identifies the object whose compression ratio is estimated as of type table'
ELSE 'Unknown Compression Type'
END AS compression_type, n as num_rows
FROM (SELECT compression_type, Count(*) n
FROM (SELECT dbms_compression.Get_compression_type(USER, 'ARTICLE', ROWID) AS COMPRESSION_TYPE
FROM app.article)
GROUP BY compression_type
);
[oracle@serverora1 ]$

Below we can see that there is no compressed data
SQL> col COMPRESSION_TYPE for a20
SQL> @comp_art
COMPRESSION_TYPE NUM_ROWS
-------------------- ----------
No Compression 1048576

To use ADO for compression, the Heat Map must be enabled. Indeed once enabled, Heat Map will collect statistics required for ADO actions. All accesses are tracked by the in-memory activity tracking module. So let’s enable the Heat Map at instance level

SQL> show parameter heat
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
heat_map string OFF
.
SQL> alter system set heat_map=ON scope=both;
System altered.
.
SQL> show parameter heat
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
heat_map string ON

By default the policy time is specified in days. If we query the DBA_ILMPARAMETERS, the value for POLICY TIME determines if ADO policies are specified in seconds or days. Values are 1 for seconds or 0 for days (default).

SQL> col name for a20
SQL> select * from DBA_ILMPARAMETERS;
NAME VALUE
-------------------- ----------
ENABLED 1
RETENTION TIME 30
JOB LIMIT 2
EXECUTION MODE 2
EXECUTION INTERVAL 15
TBS PERCENT USED 85
TBS PERCENT FREE 25
POLICY TIME 0
8 rows selected.

But in our demonstration we will not wait 30 days, so we will set the policy time in seconds instead of days; then, if we specify 30 days, this will actually mean 30 seconds.

SQL> EXEC dbms_ilm_admin.customize_ilm(dbms_ilm_admin.POLICY_TIME,dbms_ilm_admin.ILM_POLICY_IN_SECONDS);
PL/SQL procedure successfully completed.
.
SQL> select * from DBA_ILMPARAMETERS;
NAME VALUE
-------------------- ----------
ENABLED 1
RETENTION TIME 30
JOB LIMIT 2
EXECUTION MODE 2
EXECUTION INTERVAL 15
TBS PERCENT USED 85
TBS PERCENT FREE 25
POLICY TIME 1
8 rows selected.
SQL>

Before adding the ADO policy, we can verify that heat map statistics are already collected.

SELECT OBJECT_NAME, to_char(TRACK_TIME,'DD-MON-YYYY HH24:MI:SS'), SEGMENT_WRITE "Seg_write",
SEGMENT_READ "Seg_read", FULL_SCAN, LOOKUP_SCAN
FROM v$heat_map_segment WHERE object_name='ARTICLE';
.
OBJECT_NAME TO_CHAR(TRACK_TIME,'DD-MON-YY Seg Seg FUL LOO
------------------------------ ----------------------------- --- --- --- ---
ARTICLE 08-JAN-2018 17:02:21 NO NO YES NO
SQL>

Now let’s add a segment level row compression policy on ARTICLE table that will compress the segment when no modification on the segment will have occurred in the last 30 days (considered as 30 seconds due to the policy time).

SQL> show user
USER is "APP"
SQL> ALTER TABLE app.article ILM ADD POLICY ROW STORE COMPRESS ADVANCED SEGMENT AFTER 30 DAYS OF NO MODIFICATION;
Table altered.
SQL>

We can query user_ilmdatamovementpolicies and user_ilmobjects as user APP to verify the policy.

SQL> show user
USER is "APP"
SQL> SELECT policy_name, action_type, scope, compression_level, condition_type, condition_days FROM user_ilmdatamovementpolicies ORDER BY policy_name;
POLICY_NAME ACTION_TYPE SCOPE COMPRESSION_LEVEL CONDITION_TYPE CONDITION_DAYS
--------------- ----------- ------- ----------------- ---------------------- --------------
P45 COMPRESSION SEGMENT ADVANCED LAST MODIFICATION TIME 30
.
SQL> SELECT policy_name, object_name, enabled FROM user_ilmobjects;
POLICY_NAME OBJECT_NAME ENA
--------------- -------------------- ---
P45 ARTICLE YES

Let's flush the heat map statistics from memory to disk and wait 30 days (in fact 30 seconds, because don't forget that the policy time was changed to seconds instead of days). It can also sometimes be useful to gather the statistics of the table; a possible call is sketched after the flush below.

SQL> EXEC dbms_ilm.flush_all_segments;
PL/SQL procedure successfully completed.
SQL>
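The statistics gathering itself is not shown above; a possible call (a sketch for this demo schema) would be:

SQL> EXEC dbms_stats.gather_table_stats(ownname => 'APP', tabname => 'ARTICLE');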

By default, ADO policies are automatically evaluated and executed during the maintenance window. But in this demonstration we are going to execute the policy manually without waiting for the maintenance window.

SQL> show user
USER is "APP"
SQL> DECLARE
v_executionid number;
BEGIN
dbms_ilm.execute_ILM (ILM_SCOPE => dbms_ilm.SCOPE_SCHEMA,
execution_mode => dbms_ilm.ilm_execution_offline,
task_id => v_executionid);
END;
/
PL/SQL procedure successfully completed.
SQL>

Some info about the job can be obtained in the following views

SQL> SELECT task_id, start_time as start_time FROM user_ilmtasks order by 1;
TASK_ID
----------
START_TIME
---------------------------------------------------------------------------
26
08-JAN-18 05.28.45.788076 PM
SQL>
.
SQL> SELECT task_id, job_name, job_state, completion_time completion
FROM user_ilmresults ORDER BY 1 ;
TASK_ID
----------
JOB_NAME
--------------------------------------------------------------------------------
JOB_STATE
-----------------------------------
COMPLETION
---------------------------------------------------------------------------
26
ILMJOB98
COMPLETED SUCCESSFULLY
08-JAN-18 05.28.49.368009 PM

We can see that the job ran successfully. And we can verify that compression is now enabled on the table ARTICLE

SQL> SELECT compression, compress_for FROM user_tables WHERE table_name = 'ARTICLE';
COMPRESS COMPRESS_FOR
-------- ------------------------------
ENABLED ADVANCED
SQL>

And we also can see that all rows are compressed.

SQL> col COMPRESSION_TYPE for a20
SQL> @comp_art.sql
COMPRESSION_TYPE NUM_ROWS
-------------------- ----------
Advanced compression 1048576
level
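If you want to drop the policy afterwards, for example after testing, this is a sketch of how it can be done (P45 being the policy name generated above):

SQL> ALTER TABLE article ILM DELETE POLICY P45;
-- or remove every ADO policy defined on the table:
SQL> ALTER TABLE article ILM DELETE_ALL;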

Conclusion: In this first part we have seen how ADO can help us with compression. In a second post we will talk about data movement with ADO.

 

The article Automatic Data Optimization Part I : Compression Tiering first appeared on the dbi services blog.

Automatic Data Optimization Part II : Storage Tiering


In a previous blog, we saw how Automatic Data Optimization can be used to compress data under predefined conditions. In this blog we will see that another possible action with ADO is to move data to another storage tier. Indeed, we will demonstrate how it is possible to move a table to another tablespace based on predefined conditions.
We are using an Oracle 12.1.0.2 non-CDB database.
Let's consider the following table ARTICLE in the tablespace USERS

SQL> col TABLE_NAME for a20
SQL> select table_name,tablespace_name from user_tables where table_name='ARTICLE';
TABLE_NAME TABLESPACE_NAME
-------------------- ------------------------------
ARTICLE USERS

Now let’s create a target tablespace named COOLTAB. The table ARTICLE will be moved to this new COOLTAB tablespace depending on conditions we will define.

SQL> create tablespace COOLTAB datafile '/u01/app/oracle/oradata/NONCONT/cooltab01.dbf' size 50M;
Tablespace created.

As specified in the previous blog, the Heat Map must be enabled by setting the parameter HEAT_MAP to ON.

SQL> show parameter heat_map;
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
heat_map string ON

We can verify that heat map statistics are collected for table ARTICLE. It can be useful to gather statistics for ARTICLE using DBMS_STATS.

SQL> SELECT OBJECT_NAME, to_char(TRACK_TIME,'DD-MON-YYYY HH24:MI:SS'), SEGMENT_WRITE "Seg_write",SEGMENT_READ "Seg_read", FULL_SCAN, LOOKUP_SCAN FROM v$heat_map_segment WHERE object_name='ARTICLE';
OBJECT_NAME TO_CHAR(TRACK_TIME,'DD-MON-YY Seg Seg FUL LOO
------------------------------ ----------------------------- --- --- --- ---
ARTICLE 10-JAN-2018 09:40:48 NO NO YES NO

Before creating the storage tiering policy, let’s verify the status of our tablespaces. We can see that all the tablespaces are online

SQL> select tablespace_name,status from dba_tablespaces;
TABLESPACE_NAME STATUS
------------------------------ ---------
SYSTEM ONLINE
SYSAUX ONLINE
UNDOTBS1 ONLINE
TEMP ONLINE
USERS ONLINE
EXAMPLE ONLINE
COOLTAB ONLINE
7 rows selected.

Now we are going to define a tiering policy which will move table ARTICLE from tablespace USERS to COOLTAB if there is no access to ARTICLE in the last 30 days. The policy will also put the tablespace COOLTAB in READ ONLY status.

SQL> ALTER TABLE ARTICLE ILM ADD POLICY TIER TO COOLTAB READ ONLY SEGMENT AFTER 30 DAYS OF NO ACCESS;
Table altered.

If we query dba_ilmparameters we can see that, for the demonstration, the policy time is set to 1. This means that 30 days are treated as 30 seconds.

SQL> select * from dba_ilmparameters;
NAME VALUE
-------------------- ----------
ENABLED 1
RETENTION TIME 30
JOB LIMIT 2
EXECUTION MODE 2
EXECUTION INTERVAL 15
TBS PERCENT USED 85
TBS PERCENT FREE 25
POLICY TIME 1
8 rows selected.

We can verify that the policy is created

SELECT policy_name, action_type, condition_type, condition_days FROM user_ilmdatamovementpolicies where action_type='STORAGE';
POLICY_NAM ACTION_TYPE CONDITION_TYPE CONDITION_DAYS
---------- ----------- ---------------------- --------------
P65 STORAGE LAST ACCESS TIME 30

And that the policy is enabled

SQL> SELECT policy_name, object_name, enabled FROM user_ilmobjects where policy_name='P65';
POLICY_NAM OBJECT_NAME ENA
---------- --------------- ---
P65 ARTICLE YES

By default, ADO policies are automatically triggered during the maintenance window. But in this demonstration we are going to execute the policy manually without waiting for the maintenance window. So after 30 days of no access (in fact 30 seconds), let's trigger the policy

SQL> EXEC dbms_ilm.flush_all_segments;
PL/SQL procedure successfully completed.
.
DECLARE
v_executionid number;
BEGIN
dbms_ilm.execute_ILM (ILM_SCOPE => dbms_ilm.SCOPE_SCHEMA,
execution_mode => dbms_ilm.ilm_execution_offline,
task_id => v_executionid);
END;
/
PL/SQL procedure successfully completed.

Let’s verify the result of the job. We can see that the job was successful

SQL> SELECT task_id, job_name, job_state, completion_time completion FROM user_ilmresults ORDER BY 1 ;
TASK_ID
----------
JOB_NAME
--------------------------------------------------------------------------------
JOB_STATE
-----------------------------------
COMPLETION
---------------------------------------------------------------------------
86
ILMJOB232
COMPLETED SUCCESSFULLY
10-JAN-18 02.28.31.506590 PM

And if we query USER_TABLES again, we can see that the table ARTICLE was moved from tablespace USERS to tablespace COOLTAB.

SQL> col TABLE_NAME for a20
SQL> select table_name,tablespace_name from user_tables where table_name='ARTICLE';
TABLE_NAME TABLESPACE_NAME
-------------------- ------------------------------
ARTICLE COOLTAB

And if we query dba_tablespaces, we can see that the tablespace COOLTAB is now in READ ONLY status

SQL> select tablespace_name,status from dba_tablespaces;
TABLESPACE_NAME STATUS
------------------------------ ---------
SYSTEM ONLINE
SYSAUX ONLINE
UNDOTBS1 ONLINE
TEMP ONLINE
USERS ONLINE
EXAMPLE ONLINE
COOLTAB READ ONLY
7 rows selected.

Conclusion: We have seen how ADO can help us to move data. Note that this movement can also be based on the fullness threshold of the source tablespace; one condition could be, for example, to move the table when the source tablespace is 65% full. A sketch of such a setup follows.
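This is only a sketch, not something run in this demo: the thresholds are set globally with DBMS_ILM_ADMIN, and a TIER TO policy without a time condition is then triggered by tablespace usage:

SQL> EXEC dbms_ilm_admin.customize_ilm(dbms_ilm_admin.TBS_PERCENT_USED, 65);
SQL> EXEC dbms_ilm_admin.customize_ilm(dbms_ilm_admin.TBS_PERCENT_FREE, 25);
SQL> ALTER TABLE article ILM ADD POLICY TIER TO cooltab;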

 

The article Automatic Data Optimization Part II : Storage Tiering first appeared on the dbi services blog.
