
Oracle 12cR2 – How to Setup DataGuard observer with Oracle Wallets


I am not a big fan of having passwords lying around in clear text. This applies not only to application servers, but also to my Data Guard observer.

I have a script for starting the observer that reads a config file, dgobserver.cfg, and this file contains the username, password and connect string for my primary and standby databases.

#*************************************************************
# Connection string to the primary
ConnectStringPrim="sys/Manager1@DBIT122_SITE1"

#*************************************************************
# Connection string to the Standby
ConnectStringStdb="sys/Manager1@DBIT122_SITE2"

However, I don’t want these passwords in clear text anymore, so I set up wallets for that purpose on the observer host.

To set up the wallet connection, we need to:

  • Create a wallet directory
  • Adjust the sqlnet.ora on the observer
  • Create the wallet and the credentials
  • Test the connections via wallets
  • Adjust the dgobserver.cfg file
  • Test a Fast Start Failover

Create a directory /u01/app/oracle/admin/wallets and add the following to your sqlnet.ora file:

WALLET_LOCATION =
   (SOURCE =
      (METHOD = FILE)
      (METHOD_DATA = (DIRECTORY = /u01/app/oracle/admin/wallets))
)

SQLNET.WALLET_OVERRIDE = TRUE
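A minimal sketch of that first step; the restrictive permissions are my assumption, since only the oracle user should be able to read the auto-login wallet:

mkdir -p /u01/app/oracle/admin/wallets
chmod 700 /u01/app/oracle/admin/wallets   # assumption: restrict access to the oracle user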

Now, create the wallet and the credentials:

oracle@dbidg03:/u01/app/oracle/network/admin/ [DBIT122] mkstore -wrl /u01/app/oracle/admin/wallets -create
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Enter password:
Enter password again:


oracle@dbidg03:/u01/app/oracle/admin/wallets/ [DBIT122] mkstore -wrl /u01/app/oracle/admin/wallets -createCredential DBIT122_SITE1 SYS
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Your secret/Password is missing in the command line
Enter your secret/Password:
Re-enter your secret/Password:
Enter wallet password:

oracle@dbidg03:/u01/app/oracle/admin/wallets/ [DBIT122] mkstore -wrl /u01/app/oracle/admin/wallets -createCredential DBIT122_SITE2 SYS
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Your secret/Password is missing in the command line
Enter your secret/Password:
Re-enter your secret/Password:
Enter wallet password:


oracle@dbidg03:/u01/app/oracle/admin/wallets/ [DBIT122] mkstore -wrl /u01/app/oracle/admin/wallets -listCredential           
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Enter wallet password:
List credential (index: connect_string username)
2: DBIT122_SITE2 SYS
1: DBIT122_SITE1 SYS

oracle@dbidg03:/u01/app/oracle/admin/wallets/ [DBIT122] ls -l
total 8
-rw------- 1 oracle oinstall 957 Jan  3 13:57 cwallet.sso
-rw------- 1 oracle oinstall   0 Jan  3 13:56 cwallet.sso.lck
-rw------- 1 oracle oinstall 912 Jan  3 13:57 ewallet.p12
-rw------- 1 oracle oinstall   0 Jan  3 13:56 ewallet.p12.lck
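As a side note, the wallet content can also be inspected with orapki; a quick sketch (the mkstore -listCredential output above remains the authoritative check for credentials):

orapki wallet display -wallet /u01/app/oracle/admin/wallets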

 

After everything has been set up successfully, it is time to test the wallet connections with sqlplus and with dgmgrl.

oracle@dbidg03:/u01/app/oracle/admin/wallets/ [DBIT122] sqlplus /@DBIT122_SITE1 as sysdba

SQL*Plus: Release 12.2.0.1.0 Production on Tue Jan 3 13:59:07 2017

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> exit

Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
oracle@dbidg03:/u01/app/oracle/admin/wallets/ [DBIT122] sqlplus /@DBIT122_SITE2 as sysdba

SQL*Plus: Release 12.2.0.1.0 Production on Tue Jan 3 13:59:12 2017

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
oracle@dbidg03:/u01/app/oracle/admin/wallets/ [DBIT122]

oracle@dbidg03:/u01/app/oracle/admin/DBIT122/etc/ [DBIT122] dgh
DGMGRL for Linux: Release 12.2.0.1.0 - Production on Tue Jan 3 14:00:05 2017

Copyright (c) 1982, 2016, Oracle and/or its affiliates.  All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect /@DBIT122_SITE1
Connected to "DBIT122_SITE1"
Connected as SYSDBA.
DGMGRL> connect /@DBIT122_SITE2
Connected to "DBIT122_SITE2"
Connected as SYSDBA.
DGMGRL> exit

 

Looks good so far. Now let’s adjust the dgobserver.cfg file and start the observer.

-- adjust the dgobserver.cfg file

#*************************************************************
# Connection string to the primary
ConnectStringPrim="/@DBIT122_SITE1"

#*************************************************************
# Connection string to the Standby
ConnectStringStdb="/@DBIT122_SITE2"

-- start the observer

oracle@dbidg03:/u01/app/oracle/admin/DBIT122/etc/ [DBIT122] dgobserver.ksh start DBIT122
2017-01-03_14:01:02::dgobserver.ksh::SetOraEnv          ::INFO ==> Environment: DBIT122 (/u01/app/oracle/product/12.2.0/dbhome_1)
2017-01-03_14:01:03::dgobserver.ksh::StatusObserver     ::INFO ==> Observer Stopped
2017-01-03_14:01:04::dgobserver.ksh::StartObserver      ::INFO ==> Connection to the primary database
2017-01-03_14:01:04::dgobserver.ksh::DoCommand          ::INFO ==> Start observer file='/u01/app/oracle/admin/DBIT122/etc/fsfo_DBIT122.dat
2017-01-03_14:01:06::dgobserver.ksh::StatusObserver     ::INFO ==> Observer running
2017-01-03_14:01:07::dgobserver.ksh::CleanExit          ::INFO ==> Program exited with ExitCode : 0

oracle@dbidg03:/u01/app/oracle/admin/DBIT122/etc/ [DBIT122] ps -ef | grep dgmgrl | grep -v grep
oracle 9186 1 0 14:01 pts/0 00:00:00 dgmgrl -logfile /u01/app/oracle/admin/DBIT122/log/dgobserver.log -silent start observer file='/u01/app/oracle/admin/DBIT122/etc/fsfo_DBIT122.dat'
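The dgobserver.ksh wrapper is an in-house script, but as the process listing shows, it essentially boils down to a call like the following sketch (logging and status handling omitted; the exact invocation with the wallet logon is my assumption):

dgmgrl -silent /@DBIT122_SITE1 "start observer file='/u01/app/oracle/admin/DBIT122/etc/fsfo_DBIT122.dat'" &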

With everything set up, it is time for the fun part. Let’s initiate a Fast-Start Failover by shutting down the primary with abort.

SQL> shutdown abort
ORACLE instance shut down.


-- observer log 

...
14:04:49.10  Tuesday, January 03, 2017
Initiating Fast-Start Failover to database "DBIT122_SITE2"...
Performing failover NOW, please wait...
Failover succeeded, new primary is "DBIT122_SITE2"
14:04:58.85  Tuesday, January 03, 2017
...

14:07:39.04  Tuesday, January 03, 2017
Initiating reinstatement for database "DBIT122_SITE1"...
Reinstating database "DBIT122_SITE1", please wait...
Reinstatement of database "DBIT122_SITE1" succeeded
14:08:33.19  Tuesday, January 03, 2017

...

Cool, the Fast-Start Failover and the reinstate worked as expected.

Conclusion

With Oracle wallets, I can make my Data Guard observer a little more secure by eliminating the clear-text passwords.

 

 



Oracle 12cR2 – DataGuard Switchover with Oracle Wallets


I would like to make my Data Guard environment more secure by eliminating the typing of “connect sys/Manager1” in my DGMGRL commands, especially the ones that I have in scripts. For example:

oracle@dbidg01:/home/oracle/ [DBIT122] dgmgrl <<-EOF
> connect sys/Manager1
> show configuration verbose;
> EOF

or something like that:

oracle@dbidg01:/u01/app/oracle/local/dg/ [DBIT122] cat show_config.dg
connect sys/Manager1;
show configuration;

oracle@dbidg01:/u01/app/oracle/local/dg/ [DBIT122] dgmgrl @show_config.dg
DGMGRL for Linux: Release 12.2.0.1.0 - Production on Wed Jan 4 12:54:11 2017

Copyright (c) 1982, 2016, Oracle and/or its affiliates.  All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected to "DBIT122_SITE1"
Connected as SYSDG.

Configuration - DBIT122

  Protection Mode: MaxAvailability
  Members:
  DBIT122_SITE1 - Primary database
    DBIT122_SITE2 - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 39 seconds ago)

HINT: Be aware that “dgmgrl [<options>] @script_file_name” is a new feature of the Broker in Oracle 12.2; it was not possible to use “dgmgrl @script” before.

Ok. So how can I make my scripts more secure? Of course, by using wallets, as we already did for the observer configuration. See http://blog.dbi-services.com/oracle-12cr2-how-to-setup-dataguard-observer-with-oracle-wallets/
However, I also want to do the switchover and other operations with wallets.

So, let’s create the necessary wallet credentials for the SYS user on the primary and the standby.

-- Primary

oracle@dbidg01:/u01/app/oracle/network/admin/ [DBIT122] mkstore -wrl /u01/app/oracle/admin/wallets -createCredential DBIT122_SITE1 SYS             
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Your secret/Password is missing in the command line
Enter your secret/Password:
Re-enter your secret/Password:
Enter wallet password:

oracle@dbidg01:/u01/app/oracle/network/admin/ [DBIT122] mkstore -wrl /u01/app/oracle/admin/wallets -createCredential DBIT122_SITE2 SYS
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Your secret/Password is missing in the command line
Enter your secret/Password:
Re-enter your secret/Password:
Enter wallet password:

-- Standby

oracle@dbidg02:/u01/app/oracle/network/admin/ [DBIT122] mkstore -wrl /u01/app/oracle/admin/wallets -createCredential DBIT122_SITE1 SYS
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Your secret/Password is missing in the command line
Enter your secret/Password:
Re-enter your secret/Password:
Enter wallet password:

oracle@dbidg02:/u01/app/oracle/network/admin/ [DBIT122] mkstore -wrl /u01/app/oracle/admin/wallets -createCredential DBIT122_SITE2 SYS
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Your secret/Password is missing in the command line
Enter your secret/Password:
Re-enter your secret/Password:
Enter wallet password:

And of course, we have to test the connections to see if everything is working correctly.

-- Primary
sqlplus /@DBIT122_SITE1 as sysdba

sqlplus /@DBIT122_SITE2 as sysdba

DGMGRL> connect /@DBIT122_SITE1

DGMGRL> connect /@DBIT122_SITE2


-- Standby

sqlplus /@DBIT122_SITE1 as sysdba

sqlplus /@DBIT122_SITE2 as sysdba

DGMGRL> connect /@DBIT122_SITE1

DGMGRL> connect /@DBIT122_SITE2

So far, so good. My wallet connections work from the primary to the standby and the other way around. Now, let’s try to do a Data Guard switchover with wallets.

DGMGRL> connect /@DBIT122_SITE1
Connected to "DBIT122_SITE1"
Connected as SYSDBA.
DGMGRL>
DGMGRL>
DGMGRL> show configuration;

Configuration - DBIT122

  Protection Mode: MaxAvailability
  Members:
  DBIT122_SITE1 - Primary database
    DBIT122_SITE2 - (*) Physical standby database

Fast-Start Failover: ENABLED

Configuration Status:
SUCCESS   (status updated 58 seconds ago)

DGMGRL> SWITCHOVER TO 'DBIT122_SITE2';
Performing switchover NOW, please wait...
Operation requires a connection to database "DBIT122_SITE2"
Connecting ...
Connected to "DBIT122_SITE2"
Connected as SYSDBA.
New primary database "DBIT122_SITE2" is opening...
Operation requires start up of instance "DBIT122" on database "DBIT122_SITE1"
Starting instance "DBIT122"...
ORA-01017: invalid username/password; logon denied

Warning: You are no longer connected to ORACLE.

Please complete the following steps to finish switchover:
        start up instance "DBIT122" of database "DBIT122_SITE1"

DGMGRL>

Oops … that doesn’t look good. It says “invalid username/password”, even though everything worked beforehand, and that output does not give me much information. Let’s try the whole thing again in debug mode: dgmgrl -debug

oracle@dbidg01:/u01/app/oracle/network/admin/ [DBIT122] dgmgrl -debug
DGMGRL for Linux: Release 12.2.0.1.0 - Production on Wed Jan 4 11:04:21 2017

Copyright (c) 1982, 2016, Oracle and/or its affiliates.  All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect /@DBIT122_SITE2
[W000 01/04 11:04:34.32] Connecting to database using DBIT122_SITE2.
[W000 01/04 11:04:34.33] Attempt logon as SYSDG
[W000 01/04 11:04:35.42] Attempt logon as SYSDBA
[W000 01/04 11:04:35.47] Executing query [select db_unique_name from v$database].
[W000 01/04 11:04:35.47] Query result is 'DBIT122_SITE2'
Connected to "DBIT122_SITE2"
[W000 01/04 11:04:35.47] Checking broker version [BEGIN :version := dbms_drs.dg_broker_info('VERSION'); END;].
[W000 01/04 11:04:35.47] Oracle database version is '12.2.0.1.0'
Connected as SYSDBA.
DGMGRL> show configuration;

Configuration - DBIT122

  Protection Mode: MaxAvailability
  Members:
  DBIT122_SITE1 - Primary database
    DBIT122_SITE2 - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 49 seconds ago)

DGMGRL> switchover to 'DBIT122_SITE2';
Performing switchover NOW, please wait...
New primary database "DBIT122_SITE2" is opening...
Operation requires start up of instance "DBIT122" on database "DBIT122_SITE1"
Starting instance "DBIT122"...
[W000 01/04 11:05:04.99] Connecting to database using (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbidg01)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=DBIT122_SITE1_DGMGRL)(UR=A)(INSTANCE_NAME=DBIT122)(SERVER=DEDICATED))).
[W000 01/04 11:05:04.99] Attempt logon as SYSDG
[W000 01/04 11:05:06.04] Attempt logon as SYSDBA
ORA-01017: invalid username/password; logon denied

Warning: You are no longer connected to ORACLE.

Please complete the following steps to finish switchover:
        start up and mount instance "DBIT122" of database "DBIT122_SITE1"

DGMGRL>

What is happening here? The Broker is not using the connect string “DBIT122_SITE1”; it is using the description list “(DESCRIPTION=(ADDRESS=…”. And when I look up my credentials in the wallet, I see only entries for “DBIT122_SITE1 SYS” and “DBIT122_SITE2 SYS”.

oracle@dbidg01:/home/oracle/ [DBIT122] mkstore -wrl /u01/app/oracle/admin/wallets -listCredential
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Enter wallet password:
List credential (index: connect_string username)
3: DBIT122_SITE2 SYS
2: DBIT122_SITE1 SYS
1: rcat rcat

The solution here is to add credentials for the description list taken from the StaticConnectIdentifier property.

DGMGRL> show database 'DBIT122_SITE1' StaticConnectIdentifier;
  StaticConnectIdentifier = '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbidg01)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=DBIT122_SITE1_DGMGRL)(UR=A)(INSTANCE_NAME=DBIT122)(SERVER=DEDICATED)))'

DGMGRL> show database 'DBIT122_SITE2' StaticConnectIdentifier;
  StaticConnectIdentifier = '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbidg02)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=DBIT122_SITE2_DGMGRL)(UR=A)(INSTANCE_NAME=DBIT122)(SERVER=DEDICATED)))'

Ok, let’s add the new credentials to our wallet. Be careful to specify them exactly as they show up in the StaticConnectIdentifier.

-- Primary

oracle@dbidg01:/home/oracle/ [DBIT122] mkstore -wrl /u01/app/oracle/admin/wallets -createCredential '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbidg01)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=DBIT122_SITE1_DGMGRL)(UR=A)(INSTANCE_NAME=DBIT122)(SERVER=DEDICATED)))' SYS
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Your secret/Password is missing in the command line
Enter your secret/Password:
Re-enter your secret/Password:
Enter wallet password:

oracle@dbidg01:/home/oracle/ [DBIT122] mkstore -wrl /u01/app/oracle/admin/wallets -createCredential '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbidg02)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=DBIT122_SITE2_DGMGRL)(UR=A)(INSTANCE_NAME=DBIT122)(SERVER=DEDICATED)))' SYS
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Your secret/Password is missing in the command line
Enter your secret/Password:
Re-enter your secret/Password:
Enter wallet password:

oracle@dbidg01:/home/oracle/ [DBIT122] mkstore -wrl /u01/app/oracle/admin/wallets -listCredential
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Enter wallet password:
List credential (index: connect_string username)
5: (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbidg02)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=DBIT122_SITE2_DGMGRL)(UR=A)(INSTANCE_NAME=DBIT122)(SERVER=DEDICATED))) SYS
4: (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbidg01)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=DBIT122_SITE1_DGMGRL)(UR=A)(INSTANCE_NAME=DBIT122)(SERVER=DEDICATED))) SYS
3: DBIT122_SITE2 SYS
2: DBIT122_SITE1 SYS
1: rcat rcat


-- Standby

oracle@dbidg02:/home/oracle/ [DBIT122] mkstore -wrl /u01/app/oracle/admin/wallets -createCredential '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbidg01)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=DBIT122_SITE1_DGMGRL)(UR=A)(INSTANCE_NAME=DBIT122)(SERVER=DEDICATED)))' SYS
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Your secret/Password is missing in the command line
Enter your secret/Password:
Re-enter your secret/Password:
Enter wallet password:

oracle@dbidg02:/home/oracle/ [DBIT122] mkstore -wrl /u01/app/oracle/admin/wallets -createCredential '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbidg02)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=DBIT122_SITE2_DGMGRL)(UR=A)(INSTANCE_NAME=DBIT122)(SERVER=DEDICATED)))' SYS
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Your secret/Password is missing in the command line
Enter your secret/Password:
Re-enter your secret/Password:
Enter wallet password:

oracle@dbidg02:/home/oracle/ [DBIT122] mkstore -wrl /u01/app/oracle/admin/wallets -listCredential
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Enter wallet password:
List credential (index: connect_string username)
5: (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbidg02)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=DBIT122_SITE2_DGMGRL)(UR=A)(INSTANCE_NAME=DBIT122)(SERVER=DEDICATED))) SYS
4: (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbidg01)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=DBIT122_SITE1_DGMGRL)(UR=A)(INSTANCE_NAME=DBIT122)(SERVER=DEDICATED))) SYS
3: DBIT122_SITE2 SYS
2: DBIT122_SITE1 SYS
1: rcat rcat

After everything is set up, let’s try the switchover again in debug mode.

oracle@dbidg01:/home/oracle/ [DBIT122] dgmgrl -debug
DGMGRL for Linux: Release 12.2.0.1.0 - Production on Wed Jan 4 11:22:38 2017

Copyright (c) 1982, 2016, Oracle and/or its affiliates.  All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect /@DBIT122_SITE1
[W000 01/04 11:22:47.94] Connecting to database using DBIT122_SITE1.
[W000 01/04 11:22:47.94] Attempt logon as SYSDG
[W000 01/04 11:22:49.02] Attempt logon as SYSDBA
[W000 01/04 11:22:49.06] Executing query [select db_unique_name from v$database].
[W000 01/04 11:22:49.06] Query result is 'DBIT122_SITE1'
Connected to "DBIT122_SITE1"
[W000 01/04 11:22:49.06] Checking broker version [BEGIN :version := dbms_drs.dg_broker_info('VERSION'); END;].
[W000 01/04 11:22:49.06] Oracle database version is '12.2.0.1.0'
Connected as SYSDBA.
DGMGRL> switchover to 'DBIT122_SITE1';
Performing switchover NOW, please wait...
New primary database "DBIT122_SITE1" is opening...
Operation requires start up of instance "DBIT122" on database "DBIT122_SITE2"
Starting instance "DBIT122"...
[W000 01/04 11:23:18.07] Connecting to database using (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbidg02)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=DBIT122_SITE2_DGMGRL)(UR=A)(INSTANCE_NAME=DBIT122)(SERVER=DEDICATED))).
[W000 01/04 11:23:18.07] Attempt logon as SYSDG
[W000 01/04 11:23:19.15] Attempt logon as SYSDBA
[W000 01/04 11:23:20.23] Executing query [select db_unique_name from v$database].
ORA-01034: ORACLE not available
Process ID: 0
Session ID: 0 Serial number: 0

ORACLE instance started.
[W000 01/04 11:23:36.03] Connecting to database using (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbidg02)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=DBIT122_SITE2_DGMGRL)(UR=A)(INSTANCE_NAME=DBIT122)(SERVER=DEDICATED))).
[W000 01/04 11:23:36.03] Attempt logon as SYSDG
[W000 01/04 11:23:37.13] Attempt logon as SYSDBA
[W000 01/04 11:23:37.17] Executing query [select db_unique_name from v$database].
ORA-01507: database not mounted

[W000 01/04 11:23:37.20] Checking broker version [BEGIN :version := dbms_drs.dg_broker_info('VERSION'); END;].
[W000 01/04 11:23:37.20] Oracle database version is '12.2.0.1.0'
[W000 01/04 11:23:37.20] Executing statement [alter database mount].
[W000 01/04 11:23:42.66] Statement [alter database mount] executed successfully.
Database mounted.
[W000 01/04 11:23:42.66] Checking for bootstrap done...
[W000 01/04 11:23:42.67] Connecting to database using (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbidg02)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=DBIT122_SITE2_DGMGRL)(UR=A)(INSTANCE_NAME=DBIT122)(SERVER=DEDICATED))).
[W000 01/04 11:23:42.67] Attempt logon as SYSDG
[W000 01/04 11:23:43.77] Attempt logon as SYSDBA
[W000 01/04 11:23:43.82] Executing query [select db_unique_name from v$database].
[W000 01/04 11:23:43.83] Query result is 'DBIT122_SITE2'
Connected to "DBIT122_SITE2"
[W000 01/04 11:23:43.83] Checking broker version [BEGIN :version := dbms_drs.dg_broker_info('VERSION'); END;].
[W000 01/04 11:23:43.83] Oracle database version is '12.2.0.1.0'
[W000 01/04 11:23:55.85] Done waiting for bootstrap after 0 retries
Switchover succeeded, new primary is "DBIT122_SITE1"
DGMGRL>
DGMGRL> show configuration;

Configuration - DBIT122

  Protection Mode: MaxAvailability
  Members:
  DBIT122_SITE1 - Primary database
    DBIT122_SITE2 - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 60 seconds ago)

DGMGRL>

Now it worked perfectly. I can run my switchover operations with wallets, and I can run my scripts without a clear-text password, like the following:

oracle@dbidg01:/home/oracle/ [DBIT122] dgmgrl <<-EOF
> connect /@DBIT122_SITE1
> show configuration verbose;
> EOF

Conclusion

Doing Data Guard switchovers with wallets works perfectly if the setup was done correctly, and besides that, you can eliminate a lot of clear-text passwords that you might have lying around.

Cheers,

William

 


NLS defaults for LANGUAGE and TERRITORY


When you set NLS_LANGUAGE, the values of NLS_DATE_LANGUAGE and NLS_SORT are set to their defaults for the language you chose.
When you set NLS_TERRITORY, the values of NLS_DATE_FORMAT, NLS_NUMERIC_CHARACTERS, NLS_CURRENCY and NLS_ISO_CURRENCY are set to the defaults for the territory you chose. For reference, I’ve listed the defaults in this post.
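If you only want to check one territory quickly, here is a minimal sketch of the idea (GERMANY is an arbitrary choice, and "/ as sysdba" simply mirrors the examples later in this post; any session will do):

sqlplus -s / as sysdba <<'EOF'
alter session set nls_territory='GERMANY';
select parameter, value from nls_session_parameters
 where parameter in ('NLS_DATE_FORMAT','NLS_NUMERIC_CHARACTERS','NLS_CURRENCY','NLS_ISO_CURRENCY');
EOF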

Here, for each value found in V$NLS_VALID_VALUES, I set the session parameter and display the values that are derived from it, as well as an example of a date.

TERRITORY

The TERRITORY sets the decimal and thousands separators, the currency symbols and the date format. I display the short date ('ds' format) of the last day of 2016.

SQL> with
2 function nls_territory_defaults(t varchar2) return varchar2 as
3 s varchar2(2017):='';
4 begin
5 execute immediate 'alter session set nls_territory='''||t||'''';
6 for i in (select * from nls_session_parameters where parameter in ('NLS_DATE_FORMAT','NLS_NUMERIC_CHARACTERS','NLS_CURRENCY','NLS_ISO_CURRENCY') order by parameter desc)
7 loop
8 s:=s||lpad(i.parameter,20)||'='||rpad(i.value,20);
9 end loop;
10 return s||' example: '||to_char(trunc(sysdate,'yy')-1,'ds');
11 end;
12 select rpad(value,20)||nls_territory_defaults(value) "TERRITORY default NLS settings" from v$nls_valid_values where parameter='TERRITORY' order by 1
13 /
 
TERRITORY default NLS settings
----------------------------------------------------------------------------------------------------------------------------------------------------------------------
AFGHANISTAN NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=AFGHANISTAN NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=؋ example: 31/12/16
ALBANIA NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=ALBANIA NLS_DATE_FORMAT=DD.MM.RRRR NLS_CURRENCY=Lek example: 31.12.2016
ALGERIA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=ALGERIA NLS_DATE_FORMAT=DD-MM-RR NLS_CURRENCY=د.ج. example: 31/12/2016
AMERICA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=AMERICA NLS_DATE_FORMAT=DD-MON-RR NLS_CURRENCY=$ example: 12/31/2016
ARGENTINA NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=ARGENTINA NLS_DATE_FORMAT=DD/MM/RRRR NLS_CURRENCY=$ example: 31/12/2016
ARMENIA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=ARMENIA NLS_DATE_FORMAT=DD.fmMM.RRRR NLS_CURRENCY=դր. example: 31.12.2016
AUSTRALIA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=AUSTRALIA NLS_DATE_FORMAT=DD/MON/RR NLS_CURRENCY=$ example: 31/12/2016
AUSTRIA NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=AUSTRIA NLS_DATE_FORMAT=DD.MM.RR NLS_CURRENCY=€ example: 31.12.2016
AZERBAIJAN NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=AZERBAIJAN NLS_DATE_FORMAT=DD.MM.RRRR NLS_CURRENCY=₼ example: 31.12.2016
BAHAMAS NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=BAHAMAS NLS_DATE_FORMAT=DD/MM/RRRR NLS_CURRENCY=B$ example: 31/12/2016
BAHRAIN NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=BAHRAIN NLS_DATE_FORMAT=DD-MM-RR NLS_CURRENCY=د.ب. example: 31/12/2016
BANGLADESH NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=BANGLADESH NLS_DATE_FORMAT=DD-MM-RRRR NLS_CURRENCY=৳ example: 31-12-2016
BELARUS NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=BELARUS NLS_DATE_FORMAT=DD.MM.RR NLS_CURRENCY=р. example: 31.12.2016
BELGIUM NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=BELGIUM NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=€ example: 31/12/2016
BELIZE NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=BELIZE NLS_DATE_FORMAT=fmDD/MM/RRRR NLS_CURRENCY=BZ$ example: 12/31/2016
BERMUDA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=BERMUDA NLS_DATE_FORMAT=DD/MM/RRRR NLS_CURRENCY=BD$ example: 31/12/2016
BOLIVIA NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=BOLIVIA NLS_DATE_FORMAT=fmDD-MM-RR NLS_CURRENCY=B$ example: 12-31-2016
BOSNIA AND HERZEGOVINLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=BOSNIA AND HERZEGOVI NLS_DATE_FORMAT=fmMM.DD.RRRR NLS_CURRENCY=КМ example: 12.31.2016
BRAZIL NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=BRAZIL NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=R$ example: 31/12/2016
BULGARIA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=BULGARIA NLS_DATE_FORMAT=RRRR-MM-DD NLS_CURRENCY=лв example: 31.12.2016
CAMBODIA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=CAMBODIA NLS_DATE_FORMAT=RRRR-fmMM-DD NLS_CURRENCY=៛ example: 2016-12-31
CAMEROON NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=CAMEROON NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=CFA example: 31/12/16
CANADA NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=CANADA NLS_DATE_FORMAT=RR-MM-DD NLS_CURRENCY=$ example: 2016-12-31
CATALONIA NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=CATALONIA NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=€ example: 31-12-16
CHILE NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=CHILE NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=$ example: 31-12-2016
CHINA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=CHINA NLS_DATE_FORMAT=DD-MON-RR NLS_CURRENCY=¥ example: 2016-12-31
CIS NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=CIS NLS_DATE_FORMAT=DD.MM.RR NLS_CURRENCY=р. example: 31.12.16
COLOMBIA NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=COLOMBIA NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=$ example: 31/12/2016
CONGO BRAZZAVILLE NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=CONGO BRAZZAVILLE NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=CFA example: 31/12/16
CONGO KINSHASA NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=CONGO KINSHASA NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=FrCD example: 31/12/16
COSTA RICA NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=COSTA RICA NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=C example: 31/12/2016
CROATIA NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=CROATIA NLS_DATE_FORMAT=DD.MM.RR NLS_CURRENCY=kn example: 31.12.16
CYPRUS NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=CYPRUS NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=£ example: 31/12/2016
CZECH REPUBLIC NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=CZECH REPUBLIC NLS_DATE_FORMAT=DD.MM.RR NLS_CURRENCY=Kč example: 31.12.2016
CZECHOSLOVAKIA NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=CZECHOSLOVAKIA NLS_DATE_FORMAT=DD.MM.RR NLS_CURRENCY=Kčs example: 31.12.16
DENMARK NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=DENMARK NLS_DATE_FORMAT=RR-MM-DD NLS_CURRENCY=kr example: 31-12-2016
DJIBOUTI NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=DJIBOUTI NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=ف.ج. example: 31/12/2016
ECUADOR NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=ECUADOR NLS_DATE_FORMAT=DD/MM/RRRR NLS_CURRENCY=$ example: 31/12/2016
EGYPT NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=EGYPT NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=ج.م. example: 31/12/2016
EL SALVADOR NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=EL SALVADOR NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=C example: 31/12/2016
ESTONIA NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=ESTONIA NLS_DATE_FORMAT=DD.MM.RRRR NLS_CURRENCY=kr example: 31.12.2016
ETHIOPIA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=ETHIOPIA NLS_DATE_FORMAT=fmDD/MM/RRRR NLS_CURRENCY=ብር example: 12/31/2016
FINLAND NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=FINLAND NLS_DATE_FORMAT=DD.MM.RRRR NLS_CURRENCY=€ example: 31.12.2016
FRANCE NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=FRANCE NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=€ example: 31/12/2016
FYR MACEDONIA NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=FYR MACEDONIA NLS_DATE_FORMAT=DD.MM.RRRR NLS_CURRENCY=ден. example: 31.12.2016
GABON NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=GABON NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=CFA example: 31/12/16
GERMANY NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=GERMANY NLS_DATE_FORMAT=DD.MM.RR NLS_CURRENCY=€ example: 31.12.2016
GREECE NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=GREECE NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=€ example: 31/12/2016
GUATEMALA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=GUATEMALA NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=Q example: 31/12/2016
HONDURAS NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=HONDURAS NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=L example: 31.12.2016
HONG KONG NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=HONG KONG NLS_DATE_FORMAT=DD-MON-RR NLS_CURRENCY=$ example: 31/12/2016
HUNGARY NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=HUNGARY NLS_DATE_FORMAT=RR-MON-DD NLS_CURRENCY=Ft example: 2016.12.31.
ICELAND NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=ICELAND NLS_DATE_FORMAT=DD.MM.RRRR NLS_CURRENCY=kr. example: 31.12.2016
INDIA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=INDIA NLS_DATE_FORMAT=DD-MM-RR NLS_CURRENCY=Rs example: 31/12/2016
INDONESIA NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=INDONESIA NLS_DATE_FORMAT=DD-MM-RRRR NLS_CURRENCY=Rp example: 31-12-2016
IRAN NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=IRAN NLS_DATE_FORMAT=RRRR/fmMM/fmDD NLS_CURRENCY=ريال example: 2016/12/31
IRAQ NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=IRAQ NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=د.ع. example: 31/12/2016
IRELAND NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=IRELAND NLS_DATE_FORMAT=DD-MON-RR NLS_CURRENCY=€ example: 31/12/2016
ISRAEL NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=ISRAEL NLS_DATE_FORMAT=DD-MON-RRRR NLS_CURRENCY=ש"ח example: 31/12/2016
ITALY NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=ITALY NLS_DATE_FORMAT=DD-MON-RR NLS_CURRENCY=€ example: 31/12/2016
IVORY COAST NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=IVORY COAST NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=CFA example: 31/12/16
JAPAN NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=JAPAN NLS_DATE_FORMAT=RR-MM-DD NLS_CURRENCY=¥ example: 2016/12/31
JORDAN NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=JORDAN NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=د.ا.‏ example: 31/12/2016
KAZAKHSTAN NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=KAZAKHSTAN NLS_DATE_FORMAT=DD.MM.RRRR NLS_CURRENCY=KZT example: 31.12.2016
KENYA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=KENYA NLS_DATE_FORMAT=fmMM/DD/RRRR NLS_CURRENCY=S example: 12/31/2016
KOREA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=KOREA NLS_DATE_FORMAT=RR/MM/DD NLS_CURRENCY=₩ example: 2016/12/31
KUWAIT NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=KUWAIT NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=د.ك. example: 31/12/2016
LAOS NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=LAOS NLS_DATE_FORMAT=DD/fmMM/RRRR NLS_CURRENCY=₭ example: 31/12/2016
LATVIA NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=LATVIA NLS_DATE_FORMAT=RRRR.MM.DD NLS_CURRENCY=Ls example: 2016.12.31
LEBANON NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=LEBANON NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=ل.ل. example: 31/12/2016
LIBYA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=LIBYA NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=د.ل. example: 31/12/2016
LITHUANIA NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=LITHUANIA NLS_DATE_FORMAT=RRRR.MM.DD NLS_CURRENCY=€ example: 2016-12-31
LUXEMBOURG NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=LUXEMBOURG NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=€ example: 31.12.2016
MACEDONIA NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=MACEDONIA NLS_DATE_FORMAT=DD.MM.RRRR NLS_CURRENCY=den. example: 31.12.2016
MALAYSIA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=MALAYSIA NLS_DATE_FORMAT=DD/MM/RRRR NLS_CURRENCY=RM example: 31/12/2016
MALDIVES NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=MALDIVES NLS_DATE_FORMAT=fmDD/fmMM/RR NLS_CURRENCY=ރ. example: 12/31/16
MALTA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=MALTA NLS_DATE_FORMAT=fmDD/MM/RRRR NLS_CURRENCY=€ example: 12/31/2016
MAURITANIA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=MAURITANIA NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=ا.م. example: 31/12/2016
MEXICO NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=MEXICO NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=$ example: 31/12/2016
MONTENEGRO NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=MONTENEGRO NLS_DATE_FORMAT=DD.MM.RRRR. NLS_CURRENCY=€ example: 31.12.2016.
MOROCCO NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=MOROCCO NLS_DATE_FORMAT=DD-MM-RR NLS_CURRENCY=د.م. example: 31/12/2016
NEPAL NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=NEPAL NLS_DATE_FORMAT=fmDD/MM/RRRR NLS_CURRENCY=र example: 12/31/2016
NEW ZEALAND NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=NEW ZEALAND NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=$ example: 31/12/2016
NICARAGUA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=NICARAGUA NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=C$ example: 31/12/2016
NIGERIA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=NIGERIA NLS_DATE_FORMAT=DD/fmMM/RRRR NLS_CURRENCY=₦ example: 31/12/2016
NORWAY NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=NORWAY NLS_DATE_FORMAT=DD.MM.RRRR NLS_CURRENCY=kr example: 31.12.2016
OMAN NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=OMAN NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=ر.ع. example: 31/12/2016
PAKISTAN NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=PAKISTAN NLS_DATE_FORMAT=DD-MON-RR NLS_CURRENCY=PKR example: 31/12/2016
PANAMA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=PANAMA NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=B/ example: 12/31/2016
PARAGUAY NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=PARAGUAY NLS_DATE_FORMAT=fmDD/MM/RR NLS_CURRENCY=G example: 12/31/16
PERU NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=PERU NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=S/ example: 31/12/2016
PHILIPPINES NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=PHILIPPINES NLS_DATE_FORMAT=MM/DD/RRRR NLS_CURRENCY=Php example: 12/31/2016
POLAND NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=POLAND NLS_DATE_FORMAT=RR/MM/DD NLS_CURRENCY=zł example: 2016-12-31
PORTUGAL NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=PORTUGAL NLS_DATE_FORMAT=RR.MM.DD NLS_CURRENCY=€ example: 31-12-2016
PUERTO RICO NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=PUERTO RICO NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=$ example: 31/12/2016
QATAR NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=QATAR NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=ر.ق. example: 31/12/2016
ROMANIA NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=ROMANIA NLS_DATE_FORMAT=DD-MM-RRRR NLS_CURRENCY=LEI example: 31.12.2016
RUSSIA NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=RUSSIA NLS_DATE_FORMAT=DD.MM.RR NLS_CURRENCY=₽ example: 31.12.2016
SAUDI ARABIA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=SAUDI ARABIA NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=ر.س. example: 31/12/2016
SENEGAL NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=SENEGAL NLS_DATE_FORMAT=DD/MM/RRRR NLS_CURRENCY=CFA example: 31/12/2016
SERBIA NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=SERBIA NLS_DATE_FORMAT=DD.MM.RRRR. NLS_CURRENCY=RSD example: 31.12.2016.
SERBIA AND MONTENEGRNLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=SERBIA AND MONTENEGR NLS_DATE_FORMAT=DD.MM.RRRR NLS_CURRENCY=din. example: 31.12.2016
SINGAPORE NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=SINGAPORE NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=S$ example: 31/12/2016
SLOVAKIA NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=SLOVAKIA NLS_DATE_FORMAT=DD.MM.RR NLS_CURRENCY=Sk example: 31.12.16
SLOVENIA NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=SLOVENIA NLS_DATE_FORMAT=DD.MM.RR NLS_CURRENCY=SIT example: 31.12.16
SOMALIA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=SOMALIA NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=ش.ص. example: 31/12/2016
SOUTH AFRICA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=SOUTH AFRICA NLS_DATE_FORMAT=DD/MON/RR NLS_CURRENCY=R example: 2016/12/31
SPAIN NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=SPAIN NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=€ example: 31/12/2016
SRI LANKA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=SRI LANKA NLS_DATE_FORMAT=RRRR/fmMM/fmDD NLS_CURRENCY=ريال example: 2016/12/31
SUDAN NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=SUDAN NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=ج.س. example: 31/12/2016
SWEDEN NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=SWEDEN NLS_DATE_FORMAT=RRRR-MM-DD NLS_CURRENCY=Kr example: 2016-12-31
SWITZERLAND NLS_NUMERIC_CHARACTE=.' NLS_ISO_CURRENCY=SWITZERLAND NLS_DATE_FORMAT=DD.MM.RR NLS_CURRENCY=SFr. example: 31.12.2016
SYRIA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=SYRIA NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=ل.س. example: 31/12/2016
TAIWAN NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=TAIWAN NLS_DATE_FORMAT=DD-MON-RR NLS_CURRENCY=NT$ example: 2016/12/31
TANZANIA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=TANZANIA NLS_DATE_FORMAT=fmMM/DD/RRRR NLS_CURRENCY=TSh example: 12/31/2016
THAILAND NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=THAILAND NLS_DATE_FORMAT=DD MON RRRR NLS_CURRENCY=฿ example: 31/12/2016
THE NETHERLANDS NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=THE NETHERLANDS NLS_DATE_FORMAT=DD-MM-RR NLS_CURRENCY=€ example: 31-12-2016
TUNISIA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=TUNISIA NLS_DATE_FORMAT=DD-MM-RR NLS_CURRENCY=د.ت. example: 31/12/2016
TURKEY NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=TURKEY NLS_DATE_FORMAT=DD/MM/RRRR NLS_CURRENCY=₺ example: 31.12.2016
UGANDA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=UGANDA NLS_DATE_FORMAT=DD/MM/RRRR NLS_CURRENCY=CFA example: 31/12/2016
UKRAINE NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=UKRAINE NLS_DATE_FORMAT=DD.MM.RRRR NLS_CURRENCY=грн. example: 31.12.2016
UNITED ARAB EMIRATESNLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=UNITED ARAB EMIRATES NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=د.إ. example: 31/12/2016
UNITED KINGDOM NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=UNITED KINGDOM NLS_DATE_FORMAT=DD-MON-RR NLS_CURRENCY=£ example: 31/12/2016
URUGUAY NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=URUGUAY NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=NU$ example: 31/12/2016
UZBEKISTAN NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=UZBEKISTAN NLS_DATE_FORMAT=DD/MON/RR NLS_CURRENCY=UZS example: 31/DEC/16
VENEZUELA NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=VENEZUELA NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=Bs example: 31/12/2016
VIETNAM NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=VIETNAM NLS_DATE_FORMAT=DD-MM-RRRR NLS_CURRENCY=₫ example: 31-12-2016
YEMEN NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=YEMEN NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=ر.ي. example: 31/12/2016
YUGOSLAVIA NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=YUGOSLAVIA NLS_DATE_FORMAT=DD.MM.RRRR NLS_CURRENCY=din. example: 31.12.2016
ZAMBIA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=ZAMBIA NLS_DATE_FORMAT=DD/MM/RRRR NLS_CURRENCY=ZK example: 31/12/2016

LANGUAGE

The LANGUAGE sets the linguistic sorting and the language used for words in dates. I display the long date ('dl' format) of the last day of 2016.

SQL> with
2 function nls_language_defaults(t varchar2) return varchar2 as
3 s varchar2(2017):='';
4 begin
5 execute immediate 'alter session set nls_language='''||t||'''';
6 for i in (select * from nls_session_parameters where parameter in ('NLS_DATE_LANGUAGE','NLS_SORT') order by parameter desc)
7 loop
8 s:=s||lpad(i.parameter,20)||'='||rpad(i.value,20);
9 end loop;
10 return s||' example: '||to_char(trunc(sysdate,'yy')-1,'dl');
11 end;
12 select rpad(value,20)||nls_language_defaults(value) "LANGUAGE default NLS settings" from v$nls_valid_values where parameter='LANGUAGE' order by 1
13 /
 
LANGUAGE default NLS settings
-----------------------------------------------------------------------------------------------------------------------------------------------
ALBANIAN NLS_SORT=GENERIC_M NLS_DATE_LANGUAGE=ALBANIAN example: E Shtunë, Dhjetor 31, 2016
AMERICAN NLS_SORT=BINARY NLS_DATE_LANGUAGE=AMERICAN example: Saturday, December 31, 2016
AMHARIC NLS_SORT=GENERIC_M NLS_DATE_LANGUAGE=AMHARIC example: ቅዳሜ, ዲሴምበር 31, 2016
ARABIC NLS_SORT=ARABIC NLS_DATE_LANGUAGE=ARABIC example: السبت, ديسمبر 31, 2016
ARMENIAN NLS_SORT=GENERIC_M NLS_DATE_LANGUAGE=ARMENIAN example: Շաբա, Դեկտեմբեր 31, 2016
ASSAMESE NLS_SORT=BINARY NLS_DATE_LANGUAGE=ASSAMESE example: শনিবাৰ, ডিচেম্বৰ 31, 2016
AZERBAIJANI NLS_SORT=AZERBAIJANI NLS_DATE_LANGUAGE=AZERBAIJANI example: Şənbə, Dekabr 31, 2016
BANGLA NLS_SORT=BINARY NLS_DATE_LANGUAGE=BANGLA example: শনিবার, ডিসেম্বর 31, 2016
BELARUSIAN NLS_SORT=RUSSIAN NLS_DATE_LANGUAGE=BELARUSIAN example: Субота, Сьнежань 31, 2016
BENGALI NLS_SORT=BENGALI NLS_DATE_LANGUAGE=BENGALI example: শনিবার, ডিসেমর 31, 2016
BRAZILIAN PORTUGUESE NLS_SORT=WEST_EUROPEAN NLS_DATE_LANGUAGE=BRAZILIAN PORTUGUESE example: Sábado, Dezembro 31, 2016
BULGARIAN NLS_SORT=BULGARIAN NLS_DATE_LANGUAGE=BULGARIAN example: Събота, Декември 31, 2016
CANADIAN FRENCH NLS_SORT=CANADIAN_M NLS_DATE_LANGUAGE=CANADIAN FRENCH example: Samedi, Décembre 31, 2016
CATALAN NLS_SORT=CATALAN NLS_DATE_LANGUAGE=CATALAN example: Dissabte, Desembre 31, 2016
CROATIAN NLS_SORT=CROATIAN NLS_DATE_LANGUAGE=CROATIAN example: Subota, Prosinac 31, 2016
CYRILLIC KAZAKH NLS_SORT=GENERIC_M NLS_DATE_LANGUAGE=CYRILLIC KAZAKH example: Сенбі, Желтоқсан 31, 2016
CYRILLIC SERBIAN NLS_SORT=GENERIC_M NLS_DATE_LANGUAGE=CYRILLIC SERBIAN example: Субота, Децембар 31, 2016
CYRILLIC UZBEK NLS_SORT=GENERIC_M NLS_DATE_LANGUAGE=CYRILLIC UZBEK example: Шанба, Декабр 31, 2016
CZECH NLS_SORT=CZECH NLS_DATE_LANGUAGE=CZECH example: Sobota, Prosinec 31, 2016
DANISH NLS_SORT=DANISH NLS_DATE_LANGUAGE=DANISH example: Lørdag, December 31, 2016
DARI NLS_SORT=GENERIC_M NLS_DATE_LANGUAGE=DARI example: شنبه, دسمبر 31, 2016
DIVEHI NLS_SORT=GENERIC_M NLS_DATE_LANGUAGE=DIVEHI example: ހޮނިހިރު, ޑިސެމްބަރ 31, 2016
DUTCH NLS_SORT=DUTCH NLS_DATE_LANGUAGE=DUTCH example: Zaterdag, December 31, 2016
EGYPTIAN NLS_SORT=ARABIC NLS_DATE_LANGUAGE=EGYPTIAN example: السبت, ديسمبر 31, 2016
ENGLISH NLS_SORT=BINARY NLS_DATE_LANGUAGE=ENGLISH example: Saturday, December 31, 2016
ESTONIAN NLS_SORT=ESTONIAN NLS_DATE_LANGUAGE=ESTONIAN example: Laupäev, Detsember 31, 2016
FINNISH NLS_SORT=FINNISH NLS_DATE_LANGUAGE=FINNISH example: Lauantai, Joulukuu 31, 2016
FRENCH NLS_SORT=FRENCH NLS_DATE_LANGUAGE=FRENCH example: Samedi, Décembre 31, 2016
GERMAN NLS_SORT=GERMAN NLS_DATE_LANGUAGE=GERMAN example: Samstag, Dezember 31, 2016
GERMAN DIN NLS_SORT=GERMAN NLS_DATE_LANGUAGE=GERMAN DIN example: Samstag, Dezember 31, 2016
GREEK NLS_SORT=GREEK NLS_DATE_LANGUAGE=GREEK example: Σάββατο, Δεκέμβριος 31, 2016
GUJARATI NLS_SORT=BINARY NLS_DATE_LANGUAGE=GUJARATI example: શનિવાર, ડિસેમ્બર 31, 2016
HEBREW NLS_SORT=HEBREW NLS_DATE_LANGUAGE=HEBREW example: שבת, דצמבר 31, 2016
HINDI NLS_SORT=BINARY NLS_DATE_LANGUAGE=HINDI example: शनिवार, दिसम्बर 31, 2016
HUNGARIAN NLS_SORT=HUNGARIAN NLS_DATE_LANGUAGE=HUNGARIAN example: Szombat, December 31, 2016
ICELANDIC NLS_SORT=ICELANDIC NLS_DATE_LANGUAGE=ICELANDIC example: Laugardagur, Desember 31, 2016
INDONESIAN NLS_SORT=INDONESIAN NLS_DATE_LANGUAGE=INDONESIAN example: Sabtu, Desember 31, 2016
IRISH NLS_SORT=BINARY NLS_DATE_LANGUAGE=IRISH example: Dé Sathairn, Nollaig 31, 2016
ITALIAN NLS_SORT=WEST_EUROPEAN NLS_DATE_LANGUAGE=ITALIAN example: Sabato, Dicembre 31, 2016
JAPANESE NLS_SORT=BINARY NLS_DATE_LANGUAGE=JAPANESE example: 土曜日, 12月 31, 2016
KANNADA NLS_SORT=BINARY NLS_DATE_LANGUAGE=KANNADA example: ಶನಿವಾರ, ಡಿಸೆಂಬರ್ 31, 2016
KHMER NLS_SORT=GENERIC_M NLS_DATE_LANGUAGE=KHMER example: ថ្ងៃសៅរ៍, ធ្នូ 31, 2016
KOREAN NLS_SORT=BINARY NLS_DATE_LANGUAGE=KOREAN example: 토요일, 12월 31, 2016
LAO NLS_SORT=GENERIC_M NLS_DATE_LANGUAGE=LAO example: ວັນເສົາ, ທັນວາ 31, 2016
LATIN AMERICAN SPANI NLS_SORT=SPANISH NLS_DATE_LANGUAGE=LATIN AMERICAN SPANI example: Sábado, Diciembre 31, 2016
LATIN BOSNIAN NLS_SORT=GENERIC_M NLS_DATE_LANGUAGE=LATIN BOSNIAN example: Subota, Decembar 31, 2016
LATIN SERBIAN NLS_SORT=BINARY NLS_DATE_LANGUAGE=LATIN SERBIAN example: Subota, Decembar 31, 2016
LATIN UZBEK NLS_SORT=GENERIC_M NLS_DATE_LANGUAGE=LATIN UZBEK example: Shanba, Dekabr 31, 2016
LATVIAN NLS_SORT=LATVIAN NLS_DATE_LANGUAGE=LATVIAN example: Sestdiena, Decembris 31, 2016
LITHUANIAN NLS_SORT=LITHUANIAN NLS_DATE_LANGUAGE=LITHUANIAN example: Šeštadienis, Gruodžio 31, 2016
MACEDONIAN NLS_SORT=BINARY NLS_DATE_LANGUAGE=MACEDONIAN example: Сабота, Декември 31, 2016
MALAY NLS_SORT=MALAY NLS_DATE_LANGUAGE=MALAY example: Sabtu, Disember 31, 2016
MALAYALAM NLS_SORT=BINARY NLS_DATE_LANGUAGE=MALAYALAM example: ശനിആഴ്ച, ഡിസംബര് 31, 2016
MALTESE NLS_SORT=GENERIC_M NLS_DATE_LANGUAGE=MALTESE example: Is-Sibt, Diċembru 31, 2016
MARATHI NLS_SORT=BINARY NLS_DATE_LANGUAGE=MARATHI example: शनिवार, डिसेंबर 31, 2016
MEXICAN SPANISH NLS_SORT=WEST_EUROPEAN NLS_DATE_LANGUAGE=MEXICAN SPANISH example: Sábado, Diciembre 31, 2016
NEPALI NLS_SORT=GENERIC_M NLS_DATE_LANGUAGE=NEPALI example: शनिबार, डिसेम्बर 31, 2016
NORWEGIAN NLS_SORT=NORWEGIAN NLS_DATE_LANGUAGE=NORWEGIAN example: Lørdag, Desember 31, 2016
ORIYA NLS_SORT=BINARY NLS_DATE_LANGUAGE=ORIYA example: ଶନିବାର, ଡିସେମ୍ବର 31, 2016
PERSIAN NLS_SORT=GENERIC_M NLS_DATE_LANGUAGE=PERSIAN example: شنبه, دسامبر 31, 2016
POLISH NLS_SORT=POLISH NLS_DATE_LANGUAGE=POLISH example: Sobota, Grudzień 31, 2016
PORTUGUESE NLS_SORT=WEST_EUROPEAN NLS_DATE_LANGUAGE=PORTUGUESE example: Sábado, Dezembro 31, 2016
PUNJABI NLS_SORT=BINARY NLS_DATE_LANGUAGE=PUNJABI example: ਸ਼ਨਿਚੱਰਵਾਰ, ਦਸੰਬਰ 31, 2016
ROMANIAN NLS_SORT=ROMANIAN NLS_DATE_LANGUAGE=ROMANIAN example: Sâmbătă, Decembrie 31, 2016
RUSSIAN NLS_SORT=RUSSIAN NLS_DATE_LANGUAGE=RUSSIAN example: Суббота, Декабрь 31, 2016
SIMPLIFIED CHINESE NLS_SORT=BINARY NLS_DATE_LANGUAGE=SIMPLIFIED CHINESE example: 星期六, 12月 31, 2016
SINHALA NLS_SORT=GENERIC_M NLS_DATE_LANGUAGE=SINHALA example: සෙනසුරාදා, දෙසැම්බර් 31, 2016
SLOVAK NLS_SORT=SLOVAK NLS_DATE_LANGUAGE=SLOVAK example: Sobota, December 31, 2016
SLOVENIAN NLS_SORT=SLOVENIAN NLS_DATE_LANGUAGE=SLOVENIAN example: Sobota, December 31, 2016
SPANISH NLS_SORT=SPANISH NLS_DATE_LANGUAGE=SPANISH example: Sábado, Diciembre 31, 2016
SWAHILI NLS_SORT=GENERIC_M NLS_DATE_LANGUAGE=SWAHILI example: Jumamosi, Desemba 31, 2016
SWEDISH NLS_SORT=SWEDISH NLS_DATE_LANGUAGE=SWEDISH example: Lördag, December 31, 2016
TAMIL NLS_SORT=BINARY NLS_DATE_LANGUAGE=TAMIL example: சனிக்கிழமை, டிசம்பர் 31, 2016
TELUGU NLS_SORT=BINARY NLS_DATE_LANGUAGE=TELUGU example: శనివారం, డిసెంబర్ 31, 2016
THAI NLS_SORT=THAI_DICTIONARY NLS_DATE_LANGUAGE=THAI example: เสาร์, ธันวาคม 31, 2016
TRADITIONAL CHINESE NLS_SORT=BINARY NLS_DATE_LANGUAGE=TRADITIONAL CHINESE example: 星期六, 12月 31, 2016
TURKISH NLS_SORT=TURKISH NLS_DATE_LANGUAGE=TURKISH example: Cumartesi, Aralık 31, 2016
UKRAINIAN NLS_SORT=UKRAINIAN NLS_DATE_LANGUAGE=UKRAINIAN example: Субота, Грудень 31, 2016
VIETNAMESE NLS_SORT=VIETNAMESE NLS_DATE_LANGUAGE=VIETNAMESE example: Thứ Bảy, Tháng MườI Hai 31, 2016

NLS_LANG

Note that you can also set NLS parameters with environment variables, but they are taken into account only if NLS_LANG is set. This means that you must set NLS_LANG first, which sets the LANGUAGE and TERRITORY, and can then set, for example, NLS_DATE_FORMAT.

Here is an example.
I set NLS_DATE_FORMAT from the environment:

[oracle@VM102 data]$ export NLS_DATE_FORMAT=DL

but the date format is not used in my session:
[oracle@VM102 data]$ sqlplus -s / as sysdba <<< "select sysdate from dual;"
 
SYSDATE
---------
04-JAN-17

Now I set NLS_LANG (and you don’t need to specify a LANGUAGE, TERRITORY or CHARACTERSET, just put the separators ‘_’ and ‘.’):

[oracle@VM102 data]$ export NLS_LANG="_."

And now my NLS_DATE_FORMAT is taken into account:

[oracle@VM102 data]$ sqlplus -s / as sysdba <<< "select sysdate from dual;"
 
SYSDATE
-----------------------------
Wednesday, January 04, 2017

In general, you should set NLS_LANG on any client, to the LANGUAGE and TERRITORY you want to display to the client (or store in a file, or have managed by the application server).
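For example (the values here are purely illustrative; the character set part must match what the client actually uses):

export NLS_LANG="FRENCH_SWITZERLAND.AL32UTF8"
export NLS_DATE_FORMAT="DD.MM.YYYY HH24:MI:SS"   # honored only because NLS_LANG is set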

I wish you a Happy New Year 2017.

 


12cR2: no cardinality feedback for small queries


This follows the comments on Clemens Bleile’s post about SQL Plan Directives in Oracle 12.2, where he discovered a new behavior that is not documented. The feature is a good one: bypass the cardinality feedback overhead when the query is fast anyway.

Feature

In his post about SQL Plan Directive changes in 12cR2, Clemens Bleile determined that the new feature coming with bug 23596611 was responsible for the bypass of automatic reoptimization (the 12c cardinality feedback) in some cases.

The description suggests that the reason for the bypass of cardinality feedback is ‘small queries’:

SQL> select BUGNO,SQL_FEATURE,DESCRIPTION,OPTIMIZER_FEATURE_ENABLE from v$system_fix_control where bugno=23596611;
 
BUGNO SQL_FEATURE DESCRIPTION OPTIMIZER_FEATURE_ENABLE
---------- ------------------------------ -------------------------------------- -------------------------
23596611 QKSFM_STATS_FEEDBACK_23596611 No feedback for small queries 12.2.0.1

This is a new concept, and in the traces we know there is nothing about ‘small queries’. Here is how to get more information.

Function

First, I’m looking for the ‘small query’ text in the oracle executable:

$ strings $ORACLE_HOME/bin/oracle | grep "small query"
kkocfbCheckCardEst: No feedback for small query (BG %f, CPU %f (ms)
kkocfbCompareExecStats: skipping cost comparision forsmall query (BG %f, CPU %f (ms)

Good. It looks like there is a way to trace information about that bypass decision, and there is a clue that ‘small queries’ are evaluated against a buffer gets (BG) and CPU time threshold. Better still, we have the name of the C function that traces this: kkocfbCheckCardEst.

kkocfb probably stands for the cardinality feedback (cfb) part of the optimizer (kko) code.

Then I was looking for a way to get this trace. I did not know which event can trace that, but there is a way to get an idea of the events that are checked by a function.
The idea is described here: http://yong321.freeshell.org/computer/OracleFuncCalls.html

Event

We are not allowed to disassemble oracle code (this is in the licensing rules that you accept when you download oracle), so I’ll not do it here.
If you think you have special permission to do it, just run:
gdb oracle <<<"disas kkocfbCheckCardEst"

I can’t show the disassembled code, so here is how to get the list of the events that are checked by the kkocfbCheckCardEst function:
$ gdb oracle <<<"disas kkocfbCheckCardEst" | awk --non-decimal-data '/mov .*,%edi$/{gsub(/[$,]/," ");a=$4}/EventRdbmsErr/{printf "%d\n", a}' | sort -u
10507

The information we have about this event does not describe everything that can be traced:

$ oerr ora 10507
10507, 00000, "Trace bind equivalence logic"
// *Cause:
// *Action:

From that, I don’t know at which level to set the event, but when in doubt, and because levels are often bit flags, you can set it to the maximum, 0xFFFFFFFF:


alter session set events '10507 trace name context forever, level 4294967295';
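The messages go to the session trace file, which can be located with a standard query (not from the original post):

select value from v$diag_info where name = 'Default Trace File';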

Trace when cardinality feedback is bypassed

When I run a query whose execution does fewer than 100 buffer gets, I get the following trace with event 10507 level 512:


****** Begin CFB Hints (APA) [sql_id=a71qw8t17qpqq] xsc=0x7fca901f0e30 ctx=0x9415d4f8 ******
Dumping Hints
=============
*********** End CFB Hints (APA) ***********
kkocfbInitCardFdbkCompCtx [sql_id=a71qw8t17qpqq] monitor=y
kkocfbCheckCardEst: No feedback for small query (BG 0.000000, CPU 0.000000 (ms)

The message is clear: no feedback for a small query. The buffer gets (BG) and CPU figures show 0 here, but in my test case, increasing the size of the table, I have seen that 100 buffer gets is the threshold:


bq4fc1rdx97av is_reoptimizable: N buffer_gets: 95 cpu_time: 0
bq4fc1rdx97av is_reoptimizable: N buffer_gets: 95 cpu_time: 1000
bq4fc1rdx97av is_reoptimizable: N buffer_gets: 96 cpu_time: 1000
bq4fc1rdx97av is_reoptimizable: N buffer_gets: 96 cpu_time: 1000
bq4fc1rdx97av is_reoptimizable: N buffer_gets: 97 cpu_time: 0
bq4fc1rdx97av is_reoptimizable: N buffer_gets: 97 cpu_time: 0
bq4fc1rdx97av is_reoptimizable: N buffer_gets: 98 cpu_time: 1000
bq4fc1rdx97av is_reoptimizable: N buffer_gets: 98 cpu_time: 0
bq4fc1rdx97av is_reoptimizable: N buffer_gets: 99 cpu_time: 1000
bq4fc1rdx97av is_reoptimizable: N buffer_gets: 99 cpu_time: 0
bq4fc1rdx97av is_reoptimizable: Y buffer_gets: 100 cpu_time: 1000
bq4fc1rdx97av is_reoptimizable: Y buffer_gets: 100 cpu_time: 1000
bq4fc1rdx97av is_reoptimizable: Y buffer_gets: 101 cpu_time: 1000
bq4fc1rdx97av is_reoptimizable: Y buffer_gets: 101 cpu_time: 0
bq4fc1rdx97av is_reoptimizable: Y buffer_gets: 102 cpu_time: 0
bq4fc1rdx97av is_reoptimizable: Y buffer_gets: 102 cpu_time: 1000
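This listing can presumably be reproduced from V$SQL; a sketch (note that BUFFER_GETS and CPU_TIME are cumulated per child cursor there):

select sql_id, child_number, is_reoptimizable, buffer_gets, cpu_time
  from v$sql
 where sql_id = 'bq4fc1rdx97av';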

Trace when the query becomes reoptimizable

When the query reads 100 buffer gets or more, the cursor is marked as reoptimizable, and here is the trace:


****** Begin CFB Hints (APA) [sql_id=a71qw8t17qpqq] xsc=0x7fed7dc2ca40 ctx=0x9418f4f8 ******
Dumping Hints
=============
*********** End CFB Hints (APA) ***********
kkocfbInitCardFdbkCompCtx [sql_id=a71qw8t17qpqq] monitor=y
Reparsing due to card est...
@=0x95426550 type=1 nodeid=1 monitor=Y halias="DEMO_TABLE" loc="SEL$1" act=500 min=0 est=63 next=(nil)
kkocfbCheckCardEst [sql_id=a71qw8t17qpqq] reparse=y ecs=n efb=n ost=n fbs=n
*********** Begin Dump Context (kkocfbCheckCardEst) [sql_id=a71qw8t17qpqq cpcnt=0] ***********
@=0x95426550 type=1 nodeid=1 monitor=Y halias="DEMO_TABLE" loc="SEL$1" act=500 min=0 est=63 next=(nil)
*********** End Dump Context ***********

act=500 is the actual number of rows (A-Rows) and est=63 is the estimated one (E-Rows); this is how the misestimate is detected.

Trace when the reoptimized query is run again

At that point we can see the feedback as OPT_ESTIMATE hints in v$sql_reoptimization_hints.
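They can be queried directly; a sketch:

select sql_id, child_number, hint_text
  from v$sql_reoptimization_hints
 where sql_id = 'a71qw8t17qpqq';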

When we run the same query another time, it is parsed again with those hints, fixing the cardinality to the A-Rows of the previous execution.

Here is the trace for this second execution.


kkoarCopyCtx: [sql_id=a71qw8t17qpqq] origin=CFB old=0x63904140 new=0x7fcda3716d40 copyCnt=1 copyClient=y
**************************************************************
kkocfbCopyBestEst: Best Stats
Exec count: 1
CR gets: 279
CU gets: 4
Disk Reads: 0
Disk Writes: 0
IO Read Requests: 0
IO Write Requests: 0
Bytes Read: 0
Bytes Written: 0
Bytes Exchanged with Storage: 0
Bytes Exchanged with Disk: 0
Bytes Simulated Read: 0
Bytes Simulated Returned: 0
Elapsed Time: 6998 (us)
CPU Time: 2000 (us)
User I/O Time: 0 (us)
*********** Begin Dump Context (kkocfbCopyBestEst) **********
*********** End Dump Context ***********
kkocfbCopyCardCtx: oldCtx Dumping string mapping
----------------------
kkocfbCopyCardCtx: newCtx Dumping string mapping
----------------------
**************************************************************
kkocfbCopyCardCtx: Best stats
Exec count: 1
CR gets: 279
CU gets: 4
Disk Reads: 0
Disk Writes: 0
IO Read Requests: 0
IO Write Requests: 0
Bytes Read: 0
Bytes Written: 0
Bytes Exchanged with Storage: 0
Bytes Exchanged with Disk: 0
Bytes Simulated Read: 0
Bytes Simulated Returned: 0
Elapsed Time: 6998 (us)
CPU Time: 2000 (us)
User I/O Time: 0 (us)
**** Begin Dump Context (kkocfbCopyCardCtx - best est) ****
********************* End Dump Context ********************
*********** Begin Dump Context (Copy:old) [sql_id=a71qw8t17qpqq cpcnt=0] ***********
@=0x6446dc58 type=1 nodeid=1 monitor=Y halias="DEMO_TABLE" loc="SEL$1" act=500 min=0 est=63 next=(nil)
*********** End Dump Context ***********
*********** Begin Dump Context (Copy:new) [sql_id=a71qw8t17qpqq cpcnt=0] ***********
@=0x7fcda3716a78 type=1 nodeid=1 monitor=Y halias="DEMO_TABLE" loc="SEL$1" act=500 min=0 est=63 next=(nil)
*********** End Dump Context ***********
kkoarReparse: xsc=0x7fcda3672b98 kxsAutoReoptCtx=0x7fcda3716d40
kkocfbAddCardHints: Dumping string mapping
----------------------
******** Begin CFB Hints [sql_id=a71qw8t17qpqq] xsc=0x7fcda3672b98 ********
Dumping Hints
=============
atom_hint=(@=0x7fcda37831e8 err=0 resol=0 used=0 token=1018 org=6 lvl=3 txt=OPT_ESTIMATE (@"SEL$1" TABLE "DEMO_TABLE"@"SEL$1" ROWS=500.000000 ) )
********** End CFB Hints **********

You can see the OPT_ESTIMATE hints here.


****** Begin CFB Hints (APA) [sql_id=a71qw8t17qpqq] xsc=0x7fcda3672b98 ctx=0x8a274118 ******
Dumping Hints
=============
atom_hint=(@=0x7fcda37831e8 err=0 resol=1 used=0 token=1018 org=6 lvl=3 txt=OPT_ESTIMATE (@"SEL$1" TABLE "DEMO_TABLE"@"SEL$1" ROWS=500.000000 ) )
atom_hint=(@=0x7fcda3782d10 err=0 resol=1 used=0 token=1018 org=6 lvl=3 txt=OPT_ESTIMATE (@"SEL$1" TABLE "DEMO_TABLE"@"SEL$1" ROWS=500.000000 ) )
*********** End CFB Hints (APA) ***********
kkocfbInitCardFdbkCompCtx [sql_id=a71qw8t17qpqq] monitor=y
**************************************************************
kkocfbCopyBestEst: Best Stats
Exec count: 1
CR gets: 279
CU gets: 4
Disk Reads: 0
Disk Writes: 0
IO Read Requests: 0
IO Write Requests: 0
Bytes Read: 0
Bytes Written: 0
Bytes Exchanged with Storage: 0
Bytes Exchanged with Disk: 0
Bytes Simulated Read: 0
Bytes Simulated Returned: 0
Elapsed Time: 6998 (us)
CPU Time: 2000 (us)
User I/O Time: 0 (us)
*********** Begin Dump Context (kkocfbCopyBestEst) **********
*********** End Dump Context ***********
kkocfbCheckCardEst [sql_id=a71qw8t17qpqq] reparse=n ecs=n efb=y ost=n fbs=n

So what?

With those adaptive features it is good to be able to trace the decisions in order to understand and reproduce the problems we encounter. Event 10507 is very useful: it is the execution-time counterpart of event 10053, which explains compile-time decisions. And in the latest versions, the optimizer is more and more present at execution time.

We have always seen problems coming from cardinality feedback. Most of them come from bad statistics or a data model where there is no single optimal access plan. The CBO then keeps trying to find a better plan, and sometimes the best is the enemy of the good.
Now in 12.2 you have a very good way to avoid the cardinality feedback side effect: tune your queries and your data model so that the critical queries read less than 100 logical reads.
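
To get an idea of which cursors sit around that threshold, a quick look at v$sql can help (a rough sketch, averaging buffer gets over executions):

SQL> select sql_id, round(buffer_gets/executions) gets_per_exec
  2  from v$sql
  3  where executions > 0 and buffer_gets/executions between 80 and 120
  4  order by 2;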

 

This article, 12cR2: no cardinality feedback for small queries, appeared first on the dbi services blog.

Dealing with WSFC 2016 quarantine state and availability groups

$
0
0

I was playing with my lab environment, which includes Windows Server 2016 and SQL Server 2016, and I noticed an interesting scenario while testing cluster node failures. After simulating some network outage scenarios, I was not able to bring my cluster node back online immediately in the traditional way. A quick look at the cluster manager led me to notice something new:

blog 113 - 0 - WSFC new state

A quarantined state value … interesting! But what does it mean exactly?

According to this Microsoft blog from the Server and Management team, this is a new feature shipped with Windows Server 2016 that concerns Virtual Machine Compute Resiliency. To cut a long story short, this feature is intended to improve the response of the Windows Failover Cluster to transient failures. Initially it was designed for Hyper-V virtual machine resiliency, and it has been introduced to avoid the “flapping node” phenomenon, which may negatively impact the overall cluster health.

To illustrate the benefit of this feature, let’s go back to my failure scenario, which consisted of simulating random network failures. Experiencing such a scenario in a production environment may lead to an unstable cluster state, including quorum loss in the worst case. In fact, after 3 failure events the concerned cluster node was quarantined, as we may confirm by looking at the following new parameters (QuarantineThreshold is our concern here). The other parameter, QuarantineDuration, defines the timeframe during which the cluster node is not able to join the cluster, so basically two hours.

blog 113 - 1 - WSFC quarantine parameters
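
For reference, both values are exposed as cluster common properties and can be read directly with PowerShell (a quick check on a cluster node):

(Get-Cluster).QuarantineThreshold
(Get-Cluster).QuarantineDuration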

In my case, the WIN20169SQL16 node will not automatically rejoin the cluster until 22:56:38.

blog 113 - 2 - WSFC quarantine error log

What about my availability group? Well, no surprise here: the quarantined cluster node means a disconnected availability replica and a synchronization issue, as confirmed below:

 

blog 113 - 3 - availability group state

We may notice a corresponding error number 10060 with the message An error occurred while receiving data: ‘10060(A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.)’. There is no specific message from SQL Server about the quarantine state. From the secondary replica, I got the following sample message in the SQL Server error log:

blog 113 - 4 - secondary replica error log

SQL Server is waiting for the cluster node to start and rejoin the WSFC. In short, the overall availability health state will not change while the node stays quarantined. So depending on your context, it may be a good thing as long as you have not fixed the related issue on the concerned cluster node. Fortunately, as stated in the Microsoft documentation, it is not mandatory to wait for the quarantine period to finish: we can clear it by using the following PowerShell command:

blog 113 - 5 - WSFC start cluster node
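
For reference, the command in the screenshot above uses the new ClearQuarantine switch to bring the node back immediately (the node name is the one from my lab):

Start-ClusterNode -Name "WIN20169SQL16" -ClearQuarantine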

Here we go! The cluster health state got back to normal:

blog 113 - 6 - WSFC final state

Happy clustering!

 

 

 

 

This article, Dealing with WSFC 2016 quarantine state and availability groups, appeared first on the dbi services blog.

Oracle 12cR2 : Partitioning improvements – online conversion of a non-partitioned table to a partitioned table

$
0
0

It’s time to a new blog about partitioning improvement in 12cR2.
After auto-list partitioning and multi-column list partitioning & read-only partitions, I’ll demonstrate how we can easily convert a non-partitioned table to a partitioned table…online !

As a reminder, the way to convert a table to a partitioned table was to use the DBMS_REDEFINITION package (from 9i to 12cR1), which requires a few steps:

  1. Verify that the table is a candidate for redefinition :
    DBMS_REDEFINITION.CAN_REDEF_TABLE
  2. Create an interim partitioned table
  3. Start the redefinition :
    DBMS_REDEFINITION.START_REDEF_TABLE
  4. Copy dependent objects to the new table (grants, constraints, …) :
    DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS
  5. Synchronize the interim table (optional) :
    DBMS_REDEFINITION.SYNC_INTERIM_TABLE
  6. Complete the redefinition :
    DBMS_REDEFINITION.FINISH_REDEF_TABLE

With 12cR2, the conversion can be done in one shot :

Non-partitioned table creation :
SQL> CREATE TABLE sales_non_part
(
sales_id NUMBER(5),
sales_name VARCHAR2(30),
sales_city VARCHAR2(20),
sales_amount NUMBER(10)
);
Table created.

Data insertion :
SQL> INSERT INTO sales_non_part VALUES (1, 'JOHN', 'DELEMONT', 10);
1 row created.

SQL> INSERT INTO sales_non_part VALUES (2, 'JANE', 'BERN', 150);
1 row created.

SQL> INSERT INTO sales_non_part VALUES (3, 'JACK', 'NYON', 20);
1 row created.

Before starting the conversion, I’ll enable lock tracing to prove that the operation is done online.
You can take note of the new statement in 12cR2 to enable it:
SQL> ALTER SESSION SET events='trace[ksq] disk medium';

Session altered.

(The “old” one was: alter session set events='10704 trace name context forever, level 3';)

Let’s start the conversion of the sales_non_part table :
SQL> ALTER TABLE sales_non_part MODIFY PARTITION BY LIST (sales_city) AUTOMATIC (PARTITION p_delemont VALUES ('DELEMONT')) ONLINE;

Table altered.

Looks good. Quick check of the partitions :
SQL> SELECT partition_name, high_value FROM dba_tab_partitions WHERE table_name = 'SALES_NON_PART';

PARTITION_NAME HIGH_VALUE
-------------------- ------------------------------
P_DELEMONT 'DELEMONT'
SYS_P5537 'BERN'
SYS_P5538 'NYON'

The table is now partitioned by list and with the AUTOMATIC option.
Now, we can check the locks which were applied on the table during the conversion, firstly by finding the object IDs of the table and of the partitions:
SQL> SELECT TO_CHAR(object_id,'0XXXXXXX'), object_name, object_type FROM all_objects WHERE owner='SYS' and object_name='SALES_NON_PART' ORDER BY 2;

TO_CHAR(O OBJECT_NAME OBJECT_TYPE
--------- --------------- ------------------------------
00011CD1 SALES_NON_PART TABLE
00011CE0 SALES_NON_PART TABLE PARTITION
00011CE1 SALES_NON_PART TABLE PARTITION
00011CE2 SALES_NON_PART TABLE PARTITION

Then by looking for the generated trace file :
SQL> SELECT tracefile FROM v$process WHERE addr=(SELECT paddr FROM v$session WHERE sid=sys_context('USERENV','SID'));

TRACEFILE
----------------------------------------------------------------------
/u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_8223.trc

And finally by checking inside the file which kind of locks were held on the objects:
SQL> host cat /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_8223.trc | grep -e 00011CD1 -e 00011CE0 -e 00011CE1 -e 00011CE2;

2016-12-16 11:22:09.821*:ksq.c@9006:ksqgtlctx(): *** TM-00011CD1-00000000-00000000-00000000 mode=3 flags=0x401 why=165 timeout=0 ***
2016-12-16 11:22:09.871*:ksq.c@9006:ksqgtlctx(): *** OD-00011CD1-00000000-00000000-00000000 mode=6 flags=0x10401 why=264 timeout=0 ***
2016-12-16 11:22:10.283*:ksq.c@9006:ksqgtlctx(): *** DL-00011CE0-00000000-00000000-00000000 mode=3 flags=0x10001 why=207 timeout=0 ***
2016-12-16 11:22:10.283*:ksq.c@9006:ksqgtlctx(): *** DL-00011CE1-00000000-00000000-00000000 mode=3 flags=0x10001 why=207 timeout=0 ***
2016-12-16 11:22:10.283*:ksq.c@9006:ksqgtlctx(): *** DL-00011CE2-00000000-00000000-00000000 mode=3 flags=0x10001 why=207 timeout=0 ***

One TM lock in mode=3 (Row Exclusive) was held on the table, so DML operations remain possible during the conversion!
Conclusion : the conversion is really done online :-)

Information concerning the other locks:
OD (Online DDL) lock : used internally to allow truly online DDL operations
DL : lock to prevent index DDL during direct load
TM : synchronizes accesses to an object
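
Note that if the table had indexes, the same statement could convert them in one shot thanks to the UPDATE INDEXES clause. A sketch with a hypothetical index named sales_idx:

SQL> ALTER TABLE sales_non_part MODIFY
  2  PARTITION BY LIST (sales_city) AUTOMATIC (PARTITION p_delemont VALUES ('DELEMONT'))
  3  ONLINE
  4  UPDATE INDEXES (sales_idx LOCAL);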

 

This article, Oracle 12cR2: Partitioning improvements – online conversion of a non-partitioned table to a partitioned table, appeared first on the dbi services blog.

ORA-12801 and ORA-12154 with OracleRestart / GRID / RAC on Oracle Database 12c

$
0
0

ORA-12801 and ORA-12154 with OracleRestart / GRID / RAC on Oracle Database 12c

Symptom/Analysis:

Using Oracle 12c in a RAC environment, you may encounter the following errors:

ORA-12801: error signaled in parallel query server P000, instance rac2.localdomain:DBRAC2 (2)
ORA-12154: TNS:could not resolve the connect identifier specified

In this article, we will present an issue inspired by a real case:

olivier@olivier-ThinkPad-T550 ~ $ sqlplus sys/***@DBRAC1 as sysdba
SQL*Plus: Release 12.1.0.2.0 Production on Mon Jan 9 15:03:42 2017
Copyright (c) 1982, 2014, Oracle.  All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options
SQL> 
select count(*) from DBA_OBJECTS@DBLINK_ONE_MYHOME_TNS, SCOTT.OBJECTS;
exit
ERROR at line 1:
ORA-12801: error signaled in parallel query server P000, instance rac2.localdomain:DBRAC2 (2)
ORA-12154: TNS:could not resolve the connect identifier specified

The following queries return answers with no error:

SQL> select count(*) from DBA_OBJECTS@DBLINK_ONE_MYHOME_TNS;
COUNT(*)
----------
20342
SQL> select count(*) from SCOTT.OBJECTS;
COUNT(*)
----------
90951

Strange !
select * from A is working.
select * from B is working.
select * from A,B is not working.

Let’s check if the TNS_ADMIN Oracle environment variable is set in the session.
We will use the dbms_system.get_env function:

dbms_system.get_env()
SQL> 
set serveroutput on
DECLARE
RetVal VARCHAR2(4000);
BEGIN
dbms_system.get_env('TNS_ADMIN', RetVal);
dbms_output.put_line('TNS_ADMIN: '||RetVal);
END;
/
SQL> 2 3 4 5 6 7
TNS_ADMIN: /u01/app/MyNetworkAdmin/
PL/SQL procedure successfully completed.
TNS_ADMIN is set correctly.

The dblink is working and is correctly created:

dba_db_links
SQL> SELECT owner, db_link, username, host FROM   dba_db_links ORDER BY owner, db_link;
OWNER	DB_LINK 		  USERNAME   HOST
------- ------------------------- ---------- --------------------
PUBLIC	DBLINK_ONE_MYHOME_TNS	  SYSTEM     DBONE-MYHOME-TNS
DBMS_METADATA.GET_DDL
QL> SELECT DBMS_METADATA.GET_DDL('DB_LINK','DBLINK_ONE_MYHOME_TNS','PUBLIC') ddl from dual;
DDL
-------------------------------------------------------------------------
CREATE PUBLIC DATABASE LINK "DBLINK_ONE_MYHOME_TNS" 
   CONNECT TO "SYSTEM" IDENTIFIED BY VALUES ':1' USING 'DBONE-MYHOME-TNS'
/u01/app/MyNetworkAdmin/tnsnames.ora
DBONE-MYHOME-TNS =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = X.X.X.X)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SID = ONE)
    )
  )

So what ?
Let’s print the SQL plan:

SQL> explain plan for
2 select count(*) from DBA_OBJECTS@DBLINK_ONE_MYHOME_TNS,SCOTT.OBJECTS;
Explained
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------------------------------------
Plan hash value: 1869185832
-------------------------------------------------------------------------------------------------------------
| Id  | Operation		 | Name        | Rows  | Cost (%CPU)| Time     | TQ/Ins |IN-OUT| PQ Distrib |
-------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT	 |	       |     1 | 19591	 (2)| 00:00:01 |	|      |	    |
|   1 |  SORT AGGREGATE 	 |	       |     1 |	    |	       |	|      |	    |
|   2 |   PX COORDINATOR	 |	       |       |	    |	       |	|      |	    |
|   3 |    PX SEND QC (RANDOM)	 | :TQ10001    |     1 |	    |	       |  Q1,01 | P->S | QC (RAND)  |
|   4 |     SORT AGGREGATE	 |	       |     1 |	    |	       |  Q1,01 | PCWP |	    |
|   5 |      MERGE JOIN CARTESIAN|	       |  1810M| 19591	 (2)| 00:00:01 |  Q1,01 | PCWP |	    |
|   6 |       BUFFER SORT	 |	       |       |	    |	       |  Q1,01 | PCWC |	    |
|   7 |        PX RECEIVE	 |	       | 19911 |    65	 (0)| 00:00:01 |  Q1,01 | PCWP |	    |
|   8 | 	PX SEND BROADCAST| :TQ10000    | 19911 |    65	 (0)| 00:00:01 |	| S->P | BROADCAST  |
|   9 | 	 REMOTE 	 | DBA_OBJECTS | 19911 |    65	 (0)| 00:00:01 | DBLIN~ | R->S |	       |
|  10 |       BUFFER SORT	 |	       | 90951 | 19526	 (2)| 00:00:01 |  Q1,01 | PCWP |	    |
|  11 |        PX BLOCK ITERATOR |	       | 90951 |    15	 (0)| 00:00:01 |  Q1,01 | PCWC |	    |
|  12 | 	TABLE ACCESS FULL| OBJECTS     | 90951 |    15	 (0)| 00:00:01 |  Q1,01 | PCWP |	    |
-------------------------------------------------------------------------------------------------------------
Remote SQL Information (identified by operation id):
----------------------------------------------------
9 - SELECT 0 FROM "DBA_OBJECTS" "DBA_OBJECTS" (accessing 'DBLINK_ONE_MYHOME_TNS' )
Note
-----
- Degree of Parallelism is 4 because of table property
29 rows selected.
 

Parallelism is enabled for the query.
Let’s disable it !

SQL> ALTER SESSION disable parallel query;

Let’s run our query again:

SQL> select count(*) from DBA_OBJECTS@DBLINK_ONE_MYHOME_TNS,SCOTT.OBJECTS;
  COUNT(*)
----------
1850125242

It is working now, but without using parallelism features.

Problem:

In fact, the problem comes from the environment variable TNS_ADMIN that is not (or not correctly) set for the PX server processes:

When parallelism is enabled, the PX servers are doing all the work (as shown in our parallel plan above).
On the server, PX server processes can easily be identified at the OS level. On Linux they are the ora_p*** processes:
Source: http://www.oracle.com/technetwork/database/bi-datawarehousing/twp-parallel-execution-fundamentals-133639.pdf

[oracle@rac1 admin]$ ps -ef | grep ora_p0
oracle 25803 1 0 11:21 ? 00:00:00 ora_p000_DBRAC1
oracle 25805 1 0 11:21 ? 00:00:00 ora_p001_DBRAC1
oracle 25807 1 0 11:21 ? 00:00:00 ora_p002_DBRAC1
oracle 25809 1 0 11:21 ? 00:00:00 ora_p003_DBRAC1
oracle 28021 1 0 14:25 ? 00:00:00 ora_p004_DBRAC1
oracle 28023 1 0 14:25 ? 00:00:00 ora_p005_DBRAC1
oracle 28025 1 0 14:25 ? 00:00:00 ora_p006_DBRAC1
oracle 28027 1 0 14:25 ? 00:00:00 ora_p007_DBRAC1
oracle 28029 1 0 14:25 ? 00:00:00 ora_p008_DBRAC1
oracle 28031 1 0 14:25 ? 00:00:00 ora_p009_DBRAC1
oracle 28033 1 0 14:25 ? 00:00:00 ora_p00a_DBRAC1
oracle 28035 1 0 14:25 ? 00:00:00 ora_p00b_DBRAC1
oracle 28037 1 0 14:25 ? 00:00:00 ora_p00c_DBRAC1
oracle 28039 1 0 14:25 ? 00:00:00 ora_p00d_DBRAC1
oracle 28041 1 0 14:25 ? 00:00:00 ora_p00e_DBRAC1
oracle 28047 1 0 14:25 ? 00:00:00 ora_p00f_DBRAC1

The file /proc/25803/environ contains the Oracle environment variables set for the P000 Process:
The command “sudo strings /proc/25803/environ | grep TNS_” gives no result.
Source: https://blogs.oracle.com/myoraclediary/entry/how_to_check_environment_variables

If we check the environ file of the listener processes, we can see that the TNS_ADMIN is correctly set.

[oracle@rac1 ~]$ ps -ef | grep lsn
oracle 2731 12705 0 14:49 pts/0 00:00:00 grep --color=auto lsn
oracle 4176 1 0 08:57 ? 00:00:00 /u01/app/grid/bin/tnslsnr MGMTLSNR -no_crs_notify -inherit
oracle 4309 1 0 08:57 ? 00:00:00 /u01/app/grid/bin/tnslsnr LISTENER_SCAN2 -no_crs_notify -inherit
oracle 4320 1 0 08:57 ? 00:00:00 /u01/app/grid/bin/tnslsnr LISTENER_SCAN3 -no_crs_notify -inherit
oracle 9059 1 0 12:01 ? 00:00:00 /u01/app/grid/bin/tnslsnr LISTENER -no_crs_notify -inherit
[oracle@rac1 ~]$ sudo strings /proc/9059/environ | grep TNS_
TNS_ADMIN=/u01/app/MyNetworkAdmin

This small environ.sh script for the lazy ones lists the Oracle environment variables set for pmon and all the PX server processes:

environ.sh
#!/bin/bash
# Dump the environment variables of the pmon and PX server (ora_p00*) processes.
mypspmon='/tmp/pspmon'
# the [p]/[o] bracket trick prevents grep from matching its own process
myprocess1='[p]mon_DB'
myprocess2='[o]ra_p00'
# collect the PIDs of the matching processes
ps -ef | grep $myprocess1 | awk '{print $2}' > $mypspmon
ps -ef | grep $myprocess2 | awk '{print $2}' >> $mypspmon
# print the environment of each PID (the process may already be gone, hence the test)
while read ligne;
do
myenvironfile="/proc/${ligne}/environ"
if [ -e "$myenvironfile" ]
then
strings $myenvironfile
fi
done < $mypspmon

“sudo ./environ.sh | grep TNS” gives no result.
The TNS_ADMIN Oracle environment variable is not set for the PX server processes that are spawned to handle the parallel queries.
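
As a quicker spot-check for a single PX server process, something like this also works (a sketch, assuming pgrep is available and adjusting the process name to your instance):

sudo strings /proc/$(pgrep -f ora_p000_DBRAC1)/environ | grep TNS_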

Solution:

  1. Be sure to set the TNS_ADMIN with srvctl !!
  2. Source: https://docs.oracle.com/cd/B19306_01/rac.102/b14197/srvctladmin.htm#i1010191

    [oracle@rac1 ~]$ srvctl setenv listener -l LISTENER -t TNS_ADMIN='/u01/app/MyNetworkAdmin/'
    [oracle@rac1 ~]$ srvctl setenv database -d DBRAC -t TNS_ADMIN='/u01/app/MyNetworkAdmin/'
    

    Let’s check if the variables are correctly set:

    [oracle@rac1 ~]$ srvctl getenv listener -l LISTENER -t TNS_ADMIN
    
    LISTENER:
    TNS_ADMIN=/u01/app/MyNetworkAdmin/
    
    [oracle@rac1 ~]$ srvctl getenv database -d DBRAC -t TNS_ADMIN
    
    DBRAC:
    TNS_ADMIN=/u01/app/MyNetworkAdmin/
    

    TNS_ADMIN seems to be correctly set but we still receive ORA-12801 and ORA-12154 errors.
    Moreover, “sudo ./environ.sh | grep TNS” still gives no result!
    You need to restart the database to thoroughly set up the TNS_ADMIN environment variable!!

    [oracle@rac1 admin]$ srvctl stop listener -l LISTENER
    [oracle@rac1 admin]$ srvctl start listener -l LISTENER
    [oracle@rac1 admin]$ srvctl stop database -d DBRAC
    [oracle@rac1 admin]$ srvctl start database -d DBRAC
    
    [oracle@rac1 admin]$ srvctl status listener -l LISTENER
    
    Listener LISTENER is enabled
    Listener LISTENER is running on node(s): rac1,rac2
    
    [oracle@rac1 admin]$ srvctl status database -d DBRAC
    
    Instance DBRAC1 is running on node rac1
    Instance DBRAC2 is running on node rac2
    

    Now our “sudo ./environ.sh | grep TNS” command lists the TNS_ADMIN environment variable used by our pmon and PX server processes.

    TNS_ADMIN=/u01/app/MyNetworkAdmin/
    TNS_ADMIN=/u01/app/MyNetworkAdmin/
    TNS_ADMIN=/u01/app/MyNetworkAdmin/
    TNS_ADMIN=/u01/app/MyNetworkAdmin/
    [...]
    TNS_ADMIN=/u01/app/MyNetworkAdmin/
    

    And our query is working using parallelism features.

  3. Other solutions exist:
  4. One of them would consist in modifying USR_ORA_ENV cluster resources attribute values with crsctl

    [oracle@rac1 admin]$ crsctl modify resource ora.dbrac.db -attr "USR_ORA_ENV=TNS_ADMIN=/u01/app/MyNetworkAdmin/"
    
    CRS-4995: The command 'Modify resource' is invalid in crsctl. Use srvctl for this command.
    

    In 12c, just add the flag -unsupported at the end to avoid CRS-4995 errors.
    [oracle@rac1 admin]$ crsctl modify resource ora.dbrac.db -attr "USR_ORA_ENV=TNS_ADMIN=/u01/app/MyNetworkAdmin/" -unsupported

Comments:

  1. This issue is inspired by a real case. You can easily reproduce it.
  2. The SCOTT.OBJECTS table was created like this:

    SCOTT.OBJECTS
    
    create table SCOTT.OBJECTS as select * from DBA_OBJECTS;
    alter table SCOTT.OBJECTS parallel 32;
    

    Add an entry in your tnsnames.ora and create a dblink.
    You can remove the TNS_ADMIN environment variable from the database with the “srvctl unsetenv database -d database -t TNS_ADMIN” command. And restart.

  3. On the server, at the OS level, if ORACLE_HOME is set and TNS_ADMIN is not set, then $ORACLE_HOME/network/admin is used to locate tnsnames.ora.
  4. So it can be interesting to use symlinks (ln -s) in $ORACLE_HOME/network/admin to point to and use the same tnsnames.ora file when using sqlplus from the server.

    [oracle@rac1 admin]$ pwd
    
    /u01/app/oracle/product/12.1.0.2/db_1/network/admin
    
    [oracle@rac1 admin]$ ll
    
    total 0
    lrwxrwxrwx 1 oracle oinstall 36 Jan 10 09:53 listener.ora -> /u01/app/MyNetworkAdmin/listener.ora
    lrwxrwxrwx 1 oracle oinstall 34 Jan 10 09:53 sqlnet.ora -> /u01/app/MyNetworkAdmin/sqlnet.ora
    lrwxrwxrwx 1 oracle oinstall 36 Jan 10 09:53 tnsnames.ora -> /u01/app/MyNetworkAdmin/tnsnames.ora
    

But remember: although the TNS_ADMIN environment variable is set at the OS level when starting the instance, you need to set TNS_ADMIN with srvctl or crsctl beforehand!!
If not, you may encounter ORA-12154 errors.
And not only for PDML!!

 

This article, ORA-12801 and ORA-12154 with OracleRestart / GRID / RAC on Oracle Database 12c, appeared first on the dbi services blog.

Dataguard Oracle 12.2 : Support for Multiple Observers

$
0
0

With Oracle 12cR2 it is now possible to configure multiple observers within a single Oracle Data Guard broker configuration.
Multiple observers provide an immediate benefit for High Availability. If one observer fails, there are additional observers that can continue to monitor the status of the configuration.
In this blog we are going to talk about this new feature.

Our configuration has 3 servers:
primaserver.localdomain with db_unique_name ORCL_SITE
standserver1.localdomain with db_unique_name ORCL_SITE1
standserver2.localdomain with db_unique_name ORCL_SITE2

With Oracle 12cR2 you can register up to three observers to monitor a single Data Guard broker configuration. Each observer is identified by a name that you supply when you issue the START OBSERVER command.

Let’s start with a configuration where the Fast Start Failover is disabled

DGMGRL> show configuration;
Configuration - ORCL_DR
Protection Mode: MaxProtection
Members:
ORCL_SITE - Primary database
ORCL_SITE1 - Physical standby database
ORCL_SITE2 - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS (status updated 11 seconds ago)
DGMGRL>

And let’s start an observer on each server

DGMGRL> connect sys/root@orcl_site
Connected to "ORCL_SITE"
Connected as SYSDBA.
DGMGRL> start observer prima_observer
[W000 01/05 10:46:03.89] FSFO target standby is (empty)


DGMGRL> connect sys/root@ORCL_SITE1
Connected to "ORCL_SITE1"
Connected as SYSDBA.
DGMGRL> start observer standserver1_observer
[W000 01/05 10:47:19.22] FSFO target standby is (empty)


DGMGRL> connect sys/root@orcl_site2
Connected to "ORCL_SITE2"
Connected as SYSDBA.
DGMGRL> start observer standserver2_observer
[W000 01/05 10:48:25.86] FSFO target standby is (empty)
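
Note that these observers run in the foreground of their DGMGRL sessions. In 12.2 an observer can also be started in the background; a sketch of the syntax (the file locations are assumptions):

DGMGRL> start observer prima_observer in background file is /tmp/fsfo_prima.dat logfile is /tmp/observer_prima.log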

When fast-start failover is disabled the observer does not need to coordinate fast-start failover, so all observers are equivalent and have the same functionality as shown below

DGMGRL> show observer;
Configuration - ORCL_DR
Fast-Start Failover: DISABLED
Observer "prima_observer"
Host Name: primaserver.localdomain
Last Ping to Primary: 17 seconds ago
Observer "standserver1_observer"
Host Name: standserver1.localdomain
Last Ping to Primary: 2 seconds ago
Observer "standserver2_observer"
Host Name: standserver2.localdomain
Last Ping to Primary: 25 seconds ago
DGMGRL>

Now let’s enable the Fast Start Failover

DGMGRL> enable FAST_START FAILOVER
Enabled.


DGMGRL> show configuration
Configuration - ORCL_DR
Protection Mode: MaxProtection
Members:
ORCL_SITE - Primary database
ORCL_SITE1 - (*) Physical standby database
ORCL_SITE2 - Physical standby database
Fast-Start Failover: ENABLED
Configuration Status:
SUCCESS (status updated 43 seconds ago)
DGMGRL>

Now that FSFO is enabled, we have a master observer and backup observers. The master observer works in the same manner as a single observer worked prior to the advent of multiple observers in Oracle Database 12c Release 2 (12.2.0.1).
Only the master observer can coordinate fast-start failover with the Data Guard broker. All other registered observers are considered to be backup observers. The master observer is randomly chosen by the primary and the standby.

DGMGRL> show observer;
Configuration - ORCL_DR
Primary: ORCL_SITE
Target: ORCL_SITE1
Observer "prima_observer" - Master
Host Name: primaserver.localdomain
Last Ping to Primary: 3 seconds ago
Last Ping to Target: 0 seconds ago
Observer "standserver1_observer" - Backup
Host Name: standserver1.localdomain
Last Ping to Primary: 2 seconds ago
Last Ping to Target: 3 seconds ago
Observer "standserver2_observer" - Backup
Host Name: standserver2.localdomain
Last Ping to Primary: 2 seconds ago
Last Ping to Target: 2 seconds ago
DGMGRL>

If we try to stop the master observer, we will get an error

DGMGRL> stop observer prima_observer
Error: ORA-16878: master observer cannot be stopped
Failed.
DGMGRL>

It means that if for some reason we want to stop the master observer, we first have to promote a backup observer as the new master. The SET MASTEROBSERVER command is used to manually change the master observer.

DGMGRL> SET MASTEROBSERVER to standserver2_observer
Sent the proposed master observer to the data guard broker configuration.
Please run SHOW OBSERVER to see if master observer switch actually happens.
DGMGRL>

We can see that now the new master is standserver2_observer

DGMGRL> show observer;
Configuration - ORCL_DR
Primary: ORCL_SITE
Target: ORCL_SITE1
Observer "standserver2_observer" - Master
Host Name: standserver2.localdomain
Last Ping to Primary: 0 seconds ago
Last Ping to Target: 1 second ago
Observer "prima_observer" - Backup
Host Name: primaserver.localdomain
Last Ping to Primary: 0 seconds ago
Last Ping to Target: 2 seconds ago
Observer "standserver1_observer" - Backup
Host Name: standserver1.localdomain
Last Ping to Primary: 1 second ago
Last Ping to Target: 1 second ago
DGMGRL>

Now if we crash the server hosting the master observer (i.e. standserver2.localdomain), one of the backup observers will automatically become the new master.

DGMGRL> show observer;
Configuration - ORCL_DR
Primary: ORCL_SITE
Target: ORCL_SITE1
Observer "prima_observer" - Master
Host Name: primaserver.localdomain
Last Ping to Primary: 2 seconds ago
Last Ping to Target: 1 second ago
Observer "standserver1_observer" - Backup
Host Name: standserver1.localdomain
Last Ping to Primary: 2 seconds ago
Last Ping to Target: 0 seconds ago
Observer "standserver2_observer" - Backup
Host Name: standserver2.localdomain
Last Ping to Primary: 119 seconds ago
Last Ping to Target: 84 seconds ago

What happens if we try to start a fourth observer?

DGMGRL> connect sys/root@ORCL_SITE1
Connected to "ORCL_SITE1"
Connected as SYSDBA.
DGMGRL> start observer fourth_observer
Error: ORA-16647: could not start more than three observers
Failed to start the observer.
DGMGRL>

As we see above, we cannot have more than 3 observers.

One important thing about multiple observers is that they only bring more availability for the observer itself, not for the FSFO configuration. This means that if we lose the primary database and the master observer at the same time, no fast-start failover will be initiated.

Let’s represent this with some configurations:
Case1: Master observer and primary database are located in the same datacenter
observer1
Fast Start Failover will not occur if we lose Primary Datacenter

Case2: Master observer and primary database are located in different datacenters
observer2
Fast Start Failover will occur if we lose Primary datacenter

Case3: Master observer and primary database are located in different datacenters
observer3
Fast Start Failover will occur if we lose Primary datacenter

Conclusion: Support for multiple observers is a very nice feature. When using it, the master observer and the primary database should not be located in the same datacenter.

 

This article, Dataguard Oracle 12.2: Support for Multiple Observers, appeared first on the dbi services blog.


dbi Tail ranked as the 4th best alternative !

$
0
0

Dear colleagues,

Since the publication of the dbi Tail one year ago, lots of people have downloaded it, and some reporters wrote articles about it.

We have received a lot of positive feedback, and we would like to thank you all for your attention!

 

The best article is from the Softpedia website, and the author wrote an interesting description of the tool, as you can see:

http://www.softpedia.com/get/System/File-Management/dbi-Tail.shtml

 

More interestingly, in June the tool was published alongside the reference on the “AlternativeTo” website. Since then, the tail from dbi services has been moving higher and higher almost without interruption.

At one point in time, it was ranked as the 3rd best alternative to the reference. Today, it is in the 4th position. What a good feeling!

 

Over the last year the development of the tail continued, and new features were integrated. The biggest one allows SSH connections using public key authentication, making it possible to connect without any password, just by using a trusted user’s public key.

The other one is the ability to quickly navigate between the several “tree” files present in your “etc” folder. This enhances the user experience by grouping the monitored log files for one context and switching to another context (another “tree” file) in just one click.

 

In the fresh 1.3 version of dbi tail, you can also benefit from some bug fixes and enhancements, especially for the Linux environment.

Enjoy continuing to use dbi tail, and do not hesitate to provide your feedback or to like it as well:

http://alternativeto.net/software/baretail/

 

dbi tail is a living open source project, and will continue to be in the future!

https://github.com/pschweitz/DBITail/releases

 

Cheers,

Philippe

 

tail1.3

 

 

This article, dbi Tail ranked as the 4th best alternative!, appeared first on the dbi services blog.

2016 is over and it was great, 2017 will be even more exciting in the open source area

$
0
0

Disclaimer: This will not be a technical post at all. The goal is to provide a short overview of what we did in 2016 and what we plan to do in 2017 when it comes to our Open Infrastructure division. We’ll start with a few numbers then look at the events we did and some of the projects we completed in 2016. Here we go …

We had a total number of 90 blog posts in 2016 covering the following areas: Linux, MySQL/MariaDB, MongoDB, Cassandra, Mirantis, PostgreSQL, EnterpriseDB, Docker, Ansible, Amazon AWS. Here is the list for 2016:

installing-edb-postgres-advanced-server-9-5
the-postgres-plus-cloud-database
avoiding-access-to-the-public-schema-in-postgresql
using-the-official-postgresql-yum-repositories
external-tables-in-postgresql
postgresql-on-amazon-rds-loading-the-beast
postgresql-on-amazon-rds-adding-a-replica-to-the-beast
postgresql-on-amazon-rds-configuring-the-beast
postgresql-on-amazon-rds-securing-the-beast
postgresql-on-amazon-rds-setting-up-the-beast
the-dbi-services-postgresql-reference-architecture-2-the-community-approach
edb-postgres-advanced-server-9-5-new-features-profiles
install-pljava-in-postgresql-9-5-1
feeding-blogsrrs-items-directly-into-your-postgresql-database
pre-warming-the-buffer-cache-in-postgresql
transactional-ddl
install-pljava-in-postgres-plus-advanced-server-9-5
launching-a-vm-with-the-amazon-aws-command-line-tools
linux-how-to-check-the-exit-status-of-several-piped-commands
lvm-how-to-extend-a-volume-group
linux-quick-tip-what-is-the-local-time-in-kolkata
mongodb-installation
introduction-to-mongodb
getting-started-with-ansible-preparations
the-almost-same-sample-schema-for-all-major-relational-databases-4-mssql
connecting-your-postgresql-instance-to-a-mariadbmysql-instance
the-almost-same-sample-schema-for-all-major-relational-databases-3-mysqlmariadb
maintenance-scenarios-with-edb-failover-manager-3-witness-node
maintenance-scenarios-with-edb-failover-manager-2-primary-node
the-almost-same-sample-schema-for-all-major-relational-databases-2-oracle
the-almost-same-sample-schema-for-all-major-relational-databases-1-postgresql
stay-tuned-with-kernel-parameters
maintenance-scenarios-with-edb-failover-manager-1-standby-node
a-look-at-postgresql-9-6-psql-gexec-2
a-look-at-postgresql-9-6-killing-idle-transactions-automatically
a-look-at-postgresql-9-6-progress-reporting-for-vacuum-operations
a-look-at-postgresql-9-6-the-wait-interface
happy-birthday-postgresql
connecting-your-postgresql-instance-to-a-microsoft-sql-server-instance
interested-in-the-most-advanced-open-source-database-where-you-can-meet-us-in-june
postgresql-as-a-central-reporting-hub-yes-of-course
swiss-pgday-2016-slides-are-online
which-parameter-changes-do-require-a-restart-of-my-postgresql-instance
shrinking-oracle-vm-virtualbox-with-zerofree
elasticsearch-kibana-logstash-and-filebeat-centralize-all-your-database-logs-and-even-more
getting-started-with-ansible-creating-the-postgresql-instance
getting-started-with-ansible-download-the-postgresql-sources-compile-and-install
getting-started-with-ansible-installing-os-packages-creating-groups-and-users/
gettin-ansible-up-and-running-on-a-sles-12-sp1-host
how-to-do-a-filesystem-resize-ext3ext4-on-redhat-running-on-vmware
running-postgresql-on-zfs-on-linux-fun-with-snapshots-and-clones
running-postgresql-on-zfs-on-linux
connecting-your-postgresql-instance-to-an-oracle-database-debian-version
the-dbi-open-infrastructure-technology-division
sharding-with-postgresql
what-the-hell-are-these-template0-and-template1-databases-in-postgresql
securing-your-connections-to-postgresql-by-using-ssl
edb-failover-manager-2-1-two-new-features
edb-failover-manager-2-1-upgrading
disable-red-hat-7-transparent-hugepages
auditing-in-postgresql
understanding-row-level-security-on-postgresql
mariadb-audit-plugin
how-to-install-mirantis-openstack-9-0-using-virtualbox-part-1-3
how-to-install-mirantis-openstack-9-0-using-virtualbox-part-1-2
how-to-install-mirantis-openstack-9-0-using-virtualbox-part-1
how-to-patch-postgres-plus-advanced-server-in-a-standby-configuration
running-a-cassandra-cluster-in-a-single-server
how-to-patch-postgres-plus-advanced-server
apache-cassandra-overview
running-postgresql-on-zfs-on-linux-compression
can-i-do-it-with-postgresql-5-generating-ddl-commands
can-i-do-it-with-postgresql-4-external-tables
can-i-do-it-with-postgresql-3-tablespaces
can-i-do-it-with-postgresql-2-dual
can-i-do-it-with-postgresql-1-restore-points
mongodb-installation-on-windows
managing-my-amazon-web-services-redhat-instance
linux-instance-in-amazon-web-services-aws
edb-postgres-advanced-server-9-6-beta-released
can-i-do-it-with-postgresql-8-transportable-tablespaces
getting-started-with-docker-2-building-your-own-base-image
getting-started-with-docker-1-overview-and-installation
enterprisedb-backup-and-recovery-tool-bart
installing-postgresql-9-6-1-with-enterprisedb-installer
can-i-do-it-with-postgresql-7-partitioning
oel-7-project-quotas-on-oracle-homes-with-xfs-on-oracle-linux-7
oel-7-how-to-disable-ipv6-on-oracle-linux-7
from-mysql-oracle-to-postgres-using-the-edb-migration-toolkit
can-i-do-it-with-postgresql-6-server-programming

Quite a lot of information was shared. For 2017 you should see more posts in the Cloudera and OpenStack areas, as well as on the private DBaaS service we are currently working on.

But obviously we did not only write blogs :) There have been two PostgreSQL events organized by us, we sponsored and had two talks at the Swiss PGDay 2016 and we had a talk at the IT Tage 2016 in Frankfurt.
For 2017 we’ll be organizing an Oracle · Open Infrastructure · Cloudera & Big Data Appliance event, we’ll be at the Swiss PGDay 2017 and probably we’ll be at the IT Tage in Frankfurt again. If nothing goes wrong we’ll be at the M|17 in New York. This is what currently is scheduled, maybe there will be even more.

For the projects we did in 2016 there was a lot of stuff: We did highly available EnterpriseDB PPAS projects, we did highly available community PostgreSQL projects, we did a MongoDB project and we even did a PostgreSQL project on Windows. We implemented Ansible at a customer to automate the PostgreSQL deployments. There have been several MariaDB and MySQL Galera cluster implementations as well as MySQL Cluster Control setups. Out of all these projects we got several SLAs signed for MySQL as well as for PostgreSQL, which is a great thing as this proves that our customers trust us. On top of that we certified 7 of our service desk people on EnterpriseDB Postgres Plus (EDB Postgres Advanced Server 9.5 Professional). Nobody in the D-A-CH region has achieved that so far.
For 2017 there are already some projects in the pipe: One huge migration from another vendor to PostgreSQL/EDB, and a migration from another vendor to community PostgreSQL. We will assist a customer in building a highly available, extremely flexible PostgreSQL deployment for their web infrastructure. For two other projects we’ll migrate existing PostgreSQL deployments to PostgreSQL 9.5, and there will be one huge 6TB migration from PostgreSQL 9.2 to a recent release. And this is just what is known today.

Workshops: In 2016 we released our PostgreSQL DBA Essentials Workshop which was given four times since then. The MySQL DBA Essentials workshop was also given several times last year.
For 2017 the MySQL workshop will be updated and we plan to release a brand new MongoDB workshop.

In 2016 we released DMK for PostgreSQL and NoSQL.
For 2017 all DMK releases will be updated with the latest and greatest we learned at our customers. One big announcement will be an appliance based on open source products. I cannot tell you much right now, but there will be some news in the coming weeks.

You can see that we invested a lot in these areas in 2016, and 2017 will be no exception. The trend is even going up.

Finally, my personal wish list for 2017: Dare to be more open, more flexible, and to have fun when you work in your IT landscape. Great things can be built based on open source tools.

PS: To my colleagues: I am sure I forgot one or the other thing we did in 2016 which needs to be mentioned here. Just leave a comment to complete the list :)

Happy new 2017
Daniel

 

This article, 2016 is over and it was great, 2017 will be even more exciting in the open source area, appeared first on the dbi services blog.

Introducing high-availability with SQL Server on Linux

$
0
0

A couple of months ago, I did my first installation of SQL Server on Linux. I wrote about it in this blog post. So now it’s time to start the new year by talking about high availability on Linux with SQL Server vNext. Running standalone instances will be suitable for some scenarios, but I guess introducing high availability for critical environments still remains mandatory. Currently, the CTP1 supports installing a SQL Server Failover Cluster Instance on top of the RHEL HA Add-On based on Pacemaker. This is a good start, although I hope to also see availability groups in the future (maybe in a future CTP version). In this blog post I will not go into the details of my installation process, because some steps will certainly change over time, but I would like to share my notes (or feelings) about this new way to achieve high availability in the SQL Server world on Linux.

I performed the installation by using the same infrastructure described in the Microsoft documentation. As usual, my environment is fully virtualized with Hyper-V.

 

blog 114 - 1 - sql linux fci architecture

So, the first step consisted of installing 3 virtual machines based on the CentOS 7 distribution (the Microsoft documentation is intended for RHEL 7, but CentOS 7 is perfectly suitable in our case). The first two concerned the cluster and SQL Server; I performed the same installation process to install SQL Server on Linux, no change here. The third one concerned the NFS server. So let’s look at my storage configuration:

[mikedavem@nfs ~]$ sudo fdisk -l | grep -i sd
Disk /dev/sda: 21.5 GB, 21474836480 bytes, 41943040 sectors
/dev/sda1   *        2048     2099199     1048576   83  Linux
/dev/sda2         2099200    41943039    19921920   8e  Linux LVM
Disk /dev/sdb: 10.7 GB, 10737418240 bytes, 20971520 sectors
/dev/sdb1            2048    20971519    10484736   83  Linux
Disk /dev/sdc: 10.7 GB, 10737418240 bytes, 20971520 sectors
/dev/sdc1            2048    20971519    10484736   83  Linux

 

The corresponding partition /dev/sdb1 (ext4 formatted) needs to be mounted automatically by the system and will be used as shared storage by the NFS server afterwards.

[mikedavem@nfs ~]$ cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Thu Jan 12 21:46:34 2017
#
…

/dev/mapper/cl-root     /                       xfs     defaults        0 0
UUID=e4f5fc0b-1fd4-4e18-b655-a76b87778b73 /boot                   xfs     defaults        0 0
/dev/mapper/cl-swap     swap                    swap    defaults        0 0
/dev/sdb1       /mnt/sql_data_nfs       ext4    auto,user,rw    0 0
/dev/sdc1       /mnt/sql_log_nfs        ext4    auto,user,rw    0 0

 

Then my NFS server will expose the shared directory /mnt/sql_data_nfs to the cluster layer.

[mikedavem@nfs ~]$ cat /etc/exports
/mnt/sql_data_nfs 192.168.5.0/24(rw,sync,no_subtree_check,no_root_squash)
/mnt/sql_log_nfs  192.168.5.0/24(rw,sync,no_subtree_check,no_root_squash)

 

We will focus only on the directory /mnt/sql_data_nfs in this case.

[mikedavem@nfs ~]$ showmount -e
Export list for nfs.dbi-services.com:
/mnt/sql_log_nfs  192.168.5.0/24
/mnt/sql_data_nfs 192.168.5.0/24

 

That’s it. My directory is ready to be used by my SQL Server cluster nodes as shared storage for my databases. Let’s continue with the second step: we need to install the underlying cluster infrastructure, which includes components such as Pacemaker (the resource manager) and Corosync (the communication layer between cluster nodes).

blog 114 - 2 - packemaker architecture

                                        From Microsoft documentation

By reading the Pacemaker documentation on the web, I could find concepts similar to those we have with Microsoft and the Windows Failover Cluster feature. After facing some configuration issues, here is my final configuration:

  • Two cluster nodes (linux01.dbi-services.test / linux02.dbi-services.test)
  • Two resources that concern my SQL Server FCI (sqllinuxfci resource + virtualip resource)
[mikedavem@linux01 ~]$ sudo pcs status

Cluster name: linux_cluster
Stack: corosync
Current DC: linux01.dbi-services.test (version 1.1.15-11.el7_3.2-e174ec8) - partition with quorum

Last updated: Sat Jan 14 19:53:55 2017          Last change: Sat Jan 14 17:28:36 2017 
by root via crm_resource on linux01.dbi-services.test

2 nodes and 2 resources configured

Online: [ linux01.dbi-services.test linux02.dbi-services.test ]

Full list of resources:

 sqllinuxfci    (ocf::sql:fci): Started linux02.dbi-services.test
 virtualip      (ocf::heartbeat:IPaddr2):       Started linux02.dbi-services.test

...

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

 

As said previously, I faced some issues during the cluster installation process. The first one concerned a typo in the Microsoft documentation (at least the command did not work in my case): I was not able to create my SQL Server resource after installing the mssql-server-ha package. Indeed, according to the Microsoft documentation we need to create a SQL Server resource based on the ocf:mssql:fci resource agent. However, after some investigation, I was able to figure out that this definition doesn’t exist in the current OCF resource agent folder hierarchy (see below). In my case, I had to change the definition to ocf:sql:fci.

[mikedavem@linux01 ~]$ ls -l /usr/lib/ocf/resource.d/

total 16
drwxr-xr-x. 2 root root 4096 Jan 12 19:31 heartbeat
drwxr-xr-x. 2 root root 4096 Jan 12 19:31 openstack
drwxr-xr-x. 2 root root 4096 Jan 12 19:31 pacemaker
drwxr-xr-x. 2 root root 4096 Jan 14 10:55 sql

[mikedavem@linux01 ~]$ ls -l /usr/lib/ocf/resource.d/sql/
total 20
-rw-r--r--. 1 root root  3473 Jan 14 10:58 cfg
-rwxr-xr-x. 1 root root 15979 Dec 16 02:09 fci

 

Let’s say it was also a good opportunity to understand what an OCF resource agent is. Coming from the Microsoft world, I quickly figured out that OCF resource agents correspond in fact to what we call Windows Failover Cluster resource DLLs. In addition, according to the Linux documentation, I noticed similar concepts like the functions and entry points that a Windows resource DLL is expected to implement. Very interesting!

The second issue concerned a strange behavior when the failover of my SQL Server resource occurred. I first noticed the following messages:

Failed Actions:
* sqllinuxfci_start_0 on linux02.dbi-services.test 'unknown error' (1): call=16, status=complete, exitreason='SQL server crashed during startup.',
    last-rc-change='Sat Jan 14 17:35:30 2017', queued=0ms, exec=34325ms

Then I moved on to the SQL Server error log to try to find some clues about this issue (the SQL Server error log is always your friend in this case):

[mikedavem@linux01 sql]$ sudo cat /var/opt/mssql/log/errorlog


2017-01-14 14:38:55.50 spid5s      Error: 17204, Severity: 16, State: 1.
2017-01-14 14:38:55.50 spid5s      FCB::Open failed: Could not open file C:\var\opt\mssql\data\mastlog.ldf for file number 2.  OS error: 2(The system cannot find the file specified.).
2017-01-14 14:38:55.50 spid5s      Error: 5120, Severity: 16, State: 101.
2017-01-14 14:38:55.50 spid5s      Unable to open the physical file "C:\var\opt\mssql\data\mastlog.ldf". Operating system error 2: "2(The system cannot find the file specified.)".
…

That’s the point: my SQL Server engine was not able to open the master database because it could not find the specified path. As an aside, you may notice the path used by SQL Server in the error message: a Windows-style path which includes a drive letter! Well, very surprising, but I’m sure it will be changed in the near future. For the purpose of my tests, I had no choice but to change the folder permissions to 777 to get my SQL Server instance to start properly. One point to investigate of course, because it will not meet security policy rules in a production environment.

The third one concerned IP and hostname resolution. I had to add my cluster IP and hostname-related information to the /etc/hosts file on each cluster node to get everything to resolve correctly, as follows:

[mikedavem@linux01 sql]$ cat /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.5.17    linux01.dbi-services.test linux01
192.168.5.18    linux02.dbi-services.test linux02
192.168.5.19    sqllinuxfci.dbi-services.test sqllinuxfci

 

Finally, after fixing the previous issues, I performed some failover tests (move resources and bring offline a cluster node as well) that ran successfully.

  • Move resource
[mikedavem@linux01 sql]$ sudo pcs resource move sqllinuxfci linux01.dbi-services.test

[mikedavem@linux01 sql]$ sudo pcs status

…
2 nodes and 2 resources configured

Online: [ linux01.dbi-services.test linux02.dbi-services.test ]

Full list of resources:

 sqllinuxfci    (ocf::sql:fci): Started linux01.dbi-services.test
 virtualip      (ocf::heartbeat:IPaddr2):       Started linux01.dbi-services.test

 

  • Simulate failure node LINUX02.dbi-services.test
[mikedavem@linux01 ~]$ sudo pcs status

Cluster name: linux_cluster
Stack: corosync
Current DC: linux01.dbi-services.test (version 1.1.15-11.el7_3.2-e174ec8) - partition with quorum
Last updated: Sun Jan 15 10:59:14 2017          Last change: Sun Jan 15 10:56:54 2017 by root via crm_resource on linux01.dbi-services.test

2 nodes and 2 resources configured

Online: [ linux01.dbi-services.test ]
OFFLINE: [ linux02.dbi-services.test ]

Full list of resources:

 sqllinuxfci    (ocf::sql:fci): Started linux01.dbi-services.test
 virtualip      (ocf::heartbeat:IPaddr2):       Started linux01.dbi-services.test

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

 

Another “bug” I noticed is that the SERVERPROPERTY() function output does not show my FCI name correctly. Probably a mistake which will be resolved in the next CTPs. Be patient…

blog 114 - 3 - serverproperty output

My final thoughts

Here my feeling after playing a little bit with my new infrastructure.

Based on this first installation, we must face the facts: building a SQL Server FCI infrastructure is a faster process on Linux than on Windows, but I prefer to be prudent and not draw hasty conclusions. Let’s see what we are going to get in the RTM release. One important thing I noticed, for example, is that there is no explicit cluster validation compared to Windows, at least in appearance. In fact, if we take a closer look at the cluster side, we already have some validation steps during the cluster creation (node authentication and cluster setup). However, I didn’t see any validation step on the SQL Server side (compared to Windows) except basic verifications, which include verifying that the standalone instances are able to start and share the same storage.

Moreover, another important point we may notice is that we don’t need to set up DNS servers to run the cluster infrastructure. During my tests I didn’t use them (hostname resolution was done only from the /etc/hosts file), but as soon as I had to connect to my infrastructure from remote computers, DNS resolution became almost mandatory :)

Finally, there are plenty of tests to perform to understand how the cluster layer as well as the cluster resources behave.

Well, there is still a way to go in order to complete all my work on Linux. To be continued ….

 

 

 

 

This article, Introducing high-availability with SQL Server on Linux, appeared first on the dbi services blog.

From PostgreSQL 9.1.8 to PostgreSQL 9.5.5 including a standby database with minimal downtime

$
0
0

As you might know, PostgreSQL 9.1 has been out of support since last September. But, no surprise, there are still plenty of PostgreSQL 9.1 instances out there which need to be upgraded to a supported release. At one of our customers we have exactly this situation: The production environment is running on PostgreSQL 9.1.8 and needs to be upgraded to a recent release (9.5.5 in this case). To make life a little bit more fun there are more constraints: This is a PostgreSQL streaming replication configuration, so there is a standby database involved. The allowed downtime is 30 minutes, there is space pressure on the current systems, and the current systems need to be reused. In this post we’ll look at how you can do the upgrade with minimal downtime (without using logical replication).

First, let’s build the test environment. We need two systems, one for the master instance (192.168.22.32) and one for the standby (192.168.22.33). On both of these systems we’ll need PostgreSQL 9.1.8 installed, so:

postgres@debian6pg:~$ wget https://ftp.postgresql.org/pub/source/v9.1.8/postgresql-9.1.8.tar.bz2
postgres@debian6pg:~$ tar -axf postgresql-9.1.8.tar.bz2
postgres@debian6pg:~$ cd postgresql-9.1.8/
postgres@debian6pg:~/postgresql-9.1.8$ PGHOME=/u01/app/postgres/product/91/db_8
postgres@debian6pg:~/postgresql-9.1.8$ SEGSIZE=2
postgres@debian6pg:~/postgresql-9.1.8$ BLOCKSIZE=8
postgres@debian6pg:~/postgresql-9.1.8$ WALSEGSIZE=16
postgres@debian6pg:~/postgresql-9.1.8$ ./configure --prefix=${PGHOME} \
                                                   --exec-prefix=${PGHOME} \
                                                   --bindir=${PGHOME}/bin \
                                                   --libdir=${PGHOME}/lib \
                                                   --sysconfdir=${PGHOME}/etc \
                                                   --includedir=${PGHOME}/include \
                                                   --datarootdir=${PGHOME}/share \
                                                   --datadir=${PGHOME}/share \
                                                   --with-pgport=5432 \
                                                   --with-perl \
                                                   --with-python \
                                                   --with-tcl \
                                                   --with-openssl \
                                                   --with-pam \
                                                   --with-ldap \
                                                   --with-libxml \
                                                   --with-libxslt \
                                                   --with-segsize=${SEGSIZE} \
                                                   --with-blocksize=${BLOCKSIZE} \
                                                   --with-wal-segsize=${WALSEGSIZE}
postgres@debian6pg:~/postgresql-9.1.8$ make world
postgres@debian6pg:~/postgresql-9.1.8$ make install
postgres@debian6pg:~/postgresql-9.1.8$ cd contrib
postgres@debian6pg:~/postgresql-9.1.8/contrib$ make install
postgres@debian6pg:~/postgresql-9.1.8/contrib$ cd ../..
postgres@debian6pg:~$ rm -rf postgresql-9.1.8*

Once this is available on both nodes we can initialize our master instance:

postgres@debian6pg:~$ /u01/app/postgres/product/91/db_8/bin/initdb -D /u02/pgdata/testmig -X /u03/pgdata/testmig
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locale en_US.UTF-8.
The default database encoding has accordingly been set to UTF8.
The default text search configuration will be set to "english".

creating directory /u02/pgdata/testmig ... ok
creating directory /u03/pgdata/testmig ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 24MB
creating configuration files ... ok
creating template1 database in /u02/pgdata/testmig/base/1 ... ok
initializing pg_authid ... ok
initializing dependencies ... ok
creating system views ... ok
loading system objects' descriptions ... ok
creating collations ... ok
creating conversions ... ok
creating dictionaries ... ok
setting privileges on built-in objects ... ok
creating information schema ... ok
loading PL/pgSQL server-side language ... ok
vacuuming database template1 ... ok
copying template1 to template0 ... ok
copying template1 to postgres ... ok

WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the -A option the
next time you run initdb.

Success. You can now start the database server using:

    /u01/app/postgres/product/91/db_8/bin/postgres -D /u02/pgdata/testmig
or
    /u01/app/postgres/product/91/db_8/bin/pg_ctl -D /u02/pgdata/testmig -l logfile start

For the following I am assuming that passwordless ssh authentication is already set up between the nodes. If it is not, a minimal sketch (192.168.22.33 is the standby host of this setup):
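
# one-time key setup; skip ssh-keygen if the postgres user already has a key
postgres@debian6pg:~$ ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
postgres@debian6pg:~$ ssh-copy-id postgres@192.168.22.33

Set up the authentication for streaming replication: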

postgres@debian6pg:/u03$ echo "host    replication     postgres       192.168.22.32/32        trust" >> /u02/pgdata/testmig/pg_hba.conf
postgres@debian6pg:/u03$ echo "host    replication     postgres       192.168.22.33/32        trust" >> /u02/pgdata/testmig/pg_hba.conf

Adjust the parameters:

postgres@debian6pg:/u03$ sed -i 's/#wal_level = minimal/wal_level = hot_standby/g' /u02/pgdata/testmig/postgresql.conf
postgres@debian6pg:/u03$ sed -i 's/#max_wal_senders = 0/max_wal_senders = 10/g' /u02/pgdata/testmig/postgresql.conf
postgres@debian6pg:/u03$ sed -i 's/#wal_keep_segments = 0/wal_keep_segments = 100/g' /u02/pgdata/testmig/postgresql.conf
postgres@debian6pg:/u03$ sed -i "s/#listen_addresses = 'localhost'/listen_addresses = '*'/g" /u02/pgdata/testmig/postgresql.conf
postgres@debian6pg:/u03$ sed -i 's/#logging_collector = off/logging_collector = on/g' /u02/pgdata/testmig/postgresql.conf
postgres@debian6pg:/u03$ mkdir /u02/pgdata/testmig/pg_log    
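
A quick grep verifies what actually changed (just a sanity check):

postgres@debian6pg:/u03$ grep -E "^(wal_level|max_wal_senders|wal_keep_segments|listen_addresses|logging_collector)" /u02/pgdata/testmig/postgresql.conf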

Start and stop the instance:

postgres@debian6pg:/u03$ export PATH=/u01/app/postgres/product/91/db_8/bin/:$PATH
postgres@debian6pg:/u03$ pg_ctl -D /u02/pgdata/testmig/ start -l /u02/pgdata/testmig/pg_log/log.log
postgres@debian6pg:/u03$ pg_ctl -D /u02/pgdata/testmig/ stop

Ready to set up the standby:

postgres@debian6pg:/u03$ cd /u02    
postgres@debian6pg:/u02$ rsync -r pgdata/ 192.168.22.33:/u02/pgdata
postgres@debian6pg:~$ cd /u03
postgres@debian6pg:/u03$ rsync -r pgdata/ 192.168.22.33:/u03/pgdata
postgres@debian6pg:/u03$ ssh 192.168.22.33 "ln -s /u03/pgdata/testmig/ /u02/pgdata/testmig/pg_xlog"
postgres@debian6pg:/u03$ scp /u02/pgdata/testmig/pg_hba.conf 192.168.22.33:/u02/pgdata/testmig/pg_hba.conf
postgres@debian6pg:/u03$ ssh 192.168.22.33 "echo \"standby_mode = on\" >> /u02/pgdata/testmig/recovery.conf"
postgres@debian6pg:/u03$ ssh 192.168.22.33 "echo \"primary_conninfo = 'host=192.168.22.32 port=5432 user=postgres'\" >> /u02/pgdata/testmig/recovery.conf"
postgres@debian6pg:/u03$ ssh 192.168.22.33 "echo \"trigger_file = '/u02/pgdata/testmig/up_slave'\" >> /u02/pgdata/testmig/recovery.conf"
postgres@debian6pg:/u03$ ssh 192.168.22.33 "sed -i 's/#hot_standby = off/hot_standby = on/g' /u02/pgdata/testmig/postgresql.conf"
postgres@debian6pg:/u03$ ssh 192.168.22.33 "sed -i 's/#logging_collector = off/#logging_collector = on/g' /u02/pgdata/testmig/postgresql.conf"
postgres@debian6pg:/u03$ ssh 192.168.22.33 "mkdir -p /u02/pgdata/testmig/pg_log"

Start the master:

postgres@debian6pg:/u03$ pg_ctl -D /u02/pgdata/testmig/ start -l /u02/pgdata/testmig/pg_log/log.log

Start the standby:

# standby side
postgres@debian6pg:/u03$ export PATH=/u01/app/postgres/product/91/db_8/bin/:$PATH
postgres@debian6pg:/u03$ pg_ctl -D /u02/pgdata/testmig/ start -l /u02/pgdata/testmig/pg_log/log.log

… and if everything went fine we should see this in the log of the standby instance:

LOG: database system was shut down at 2017-01-18 07:28:02 CET
LOG: entering standby mode
LOG: consistent recovery state reached at 0/16BCBB0
LOG: database system is ready to accept read only connections
LOG: record with zero length at 0/16BCBB0
LOG: streaming replication successfully connected to primary

A quick check on the standby to confirm that it is operating in recovery mode:

postgres@debian6pg:~$ psql
psql (9.1.8)
Type "help" for help.

postgres=# select pg_is_in_recovery();
 pg_is_in_recovery 
-------------------
 t
(1 row)
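
On the master, pg_stat_replication should additionally show a WAL sender streaming to the standby (a quick check; the view and these columns exist as of 9.1):

# master side
postgres@debian6pg:~$ psql -c "select client_addr, state, sent_location, replay_location from pg_stat_replication;"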

In the case we had at the customer, the adminpack extension was installed in the postgres database and the pg_trgm and pg_buffercache extensions in the application database, so let’s do the same here on the master (this will get replicated to the standby automatically):

postgres@debian6pg:/u03$ psql
psql (9.1.8)
Type "help" for help.

postgres=# create extension adminpack;
CREATE EXTENSION
postgres=# create database testmig;
CREATE DATABASE
postgres=# \c testmig
You are now connected to database "testmig" as user "postgres".
testmig=# create extension pg_trgm;
CREATE EXTENSION
testmig=# create extension pg_buffercache;
CREATE EXTENSION
testmig=# 

Quickly confirm that it is there on the standby:

postgres=# \dx
                        List of installed extensions
   Name    | Version |   Schema   |               Description               
-----------+---------+------------+-----------------------------------------
 adminpack | 1.0     | pg_catalog | administrative functions for PostgreSQL
 plpgsql   | 1.0     | pg_catalog | PL/pgSQL procedural language
(2 rows)

postgres=# \c testmig
You are now connected to database "testmig" as user "postgres".
testmig=# \dx
                                       List of installed extensions
      Name      | Version |   Schema   |                            Description                            
----------------+---------+------------+-------------------------------------------------------------------
 pg_buffercache | 1.0     | public     | examine the shared buffer cache
 pg_trgm        | 1.0     | public     | text similarity measurement and index searching based on trigrams
 plpgsql        | 1.0     | pg_catalog | PL/pgSQL procedural language
(3 rows)

Finally, some sample data generated with pgbench:

postgres@debian6pg:/u03$ pgbench -i -s 10 testmig

Should be there on the standby as well:

testmig=# select count(*) from pgbench_accounts;
 count  
--------
1000000
(1 row)
testmig=# 

This is, more or less, the situation to start from. How can we upgrade this to PostgreSQL 9.5.5 with minimal downtime and without using logical replication? Obviously we’ll need to get PostgreSQL 9.5.5 installed on both systems before we can do anything further, so:

postgres@debian6pg:~$ wget https://ftp.postgresql.org/pub/source/v9.5.5/postgresql-9.5.5.tar.bz2
postgres@debian6pg:~$ tar -axf postgresql-9.5.5.tar.bz2 
postgres@debian6pg:~$ cd postgresql-9.5.5/
postgres@debian6pg:~/postgresql-9.5.5$ PGHOME=/u01/app/postgres/product/95/db_5
postgres@debian6pg:~/postgresql-9.5.5$ SEGSIZE=2
postgres@debian6pg:~/postgresql-9.5.5$ BLOCKSIZE=8
postgres@debian6pg:~/postgresql-9.5.5$ WALSEGSIZE=16
postgres@debian6pg:~/postgresql-9.5.5$ ./configure --prefix=${PGHOME} \
                                                   --exec-prefix=${PGHOME} \
                                                   --bindir=${PGHOME}/bin \
                                                   --libdir=${PGHOME}/lib \
                                                   --sysconfdir=${PGHOME}/etc \
                                                   --includedir=${PGHOME}/include \
                                                   --datarootdir=${PGHOME}/share \
                                                   --datadir=${PGHOME}/share \
                                                   --with-pgport=5432 \
                                                   --with-perl \
                                                   --with-python \
                                                   --with-tcl \
                                                   --with-openssl \
                                                   --with-pam \
                                                   --with-ldap \
                                                   --with-libxml \
                                                   --with-libxslt \
                                                   --with-segsize=${SEGSIZE} \
                                                   --with-blocksize=${BLOCKSIZE} \
                                                   --with-wal-segsize=${WALSEGSIZE}
postgres@debian6pg:~/postgresql-9.5.5$ make world
postgres@debian6pg:~/postgresql-9.5.5$ make install
postgres@debian6pg:~/postgresql-9.5.5$ cd contrib
postgres@debian6pg:~/postgresql-9.5.5/contrib$ make install
postgres@debian6pg:~/postgresql-9.5.5/contrib$ cd ../..
postgres@debian6pg:~$ rm -rf postgresql-9.5.5*

Then we need a new cluster initialized with the new version of PostgreSQL on the master:

postgres@debian6pg:~$ /u01/app/postgres/product/95/db_5/bin/initdb -D /u02/pgdata/testmig95/ -X /u03/pgdata/testmig95/ 
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locale "en_US.UTF-8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

creating directory /u02/pgdata/testmig95 ... ok
creating directory /u03/pgdata/testmig95 ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
creating template1 database in /u02/pgdata/testmig95/base/1 ... ok
initializing pg_authid ... ok
initializing dependencies ... ok
creating system views ... ok
loading system objects' descriptions ... ok
creating collations ... ok
creating conversions ... ok
creating dictionaries ... ok
setting privileges on built-in objects ... ok
creating information schema ... ok
loading PL/pgSQL server-side language ... ok
vacuuming database template1 ... ok
copying template1 to template0 ... ok
copying template1 to postgres ... ok
syncing data to disk ... ok

WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.

Success. You can now start the database server using:

    /u01/app/postgres/product/95/db_5/bin/pg_ctl -D /u02/pgdata/testmig95/ -l logfile start

Shut down the master and record the latest checkpoint location (this is where your downtime starts):

postgres@debian6pg:/u02$ pg_ctl -D /u02/pgdata/testmig stop -m fast
LOG:  received fast shutdown request
LOG:  aborting any active transactions
LOG:  autovacuum launcher shutting down
LOG:  shutting down
waiting for server to shut down....LOG:  database system is shut down
 done
server stopped

postgres@debian6pg:/u02/pgdata/testmig$ pg_controldata  | grep "Latest checkpoint location"
Latest checkpoint location:           0/C619840

Shut down the slave and compare the latest checkpoint location:

# slave side
postgres@debian6pg:/u02/pgdata/testmig$ pg_ctl -D /u02/pgdata/testmig/ stop -m fast
waiting for server to shut down.... done
server stopped

postgres@debian6pg:/u02/pgdata/testmig$ pg_controldata  | grep "Latest checkpoint location"
Latest checkpoint location:           0/C619840

As both checkpoint locations match, we are sure that the standby applied all changes and there is no difference in the data.
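
If you prefer to script that comparison instead of eyeballing it, something like this will do (a sketch, assuming passwordless ssh and identical paths on both nodes):

# compare "Latest checkpoint location" of master and standby
postgres@debian6pg:/u02$ M=$(/u01/app/postgres/product/91/db_8/bin/pg_controldata /u02/pgdata/testmig | grep "Latest checkpoint location")
postgres@debian6pg:/u02$ S=$(ssh 192.168.22.33 "/u01/app/postgres/product/91/db_8/bin/pg_controldata /u02/pgdata/testmig" | grep "Latest checkpoint location")
postgres@debian6pg:/u02$ [ "$M" = "$S" ] && echo "checkpoints match" || echo "MISMATCH - do not continue"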

Save your configuration files:

postgres@debian6pg:/u02$ cp /u02/pgdata/testmig/postgresql.conf /var/tmp 
postgres@debian6pg:/u02$ cp /u02/pgdata/testmig/pg_hba.conf /var/tmp

Run pg_upgrade on the master with link (-k) mode (if you have many cores on your box you can use the “-j” option to parallelize pg_upgrade):

postgres@debian6pg:/u02$ export PGDATAOLD=/u02/pgdata/testmig/
postgres@debian6pg:/u02$ export PGDATANEW=/u02/pgdata/testmig95/
postgres@debian6pg:/u02$ export PGBINOLD=/u01/app/postgres/product/91/db_8/bin/
postgres@debian6pg:/u02$ export PGBINNEW=/u01/app/postgres/product/95/db_5/bin/

postgres@debian6pg:/u02$ /u01/app/postgres/product/95/db_5/bin/pg_upgrade -k 

Usually you’d do a “-c” check run before doing the real upgrade; it validates both clusters without touching any data (a sketch, reusing the environment variables exported above):
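
postgres@debian6pg:/u02$ /u01/app/postgres/product/95/db_5/bin/pg_upgrade -c

When using link mode the files get hard-linked instead of copied, which is much faster and saves disk space. The downside is that you cannot revert to the old cluster in case anything goes wrong. When it goes fine, it looks like this: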


Performing Consistency Checks
-----------------------------
Checking cluster versions                                   ok
Checking database user is the install user                  ok
Checking database connection settings                       ok
Checking for prepared transactions                          ok
Checking for reg* system OID user data types                ok
Checking for contrib/isn with bigint-passing mismatch       ok
Checking for invalid "line" user columns                    ok
Creating dump of global objects                             ok
Creating dump of database schemas
                                                            ok
Checking for presence of required libraries                 ok
Checking database user is the install user                  ok
Checking for prepared transactions                          ok

If pg_upgrade fails after this point, you must re-initdb the
new cluster before continuing.

Performing Upgrade
------------------
Analyzing all rows in the new cluster                       ok
Freezing all rows on the new cluster                        ok
Deleting files from new pg_clog                             ok
Copying old pg_clog to new server                           ok
Setting next transaction ID and epoch for new cluster       ok
Deleting files from new pg_multixact/offsets                ok
Setting oldest multixact ID on new cluster                  ok
Resetting WAL archives                                      ok
Setting frozenxid and minmxid counters in new cluster       ok
Restoring global objects in the new cluster                 ok
Restoring database schemas in the new cluster
                                                            ok
Setting minmxid counter in new cluster                      ok
Adding ".old" suffix to old global/pg_control               ok

If you want to start the old cluster, you will need to remove
the ".old" suffix from /u02/pgdata/testmig/global/pg_control.old.
Because "link" mode was used, the old cluster cannot be safely
started once the new cluster has been started.

Linking user relation files
                                                            ok
Setting next OID for new cluster                            ok
Sync data directory to disk                                 ok
Creating script to analyze new cluster                      ok
Creating script to delete old cluster                       ok

Upgrade Complete
----------------
Optimizer statistics are not transferred by pg_upgrade so,
once you start the new server, consider running:
    ./analyze_new_cluster.sh

Running this script will delete the old cluster's data files:
    ./delete_old_cluster.sh

Restore the configuration files:

postgres@debian6pg:/u02$ mkdir -p /u02/pgdata/testmig95/pg_log
postgres@debian6pg:/u02$ cp /var/tmp/postgresql.conf /u02/pgdata/testmig95/postgresql.conf  
postgres@debian6pg:/u02$ cp /var/tmp/pg_hba.conf /u02/pgdata/testmig95/pg_hba.conf 

Start and stop the upgraded instance and check that everything is fine in the log file:

postgres@debian6pg:/u02$ /u01/app/postgres/product/95/db_5/bin/pg_ctl -D /u02/pgdata/testmig95/ -l /u02/pgdata/testmig95/pg_log/log.log start    
postgres@debian6pg:/u02$ /u01/app/postgres/product/95/db_5/bin/pg_ctl -D /u02/pgdata/testmig95/ stop   
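
For the log check a simple tail is enough, for example:

postgres@debian6pg:/u02$ tail -n 20 /u02/pgdata/testmig95/pg_log/log.log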

You could simply start your upgraded cluster now and leave it running, which ends your downtime, and re-build the standby later. If you want to set up the standby right away, first save the configuration files:

# standby side
postgres@debian6pg:/u03$ cp /u02/pgdata/testmig/postgresql.conf /var/tmp
postgres@debian6pg:/u03$ cp /u02/pgdata/testmig/pg_hba.conf /var/tmp
postgres@debian6pg:/u03$ cp /u02/pgdata/testmig/recovery.conf /var/tmp

Sync the directories from the master to the standby (this will be very fast because it will create hard links on the standby server instead of copying the user files):

postgres@debian6pg:/u03$ cd /u02/pgdata   
postgres@debian6pg:/u02$ rsync --archive --delete --hard-links --size-only testmig testmig95 192.168.22.33:/u02/pgdata
postgres@debian6pg:/u02$ cd /u03
postgres@debian6pg:/u03$ rsync -r pgdata/testmig95 192.168.22.33:/u03/pgdata

Restore the configuration files on the standby:

postgres@debian6pg:/u03$ cp /var/tmp/postgresql.conf /u02/pgdata/testmig95/postgresql.conf 
postgres@debian6pg:/u03$ cp /var/tmp/pg_hba.conf /u02/pgdata/testmig95/pg_hba.conf
postgres@debian6pg:/u03$ cp /var/tmp/recovery.conf /u02/pgdata/testmig95/recovery.conf

Start the master:

postgres@debian6pg:/u03$ export PATH=/u01/app/postgres/product/95/db_5/bin:$PATH
postgres@debian6pg:/u03$ pg_ctl -D /u02/pgdata/testmig95/ start -l /u02/pgdata/testmig95/pg_log/log.log

Start the standby:

postgres@debian6pg:/u03$ export PATH=/u01/app/postgres/product/95/db_5/bin:$PATH
postgres@debian6pg:/u03$ pg_ctl -D /u02/pgdata/testmig95/ start -l /u02/pgdata/testmig95/pg_log/log.log

Check the standby’s logfile:


LOG:  database system was shut down at 2017-01-19 07:51:24 GMT
LOG:  creating missing WAL directory "pg_xlog/archive_status"
LOG:  entering standby mode
LOG:  started streaming WAL from primary at 0/E000000 on timeline 1
LOG:  consistent recovery state reached at 0/E024D38
LOG:  redo starts at 0/E024D38
LOG:  database system is ready to accept read only connections

Do some checks to see that everything is there on the standby:

postgres@debian6pg:~$ psql
psql (9.5.5)
Type "help" for help.

postgres=# select pg_is_in_recovery();
 pg_is_in_recovery 
-------------------
 t
(1 row)

postgres=# \dx
                        List of installed extensions
   Name    | Version |   Schema   |               Description               
-----------+---------+------------+-----------------------------------------
 adminpack | 1.0     | pg_catalog | administrative functions for PostgreSQL
 plpgsql   | 1.0     | pg_catalog | PL/pgSQL procedural language
(2 rows)

postgres=# \c testmig
You are now connected to database "testmig" as user "postgres".
testmig=# \dx
                                       List of installed extensions
      Name      | Version |   Schema   |                            Description                            
----------------+---------+------------+-------------------------------------------------------------------
 pg_buffercache | 1.0     | public     | examine the shared buffer cache
 pg_trgm        | 1.0     | public     | text similarity measurement and index searching based on trigrams
 plpgsql        | 1.0     | pg_catalog | PL/pgSQL procedural language
(3 rows)

testmig=# \d
              List of relations
 Schema |       Name       | Type  |  Owner   
--------+------------------+-------+----------
 public | pg_buffercache   | view  | postgres
 public | pgbench_accounts | table | postgres
 public | pgbench_branches | table | postgres
 public | pgbench_history  | table | postgres
 public | pgbench_tellers  | table | postgres
(5 rows)

testmig=# select count(*) from pgbench_accounts;
  count  
---------
 1000000
(1 row)

Run the analyze_new_cluster.sh on the master:

postgres@debian6pg:~$ ./analyze_new_cluster.sh
This script will generate minimal optimizer statistics rapidly
so your system is usable, and then gather statistics twice more
with increasing accuracy.  When it is done, your system will
have the default level of optimizer statistics.

If you have used ALTER TABLE to modify the statistics target for
any tables, you might want to remove them and restore them after
running this script because they will delay fast statistics generation.

If you would like default statistics as quickly as possible, cancel
this script and run:
    "/u01/app/postgres/product/95/db_5/bin/vacuumdb" --all --analyze-only

vacuumdb: processing database "postgres": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "template1": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "testmig": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "postgres": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "template1": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "testmig": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "postgres": Generating default (full) optimizer statistics
vacuumdb: processing database "template1": Generating default (full) optimizer statistics
vacuumdb: processing database "testmig": Generating default (full) optimizer statistics

Now you can delete the old cluster on the master:

postgres@debian6pg:~$ ./delete_old_cluster.sh

Then either copy the delete script to the standby or remove the old cluster on the standby manually:

postgres@debian6pg:~$ rm -rf /u02/pgdata/testmig
postgres@debian6pg:~$ rm -rf /u03/pgdata/testmig

Done. Hope this helps …


The article From PostgreSQL 9.1.8 to PostgreSQL 9.5.5 including a standby database with minimal downtime appeared first on Blog dbi services.

vagrant up – get your Oracle infrastructure up and running


By using Vagrant to manage your Virtual Machines and Ansible for configuration management and provisioning
you can easily automate the setup of your whole test environment in a standardized way.

If you have never heard of Ansible and Vagrant, I will try to give you an idea with the following very short summary. There is a lot of good information available about Ansible and Vagrant.
Please check the provided links at the end of this blog for further information.

What is Vagrant ?

Vagrant is an open source tool on top of a virtualization solution like Oracle VirtualBox. It can automate the creation of VMs. Additionally, Vagrant supports provisioning with scripts or with tools like Ansible, Puppet or Chef.
You can download a lot of useful boxes from here: https://atlas.hashicorp.com/boxes/search
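
Getting a first VM up takes only a handful of commands (a minimal example; the centos/7 box is just one of many):

# centos/7 is just an example box from the atlas catalogue
$ vagrant init centos/7
$ vagrant up
$ vagrant ssh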

What is Ansible ?

Ansible is an open source automation platform.
It is a radically simple IT automation engine designed for multi-tier deployments. [https://www.ansible.com/how-ansible-works]

Ansible just uses ssh and does not require agents or other software installed on the target nodes. You simply put your steps into an Ansible playbook, which is an easy-to-read text file written in YAML syntax. Your playbook will simply read like documented steps.
Ansible will run the tasks described in the playbook on the target servers by invoking Ansible modules.

Here is a simple example task from a playbook which will add a directory on a server. It uses the Ansible module “file”:
- name: add home directory
  file:
    path: /home/myuser
    state: directory
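
Such a playbook is executed with the ansible-playbook command (a sketch, assuming an inventory file named hosts and a playbook named site.yml):

# hosts and site.yml are example names for inventory and playbook
$ ansible-playbook -i hosts site.yml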

Ansible is quite well known for building up whole test environments including databases like MySQL, which are easy to install from simple tarballs or rpm files.

Unfortunately, in the community of Oracle DBAs Ansible is usually not on the radar, even though there are already good Ansible playbooks available which prove that you can also use Ansible to provision your whole Oracle test environment, even with Oracle Real Application Clusters:
https://github.com/racattack/racattack-ansible-oracle

https://github.com/cvezalis/oracledb-ansible

Starting from these examples and adapting them to your needs, you will experience how quickly you can automate your Oracle installations. This is what I did and want to show you here. Please keep in mind that this example is optimized for a fast installation and should not be used as-is for a production system.

What you’ll get
In this blog I give you an example of how to build an Oracle infrastructure from scratch, containing
two virtual servers, installed and configured with CentOS 7.2,
each hosting an Oracle DB (12.1.0.2).

  • Step ONE – What you need to prepare once to run this example
      1) the Ansible Playbook and Vagrant configuration for this example
      you can download everything from the git repository. All files are simple text files.
      https://github.com/nkadbi/oracle-db-12c-vagrant-ansible
      2) the Oracle 12.1.0.2 binaries
      the Oracle binaries are not included in the download. You have to provide them.
      Please copy the Oracle software zip files into the directory oracle-db-12c-vagrant-ansible/
      ./linuxamd64_12102_database_1of2.zip
      ./linuxamd64_12102_database_2of2.zip

      3) your Linux host or laptop
      with network connection, Oracle VirtualBox, Vagrant and Ansible installed.
      This can be done with your Linux package manager.
      You will need Ansible version 2.1.1.0 or higher for this example!
      Please check http://docs.ansible.com/ansible/intro_installation.html for installation details.
      sudo yum install ansible
      You can find the Oracle VirtualBox Download and Installation Guide here:
      https://www.virtualbox.org/wiki/Linux_Downloads
      Download Vagrant with version 1.8.5 or higher from
      https://www.vagrantup.com/downloads.html
      Also install the vagrant hostmanager plugin:
      $ vagrant plugin install vagrant-hostmanager
  • Step TWO – Run it
      Now you are ready to start the whole setup, which will create two virtual servers with Oracle databases.
      On my laptop with SSD disks and 16 GB RAM this takes about 20 minutes.
      To run this example you will need at least 8 GB RAM and 10 GB of free disk space.
      Go to the directory where you have downloaded this example. Everything will be started from here.
      cd oracle-db-12c-vagrant-ansible
      vagrant up
  • Of course you do not want to start this without knowing what is going on.
    I will therefore go into a bit more detail next week …

    Further information about Ansible:
    There will be some Introduction Webinars for Ansible coming soon
    https://www.ansible.com/webinars-training

    you can find more examples at:
    http://galaxy.ansible.com
    https://github.com/ansible/ansible-examples
    https://groups.google.com/forum/#!forum/ansible-project
    If you want to read a book I can recommend this:
    Ansible: Up and Running
    Print ISBN: 978-1-4919-1532-5
    Ebook ISBN: 978-1-4919-1529-5

    https://www.ansible.com/ebooks


    The article vagrant up – get your Oracle infrastructure up and running appeared first on Blog dbi services.

    Oracle 12cR2 – DataGuard and Backups to NFS


    From the RMAN point of view, you generally have two possibilities to store your backups: on disk or on tape. In case of a single instance, it doesn’t really matter whether a backup is flagged accessible or inaccessible. However, in a DataGuard environment it makes a huge difference.

    By default, backups to tape are always accessible and backups to disk always inaccessible. But what do you do if you back up to NFS and want the backups to be usable on both Primary and Standby?

    Ok. Let’s do a quick demo.

    Before we start, make sure that the oracle Unix user has the same user id and group id on both nodes. If not, you end up with a big mess.

    oracle@dbidg01:/home/oracle/ [DBIT122] id -a
    uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba),54323(sysbkp),54324(sysdg),54325(syskm),54326(oper)
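
    To quickly compare both nodes you can run the same check over ssh (assuming ssh access from the primary to the standby host dbidg02):

    oracle@dbidg01:/home/oracle/ [DBIT122] ssh dbidg02 id -a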

    The next important thing with NFS is the mount options. Oracle has documented quite well which mount options are needed if you put binaries, datafiles or the CRS voting disk and OCR on it.

    Mount Options for Oracle files for RAC databases and Clusterware when used with NFS on NAS devices (Doc ID 359515.1)

    For Linux x86-64bit it would be the following options for datafiles:

    rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600

    However, if your NFS mount is needed only for RMAN backup pieces, actimeo=0 is not recommended: it disables all NFS attribute caching (acregmin, acregmax, acdirmin and acdirmax), which is suboptimal for RMAN backups. See also the following MOS note:

    NFS options for 11.2.0.2 RMAN disk backups on Linux 64-bit (Doc ID 1117597.1)

    For this example I am using NFS 4.1, and these are the mount options:

    [root@dbidg01 etc]# mount | grep nfs4
    dbidg03:/u99 on /u99 type nfs4 (rw,relatime,vers=4.1,rsize=32768,wsize=32768,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.56.201,local_lock=none,addr=192.168.56.203)

    Last but not least, the Primary database has to be registered with the RMAN catalog. This is a very important point when working with RMAN in a DataGuard environment.

    Ok. Let’s do the first backup on the Primary.

    -- Primary
    
    oracle@dbidg01:/home/oracle/ [DBIT122] rman target / catalog rman/rman@pdb1
    
    Recovery Manager: Release 12.2.0.1.0 - Production on Tue Jan 24 08:09:16 2017
    
    Copyright (c) 1982, 2016, Oracle and/or its affiliates.  All rights reserved.
    
    connected to target database: DBIT122 (DBID=653953745)
    connected to recovery catalog database
    
    
    RMAN> list backup summary;
    
    specification does not match any backup in the repository
    
    RMAN> backup database plus archivelog tag 'DBI_BACKUP_ON_PRIMARY' format '/u99/backup/DBIT122/%U';
    
    
    Starting backup at 24-JAN-2017 08:09:45
    current log archived
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: SID=31 device type=DISK
    allocated channel: ORA_DISK_2
    channel ORA_DISK_2: SID=36 device type=DISK
    channel ORA_DISK_1: starting archived log backup set
    channel ORA_DISK_1: specifying archived log(s) in backup set
    input archived log thread=1 sequence=68 RECID=386 STAMP=934081238
    channel ORA_DISK_1: starting piece 1 at 24-JAN-2017 08:09:46
    channel ORA_DISK_2: starting archived log backup set
    channel ORA_DISK_2: specifying archived log(s) in backup set
    input archived log thread=1 sequence=69 RECID=388 STAMP=934099691
    input archived log thread=1 sequence=70 RECID=390 STAMP=934099729
    input archived log thread=1 sequence=71 RECID=392 STAMP=934099785
    channel ORA_DISK_2: starting piece 1 at 24-JAN-2017 08:09:46
    channel ORA_DISK_2: finished piece 1 at 24-JAN-2017 08:09:47
    piece handle=/u99/backup/DBIT122/2vrqqeqa_1_1 tag=DBI_BACKUP_ON_PRIMARY comment=NONE
    channel ORA_DISK_2: backup set complete, elapsed time: 00:00:01
    channel ORA_DISK_2: starting archived log backup set
    channel ORA_DISK_2: specifying archived log(s) in backup set
    input archived log thread=1 sequence=64 RECID=378 STAMP=934044533
    input archived log thread=1 sequence=65 RECID=380 STAMP=934044555
    input archived log thread=1 sequence=66 RECID=382 STAMP=934045153
    input archived log thread=1 sequence=67 RECID=384 STAMP=934045183
    channel ORA_DISK_2: starting piece 1 at 24-JAN-2017 08:09:47
    channel ORA_DISK_1: finished piece 1 at 24-JAN-2017 08:09:48
    piece handle=/u99/backup/DBIT122/2urqqeqa_1_1 tag=DBI_BACKUP_ON_PRIMARY comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
    channel ORA_DISK_2: finished piece 1 at 24-JAN-2017 08:09:48
    piece handle=/u99/backup/DBIT122/30rqqeqb_1_1 tag=DBI_BACKUP_ON_PRIMARY comment=NONE
    channel ORA_DISK_2: backup set complete, elapsed time: 00:00:01
    Finished backup at 24-JAN-2017 08:09:48
    
    Starting backup at 24-JAN-2017 08:09:48
    using channel ORA_DISK_1
    using channel ORA_DISK_2
    channel ORA_DISK_1: starting full datafile backup set
    channel ORA_DISK_1: specifying datafile(s) in backup set
    input datafile file number=00001 name=/u02/oradata/DBIT122_SITE1/datafile/o1_mf_system_d4fnthn8_.dbf
    input datafile file number=00007 name=/u02/oradata/DBIT122_SITE1/datafile/o1_mf_users_d4fnx3bd_.dbf
    channel ORA_DISK_1: starting piece 1 at 24-JAN-2017 08:09:49
    channel ORA_DISK_2: starting full datafile backup set
    channel ORA_DISK_2: specifying datafile(s) in backup set
    input datafile file number=00003 name=/u02/oradata/DBIT122_SITE1/datafile/o1_mf_sysaux_d4fnvx0w_.dbf
    input datafile file number=00004 name=/u02/oradata/DBIT122_SITE1/datafile/o1_mf_undotbs1_d4fnx03r_.dbf
    input datafile file number=00002 name=/u02/oradata/DBIT122_SITE1/datafile/o1_mf_dbv_d7klp09z_.dbf
    channel ORA_DISK_2: starting piece 1 at 24-JAN-2017 08:09:49
    channel ORA_DISK_2: finished piece 1 at 24-JAN-2017 08:09:52
    piece handle=/u03/fast_recovery_area/DBIT122_SITE1/backupset/2017_01_24/o1_mf_nnndf_TAG20170124T080948_d8fzfxqx_.bkp tag=TAG20170124T080948 comment=NONE
    channel ORA_DISK_2: backup set complete, elapsed time: 00:00:03
    channel ORA_DISK_2: starting full datafile backup set
    channel ORA_DISK_2: specifying datafile(s) in backup set
    including current control file in backup set
    channel ORA_DISK_2: starting piece 1 at 24-JAN-2017 08:09:53
    channel ORA_DISK_1: finished piece 1 at 24-JAN-2017 08:09:54
    piece handle=/u03/fast_recovery_area/DBIT122_SITE1/backupset/2017_01_24/o1_mf_nnndf_TAG20170124T080948_d8fzfxps_.bkp tag=TAG20170124T080948 comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:05
    channel ORA_DISK_1: starting full datafile backup set
    channel ORA_DISK_1: specifying datafile(s) in backup set
    including current SPFILE in backup set
    channel ORA_DISK_1: starting piece 1 at 24-JAN-2017 08:09:54
    channel ORA_DISK_2: finished piece 1 at 24-JAN-2017 08:09:54
    piece handle=/u03/fast_recovery_area/DBIT122_SITE1/backupset/2017_01_24/o1_mf_ncnnf_TAG20170124T080948_d8fzg1yp_.bkp tag=TAG20170124T080948 comment=NONE
    channel ORA_DISK_2: backup set complete, elapsed time: 00:00:01
    channel ORA_DISK_1: finished piece 1 at 24-JAN-2017 08:09:55
    piece handle=/u03/fast_recovery_area/DBIT122_SITE1/backupset/2017_01_24/o1_mf_nnsnf_TAG20170124T080948_d8fzg2yn_.bkp tag=TAG20170124T080948 comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
    Finished backup at 24-JAN-2017 08:09:55
    
    Starting backup at 24-JAN-2017 08:09:56
    current log archived
    using channel ORA_DISK_1
    using channel ORA_DISK_2
    channel ORA_DISK_1: starting archived log backup set
    channel ORA_DISK_1: specifying archived log(s) in backup set
    input archived log thread=1 sequence=72 RECID=394 STAMP=934099796
    channel ORA_DISK_1: starting piece 1 at 24-JAN-2017 08:09:56
    channel ORA_DISK_1: finished piece 1 at 24-JAN-2017 08:09:57
    piece handle=/u99/backup/DBIT122/35rqqeqk_1_1 tag=DBI_BACKUP_ON_PRIMARY comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
    Finished backup at 24-JAN-2017 08:09:57
    
    RMAN>
    
    RMAN> list backup summary completed after 'sysdate -1';
    
    
    List of Backups
    ===============
    Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
    ------- -- -- - ----------- -------------------- ------- ------- ---------- ---
    1981    B  A  A DISK        24-JAN-2017 08:09:46 1       1       NO         DBI_BACKUP_ON_PRIMARY
    1982    B  A  A DISK        24-JAN-2017 08:09:48 1       1       NO         DBI_BACKUP_ON_PRIMARY
    1983    B  A  A DISK        24-JAN-2017 08:09:46 1       1       NO         DBI_BACKUP_ON_PRIMARY
    2003    B  F  A DISK        24-JAN-2017 08:09:51 1       1       NO         TAG20170124T080948
    2004    B  F  A DISK        24-JAN-2017 08:09:53 1       1       NO         TAG20170124T080948
    2005    B  F  A DISK        24-JAN-2017 08:09:54 1       1       NO         TAG20170124T080948
    2006    B  F  A DISK        24-JAN-2017 08:09:54 1       1       NO         TAG20170124T080948
    2025    B  A  A DISK        24-JAN-2017 08:09:56 1       1       NO         DBI_BACKUP_ON_PRIMARY


    Ok. So what does my Standby know about this backup, which I have done on the Primary?

    oracle@dbidg02:/u99/backup/DBIT122/ [DBIT122] rman target / catalog rman/rman@pdb1
    
    Recovery Manager: Release 12.2.0.1.0 - Production on Tue Jan 24 08:11:12 2017
    
    Copyright (c) 1982, 2016, Oracle and/or its affiliates.  All rights reserved.
    
    connected to target database: DBIT122 (DBID=653953745)
    connected to recovery catalog database
    
    RMAN> list backup summary completed after 'sysdate -1';
    
    specification does not match any backup in the repository
    
    RMAN>


    Nothing …

    Ok. Let’s now do a backup on the Standby to NFS and check if it is visible on the Primary.

    RMAN> backup database plus archivelog tag 'DBI_BACKUP_ON_STANDBY' format '/u99/backup/DBIT122/%U';
    
    
    Starting backup at 24-JAN-2017 08:11:49
    RMAN-06820: warning: failed to archive current log at primary database
    cannot connect to remote database
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: SID=271 device type=DISK
    allocated channel: ORA_DISK_2
    channel ORA_DISK_2: SID=41 device type=DISK
    channel ORA_DISK_1: starting archived log backup set
    channel ORA_DISK_1: specifying archived log(s) in backup set
    input archived log thread=1 sequence=64 RECID=291 STAMP=934044533
    input archived log thread=1 sequence=65 RECID=292 STAMP=934044555
    input archived log thread=1 sequence=66 RECID=293 STAMP=934045153
    input archived log thread=1 sequence=67 RECID=294 STAMP=934045183
    input archived log thread=1 sequence=68 RECID=295 STAMP=934081238
    channel ORA_DISK_1: starting piece 1 at 24-JAN-2017 08:11:50
    channel ORA_DISK_2: starting archived log backup set
    channel ORA_DISK_2: specifying archived log(s) in backup set
    input archived log thread=1 sequence=69 RECID=296 STAMP=934099691
    input archived log thread=1 sequence=70 RECID=297 STAMP=934099729
    input archived log thread=1 sequence=71 RECID=298 STAMP=934099785
    input archived log thread=1 sequence=72 RECID=299 STAMP=934099796
    channel ORA_DISK_2: starting piece 1 at 24-JAN-2017 08:11:50
    channel ORA_DISK_2: finished piece 1 at 24-JAN-2017 08:11:51
    piece handle=/u99/backup/DBIT122/49rqqeu6_1_1 tag=DBI_BACKUP_ON_STANDBY comment=NONE
    channel ORA_DISK_2: backup set complete, elapsed time: 00:00:01
    channel ORA_DISK_1: finished piece 1 at 24-JAN-2017 08:11:53
    piece handle=/u99/backup/DBIT122/48rqqeu6_1_1 tag=DBI_BACKUP_ON_STANDBY comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
    Finished backup at 24-JAN-2017 08:11:53
    
    Starting backup at 24-JAN-2017 08:11:53
    using channel ORA_DISK_1
    using channel ORA_DISK_2
    channel ORA_DISK_1: starting full datafile backup set
    channel ORA_DISK_1: specifying datafile(s) in backup set
    input datafile file number=00001 name=/u02/oradata/DBIT122_SITE2/datafile/o1_mf_system_d4zc9yd3_.dbf
    input datafile file number=00007 name=/u02/oradata/DBIT122_SITE2/datafile/o1_mf_users_d4zccmnt_.dbf
    channel ORA_DISK_1: starting piece 1 at 24-JAN-2017 08:11:54
    channel ORA_DISK_2: starting full datafile backup set
    channel ORA_DISK_2: specifying datafile(s) in backup set
    input datafile file number=00003 name=/u02/oradata/DBIT122_SITE2/datafile/o1_mf_sysaux_d4zcbr8r_.dbf
    input datafile file number=00004 name=/u02/oradata/DBIT122_SITE2/datafile/o1_mf_undotbs1_d4zccjnc_.dbf
    input datafile file number=00002 name=/u02/oradata/DBIT122_SITE2/datafile/o1_mf_dbv_d7klp0gg_.dbf
    channel ORA_DISK_2: starting piece 1 at 24-JAN-2017 08:11:54
    channel ORA_DISK_1: finished piece 1 at 24-JAN-2017 08:12:09
    piece handle=/u03/fast_recovery_area/DBIT122_SITE2/backupset/2017_01_24/o1_mf_nnndf_TAG20170124T081153_d8fzktfn_.bkp tag=TAG20170124T081153 comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:15
    channel ORA_DISK_1: starting full datafile backup set
    channel ORA_DISK_1: specifying datafile(s) in backup set
    channel ORA_DISK_2: finished piece 1 at 24-JAN-2017 08:12:09
    piece handle=/u03/fast_recovery_area/DBIT122_SITE2/backupset/2017_01_24/o1_mf_nnndf_TAG20170124T081153_d8fzktg7_.bkp tag=TAG20170124T081153 comment=NONE
    channel ORA_DISK_2: backup set complete, elapsed time: 00:00:15
    channel ORA_DISK_2: starting full datafile backup set
    channel ORA_DISK_2: specifying datafile(s) in backup set
    including current SPFILE in backup set
    channel ORA_DISK_2: starting piece 1 at 24-JAN-2017 08:12:09
    including current control file in backup set
    channel ORA_DISK_1: starting piece 1 at 24-JAN-2017 08:12:10
    channel ORA_DISK_2: finished piece 1 at 24-JAN-2017 08:12:10
    piece handle=/u03/fast_recovery_area/DBIT122_SITE2/backupset/2017_01_24/o1_mf_nnsnf_TAG20170124T081153_d8fzl9p7_.bkp tag=TAG20170124T081153 comment=NONE
    channel ORA_DISK_2: backup set complete, elapsed time: 00:00:01
    channel ORA_DISK_1: finished piece 1 at 24-JAN-2017 08:12:11
    piece handle=/u03/fast_recovery_area/DBIT122_SITE2/backupset/2017_01_24/o1_mf_ncnnf_TAG20170124T081153_d8fzlbqg_.bkp tag=TAG20170124T081153 comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
    Finished backup at 24-JAN-2017 08:12:11
    
    Starting backup at 24-JAN-2017 08:12:11
    RMAN-06820: warning: failed to archive current log at primary database
    cannot connect to remote database
    using channel ORA_DISK_1
    using channel ORA_DISK_2
    specification does not match any archived log in the repository
    backup cancelled because there are no files to backup
    Finished backup at 24-JAN-2017 08:12:12
    
    RMAN>
    
    RMAN> list backup summary completed after 'sysdate -1';
    
    
    List of Backups
    ===============
    Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
    ------- -- -- - ----------- -------------------- ------- ------- ---------- ---
    2065    B  A  A DISK        24-JAN-2017 08:11:50 1       1       NO         DBI_BACKUP_ON_STANDBY
    2066    B  A  A DISK        24-JAN-2017 08:11:50 1       1       NO         DBI_BACKUP_ON_STANDBY
    2084    B  F  A DISK        24-JAN-2017 08:12:01 1       1       NO         TAG20170124T081153
    2085    B  F  A DISK        24-JAN-2017 08:12:02 1       1       NO         TAG20170124T081153
    2086    B  F  A DISK        24-JAN-2017 08:12:09 1       1       NO         TAG20170124T081153
    2087    B  F  A DISK        24-JAN-2017 08:12:10 1       1       NO         TAG20170124T081153


    Now let’s check the Primary.

    -- Primary
    
    RMAN> list backup summary completed after 'sysdate -1';
    
    
    List of Backups
    ===============
    Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
    ------- -- -- - ----------- -------------------- ------- ------- ---------- ---
    1981    B  A  A DISK        24-JAN-2017 08:09:46 1       1       NO         DBI_BACKUP_ON_PRIMARY
    1982    B  A  A DISK        24-JAN-2017 08:09:48 1       1       NO         DBI_BACKUP_ON_PRIMARY
    1983    B  A  A DISK        24-JAN-2017 08:09:46 1       1       NO         DBI_BACKUP_ON_PRIMARY
    2003    B  F  A DISK        24-JAN-2017 08:09:51 1       1       NO         TAG20170124T080948
    2004    B  F  A DISK        24-JAN-2017 08:09:53 1       1       NO         TAG20170124T080948
    2005    B  F  A DISK        24-JAN-2017 08:09:54 1       1       NO         TAG20170124T080948
    2006    B  F  A DISK        24-JAN-2017 08:09:54 1       1       NO         TAG20170124T080948
    2025    B  A  A DISK        24-JAN-2017 08:09:56 1       1       NO         DBI_BACKUP_ON_PRIMARY


    No … nothing there from the Standby. Only the backups done on the Primary are visible.

    Before we continue, let’s take a look at the following warning message that appeared on the standby database:

    RMAN-06820: warning: failed to archive current log at primary database

    In a DataGuard environment, it is not good practice to log on to the standby with “rman target /”. You should use username/password@connectstring instead, or Wallets, if configured. To get a good backup on the Standby,
    Oracle first attempts to archive the current online redo log at the primary, and does so again before the backup ends. That’s why you see the RMAN-06820 twice. Whenever you do backups on the standby, always use “username/password@connectstring” or Wallets.

    oracle@dbidg02:/u99/backup/DBIT122/ [DBIT122] rman target sys/Manager1@DBIT122_SITE2 catalog rman/rman@PDB1
    
    Recovery Manager: Release 12.2.0.1.0 - Production on Tue Jan 24 08:15:47 2017
    
    Copyright (c) 1982, 2016, Oracle and/or its affiliates.  All rights reserved.
    
    connected to target database: DBIT122 (DBID=653953745)
    connected to recovery catalog database
    
    RMAN> backup database plus archivelog tag 'DBI_BACKUP_ON_STANDBY' format '/u99/backup/DBIT122/%U';
    
    
    Starting backup at 24-JAN-2017 08:16:09
    current log archived at primary database
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: SID=271 device type=DISK
    allocated channel: ORA_DISK_2
    channel ORA_DISK_2: SID=16 device type=DISK
    skipping archived logs of thread 1 from sequence 64 to 72; already backed up
    channel ORA_DISK_1: starting archived log backup set
    channel ORA_DISK_1: specifying archived log(s) in backup set
    input archived log thread=1 sequence=73 RECID=300 STAMP=934100169
    channel ORA_DISK_1: starting piece 1 at 24-JAN-2017 08:16:11
    channel ORA_DISK_1: finished piece 1 at 24-JAN-2017 08:16:12
    piece handle=/u99/backup/DBIT122/4erqqf6b_1_1 tag=DBI_BACKUP_ON_STANDBY comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
    Finished backup at 24-JAN-2017 08:16:12
    
    Starting backup at 24-JAN-2017 08:16:12
    using channel ORA_DISK_1
    using channel ORA_DISK_2
    channel ORA_DISK_1: starting full datafile backup set
    channel ORA_DISK_1: specifying datafile(s) in backup set
    input datafile file number=00001 name=/u02/oradata/DBIT122_SITE2/datafile/o1_mf_system_d4zc9yd3_.dbf
    input datafile file number=00007 name=/u02/oradata/DBIT122_SITE2/datafile/o1_mf_users_d4zccmnt_.dbf
    channel ORA_DISK_1: starting piece 1 at 24-JAN-2017 08:16:13
    channel ORA_DISK_2: starting full datafile backup set
    channel ORA_DISK_2: specifying datafile(s) in backup set
    input datafile file number=00003 name=/u02/oradata/DBIT122_SITE2/datafile/o1_mf_sysaux_d4zcbr8r_.dbf
    input datafile file number=00004 name=/u02/oradata/DBIT122_SITE2/datafile/o1_mf_undotbs1_d4zccjnc_.dbf
    input datafile file number=00002 name=/u02/oradata/DBIT122_SITE2/datafile/o1_mf_dbv_d7klp0gg_.dbf
    channel ORA_DISK_2: starting piece 1 at 24-JAN-2017 08:16:13
    channel ORA_DISK_1: finished piece 1 at 24-JAN-2017 08:16:28
    piece handle=/u03/fast_recovery_area/DBIT122_SITE2/backupset/2017_01_24/o1_mf_nnndf_TAG20170124T081612_d8fzsx8o_.bkp tag=TAG20170124T081612 comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:15
    channel ORA_DISK_1: starting full datafile backup set
    channel ORA_DISK_1: specifying datafile(s) in backup set
    channel ORA_DISK_2: finished piece 1 at 24-JAN-2017 08:16:28
    piece handle=/u03/fast_recovery_area/DBIT122_SITE2/backupset/2017_01_24/o1_mf_nnndf_TAG20170124T081612_d8fzsx94_.bkp tag=TAG20170124T081612 comment=NONE
    channel ORA_DISK_2: backup set complete, elapsed time: 00:00:15
    channel ORA_DISK_2: starting full datafile backup set
    channel ORA_DISK_2: specifying datafile(s) in backup set
    including current SPFILE in backup set
    channel ORA_DISK_2: starting piece 1 at 24-JAN-2017 08:16:28
    including current control file in backup set
    channel ORA_DISK_1: starting piece 1 at 24-JAN-2017 08:16:28
    channel ORA_DISK_2: finished piece 1 at 24-JAN-2017 08:16:28
    piece handle=/u03/fast_recovery_area/DBIT122_SITE2/backupset/2017_01_24/o1_mf_nnsnf_TAG20170124T081612_d8fztdkp_.bkp tag=TAG20170124T081612 comment=NONE
    channel ORA_DISK_2: backup set complete, elapsed time: 00:00:00
    channel ORA_DISK_1: finished piece 1 at 24-JAN-2017 08:16:29
    piece handle=/u03/fast_recovery_area/DBIT122_SITE2/backupset/2017_01_24/o1_mf_ncnnf_TAG20170124T081612_d8fztdl7_.bkp tag=TAG20170124T081612 comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
    Finished backup at 24-JAN-2017 08:16:29
    
    Starting backup at 24-JAN-2017 08:16:29
    current log archived at primary database
    using channel ORA_DISK_1
    using channel ORA_DISK_2
    channel ORA_DISK_1: starting archived log backup set
    channel ORA_DISK_1: specifying archived log(s) in backup set
    input archived log thread=1 sequence=74 RECID=301 STAMP=934100189
    channel ORA_DISK_1: starting piece 1 at 24-JAN-2017 08:16:31
    channel ORA_DISK_1: finished piece 1 at 24-JAN-2017 08:16:32
    piece handle=/u99/backup/DBIT122/4jrqqf6v_1_1 tag=DBI_BACKUP_ON_STANDBY comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
    Finished backup at 24-JAN-2017 08:16:32
    
    RMAN>
    
    
    -- A message like the following popps up in the alert.log of the Primary, when Backup was done correctly.
    
    ALTER SYSTEM ARCHIVE LOG
    2017-01-24T08:16:09.244238+01:00
    LGWR: Standby redo logfile selected to archive thread 1 sequence 74
    LGWR: Standby redo logfile selected for thread 1 sequence 74 for destination LOG_ARCHIVE_DEST_2
    2017-01-24T08:16:09.264207+01:00
    Thread 1 advanced to log sequence 74 (LGWR switch)
      Current log# 2 seq# 74 mem# 0: /u02/oradata/DBIT122_SITE1/onlinelog/o1_mf_2_d4fotcnx_.log
      Current log# 2 seq# 74 mem# 1: /u03/fast_recovery_area/DBIT122_SITE1/onlinelog/o1_mf_2_d4fotkvy_.log
    2017-01-24T08:16:09.268607+01:00
    Archived Log entry 396 added for T-1.S-73 ID 0x27387e4f LAD:1
    2017-01-24T08:16:29.636421+01:00
    ALTER SYSTEM ARCHIVE LOG
    2017-01-24T08:16:29.656594+01:00
    LGWR: Standby redo logfile selected to archive thread 1 sequence 75
    LGWR: Standby redo logfile selected for thread 1 sequence 75 for destination LOG_ARCHIVE_DEST_2
    2017-01-24T08:16:29.665882+01:00
    Thread 1 advanced to log sequence 75 (LGWR switch)


    Nevertheless, we still can’t see on the Primary the backups which were done on the Standby. Now the accessible flag comes into play. If executed on the Primary, you can now see the backups which were done on the Standby.

    -- Primary
    
    RMAN> set backup files for device type disk to accessible;
    
    executing command: SET backup
    
    RMAN> list backup summary completed after 'sysdate -1';
    
    
    List of Backups
    ===============
    Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
    ------- -- -- - ----------- -------------------- ------- ------- ---------- ---
    1981    B  A  A DISK        24-JAN-2017 08:09:46 1       1       NO         DBI_BACKUP_ON_PRIMARY
    1982    B  A  A DISK        24-JAN-2017 08:09:48 1       1       NO         DBI_BACKUP_ON_PRIMARY
    1983    B  A  A DISK        24-JAN-2017 08:09:46 1       1       NO         DBI_BACKUP_ON_PRIMARY
    2003    B  F  A DISK        24-JAN-2017 08:09:51 1       1       NO         TAG20170124T080948
    2004    B  F  A DISK        24-JAN-2017 08:09:53 1       1       NO         TAG20170124T080948
    2005    B  F  A DISK        24-JAN-2017 08:09:54 1       1       NO         TAG20170124T080948
    2006    B  F  A DISK        24-JAN-2017 08:09:54 1       1       NO         TAG20170124T080948
    2025    B  A  A DISK        24-JAN-2017 08:09:56 1       1       NO         DBI_BACKUP_ON_PRIMARY
    2065    B  A  A DISK        24-JAN-2017 08:11:50 1       1       NO         DBI_BACKUP_ON_STANDBY
    2066    B  A  A DISK        24-JAN-2017 08:11:50 1       1       NO         DBI_BACKUP_ON_STANDBY
    2084    B  F  A DISK        24-JAN-2017 08:12:01 1       1       NO         TAG20170124T081153
    2085    B  F  A DISK        24-JAN-2017 08:12:02 1       1       NO         TAG20170124T081153
    2086    B  F  A DISK        24-JAN-2017 08:12:09 1       1       NO         TAG20170124T081153
    2087    B  F  A DISK        24-JAN-2017 08:12:10 1       1       NO         TAG20170124T081153
    2137    B  A  A DISK        24-JAN-2017 08:16:11 1       1       NO         DBI_BACKUP_ON_STANDBY
    2148    B  F  A DISK        24-JAN-2017 08:16:17 1       1       NO         TAG20170124T081612
    2149    B  F  A DISK        24-JAN-2017 08:16:21 1       1       NO         TAG20170124T081612
    2150    B  F  A DISK        24-JAN-2017 08:16:28 1       1       NO         TAG20170124T081612
    2151    B  F  A DISK        24-JAN-2017 08:16:28 1       1       NO         TAG20170124T081612
    2169    B  A  A DISK        24-JAN-2017 08:16:31 1       1       NO         DBI_BACKUP_ON_STANDBY
    
    RMAN>


    If executed on the Standby, you can now see the backups which were done on the Primary.


    -- Standby
    
    RMAN> set backup files for device type disk to accessible;
    
    executing command: SET backup
    
    RMAN> list backup summary completed after 'sysdate -1';
    
    
    List of Backups
    ===============
    Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
    ------- -- -- - ----------- -------------------- ------- ------- ---------- ---
    1981    B  A  A DISK        24-JAN-2017 08:09:46 1       1       NO         DBI_BACKUP_ON_PRIMARY
    1982    B  A  A DISK        24-JAN-2017 08:09:48 1       1       NO         DBI_BACKUP_ON_PRIMARY
    1983    B  A  A DISK        24-JAN-2017 08:09:46 1       1       NO         DBI_BACKUP_ON_PRIMARY
    2003    B  F  A DISK        24-JAN-2017 08:09:51 1       1       NO         TAG20170124T080948
    2004    B  F  A DISK        24-JAN-2017 08:09:53 1       1       NO         TAG20170124T080948
    2005    B  F  A DISK        24-JAN-2017 08:09:54 1       1       NO         TAG20170124T080948
    2006    B  F  A DISK        24-JAN-2017 08:09:54 1       1       NO         TAG20170124T080948
    2025    B  A  A DISK        24-JAN-2017 08:09:56 1       1       NO         DBI_BACKUP_ON_PRIMARY
    2065    B  A  A DISK        24-JAN-2017 08:11:50 1       1       NO         DBI_BACKUP_ON_STANDBY
    2066    B  A  A DISK        24-JAN-2017 08:11:50 1       1       NO         DBI_BACKUP_ON_STANDBY
    2084    B  F  A DISK        24-JAN-2017 08:12:01 1       1       NO         TAG20170124T081153
    2085    B  F  A DISK        24-JAN-2017 08:12:02 1       1       NO         TAG20170124T081153
    2086    B  F  A DISK        24-JAN-2017 08:12:09 1       1       NO         TAG20170124T081153
    2087    B  F  A DISK        24-JAN-2017 08:12:10 1       1       NO         TAG20170124T081153
    2137    B  A  A DISK        24-JAN-2017 08:16:11 1       1       NO         DBI_BACKUP_ON_STANDBY
    2148    B  F  A DISK        24-JAN-2017 08:16:17 1       1       NO         TAG20170124T081612
    2149    B  F  A DISK        24-JAN-2017 08:16:21 1       1       NO         TAG20170124T081612
    2150    B  F  A DISK        24-JAN-2017 08:16:28 1       1       NO         TAG20170124T081612
    2151    B  F  A DISK        24-JAN-2017 08:16:28 1       1       NO         TAG20170124T081612
    2169    B  A  A DISK        24-JAN-2017 08:16:31 1       1       NO         DBI_BACKUP_ON_STANDBY
    
    RMAN>

    If you don’t want to use the accessible flag, but still want to see the backups on NFS from both sites, you have another option: you can use the Oracle DISKSBT library and point it to the NFS directory.

    RMAN> CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS  'SBT_LIBRARY=oracle.disksbt,ENV=(BACKUP_DIR=/u99/backup/DBIT122)';
    
    new RMAN configuration parameters:
    CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS  'SBT_LIBRARY=oracle.disksbt,ENV=(BACKUP_DIR=/u99/backup/DBIT122)';
    new RMAN configuration parameters are successfully stored
    starting full resync of recovery catalog
    full resync complete
    
    RMAN> CONFIGURE DEFAULT DEVICE TYPE TO SBT_TAPE;
    
    old RMAN configuration parameters:
    CONFIGURE DEFAULT DEVICE TYPE TO DISK;
    new RMAN configuration parameters:
    CONFIGURE DEFAULT DEVICE TYPE TO 'SBT_TAPE';
    new RMAN configuration parameters are successfully stored
    starting full resync of recovery catalog
    full resync complete
    
    RMAN> backup database plus archivelog tag 'DBI_SBT_TAPE_BACKUP_ON_PRIMARY';
    
    
    Starting backup at 24-JAN-2017 08:22:16
    current log archived
    allocated channel: ORA_SBT_TAPE_1
    channel ORA_SBT_TAPE_1: SID=31 device type=SBT_TAPE
    channel ORA_SBT_TAPE_1: WARNING: Oracle Test Disk API
    allocated channel: ORA_SBT_TAPE_2
    channel ORA_SBT_TAPE_2: SID=36 device type=SBT_TAPE
    channel ORA_SBT_TAPE_2: WARNING: Oracle Test Disk API
    channel ORA_SBT_TAPE_1: starting compressed archived log backup set
    channel ORA_SBT_TAPE_1: specifying archived log(s) in backup set
    input archived log thread=1 sequence=64 RECID=378 STAMP=934044533
    input archived log thread=1 sequence=65 RECID=380 STAMP=934044555
    input archived log thread=1 sequence=66 RECID=382 STAMP=934045153
    input archived log thread=1 sequence=67 RECID=384 STAMP=934045183
    input archived log thread=1 sequence=68 RECID=386 STAMP=934081238
    channel ORA_SBT_TAPE_1: starting piece 1 at 24-JAN-2017 08:22:16
    channel ORA_SBT_TAPE_2: starting compressed archived log backup set
    channel ORA_SBT_TAPE_2: specifying archived log(s) in backup set
    input archived log thread=1 sequence=69 RECID=388 STAMP=934099691
    input archived log thread=1 sequence=70 RECID=390 STAMP=934099729
    input archived log thread=1 sequence=71 RECID=392 STAMP=934099785
    input archived log thread=1 sequence=72 RECID=394 STAMP=934099796
    input archived log thread=1 sequence=73 RECID=396 STAMP=934100169
    input archived log thread=1 sequence=74 RECID=398 STAMP=934100189
    channel ORA_SBT_TAPE_2: starting piece 1 at 24-JAN-2017 08:22:16
    channel ORA_SBT_TAPE_2: finished piece 1 at 24-JAN-2017 08:22:17
    piece handle=37rqqfho_1_1 tag=DBI_SBT_TAPE_BACKUP_ON_PRIMARY comment=API Version 2.0,MMS Version 8.1.3.0
    channel ORA_SBT_TAPE_2: backup set complete, elapsed time: 00:00:01
    channel ORA_SBT_TAPE_2: starting compressed archived log backup set
    channel ORA_SBT_TAPE_2: specifying archived log(s) in backup set
    input archived log thread=1 sequence=75 RECID=400 STAMP=934100536
    channel ORA_SBT_TAPE_2: starting piece 1 at 24-JAN-2017 08:22:18
    channel ORA_SBT_TAPE_2: finished piece 1 at 24-JAN-2017 08:22:19
    piece handle=38rqqfhq_1_1 tag=DBI_SBT_TAPE_BACKUP_ON_PRIMARY comment=API Version 2.0,MMS Version 8.1.3.0
    channel ORA_SBT_TAPE_2: backup set complete, elapsed time: 00:00:01
    channel ORA_SBT_TAPE_1: finished piece 1 at 24-JAN-2017 08:22:25
    piece handle=36rqqfho_1_1 tag=DBI_SBT_TAPE_BACKUP_ON_PRIMARY comment=API Version 2.0,MMS Version 8.1.3.0
    channel ORA_SBT_TAPE_1: backup set complete, elapsed time: 00:00:09
    Finished backup at 24-JAN-2017 08:22:25
    
    Starting backup at 24-JAN-2017 08:22:25
    using channel ORA_SBT_TAPE_1
    using channel ORA_SBT_TAPE_2
    channel ORA_SBT_TAPE_1: starting compressed full datafile backup set
    channel ORA_SBT_TAPE_1: specifying datafile(s) in backup set
    input datafile file number=00001 name=/u02/oradata/DBIT122_SITE1/datafile/o1_mf_system_d4fnthn8_.dbf
    input datafile file number=00007 name=/u02/oradata/DBIT122_SITE1/datafile/o1_mf_users_d4fnx3bd_.dbf
    channel ORA_SBT_TAPE_1: starting piece 1 at 24-JAN-2017 08:22:25
    channel ORA_SBT_TAPE_2: starting compressed full datafile backup set
    channel ORA_SBT_TAPE_2: specifying datafile(s) in backup set
    input datafile file number=00003 name=/u02/oradata/DBIT122_SITE1/datafile/o1_mf_sysaux_d4fnvx0w_.dbf
    input datafile file number=00004 name=/u02/oradata/DBIT122_SITE1/datafile/o1_mf_undotbs1_d4fnx03r_.dbf
    input datafile file number=00002 name=/u02/oradata/DBIT122_SITE1/datafile/o1_mf_dbv_d7klp09z_.dbf
    channel ORA_SBT_TAPE_2: starting piece 1 at 24-JAN-2017 08:22:25
    channel ORA_SBT_TAPE_1: finished piece 1 at 24-JAN-2017 08:22:51
    piece handle=39rqqfi1_1_1 tag=TAG20170124T082225 comment=API Version 2.0,MMS Version 8.1.3.0
    channel ORA_SBT_TAPE_1: backup set complete, elapsed time: 00:00:26
    channel ORA_SBT_TAPE_1: starting compressed full datafile backup set
    channel ORA_SBT_TAPE_1: specifying datafile(s) in backup set
    channel ORA_SBT_TAPE_2: finished piece 1 at 24-JAN-2017 08:22:51
    piece handle=3arqqfi1_1_1 tag=TAG20170124T082225 comment=API Version 2.0,MMS Version 8.1.3.0
    channel ORA_SBT_TAPE_2: backup set complete, elapsed time: 00:00:26
    channel ORA_SBT_TAPE_2: starting compressed full datafile backup set
    channel ORA_SBT_TAPE_2: specifying datafile(s) in backup set
    including current SPFILE in backup set
    channel ORA_SBT_TAPE_2: starting piece 1 at 24-JAN-2017 08:22:51
    including current control file in backup set
    channel ORA_SBT_TAPE_1: starting piece 1 at 24-JAN-2017 08:22:52
    channel ORA_SBT_TAPE_2: finished piece 1 at 24-JAN-2017 08:22:52
    piece handle=3crqqfir_1_1 tag=TAG20170124T082225 comment=API Version 2.0,MMS Version 8.1.3.0
    channel ORA_SBT_TAPE_2: backup set complete, elapsed time: 00:00:01
    channel ORA_SBT_TAPE_1: finished piece 1 at 24-JAN-2017 08:22:53
    piece handle=3brqqfir_1_1 tag=TAG20170124T082225 comment=API Version 2.0,MMS Version 8.1.3.0
    channel ORA_SBT_TAPE_1: backup set complete, elapsed time: 00:00:01
    Finished backup at 24-JAN-2017 08:22:53
    
    Starting backup at 24-JAN-2017 08:22:53
    current log archived
    using channel ORA_SBT_TAPE_1
    using channel ORA_SBT_TAPE_2
    channel ORA_SBT_TAPE_1: starting compressed archived log backup set
    channel ORA_SBT_TAPE_1: specifying archived log(s) in backup set
    input archived log thread=1 sequence=76 RECID=402 STAMP=934100573
    channel ORA_SBT_TAPE_1: starting piece 1 at 24-JAN-2017 08:22:54
    channel ORA_SBT_TAPE_1: finished piece 1 at 24-JAN-2017 08:22:55
    piece handle=3drqqfiu_1_1 tag=DBI_SBT_TAPE_BACKUP_ON_PRIMARY comment=API Version 2.0,MMS Version 8.1.3.0
    channel ORA_SBT_TAPE_1: backup set complete, elapsed time: 00:00:01
    Finished backup at 24-JAN-2017 08:22:55

    Now the backups on NFS are accessible without the accessible flag. It is not needed for backups to SBT_TAPE; that is the default behavior. :-)

    RMAN> list backup summary completed after 'sysdate -1';
    
    
    List of Backups
    ===============
    Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
    ------- -- -- - ----------- -------------------- ------- ------- ---------- ---
    1981    B  A  A DISK        24-JAN-2017 08:09:46 1       1       NO         DBI_BACKUP_ON_PRIMARY
    1982    B  A  A DISK        24-JAN-2017 08:09:48 1       1       NO         DBI_BACKUP_ON_PRIMARY
    1983    B  A  A DISK        24-JAN-2017 08:09:46 1       1       NO         DBI_BACKUP_ON_PRIMARY
    2003    B  F  A DISK        24-JAN-2017 08:09:51 1       1       NO         TAG20170124T080948
    2004    B  F  A DISK        24-JAN-2017 08:09:53 1       1       NO         TAG20170124T080948
    2005    B  F  A DISK        24-JAN-2017 08:09:54 1       1       NO         TAG20170124T080948
    2006    B  F  A DISK        24-JAN-2017 08:09:54 1       1       NO         TAG20170124T080948
    2025    B  A  A DISK        24-JAN-2017 08:09:56 1       1       NO         DBI_BACKUP_ON_PRIMARY
    2065    B  A  A DISK        24-JAN-2017 08:11:50 1       1       NO         DBI_BACKUP_ON_STANDBY
    2066    B  A  A DISK        24-JAN-2017 08:11:50 1       1       NO         DBI_BACKUP_ON_STANDBY
    2084    B  F  A DISK        24-JAN-2017 08:12:01 1       1       NO         TAG20170124T081153
    2085    B  F  A DISK        24-JAN-2017 08:12:02 1       1       NO         TAG20170124T081153
    2086    B  F  A DISK        24-JAN-2017 08:12:09 1       1       NO         TAG20170124T081153
    2087    B  F  A DISK        24-JAN-2017 08:12:10 1       1       NO         TAG20170124T081153
    2137    B  A  A DISK        24-JAN-2017 08:16:11 1       1       NO         DBI_BACKUP_ON_STANDBY
    2148    B  F  A DISK        24-JAN-2017 08:16:17 1       1       NO         TAG20170124T081612
    2149    B  F  A DISK        24-JAN-2017 08:16:21 1       1       NO         TAG20170124T081612
    2150    B  F  A DISK        24-JAN-2017 08:16:28 1       1       NO         TAG20170124T081612
    2151    B  F  A DISK        24-JAN-2017 08:16:28 1       1       NO         TAG20170124T081612
    2169    B  A  A DISK        24-JAN-2017 08:16:31 1       1       NO         DBI_BACKUP_ON_STANDBY
    2312    B  A  A SBT_TAPE    24-JAN-2017 08:22:17 1       1       YES        DBI_SBT_TAPE_BACKUP_ON_PRIMARY
    2313    B  A  A SBT_TAPE    24-JAN-2017 08:22:18 1       1       YES        DBI_SBT_TAPE_BACKUP_ON_PRIMARY
    2314    B  A  A SBT_TAPE    24-JAN-2017 08:22:20 1       1       YES        DBI_SBT_TAPE_BACKUP_ON_PRIMARY
    2345    B  F  A SBT_TAPE    24-JAN-2017 08:22:44 1       1       YES        TAG20170124T082225
    2346    B  F  A SBT_TAPE    24-JAN-2017 08:22:46 1       1       YES        TAG20170124T082225
    2347    B  F  A SBT_TAPE    24-JAN-2017 08:22:51 1       1       YES        TAG20170124T082225
    2348    B  F  A SBT_TAPE    24-JAN-2017 08:22:52 1       1       YES        TAG20170124T082225
    2377    B  A  A SBT_TAPE    24-JAN-2017 08:22:54 1       1       YES        DBI_SBT_TAPE_BACKUP_ON_PRIMARY
    
    RMAN>
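
    To get the same view from the standby site, you can simply apply the same SBT configuration on SITE2. A minimal sketch, assuming the NFS share /u99/backup/DBIT122 is mounted identically on the standby host:

    RMAN> CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS 'SBT_LIBRARY=oracle.disksbt,ENV=(BACKUP_DIR=/u99/backup/DBIT122)';
    RMAN> CONFIGURE DEFAULT DEVICE TYPE TO SBT_TAPE;
    RMAN> list backup summary completed after 'sysdate -1';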

     

    Conclusion

    Backups to NFS are perfectly possible. In a DataGuard environment it is very useful to see the backups from both the primary and the standby: you never know which host might fail, so it is good to have all options.

     

    The article Oracle 12cR2 – DataGuard and Backups to NFS appeared first on Blog dbi services.

    Exadata Express Cloud Service: PDB_ADMIN privileges

    I usually explain that the main point of the Multitenant Architecture is the separation of roles between a system DBA, who manages the container and is responsible for its availability, and an application DBA, who owns all the application objects and is responsible for data integrity and performance. The Exadata Express Cloud Service is the implementation of that: a CDB managed by Oracle and a PDB that you administer. But the fact that the system is shared, over the internet, brings the necessity for new limitations. Let’s see what you can and cannot do as a PDB administrator.

    When you create a pluggable database you need to provide a local user and its password. When you create an Exadata Express Cloud Service, the PDB_ADMIN user is created but the account is locked. You need to provide your password in order to unlock it. This PDB_ADMIN is then the only user for which you know the password.

    But what does ‘PDB administrator’ really mean? Which privileges do you have?

    System privileges

    PDB_ADMIN looks like a DBA. He is not granted ‘DBA’ but ‘PDB_DBA’ and many other roles.
    Let’s see which system privileges are not granted, either directly or through a path of roles:

    with my_user_roles(grantee,granted_role) as (
     select grantee,granted_role from dba_role_privs where grantee='PDB_ADMIN'
     union all
     select r.grantee,r.granted_role from dba_role_privs r join my_user_roles u on r.grantee=u.granted_role
    ) select listagg(privilege,',') within group(order by privilege) from (
     select distinct privilege from dba_sys_privs
     minus
     select distinct privilege from dba_sys_privs where grantee in (select granted_role from my_user_roles)
    );
     
    LISTAGG(PRIVILEGE,',')WITHINGROUP(ORDERBYPRIVILEGE)
    ---------------------------------------------------
    ADMINISTER KEY MANAGEMENT,BACKUP ANY TABLE,BECOME USER,CREATE ANY CREDENTIAL,CREATE ANY DIRECTORY,CREATE ANY JOB,CREATE ANY LIBRARY,CREATE CREDENTIAL,CREATE DATABASE LINK,CREATE EXTERNAL JOB,CREATE LIBRARY,CREATE PUBLIC DATABASE LINK,DROP ANY DIRECTORY,DROP PUBLIC DATABASE LINK,EM EXPRESS CONNECT,EXECUTE ANY CLASS,EXECUTE ANY PROGRAM,EXEMPT ACCESS POLICY,EXEMPT DDL REDACTION POLICY,EXEMPT DML REDACTION POLICY,EXEMPT REDACTION POLICY,EXPORT FULL DATABASE,FLASHBACK ANY TABLE,FLASHBACK ARCHIVE ADMINISTER,FORCE TRANSACTION,GRANT ANY PRIVILEGE,GRANT ANY ROLE,IMPORT FULL DATABASE,INHERIT ANY PRIVILEGES,LOGMINING,MANAGE ANY FILE GROUP,MANAGE FILE GROUP,MANAGE SCHEDULER,SELECT ANY TRANSACTION,UNLIMITED TABLESPACE,USE ANY JOB RESOURCE,USE ANY SQL TRANSLATION PROFILE

    So your PDB_ADMIN has some system privileges. For example you can gather system statistics:

    SQL> exec dbms_stats.gather_system_stats;
    PL/SQL procedure successfully completed.
     
    SNAME PNAME PVAL1 PVAL2
    ----- ----- ----- -----
    SYSSTATS_INFO STATUS COMPLETED
    SYSSTATS_INFO DSTART 01-24-2017 19:48
    SYSSTATS_INFO DSTOP 01-24-2017 19:48
    SYSSTATS_INFO FLAGS 1
    SYSSTATS_MAIN CPUSPEEDNW 3097
    SYSSTATS_MAIN IOSEEKTIM 7
    SYSSTATS_MAIN IOTFRSPEED 4096

    but you cannot create database links or directories, which may be considered application objects:

    create directory mydir as '/tmp'
    ORA-01031: insufficient privileges
     
    SQL> create database link mydblink connect to pdb_admin identified by "Ach1z0#d" using '//localhost';
    ORA-01031: insufficient privileges

    What you should understand is that you have a PDB in a shared CDB: it is yours, and you can do whatever you want as long as it has no side effects on your neighbours. You will see that communication with the external world (network, filesystem, host server) is very limited.

    Lockdown profiles

    Now we will see that the restrictions go beyond what you can see in DBA_SYS_PRIVS and DBA_ROLE_PRIVS.

    From my query above, I have the ALTER SYSTEM privilege, so I can change some parameters:

    SQL> alter system set temp_undo_enabled=true;
    System SET altered.
    SQL> alter system set cursor_sharing=exact;
    System SET altered.

    However, some parameters cannot be set:

    SQL> alter session set max_idle_time=60;
    ORA-01031: insufficient privileges
    SQL> alter system set sga_target=3G;
    ORA-01031: insufficient privileges
    SQL> alter system set sql_trace=true;
    ORA-01031: insufficient privileges

    and most of the other ALTER SYSTEM statements are forbidden:

    SQL> alter system flush shared_pool;
    ORA-01031: insufficient privileges

    This is forbidden by a multitenant lockdown profile, which gives a finer grain than privileges: it disables some statements, statement clauses, or options. The bad thing about it is that you don’t know what you are allowed to do or not. You always get the same ‘insufficient privileges’ message, and the detail is stored only in CDB$ROOT. From the PDB:

    SQL> select * from dba_lockdown_profiles;
    no rows selected

    the only thing that can be known from the PDB is the name of the lockdown profile:

    SQL> show parameter lockdown
    NAME TYPE VALUE
    ------------ ------ -----
    pdb_lockdown string S20

    That’s not a coincidence. I’m on an ‘S20’ service (30 GB storage, 3 GB SGA) and the lockdown profile is associated with the service level.
    Of course, you cannot change it:

    SQL> alter system set pdb_lockdown='S50';
    ORA-32017: failure in updating SPFILE
    ORA-01031: insufficient privileges
     
    SQL> alter system set pdb_lockdown='S50' scope=memory;
    ORA-01031: insufficient privileges

    Always the same message… Before 12cR2 you could just query the system privileges to understand why you get this message. Now, you need to ask your CDB administrator. Of course, for the Exadata Express Cloud Service, this is documented in the limitations.
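
    For reference, here is roughly what the CDB administrator could run from CDB$ROOT to list the rules of the profile (a sketch based on the 12.2 DBA_LOCKDOWN_PROFILES view; I obviously cannot run it from my PDB):

    SQL> select rule_type, rule, clause, status
      2  from dba_lockdown_profiles
      3  where profile_name = 'S20'
      4  order by rule_type, rule;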

    Lockdown profiles are not only for ALTER SYSTEM.
    There are a few things that you cannot do with ALTER SESSION either, such as setting SQL trace or any event; setting any underscore parameter is also forbidden. ALTER DATABASE and ALTER PLUGGABLE DATABASE allow only OPEN/CLOSE, or changing the default edition, default tablespace and temporary tablespace. On datafiles, you can only resize and set autoextend on/off. You can also set the time zone.
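
    Based on that list, statements like the following should be accepted, while anything else returns ORA-01031 (a sketch; I have not verified every clause on the service):

    SQL> alter database default tablespace SYSEXT;       -- change default tablespace: allowed
    SQL> alter database datafile 12 resize 100M;         -- resize: allowed
    SQL> alter database datafile 12 autoextend on;       -- autoextend on/off: allowed
    SQL> alter database set time_zone = 'Europe/Zurich'; -- time zone: allowed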

    Well, I’m not completely sure about CLOSE:

    SQL> alter pluggable database close immediate;
    ORA-01031: insufficient privileges
    alter pluggable database open
    ORA-65019: pluggable database EPTDOJVM1KG already open

    Lockdown profiles go beyond enabling or disabling statements. They can also disable a few features that you see mentioned in the ‘limitations': you can’t create a manual AWR snapshot, can’t access the OS files in any way (UTL_FILE is disabled), and can’t use any network protocol (UTL_TCP, UTL_MAIL, …).
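
    A quick way to see such a feature lockdown in action is to try one of the disabled packages (a sketch; the call is expected to fail, though the exact error message may differ):

    SQL> declare
      2    c utl_tcp.connection;
      3  begin
      4    c := utl_tcp.open_connection(remote_host => '127.0.0.1', remote_port => 80);
      5  end;
      6  /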

    So what?

    I’m a DBA, and connecting to a database where I’m not SYSDBA is a bit frustrating. PDB_ADMIN is granted SYSOPER but is locked in the password file. I’ve unlocked PDB_ADMIN, but it is still EXPIRED & LOCKED in the password file:

    SQL> select * from dba_users where username='PDB_ADMIN';
     
    USERNAME USER_ID PASSWORD ACCOUNT_STATUS LOCK_DATE EXPIRY_DATE DEFAULT_TABLESPACE TEMPORARY_TABLESPACE LOCAL_TEMP_TABLESPACE CREATED PROFILE INITIAL_RSRC_CONSUMER_GROUP EXTERNAL_NAME PASSWORD_VERSIONS EDITIONS_ENABLED AUTHENTICATION_TYPE PROXY_ONLY_CONNECT COMMON LAST_LOGIN ORACLE_MAINTAINED INHERITED DEFAULT_COLLATION IMPLICIT
    -------- ------- -------- -------------- --------- ----------- ------------------ -------------------- --------------------- ------- ------- --------------------------- ------------- ----------------- ---------------- ------------------- ------------------ ------ ---------- ----------------- --------- ----------------- --------
    PDB_ADMIN 94 OPEN 22-JUL-2017 19:17:41 SYSEXT TEMP TEMP 27-JUL-2016 17:05:02 DEFAULT DEFAULT_CONSUMER_GROUP 11G 12C N PASSWORD N NO 24-JAN-17 08.46.02.000000000 PM EUROPE/BERLIN N NO USING_NLS_COMP NO
     
    SQL> select * from v$pwfile_users where username='PDB_ADMIN';
     
    USERNAME SYSDBA SYSOPER SYSASM SYSBACKUP SYSDG SYSKM ACCOUNT_STATUS PASSWORD_PROFILE LAST_LOGIN LOCK_DATE EXPIRY_DATE EXTERNAL_NAME AUTHENTICATION_TYPE COMMON CON_ID
    -------- ------ ------- ------ --------- ----- ----- -------------- ---------------- ---------- --------- ----------- ------------- ------------------- ------ ------
    PDB_ADMIN FALSE TRUE FALSE FALSE FALSE FALSE EXPIRED & LOCKED DEFAULT 07-JAN-2017 04:47:03 07-JAN-2017 04:47:03 PASSWORD NO 47

    Keep in mind that this service is not for system DBAs; you have the DBaaS for that. This is an Oracle-managed PDBaaS for developers, and developers get a lot of freedom here: the ability to test any feature, any optimizer setting, etc. This is a lot more than what Amazon RDS allows. You don’t have file or network access, but this is for small databases, and it is totally integrated with SQL Developer to move data. No SQL trace, no tkprof, but you have SQL Monitor. This is a very interesting platform for developing small projects, testing 12cR2 features, or building prototypes. It is very easy and fast to start, and you can connect and code with APEX, .Net, Java, OCI…

     

    The article Exadata Express Cloud Service: PDB_ADMIN privileges appeared first on Blog dbi services.


    Documentum D2 4.5 and IE compatibility and F5

    We had a problem with a customer where D2 was not loading properly in IE when going through F5 (a load balancer). When trying to access D2 through the F5, say https://d2prod/D2, only a few menus and some parts of the workspace were loading, and it ended up saying “Unexpected error occurred”.

    Investigation

    It would have been too easy if this error appeared in the logs, but it didn’t. That means it was not a D2 internal error, but maybe something in the interface or in the way it loads in IE. Because, fun fact, it was loading properly in Chrome. Additional fun fact: when using a superuser account it was also loading properly in IE!

    As it was an interface error, I used the IE debugging tools (F12). At first I didn’t see the error in the console, but when digging a bit through all the verbose logs I found this:

    SEVERE: An unexpected error occurred. Please refresh your browser
    com.google.gwt.core.client.JavaScriptException: (TypeError) 
     description: Object doesn't support property or method 'querySelectorAll'
    number: -2146827850: Object doesn't support property or method 'querySelectorAll'

    After some research I figured out that others had issues with “querySelectorAll” and IE. In fact, it depends on the version of IE used, because this function was not available prior to IE 9.

    Hence I came to the idea that my IE was not in the right compatibility mode; as I had IE 11, it couldn’t be a version mismatch.

    Fortunately, thanks to the F12 console, you can change the compatibility mode:

    [Screenshot: F12 console with the document mode forced to 8]

    As I thought, the compatibility mode was set (and locked) to 8, which does not support “querySelectorAll”. But I couldn’t change it to a higher value. Then I figured this out:

    [Screenshot: F12 console showing Enterprise Mode enabled]

    I was in Enterprise Mode. This mode forces the compatibility version, among other things. Fortunately you can disable it in the browser by going into the “Tools” menu of IE. Then, like magic, I was able to switch to compatibility version 10:

    [Screenshot: F12 console with the document mode switched to 10]

    And, miracle: the D2 interface reloaded properly, with all menus and workspaces. Remember that it was working with superuser accounts? In fact, when using a superuser account the Enterprise Mode was not activated and the compatibility version was set to 10.

    The question is, why was it forced to 8?

    Solution

    In fact, it was customer related. They had a policy rule applying to the old D2 (3.1) which needed the Enterprise Mode and the compatibility mode set to 8. So when using the old DNS link to point to the new D2, these modes were still applied.

    So we asked them to disable the Enterprise Mode, and the compatibility mode returned to 10 by default. Be careful with IE in your company ;)

     

    The article Documentum D2 4.5 and IE compatibility and F5 appeared first on Blog dbi services.

    Oracle 12cR2 – RMAN cold backup with TAG’s

    I am planning to back up my 12cR2 container database, because a huge application change is coming up,
    and I want to be sure that I have a good RMAN backup beforehand. For that particular DB, I want to do it with a cold backup in combination with RMAN tags. Unfortunately I don’t have any backups at the moment, so I start with a full backup with the tag ‘DBI_BACKUP’ to be 100% sure that I restore the correct one.

    RMAN> list backup summary;
    
    specification does not match any backup in the repository
    
    RMAN> shutdown immediate
    
    database closed
    database dismounted
    Oracle instance shut down
    
    RMAN> startup mount
    
    connected to target database (not started)
    Oracle instance started
    database mounted
    
    Total System Global Area    1795162112 bytes
    
    Fixed Size                     8793832 bytes
    Variable Size                553648408 bytes
    Database Buffers            1224736768 bytes
    Redo Buffers                   7983104 bytes
    
    RMAN> run
        {
             allocate channel c1 device type disk format '/u99/backup/CDB/database_%U';
         allocate channel c2 device type disk format '/u99/backup/CDB/database_%U';
             allocate channel c3 device type disk format '/u99/backup/CDB/database_%U';
         allocate channel c4 device type disk format '/u99/backup/CDB/database_%U';
         BACKUP INCREMENTAL LEVEL 0 FORCE AS COMPRESSED BACKUPSET DATABASE plus archivelog tag 'DBI_BACKUP';
             backup current controlfile tag 'DBI_BACKUP' format '/u99/backup/CDB/control_%U';
             backup spfile tag 'DBI_BACKUP' format '/u99/backup/CDB/spfile_%U';
             release channel c1;
             release channel c2;
             release channel c3;
             release channel c4;
        }2> 3> 4> 5> 6> 7> 8> 9> 10> 11> 12> 13> 14>
    
    allocated channel: c1
    channel c1: SID=16 device type=DISK
    
    allocated channel: c2
    channel c2: SID=271 device type=DISK
    
    allocated channel: c3
    channel c3: SID=31 device type=DISK
    
    allocated channel: c4
    channel c4: SID=272 device type=DISK
    
    
    Starting backup at 26-JAN-2017 13:18:53
    current log archived
    channel c1: starting compressed archived log backup set
    channel c1: specifying archived log(s) in backup set
    input archived log thread=1 sequence=4 RECID=3 STAMP=934074668
    input archived log thread=1 sequence=5 RECID=4 STAMP=934154679
    channel c1: starting piece 1 at 26-JAN-2017 13:18:53
    channel c2: starting compressed archived log backup set
    channel c2: specifying archived log(s) in backup set
    input archived log thread=1 sequence=2 RECID=1 STAMP=934038010
    input archived log thread=1 sequence=3 RECID=2 STAMP=934066843
    channel c2: starting piece 1 at 26-JAN-2017 13:18:53
    channel c3: starting compressed archived log backup set
    channel c3: specifying archived log(s) in backup set
    input archived log thread=1 sequence=6 RECID=5 STAMP=934203623
    input archived log thread=1 sequence=7 RECID=6 STAMP=934275778
    input archived log thread=1 sequence=8 RECID=7 STAMP=934284094
    channel c3: starting piece 1 at 26-JAN-2017 13:18:53
    channel c4: starting compressed archived log backup set
    channel c4: specifying archived log(s) in backup set
    input archived log thread=1 sequence=9 RECID=8 STAMP=934284153
    input archived log thread=1 sequence=10 RECID=9 STAMP=934284199
    input archived log thread=1 sequence=11 RECID=10 STAMP=934291133
    channel c4: starting piece 1 at 26-JAN-2017 13:18:53
    channel c4: finished piece 1 at 26-JAN-2017 13:18:54
    piece handle=/u99/backup/CDB/database_2arr09lt_1_1 tag=DBI_BACKUP comment=NONE
    channel c4: backup set complete, elapsed time: 00:00:01
    channel c1: finished piece 1 at 26-JAN-2017 13:19:08
    piece handle=/u99/backup/CDB/database_27rr09lt_1_1 tag=DBI_BACKUP comment=NONE
    channel c1: backup set complete, elapsed time: 00:00:15
    channel c2: finished piece 1 at 26-JAN-2017 13:19:08
    piece handle=/u99/backup/CDB/database_28rr09lt_1_1 tag=DBI_BACKUP comment=NONE
    channel c2: backup set complete, elapsed time: 00:00:15
    channel c3: finished piece 1 at 26-JAN-2017 13:19:08
    piece handle=/u99/backup/CDB/database_29rr09lt_1_1 tag=DBI_BACKUP comment=NONE
    channel c3: backup set complete, elapsed time: 00:00:15
    Finished backup at 26-JAN-2017 13:19:08
    
    Starting backup at 26-JAN-2017 13:19:08
    channel c1: starting compressed incremental level 0 datafile backup set
    channel c1: specifying datafile(s) in backup set
    input datafile file number=00001 name=/u02/oradata/CDB/datafile/o1_mf_system_d81c2wsf_.dbf
    channel c1: starting piece 1 at 26-JAN-2017 13:19:09
    channel c2: starting compressed incremental level 0 datafile backup set
    channel c2: specifying datafile(s) in backup set
    input datafile file number=00003 name=/u02/oradata/CDB/datafile/o1_mf_sysaux_d81c49wd_.dbf
    channel c2: starting piece 1 at 26-JAN-2017 13:19:09
    channel c3: starting compressed incremental level 0 datafile backup set
    channel c3: specifying datafile(s) in backup set
    input datafile file number=00010 name=/u02/oradata/CDB/46727C2ED8612B70E053CB38A8C078C9/datafile/o1_mf_sysaux_d81cgjc2_.dbf
    channel c3: starting piece 1 at 26-JAN-2017 13:19:09
    channel c4: starting compressed incremental level 0 datafile backup set
    channel c4: specifying datafile(s) in backup set
    input datafile file number=00009 name=/u02/oradata/CDB/46727C2ED8612B70E053CB38A8C078C9/datafile/o1_mf_system_d81cgjbv_.dbf
    input datafile file number=00011 name=/u02/oradata/CDB/46727C2ED8612B70E053CB38A8C078C9/datafile/o1_mf_undotbs1_d81cgjc2_.dbf
    channel c4: starting piece 1 at 26-JAN-2017 13:19:09
    channel c4: finished piece 1 at 26-JAN-2017 13:19:24
    piece handle=/u99/backup/CDB/database_2err09md_1_1 tag=TAG20170126T131908 comment=NONE
    channel c4: backup set complete, elapsed time: 00:00:15
    channel c4: starting compressed incremental level 0 datafile backup set
    channel c4: specifying datafile(s) in backup set
    input datafile file number=00006 name=/u02/oradata/CDB/datafile/o1_mf_sysaux_d81c6fqn_.dbf
    channel c4: starting piece 1 at 26-JAN-2017 13:19:24
    channel c3: finished piece 1 at 26-JAN-2017 13:19:39
    piece handle=/u99/backup/CDB/database_2drr09md_1_1 tag=TAG20170126T131908 comment=NONE
    channel c3: backup set complete, elapsed time: 00:00:30
    channel c3: starting compressed incremental level 0 datafile backup set
    channel c3: specifying datafile(s) in backup set
    input datafile file number=00013 name=/u02/oradata/CDB/46727C2ED8612B70E053CB38A8C078C9/datafile/o1_mf_rman_d8ccofgs_.dbf
    input datafile file number=00012 name=/u02/oradata/CDB/46727C2ED8612B70E053CB38A8C078C9/datafile/o1_mf_users_d81cgq9f_.dbf
    channel c3: starting piece 1 at 26-JAN-2017 13:19:39
    channel c3: finished piece 1 at 26-JAN-2017 13:19:40
    piece handle=/u99/backup/CDB/database_2grr09nb_1_1 tag=TAG20170126T131908 comment=NONE
    channel c3: backup set complete, elapsed time: 00:00:01
    channel c3: starting compressed incremental level 0 datafile backup set
    channel c3: specifying datafile(s) in backup set
    input datafile file number=00005 name=/u02/oradata/CDB/datafile/o1_mf_system_d81c6fqo_.dbf
    channel c3: starting piece 1 at 26-JAN-2017 13:19:41
    channel c2: finished piece 1 at 26-JAN-2017 13:19:41
    piece handle=/u99/backup/CDB/database_2crr09md_1_1 tag=TAG20170126T131908 comment=NONE
    channel c2: backup set complete, elapsed time: 00:00:32
    channel c2: starting compressed incremental level 0 datafile backup set
    channel c2: specifying datafile(s) in backup set
    input datafile file number=00008 name=/u02/oradata/CDB/datafile/o1_mf_undotbs1_d81c6fqp_.dbf
    channel c2: starting piece 1 at 26-JAN-2017 13:19:41
    channel c2: finished piece 1 at 26-JAN-2017 13:19:44
    piece handle=/u99/backup/CDB/database_2irr09nd_1_1 tag=TAG20170126T131908 comment=NONE
    channel c2: backup set complete, elapsed time: 00:00:03
    channel c2: starting compressed incremental level 0 datafile backup set
    channel c2: specifying datafile(s) in backup set
    input datafile file number=00004 name=/u02/oradata/CDB/datafile/o1_mf_undotbs1_d81c530h_.dbf
    channel c2: starting piece 1 at 26-JAN-2017 13:19:44
    channel c2: finished piece 1 at 26-JAN-2017 13:19:45
    piece handle=/u99/backup/CDB/database_2jrr09ng_1_1 tag=TAG20170126T131908 comment=NONE
    channel c2: backup set complete, elapsed time: 00:00:01
    channel c2: starting compressed incremental level 0 datafile backup set
    channel c2: specifying datafile(s) in backup set
    input datafile file number=00007 name=/u02/oradata/CDB/datafile/o1_mf_users_d81c542r_.dbf
    channel c2: starting piece 1 at 26-JAN-2017 13:19:45
    channel c2: finished piece 1 at 26-JAN-2017 13:19:46
    piece handle=/u99/backup/CDB/database_2krr09nh_1_1 tag=TAG20170126T131908 comment=NONE
    channel c2: backup set complete, elapsed time: 00:00:01
    channel c1: finished piece 1 at 26-JAN-2017 13:19:52
    piece handle=/u99/backup/CDB/database_2brr09md_1_1 tag=TAG20170126T131908 comment=NONE
    channel c1: backup set complete, elapsed time: 00:00:43
    channel c3: finished piece 1 at 26-JAN-2017 13:19:52
    piece handle=/u99/backup/CDB/database_2hrr09nd_1_1 tag=TAG20170126T131908 comment=NONE
    channel c3: backup set complete, elapsed time: 00:00:11
    channel c4: finished piece 1 at 26-JAN-2017 13:19:52
    piece handle=/u99/backup/CDB/database_2frr09ms_1_1 tag=TAG20170126T131908 comment=NONE
    channel c4: backup set complete, elapsed time: 00:00:28
    Finished backup at 26-JAN-2017 13:19:52
    
    Starting backup at 26-JAN-2017 13:19:52
    current log archived
    channel c1: starting compressed archived log backup set
    channel c1: specifying archived log(s) in backup set
    input archived log thread=1 sequence=12 RECID=11 STAMP=934291192
    channel c1: starting piece 1 at 26-JAN-2017 13:19:53
    channel c1: finished piece 1 at 26-JAN-2017 13:19:54
    piece handle=/u99/backup/CDB/database_2lrr09np_1_1 tag=DBI_BACKUP comment=NONE
    channel c1: backup set complete, elapsed time: 00:00:01
    Finished backup at 26-JAN-2017 13:19:54
    
    Starting backup at 26-JAN-2017 13:19:54
    channel c1: starting full datafile backup set
    channel c1: specifying datafile(s) in backup set
    including current control file in backup set
    channel c1: starting piece 1 at 26-JAN-2017 13:19:55
    channel c1: finished piece 1 at 26-JAN-2017 13:19:56
    piece handle=/u99/backup/CDB/control_2mrr09nq_1_1 tag=DBI_BACKUP comment=NONE
    channel c1: backup set complete, elapsed time: 00:00:01
    Finished backup at 26-JAN-2017 13:19:56
    
    Starting backup at 26-JAN-2017 13:19:56
    channel c1: starting full datafile backup set
    channel c1: specifying datafile(s) in backup set
    including current SPFILE in backup set
    channel c1: starting piece 1 at 26-JAN-2017 13:19:56
    channel c1: finished piece 1 at 26-JAN-2017 13:19:57
    piece handle=/u99/backup/CDB/spfile_2nrr09ns_1_1 tag=DBI_BACKUP comment=NONE
    channel c1: backup set complete, elapsed time: 00:00:01
    Finished backup at 26-JAN-2017 13:19:57
    
    Starting Control File and SPFILE Autobackup at 26-JAN-2017 13:19:57
    piece handle=/u03/fast_recovery_area/CDB/autobackup/2017_01_26/o1_mf_s_934291197_d8mtcfjz_.bkp comment=NONE
    Finished Control File and SPFILE Autobackup at 26-JAN-2017 13:19:58
    
    released channel: c1
    
    released channel: c2
    
    released channel: c3
    
    released channel: c4
    
    RMAN>

    After the backup is done, I do a quick “list backup summary” to see if everything is there, and I also check the destination directory.

    RMAN> list backup summary tag 'DBI_BACKUP';
    
    List of Backups
    ===============
    Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
    ------- -- -- - ----------- -------------------- ------- ------- ---------- ---
    67      B  A  A DISK        26-JAN-2017 13:18:54 1       1       YES        DBI_BACKUP
    68      B  A  A DISK        26-JAN-2017 13:19:02 1       1       YES        DBI_BACKUP
    69      B  A  A DISK        26-JAN-2017 13:19:07 1       1       YES        DBI_BACKUP
    70      B  A  A DISK        26-JAN-2017 13:19:07 1       1       YES        DBI_BACKUP
    81      B  A  A DISK        26-JAN-2017 13:19:53 1       1       YES        DBI_BACKUP
    82      B  F  A DISK        26-JAN-2017 13:19:55 1       1       NO         DBI_BACKUP
    83      B  F  A DISK        26-JAN-2017 13:19:56 1       1       NO         DBI_BACKUP
    
    RMAN>
    
    oracle@dbidg03:/u99/backup/CDB/ [CDB] ls -l
    total 975304
    -rw-r----- 1 oracle oinstall  18792448 Jan 26 13:19 control_2mrr09nq_1_1
    -rw-r----- 1 oracle oinstall 112111616 Jan 26 13:19 database_27rr09lt_1_1
    -rw-r----- 1 oracle oinstall 112711168 Jan 26 13:19 database_28rr09lt_1_1
    -rw-r----- 1 oracle oinstall  58626048 Jan 26 13:19 database_29rr09lt_1_1
    -rw-r----- 1 oracle oinstall   3691520 Jan 26 13:18 database_2arr09lt_1_1
    -rw-r----- 1 oracle oinstall 215056384 Jan 26 13:19 database_2brr09md_1_1
    -rw-r----- 1 oracle oinstall 132710400 Jan 26 13:19 database_2crr09md_1_1
    -rw-r----- 1 oracle oinstall 112173056 Jan 26 13:19 database_2drr09md_1_1
    -rw-r----- 1 oracle oinstall  56778752 Jan 26 13:19 database_2err09md_1_1
    -rw-r----- 1 oracle oinstall 110149632 Jan 26 13:19 database_2frr09ms_1_1
    -rw-r----- 1 oracle oinstall   1507328 Jan 26 13:19 database_2grr09nb_1_1
    -rw-r----- 1 oracle oinstall  54157312 Jan 26 13:19 database_2hrr09nd_1_1
    -rw-r----- 1 oracle oinstall   7716864 Jan 26 13:19 database_2irr09nd_1_1
    -rw-r----- 1 oracle oinstall   1327104 Jan 26 13:19 database_2jrr09ng_1_1
    -rw-r----- 1 oracle oinstall   1073152 Jan 26 13:19 database_2krr09nh_1_1
    -rw-r----- 1 oracle oinstall      7680 Jan 26 13:19 database_2lrr09np_1_1
    -rw-r----- 1 oracle oinstall    114688 Jan 26 13:19 spfile_2nrr09ns_1_1

    But to be really 100% sure that I can restore the backup from the tag, I do a restore preview. The restore preview has existed for quite a while now, but for whatever reason it is not so widely used. I find it quite useful.

    RMAN> restore database preview from tag 'DBI_BACKUP';
    
    Starting restore at 26-JAN-2017 13:22:49
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: SID=16 device type=DISK
    
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of restore command at 01/26/2017 13:22:49
    RMAN-06026: some targets not found - aborting restore
    RMAN-06023: no backup or copy of datafile 13 found to restore
    RMAN-06023: no backup or copy of datafile 12 found to restore
    RMAN-06023: no backup or copy of datafile 11 found to restore
    RMAN-06023: no backup or copy of datafile 10 found to restore
    RMAN-06023: no backup or copy of datafile 9 found to restore
    RMAN-06023: no backup or copy of datafile 8 found to restore
    RMAN-06023: no backup or copy of datafile 7 found to restore
    RMAN-06023: no backup or copy of datafile 6 found to restore
    RMAN-06023: no backup or copy of datafile 5 found to restore
    RMAN-06023: no backup or copy of datafile 4 found to restore
    RMAN-06023: no backup or copy of datafile 3 found to restore
    RMAN-06023: no backup or copy of datafile 1 found to restore
    
    RMAN>

    Oh no … that doesn’t look good. RMAN complains that no backup or copy exists for any of the datafiles. What is going on here? Is my backup useless? Yes and no. If I rely only on the tag, then yes. However, the RMAN backup has been created successfully, but with two different tags. For the datafiles it used tag=TAG20170124T131908, and for the archive logs, the controlfile and the spfile it used tag=DBI_BACKUP.

    So what is wrong here? The tag was simply specified in the wrong place. If you put the tag after ‘archivelog’, then only the archive logs get that tag.

    BACKUP INCREMENTAL LEVEL 0 FORCE AS COMPRESSED BACKUPSET DATABASE plus archivelog tag 'DBI_BACKUP';

    If you want to have the datafiles and the archive logs tagged correctly, you have to put it after LEVEL 0, in my case. That’s usually enough.

    BACKUP INCREMENTAL LEVEL 0 tag 'DBI_BACKUP' FORCE AS COMPRESSED BACKUPSET DATABASE plus archivelog;

    Or, if you want to be doubly sure and you are sort of paranoid, you can specify it twice: once after LEVEL 0 and once after archivelog.

    BACKUP INCREMENTAL LEVEL 0 tag 'DBI_BACKUP' FORCE AS COMPRESSED BACKUPSET DATABASE plus archivelog tag 'DBI_BACKUP';

    OK, so let’s try it again from scratch. But this time I put the tag after LEVEL 0.

    RMAN> list backup summary;
    
    specification does not match any backup in the repository
    
    RMAN> shutdown immediate
    
    database closed
    database dismounted
    Oracle instance shut down
    
    RMAN> startup mount
    
    connected to target database (not started)
    Oracle instance started
    database mounted
    
    Total System Global Area    1795162112 bytes
    
    Fixed Size                     8793832 bytes
    Variable Size                553648408 bytes
    Database Buffers            1224736768 bytes
    Redo Buffers                   7983104 bytes
    
    
    RMAN> run
        {
             allocate channel c1 device type disk format '/u99/backup/CDB/database_%U';
         allocate channel c2 device type disk format '/u99/backup/CDB/database_%U';
             allocate channel c3 device type disk format '/u99/backup/CDB/database_%U';
         allocate channel c4 device type disk format '/u99/backup/CDB/database_%U';
         BACKUP INCREMENTAL LEVEL 0 tag 'DBI_BACKUP' FORCE AS COMPRESSED BACKUPSET DATABASE plus archivelog;
             backup current controlfile tag 'DBI_BACKUP' format '/u99/backup/CDB/control_%U';
             backup spfile tag 'DBI_BACKUP' format '/u99/backup/CDB/spfile_%U';
             release channel c1;
             release channel c2;
             release channel c3;
             release channel c4;
        }2> 3> 4> 5> 6> 7> 8> 9> 10> 11> 12> 13> 14>
    
    allocated channel: c1
    channel c1: SID=237 device type=DISK
    
    allocated channel: c2
    channel c2: SID=20 device type=DISK
    
    allocated channel: c3
    channel c3: SID=254 device type=DISK
    
    allocated channel: c4
    channel c4: SID=22 device type=DISK
    
    
    Starting backup at 26-JAN-2017 13:43:45
    channel c1: starting compressed archived log backup set
    channel c1: specifying archived log(s) in backup set
    input archived log thread=1 sequence=4 RECID=3 STAMP=934074668
    input archived log thread=1 sequence=5 RECID=4 STAMP=934154679
    channel c1: starting piece 1 at 26-JAN-2017 13:43:46
    channel c2: starting compressed archived log backup set
    channel c2: specifying archived log(s) in backup set
    input archived log thread=1 sequence=2 RECID=1 STAMP=934038010
    input archived log thread=1 sequence=3 RECID=2 STAMP=934066843
    channel c2: starting piece 1 at 26-JAN-2017 13:43:46
    channel c3: starting compressed archived log backup set
    channel c3: specifying archived log(s) in backup set
    input archived log thread=1 sequence=6 RECID=5 STAMP=934203623
    input archived log thread=1 sequence=7 RECID=6 STAMP=934275778
    input archived log thread=1 sequence=8 RECID=7 STAMP=934284094
    input archived log thread=1 sequence=9 RECID=8 STAMP=934284153
    channel c3: starting piece 1 at 26-JAN-2017 13:43:46
    channel c4: starting compressed archived log backup set
    channel c4: specifying archived log(s) in backup set
    input archived log thread=1 sequence=10 RECID=9 STAMP=934284199
    input archived log thread=1 sequence=11 RECID=10 STAMP=934291133
    input archived log thread=1 sequence=12 RECID=11 STAMP=934291192
    input archived log thread=1 sequence=13 RECID=12 STAMP=934291966
    channel c4: starting piece 1 at 26-JAN-2017 13:43:46
    channel c4: finished piece 1 at 26-JAN-2017 13:43:47
    piece handle=/u99/backup/CDB/database_3frr0b4i_1_1 tag=DBI_BACKUP comment=NONE
    channel c4: backup set complete, elapsed time: 00:00:01
    channel c4: starting compressed archived log backup set
    channel c4: specifying archived log(s) in backup set
    input archived log thread=1 sequence=14 RECID=13 STAMP=934292026
    input archived log thread=1 sequence=15 RECID=14 STAMP=934292464
    channel c4: starting piece 1 at 26-JAN-2017 13:43:47
    channel c4: finished piece 1 at 26-JAN-2017 13:43:48
    piece handle=/u99/backup/CDB/database_3grr0b4j_1_1 tag=DBI_BACKUP comment=NONE
    channel c4: backup set complete, elapsed time: 00:00:01
    channel c1: finished piece 1 at 26-JAN-2017 13:44:02
    piece handle=/u99/backup/CDB/database_3crr0b4i_1_1 tag=DBI_BACKUP comment=NONE
    channel c1: backup set complete, elapsed time: 00:00:16
    channel c2: finished piece 1 at 26-JAN-2017 13:44:02
    piece handle=/u99/backup/CDB/database_3drr0b4i_1_1 tag=DBI_BACKUP comment=NONE
    channel c2: backup set complete, elapsed time: 00:00:16
    channel c3: finished piece 1 at 26-JAN-2017 13:44:02
    piece handle=/u99/backup/CDB/database_3err0b4i_1_1 tag=DBI_BACKUP comment=NONE
    channel c3: backup set complete, elapsed time: 00:00:16
    Finished backup at 26-JAN-2017 13:44:02
    
    Starting backup at 26-JAN-2017 13:44:02
    channel c1: starting compressed incremental level 0 datafile backup set
    channel c1: specifying datafile(s) in backup set
    input datafile file number=00001 name=/u02/oradata/CDB/datafile/o1_mf_system_d81c2wsf_.dbf
    channel c1: starting piece 1 at 26-JAN-2017 13:44:02
    channel c2: starting compressed incremental level 0 datafile backup set
    channel c2: specifying datafile(s) in backup set
    input datafile file number=00003 name=/u02/oradata/CDB/datafile/o1_mf_sysaux_d81c49wd_.dbf
    channel c2: starting piece 1 at 26-JAN-2017 13:44:02
    channel c3: starting compressed incremental level 0 datafile backup set
    channel c3: specifying datafile(s) in backup set
    input datafile file number=00010 name=/u02/oradata/CDB/46727C2ED8612B70E053CB38A8C078C9/datafile/o1_mf_sysaux_d81cgjc2_.dbf
    channel c3: starting piece 1 at 26-JAN-2017 13:44:02
    channel c4: starting compressed incremental level 0 datafile backup set
    channel c4: specifying datafile(s) in backup set
    input datafile file number=00009 name=/u02/oradata/CDB/46727C2ED8612B70E053CB38A8C078C9/datafile/o1_mf_system_d81cgjbv_.dbf
    input datafile file number=00011 name=/u02/oradata/CDB/46727C2ED8612B70E053CB38A8C078C9/datafile/o1_mf_undotbs1_d81cgjc2_.dbf
    channel c4: starting piece 1 at 26-JAN-2017 13:44:03
    channel c4: finished piece 1 at 26-JAN-2017 13:44:18
    piece handle=/u99/backup/CDB/database_3krr0b52_1_1 tag=DBI_BACKUP comment=NONE
    channel c4: backup set complete, elapsed time: 00:00:15
    channel c4: starting compressed incremental level 0 datafile backup set
    channel c4: specifying datafile(s) in backup set
    input datafile file number=00006 name=/u02/oradata/CDB/datafile/o1_mf_sysaux_d81c6fqn_.dbf
    channel c4: starting piece 1 at 26-JAN-2017 13:44:18
    channel c3: finished piece 1 at 26-JAN-2017 13:44:33
    piece handle=/u99/backup/CDB/database_3jrr0b52_1_1 tag=DBI_BACKUP comment=NONE
    channel c3: backup set complete, elapsed time: 00:00:31
    channel c3: starting compressed incremental level 0 datafile backup set
    channel c3: specifying datafile(s) in backup set
    input datafile file number=00013 name=/u02/oradata/CDB/46727C2ED8612B70E053CB38A8C078C9/datafile/o1_mf_rman_d8ccofgs_.dbf
    input datafile file number=00012 name=/u02/oradata/CDB/46727C2ED8612B70E053CB38A8C078C9/datafile/o1_mf_users_d81cgq9f_.dbf
    channel c3: starting piece 1 at 26-JAN-2017 13:44:33
    channel c3: finished piece 1 at 26-JAN-2017 13:44:34
    piece handle=/u99/backup/CDB/database_3mrr0b61_1_1 tag=DBI_BACKUP comment=NONE
    channel c3: backup set complete, elapsed time: 00:00:01
    channel c3: starting compressed incremental level 0 datafile backup set
    channel c3: specifying datafile(s) in backup set
    input datafile file number=00005 name=/u02/oradata/CDB/datafile/o1_mf_system_d81c6fqo_.dbf
    channel c3: starting piece 1 at 26-JAN-2017 13:44:35
    channel c2: finished piece 1 at 26-JAN-2017 13:44:38
    piece handle=/u99/backup/CDB/database_3irr0b52_1_1 tag=DBI_BACKUP comment=NONE
    channel c2: backup set complete, elapsed time: 00:00:36
    channel c2: starting compressed incremental level 0 datafile backup set
    channel c2: specifying datafile(s) in backup set
    input datafile file number=00008 name=/u02/oradata/CDB/datafile/o1_mf_undotbs1_d81c6fqp_.dbf
    channel c2: starting piece 1 at 26-JAN-2017 13:44:38
    channel c2: finished piece 1 at 26-JAN-2017 13:44:41
    piece handle=/u99/backup/CDB/database_3orr0b66_1_1 tag=DBI_BACKUP comment=NONE
    channel c2: backup set complete, elapsed time: 00:00:03
    channel c2: starting compressed incremental level 0 datafile backup set
    channel c2: specifying datafile(s) in backup set
    input datafile file number=00004 name=/u02/oradata/CDB/datafile/o1_mf_undotbs1_d81c530h_.dbf
    channel c2: starting piece 1 at 26-JAN-2017 13:44:41
    channel c2: finished piece 1 at 26-JAN-2017 13:44:42
    piece handle=/u99/backup/CDB/database_3prr0b69_1_1 tag=DBI_BACKUP comment=NONE
    channel c2: backup set complete, elapsed time: 00:00:01
    channel c2: starting compressed incremental level 0 datafile backup set
    channel c2: specifying datafile(s) in backup set
    input datafile file number=00007 name=/u02/oradata/CDB/datafile/o1_mf_users_d81c542r_.dbf
    channel c2: starting piece 1 at 26-JAN-2017 13:44:43
    channel c1: finished piece 1 at 26-JAN-2017 13:44:44
    piece handle=/u99/backup/CDB/database_3hrr0b52_1_1 tag=DBI_BACKUP comment=NONE
    channel c1: backup set complete, elapsed time: 00:00:42
    channel c2: finished piece 1 at 26-JAN-2017 13:44:44
    piece handle=/u99/backup/CDB/database_3qrr0b6b_1_1 tag=DBI_BACKUP comment=NONE
    channel c2: backup set complete, elapsed time: 00:00:01
    channel c3: finished piece 1 at 26-JAN-2017 13:44:46
    piece handle=/u99/backup/CDB/database_3nrr0b62_1_1 tag=DBI_BACKUP comment=NONE
    channel c3: backup set complete, elapsed time: 00:00:11
    channel c4: finished piece 1 at 26-JAN-2017 13:44:46
    piece handle=/u99/backup/CDB/database_3lrr0b5i_1_1 tag=DBI_BACKUP comment=NONE
    channel c4: backup set complete, elapsed time: 00:00:28
    Finished backup at 26-JAN-2017 13:44:46
    
    Starting backup at 26-JAN-2017 13:44:46
    specification does not match any archived log in the repository
    backup cancelled because there are no files to backup
    Finished backup at 26-JAN-2017 13:44:46
    
    Starting backup at 26-JAN-2017 13:44:46
    channel c1: starting full datafile backup set
    channel c1: specifying datafile(s) in backup set
    including current control file in backup set
    channel c1: starting piece 1 at 26-JAN-2017 13:44:47
    channel c1: finished piece 1 at 26-JAN-2017 13:44:48
    piece handle=/u99/backup/CDB/control_3rrr0b6e_1_1 tag=DBI_BACKUP comment=NONE
    channel c1: backup set complete, elapsed time: 00:00:01
    Finished backup at 26-JAN-2017 13:44:48
    
    Starting backup at 26-JAN-2017 13:44:48
    channel c1: starting full datafile backup set
    channel c1: specifying datafile(s) in backup set
    including current SPFILE in backup set
    channel c1: starting piece 1 at 26-JAN-2017 13:44:48
    channel c1: finished piece 1 at 26-JAN-2017 13:44:49
    piece handle=/u99/backup/CDB/spfile_3srr0b6g_1_1 tag=DBI_BACKUP comment=NONE
    channel c1: backup set complete, elapsed time: 00:00:01
    Finished backup at 26-JAN-2017 13:44:49
    
    Starting Control File and SPFILE Autobackup at 26-JAN-2017 13:44:49
    piece handle=/u03/fast_recovery_area/CDB/autobackup/2017_01_26/o1_mf_s_934292553_d8mvt1l0_.bkp comment=NONE
    Finished Control File and SPFILE Autobackup at 26-JAN-2017 13:44:50
    
    released channel: c1
    
    released channel: c2
    
    released channel: c3
    
    released channel: c4
    
    RMAN>

    As you can see in the log, all backup pieces have been created with tag=DBI_BACKUP. But let’s double-check it.

    RMAN> list backup summary tag 'DBI_BACKUP';
    
    List of Backups
    ===============
    Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
    ------- -- -- - ----------- -------------------- ------- ------- ---------- ---
    104     B  A  A DISK        26-JAN-2017 13:43:46 1       1       YES        DBI_BACKUP
    105     B  A  A DISK        26-JAN-2017 13:43:47 1       1       YES        DBI_BACKUP
    106     B  A  A DISK        26-JAN-2017 13:43:54 1       1       YES        DBI_BACKUP
    107     B  A  A DISK        26-JAN-2017 13:43:59 1       1       YES        DBI_BACKUP
    108     B  A  A DISK        26-JAN-2017 13:43:59 1       1       YES        DBI_BACKUP
    109     B  0  A DISK        26-JAN-2017 13:44:14 1       1       YES        DBI_BACKUP
    110     B  0  A DISK        26-JAN-2017 13:44:30 1       1       YES        DBI_BACKUP
    111     B  0  A DISK        26-JAN-2017 13:44:34 1       1       YES        DBI_BACKUP
    112     B  0  A DISK        26-JAN-2017 13:44:36 1       1       YES        DBI_BACKUP
    113     B  0  A DISK        26-JAN-2017 13:44:39 1       1       YES        DBI_BACKUP
    114     B  0  A DISK        26-JAN-2017 13:44:41 1       1       YES        DBI_BACKUP
    115     B  0  A DISK        26-JAN-2017 13:44:43 1       1       YES        DBI_BACKUP
    116     B  0  A DISK        26-JAN-2017 13:44:43 1       1       YES        DBI_BACKUP
    117     B  0  A DISK        26-JAN-2017 13:44:44 1       1       YES        DBI_BACKUP
    118     B  0  A DISK        26-JAN-2017 13:44:44 1       1       YES        DBI_BACKUP
    119     B  F  A DISK        26-JAN-2017 13:44:47 1       1       NO         DBI_BACKUP
    120     B  F  A DISK        26-JAN-2017 13:44:48 1       1       NO         DBI_BACKUP
    
    RMAN> restore database preview summary from tag 'DBI_BACKUP';
    
    Starting restore at 26-JAN-2017 13:45:26
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: SID=237 device type=DISK
    
    List of Backups
    ===============
    Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
    ------- -- -- - ----------- -------------------- ------- ------- ---------- ---
    116     B  0  A DISK        26-JAN-2017 13:44:43 1       1       YES        DBI_BACKUP
    112     B  0  A DISK        26-JAN-2017 13:44:36 1       1       YES        DBI_BACKUP
    114     B  0  A DISK        26-JAN-2017 13:44:41 1       1       YES        DBI_BACKUP
    117     B  0  A DISK        26-JAN-2017 13:44:44 1       1       YES        DBI_BACKUP
    118     B  0  A DISK        26-JAN-2017 13:44:44 1       1       YES        DBI_BACKUP
    115     B  0  A DISK        26-JAN-2017 13:44:43 1       1       YES        DBI_BACKUP
    113     B  0  A DISK        26-JAN-2017 13:44:39 1       1       YES        DBI_BACKUP
    109     B  0  A DISK        26-JAN-2017 13:44:14 1       1       YES        DBI_BACKUP
    110     B  0  A DISK        26-JAN-2017 13:44:30 1       1       YES        DBI_BACKUP
    111     B  0  A DISK        26-JAN-2017 13:44:34 1       1       YES        DBI_BACKUP
    using channel ORA_DISK_1
    
    archived logs generated after SCN 1904449 not found in repository
    recovery will be done up to SCN 1904449
    Media recovery start SCN is 1904449
    Recovery must be done beyond SCN 1904725 to clear datafile fuzziness
    Finished restore at 26-JAN-2017 13:45:26
    
    RMAN>

    OK, very good. That looks promising now. :-) Let’s do the application changes now …

    RMAN> alter database open;
    
    Statement processed
    
    -- Do some application changes ...
    
    SQL> create table x ...
    SQL> create table y ...
    SQL> create table z ...
    
    

    And the final test is, of course, to do the real restore/recovery to the point where the cold backup was taken.

    RMAN> shutdown abort
    
    Oracle instance shut down
    
    RMAN> startup nomount
    
    connected to target database (not started)
    Oracle instance started
    
    Total System Global Area    1795162112 bytes
    
    Fixed Size                     8793832 bytes
    Variable Size                553648408 bytes
    Database Buffers            1224736768 bytes
    Redo Buffers                   7983104 bytes
    
    RMAN> restore controlfile from '/u99/backup/CDB/control_3rrr0b6e_1_1';
    
    Starting restore at 26-JAN-2017 13:48:50
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: SID=237 device type=DISK
    
    channel ORA_DISK_1: restoring control file
    channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
    output file name=/u02/oradata/CDB/controlfile/o1_mf_d81c6189_.ctl
    output file name=/u03/fast_recovery_area/CDB/controlfile/o1_mf_d81c61b4_.ctl
    Finished restore at 26-JAN-2017 13:48:51
    
    RMAN> alter database mount;
    
    Statement processed
    released channel: ORA_DISK_1
    
    	
    RMAN> run
        {
             allocate channel c1 device type disk;
         allocate channel c2 device type disk;
             allocate channel c3 device type disk;
         allocate channel c4 device type disk;
         restore database from tag 'DBI_BACKUP';
             release channel c1;
             release channel c2;
             release channel c3;
             release channel c4;
        }2> 3> 4> 5> 6> 7> 8> 9> 10> 11> 12>
    
    allocated channel: c1
    channel c1: SID=256 device type=DISK
    
    allocated channel: c2
    channel c2: SID=24 device type=DISK
    
    allocated channel: c3
    channel c3: SID=257 device type=DISK
    
    allocated channel: c4
    channel c4: SID=25 device type=DISK
    
    Starting restore at 26-JAN-2017 13:49:39
    Starting implicit crosscheck backup at 26-JAN-2017 13:49:39
    Crosschecked 15 objects
    Finished implicit crosscheck backup at 26-JAN-2017 13:49:40
    
    Starting implicit crosscheck copy at 26-JAN-2017 13:49:40
    Finished implicit crosscheck copy at 26-JAN-2017 13:49:40
    
    searching for all files in the recovery area
    cataloging files...
    cataloging done
    
    List of Cataloged Files
    =======================
    File Name: /u03/fast_recovery_area/CDB/autobackup/2017_01_26/o1_mf_s_934292553_d8mvt1l0_.bkp
    
    
    skipping datafile 5; already restored to file /u02/oradata/CDB/datafile/o1_mf_system_d81c6fqo_.dbf
    skipping datafile 6; already restored to file /u02/oradata/CDB/datafile/o1_mf_sysaux_d81c6fqn_.dbf
    skipping datafile 8; already restored to file /u02/oradata/CDB/datafile/o1_mf_undotbs1_d81c6fqp_.dbf
    channel c1: starting datafile backup set restore
    channel c1: specifying datafile(s) to restore from backup set
    channel c1: restoring datafile 00009 to /u02/oradata/CDB/46727C2ED8612B70E053CB38A8C078C9/datafile/o1_mf_system_d81cgjbv_.dbf
    channel c1: restoring datafile 00011 to /u02/oradata/CDB/46727C2ED8612B70E053CB38A8C078C9/datafile/o1_mf_undotbs1_d81cgjc2_.dbf
    channel c1: reading from backup piece /u99/backup/CDB/database_3krr0b52_1_1
    channel c2: starting datafile backup set restore
    channel c2: specifying datafile(s) to restore from backup set
    channel c2: restoring datafile 00010 to /u02/oradata/CDB/46727C2ED8612B70E053CB38A8C078C9/datafile/o1_mf_sysaux_d81cgjc2_.dbf
    channel c2: reading from backup piece /u99/backup/CDB/database_3jrr0b52_1_1
    channel c3: starting datafile backup set restore
    channel c3: specifying datafile(s) to restore from backup set
    channel c3: restoring datafile 00012 to /u02/oradata/CDB/46727C2ED8612B70E053CB38A8C078C9/datafile/o1_mf_users_d81cgq9f_.dbf
    channel c3: restoring datafile 00013 to /u02/oradata/CDB/46727C2ED8612B70E053CB38A8C078C9/datafile/o1_mf_rman_d8ccofgs_.dbf
    channel c3: reading from backup piece /u99/backup/CDB/database_3mrr0b61_1_1
    channel c4: starting datafile backup set restore
    channel c4: specifying datafile(s) to restore from backup set
    channel c4: restoring datafile 00003 to /u02/oradata/CDB/datafile/o1_mf_sysaux_d81c49wd_.dbf
    channel c4: reading from backup piece /u99/backup/CDB/database_3irr0b52_1_1
    channel c3: piece handle=/u99/backup/CDB/database_3mrr0b61_1_1 tag=DBI_BACKUP
    channel c3: restored backup piece 1
    channel c3: restore complete, elapsed time: 00:00:03
    channel c3: starting datafile backup set restore
    channel c3: specifying datafile(s) to restore from backup set
    channel c3: restoring datafile 00004 to /u02/oradata/CDB/datafile/o1_mf_undotbs1_d81c530h_.dbf
    channel c3: reading from backup piece /u99/backup/CDB/database_3prr0b69_1_1
    channel c3: piece handle=/u99/backup/CDB/database_3prr0b69_1_1 tag=DBI_BACKUP
    channel c3: restored backup piece 1
    channel c3: restore complete, elapsed time: 00:00:01
    channel c3: starting datafile backup set restore
    channel c3: specifying datafile(s) to restore from backup set
    channel c3: restoring datafile 00001 to /u02/oradata/CDB/datafile/o1_mf_system_d81c2wsf_.dbf
    channel c3: reading from backup piece /u99/backup/CDB/database_3hrr0b52_1_1
    channel c1: piece handle=/u99/backup/CDB/database_3krr0b52_1_1 tag=DBI_BACKUP
    channel c1: restored backup piece 1
    channel c1: restore complete, elapsed time: 00:00:20
    channel c1: starting datafile backup set restore
    channel c1: specifying datafile(s) to restore from backup set
    channel c1: restoring datafile 00007 to /u02/oradata/CDB/datafile/o1_mf_users_d81c542r_.dbf
    channel c1: reading from backup piece /u99/backup/CDB/database_3qrr0b6b_1_1
    channel c1: piece handle=/u99/backup/CDB/database_3qrr0b6b_1_1 tag=DBI_BACKUP
    channel c1: restored backup piece 1
    channel c1: restore complete, elapsed time: 00:00:01
    channel c2: piece handle=/u99/backup/CDB/database_3jrr0b52_1_1 tag=DBI_BACKUP
    channel c2: restored backup piece 1
    channel c2: restore complete, elapsed time: 00:00:27
    channel c4: piece handle=/u99/backup/CDB/database_3irr0b52_1_1 tag=DBI_BACKUP
    channel c4: restored backup piece 1
    channel c4: restore complete, elapsed time: 00:00:35
    channel c3: piece handle=/u99/backup/CDB/database_3hrr0b52_1_1 tag=DBI_BACKUP
    channel c3: restored backup piece 1
    channel c3: restore complete, elapsed time: 00:00:40
    Finished restore at 26-JAN-2017 13:50:25
    
    released channel: c1
    
    released channel: c2
    
    released channel: c3
    
    released channel: c4
    
    RMAN>

No recovery is needed here, because it was a cold RMAN backup. You can just open the database with open resetlogs.

    RMAN> alter database open RESETLOGS;
    
    Statement processed

    Conclusion

Take care that you put your RMAN tags in the correct place.
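To verify that the tags ended up where you expect them, you can list the backups by tag before relying on them for a restore; both commands below are standard RMAN syntax:

RMAN> LIST BACKUP SUMMARY;
RMAN> LIST BACKUP TAG 'DBI_BACKUP';

The TAG column of the summary output shows at a glance which backup sets carry which tag.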

     

The article Oracle 12cR2 – RMAN cold backup with TAG’s appeared first on Blog dbi services.

Part 2 – vagrant up – get your Oracle infrastructure up and running

    $
    0
    0

Last week, in the first part of this blog, we saw a short introduction on how to set up an Oracle infrastructure with Vagrant and Ansible. Remember, all the files for this example are available here: https://github.com/nkadbi/oracle-db-12c-vagrant-ansible
    Get the example code:

    git clone https://github.com/nkadbi/oracle-db-12c-vagrant-ansible

If you have prepared your environment (Ansible, Vagrant and Oracle VirtualBox installed, and the Oracle software zip files provided), then you can build your test infrastructure with the simple call:
vagrant up
Cleanup is also easy: stopping the vagrant machines and deleting all traces is done with:
vagrant destroy
How does this work?
vagrant up starts Vagrant, which sets up two virtual servers using a sample box with CentOS 7.2.
When this has finished, Vagrant calls Ansible for provisioning, which configures the Linux servers, installs the Oracle software and creates your databases on the target servers in parallel.
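An end-to-end session could look like the following sketch (the exact zip file names and staging location are assumptions; check the repository README):

git clone https://github.com/nkadbi/oracle-db-12c-vagrant-ansible
cd oracle-db-12c-vagrant-ansible
# stage the Oracle 12c software zips where the playbook expects them
cp /tmp/linuxamd64_12102_database_*.zip .
vagrant up           # build both servers and provision them with Ansible
vagrant destroy -f   # tear everything down again without a confirmation prompt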

    Vagrant configuration
All the configuration for Vagrant is in one file called Vagrantfile.
I used a box with CentOS 7.2, which you can find among other vagrant boxes here: https://atlas.hashicorp.com/search
config.vm.box = "boxcutter/centos72"
If you run vagrant up for the first time, it will download the vagrant box:
    $ vagrant up

    Bringing machine 'dbserver1' up with 'virtualbox' provider...
    Bringing machine 'dbserver2' up with 'virtualbox' provider...
    ==> dbserver1: Box 'boxcutter/centos72' could not be found. Attempting to find and install...
    dbserver1: Box Provider: virtualbox
    dbserver1: Box Version: >= 0
    ==> dbserver1: Loading metadata for box 'boxcutter/centos72'
    dbserver1: URL: https://atlas.hashicorp.com/boxcutter/centos72
    ==> dbserver1: Adding box 'boxcutter/centos72' (v2.0.21) for provider: virtualbox
    dbserver1: Downloading: https://atlas.hashicorp.com/boxcutter/boxes/centos72/versions/2.0.21/providers/virtualbox.box
    ==> dbserver1: Successfully added box 'boxcutter/centos72' (v2.0.21) for 'virtualbox'!
    ==> dbserver1: Importing base box 'boxcutter/centos72'...

I have chosen a private network for the virtual servers and use the vagrant-hostmanager plugin to take care of the /etc/hosts files on all guest machines (and optionally your localhost).
You can add this plugin to vagrant with:
vagrant plugin install vagrant-hostmanager
    The corresponding part in the Vagrantfile will look like this:
    config.hostmanager.enabled = true
    config.hostmanager.ignore_private_ip = false # include private IPs of your VM's
config.vm.hostname = "dbserver1"
    config.vm.network "private_network", ip: "192.168.56.31"

    ssh Configuration
The Vagrant box already comes with an ssh key configuration, and, if security does not matter in your demo environment, the easiest way to configure the ssh connection to your guest nodes is to use the same ssh key for all created virtual hosts.
config.ssh.insert_key = false # Use the same insecure key provided by the box for each machine
After bringing up the virtual servers you can display the ssh settings:
vagrant ssh-config
The important lines from the output are:
    Host dbserver1
    HostName 127.0.0.1
    User vagrant
    Port 2222
    IdentityFile /home/user/.vagrant.d/insecure_private_key
You should be able to reach your guest server without a password as user vagrant:
vagrant ssh dbserver1
Then you can switch to user oracle (password: welcome1) or root (default password for vagrant boxes: vagrant):
su - oracle
Or connect directly with ssh:
ssh vagrant@127.0.0.1 -p 2222 -i /home/user/.vagrant.d/insecure_private_key
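If you prefer plain ssh and scp over the vagrant wrappers, you can dump the generated configuration into a file and reuse it; a small sketch:

vagrant ssh-config > vagrant-ssh.cfg
ssh -F vagrant-ssh.cfg dbserver1
scp -F vagrant-ssh.cfg dbserver1:/etc/oratab .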
    Virtual Disks
I added additional virtual disks because I wanted to separate the data file destination from the fast recovery area destination.
# attach disks only locally
if ! File.exist?("dbserver#{i}_disk_a.vdi") # create disks only once
  v.customize ['createhd', '--filename', "dbserver#{i}_disk_a.vdi", '--size', 8192 ]
  v.customize ['createhd', '--filename', "dbserver#{i}_disk_b.vdi", '--size', 8192 ]
  v.customize ['storageattach', :id, '--storagectl', 'SATA Controller', '--port', 1, '--device', 0, '--type', 'hdd', '--medium', "dbserver#{i}_disk_a.vdi"]
  v.customize ['storageattach', :id, '--storagectl', 'SATA Controller', '--port', 2, '--device', 0, '--type', 'hdd', '--medium', "dbserver#{i}_disk_b.vdi"]
end # create disks only once

    Provisioning with Ansible
    At the end of the Vagrantfile provisioning with Ansible is called.
N = 2
(1..N).each do |i| # do for each server i
  ...
  if i == N
    config.vm.provision "ansible" do |ansible| # vm.provisioning
      #ansible.verbose = "v"
      ansible.playbook = "oracle-db.yml"
      ansible.groups = { "dbserver" => ["dbserver1","dbserver2"] }
      ansible.limit = 'all'
    end # end vm.provisioning
  end
end
To prevent the Ansible provisioning from starting before all servers have been set up by Vagrant, I included the condition if i == N, where N is the number of desired servers.

    Ansible Inventory
    The Ansible Inventory is a collection of guest hosts against which Ansible will work.
You can either put the information in an inventory file or let Vagrant create an inventory file for you. Vagrant does this if you do not specify any inventory file.
To enable Ansible to connect to the target hosts without a password, Ansible has to know the ssh key provided by the vagrant box.
    Example Ansible Inventory:
    # Generated by Vagrant
    dbserver2 ansible_ssh_host=127.0.0.1 ansible_ssh_port=2200 ansible_ssh_user='vagrant' ansible_ssh_private_key_file='/home/user/.vagrant.d/insecure_private_key'
    dbserver1 ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222 ansible_ssh_user='vagrant' ansible_ssh_private_key_file='/home/user/.vagrant.d/insecure_private_key'
[dbserver]
dbserver1
dbserver2
    You can see that the inventory created by Vagrant presents the necessary information to Ansible to connect to the targets and has also defined the group dbserver which includes the server dbserver1 and dbserver2.
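You can point an ad-hoc command directly at the generated inventory to confirm that the connection settings work, for example:

ansible -i .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory dbserver -m ping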

    Ansible configuration
Tell Ansible where to find the inventory in ansible.cfg:
    nocows=1
    hostfile = .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory
    host_key_checking = False

    Ansible Variables
    In this example I have put the general variables for all servers containing an Oracle Database into this file:
    group_vars/dbserver
    The more specific variables including variables used to create the database like the database name, character set
    can be adapted individual for each server:
    host_vars/dbserver1,host_vars/dbserver2

    Ansible Playbook
The Ansible playbook is a simple text file written in YAML syntax, which is easily readable.
Our playbook oracle-db.yml has only one play, called "Configure Oracle Linux 7 with Oracle Database 12c", which will be applied to all servers belonging to the group dbserver. In my example Vagrant creates the inventory and initiates the play, but you can also start the playbook stand-alone or repeat it whenever you want:
    ansible-playbook oracle-db.yml
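If you first want to see what the play would do, ansible-playbook offers standard flags for that; a dry run is not meaningful for every task, but it is handy for the pure configuration roles:

# print all tasks, grouped by play, without executing anything
ansible-playbook oracle-db.yml --list-tasks
# dry run against a single host
ansible-playbook oracle-db.yml --limit dbserver1 --check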
    This is the whole playbook, to configure the servers and install Oracle Databases:
    $cat oracle-db.yml
    ---
    - name: Configure Oracle Linux 7 with Oracle Database 12c
    hosts: dbserver
    become: True
    vars_files:
    # User Passwords hashed are stored here:
    - secrets.yml
    roles:
    - role: disk_layout
    - role: linux_oracle
    - role: oracle_sw_install
    become_user: '{{ oracle_user }}'
    - role: oracle_db_create
    become_user: '{{ oracle_user }}'

    Ansible roles
To make the playbook oracle-db.yml lean and more flexible, I have split all the tasks into different roles. This makes it easy to reuse parts of the playbook or to skip parts. For example, if you only want to install the Oracle software on the servers but do not want to create databases, you can just delete the role oracle_db_create from the playbook.
You (and Ansible) will find the files containing the tasks of a role in roles/my_role_name/tasks/main.yml.
There can be further directories. The default directory structure looks like below. If you want to create a new role, you can even create the directory structure by using ansible-galaxy. Ansible Galaxy is Ansible’s official community hub for sharing Ansible roles: https://galaxy.ansible.com/intro

    # example to create the directory structure for the role "my_role_name"
    ansible-galaxy init my_role_name


# default Ansible role directory structure
roles/
  my_role_name/
    defaults/
    files/
    handlers/
    meta/
    tasks/
    templates/
    vars/

    Ansible Modules
    Ansible will run the tasks described in the playbook on the target servers by invoking Ansible Modules.
    This Ansible Web Page http://docs.ansible.com/ansible/list_of_all_modules.html shows information about Modules ordered by categories.
    You can also get information about all the Ansible modules from command line:

    # list all modules
    ansible-doc --list
    # example to show documentation about the Ansible module "copy"
    ansible-doc copy

    One Example:
To install the Oracle software with a response file, I use the Ansible module called "template". Ansible uses Jinja2, a templating engine for Python.
This makes it very easy to design reusable templates. For example, Ansible will replace {{ oracle_home }} with the variable which I have defined in group_vars/dbserver, and then copy the response file to the target servers:

    Snipped from the Jinja2 template db_install.rsp.j2

    #-------------------------------------------------------------------------------
    # Specify the complete path of the Oracle Home.
    #-------------------------------------------------------------------------------
    ORACLE_HOME={{ oracle_home }}

    Snipped from roles/oracle_sw_install/tasks/main.yml

- name: Generate the response file for software only installation
  template: src=db_install.rsp.j2 dest={{ installation_folder }}/db_install.rsp
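The variables referenced in the template and the task have to be defined somewhere in the inventory scope. A minimal sketch of what group_vars/dbserver could contain (the values are illustrative assumptions, not copied from the repository):

$ cat group_vars/dbserver
---
oracle_user: oracle
oracle_home: /u01/app/oracle/product/12.1.0.2/dbhome_1
installation_folder: /u01/stage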

    Ansible Adhoc Commands – Some Use Cases
Immediately after installing Ansible you can already use it to gather facts from your localhost, which gives you a lot of information:
ansible localhost -m setup
Use an Ansible ad-hoc command with the module ping to check if you can reach all target servers listed in your inventory file:

    $ ansible all -m ping
    dbserver2 | SUCCESS => {
    "changed": false,
    "ping": "pong"
    }
    dbserver1 | SUCCESS => {
    "changed": false,
    "ping": "pong"
    }

    File transfer – spread a file to all servers in the group dbserver
    ansible dbserver -m copy -b -a "src=/etc/hosts dest=/etc/hosts"
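The same pattern works for arbitrary checks or changes across the whole group; for example (shell and service are standard Ansible modules, the service name is just an illustration):

# check the free space on all database servers, as root
ansible dbserver -m shell -a "df -h /u01" -b
# restart a service everywhere
ansible dbserver -m service -a "name=chronyd state=restarted" -b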

    Conclusion
    With the open source tools Vagrant and Ansible you can easily automate the setup of your infrastructure.
Even if you do not want to automate everything, Ansible can still help you with your daily work whenever you want to check or apply something on several servers.
    Just group your servers in an inventory and run an Ansible Adhoc Command or write a small playbook.

Please keep in mind that this is a simplified example of an automated Oracle database installation.
Do not use this example for production environments.

     

The article Part 2 – vagrant up – get your Oracle infrastructure up and running appeared first on Blog dbi services.

    Temporal tables with PostgreSQL

    $
    0
    0

In this blog we are going to talk about a nice extension in PostgreSQL: temporal_tables. This extension provides support for temporal tables.
What is a temporal table? Just a table that tracks the period of validity of a row.
When implemented, this feature lets you specify that old rows are archived into another table (called the history table). This can be useful for many purposes:
- Auditing
- Comparison
- Checking the table state in the past
First we have to install the temporal_tables extension. We are going to use the pgxn client for this.
Install the yum repository for PostgreSQL:

    [root@pgserver1 ~]# rpm -ivh https://download.postgresql.org/pub/repos/yum/9.6/redhat/rhel-7-x86_64/pgdg-oraclelinux96-9.6-3.noarch.rpm
    Retrieving https://download.postgresql.org/pub/repos/yum/9.6/redhat/rhel-7-x86_64/pgdg-oraclelinux96-9.6-3.noarch.rpm
    warning: /var/tmp/rpm-tmp.3q9X12: Header V4 DSA/SHA1 Signature, key ID 442df0f8: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:pgdg-oraclelinux96-9.6-3         ################################# [100%]
[root@pgserver1 ~]#

And then we install the pgxn client:

[root@pgserver1 ~]# yum search pgxn
    Loaded plugins: langpacks, ulninfo
    pgdg96 | 4.1 kB 00:00:00
    (1/2): pgdg96/7Server/x86_64/group_gz | 249 B 00:00:00
    (2/2): pgdg96/7Server/x86_64/primary_db | 127 kB 00:00:00
    ==================================================== N/S matched: pgxn =====================================================
    pgxnclient.x86_64 : Command line tool designed to interact with the PostgreSQL Extension Network
    Name and summary matches only, use "search all" for everything.


    [root@pgserver1 ~]# yum install pgxnclient.x86_64
    Loaded plugins: langpacks, ulninfo
    Resolving Dependencies
    --> Running transaction check
    ---> Package pgxnclient.x86_64 0:1.2.1-2.rhel7 will be installed
    ....
    ....
    Installed:
    pgxnclient.x86_64 0:1.2.1-2.rhel7
    Complete!
    [root@pgserver1 ~]#

    And finally we can install the extension

    [root@pgserver1 ~]# pgxn install temporal_tables --pg_config=/u01/app/PostgreSQL/9.6/bin/pg_config
    INFO: best version: temporal_tables 1.1.1
    INFO: saving /tmp/tmpJit39m/temporal_tables-1.1.1.zip
    INFO: unpacking: /tmp/tmpJit39m/temporal_tables-1.1.1.zip
    INFO: building extension
    gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -O2 -DMAP_HUGETLB=0x40000 -fpic -I. -I./ -I/u01/app/PostgreSQL/9.6/include/postgresql/server -I/u01/app/PostgreSQL/9.6/include/postgresql/internal -D_GNU_SOURCE -I/opt/local/Current/include/libxml2 -I/opt/local/Current/include -c -o temporal_tables.o temporal_tables.c
    gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -O2 -DMAP_HUGETLB=0x40000 -fpic -I. -I./ -I/u01/app/PostgreSQL/9.6/include/postgresql/server -I/u01/app/PostgreSQL/9.6/include/postgresql/internal -D_GNU_SOURCE -I/opt/local/Current/include/libxml2 -I/opt/local/Current/include -c -o versioning.o versioning.c
    gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -O2 -DMAP_HUGETLB=0x40000 -fpic -shared -o temporal_tables.so temporal_tables.o versioning.o -L/u01/app/PostgreSQL/9.6/lib -L/opt/local/Current/lib -Wl,--as-needed -Wl,-rpath,'/u01/app/PostgreSQL/9.6/lib',--enable-new-dtags
    INFO: installing extension
    /bin/mkdir -p '/u01/app/PostgreSQL/9.6/lib/postgresql'
    /bin/mkdir -p '/u01/app/PostgreSQL/9.6/share/postgresql/extension'
    /bin/mkdir -p '/u01/app/PostgreSQL/9.6/share/postgresql/extension'
    /bin/mkdir -p '/u01/app/PostgreSQL/9.6/doc/postgresql/extension'
    /usr/bin/install -c -m 755 temporal_tables.so '/u01/app/PostgreSQL/9.6/lib/postgresql/temporal_tables.so'
    /usr/bin/install -c -m 644 .//temporal_tables.control '/u01/app/PostgreSQL/9.6/share/postgresql/extension/'
    /usr/bin/install -c -m 644 .//temporal_tables--1.1.1.sql .//temporal_tables--1.0.0--1.0.1.sql .//temporal_tables--1.0.1--1.0.2.sql .//temporal_tables--1.0.2--1.1.0.sql .//temporal_tables--1.1.0--1.1.1.sql '/u01/app/PostgreSQL/9.6/share/postgresql/extension/'
    /usr/bin/install -c -m 644 .//README.md '/u01/app/PostgreSQL/9.6/doc/postgresql/extension/'
    [root@pgserver1 ~]#

Once the installation is done, we can load the extension in our database.

    [postgres@pgserver1 extension]$ psql
    Password:
    psql.bin (9.6.1)
    Type "help" for help.
    postgres=# CREATE EXTENSION temporal_tables;
    CREATE EXTENSION
    postgres=#

We can then verify that the temporal_tables extension is now present in our database.

    postgres=# \dx
    List of installed extensions
    Name | Version | Schema | Description
    -----------------+---------+------------+-----------------------------------------
    adminpack | 1.0 | pg_catalog | administrative functions for PostgreSQL
    plpgsql | 1.0 | pg_catalog | PL/pgSQL procedural language
    temporal_tables | 1.1.1 | public | temporal tables
    (3 rows)

    postgres=# \dx+ temporal_tables
    Objects in extension "temporal_tables"
    Object Description
    ----------------------------------------------------
    function set_system_time(timestamp with time zone)
    function versioning()
    (2 rows)
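The function set_system_time listed above lets you override the timestamp that the versioning trigger records, which is handy for tests. According to the extension's documentation, passing NULL switches back to the real clock:

postgres=# SELECT set_system_time('2017-01-01 00:00:00+01');
postgres=# SELECT set_system_time(NULL);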

    For the demonstration, we create the following table Customers

    CREATE TABLE Customers (
    CustNo SERIAL NOT NULL,
    CustName VARCHAR(30) NOT NULL,
    start_date timestamp NOT NULL DEFAULT now(),
    PRIMARY KEY (CustNo)
    );

In order to make this table a system-period temporal table, we first add a system period column:

    postgres=# ALTER TABLE Customers ADD COLUMN sys_period tstzrange NOT NULL;
    ALTER TABLE

Then we need a history table that contains the archived rows of our table. The easiest way to create it is by using the LIKE clause:

    postgres=# CREATE TABLE Customers_history (LIKE Customers);
    CREATE TABLE

    Finally we create a trigger on our table to link it with the history table

    postgres=# CREATE TRIGGER customers_hist_trigger BEFORE INSERT OR UPDATE OR DELETE ON Customers FOR EACH ROW
    EXECUTE PROCEDURE versioning('sys_period', 'Customers_history', true);
    CREATE TRIGGER
    postgres=#

Now let’s insert data into customers:

    insert into customers (custname,start_date) values ('HP','2013-08-05 00:00:00');
    insert into customers (custname,start_date) values ('IBM','2014-10-10 00:00:00');
    insert into customers (custname,start_date) values ('DBI','2017-01-07 00:00:00');
    insert into customers (custname) values ('DHL');

We can see the rows in customers below.
For example, the row concerning IBM was inserted on 2017-01-26 10:48:49. This information is stored in the sys_period column and represents the start of the row's validity. Note the borders [,): the lower bound is [ and thus inclusive, while the upper bound is ) and thus exclusive.
For IBM, ["2017-01-26 10:48:49.768031+01",) means:
Start of validity: 2017-01-26 10:48:49.
End of validity: infinity (because the upper bound is empty).

    postgres=# table customers;
    custno | custname | start_date | sys_period
    --------+----------+----------------------------+------------------------------------
    1 | IBM | 2014-10-10 00:00:00 | ["2017-01-26 10:48:49.768031+01",)
    2 | DBI | 2017-01-07 00:00:00 | ["2017-01-26 10:48:49.778487+01",)
    3 | DHL | 2017-01-26 10:48:49.841405 | ["2017-01-26 10:48:49.841405+01",)
    4 | HP | 2013-08-05 00:00:00 | ["2017-01-26 10:50:21.275201+01",)
    (4 rows)

The table Customers_history is empty. This is normal because no updates or deletes have happened yet; we have only inserted rows.

    postgres=# table customers_history;
    custno | custname | start_date | sys_period
    --------+----------+------------+------------
    (0 rows)
    postgres=#

Let’s do an update on customers, but first let’s display the current time.
    postgres=# select now();
    now
    -------------------------------
    2017-01-26 11:02:32.381634+01
    (1 row)


    postgres=# update customers set custname='HPSuisse' where custno=4;
    UPDATE 1
    postgres=#

Checking the customers table again, we can see that the validity of the row concerning HPSuisse starts at 2017-01-26 11:02:46.

    postgres=# table customers;
    custno | custname | start_date | sys_period
    --------+----------+----------------------------+------------------------------------
    1 | IBM | 2014-10-10 00:00:00 | ["2017-01-26 10:48:49.768031+01",)
    2 | DBI | 2017-01-07 00:00:00 | ["2017-01-26 10:48:49.778487+01",)
    3 | DHL | 2017-01-26 10:48:49.841405 | ["2017-01-26 10:48:49.841405+01",)
    4 | HPSuisse | 2013-08-05 00:00:00 | ["2017-01-26 11:02:46.347574+01",)
    (4 rows)

If we now query the table customers_history, we can see the old version of the row that was updated in customers, together with its period of validity.

    postgres=# table customers_history;
    custno | custname | start_date | sys_period
    --------+----------+---------------------+-------------------------------------------------------------------
    4 | HP | 2013-08-05 00:00:00 | ["2017-01-26 10:50:21.275201+01","2017-01-26 11:02:46.347574+01")

    Let’s do a delete on the table customers

    postgres=# select now();
    now
    -------------------------------
    2017-01-26 11:32:12.229105+01
    (1 row)


    postgres=# delete from customers where custno=3;
    DELETE 1

Below are the remaining rows in the table customers:

    postgres=# table customers;
    custno | custname | start_date | sys_period
    --------+----------+---------------------+------------------------------------
    1 | IBM | 2014-10-10 00:00:00 | ["2017-01-26 10:48:49.768031+01",)
    2 | DBI | 2017-01-07 00:00:00 | ["2017-01-26 10:48:49.778487+01",)
    4 | HPSuisse | 2013-08-05 00:00:00 | ["2017-01-26 11:02:46.347574+01",)
    (3 rows)

And in the history table, we can see a new row with its validity period.

    postgres=# table customers_history;
    custno | custname | start_date | sys_period
    --------+----------+----------------------------+-------------------------------------------------------------------
    4 | HP | 2013-08-05 00:00:00 | ["2017-01-26 10:50:21.275201+01","2017-01-26 11:02:46.347574+01")
    3 | DHL | 2017-01-26 10:48:49.841405 | ["2017-01-26 10:48:49.841405+01","2017-01-26 11:32:15.370438+01")
    (2 rows)
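With the live table and the history table together, we can now answer the third use case from the beginning: what did the table look like at a given point in time? Current rows live in customers with an open-ended range, archived rows in customers_history with a closed one, so a UNION ALL combined with the range containment operator @> reconstructs the state. A sketch (the timestamp is taken from the example above):

postgres=# SELECT custno, custname, start_date
           FROM customers
           WHERE sys_period @> '2017-01-26 11:00:00+01'::timestamptz
           UNION ALL
           SELECT custno, custname, start_date
           FROM customers_history
           WHERE sys_period @> '2017-01-26 11:00:00+01'::timestamptz;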

    Conclusion
In this blog we saw how temporal tables can be implemented with PostgreSQL using the extension temporal_tables. This feature can help with auditing, archiving, and more.
And the history table can be moved to cheaper storage.

     

The article Temporal tables with PostgreSQL appeared first on Blog dbi services.

    Oracle 12c – How to Prevent License Violation with Active Data Guard

    $
    0
    0

There are some articles floating around about how to prevent a license violation with Active Data Guard. Some of them relate to the underscore parameter "_query_on_physical" and some to a startup trigger. Both have advantages and disadvantages. Regarding "_query_on_physical", for example, I can't find any MOS note about it, and I don't know its side effects.

Oracle makes it hard to disable features that we don't want to be enabled by accident. It gets much better with 12.2, where you can use lockdown profiles. My colleague Franck explained very well at DOAG 2016 how this new feature works.

    http://www.doag.org/formes/pubfiles/8586609/docs/Konferenz/2016/vortraege/Datenbank/2016-DB-Franck_Pachot-Multitenant_New_Security_Features_Clarify_DevOps_DBA_roles-Praesentation.pdf

But for now, I am on 12cR1 and I need a solution for that version. With Active Data Guard it is especially easy to activate the feature by accident: just type "startup" on the standby, and you have it already. Nothing more is needed.

Nevertheless, I have 12cR1 here, and my favorite way to prevent a license violation with Active Data Guard is based on cluster resources, in combination with the Data Guard Broker and an Observer. If all of them are in place and you are on the right patch level, then it works. The patch level is especially important, as we will see later. It is also important that you work only with the Broker commands or with the srvctl utility.

In my case I have a primary single instance called DBIT121_SITE1 and a standby single instance called DBIT121_SITE2. After Data Guard has been set up, it is time to configure the cluster resources.

    In this particular case, the most important parameters when you add the database cluster resources are “role” and “startoption”

    $ srvctl add database -h | egrep '(<role>|<start_options>)' | tail -2
        -role <role>                   Role of the database (PRIMARY, PHYSICAL_STANDBY, LOGICAL_STANDBY, SNAPSHOT_STANDBY, FAR_SYNC)
        -startoption <start_options>   Startup options for the database. Examples of startup options are OPEN, MOUNT, or "READ ONLY".

    With the parameter “role” you specify the role that your database has at the moment (not the future role). The role adjustments are done later by the Broker whenever you do a switchover or failover.

The role option is not only available with the "srvctl add database" command; it is also available with the "srvctl add service" command. Now it becomes really interesting: you tell Oracle to start the service only if the role is PRIMARY.

    $ srvctl add service -h | grep '<role>'
        -role <role>                   Role of the service (primary, physical_standby, logical_standby, snapshot_standby)
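On the client side you can take advantage of this with a single alias that lists both hosts: since the service only runs where the database role is PRIMARY, connections always land on the current primary. A minimal tnsnames.ora sketch (the port is an assumption):

DBIT121_SERVICE =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = dbidg01)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = dbidg02)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = DBIT121_SERVICE)
    )
  )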

    Ok. Let’s create the cluster resources now.

    -- Primary
    $ srvctl add database -db DBIT121_SITE1 -oraclehome /u01/app/oracle/product/12.1.0.2/dbhome_1 \
    -dbtype SINGLE -instance DBIT121 -node dbidg01 \
    -spfile /u01/app/oracle/admin/DBIT121/pfile/spfileDBIT121.ora \
    -pwfile /u01/app/oracle/admin/DBIT121/pfile/orapwDBIT121 \
    -role PRIMARY -startoption OPEN \
    -dbname DBIT121
    
    $ srvctl add service -db DBIT121_SITE1 -service DBIT121_SERVICE -role primary \
    -failovertype SELECT -notification TRUE -tafpolicy BASIC
    
    -- Standby
    $ srvctl add database -db DBIT121_SITE2 -oraclehome /u01/app/oracle/product/12.1.0.2/dbhome_1 \
    -dbtype SINGLE -instance DBIT121 -node dbidg02 \ 
    -spfile /u01/app/oracle/admin/DBIT121/pfile/spfileDBIT121.ora \
    -pwfile /u01/app/oracle/admin/DBIT121/pfile/orapwDBIT121 \
    -role PHYSICAL_STANDBY -startoption MOUNT \
    -dbname DBIT121
    
    $ srvctl add service -db DBIT121_SITE2 -service DBIT121_SERVICE -role primary \
    -failovertype SELECT -notification TRUE -tafpolicy BASIC
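Before the first role change it is worth double-checking what has been registered; srvctl prints the stored role and start options:

$ srvctl config database -db DBIT121_SITE2 | egrep '(role|options)'
Start options: mount
Database role: PHYSICAL_STANDBY

(The two output lines above are what you would expect with the setup shown; the exact wording may vary with the srvctl version.)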

To test that everything works, simply do a switchover with the Data Guard Broker and check the cluster resources afterwards. After the role change, you should see the following cluster resource entries on the primary

    $ crsctl stat res ora.dbit121_site1.db -p | egrep '(USR_ORA_OPEN_MODE|ROLE)'
    ROLE=PRIMARY
    USR_ORA_OPEN_MODE=open

    and these ones on the Standby

    $ crsctl stat res ora.dbit121_site2.db -p | egrep '(USR_ORA_OPEN_MODE|ROLE)'
    ROLE=PHYSICAL_STANDBY
    USR_ORA_OPEN_MODE=mount

Oracle preserves the open modes and also some other settings like Block Change Tracking. If Active Data Guard was not enabled beforehand, it will not be enabled afterwards either (at least that is how it should be). Besides that, Oracle also disables the Block Change Tracking feature on the new standby, because this would require the Active Data Guard license as well.

    alert.log
    ...
    Completed: ALTER DATABASE SWITCHOVER TO 'DBIT121_SITE2'
    Target standby DBIT121_SITE2 did not have Active Data Guard enabled at the time of switchover.
    To maintain Active Data Guard license compliance Block Change Tracking will be disabled.
    Fri Jan 27 08:49:23 2017
    ..

But the final and most important test is killing the PMON on the standby. With GI versions below 12.1.0.2 with the 2016 Oct PSU, you might end up with Active Data Guard enabled. Oops …
Everything was set up correctly, but it still did not work as expected. I have just simulated a background process dying. This could happen in reality, for example due to a bug with "_use_single_log_writer=false" (which is the default with 12c), or simply because someone accidentally kills the wrong process.

    $ ps -ef | grep ora_pmon_DBIT121 | grep -v grep 
    oracle 639 1 0 13:31 ? 00:00:00 ora_pmon_DBIT121
    
    $ kill -9 639 
    
    alert.log 
    ... 
    ... 
    Physical Standby Database mounted. 
    Lost write protection mode set to "typical" 
    Completed: ALTER DATABASE MOUNT /* db agent *//* {0:33:25} */ 
    ALTER DATABASE OPEN /* db agent *//* {0:33:25} */ 
    Data Guard Broker initializing... 
    ... 
    
    Physical standby database opened for read only access. 
    Completed: ALTER DATABASE OPEN /* db agent *//* {0:33:25} */ 
    
    ... 
    
    SQL> select open_mode from v$database; 
    
    OPEN_MODE 
    -------------------- 
    READ ONLY WITH APPLY

After killing the PMON, the instance dies and the cluster takes over, which is very good. However, the cluster ignored the startup options that I had configured beforehand. After upgrading GI and the database to 12.1.0.2 with the 2016 Oct PSU, I could not reproduce this issue anymore, and I have a good solution for preventing Active Data Guard from being activated.
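If you want to catch such a regression in your own test environment, a crude watch loop on the standby's open mode does the job; a quick sketch:

$ while true; do
    echo "select open_mode from v\$database;" | sqlplus -s / as sysdba | egrep 'MOUNTED|READ ONLY'
    sleep 5
  done

As long as it only ever reports MOUNTED while you kill background processes, the cluster restarts the standby with the configured start option.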

But what happens if my primary host dies and a failover is initiated by the observer? Then I have two cluster resources with role PRIMARY and startup option OPEN. Let's simulate this scenario by doing a shutdown abort with srvctl.

    DGMGRL> show configuration;
    
    Configuration - DBIT121
    
      Protection Mode: MaxAvailability
      Members:
      DBIT121_SITE1 - Primary database
        DBIT121_SITE2 - (*) Physical standby database
    
    Fast-Start Failover: ENABLED
    
    Configuration Status:
    SUCCESS   (status updated 5 seconds ago)
    
    
    $ srvctl stop database -db DBIT121_SITE1 -stopoption ABORT

     

    After 30 seconds, the observer initiated a fast start failover, and the new primary is now on SITE2.

    Initiating Fast-Start Failover to database "DBIT121_SITE2"...
    Performing failover NOW, please wait...
    Failover succeeded, new primary is "DBIT121_SITE2"

On SITE1 I still have the old primary with startup option OPEN. This is not an issue at the moment, because it is a primary, and on a primary there is no Active Data Guard. After I start up SITE1, the reinstate takes place a few moments later. Therefore, the database has to be brought into MOUNT state again to do a FLASHBACK DATABASE.

    $ srvctl start database -db DBIT121_SITE1
    
    observer.log
    ...
    Initiating reinstatement for database "DBIT121_SITE1"...
    Reinstating database "DBIT121_SITE1", please wait...
    Reinstatement of database "DBIT121_SITE1" succeeded
    
    broker.log on old Primary
    ...
    Data Guard notifying Oracle Clusterware to prepare database for role change
    Database Reinstate needs instance count to be reduced to 1
    Flashback SCN is 22408550; DB checkpoint SCN is 22405622. Flashback to SCN 22408550.
    01/28/2017 10:59:25
    Physical Standby Reinstatement: Converting old primary to a physical standby
    01/28/2017 10:59:34
    Conversion to physical standby database succeeded
    Instance restart not required
    Purging diverged redos on resetlogs branch 933516912, starting SCN 22408551
    Purged 0 archived logs
    Target standby DBIT121_SITE2 did not have Active Data Guard enabled at the time of failover.
    To maintain Active Data Guard license compliance Block Change Tracking will be disabled.
    01/28/2017 10:59:42
    Notifying Oracle Clusterware to buildup after database reinstatement

The broker knows that Active Data Guard was not enabled beforehand, and so it does not enable it now.

    $ crsctl stat res ora.DBIT121_SITE1.db -p | egrep '(USR_ORA_OPEN_MODE|ROLE)'
    ROLE=PHYSICAL_STANDBY
    USR_ORA_OPEN_MODE=mount
    
    
    SQL> select open_mode from v$database;
    
    OPEN_MODE
    --------------------
    MOUNTED

    That’s it. This is my way to prevent Active Data Guard from being activated. :-)

    Conclusion

Using cluster resources to prevent Active Data Guard from being activated is a fully supported approach. You only need to make sure that your GI/DB and Observer are on version 12.1.0.2 with the 2016 Oct PSU or higher; before that patch level, it never worked correctly for me with cluster resources. Besides that, use only the Broker and the srvctl cluster commands to manage your Data Guard environment.

     

The article Oracle 12c – How to Prevent License Violation with Active Data Guard appeared first on Blog dbi services.
