
SQL Server 2016: IF EXISTS is included in the DROP command


With SQL Server 2016, I was pleasantly surprised by the addition of IF EXISTS directly in the T-SQL DROP command.
Before this new option, such queries had to be written as IF EXISTS (SELECT * FROM sys….) DROP <object>.
I quickly tested it with just two DROP commands (see the list of available objects below):

  • one for a table
  • one for a column in a table

DROP a table

Before SQL Server 2016

IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = 'DatabaseLog' AND TABLE_SCHEMA = 'dbo')
DROP TABLE [dbo].[DatabaseLog]

With SQL Server 2016

DROP TABLE IF EXISTS [dbo].[DatabaseLog]

MSDN reference for DROP TABLE


DROP a column

Before SQL Server 2016

IF EXISTS (SELECT * FROM sys.columns WHERE NAME = 'newcolumn' AND Object_ID = Object_ID(N'DatabaseLog') )
ALTER TABLE [dbo].[DatabaseLog] DROP COLUMN newcolumn;

With SQL Server 2016

ALTER TABLE [dbo].[DatabaseLog] DROP COLUMN IF EXISTS newcolumn

MSDN reference for ALTER TABLE DROP COLUMN


List of available objects

This option is also available for many object types (a few more examples follow the list):

  • AGGREGATE
  • ASSEMBLY
  • COLUMN
  • CONSTRAINT
  • DATABASE
  • DEFAULT
  • FUNCTION
  • INDEX
  • PROCEDURE
  • ROLE
  • RULE
  • SCHEMA
  • SECURITY POLICY
  • SEQUENCE
  • SYNONYM
  • TABLE
  • TRIGGER
  • TYPE
  • USER
  • VIEW
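
The pattern is identical for the other object types; for example (hypothetical object names):

DROP PROCEDURE IF EXISTS [dbo].[usp_PurgeDatabaseLog];
DROP VIEW IF EXISTS [dbo].[vw_DatabaseLog];
DROP INDEX IF EXISTS [IX_DatabaseLog_PostTime] ON [dbo].[DatabaseLog];

Each statement simply does nothing if the object does not exist, instead of raising an error.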

As you can see, this option is also available at server level, not just at database level, as DROP DATABASE shows.

DROP a database

DROP DATABASE IF EXISTS AdventureWorksDW2012

MSDN reference for DROP DATABASE

This is a very nice option to know; it will simplify your future queries… 8-)

 



Oracle DBA Essentials Workshop – Patchset 11.2.0.4 Available


We have upgraded our Oracle DBA Essentials Workshop environment from release 11.2.0.3 to 11.2.0.4. From now on, you can work through the various workshop topics on the new release.

In this article, I present the procedure to follow for the upgrade of your database:

  • Update DMK to version 14-10
  • Upgrade the database
  • Install Oracle patchset 11.2.0.4 and the PSU

Update DMK 14-10

The first step is to update the Data Management Kit (DMK), including dmk_sql. We need to download, unzip and configure the new DMK.

Unzip DMK 14-10.1 and re-source the environment:

oracle@srvora01:/u00/app/oracle/local/ [rdbms11203] tar xvzf dmk-14-10.1-unix.tar.gz
oracle@srvora01:/u00/app/oracle/local/ [rdbms11203] cd dmk/bin/
oracle@srvora01:/u00/app/oracle/local/dmk/bin/ [rdbms11203] echo $SHELL
/bin/bash
oracle@srvora01:/u00/app/oracle/local/dmk/bin/ [rdbms11203] . ./dmk.bash

Configure the new DMK:

oracle@srvora01:/u00/app/oracle/local/ [rdbms11203] cd dmk
oracle@srvora01:/u00/app/oracle/local/dmk/ [rdbms11203] cd etc/
oracle@srvora01:/u00/app/oracle/local/dmk/etc/ [rdbms11203] cp dmk.conf dmk.conf.20150210.yan
oracle@srvora01:/u00/app/oracle/local/dmk/etc/ [rdbms11203] cp dmk.conf.unix dmk.conf

We diff the parameters of the old and the new DMK version:

oracle@srvora01:/u00/app/oracle/local/dmk/etc/ [rdbms11203] diff dmk.conf dmk.conf.20150210.yan
1c1
# $Id: dmk.conf.unix 703 2012-06-19 06:34:59Z jew $
20,21c20,21
< alias::sqh::novar_noforce::"rlwrap.`uname` -i sqlplus '/ as sysdba' ${DMK_EXEC_SQL_SCRIPT}"::
alias::sqh::novar_noforce::"rlwrap.`uname` -i sqlplus '/ as sysdba'"::
> alias::sq::novar_noforce::"sqlplus '/ as sysdba'"::
26c26
alias::rmanc::novar_noforce::"if [ -f ${DMK_ORA_ADMIN_SID}/etc/rman.cfg ]; then . ${DMK_ORA_ADMIN_SID}/etc/rman.cfg; if [ $catalog != "nocatalog" ]; then catalog="catalog=${catalog}"; fi; else catalog=nocatalog; fi; rlwrap.`uname` -i rman target=/ ${catalog};"::
43c43,44
var::PS1::=::nooption::'${LOGNAME}@${HostName}:${PWD}/ [${ORACLE_SID}] '::#prompt
> var::PATH::+::begin::"${ORACLE_HOME}/bin:${ORACLE_HOME}/ctx/bin:${ORACLE_HOME}/OPatch"::
62,63d62
<
< # The dmk-run.ksh option --columns works only for BASH shell
67d65
<
82,86d79
< var::DMK_LINUX_AWK::=::nooption::"/bin/awk"::
< var::DMK_SUNOS_AWK::=::nooption::"awk"::
< var::DMK_SOLARIS_AWK::=::nooption::"awk"::
< var::DMK_HPUX_AWK::=::nooption::"awk"::
< var::DMK_AIX_AWK::=::nooption::"awk"::
88,94d80
< #### NEW in DMK release 2014
< alias::lspdb::novar_noforce::'${PERL_EXEC} ${DMK_HOME}/lib/lspdb.pl'::
< var::PS1::=::nowarn::'${LOGNAME}@${HostName}:${PWD}/ [${ORACLE_SID}${DMK_SID_PDB}] '::#prompt PS1
< alias::sqh::novar_noforce::"rlwrap.`uname` -i sqlplus '/ as sysdba' ${DMK_EXEC_SQL_SCRIPT}"::
< alias::sq::novar_noforce::"sqlplus '/ as sysdba' ${DMK_EXEC_SQL_SCRIPT}"::
< var::PATH::+::begin::"${ORACLE_HOME}/perl/bin:${ORACLE_HOME}/bin:${ORACLE_HOME}/ctx/bin:${ORACLE_HOME}/OPatch"::
< var::NLS_DATE_FORMAT::=::nooption::"DD-MON-YYYY HH24:MI:SS"::
108c94,96
[WSDBA1] > var::ORACLE_UNQNAME::=::nooption::"WSDBA1_SITE1"::
111d98
< #var::DMK_EXEC_SQL_SCRIPT::=::nowarn::"@/"::
125,127d111
< # Example Grid Infrastructure to enable the DMK pluggable database(s) feature
< #[CDBUTF8]
< #var::DMK_ORA12C_PDBS::=::nowarn::"SALESPDB,HRPDB"::

Update dmk_sql:

Add the WSDBA1 config:
oracle@srvora01:/u00/app/oracle/local/dmk/etc/ [rdbms11203] tail -30 dmk.conf
.....
[WSDBA1] var::ORACLE_UNQNAME::=::nooption::"WSDBA1_SITE1"::
.....
oracle@srvora01:/u00/app/oracle/local/ [rdbms11203] mv dmk_sql dmk_sql.to.del_yan
oracle@srvora01:/u00/app/oracle/local/ [rdbms11203] tar xvzf dmk_sql-14-10.tar.gz
oracle@srvora01:/u00/app/ [rdbms11204] echo $DMK_SQL
/u00/app/oracle/local/dmk_sql
oracle@srvora01:/u00/app/ [rdbms11204] sql
oracle@srvora01:/u00/app/oracle/local/dmk_sql/ [rdbms11204]

Install the 11.2.0.4 binaries and the PSU

Unzip the different packages of the 11.2.0.4 binaries:

oracle@srvora01:/u00/app/oracle/software/ [rdbms11203] unzip p13390677_112040_Linux-x86-64_1of7.zip
oracle@srvora01:/u00/app/oracle/software/ [rdbms11203] unzip p13390677_112040_Linux-x86-64_2of7.zip

Check the old Oracle environment:

oracle@srvora01:/u00/app/oracle/software/database/ [rdbms11203] env | grep ORACLE
ORACLE_SID=rdbms11203
ORACLE_BASE=/u00/app/oracle
PS1=${LOGNAME}@${HostName}:${PWD}/ [${ORACLE_SID}${DMK_SID_PDB}]
ORACLE_HOME=/u00/app/oracle/product/11.2.0/db_3_0

Unset the Oracle home, clean up the PATH and LD_LIBRARY_PATH, then launch the runInstaller:

oracle@srvora01:/u00/app/oracle/software/database/ [rdbms11203] unset ORACLE_HOME
oracle@srvora01:/u00/app/oracle/software/database/ [rdbms11203] echo $PATH
/u00/app/oracle/product/11.2.0/db_3_0/perl/bin:/u00/app/oracle/product/11.2.0/db_3_0/bin:/u00/app/oracle/product/11.2.0/db_3_0/ctx/bin:/u00/app/oracle/product/11.2.0/db_3_0/OPatch:/u00/app/oracle/product/11.2.0/db_3_0/bin:/u00/app/oracle/product/11.2.0/db_3_0/ctx/bin:/u00/app/oracle/product/11.2.0/db_3_0/OPatch:/usr/lib64/qt-3.3/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/oracle/bin:/u00/app/oracle/local/dmk_dbbackup/bin:/u00/app/oracle/local/dmk/bin:/u00/app/oracle/local/dmk_ha/bin:/u00/app/oracle/local/dmk_dbcreate/bin
oracle@srvora01:/u00/app/oracle/software/database/ [rdbms11203] export PATH=/usr/lib64/qt-3.3/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/oracle/bin:/u00/app/oracle/local/dmk_dbbackup/bin:/u00/app/oracle/local/dmk/bin:/u00/app/oracle/local/dmk_ha/bin:/u00/app/oracle/local/dmk_dbcreate/bin
oracle@srvora01:/u00/app/oracle/software/database/ [rdbms11203] echo $LD_LIBRARY_PATH
/u00/app/oracle/product/11.2.0/db_3_0/lib:/u00/app/oracle/product/11.2.0/db_3_0/lib32
oracle@srvora01:/u00/app/oracle/software/database/ [rdbms11203] export LD_LIBRARY_PATH=/u00/app/oracle/product/11.2.0/db_3_0/lib32
oracle@srvora01:/u00/app/oracle/software/database/ [rdbms11203] ./runInstaller

Step through the GUI installer (screenshots omitted).

Execute the script root.sh:

[root@srvora01 oracle]# /u00/app/oracle/product/11.2.0.4_ee/db_1/root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u00/app/oracle/product/11.2.0.4_ee/db_1
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Finished product-specific root actions.

Configure the DMK and the new ORACLE_HOME in the /etc/oratab file:

oracle@srvora01:/home/oracle/ [rdbms11203] vio
rdbms11203:/u00/app/oracle/product/11.2.0/db_3_0:D
rdbms11204:/u00/app/oracle/product/11.2.0.4_ee/db_1:D
WSDBA1:/u00/app/oracle/product/11.2.0/db_3_0:Y
Source the profile if ORACLE_SID is still empty:
oracle@srvora01:/home/oracle/ [rdbms11203] . ~/.bash_profile
Dummy:
------
Dummy : rdbms11203(11.2.0/db_3_0)
Dummy : rdbms11204(11.2.0.4_ee/db_1)
Database(s):
------------
Open/(No)Mount : WSDBA1(11.2.0/db_3_0)
Listener(s):
------------
Started : EXTPROC_LSNR(11.2.0/db_3_0) LISTENER(11.2.0/db_3_0)
oracle@srvora01:/home/oracle/ [rdbms11203] rdbms11204
oracle@srvora01:/home/oracle/ [rdbms11204] cdh
oracle@srvora01:/u00/app/oracle/product/11.2.0.4_ee/db_1/ [rdbms11204]

Remove the archived logs from database WSDBA1 and disable archiving:

oracle@srvora01:/u00/app/oracle/product/11.2.0.4_ee/db_1/ [WSDBA1] rmanc
---
Recovery Manager: Release 11.2.0.3.0 - Production on Fri Feb 20 15:27:41 2015
---
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
---
connected to target database: WSDBA1 (DBID=2437526025)
using target database control file instead of recovery catalog
---
RMAN> delete noprompt force archivelog all;
---
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
---
SQL> startup mount;
ORACLE instance started.
---
Total System Global Area 1068937216 bytes
Fixed Size 2235208 bytes
Variable Size 599786680 bytes
Database Buffers 461373440 bytes
Redo Buffers 5541888 bytes
Database mounted.
SQL> alter database noarchivelog;
---
Database altered.
---
SQL> alter database open;
---
Database altered.

Update OPatch and apply the PSU:

unzip p6880880_112000_Linux-x86-64.zip
oracle@srvora01:/u00/app/oracle/product/11.2.0.4_ee/db_1/ [WSDBA1] mv OPatch/ OPatch.old
mv /u00/app/oracle/software/OPatch/ /u00/app/oracle/product/11.2.0.4_ee/db_1/
---
Install the patch according to README.html
---
Check the log file:
oracle@srvora01:/u00/app/oracle/software/19877440/ [rdbms11204] vi /u00/app/oracle/product/11.2.0.4_ee/db_1/cfgtoollogs/opatch/19877440_Feb_20_2015_15_34_10/apply2015-02-20_15-34-10PM_1.log

Upgrade WSDBA1 to patchset 11.2.0.4

See the Oracle documentation for the upgrade. We need to execute the @utlu112i.sql script for the pre-upgrade check. It performs a simple verification of the major database components prior to running the database and dictionary upgrade.

oracle@srvora01:/u00/app/oracle/product/11.2.0.4_ee/db_1/rdbms/admin/ [WSDBA1] ls utlu*
utlu112i.sql utlu112s.sql utlu112x.sql utluiobj.sql utlurl.sql utlusts.sql
---
oracle@srvora01:/u00/app/oracle/product/11.2.0.4_ee/db_1/rdbms/admin/ [WSDBA1] sq
---
SQL*Plus: Release 11.2.0.3.0 Production on Fri Feb 20 13:57:08 2015
---
Copyright (c) 1982, 2011, Oracle. All rights reserved.
---
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
---
SQL> spool upgrde_info.log
SQL> @utlu112i.sql
Oracle Database 11.2 Pre-Upgrade Information Tool 02-20-2015 13:57:26
Script Version: 11.2.0.4.0 Build: 001
---
**********************************************************************
Database:
**********************************************************************
--> name: WSDBA1
--> version: 11.2.0.3.0
--> compatible: 11.2.0.0.0
--> blocksize: 8192
--> platform: Linux x86 64-bit
--> timezone file: V14
.
**********************************************************************
Tablespaces: [make adjustments in the current environment]
**********************************************************************
--> SYSTEM tablespace is adequate for the upgrade.
.... minimum required size: 531 MB
--> SYSAUX tablespace is adequate for the upgrade.
.... minimum required size: 142 MB
WARNING: --> TEMP tablespace is not large enough for the upgrade.
.... currently allocated size: 20 MB
.... minimum required size: 60 MB
.... increase current size by: 40 MB
.... tablespace is NOT AUTOEXTEND ENABLED.
WARNING: --> UNDOTBS1 tablespace is not large enough for the upgrade.
.... currently allocated size: 200 MB
.... minimum required size: 400 MB
.... increase current size by: 200 MB
.... tablespace is NOT AUTOEXTEND ENABLED.
.
**********************************************************************
Flashback: OFF
**********************************************************************
**********************************************************************
Update Parameters: [Update Oracle Database 11.2 init.ora or spfile]
Note: Pre-upgrade tool was run on a lower version 64-bit database.
**********************************************************************
--> If Target Oracle is 32-Bit, refer here for Update Parameters:
-- No update parameter changes are required.
.
--> If Target Oracle is 64-Bit, refer here for Update Parameters:
-- No update parameter changes are required.
.
**********************************************************************
Renamed Parameters: [Update Oracle Database 11.2 init.ora or spfile]
**********************************************************************
-- No renamed parameters found. No changes are required.
.
**********************************************************************
Obsolete/Deprecated Parameters: [Update Oracle Database 11.2 init.ora or spfile]
**********************************************************************
-- No obsolete parameters found. No changes are required
.
**********************************************************************
Components: [The following database components will be upgraded or installed]
**********************************************************************
--> Oracle Catalog Views [upgrade] VALID
--> Oracle Packages and Types [upgrade] VALID
--> Oracle Workspace Manager [upgrade] VALID
.
**********************************************************************
Recommendations
**********************************************************************
Oracle recommends gathering dictionary statistics prior to
upgrading the database.
To gather dictionary statistics execute the following command
while connected as SYSDBA:
.
EXECUTE dbms_stats.gather_dictionary_stats;
.
**********************************************************************
Oracle recommends reviewing any defined events prior to upgrading.
.
To view existing non-default events execute the following commands
while connected AS SYSDBA:
Events:
SELECT (translate(value,chr(13)||chr(10),' ')) FROM sys.v$parameter2
WHERE UPPER(name) ='EVENT' AND isdefault='FALSE'
.
Trace Events:
SELECT (translate(value,chr(13)||chr(10),' ')) from sys.v$parameter2
WHERE UPPER(name) = '_TRACE_EVENTS' AND isdefault='FALSE'
.
Changes will need to be made in the init.ora or spfile.
**********************************************************************
SQL> spool off

Resize the tablespaces according to the prerequisite information:

SQL> select name, bytes/1024/1024 from v$datafile;
NAME BYTES/1024/1024
------------------------------------------------------------ ---------------
/u01/oradata/WSDBA1/system01WSDBA1.dbf 1024
/u01/oradata/WSDBA1/sysaux01WSDBA1.dbf 600
/u01/oradata/WSDBA1/undotbs01WSDBA1.dbf 200
/u01/oradata/WSDBA1/tools01WSDBA1.dbf 100
/u01/oradata/WSDBA1/users01WSDBA1.dbf 500
/u01/oradata/WSDBA1/example01WSDBA1.dbf 500
.
SQL> select name, bytes/1024/1024 from v$tempfile;
.
NAME BYTES/1024/1024
------------------------------------------------------------ ---------------
/u01/oradata/WSDBA1/temp01WSDBA1.dbf 60
.
SQL> set feed on
SQL> alter database tempfile '/u01/oradata/WSDBA1/temp01WSDBA1.dbf' resize 60M;
SQL> alter database datafile '/u01/oradata/WSDBA1/undotbs01WSDBA1.dbf' resize 400M;

Re-run @utlu112i.sql to verify that the prerequisites are now met:

SQL> @utlu112i.sql
Oracle Database 11.2 Pre-Upgrade Information Tool 02-20-2015 14:01:32
Script Version: 11.2.0.4.0 Build: 001
.
**********************************************************************
Database:
**********************************************************************
--> name: WSDBA1
--> version: 11.2.0.3.0
--> compatible: 11.2.0.0.0
--> blocksize: 8192
--> platform: Linux x86 64-bit
--> timezone file: V14
.
**********************************************************************
Tablespaces: [make adjustments in the current environment]
**********************************************************************
--> SYSTEM tablespace is adequate for the upgrade.
.... minimum required size: 531 MB
--> SYSAUX tablespace is adequate for the upgrade.
.... minimum required size: 142 MB
--> TEMP tablespace is adequate for the upgrade.
.... minimum required size: 60 MB
--> UNDOTBS1 tablespace is adequate for the upgrade.
.... minimum required size: 400 MB
.
**********************************************************************
Flashback: OFF
**********************************************************************
**********************************************************************
Update Parameters: [Update Oracle Database 11.2 init.ora or spfile]
Note: Pre-upgrade tool was run on a lower version 64-bit database.
**********************************************************************
--> If Target Oracle is 32-Bit, refer here for Update Parameters:
-- No update parameter changes are required.
.
--> If Target Oracle is 64-Bit, refer here for Update Parameters:
-- No update parameter changes are required.
.
**********************************************************************
Renamed Parameters: [Update Oracle Database 11.2 init.ora or spfile]
**********************************************************************
-- No renamed parameters found. No changes are required.
.
**********************************************************************
Obsolete/Deprecated Parameters: [Update Oracle Database 11.2 init.ora or spfile]
**********************************************************************
-- No obsolete parameters found. No changes are required
.
**********************************************************************
Components: [The following database components will be upgraded or installed]
**********************************************************************
--> Oracle Catalog Views [upgrade] VALID
--> Oracle Packages and Types [upgrade] VALID
--> Oracle Workspace Manager [upgrade] VALID
.
**********************************************************************
Recommendations
**********************************************************************
Oracle recommends gathering dictionary statistics prior to
upgrading the database.
To gather dictionary statistics execute the following command
while connected as SYSDBA:
.
EXECUTE dbms_stats.gather_dictionary_stats;
.
**********************************************************************
Oracle recommends reviewing any defined events prior to upgrading.
.
To view existing non-default events execute the following commands
while connected AS SYSDBA:
Events:
SELECT (translate(value,chr(13)||chr(10),' ')) FROM sys.v$parameter2
WHERE UPPER(name) ='EVENT' AND isdefault='FALSE'
.
Trace Events:
SELECT (translate(value,chr(13)||chr(10),' ')) from sys.v$parameter2
WHERE UPPER(name) = '_TRACE_EVENTS' AND isdefault='FALSE'
.
Changes will need to be made in the init.ora or spfile.
**********************************************************************
SQL> EXECUTE dbms_stats.gather_dictionary_stats;
SQL> SELECT (translate(value,chr(13)||chr(10),' ')) FROM sys.v$parameter2
WHERE UPPER(name) ='EVENT' AND isdefault='FALSE' 2 ;
SQL> set feed on
SQL> SELECT (translate(value,chr(13)||chr(10),' ')) FROM sys.v$parameter2
WHERE UPPER(name) ='EVENT' AND isdefault='FALSE'
2 3 ;
.
no rows selected
.
SQL> SELECT (translate(value,chr(13)||chr(10),' ')) from sys.v$parameter2
WHERE UPPER(name) = '_TRACE_EVENTS' AND isdefault='FALSE'
2 3 ;
.
no rows selected

Shut down the database and switch to the new ORACLE_HOME:

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> exit
.
oracle@srvora01:/u00/app/oracle/product/11.2.0.4_ee/db_1/rdbms/admin/ [WSDBA1] vio
oracle@srvora01:/u00/app/oracle/product/11.2.0.4_ee/db_1/rdbms/admin/ [WSDBA1] grep WSDBA1 /etc/oratab
WSDBA1:/u00/app/oracle/product/11.2.0.4_ee/db_1:Y
oracle@srvora01:/u00/app/oracle/product/11.2.0.4_ee/db_1/rdbms/admin/ [WSDBA1] . dmk.bash
oracle@srvora01:/u00/app/oracle/product/11.2.0.4_ee/db_1/rdbms/admin/ [rdbms11203] WSDBA1
oracle@srvora01:/u00/app/oracle/product/11.2.0.4_ee/db_1/dbs/ [WSDBA1] ln -s /u00/app/oracle/admin/WSDBA1/pfile/orapwWSDBA1
oracle@srvora01:/u00/app/oracle/product/11.2.0/db_3_0/dbs/ [WSDBA1] mv orapwWSDBA1 /u00/app/oracle/product/11.2.0.4_ee/db_1/dbs/
oracle@srvora01:/u00/app/oracle/product/11.2.0.4_ee/db_1/dbs/ [WSDBA1] sq
SQL*Plus: Release 11.2.0.4.0 Production on Fri Feb 20 14:07:54 2015
.
Copyright (c) 1982, 2013, Oracle. All rights reserved.
.
Connected to an idle instance.

Start the database in upgrade mode:

SQL> startup upgrade
ORACLE instance started.
.
Total System Global Area 1068937216 bytes
Fixed Size 2260088 bytes
Variable Size 734004104 bytes
Database Buffers 327155712 bytes
Redo Buffers 5517312 bytes
Database mounted.
Database opened.
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

Run the @catupgrd.sql script:

oracle@srvora01:/u00/app/oracle/product/11.2.0.4_ee/db_1/dbs/ [WSDBA1] cd ..
.
oracle@srvora01:/u00/app/oracle/product/11.2.0.4_ee/db_1/ [WSDBA1] cd rdbms/admin/
.
oracle@srvora01:/u00/app/oracle/product/11.2.0.4_ee/db_1/rdbms/admin/ [WSDBA1] sq
.
SQL*Plus: Release 11.2.0.4.0 Production on Fri Feb 20 14:08:14 2015
.
Copyright (c) 1982, 2013, Oracle. All rights reserved.
.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
.
SQL> !ls catup*
catupend.sql catupgrd.sql catupprc.sql catuppst.sql catupses.sql catupshd.sql catupstr.sql
.
SQL> @catupgrd.sql

When it has finished, execute the scripts @utlu112s.sql, @catuppst.sql and @utlrp:

SQL> startup
ORACLE instance started.
.
Total System Global Area 1068937216 bytes
Fixed Size 2260088 bytes
Variable Size 734004104 bytes
Database Buffers 327155712 bytes
Redo Buffers 5517312 bytes
Database mounted.
Database opened.
.
SQL> !ls utlu112*
utlu112i.sql utlu112s.sql utlu112x.sql
SQL> @utlu112s.sql
SQL> @catuppst.sql
SQL> @utlrp

Run the post-install script for the PSU:

oracle@srvora01:/u00/app/oracle/software/19877440/ [WSDBA1] sq
SQL*Plus: Release 11.2.0.4.0 Production on Fri Feb 20 15:51:59 2015
Copyright (c) 1982, 2013, Oracle. All rights reserved.
.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
.
SQL> @postinstall.sql
PL/SQL procedure successfully completed.
Function created.
.....
.....
.....
20-FEB-15 03.46.36.204840 PM
UPGRADE SERVER
11.2.0.4.0
Upgraded from 11.2.0.3.0
.
6 rows selected.
.
SQL> @utlrp.sql

Once the upgrade has completed successfully, we can resume using the ORA-DBA-Essentials training environment:

oracle@srvora01:/u00/app/oracle/local/ORA-DBA-Essentials/ [WSDBA1]

The last step concerns the oratab file: we need to delete the old environment:

# Multiple entries with the same $ORACLE_SID are not allowed.
#
#
#rdbms11203:/u00/app/oracle/product/11.2.0.3/db_1:D
rdbms11204:/u00/app/oracle/product/11.2.0.4_ee/db_1:D

Now we have our Oracle DBA Essentials Workshop environment on release 11.2.0.4:

SQL> select * from v$version;
.
BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release      11.2.0.4.0 - 64bit Production
PL/SQL Release                                      11.2.0.4.0 - Production
CORE                                                11.2.0.4.0 - Production
TNS for Linux: Version                              11.2.0.4.0 - Production
NLSRTL Version                                      11.2.0.4.0 - Production
.
.
SQL> SELECT version FROM product_component_version WHERE product like '%Database%';
.
VERSION
------------
11.2.0.4.0

 


Journées SQL Server (JSS) 2015 in Paris – Sessions & Sponsoring



Following David Barbarin's first announcement about the Journées SQL Server on November 30 and December 1, I am pleased to invite you to also come and see us at our booth.
Indeed, as last year, we are happy to once again be a GOLD sponsor of the event, and we will be located as shown on the floor plan below.
[Floor plan of the JSS 2015 exhibition area]
Don't hesitate to come and meet us at our booth!

We will also have the privilege of presenting two sessions on the technologies of the next SQL Server version:

  • SQL Server 2016 new features: Security, Temporal and Stretch Tables, presented by Nathan Courtine and myself just after lunch on Monday
  • In-Memory 2016: Operational Analytics, presented by David Barbarin at the end of the day, also on Monday

The final session schedule is now available, with very good topics and excellent speakers (as always ;-)).


I also take the opportunity of this article to invite you to have a look at all our articles/tests on SQL Server 2016.

Looking forward to seeing you at JSS 2015!

 

 

 


set “_rowsets_enabled”=false


This 12c ‘wrong results’ issue is thoroughly described by Mike Dietrich (https://blogs.oracle.com/UPGRADE/entry/switch_off_rowsets_enabled_in) and I’ve nothing to add.
This blog post exists to serve as a reference for the comment in the following recommendation, which we give to our 12c customers:

alter system set "_rowsets_enabled"=false comment='see http://blog.dbi-services.com/_rowsets_enabled' scope=both;

Our best practice is to document any parameter that is set as a workaround, so that it can be reverted when it is no longer needed.
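
As a sketch of how this pays off later: the comment is stored with the parameter in the spfile, so the documented workarounds can be listed at any time (only parameters set with scope=spfile or both appear in V$SPPARAMETER):

SQL> select name, value, update_comment from v$spparameter where isspecified='TRUE' and update_comment is not null;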

This post will be updated when a fix is available.
By the way, I recommend that every Oracle DBA follow Mike Dietrich's blog.

 


Software architecture vs code representation


On Tuesday the 10th of November I attended a public lecture at the Swiss Java User Group in Zürich. The topic of the presentation was to describe and address the challenges any project manager can encounter when the focus switches from the perspective of software architecture to the actual coding implementation.

The presentation was essentially based on the Java language, but is in fact applicable to any other language.

First, the presenter shared project experiences he has gathered with some customers and pointed out weaknesses of their architecture diagrams in comparison with their implementations.

To give an example, consider a layered model, possibly a Model/View/Controller approach, for several parallel services. To be clear, I am not saying this approach is no longer applicable to a project: as long as the scope remains relatively restricted, the architectural design will not be so hard to represent.

However, when we start having bigger problems to address, with dependencies between components within or across these layers, the architecture diagram can start to look like a spider web. Moreover, with abstraction done in this manner, it also becomes painful to navigate between layers and/or dependent sub-services to understand and recover the "big picture" of a business requirement. The same challenge arises when trying to find the corresponding code.

The presenter also argued that traditional modeling patterns can lead to misunderstandings between customers, software designers and developers. Indeed, with such an abstract vocabulary, the same word can have several meanings depending on the position of the person involved in the project. He invited us to avoid, as far as possible, generic terms like "Service", "Layer" or "Component" when sharing knowledge or synchronizing between the teams involved.

After this introduction, the presenter gave us some keys and best practices for improving software architecture representations and bringing them closer to the code representation as well.

The main proposition is to represent a software architecture more by "containers". A container can be understood as an aggregated set of (micro)services that answers a particular need, rather than "view" or "controller" layers that are shared by multiple services. He convinced us to represent our software more by "utility" modules than by abstract, unnamed layers (unnamed meaning not representing any functional activity of the software solution).

He also invited us to formalize this architecture in several levels of abstraction, up to four, depending on the perspective we want and the question to be answered. The first, very high-level view is close to a use-case diagram but includes third-party dependencies; here the software is represented only by a black box at the center. He then described how we can zoom into this "box", and into the subsequent "boxes", down to a traditional class diagram.

From experience, and coming as I do from the "coding" world, I have faced the same challenges he described when coordinating with people holding an "architectural" point of view. Together we had to find solutions to understand each other better, and to say the least, the outcome was something close to the presenter's conclusions.

If you would like more details about this original software architecture approach, I invite you to follow Simon Brown's presentations and news.

http://www.codingthearchitecture.com/authors/sbrown/

I also want to thank him and the Swiss Java User Group for organizing such interesting lectures.

Philippe Schweitzer

 


Interested in a deep dive of logical replication?


It’s next week at #DOAG2015

Interested in a deep dive into logical replication? The best way to learn is to try it, and that's the idea of #RepAttack: you can quickly install a replication configuration on your laptop, using VirtualBox, Oracle XE, and the Dbvisit Replicate trial version.

Last year, Jan Karremans (www.jk-consult.nl) organized a #RepAttack, which was cool, but we were in a room at the end of the 3rd floor and many people hesitated to come inside. So for this year we had the idea to do it in a different way: fix a rendez-vous on the 2nd floor, in front of the stairs:

Wednesday, November 18 – 2:00pm

There we will meet up, give a brief intro to #RepAttack, distribute USB sticks with all the required software, and then find a place to put the laptops on a table (or not – it's a lap-top after all) and start installing/configuring.

Why on the 2nd floor? Because all the Dbvisit partners are there: Opitz, Herrmann & Lenz Services GmbH and dbi services are right at the top of the stairs, and Robotron is a few steps from there.

And at dbi services, we have the 'live demo' screen that we use for on-demand demos for people coming to the booth. There I can show you what the #RepAttack will look like before you do it on your laptop.

Hope to see lots of people next week.

The prereqs for the laptops are:

• At least 2.3 GB RAM per VM; 5 GB in total as 2 VMs will be built. A laptop with 8 GB is recommended.
• At least 17 GB of free space; 25 GB is recommended.
• 2 GHz processor (a lesser processor will be acceptable, but slower)
• Admin privileges on your host machine are required to install VirtualBox.
• Windows, Linux, or Mac OS X as the operating system on your host machine.

If you want to look at the cookbook, it’s online: https://dbvisit.atlassian.net/wiki/display/REPA11XE/RepAttack+11g+XE+Home

If you have an HP laptop, please check beforehand that you can start a 64-bit VM on it, because experience at #RacAttack shows that we can spend a lot of time finding the right BIOS settings. But don't worry, we are there to help.

Not at DOAG? Then see you next month in Birmingham: there's another #RepAttack there.

 


Linux server administration made easy with Puppet


Managing Linux servers with Puppet

 

As soon as several Linux servers are in place, administering the countless configurations (ntp, chrony, dns, users, groups, services, etc.) immediately becomes an effort.

Every administrator would like all servers to be configured in the same way, with the same configurations.

Puppet is a tool to automate exactly these tasks. This is done with manifests (definitions). The functionality is comparable to policies in the Windows world.

These manifests describe the desired state of the system, its files or services. In other words, if a service is missing, it gets installed. If a configuration file is required, it is created or replaced by a clean copy.

Puppet is able to determine on the target system which commands must be used, for example, to install software packages: on Red Hat servers it knows yum, on Debian apt-get. In contrast to scripts, Puppet defines the desired state in a descriptive way.

To get started with Puppet, a Puppet master and a client are needed.

The installation of the Puppet master is described at the following link: https://docs.puppetlabs.com/guides/install_puppet/post_install.html#configure-a-puppet-master-server

The Puppet agent must be installed on the client.

Installing the Puppet client from the Puppet master:

[root@fed22v1 ~]# curl -k https://puppetmaster:8140/packages/current/install.bash | bash

Installation directly from the distribution's repository (e.g. Fedora):

[root@fed22v1 ~]# dnf info puppet
Last metadata expiration check performed 0:18:20 ago on Thu Oct 29 16:32:43 2015.
Installed Packages
Name        : puppet
Arch        : noarch
Epoch       : 0
Version     : 4.1.0
Release     : 5.fc22
Size        : 4.2 M
Repo        : @System
From repo   : updates
Summary     : A network tool for managing many disparate systems
URL         : http://puppetlabs.com
License     : ASL 2.0
Description : Puppet lets you centrally manage every important aspect of your system using a
            : cross-platform specification language that manages all the separate elements
            : normally aggregated in different files, like users, cron jobs, and hosts,
            : along with obviously discrete elements like packages, services, and files.

After installing the client, the following must be entered in /etc/puppet/puppet.conf (the name puppetmaster must be resolvable):

[main]
server = puppetmaster
[agent]
certname = fed22v1.localdomain

The client still has to be accepted on the Puppet master:

[root@puppetmaster]# puppet cert sign fed22v1.localdomain

A first contact between the client and the Puppet master can be triggered immediately with this command:

[root@fed22v1 ~]# puppet agent -tv

And now let's configure the first service:

The Puppet community provides a whole list of predefined configurations (modules).

[root@puppetmaster /etc/puppetlabs/code/environments/production/modules]# puppet module search ntp
Notice: Searching https://forgeapi.puppetlabs.com ...
NAME                      DESCRIPTION                                                           AUTHOR             KEYWORDS
thias-ntp                 Network Time Protocol module                                          @thias             ntp ntpd
ghoneycutt-ntp            Manage NTP                                                            @ghoneycutt        ntp time services sync
puppetlabs-ntp            Installs, configures, and manages the NTP service.                    @puppetlabs        ntp time rhel ntpd gentoo aix
dhoppe-ntp                This module installs, configures and manages the NTP service.         @dhoppe            debian ubuntu ntp
diskstats-ntp             Lean RedHat NTP module, with the most common settings.                @diskstats         redhat ntp time rhel ntpd hiera
saz-ntp                   UNKNOWN                                                               @saz               ntp time ntpd gentoo oel suse
example42-ntp             Puppet module for ntp                                                 @example42         ntp example42
erwbgy-ntp                configure and manage ntpd                                             @erwbgy            ntp time services rhel centos
mthibaut-ntp              NTP Module                                                            @mthibaut          ntp hiera
kickstandproject-ntp      UNKNOWN                                                               @kickstandproject  ntp
aageyev-ntp               Install ntp on ubuntu                                                 @aageyev           ubuntu ntp
a2tar-ntp                 Install ntp on ubuntu                                                 @a2tar             ubuntu ntp
csail-ntp                 Configures NTP servers and clients                                    @csail             debian ubuntu ntp ntpd freebsd
warriornew-ntp            ntp setup                                                             @warriornew        ntp
a2labs-ntp                Install ntp on ubuntu                                                 @a2labs
mmitchell-puppetlabs_ntp  UNKNOWN                                                               @mmitchell
tohuwabohu-openntp        Puppet module for OpenNTPD                                            @tohuwabohu        ntp time openntp
hacking-ntpclient         A module to enable easy configuration of an NTP client                @hacking           ntp
ringingliberty-chrony     Manages the chrony network time daemon                                @ringingliberty    debian ubuntu redhat ntp fedora
example42-openntpd        Puppet module for openntpd                                            @example42         ntp example42 openntpd
evenup-time               Manages the timezone and ntp.                                         @evenup            ntp
oppegaard-ntpd            OpenNTP module for OpenBSD                                            @oppegaard         ntp ntpd openbsd openntpd
erwbgy-system             Manage Linux system resources and services from hiera configuration   @erwbgy            ntp rhel cron sshd user host fact
mikegleasonjr-server      The Server module serves as a base configuration for all your mana... @mikegleasonjr     ntp rsyslog firewall timezone swa

 

[root@puppetmaster /etc/puppetlabs/code/environments/production/modules]# puppet module search chrony
Notice: Searching https://forgeapi.puppetlabs.com ...
NAME                   DESCRIPTION                                               AUTHOR           KEYWORDS
ringingliberty-chrony  Manages the chrony network time daemon                    @ringingliberty redhat ntp fedora centos chrony
aboe-chrony            Module to install chrony time daemon on Archlinux         @aboe

 

As a first example, I chose ntp and chrony.

The corresponding module must be installed on the Puppet master:

[root@puppetmaster]# puppet module install puppetlabs-ntp
[root@puppetmaster]# puppet module install ringingliberty-chrony

 

After the installation, the modules are located under:

[root@puppetmaster /etc/puppetlabs/code/environments/production/modules]# ls -als
4 drwxr-xr-x 6 root root 4096 Oct 29 12:12 chrony
4 drwxr-xr-x 7 root root 4096 Jul 22 00:44 ntp

 

The module now has to be assigned to a client (via the CLI):

This assignment is done in site.pp:

[root@puppetmaster /etc/puppetlabs/code/environments/production/manifests]# ls -als
4 -rw-r--r-- 1 root root 2079 Oct 29 12:38 site.pp
node 'fed22v1.localdomain' {
class { 'ntp':
servers => [
'1.ch.pool.ntp.org',
'2.ch.pool.ntp.org',
'3.ch.pool.ntp.org'
]}}
 
or
node 'fed22v1.localdomain' {
class { 'chrony':
servers => [
'1.ch.pool.ntp.org',
'2.ch.pool.ntp.org',
'3.ch.pool.ntp.org'
]}}

 

The module can also be assigned to a client via the web interface:

[Screenshot: node management in the web console]

The difference between these two types of configuration:

  • site.pp

This is processed first and is the central way of configuring. Defaults for all clients can also be defined here.

  • Web GUI

Here, servers can be grouped together; the classes (e.g. chrony) are then assigned to these groups.

 

Conclusion:

Puppet offers the ability to configure servers centrally. Simple items such as time synchronization are quickly installed and configured. Next, I will tackle users, services and configuration files, and explore Puppet's further powerful capabilities!

 


Oracle Data Integrator – The E-LT tool from Oracle


Oracle Data Integrator is the ETL tool that Oracle has developed to enter the world of Business Intelligence. Taking advantage of its extensive experience in database management, Oracle has created a tool that is flexible to use, powerful, and able to integrate data coming from many other sources.


Agility of use

Oracle Data Integrator uses various types of graphical interfaces during all stages of data flow creation. The different interfaces are very intuitive and easy to use.

The user interface is split into three main windows:
– the project elements
– the workflow design
– the predefined components
The project element window centralizes all the components, database connections and workflows you have created for the project, so all information is in one location.
The workflow designer uses drag-and-drop technology. At the same time, you can use predefined components that encapsulate SQL instructions or scripts. Using these components avoids writing many complex lines of code that could lead to syntax errors, while each module or task can still be controlled and customized.

The Loading Knowledge Module and the Integration Knowledge Module

One of the most powerful ODI features is that you can choose the data loading and data integration methodology without writing code.
For example, you can use an Oracle-to-Oracle push or pull method for loading the data, or choose a multi-connection setup or a specific SQL script. These options allow a better data transfer between the source and the target tables.

The Integration Knowledge Module lets you choose the data update or integration strategy. Several actions are available: Oracle incremental update, Oracle update, and so on. All of these functions are modeled in the knowledge modules.
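
To give an idea of what such a strategy does behind the scenes, an incremental update typically boils down to set-based DML similar to this sketch (hypothetical staging and target tables; the exact statements generated by an IKM depend on the module and its options):

merge into dw_customer t
using stg_customer s
on (t.customer_id = s.customer_id)
when matched then
  update set t.name = s.name, t.city = s.city
when not matched then
  insert (customer_id, name, city)
  values (s.customer_id, s.name, s.city);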


The Check Knowledge Module

This module is used to check the constraints of the datastore and to create an error table. It can use the Oracle check module or a specific SQL script.
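
Conceptually, a flow check isolates the invalid rows into the error table before the integration step, along the lines of this sketch (hypothetical tables and rule; ODI generates the real statements from the declared constraints):

-- Move rows violating a mandatory-column rule into the error table
insert into e$_customer (customer_id, err_mess)
select customer_id, 'NAME is null' from stg_customer where name is null;
delete from stg_customer where name is null;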


Powerful tool

Every data operation, such as select, create or lookup, has been integrated into the modules, avoiding many complex hand-written lines of code that could lead to syntax errors. Each module or task can nevertheless be controlled and customized.


In addition, ODI provides a very powerful debugger that allows checking each task of the data transformation process.


Access to many external data sources

A good ETL tool must nowadays connect to most databases and data types on the market. That's why ODI provides access to many databases, including big data sources, making it one of the most connected tools on the market.


E-LT, the Business Intelligence 2.0

ODI is based on a technological concept called E-LT (Extract – Load, Transform). Most data integration tools use ETL (Extract – Transform – Load): the data first has to be extracted from its source and copied into a temporary database; only in a second step is it transformed and stored in the actual data warehouse. These processes cost time, RAM and CPU.

The E-LT technology changes the order of these steps. It takes advantage of the hardware and software of the source and target databases: the data is extracted and copied directly into the data warehouse, and the transformation processes run on the data already located there. This avoids an extra data transfer that costs time. Finally, this technology allows frequent data refreshes, because we can choose exactly which data flow we want to update.
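
A minimal sketch of the E-LT idea (hypothetical tables and database link): the rows are first landed unchanged in the target database, then transformed there with set-based SQL.

-- Load: extract from the source and land the rows as-is in the target
insert into stg_orders select * from orders@src_db;
-- Transform: runs inside the target database, where the data already is
insert into dw_orders (order_id, amount_chf)
select o.order_id, o.amount * f.rate
from stg_orders o
join fx_rates f on f.currency = o.currency;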

[Diagrams: classical ETL flow vs. E-LT flow]

Oracle Data Integrator and Oracle GoldenGate

More and more customer solutions are using Oracle GoldenGate and Oracle Data Integrator at the same time. What are the advantages of using these two technologies together?

  • First advantage: Oracle GoldenGate allows data replication with a granularity down to the table level. The data transfer to ODI can thus be focused on the relevant data.
  • Second advantage: GoldenGate transfer jobs can be triggered, which allows launching an "on demand" data update process and, in certain cases, a "real time" data update process.


Oracle Data Integrator and Enterprise Data Quality

Data quality is becoming a very big problem for companies due to the explosion of the quantity of data to analyze. For a company, analyzing the right data is essential: a business analysis made on bad data can lead the company to take bad decisions. That's why ODI can use EDQ flows to make sure the data has been validated.


Conclusion

In conclusion, we can say that Oracle Data Integrator is one of the most powerful data integration products on the market. In addition, the possibility of using an ODI instance in the cloud makes it very versatile. And finally, the combination of ODI and Oracle GoldenGate with the E-LT technology can be very powerful for getting operational data into your data warehouse in a minimum of time. In a next blog, I will present Oracle Enterprise Data Quality, a module for managing your data quality.

 



Cloud Control 12c on your laptop


Today, every DBA should have a lab on his laptop. This is where I reproduce most cases before opening an SR, investigate something, demo, learn new features, or prepare for an OCM exam. I have a few basic VirtualBox VMs with single-instance databases (11g and 12c, SE and EE, CDB and non-CDB, etc.). I have the RacAttack environment with a 3-node RAC in 12c on OEL7. But in order to prepare for the OCM 12c upgrade, I also need a Cloud Control 12c. Here is how I got it without having to install it.

Oracle provides some VirtualBox images, and there is one with Oracle Enterprise Manager Cloud Control 12c Release 5 (12.1.0.5) and a database (11.2.0.4) for the repository.

Download

You can get the image from: https://edelivery.oracle.com

Filter products by 'Linux/OVM/VMs' and search for 'Enterprise Manager'.


It's 17 GB in total to download, so be patient.


You can also download a wget script to get all the files.


So you have 6 files, which you can unzip. When you see the compression ratio, you may wonder why it is zipped at all…

Then concatenate all .ova files:

C:\Users\frp>cd /d F:\Downloads
F:\Downloads>type VBox*.ova > EM12cR5.ova

and you can import it into VirtualBox like any other OVA.

Network

I have all my VMs on the host-only network (192.168.78.1 in my case).
In the VM configuration, I set the first network adapter to the 'host only' network and the second one to NAT to be able to access the internet.
If you imported the OVA without changing the MAC addresses, here they are: 0800274DA371 and 08002748F74B

Now I can start the VM and login as root – password welcome1

My keyboard has the Swiss French layout, so I change it in System/Administration/Keyboard.

Then I want to be able to ssh to the machine, so I set up the network (System/Administration/Network).

Here is my configuration:
[Screenshot: network interface configuration]

Then I activate the interface and can connect with ssh:
$ ssh root@192.168.78.42

Stop iptables as root

I want to communicate with the other VMs to discover databases, and to access the console via the web, so I disable the firewall:


[root@emcc ~]# /etc/init.d/iptables stop
Flushing firewall rules: [ OK ]
Setting chains to policy ACCEPT: filter [ OK ]
Unloading iptables modules: [ OK ]
[root@emcc ~]# chkconfig iptables off

then I can connect as oracle


$ su - oracle@192.168.78.42
 
Installation details of EM Plugin Update 1 12.1.0.5.0 .....
 
EM url: https://emcc.example.com:7799/em
Ports used by this deployment at /u01/OracleHomes/Middleware/oms/install/portlist.ini
Database 11.2.0.4.0 location: /u01/OracleHomes/db
Database name: emrepus
EM Middleware Home location: /u01/OracleHomes/Middleware
EM Agent Home location: /u01/OracleHomes/agent/core/12.1.0.5.0
 
This information is also available in the file /home/oracle/README.FIRST
 
 
To start all processes, click on the start_all.sh icon on your desktop or run the script /home/oracle/Desktop/start_all.sh
 
To stop all processes, click on the stop_all.sh icon on your desktop or run the script /home/oracle/Desktop/stop_all.sh

You just have to follow what is displayed.

start_all.sh

Here is how to start everything in the right order.


$ /home/oracle/Desktop/start_all.sh
Starting EM12c: Oracle Database, Oracle Management Server and Oracle Management Agent .....
 
Starting the Oracle Database and network listener .....
 
 
LSNRCTL for Linux: Version 11.2.0.4.0 - Production on 12-NOV-2015 12:46:01
 
Copyright (c) 1991, 2013, Oracle. All rights reserved.
 
Starting /u01/OracleHomes/db/product/dbhome_1/bin/tnslsnr: please wait...
 
TNSLSNR for Linux: Version 11.2.0.4.0 - Production
Log messages written to /u01/OracleHomes/db/product/dbhome_1/log/diag/tnslsnr/emcc/listener/alert/log.xml
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=localhost.localdomain)(PORT=1521)))
 
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 11.2.0.4.0 - Production
Start Date 12-NOV-2015 12:46:01
Uptime 0 days 0 hr. 0 min. 0 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Log File /u01/OracleHomes/db/product/dbhome_1/log/diag/tnslsnr/emcc/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=localhost.localdomain)(PORT=1521)))
The listener supports no services
The command completed successfully
 
SQL*Plus: Release 11.2.0.4.0 Production on Thu Nov 12 12:46:02 2015
 
Copyright (c) 1982, 2013, Oracle. All rights reserved.
 
Connected to an idle instance.
 
SQL> ORACLE instance started.
 
Total System Global Area 1469792256 bytes
Fixed Size 2253344 bytes
Variable Size 855641568 bytes
Database Buffers 603979776 bytes
Redo Buffers 7917568 bytes
Database mounted.
Database opened.
SQL>
System altered.
 
SQL> Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting the Oracle Management Server .....
 
nohup: appending output to `nohup.out'
 
 
opmnctl startall: starting opmn and all managed processes...
Starting the Oracle Management Agent .....
 
Oracle Enterprise Manager Cloud Control 12c Release 5
Copyright (c) 1996, 2015 Oracle Corporation. All rights reserved.
Starting agent ........................... started.

And it's ready. You can access it from the VM console, or from the host browser at https://192.168.78.42:7799/em

All passwords are welcome1.

You can discover the other hosts. I've put them in the /etc/hosts of the EM VM, and I've added the following line to the /etc/hosts of every host where I want to install an agent:

192.168.78.42 emcc emcc.example.com

Hope it helps. Some things are quicker to do from Cloud Control when you are in the OCM exam, so it's better to know it. However, don't rely on that alone, as it may not be available for all exercises.

 


Instance fails to start automatically


Recently, at a customer site, I was confronted with a problem: an Oracle Database 12c would not restart automatically on Windows Server 2008 R2 x64.
I installed Oracle 12.1.0.1.0 with a domain service account that is a member of the ORA_DBA group. The authentication service is correctly set with the NTS parameter in the sqlnet.ora file.

The installation completed successfully, but after a reboot we can see that the database is not OPEN.

oracle@********:C:\Users\oracle\ [rdbms1201] DBPIO22
********* dbi services Ltd. *********
STATUS : STOPPED
*************************************
oracle@********::C:\Users\oracle\ [DBPIO22] sqh
SQL*Plus: Release 12.1.0.1.0 Production on Mar. Nov. 10 13:51:35 2015
Copyright (c) 1982, 2013, Oracle. All rights reserved.
Connecté à une instance inactive.
.
SQL> select status from v$instance;
select status from v$instance
*
ERREUR à la ligne 1 :
ORA-01034: ORACLE not available
ID de processus : 0
ID de session : 0, Numéro de série : 0

Restarting the service doesn't help: when we restart the service, no startup command is sent to the database instance.
Unfortunately, we find no information about this problem in the alert log.

The Oracle service in services.msc has been correctly configured for an automatic start:
[Screenshot: Oracle service properties in services.msc]

Solution

This problem comes from the SQLNET.OUTBOUND_CONNECT_TIMEOUT parameter in the sqlnet.ora file. This parameter specifies the time allowed for a client to establish an Oracle Net connection to the database. The outbound connect timeout interval includes the time taken to be connected to an Oracle instance providing the requested service.

If you want the database to restart automatically in a Windows environment, you should set the value to 0 or comment out the SQLNET.OUTBOUND_CONNECT_TIMEOUT parameter:

# ****************************************************************************
# Specify the time for a client to connect with the database server
# and provide the necessary authentication information
# unit in seconds
#
SQLNET.INBOUND_CONNECT_TIMEOUT=60
# ****************************************************************************
# Specify the time for a client to establish an Oracle Net connection
# to the database, superset of the TCP connect timeout ( i.e : 8min on Linux)
# unit in seconds
SQLNET.OUTBOUND_CONNECT_TIMEOUT=0

After changing the value of this parameter in the sqlnet.ora file and rebooting, the database restarted automatically and successfully:

SQL> select INSTANCE_NAME, STATUS,STARTUP_TIME,VERSION from v$instance;
.
INSTANCE_NAME STATUS STARTUP_TIME VERSION
---------------- ------------ ---------------------- -----------------
dbpio22 OPEN 10-NOV. -2015 14:04:30 12.1.0.1.0

I have opened a Service Request with Oracle Support for this bug, and I am waiting for a patch that fixes it on Windows.

The bug is fixed in Oracle 12.2, but for release 12.1.0.1 the patch is not available specifically for Microsoft Windows x64 (64-bit). The bug is in 'Development Working' status.

Many thanks to Nicolas Jardot for his help and expertise :)

 


In-Memory synchronous population with hidden parameters


By default, In-Memory Column Store population is done in the background, asynchronously. There are two hidden parameters that can change this behaviour; let's see how it works.

Note that this is only good for research: those parameters are undocumented, which means that they may not behave as you think they would.
In a demo about In-Memory (slides, video) I show that In-Memory population is triggered by instance startup or by a query on the tables, and is done asynchronously by the background processes (IMCO and Wnnn). The demo warns about the intermediate state where the execution plan is optimized for In-Memory (Full Table Scan) but rows are still read from the buffer cache.

Let’s take a simple example and see different ways to populate.

default population

I have a table DEMO with the default priority, which is on-demand: population is triggered by the first access to the table.
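For reference, this default on-demand behaviour corresponds to a declaration like the following (a sketch, as the original DDL isn’t shown in this post; PRIORITY NONE is the default and means population on first access):

alter table DEMO inmemory priority none;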
Then I do that first access with a simple ‘count(*)’ and measure the direct path reads done by my session and by the In-Memory worker processes (Wnnn):

SQL> select segment_name,populate_status from v$im_segments;
no rows selected
 
SQL> select count(*) from DEMO;
 
COUNT(*)
----------
4000000
 
SQL> select segment_name,populate_status from v$im_segments;
no rows selected
 
SQL> select program,e.event,e.total_waits,e.time_waited_micro/1e6 seconds,e.wait_class from v$session_event e join v$session using(sid)
2 where (program like '%(W%' or sid=sys_context('userenv','sid')) and e.wait_class='User I/O' order by total_waits;
 
PROGRAM EVENT TOTAL_WAITS SECONDS WAIT_CLASS
------------------- ------------------------------ ----------- ---------- ----------------------------------------------------------------
oracle@Exdb3 (W000) Disk file operations I/O 1 .002772 User I/O
sqlplus@Exdb3 (TNS db file scattered read 2 .001731 User I/O
oracle@Exdb3 (W001) Disk file operations I/O 3 .004728 User I/O
sqlplus@Exdb3 (TNS Disk file operations I/O 3 .000533 User I/O
oracle@Exdb3 (W001) db file sequential read 6 .024402 User I/O
oracle@Exdb3 (W000) direct path read 25 .140749 User I/O
oracle@Exdb3 (W001) direct path read 116 .887163 User I/O
sqlplus@Exdb3 (TNS db file sequential read 170 .186352 User I/O
sqlplus@Exdb3 (TNS direct path read 493 1.849471 User I/O

The population was triggered. The query on V$IM_SEGMENTS run immediately afterwards shows no population yet. And the following query on session waits shows 493 direct path reads by my session – for the count(*) – and only a few direct path reads from the IM workers, because most of the IMCUs were not populated yet.

_inmemory_populate_wait

We can choose synchronous population with “_inmemory_populate_wait”. When it is set to true (the default is false), the foreground session waits for the table to be populated in memory. Let’s test it.

SQL> alter session set "_inmemory_populate_wait"=true;
Session altered.
 
SQL> select count(*) from DEMO;
 
COUNT(*)
----------
4000000
 
SQL> select segment_name,populate_status from v$im_segments;
 
SEGMENT_NAME POPULATE_STATUS
------------ ----------------
DEMO COMPLETED
 
SQL> select program,e.event,e.total_waits,e.time_waited_micro/1e6 seconds,e.wait_class from v$session_event e join v$session using(sid)
2 where (program like '%(W%' or sid=sys_context('userenv','sid')) and e.wait_class='User I/O' order by total_waits;
 
PROGRAM EVENT TOTAL_WAITS SECONDS WAIT_CLASS
------------------- ------------------------------ ----------- ---------- ----------------------------------------------------------------
oracle@Exdb3 (W001) Disk file operations I/O 1 .007857 User I/O
oracle@Exdb3 (W000) Disk file operations I/O 2 .001091 User I/O
sqlplus@Exdb3 (TNS db file scattered read 2 .001382 User I/O
sqlplus@Exdb3 (TNS Disk file operations I/O 4 .000555 User I/O
oracle@Exdb3 (W000) db file sequential read 5 .002017 User I/O
sqlplus@Exdb3 (TNS db file sequential read 173 .072744 User I/O
oracle@Exdb3 (W001) direct path read 336 1.962332 User I/O
oracle@Exdb3 (W000) direct path read 473 2.048195 User I/O
sqlplus@Exdb3 (TNS direct path read 490 1.994116 User I/O

Here, the population status was ‘COMPLETED’ immediately after my count(*) query. This is exactly what “_inmemory_populate_wait” is for.
We see the direct-path reads done by the IM worker processes. But the count(*) did the same direct-path reads as before, which means that the execution of the query did not use the IMCS at that time. We wait for the population before completing the user call, but we don’t wait for it to execute the query.

As proof of that, here are the autotrace statistics from this count(*):

Execution Plan
----------------------------------------------------------
Plan hash value: 2180342005
----------------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time |
----------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 727 (1)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | | |
| 2 | TABLE ACCESS INMEMORY FULL| DEMO | 4000K| 727 (1)| 00:00:01 |
----------------------------------------------------------------------------
Statistics
----------------------------------------------------------
168 recursive calls
0 db block gets
62421 consistent gets
62274 physical reads
284 redo size
542 bytes sent via SQL*Net to client
552 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
18 sorts (memory)
0 sorts (disk)
1 rows processed

and here is one when running it a second time:

Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
3 consistent gets
0 physical reads
0 redo size
542 bytes sent via SQL*Net to client
552 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

The first execution reads from the row store, triggers IMCS population and waits for it; the second run can read from the IMCS.

_inmemory_populate_fg

There is another parameter that makes population synchronous, because the population is then done by the foreground process itself.
This cannot be set at session level, and it needs an instance restart:

SQL> connect / as sysdba
Connected.
SQL> alter system set "_inmemory_populate_fg"=true scope=spfile;
System altered.
 
SQL> startup force
...

Then I do the same as before:

SQL> select count(*) from DEMO;
 
COUNT(*)
----------
4000000
 
SQL> select segment_name,populate_status from v$im_segments;
 
SEGMENT_NAME POPULATE_STATUS
------------ ----------------
DEMO COMPLETED
 
SQL> select program,e.event,e.total_waits,e.time_waited_micro/1e6 seconds,e.wait_class from v$session_event e join v$session using(sid)
2 where (program like '%(W%' or sid=sys_context('userenv','sid')) and e.wait_class='User I/O' order by total_waits;
 
PROGRAM EVENT TOTAL_WAITS SECONDS WAIT_CLASS
------------------- ------------------------------ ----------- ---------- ----------------------------------------------------------------
sqlplus@Exdb3 (TNS db file scattered read 2 .001608 User I/O
oracle@Exdb3 (W001) Disk file operations I/O 3 .000506 User I/O
sqlplus@Exdb3 (TNS Disk file operations I/O 5 .000749 User I/O
oracle@Exdb3 (W001) db file sequential read 70 .027488 User I/O
sqlplus@Exdb3 (TNS db file sequential read 178 .084757 User I/O
sqlplus@Exdb3 (TNS direct path read 1215 3.32137 User I/O

Same amount of direct path reads here, but all done by my foreground session.
Once again, the population is obviously done after the execution.

So what?

The asynchronous population is an implementation choice, but the code is there for a foreground one.

There is something that I don’t understand, however. When waiting for the population, it would be better to populate first and then use the populated IMCS to execute the query. It seems that it’s the opposite here, which means that the table is read twice in this case.
Of course, there are cases where this implementation is better, so that we can fetch the first rows without waiting for the whole population.

Anyway, I use “_inmemory_populate_wait” in my tests or demos when I want the IMCS to be populated. It’s easier to set this parameter than to loop on a select on V$IM_SEGMENTS waiting for the population status to become COMPLETED.
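For completeness, here is a minimal sketch of that polling alternative (it assumes EXECUTE privilege on DBMS_LOCK, and that population has already been triggered by a first access to the table):

declare
  n number;
begin
  loop
    -- a row with status COMPLETED means the segment is fully populated
    select count(*) into n
    from v$im_segments
    where segment_name = 'DEMO' and populate_status = 'COMPLETED';
    exit when n > 0;
    dbms_lock.sleep(1);
  end loop;
end;
/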

 

Cet article In-Memory synchronous population with hidden parameters est apparu en premier sur Blog dbi services.

The scripts of my DOAG 2015 session: Automated GI/RAC staging with Cobbler and VirtualBox


I promised to upload the scripts of my session to our blog. So, here they are:

Please note that I had to upload the files with the “doc” extension because of some limitations of our blog software. Just save the files as they are written in the hyperlink and you should be fine. If you have any questions just contact me here or per email and I’ll be happy to answer.

Thanks again to all who attended my session and for waking up early :)

Hope to see you all again,
Daniel

 

Cet article The scripts of my DOAG 2015 session: Automated GI/RAC staging with Cobbler and VirtualBox est apparu en premier sur Blog dbi services.

SQL Server Extended Events: a deadlock event from the “system_health” session


As you may know, SQL Server ships with a monitoring session based on Extended Events: system_health.
Recently a customer asked me to analyze a random deadlock problem…


In September I gave a presentation in Lausanne about performance monitoring and Extended Events…
Many people do not know that the SQL Server system_health session contains the information about deadlock events.

Where is the “system_health” Extended Events session?

xe_deadlock_00
As you can see, under Management – Extended Events you find the “system_health” session with 2 targets:

  • Package0.event_file: The event file target writes the complete buffer to disk. The MSDN link is here
  • Package0.ring_buffer: The ring buffer target stores event data in memory for a short time. The MSDN link is here

This monitoring session is active right after the installation.

First analysis

A double-click on the event file target opens a window with all events.
In my example, we have 7655 events.
xe_deadlock_01
To see the deadlock events, I create a filter on the event xml_deadlock_report.
Now I only have 541 entries left in my list for the event “xml_deadlock_report”.
You will notice that for each event, 2 tabs are available:

  • Details
  • Deadlock

The “Details” tab shows the information about this event as an XML document.
xe_deadlock_05
The “Deadlock” tab shows a diagram:
xe_deadlock_03
And for me, that is easier to understand…
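If you prefer T-SQL over the GUI, the deadlock reports can also be read straight from the event file target. A minimal sketch, assuming SQL Server 2012 or later (where the file can be referenced by name pattern without a full path):

-- Read the xml_deadlock_report events from the system_health file target
SELECT
    x.event_xml.value('(event/@timestamp)[1]', 'datetime2') AS event_time,
    x.event_xml.query('(event/data/value/deadlock)[1]')     AS deadlock_graph
FROM sys.fn_xe_file_target_read_file('system_health*.xel', NULL, NULL, NULL) AS f
CROSS APPLY (SELECT CAST(f.event_data AS XML)) AS x(event_xml)
WHERE f.object_name = 'xml_deadlock_report';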

Second analysis

xe_deadlock_06
We have two processes, with the ids 68 and 75.
SQL Server stops the process with the id 68… Why?
SQL Server has a system task, REQUEST_FOR_DEADLOCK_SEARCH, which checks every five seconds whether any deadlocks have occurred. When it finds a deadlock, this system task terminates one of the two processes involved.
The question is: which process gets terminated?
Normally, the system task terminates the process with the smaller “Log used” value.
In my case, the “Log used” values are equal (864), so it terminates the second process!
But with this diagram, I cannot see the queries that generated the deadlock.

Third analysis

My last analysis uses the XML file.
xe_deadlock_07
The deadlock relates to a SELECT query.

What can we improve?

My first idea was to add the NOLOCK hint to the SELECT.
In 90% of the deadlock cases involving a SELECT, this hint is the answer. In my case too, it is the solution! ;-)
But sometimes you have to investigate further…
A first option is to use “SET DEADLOCK_PRIORITY LOW”, and a second one is to force the index if it is not the same on both sides.
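For illustration, the NOLOCK hint simply goes after the table name (dbo.Orders and its columns are hypothetical names):

SELECT OrderId, CustomerId
FROM dbo.Orders WITH (NOLOCK)
WHERE OrderDate >= '2015-01-01';

Keep in mind that NOLOCK means read uncommitted: the query can read dirty data, which is the price for not taking shared locks.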

And the best for last: to better understand these performance problems, attend our new “Workshop SQL Server Performance Tuning”!

 

Cet article SQL Server Extended Events: a deadlock event from the “system_health” session est apparu en premier sur Blog dbi services.

Real life audience vs. Twitter audience


I was looking at the Twitter analytics showing my top tweets in November, and it puzzled me, because the tweets with the most ‘impressions’ and ‘engagements’ are related to events that had a low audience in real life. Any comment is welcome, as it can help to understand why some events have a low audience.

Here it is. I’ve put my thoughts in the orange box.

  • Impressions are the number of times a twitter user saw the tweet.
  • Engagement is user interaction: retweet, reply, favorite, follow, and any click on links, images, etc.

CaptureNovemberAudience

The top tweet is about Standard Edition 2, and it’s not a surprise because it’s something that brought a lot of questions, and the answers are summarized here in an official slide from Oracle LMS. There are lots of questions about licensing (editions, virtualization, etc.) and this tweet came from an event that addressed all that. It was in Lausanne, in a very nice hotel, with good speakers, great subjects, and a very good lunch in front of the lake. But there were not a lot of people at the event. It was ok, but I expected to see twice as many, like at the performance event we did in May.

The second tweet had nothing interesting in it. No information, not based on facts but on arbitrary numbers that mean nothing, nothing that helps with anything. I just wanted to see how people react to it, and fortunately some followers commented about that and about what information it can give – or not.

The third post is about the #repattack at DOAG. Usually it takes place in a room. Here I tried to make it easier to come: just a meeting point. When people saw the tweet, one week before, they seemed to be interested. But then other parameters come into play. Several good sessions at the same time. Lots of other activities. Encountering people elsewhere. Not everyone has his laptop with him at the event. Etc.

I have two remarks about that. The first one is that I’m still very sceptical about Big Data analysis of Twitter feeds. What makes a great audience on social networks may not reflect real-life interests. The second one is that it’s very difficult to forecast how many people will come to a small event. Congratulations to the people who organize those meetups and local events. It’s a hard job, but we need to meet each other in real life.

 

Cet article Real life audience vs. Twitter audience est apparu en premier sur Blog dbi services.

Oracle Compliance Standard


In Enterprise Manager 12c, using the compliance standard results might be a good solution for DBAs to detect security inconsistencies (for example an ordinary user who has the SYSDBA privilege…) on their various targets.

Starting with version 12.1.0.3, a new column named ‘Required Data Available’ appeared in the Compliance Standard Result screen. This column indicates whether the data for the compliance evaluation rules for each target is in the repository or not.

If the value is ‘YES’, it means that the data necessary for the compliance rule has been collected. If the value is ‘NO’, it means that nothing has been collected or evaluated. Thus, we can consider that for this target the compliance rule is not OK.

Let’s have a look at my EM12c configuration. I added the OMSREP database in order to apply the High Security Configuration for Oracle Database standard to the target. Apparently everything is fine, except the required data available:

co1

The requested configuration data is not available, and we do not have any violations available for this target’s security compliance.

EM 12c provides many compliance standards for various targets, in our case High Security Configuration for Oracle Database. But by default the configuration is not collected. We can notice that when we associate a target with a compliance standard, we receive the following message:

co2

We have to enable those collections by applying an Oracle Certified Template to the target, in our case ‘Oracle Certified – Enable Database Security Configuration Metrics’. Those configuration metrics are not enabled by default in order not to overload the OMR (Oracle Management Repository):

co3

You choose the Oracle Certified Database Security Configuration Metrics, you select Apply, you select your target database, and then select OK:

co5

Now the target has its collections enabled.

At the beginning of our test we did not have any schedule for Oracle security. Using emctl status agent scheduler combined with a grep on the instance name and another grep on the metric collection gave no result:

oracle@em12c:> emctl status agent scheduler | grep OMSREP | grep oracle_security
oracle@em12c:>

Now the collection for the certified template has been applied, but we have to start the schedule:

oracle@em12c:> emctl startschedule agent -type oracle_database
Oracle Enterprise Manager Cloud Control 12c Release 5
Copyright (c) 1996, 2015 Oracle Corporation. All rights reserved.
Start Schedule TargetType succeeded

And the emctl status agent scheduler command now shows the oracle_security collections and their scheduled times:

oracle@em12c:>emctl status agent scheduler | grep OMSREP | grep oracle_security

2015-11-23 10:33:59.854 : oracle_database:OMSREP:oracle_security_inst2
2015-11-23 10:34:50.068 : oracle_database:OMSREP:oracle_security

We can run a collection from the agent12c with emctl:

oracle@em12c:>emctl control agent runCollection 
OMSREP:oracle_database oracle_security 
Oracle Enterprise Manager Cloud Control 12c Release 5 
Copyright (c) 1996, 2015 Oracle Corporation. 
All rights reserved. 
--------------------------------------------------------------- 
EMD runCollection completed successfully

 

Finally now we can visualize the violations and target evaluations:

co6

 

Conclusion

If you use compliance standards, be careful with the column ‘Required Data Available’: if it is ‘NO’, you cannot be sure that you have correct compliance results. Don’t forget that some configuration metrics are not enabled by default.

 

 

Cet article Oracle Compliance Standard est apparu en premier sur Blog dbi services.


Trying AppDynamics – monitoring an application from the user down to the database


I like to come upon new monitoring software that helps to go quickly from user response time to root cause. And I love applications that can be installed easily and that I can give a try without reading pages of manuals.

At DOAG, on the 3rd floor, I visited the AppDynamics booth and immediately wanted to install the trial version and give it a try.

appdynamics

Download and install

From the AppDynamics website, there is a 15-day trial version that you can download. I downloaded the Linux version (controller_64bit_linux-4.1.7.0.zip) because I wanted to try it on our Oracle Tuning workshop VM. It’s 500MB, and when you unzip it you get two files:

oracle@vmoratun201:/tmp/appdynamics/ [DB1] unzip controller_64bit_linux-4.1.7.0.zip
Archive: controller_64bit_linux-4.1.7.0.zip
inflating: controller_64bit_linux-4.1.7.0.sh
inflating: license.lic
 
oracle@vmoratun201:/tmp/appdynamics/ [DB1] ls -l
total 1084416
-rw-r--r--. 1 oracle oinstall 559790650 Nov 20 21:33 controller_64bit_linux-4.1.7.0.sh
-rw-r--r--. 1 oracle oinstall 550634894 Nov 23 20:35 controller_64bit_linux-4.1.7.0.zip
-rw-r--r--. 1 oracle oinstall 1997 Nov 22 12:43 license.lic

Yes… a 500MB ‘.sh’ to run as a shell script. If you look at it, it embeds some binary code at the end of the script.
appdynamics002
Funny isn’t it? I like this idea…

Then just run it

oracle@vmoratun201:/tmp/appdynamics/ [DB1] sh controller_64bit_linux-4.1.7.0.sh
Unpacking JRE ...
Preparing JRE ...
Starting Installer ...

You have a graphical wizard and after a few next-next-next you can connect to the server with a browser:
appdynamics011

Monitoring database

Then it’s very easy. Look at what you can monitor:

appdynamics012
And I choose ‘database’.

I have to set the database type (Oracle) and the URL of my controller.

The wizard asks you to download the database agent (dbagent-4.1.7.0.zip). I just get the URL from there and download it to my VM with wget:

oracle@vmoratun201:/tmp/appdynamics/ [DB1] wget https://download.appdynamics.com/self_service/agent/dbagent/2
--2015-11-23 21:30:12-- https://download.appdynamics.com/self_service/agent/dbagent/21cea4f9066d3131435bbf01
Resolving download.appdynamics.com... 54.244.244.230
Connecting to download.appdynamics.com|54.244.244.230|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 39230821 (37M) [application/zip] Saving to: “dbagent-4.1.7.0.zip”
 
36% [=======================> ] 14,253,827 191K/s eta 2m 4s

Then follow the instructions:

unzip dbagent-4.1.7.0.zip
java -jar db-agent.jar

The agent knows where the controller is (probably from the download URL) and connects to it once started.
I am still in the wizard, waiting for it to receive that connection:
appdynamics013
and a few minutes later:
appdynamics014

I really like those simple easy things… It tells you to wait 3 minutes and 3 minutes later you get it.

The wizard asks for the connection information for my database (host, port, service, user, password), and it’s ready to monitor:

appdynamics016

Not very exciting yet? I’ve run SwingBench for a few minutes.
Let’s click on ‘Metrics Browser’ and choose some statistics to monitor:

Capture019

  • On the left, I can choose any of the statistics we know from V$SYSSTAT
  • I can put them on the graph with lots of graphical options
  • On the top, ‘baselines’ gives the possibility to get a good performance baseline and raise alerts when it deviates from it

appdynamics000
And that’s only a small part of the product, after only one hour of installing and running it without reading any documentation.
It can do lot more.
I’m running SwingBench here and it’s Java. I can monitor Java, from main calls to JDBC calls, and match it with the database time.

This is the point: you drill down from the user response time through the profiling of the Java methods, and when you are in a JDBC call, you drill down further to the database time. But that’s for future blog posts.

 

Cet article Trying AppDynamics – monitoring an application from the user down to the database est apparu en premier sur Blog dbi services.

Upgrading the Grid Infrastructure from 12.1.0.1 to 12.1.0.2 on the command line


A lot of people use the graphical way to upgrade Oracle software from one version to another. While there is nothing to say against that, the same can be done without any graphical tools. This post outlines the steps to do so.

Currently my cluster is running Grid Infrastructure 12.1.0.1 without any PSU applied. The node names are racp1vm1 and racp1vm2:

[grid@racp1vm2 ~]$ /u01/app/12.1.0/grid_1_0/bin/crsctl query crs softwareversion
Oracle Clusterware version on node [racp1vm2] is [12.1.0.1.0]
[grid@racp1vm2 ~]$ /u01/app/12.1.0/grid_1_0/bin/crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [12.1.0.1.0]
[grid@racp1vm2 ~]$ /u01/app/12.1.0/grid_1_0/bin/crsctl query crs softwareversion racp1vm2
Oracle Clusterware version on node [racp1vm2] is [12.1.0.1.0]

Obviously the first step is to copy the source files to all the nodes and to extract them:

[grid@racp1vm1 ~]$ cd /u01/app/oracle/software/
[grid@racp1vm1 software]$ ls -la
total 2337928
drwxr-xr-x. 2 grid oinstall       4096 Nov  5 15:14 .
drwxr-xr-x. 3 grid oinstall       4096 Nov  5 14:24 ..
-rw-r--r--. 1 grid oinstall 1747043545 Nov  5 15:14 linuxamd64_12102_grid_1of2.zip
-rw-r--r--. 1 grid oinstall  646972897 Nov  5 15:14 linuxamd64_12102_grid_2of2.zip
[grid@racp1vm1 software]$ unzip linuxamd64_12102_grid_1of2.zip
[grid@racp1vm1 software]$ unzip linuxamd64_12102_grid_2of2.zip
[grid@racp1vm1 software]$ rm linuxamd64_12102_grid_1of2.zip linuxamd64_12102_grid_2of2.zip

It is necessary to create the path for the new ORACLE_HOME before the installation, as /u01/app/12.1.0 is locked (it is owned by root and not writable by oinstall, which is the default Grid Infrastructure behavior):

[root@racp1vm1 app] mkdir /u01/app/12.1.0/grid_2_0
[root@racp1vm1 app] chown grid:oinstall /u01/app/12.1.0/grid_2_0
[root@racp1vm1 app] ssh racp1vm2
root@racp1vm2's password: 
Last login: Thu Nov  5 14:52:57 2015 from racp1vm1
[root@racp1vm2 ~] mkdir /u01/app/12.1.0/grid_2_0
[root@racp1vm2 ~] chown grid:oinstall /u01/app/12.1.0/grid_2_0

I’ll use my favorite method for installing the binaries only without doing any configuration:

[grid@racp1vm1 software]$ cd grid
./runInstaller \
      ORACLE_HOSTNAME=racp1vm1.dbi.lab \
      INVENTORY_LOCATION=/u01/app/oraInventory \
      SELECTED_LANGUAGES=en \
      oracle.install.option=UPGRADE \
      ORACLE_BASE=/u01/app/grid \
      ORACLE_HOME=/u01/app/12.1.0/grid_2_0 \
      oracle.install.asm.OSDBA=asmdba \
      oracle.install.asm.OSOPER=asmoper \
      oracle.install.asm.OSASM=asmadmin \
      oracle.install.crs.config.clusterName=racp1vm-cluster \
      oracle.install.crs.config.gpnp.configureGNS=false \
      oracle.install.crs.config.autoConfigureClusterNodeVIP=true \
      oracle.install.crs.config.gpnp.gnsOption=CREATE_NEW_GNS \
      oracle.install.crs.config.clusterNodes=racp1vm1:,racp1vm2: \
      oracle.install.crs.config.storageOption=LOCAL_ASM_STORAGE \
      oracle.install.crs.config.sharedFileSystemStorage.votingDiskRedundancy=NORMAL \
      oracle.install.crs.config.sharedFileSystemStorage.ocrRedundancy=NORMAL \
      oracle.install.asm.diskGroup.name=CRS \
      oracle.install.asm.diskGroup.AUSize=1 \
      oracle.install.crs.config.ignoreDownNodes=false \
      oracle.install.config.managementOption=NONE \
      -ignoreSysPrereqs \
      -ignorePrereq \
      -waitforcompletion \
      -silent

If the above runs fine the output should look similar to this:

As a root user, execute the following script(s):
	1. /u01/app/12.1.0/grid_2_0/rootupgrade.sh

Execute /u01/app/12.1.0/grid_2_0/rootupgrade.sh on the following nodes: 
[racp1vm1, racp1vm2]

Run the script on the local node first. After successful completion, you can start the script in parallel on all other nodes, except a node you designate as the last node. When all the nodes except the last node are done successfully, run the script on the last node.

Successfully Setup Software.
As install user, execute the following script to complete the configuration.
	1. /u01/app/12.1.0/grid_2_0/cfgtoollogs/configToolAllCommands RESPONSE_FILE=

 	Note:
	1. This script must be run on the same host from where installer was run. 
	2. This script needs a small password properties file for configuration assistants that require passwords (refer to install guide documentation).

Time for the upgrade on the first node:

[root@racp1vm1 ~] /u01/app/12.1.0/grid_2_0/rootupgrade.sh
Check /u01/app/12.1.0/grid_2_0/install/root_racp1vm1.dbi.lab_2015-11-23_12-32-03.log for the output of root script

The contents of the logfile should look similar to this:

Check /u01/app/12.1.0/grid_2_0/install/root_racp1vm2.dbi.lab_2015-11-23_12-31-02.log for the output of root script
[root@racp1vm1 ~]# tail -f /u01/app/12.1.0/grid_2_0/install/root_racp1vm1.dbi.lab_2015-11-23_12-32-03.log
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/12.1.0/grid_2_0/crs/install/crsconfig_params
2015/11/23 12:32:03 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.

2015/11/23 12:32:25 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.

2015/11/23 12:32:28 CLSRSC-464: Starting retrieval of the cluster configuration data

2015/11/23 12:32:32 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.

2015/11/23 12:32:33 CLSRSC-363: User ignored prerequisites during installation

2015/11/23 12:32:41 CLSRSC-515: Starting OCR manual backup.

2015/11/23 12:32:42 CLSRSC-516: OCR manual backup successful.

2015/11/23 12:32:45 CLSRSC-468: Setting Oracle Clusterware and ASM to rolling migration mode

2015/11/23 12:32:45 CLSRSC-482: Running command: '/u01/app/12.1.0/grid_1_0/bin/crsctl start rollingupgrade 12.1.0.2.0'

CRS-1131: The cluster was successfully set to rolling upgrade mode.
2015/11/23 12:32:50 CLSRSC-482: Running command: '/u01/app/12.1.0/grid_2_0/bin/asmca -silent -upgradeNodeASM -nonRolling false -oldCRSHome /u01/app/12.1.0/grid_1_0 -oldCRSVersion 12.1.0.1.0 -nodeNumber 1 -firstNode true -startRolling false'

ASM configuration upgraded in local node successfully.

2015/11/23 12:32:53 CLSRSC-469: Successfully set Oracle Clusterware and ASM to rolling migration mode

2015/11/23 12:32:53 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack

2015/11/23 12:34:14 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.

OLR initialization - successful
2015/11/23 12:36:53 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.conf'

CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2015/11/23 12:41:00 CLSRSC-472: Attempting to export the OCR

2015/11/23 12:41:00 CLSRSC-482: Running command: 'ocrconfig -upgrade grid oinstall'

2015/11/23 12:41:03 CLSRSC-473: Successfully exported the OCR

2015/11/23 12:41:08 CLSRSC-486: 
 At this stage of upgrade, the OCR has changed.
 Any attempt to downgrade the cluster after this point will require a complete cluster outage to restore the OCR.

2015/11/23 12:41:08 CLSRSC-541: 
 To downgrade the cluster: 
 1. All nodes that have been upgraded must be downgraded.

2015/11/23 12:41:08 CLSRSC-542: 
 2. Before downgrading the last node, the Grid Infrastructure stack on all other cluster nodes must be down.

2015/11/23 12:41:08 CLSRSC-543: 
 3. The downgrade command must be run on the node racp1vm2 with the '-lastnode' option to restore global configuration data.

2015/11/23 12:41:39 CLSRSC-343: Successfully started Oracle Clusterware stack

clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 12c Release 1.
Successfully taken the backup of node specific configuration in OCR. 
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2015/11/23 12:41:44 CLSRSC-474: Initiating upgrade of resource types

2015/11/23 12:41:50 CLSRSC-482: Running command: 'upgrade model  -s 12.1.0.1.0 -d 12.1.0.2.0 -p first'

2015/11/23 12:41:50 CLSRSC-475: Upgrade of resource types successfully initiated.

2015/11/23 12:41:52 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

Now we can do the same thing on the second node:

[root@racp1vm2 ~] /u01/app/12.1.0/grid_2_0/rootupgrade.sh
Check /u01/app/12.1.0/grid_2_0/install/root_racp1vm2.dbi.lab_2015-11-23_13-43-30.log for the output of root script

Again, have a look at the logfile to confirm that everything went fine:

Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/12.1.0/grid_2_0/crs/install/crsconfig_params
2015/11/23 13:43:30 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.

2015/11/23 13:43:30 CLSRSC-4012: Shutting down Oracle Trace File Analyzer (TFA) Collector.

2015/11/23 13:43:39 CLSRSC-4013: Successfully shut down Oracle Trace File Analyzer (TFA) Collector.

2015/11/23 13:43:50 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.

2015/11/23 13:43:51 CLSRSC-464: Starting retrieval of the cluster configuration data

2015/11/23 13:43:55 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.

2015/11/23 13:43:55 CLSRSC-363: User ignored prerequisites during installation

ASM configuration upgraded in local node successfully.

2015/11/23 13:44:03 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack

2015/11/23 13:45:26 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.

OLR initialization - successful
2015/11/23 13:46:36 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.conf'

CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2015/11/23 13:50:05 CLSRSC-343: Successfully started Oracle Clusterware stack

clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 12c Release 1.
Successfully taken the backup of node specific configuration in OCR. 
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2015/11/23 13:50:10 CLSRSC-478: Setting Oracle Clusterware active version on the last node to be upgraded

2015/11/23 13:50:10 CLSRSC-482: Running command: '/u01/app/12.1.0/grid_2_0/bin/crsctl set crs activeversion'

Started to upgrade the Oracle Clusterware. This operation may take a few minutes.
Started to upgrade the CSS.
The CSS was successfully upgraded.
Started to upgrade Oracle ASM.
Started to upgrade the CRS.
The CRS was successfully upgraded.
Successfully upgraded the Oracle Clusterware.
Oracle Clusterware operating version was successfully set to 12.1.0.2.0
2015/11/23 13:51:19 CLSRSC-479: Successfully set Oracle Clusterware active version

2015/11/23 13:51:25 CLSRSC-476: Finishing upgrade of resource types

2015/11/23 13:51:31 CLSRSC-482: Running command: 'upgrade model  -s 12.1.0.1.0 -d 12.1.0.2.0 -p last'

2015/11/23 13:51:32 CLSRSC-477: Successfully completed upgrade of resource types

2015/11/23 13:51:55 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

That’s it. We can confirm the version by issuing:

[root@racp1vm2 ~] /u01/app/12.1.0/grid_2_0/bin/crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [12.1.0.2.0]
[root@racp1vm2 ~] /u01/app/12.1.0/grid_2_0/bin/crsctl query crs softwareversion
Oracle Clusterware version on node [racp1vm2] is [12.1.0.2.0]

Hope this helps.

 

Cet article Upgrading the Grid Infrastructure from 12.1.0.1 to 12.1.0.2 on the command line est apparu en premier sur Blog dbi services.

Querying the Oracle Management Repository


Enterprise Manager Cloud Control 12c is an amazing tool allowing the DBA to display a lot of information from the console. But there are also a lot of repository views that Oracle DBAs can query with their own scripts to build various reports.

The Oracle Management Service collects a huge amount of raw data from the agents installed on each managed host you administer with EM 12c.

Those raw data are inserted in various tables, like EM_METRIC_VALUES for example. Enterprise Manager aggregates those management data by hour and by day. The raw data are kept 7 days; the one-hour aggregated data are kept 31 days, while the one-day aggregated data are kept one year.

A good way to get information from the Oracle Management Repository is to build queries against the GC%METRIC% views. The GC_METRIC% views are built from the EM_METRIC_VALUES, EM_METRIC_ITEMS and EM_METRIC_KEYS tables, and the GC$METRIC% views are built from the GC_METRIC% views. There are a lot of GC%METRIC% views:

SQL> select distinct view_name from user_views where view_name like 'GC%METRIC%' order by 1;

GC$METRIC_CATEGORIES
GC$METRIC_COLUMNS
GC$METRIC_COLUMNS_TARGET
GC$METRIC_ERROR_CURRENT
GC$METRIC_ERROR_HISTORY
GC$METRIC_GROUPS
GC$METRIC_GROUPS_TARGET
GC$METRIC_KEYS
GC$METRIC_LATEST
GC$METRIC_STR_VALUES
GC$METRIC_STR_VALUES_LATEST
GC$METRIC_VALUES
GC$METRIC_VALUES_DAILY
GC$METRIC_VALUES_HOURLY
GC$METRIC_VALUES_LATEST
GC$TARGET_METRIC_COLLECTIONS
GC_METRIC_COLLECTIONS
GC_METRIC_COLUMNS
GC_METRIC_COLUMNS_TARGET
GC_METRIC_COMPOSITE_KEYS
GC_METRIC_GROUPS
GC_METRIC_GROUPS_TARGET
GC_METRIC_KEYS
GC_METRIC_LATEST
GC_METRIC_LATEST_WO_TGT
GC_METRIC_STATS_METRIC
GC_METRIC_STATS_METRIC_DAILY
GC_METRIC_STATS_TARGET
GC_METRIC_STATS_TARGET_DAILY
GC_METRIC_STR_VALUES
GC_METRIC_STR_VALUES_LATEST
GC_METRIC_STR_VALUES_WO_TGT
GC_METRIC_THRESHOLDS
GC_METRIC_VALUES
GC_METRIC_VALUES_DAILY
GC_METRIC_VALUES_HOURLY
GC_METRIC_VALUES_LATEST
GC_METRIC_VAL_HOURLY_WO_TGT
GC_RAT_TRIAL_METRIC_VALUES
GC_RAT_TRIAL_TARGET_METRICS

For example, if we are interested in the load of a host, we can use the console:

qomr1

 

You can also find this information by querying the OMR in the view gc$metric_columns for an entity of type host and a metric_group_label of type Load:

SQL> select entity_type,metric_column_label,metric_group_label
from gc$metric_columns
where entity_type = 'host'
and metric_group_label = 'Load';

 

host       Active Logical Memory, Kilobytes              Load
host       Active Memory, Kilobytes                      Load
host       CPU Interrupt Time (%)                        Load
host       CPU Queue Length                              Load
host       CPU Utilization (%)                           Load
host       CPU in I/O Wait (%)                           Load
host       CPU in System Mode (%)                        Load
host       CPU in User Mode (%)                          Load
host       Free Memory (%)                               Load
host       Free Memory, Kilobytes                        Load
host       Logical Free Memory (%)                       Load
host       Memory Page Scan Rate (per second)            Load
host       Memory Utilization (%)                        Load
host       Page Transfers Rate                           Load
host       Run Queue Length (1 minute average,per core)  Load
host       Run Queue Length (15 minute average,per core) Load
host       Run Queue Length (5 minute average,per core)  Load
host       Swap Free (KB)                                Load
host       Swap Utilization (%)                          Load
host       Swap Utilization, Kilobytes                   Load
host       Total Processes                               Load
host       Total Users                                   Load

We can list the metric groups collected for an entity type, for example a host or an Oracle database:

SQL> select entity_type,metric_group_label
from gc$metric_columns
where entity_type = 'host'
group by entity_type,metric_group_label
order by 1,2;

 

host                                     All Processes
host                                     Battery details
host                                     Buffer Activity
host                                     CCCAppFileParser
host                                     CCCData_base
host                                     CCCData_group
host                                     CCCData_mapping
host                                     CCCData_obs
host                                     CCCData_queue
host                                     CCCData_warning
host                                     CCCWatchdog
host                                     CPU Usage
host                                     CPU Usage Internal
host                                     CPUs Details
host                                     Compute Node Temperature
host                                     DiscoverNow
….

Or an Oracle database:

 

SQL> select entity_type,metric_group_label
from gc$metric_columns
where entity_type = 'oracle_database'
group by entity_type,metric_group_label
order by 1,2;
 
oracle_database                         ADDM Report - Global
oracle_database                         ALL Privileges
oracle_database                         ANY Privileges
oracle_database                         Access To Important Tables And Views
oracle_database                         Active Sessions by CPU and Wait Classes
oracle_database                         Alert Log
oracle_database                         Alert Log Content
oracle_database                         Alert Log Error Status
oracle_database                         Archive Area
oracle_database                         Archive Area - RAC Instance
oracle_database                         Audit Settings
oracle_database                         Audit Syslog Level
oracle_database                         AutoTask Client
oracle_database                         CPU Usage
oracle_database                         Collect SQL Response Time
oracle_database                         Connect Role
oracle_database                         Control files
oracle_database                         DB Alert Log
oracle_database                         DB Audit Files Permissions
oracle_database                         DB Control Files Permission
oracle_database                         DB Data Files Permissions
oracle_database                         DB InitParameter File Permissions
oracle_database                         DB Link Usage
oracle_database                         DB Password Setting
oracle_database                         DB Profile Setting
oracle_database                         DB Scheduler Jobs
oracle_database                         DBA Group Assignment
oracle_database                         DGPrimaryDBName
oracle_database                         Data Base parameter collection
oracle_database                         Data Failure
oracle_database                         Data Guard - 10.1 Database
oracle_database                         Data Guard - 9.2 Database
oracle_database                         Data Guard Failover
oracle_database                         Data Guard Fast-Start Failover
oracle_database                         Data Guard Fast-Start Failover Observer
….

 

So the main interest is to collect data from those views. For example, we can look more precisely at the CPU consumption of a host:

SQL> select entity_name ,collection_time,min_value as min,avg_value as avg,max_value as max
from gc$metric_values_daily
where metric_group_name = 'Load'
and metric_column_name = 'cpuUtil'
and (entity_name like '%dbserver1%' )
order by 2,1 asc;

Entity_name                collection_time   min     avg          max
dbserver1.localdomain      01-JUN-15         3.782   17.4089931   72.284
dbserver1.localdomain      02-JUN-15         2.788   11.3085764   38.61
dbserver1.localdomain      03-JUN-15         4.16    11.7149271   28.975
dbserver1.localdomain      04-JUN-15         5.526   11.6824271   29.451
dbserver1.localdomain      05-JUN-15         5.337   10.9915486   27.319
dbserver1.localdomain      06-JUN-15         5.664   16.7499653   45.798
dbserver1.localdomain      07-JUN-15         5.021   15.4098958   84.75
dbserver1.localdomain      08-JUN-15         4.664   9.7744375    28.159
dbserver1.localdomain      09-JUN-15         4.79    9.40268056   21.113
dbserver1.localdomain      10-JUN-15         5.17    9.59378819   31.238
dbserver1.localdomain      11-JUN-15         5.162   8.81851042   16.562
dbserver1.localdomain      12-JUN-15         5.666   11.0074132   33.058
dbserver1.localdomain      13-JUN-15         6.412   15.4432917   42.709
dbserver1.localdomain      14-JUN-15         5.897   9.64745486   40.575
dbserver1.localdomain      15-JUN-15         5.675   14.5982083   34.483

 

For a more precise measure, you can select from gc$metric_values_hourly:

SQL> select entity_name,collection_time,min_value as min,avg_value as avg,max_value as max
from gc$metric_values_hourly
where metric_group_name = 'Load'
and metric_column_name = 'cpuUtil'
and (entity_name like 'dbserver1%' )
order by 1,2
/

 

Entity_name              collection_time      min     avg      max
….
dbserver1.localdomain    15-JUN-15 14:00:00   15.23   18.65    22.025
dbserver1.localdomain    15-JUN-15 15:00:00   7.844   9.176    11.406
dbserver1.localdomain    15-JUN-15 16:00:00   8.635   10.76    13.625
dbserver1.localdomain    15-JUN-15 17:00:00   7.928   9.573    16.607
dbserver1.localdomain    15-JUN-15 18:00:00   8.439   13.399   21.518
dbserver1.localdomain    15-JUN-15 19:00:00   7.722   10.56    14.767
dbserver1.localdomain    15-JUN-15 20:00:00   7.993   9.099    12.934
dbserver1.localdomain    15-JUN-15 21:00:00   7.449   9.209    12.2
dbserver1.localdomain    15-JUN-15 22:00:00   8.001   10.47    16.328
dbserver1.localdomain    15-JUN-15 23:00:00   7.366   8.771    14.51
….

We can also display those values in the EM12c console:

qomr2

 

Thus, there are great possibilities to query the gc$metric_values% views in order to build reports. For example, let’s look at the different metric_group_label values for an oracle_database entity:

SQL> select distinct metric_group_label from gc$metric_values_daily
2   where entity_type = 'oracle_database' order by 1;

 

Alert Log Error Status
Archive Area
CPU Usage
Database Job Status
Database Limits
Database Size
Deferred Transactions
Dump Area
Efficiency
Flash Recovery
Global Cache Statistics
Interconnect Traffic
Invalid Objects by Schema
Memory Usage
Messages per buffered queue
Messages per buffered queue per subscriber
Messages per persistent queue
Messages per persistent queue per subscriber
OCM Instrumentation
Optimal SGA
PGA Allocated
Recovery Area
Response
SCN Growth Statistics
SCN Instance Statistics
SCN Max Statistics
SGA Pool Wastage
Segment Advisor Recommendations
Space usage by buffered queues
System Response Time Per Call
Tablespace Allocation
Tablespaces Full
Throughput
User Audit
User Block
User Locks
Wait Bottlenecks
Waits by Wait Class
backup_archive_log_age
datafile_backup
flashback available
incident_meter
max number of datafiles

 

Then we can ask for more precise measures:

SQL> select distinct key_part_1 from gc$metric_values_daily 
where metric_group_name = 'wait_sess_cls' 
order by 1;

Administrative
Application
Cluster
Commit
Concurrency
Configuration
Idle
Network
Other
Queueing
Scheduler
System I/O
User I/O

 

And we can write a SQL query to analyze the User I/O waits:

SQL> select collection_time,
2 min_value as min_wait ,
3 avg_value as avg_wait ,
4 max_value as max_wait
5 from gc$metric_values_daily
6 where metric_group_name = 'wait_sess_cls'
7 and metric_column_name = 'dbtime_waitclass_pct'
8 and (entity_name like 'dbserver1%' )
9 and key_part_1 = 'User I/O'
10* order by 1,2 asc;

 

Collection_time   min_wait     avg_wait     max_wait
01-JUN-15         .190094016   45.7777238   86.9431254
02-JUN-15         .026723742   49.4623178   98.6886502
03-JUN-15         .09020789    32.6063701   98.7430495
04-JUN-15         .066328202   26.342438    97.1319971
05-JUN-15         .032959651   22.7424382   96.1533236
06-JUN-15         .574570414   22.2140084   93.9315499
07-JUN-15         .022970399   20.6773665   88.9020253
08-JUN-15         .058117038   17.5328715   98.5629756
09-JUN-15         .080441214   20.5507072   98.4615166
10-JUN-15         .057529812   19.1080933   97.4292408
11-JUN-15         .038726475   21.6932703   96.48269
12-JUN-15         .106343267   25.0541945   89.5404451
13-JUN-15         .257816863   13.5371153   80.1841142
14-JUN-15         .004258955   16.5949771   84.7824337
15-JUN-15         .020058791   19.1270265   89.8952919

 

Naturally, we find those I/O values again in the EM12c console:

 

qomr3

Conclusion: querying the OMR views allows Oracle DBAs to build interesting reports on host and database activity.

 

Cet article Querying the Oracle Management Repository est apparu en premier sur Blog dbi services.

SQL Server 2016 CTP3.0: Stretch Database enhancements


Some months ago, my colleague Nathan explained the basics of the new Stretch Database functionality in two blog posts, here and here.
With the new SQL Server 2016 CTP 3.0, Stretch Database now includes new features and enhancements that I will show you in this blog.

 

In previous SQL Server 2016 CTP 2.x versions, before being allowed to enable Stretch Database, you had to enable the “remote data archive” option for the instance. For my new SQL Server CTP 3.0 instance, this option is currently disabled:

Stretch1
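For reference, this instance-level option can also be enabled manually with sp_configure; this is presumably what the wizard shown below does behind the scenes (the option name ‘remote data archive’ is the documented one):

EXEC sp_configure 'remote data archive', 1;
RECONFIGURE;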

It is now possible to enable Stretch Database for an individual table. Let’s see if it works with this configuration:

Stretch2

The wizard to Enable Table for Stretch appears:

Stretch3

Follow this wizard and enter:

  • SQL Server database credentials
  • Select tables from your database that you want to stretch to Azure (for me table command_queue)

During the validation of the SQL Server settings, the first test checks whether the instance is configured for Stretch Database. It fires an information message telling you that the configuration has not been done yet and will be handled by the wizard:

Stretch4

The wizard will now run the deployment process:

Stretch5

When the deployment is finalized, a special icon appears in front of the database name in Management Studio, so that databases using the Stretch Database functionality can be identified at a glance:

Stretch6

Another great add-on is the monitor screen, accessible from the instance properties:

Stretch7

The monitor screen gives an overview of:

  • On-premise server information: server name, database name, locally allocated space
  • Azure server information: server name, database name, service tier, server region, database storage cost (not yet available)
  • State of the connection between on-premise and Azure servers
  • Stretch configured tables: table name, migration state, eligible rows, local rows, Rows in Azure and Details (not yet available)

Stretch8

Here, not all eligible rows have been transferred to Azure yet.
A way to check the data transfer is to use Extended Events. In fact, an Extended Events session named StretchDatabase_Health is automatically created and started on every instance where the Stretch Database functionality is enabled:

Stretch9

If you right-click on “StretchDatabase_Health session – Watch Live Data”, you will see all events related to the stretch functionality. Here is the list of all events (a query to list them programmatically follows the list):

  • stretch_codegen_errorlog: Reports the output from the code generator
  • stretch_codegen_start: Reports the start of stretch code generation
  • stretch_create_migration_proc_start: Reports the start of migration procedure creation
  • stretch_create_remote_table_start: Reports the start of remote table creation
  • stretch_create_update_trigger_start: Reports the start of create update trigger for remote data archive
    table
  • stretch_database_disable_completed: Reports the completion of a ALTER DATABASE SET
    REMOTE_DATA_ARCHIVE OFF command
  • stretch_database_enable_completed: Reports the completion of a ALTER DATABASE SET
    REMOTE_DATA_ARCHIVE ON command
  • stretch_database_events_submitted: Reports the completion telemetry transfer
  • stretch_migration_debug_trace: Debug trace of stretch migration actions.
  • stretch_migration_queue_migration: Queue a packet for starting migration of the database and object.
  • stretch_migration_sp_stretch_get_batch_id: Call sp_stretch_get_batch_id
  • stretch_migration_start_migration: Start migration of the database and object.
  • stretch_sync_metadata_start: Reports the start of metadata checks during the migration task.
  • stretch_table_codegen_completed: Reports the completion of code generation for a stretched table
  • stretch_table_provisioning_step_duration: Reports the duration of a stretched table provisioning
    operation
  • stretch_table_remote_creation_completed: Reports the completion of remote execution for the
    generated code for a stretched table
  • stretch_table_row_migration_event: Reports the completion of the migration of a batch of rows
  • stretch_table_row_migration_results_event: Reports an error or completion of a successful
    migration of a number of batches of rows
  • stretch_table_unprovision_completed: Reports the completion removal of local resources for
    a table that was unstretched
  • stretch_table_validation_error: Reports the completion of validation for a table when the user
    enables stretch
  • stretch_unprovision_table_start: Reports the start of stretch table un-provisioning
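As a side note, the events attached to the running session can also be listed with the Extended Events DMVs. A quick sketch:

SELECT s.name AS session_name, e.event_name
FROM sys.dm_xe_sessions s
JOIN sys.dm_xe_session_events e ON e.event_session_address = s.address
WHERE s.name = 'StretchDatabase_Health';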

An example of the stretch_table_row_migration_event shows:

  • Number of rows migrated per batch (5000)
  • Duration in millisecond
  • Error number and state
  • Table and database id
  • If the migration succeed

Stretch10
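If you prefer a queryable view of the migration progress, there is presumably also the dedicated DMV sys.dm_db_rda_migration_status (a sketch; run it in the stretch-enabled database):

-- Recent migration batches per stretch-enabled table
SELECT table_id, migrated_rows, start_time_utc, end_time_utc, error_number
FROM sys.dm_db_rda_migration_status
ORDER BY start_time_utc DESC;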

 

Stretch Database is a new functionality and, as with every new functionality, it has some limitations that will decrease in the future.
Nevertheless, the ability to stretch individual tables from a database to Azure could help organizations to archive their data easily.

 

 

Cet article SQL Server 2016 CTP3.0: Stretch Database enhancements est apparu en premier sur Blog dbi services.

How to rename a listener target in EM Cloud Control 12c?


Sometimes it is necessary to rename targets in Enterprise Manager Cloud Control 12c. In my case I have listener targets discovered by EM 12c with the hostname at the end of the target name: TESTDBA_LSN_dbserver1.

As I work in a Veritas clustered environment, when the target switches to the other node (dbserver2) the listener target is relocated, but its name is always TESTDBA_LSN_dbserver1.

How to rename a listener target in EM Cloud Control 12c?

We use emctl to list the agent’s targets:

oracle@dbserver1:/opt/oracle/agent12c/core/12.1.0.4.0/bin/ [agent12c] ./emctl config agent listtargets | grep TESTDBA
[TESTDBA_SITE1.domain.com, oracle_database]
[TESTDBA_LSN_dbserver1, oracle_listener]

From the Oracle Management Repository, we can also run the query:

SQL> SELECT ENTITY_TYPE,ENTITY_NAME,DISPLAY_NAME
FROM SYSMAN.EM_MANAGEABLE_ENTITIES
WHERE MANAGE_STATUS = 2
and DISPLAY_NAME like '%TESTDBA%'
ORDER BY 1;

ENTITY_TYPE       ENTITY_NAME                    DISPLAY_NAME
oracle_database   TESTDBA_SITE1.domain.com       TESTDBA_SITE1.domain.com
oracle_dbsys      TESTDBA_SITE1.domain.com_sys   TESTDBA_SITE1.domain.com_sys
oracle_listener   TESTDBA_LSN_dbserver1          TESTDBA_LSN_dbserver1

Then, from the OMS server, you can change the display_name:

oracle@omsserver:/home/oracle/ [oms12c] emcli modify_target -name="TESTDBA_LSN_dbserver1" -type="oracle_listener" -display_name="TESTDBA_LSN"
Target "TESTDBA_LSN_dbserver1:oracle_listener" modified successfully

If we run the previous query again, the display name is modified in the repository, and the Cloud Control 12c console displays the name correctly:

SQL> SELECT ENTITY_TYPE,ENTITY_NAME,DISPLAY_NAME
FROM SYSMAN.EM_MANAGEABLE_ENTITIES
WHERE MANAGE_STATUS = 2
and DISPLAY_NAME like '%TESTDBA%'
ORDER BY 1;

ENTITY_TYPE       ENTITY_NAME                    DISPLAY_NAME
oracle_database   TESTDBA_SITE1.domain.com       TESTDBA_SITE1.domain.com
oracle_dbsys      TESTDBA_SITE1.domain.com_sys   TESTDBA_SITE1.domain.com_sys
oracle_listener   TESTDBA_LSN_dbserver1          TESTDBA_LSN

 

ren1

 

Now we also need to change the target name itself. We use emcli to rename the target from the OMS server, but, surprise: the operation is not allowed and we get the following error message:

oracle@omsserver:/home/oracle/ [oms12c] emcli rename_target -target_type="oracle_listener" -target_name="TESTDBA_LSN_dbserver1" -new_target_name="TESTDBA_LSN"
Rename not supported for the given Target Type.

If we run the procedure directly in the Oracle Management Repository, we get more details in the error message:

SQL> exec em_target.rename_target('oracle_listener','TESTDBA_LSN_dbserver1','TESTDBA_LSN', 'TESTBA_LSN');
BEGIN
em_target.rename_target('oracle_listener','TESTDBA_LSN_dbserver1','TESTDBA_LSN', 'TESTBA_LSN');
END;
*
ERROR at line 1:
ORA-20233: -20233 Not allowed
ORA-06512: at "SYSMAN.EM_TARGET", line 8033
ORA-06512: at line 1

Let’s look more closely at the SYSMAN EM_TARGET package body source code.
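One way to extract that source is to query DBA_SOURCE (a quick sketch):

select line, text
from dba_source
where owner = 'SYSMAN'
and name = 'EM_TARGET'
and type = 'PACKAGE BODY'
order by line;

The relevant part is the following check: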

-- we will implement rename of agent side targets when it is fully
-- supported by agent
IF ( l_trec.manage_status = MANAGE_STATUS_MANAGED AND
     l_trec.emd_url IS NOT NULL)
THEN
  raise_application_error(MGMT_GLOBAL.INVALID_PARAMS_ERR,
    MGMT_GLOBAL.INVALID_PARAMS_ERR||' Not allowed') ;
END IF ;

I decided to test the following modification: I commented out this code and recompiled the EM_TARGET package body:

-- we will implement rename of agent side targets when it is fully
-- supported by agent
-- IF ( l_trec.manage_status = MANAGE_STATUS_MANAGED AND
--     l_trec.emd_url IS NOT NULL)
-- THEN
-- raise_application_error(MGMT_GLOBAL.INVALID_PARAMS_ERR,
--   MGMT_GLOBAL.INVALID_PARAMS_ERR||' Not allowed') ;
-- END IF ;

And now the target entity rename is working fine:

oracle@omsserver:/home/oracle/ [oms12c] emcli rename_target -target_type="oracle_listener" -target_name="TESTDBA_LSN_dbserver1" -new_target_name="TESTDBA_LSN"
Target TESTDBA_LSN_dbserver1 successfully renamed to TESTDBA_LSN.

Finally, my listener target now has a correct name:

SQL> SELECT ENTITY_TYPE,ENTITY_NAME,DISPLAY_NAME
FROM SYSMAN.EM_MANAGEABLE_ENTITIES
WHERE MANAGE_STATUS = 2
and DISPLAY_NAME like '%TESTDBA%'
ORDER BY 1;

ENTITY_TYPE       ENTITY_NAME                    DISPLAY_NAME
oracle_database   TESTDBA_SITE1.domain.com       TESTDBA_SITE1.domain.com
oracle_dbsys      TESTDBA_SITE1.domain.com_sys   TESTDBA_SITE1.domain.com_sys
oracle_listener   TESTDBA_LSN                    TESTDBA_LSN

 

Conclusion:

Don’t forget to back up your Oracle Management Repository database before making any modifications to the EM_TARGET package and, once you have done your target name modification, uncomment the code you modified to return to the original situation.

 

 

 

Cet article How to rename a listener target in EM Cloud Control 12c? est apparu en premier sur Blog dbi services.
