
With Oracle Goldengate take care of additional column creation on the replicated database


This week I worked on a GoldenGate 12.1.2.1.10 POC setup and faced an issue which, for me, is a serious drawback of the Oracle GoldenGate product. If you want to create additional columns online on the target database in a GoldenGate configuration, you have to be aware of the situation described below, which can happen in your setup. The demo below was created on an Oracle GoldenGate downstream server.

For the test, I created the schema scott/tiger on both the source and target databases, so no initial load is needed.

1. Create SCOTT on the source database DB1 and the target database DB2 using the utlsampl.sql script.

Source>@utlsampl.sql
Target>@utlsampl.sql

First, we have to configure the replication for the SCOTT user.

2. Configure the SCOTT extract process on the downstream server

GGSCI (srv01) 1> dblogin useridalias ggsource
Successfully logged into database.

GGSCI (srv01 as goldengate@DB1) 2> miningdblogin useridalias ggcap
Successfully logged into mining database.

GGSCI (srv01 as goldengate@DB1) 3> register extract scott database
Extract SCOTT successfully registered with database at SCN 277324431694.

GGSCI (srv01 as goldengate@DB1) 5> add extract scott integrated tranlog, begin now
EXTRACT added.

GGSCI (srv01 as goldengate@DB1) 6> add trandata scott.emp

GGSCI (srv01 as goldengate@DB1) 6> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
EXTRACT     STOPPED     SCOTT       00:00:00      00:00:07

GGSCI (srv01 as goldengate@DB1) 7> add exttrail /u01/directories/ggtrail/POCGGP15/es, extract SCOTT
EXTTRAIL added.

GGSCI (srv01 as goldengate@DB1) 12> view params scott

EXTRACT SCOTT
USERIDALIAS ggsource
DBOPTIONS ALLOWUNUSEDCOLUMN
DDL INCLUDE ALL
TRANLOGOPTIONS MININGUSERALIAS ggcap
TRANLOGOPTIONS INTEGRATEDPARAMS (downstream_real_time_mine Y)
EXTTRAIL /u01/directories/ggtrail/POCGGP15/es
TABLE SCOTT.EMP;

GGSCI (srv01 as goldengate@DB1) 4> start extract scott

Sending START request to MANAGER ...
EXTRACT SCOTT starting

3. Configure the replicat process on the downstream server

GGSCI (srv01) 1> add replicat repscott, exttrail /u01/directories/ggtrail/POCGGP15/es
REPLICAT added.

GGSCI (srv01 as goldengate@DB2) 10> view params repscott

REPLICAT REPSCOTT
useridalias ggtarget
DISCARDFILE /u01/app/goldengate/product/12.1.2.1/discard/REPSCOTT_DISCARD.txt,APPEND,megabytes 10
ASSUMETARGETDEFS
DBOPTIONS NOSUPPRESSTRIGGERS
MAP SCOTT.EMP,TARGET SCOTT.EMP;

GGSCI (srv01) 3> dblogin useridalias ggtarget
Successfully logged into database.
Get the current_scn from the source database:

sys@GMAS2> select current_scn from v$database;

   CURRENT_SCN
--------------
  277324550446

GGSCI (srv01 as goldengate@DB2) 5> start replicat repscott, afterscn 277324550446

Sending START request to MANAGER ...
REPLICAT REPSCOTT starting

GGSCI (srv01 as goldengate@DB2) 9> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
EXTRACT     RUNNING     SCOTT           00:00:03      00:00:05
REPLICAT    RUNNING     REPSCOTT        00:00:00      00:02:08

Now we have a running GoldenGate replication for the table scott.emp, including DDL.

======START DEMO =======

On the target database DB2 we create an additional column

scott@DB2> alter table emp add TARGET_COL varchar(10) default null;

Table altered.
After that, we create an additional column on the source database DB1, which will be replicated to the target database.
scott@DB1> alter table emp add SOURCE_COL varchar(10) default null;

Table altered.

Now on the target database DB2 we have the two additional columns, as shown below:

scott@DB2> select ename,target_col,source_col from emp;

ENAME      TARGET_COL SOURCE_COL
---------- ---------- ----------
SMITH
ALLEN
WARD
...

And on the source database DB1 there is only one additional column:

scott@DB1> select ename, source_col from emp;

ENAME      SOURCE_COL
---------- ----------
SMITH
ALLEN
WARD
...

Now, on the source database DB1, it's time to update the entries for the additional column:

scott@DB1> update emp set source_col='change';

14 rows updated.

scott@DB1> commit;

Commit complete.

scott@DB1> select ename, source_col from emp;

ENAME      SOURCE_COL
---------- ----------
SMITH      change
ALLEN      change
WARD       change

Until now, everything works as expected.

But now, on the target database DB2, let's check the updated entries in table scott.emp:

scott@DB2> select ename,target_col,source_col from emp;

ENAME      TARGET_COL SOURCE_COL
---------- ---------- ----------
SMITH      change
ALLEN      change
WARD       change

!!!!! TARGET_COL is updated, and not the SOURCE_COL column !!!!

GoldenGate works with the column order and not explicitly with the column names. Thus, if you create additional columns on the target database and the column type is compatible with the value coming from the source database, GoldenGate will silently apply the insert/update to the column with the same ordinal position as on the source database :-(((, without generating ANY warning or error.

With a sourcedef file you will have the same issue, because the sourcedef file is not aware of the additional column created on the target database.
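One way to spot the risk up front is to compare the ordinal positions of the columns on both sides, since the mapping here ends up positional. A minimal check, assuming you can query USER_TAB_COLUMNS as SCOTT on both databases:

scott@DB1> select column_id, column_name from user_tab_columns where table_name='EMP' order by column_id;

scott@DB2> select column_id, column_name from user_tab_columns where table_name='EMP' order by column_id;

If the same COLUMN_ID resolves to different column names on source and target, an update replicated with this configuration can land in the wrong column.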

Conclusion: a solution exists: you'll have to install Oracle GoldenGate release 12.2, which has been available since this week. The next blog post, with the solution, will come soon :-)
 



Migrating the Oracle 12cR1 sample schemas to PostgreSQL Plus Advanced Server 9.4


This post takes a look at how to migrate the Oracle 12cR1 sample schemas to PPAS 9.4 (PostgreSQL Plus Advanced Server 9.4). I'll not dig into how to install PPAS, as this was described in detail some time ago. Just follow this post if you need a setup guide.

If you wonder why I am doing this there are two reasons:

  • to see if it works, to have fun and to learn
  • PostgreSQL and PPAS are real alternatives to Oracle, so migrating applications from one to the other can make a lot of sense

To download the Oracle database examples point your browser to the OTN download page and download the "Oracle Database 12c Release 1 Examples (12.1.0.2.0) for Linux x86-64" file.
Installing the sample is quite easy: unzip, start OUI, next, next, next:

oracle@oel12102:/u01/app/oracle/software/ [PROD] unzip linuxamd64_12102_examples.zip
oracle@oel12102:/u01/app/oracle/software/ [PROD] cd examples/
oracle@oel12102:/u01/app/oracle/software/examples/ [PROD] ./runInstaller

[OUI installer screenshots: orasample1 to orasample6]

Once the scripts are available we can install the schemas. The only important point is that the schemas need to be installed in the right order. So I’ll begin with the HR schema:

@?/demo/schema/human_resources/hr_main.sql
specify password for HR as parameter 1:
Enter value for 1: manager

specify default tablespeace for HR as parameter 2:
Enter value for 2: users

specify temporary tablespace for HR as parameter 3:
Enter value for 3: temp

specify password for SYS as parameter 4:
Enter value for 4: manager

specify log path as parameter 5:
Enter value for 5: /var/tmp/

After that the OE schema can be installed (You’ll need the Multimedia Option installed for this to succeed):

SQL> @?/demo/schema/order_entry/oe_main.sql

specify password for OE as parameter 1:
Enter value for 1: manager

specify default tablespeace for OE as parameter 2:
Enter value for 2: users

specify temporary tablespace for OE as parameter 3:
Enter value for 3: temp

specify password for HR as parameter 4:
Enter value for 4: manager

specify password for SYS as parameter 5:
Enter value for 5: manager

specify directory path for the data files as parameter 6:
Enter value for 6: /var/tmp/

writeable directory path for the log files as parameter 7:
Enter value for 7: /var/tmp/

specify version as parameter 8:
Enter value for 8: v3

Then we can continue with the PM schema:

SQL> @?/demo/schema/product_media/pm_main.sql

specify password for PM as parameter 1:
Enter value for 1: manager

specify default tablespeace for PM as parameter 2:
Enter value for 2: users

specify temporary tablespace for PM as parameter 3:
Enter value for 3: temp

specify password for OE as parameter 4:
Enter value for 4: manager

specify password for SYS as parameter 5:
Enter value for 5: manager

specify directory path for the PM data files as parameter 6:
Enter value for 6: /u01/app/oracle/product/12.1.0/db_2_0/demo/schema/product_media/

specify directory path for the PM load log files as parameter 7:
Enter value for 7: /u01/app/oracle/product/12.1.0/db_2_0/demo/schema/product_media/

specify work directory path as parameter 8:
Enter value for 8: /u01/app/oracle/product/12.1.0/db_2_0/demo/schema/product_media/

Then continue with the IX schema:

SQL> @?/demo/schema/info_exchange/ix_main.sql

specify password for IX as parameter 1:
Enter value for 1: manager

specify default tablespeace for IX as parameter 2:
Enter value for 2: users

specify temporary tablespace for IX as parameter 3:
Enter value for 3: temp

specify password for SYS as parameter 4:
Enter value for 4: manager

specify path for log files as parameter 5:
Enter value for 5: /u01/app/oracle/product/12.1.0/db_2_0/demo/schema/info_exchange/

specify version as parameter 6:
Enter value for 6: v3

And finally the SH schema:

SQL> @?/demo/schema/sales_history/sh_main.sql

specify password for SH as parameter 1:
Enter value for 1: manager

specify default tablespace for SH as parameter 2:
Enter value for 2: users

specify temporary tablespace for SH as parameter 3:
Enter value for 3: temp

specify password for SYS as parameter 4:
Enter value for 4: manager

specify directory path for the data files as parameter 5:
Enter value for 5: /u00/app/oracle/product/12.1.0/db_2_0/demo/schema/sales_history/

writeable directory path for the log files as parameter 6:
Enter value for 6: /u00/app/oracle/product/12.1.0/db_2_0/demo/schema/sales_history/

specify version as parameter 7:
Enter value for 7: v3

Once everything is installed we have the following objects available:

SQL> select owner,object_type,count(*) num_obj 
       from dba_objects 
      where owner in ('SH','PM','OE','IX','HR','BI') group by owner,object_type order by 1,2;

OWNER                          OBJECT_TYPE                NUM_OBJ
------------------------------ ----------------------- ----------
HR                             INDEX                           19
HR                             PROCEDURE                        2
HR                             SEQUENCE                         3
HR                             TABLE                            7
HR                             TRIGGER                          2
HR                             VIEW                             1
IX                             EVALUATION CONTEXT               2
IX                             INDEX                           17
IX                             LOB                              3
IX                             QUEUE                            4
IX                             RULE SET                         4
IX                             SEQUENCE                         2
IX                             TABLE                           17
IX                             TYPE                             1
IX                             VIEW                             8
OE                             FUNCTION                         1
OE                             INDEX                           48
OE                             LOB                             15
OE                             SEQUENCE                         1
OE                             SYNONYM                          6
OE                             TABLE                           14
OE                             TRIGGER                          4
OE                             TYPE                            37
OE                             TYPE BODY                        3
OE                             VIEW                            13
PM                             INDEX                           21
PM                             LOB                             17
PM                             TABLE                            3
PM                             TYPE                             3
SH                             DIMENSION                        5
SH                             INDEX                           23
SH                             INDEX PARTITION                196
SH                             MATERIALIZED VIEW                2
SH                             TABLE                           13
SH                             TABLE PARTITION                 56
SH                             VIEW                             1

Having the sample schemas available, we are almost ready to start the migration to PPAS 9.4. We'll use the EDB migration toolkit for this as it automates many tasks. The toolkit itself is documented here. If you do not want to read the documentation, here is the short version :)

As the migration toolkit uses jdbc to connect to the Oracle database we’ll need to download the Oracle jdbc drivers. I used the latest one, which is 12.1.0.2 (ojdbc7.jar) at the time of writing. This jar file needs to be copied to the following location:

enterprisedb@centos7:/home/enterprisedb/ [dummy] ls -la /etc/alternatives/jre/lib/ext/
total 11424
drwxr-xr-x. 2 root root    4096 Nov 25 14:46 .
drwxr-xr-x. 9 root root    4096 Nov 25 13:01 ..
-rw-r--r--. 1 root root 4003647 Oct 21 22:19 cldrdata.jar
-rw-r--r--. 1 root root    9444 Oct 21 22:19 dnsns.jar
-rw-r--r--. 1 root root   48732 Oct 21 22:19 jaccess.jar
-rw-r--r--. 1 root root 1204407 Oct 21 22:19 localedata.jar
-rw-r--r--. 1 root root     617 Oct 21 22:19 meta-index
-rw-r--r--. 1 root root 2023751 Oct 21 22:19 nashorn.jar
-rw-r--r--. 1 root root 3698857 Nov 25 14:46 ojdbc7.jar  <=================
-rw-r--r--. 1 root root   30448 Oct 21 22:19 sunec.jar
-rw-r--r--. 1 root root  294143 Oct 21 22:19 sunjce_provider.jar
-rw-r--r--. 1 root root  266680 Oct 21 22:19 sunpkcs11.jar
-rw-r--r--. 1 root root   77887 Oct 21 22:19 zipfs.jar
enterprisedb@centos7:/home/enterprisedb/ [dummy] 

The connection parameters to the source and the target have to be specified in the toolkit.properties file which is located in the edbmtk/etc directory of the ppas installation:

cat /u01/app/postgres/product/9.4/ppas_1_3/edbmtk/etc/toolkit.properties
SRC_DB_URL=jdbc:oracle:thin:@192.168.22.242:1521:PROD
SRC_DB_USER=system
SRC_DB_PASSWORD=manager
TARGET_DB_URL=jdbc:edb://localhost:5444/orasample
TARGET_DB_USER=enterprisedb
TARGET_DB_PASSWORD=manager

I want the Oracle sample schemas in my own database in PPAS so I created the ORASAMPLE database:

(enterprisedb@[local]:5444) [postgres] > create database orasample;
CREATE DATABASE
Time: 624.415 ms

Ready for migrating the first schema?

enterprisedb@centos7:/u01/app/postgres/product/9.4/ppas_1_3/edbmtk/bin/ [CRM] pwd
/u01/app/postgres/product/9.4/ppas_1_3/edbmtk/bin
enterprisedb@centos7:/u01/app/postgres/product/9.4/ppas_1_3/edbmtk/bin/ [CRM] ./runMTK.sh -fastCopy -logBadSQL -fetchSize 10000 -loaderCount 1 -dropSchema true HR

The result:

Running EnterpriseDB Migration Toolkit (Build 48.0.1) ...
Source database connectivity info...
conn =jdbc:oracle:thin:@192.168.22.242:1521:PROD
user =system
password=******
Target database connectivity info...
conn =jdbc:edb://localhost:5444/orasample
user =enterprisedb
password=******
Connecting with source Oracle database server...
Connected to Oracle, version 'Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options'
Connecting with target EnterpriseDB database server...
Connected to EnterpriseDB, version '9.4.1.3'
Importing redwood schema HR...
Creating Schema...hr 
Creating Sequence: DEPARTMENTS_SEQ
Creating Sequence: EMPLOYEES_SEQ
Creating Sequence: LOCATIONS_SEQ
Loading Table Data in 8 MB batches...
Creating Table: COUNTRIES
Loading Table: COUNTRIES ...
[COUNTRIES] Migrated 25 rows.
[COUNTRIES] Table Data Load Summary: Total Time(s): 0.075 Total Rows: 25
Creating Table: DEPARTMENTS
Loading Table: DEPARTMENTS ...
[DEPARTMENTS] Migrated 27 rows.
[DEPARTMENTS] Table Data Load Summary: Total Time(s): 0.038 Total Rows: 27
Creating Table: EMPLOYEES
Loading Table: EMPLOYEES ...
[EMPLOYEES] Migrated 107 rows.
[EMPLOYEES] Table Data Load Summary: Total Time(s): 0.09 Total Rows: 107 Total Size(MB): 0.0087890625
Creating Table: JOBS
Loading Table: JOBS ...
[JOBS] Migrated 19 rows.
[JOBS] Table Data Load Summary: Total Time(s): 0.011 Total Rows: 19
Creating Table: JOB_HISTORY
Loading Table: JOB_HISTORY ...
[JOB_HISTORY] Migrated 10 rows.
[JOB_HISTORY] Table Data Load Summary: Total Time(s): 0.026 Total Rows: 10
Creating Table: LOCATIONS
Loading Table: LOCATIONS ...
[LOCATIONS] Migrated 23 rows.
[LOCATIONS] Table Data Load Summary: Total Time(s): 0.03 Total Rows: 23 Total Size(MB): 9.765625E-4
Creating Table: REGIONS
Loading Table: REGIONS ...
[REGIONS] Migrated 4 rows.
[REGIONS] Table Data Load Summary: Total Time(s): 0.025 Total Rows: 4
Data Load Summary: Total Time (sec): 0.489 Total Rows: 215 Total Size(MB): 0.01
Creating Constraint: JHIST_EMP_ID_ST_DATE_PK
Creating Constraint: EMP_EMP_ID_PK
Creating Constraint: EMP_EMAIL_UK
Creating Constraint: JOB_ID_PK
Creating Constraint: DEPT_ID_PK
Creating Constraint: LOC_ID_PK
Creating Constraint: COUNTRY_C_ID_PK
Creating Constraint: REG_ID_PK
Creating Constraint: JHIST_DEPT_FK
Creating Constraint: JHIST_EMP_FK
Creating Constraint: JHIST_JOB_FK
Creating Constraint: DEPT_MGR_FK
Creating Constraint: EMP_MANAGER_FK
Creating Constraint: EMP_JOB_FK
Creating Constraint: EMP_DEPT_FK
Creating Constraint: DEPT_LOC_FK
Creating Constraint: LOC_C_ID_FK
Creating Constraint: COUNTR_REG_FK
Creating Constraint: JHIST_DATE_INTERVAL
Creating Constraint: EMP_SALARY_MIN
Creating Index: LOC_COUNTRY_IX
Creating Index: LOC_STATE_PROVINCE_IX
Creating Index: LOC_CITY_IX
Creating Index: JHIST_DEPARTMENT_IX
Creating Index: JHIST_EMPLOYEE_IX
Creating Index: JHIST_JOB_IX
Creating Index: DEPT_LOCATION_IX
Creating Index: EMP_NAME_IX
Creating Index: EMP_MANAGER_IX
Creating Index: EMP_JOB_IX
Creating Index: EMP_DEPARTMENT_IX
Creating Trigger: SECURE_EMPLOYEES
Creating Trigger: UPDATE_JOB_HISTORY
Creating View: EMP_DETAILS_VIEW
Creating Procedure: ADD_JOB_HISTORY
Creating Procedure: SECURE_DML

Schema HR imported successfully.

Creating User: HR

Migration process completed successfully.

Migration logs have been saved to /home/enterprisedb/.enterprisedb/migration-toolkit/logs

******************** Migration Summary ********************
Sequences: 3 out of 3
Tables: 7 out of 7
Constraints: 20 out of 20
Indexes: 11 out of 11
Triggers: 2 out of 2
Views: 1 out of 1
Procedures: 2 out of 2
Users: 1 out of 1

Total objects: 47
Successful count: 47
Failed count: 0
Invalid count: 0

*************************************************************

That was quite easy, wasn't it? None of the objects failed to migrate. Let's validate this inside PPAS. I installed PPAS in Oracle compatibility mode and therefore have the dba_* views available:

(enterprisedb@[local]:5444) [postgres] > \c orasample
You are now connected to database "orasample" as user "enterprisedb".
(enterprisedb@[local]:5444) [orasample] > select object_type,count(*) 
                                            from dba_objects 
                                           where schema_name = 'HR' and status = 'VALID';
 object_type | count 
-------------+-------
 TRIGGER     |     2
 SEQUENCE    |     3
 VIEW        |     1
 PROCEDURE   |     2
 TABLE       |     7
 INDEX       |    19
(6 rows)

Exactly the same number of objects as in Oracle; even the PL/SQL procedures are there. You don't believe it?

(enterprisedb@[local]:5444) [orasample] > select text from dba_source where schema_name = 'HR';
                                                                                                  text                                                                                                   
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 CREATE OR REPLACE PROCEDURE hr.add_job_history(p_emp_id numeric, p_start_date timestamp without time zone, p_end_date timestamp without time zone, p_job_id character varying, p_department_id numeric)
  AUTHID DEFINER IS
 BEGIN
   INSERT INTO job_history (employee_id, start_date, end_date,
                            job_id, department_id)
     VALUES(p_emp_id, p_start_date, p_end_date, p_job_id, p_department_id);
 END
...

Ok, the HR schema is a simple one. Let's continue with the next one, SH:

enterprisedb@centos7:/u01/app/postgres/product/9.4/ppas_1_3/edbmtk/bin/ [CRM] pwd
/u01/app/postgres/product/9.4/ppas_1_3/edbmtk/bin
enterprisedb@centos7:/u01/app/postgres/product/9.4/ppas_1_3/edbmtk/bin/ [CRM] ./runMTK.sh -fastCopy -logBadSQL -fetchSize 10000 -loaderCount 1 -dropSchema true SH

The result:

******************** Migration Summary ********************
Tables: 11 out of 11
Constraints: 14 out of 14
Indexes: 13 out of 13
Views: 1 out of 3

Total objects: 41
Successful count: 39
Failed count: 2
Invalid count: 0

List of failed objects
======================
Views
--------------------
1. SH.FWEEK_PSCAT_SALES_MV
2. SH.CAL_MONTH_SALES_MV

Not bad, but two views failed to create. Why? As we specified the "-logBadSQL" switch, there is a separate logfile containing all the SQL statements which failed:

enterprisedb@centos7:/home/enterprisedb/.enterprisedb/migration-toolkit/logs/ [CRM] pwd
/home/enterprisedb/.enterprisedb/migration-toolkit/logs
enterprisedb@centos7:/home/enterprisedb/.enterprisedb/migration-toolkit/logs/ [CRM] ls -latr | grep SH
-rw-r--r--. 1 enterprisedb enterprisedb    1286 Nov 28 15:30 mtk_bad_sql_SH_20151128032916.sql
-rw-r--r--. 1 enterprisedb enterprisedb    8097 Nov 28 15:30 mtk_SH_20151128032916.log

This file contains exactly the statements for the two views that failed to create:

-- MTK-15009: Error Creating Materialized View: FWEEK_PSCAT_SALES_MV
-- DB-42601: ERROR: syntax error at or near "PREBUILT" at position 53
-- Line 1: CREATE MATERIALIZED VIEW FWEEK_PSCAT_SALES_MV BUILD PREBUILT
--                                                             ^

CREATE MATERIALIZED VIEW FWEEK_PSCAT_SALES_MV BUILD PREBUILT
 REFRESH FORCE
 ON DEMAND
 AS 
SELECT   t.week_ending_day
  ,        p.prod_subcategory
  ,        sum(s.amount_sold) AS dollars
  ,        s.channel_id
  ,        s.promo_id
  FROM     sales s
  ,        times t
  ,        products p
  WHERE    s.time_id = t.time_id
  AND      s.prod_id = p.prod_id
  GROUP BY t.week_ending_day
  ,        p.prod_subcategory
  ,        s.channel_id
  ,        s.promo_id;

-- MTK-15009: Error Creating Materialized View: CAL_MONTH_SALES_MV
-- DB-42601: ERROR: syntax error at or near "PREBUILT" at position 51
-- Line 1: CREATE MATERIALIZED VIEW CAL_MONTH_SALES_MV BUILD PREBUILT
--                                                           ^

CREATE MATERIALIZED VIEW CAL_MONTH_SALES_MV BUILD PREBUILT
 REFRESH FORCE
 ON DEMAND
 AS 
SELECT   t.calendar_month_desc
  ,        sum(s.amount_sold) AS dollars
  FROM     sales s
  ,        times t
  WHERE    s.time_id = t.time_id
  GROUP BY t.calendar_month_desc;

If we take a look at the syntax for create materialized view it becomes clear why this happened:

(enterprisedb@[local]:5444) [postgres] > \h CREATE MATERIALIZED VIEW 
Command:     CREATE MATERIALIZED VIEW
Description: define a new materialized view
Syntax:
CREATE MATERIALIZED VIEW table_name
    [ (column_name [, ...] ) ]
    [ WITH ( storage_parameter [= value] [, ... ] ) ]
    [ TABLESPACE tablespace_name ]
    AS query
    [ WITH [ NO ] DATA ]

The syntax is just wrong. Maybe this is a bug in the migration toolkit, as it seems the statements are not mapped from Oracle to PPAS syntax. It is easy to fix:

(enterprisedb@[local]:5444) [orasample] > CREATE MATERIALIZED VIEW sh.FWEEK_PSCAT_SALES_MV
AS SELECT   t.week_ending_day
  ,        p.prod_subcategory
  ,        sum(s.amount_sold) AS dollars
  ,        s.channel_id
  ,        s.promo_id
  FROM     sh.sales s
  ,        sh.times t
  ,        sh.products p
  WHERE    s.time_id = t.time_id
  AND      s.prod_id = p.prod_id
  GROUP BY t.week_ending_day
  ,        p.prod_subcategory
  ,        s.channel_id
  ,        s.promo_id;
SELECT 11266
Time: 8193.370 ms
(enterprisedb@[local]:5444) [orasample] > CREATE MATERIALIZED VIEW sh.CAL_MONTH_SALES_MV
AS SELECT   t.calendar_month_desc
  ,        sum(s.amount_sold) AS dollars
  FROM     sh.sales s
  ,        sh.times t
  WHERE    s.time_id = t.time_id
  GROUP BY t.calendar_month_desc;
SELECT 48
Time: 396.849 ms

Comparing the number of objects again, we should be fine:

(enterprisedb@[local]:5444) [postgres] > \c orasample
You are now connected to database "orasample" as user "enterprisedb".
(enterprisedb@[local]:5444) [orasample] > select object_type,count(*) 
                                            from dba_objects 
                                           where schema_name = 'SH' and status = 'VALID'
                                           group by object_type;
 object_type | count 
-------------+-------
 TRIGGER     |    60
 VIEW        |     1
 TABLE       |    67
 INDEX       |    19
(4 rows)

Uh, totally different numbers. Table partitions are counted as tables here, and each partition gets a trigger created (that is how partitioning is implemented in PostgreSQL 9.4; see the sketch below). There is no concept of partitioned indexes in PostgreSQL, but we can create indexes on the partitions. I am not sure what happened to the dimensions, as I am not familiar with them on the Oracle side (I'll check this in more detail soon); at least nothing about them is reported in the log file. As you can see, comparing the number of objects is no longer sufficient to tell whether everything was migrated. Special Oracle features need special considerations and cannot be migrated automatically. Not everything can be migrated easily or without adjusting the application, but the migration toolkit automates a lot of work and can give a picture of what is possible and what is not.
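For reference, in PostgreSQL 9.4 there is no declarative partitioning, so a partitioned table is typically a parent table plus one child table per partition created with INHERITS, and a trigger routes the inserts. That explains the extra tables and triggers in the count above. A minimal sketch of the pattern (table and partition names are made up for illustration, not what the toolkit produces verbatim):

-- a child table per partition, attached via inheritance
CREATE TABLE sales_demo (sale_date date, amount numeric);
CREATE TABLE sales_demo_2015 (
  CHECK (sale_date >= DATE '2015-01-01' AND sale_date < DATE '2016-01-01')
) INHERITS (sales_demo);

-- a trigger function routes rows inserted into the parent to the right child
CREATE OR REPLACE FUNCTION sales_demo_insert() RETURNS trigger AS $$
BEGIN
  IF NEW.sale_date >= DATE '2015-01-01' AND NEW.sale_date < DATE '2016-01-01' THEN
    INSERT INTO sales_demo_2015 VALUES (NEW.*);
  ELSE
    RAISE EXCEPTION 'no partition for date %', NEW.sale_date;
  END IF;
  RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER sales_demo_insert_trg
  BEFORE INSERT ON sales_demo
  FOR EACH ROW EXECUTE PROCEDURE sales_demo_insert();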

The next schemas will be a topic for another post. Hope this helped.

 


OCM 12c preparation: Create CDB in command line


This post starts a series about things I wrote while preparing the OCM 12c upgrade exam. Everything in these posts was written before taking the exam, so don't expect any clues about the exam here. They are based only on the exam topics, and only on those points I wanted to brush up, so don't expect a comprehensive list of points to know for the exam.
Let's start by creating a CDB manually. It is something I never do in real life (dbca is the recommended way), but as it is still documented, it may be something to know.

I usually put code and output in my blog posts. But here the goal is to practice, so there are only the commands to run. If you have the same environment as mine, a simple copy/paste would do it, but you will probably have to adapt.

Documentation

Information about the exam says: Be prepared to use the non-searchable documentation during the exam, to help you with correct syntax.
Documentation about the ‘Create and manage pluggable databases’ topic is mostly in the Oracle® Database Administrator’s Guide. Search for ‘multitenant’, expand ‘Creating and Configuring a CDB’ and then you have the create CDB statement in ‘Creating a CDB with the CREATE DATABASE Statement’

Environment

You will need to have ORACLE_HOME set and $ORACLE_HOME/bin in the path.
If you have a doubt, find the inventory location and get the Oracle Home from inventory.xml:

cat /etc/oraInst.loc
cat /u01/app/oraInventory/ContentsXML/inventory.xml

Then I set the ORACLE SID:

export ORACLE_SID=CDB

Instance password file

I’ll put ‘oracle’ for all passwords:

cd $ORACLE_HOME/dbs
orapwd file=orapw$ORACLE_SID <<< oracle

Instance init.ora

In the dbs subdirectory there is a sample init.ora.
I copy it and change what I need to change, here with 'sed', but of course you can do it manually:

cp init.ora init$ORACLE_SID.ora
sed -i -e"s?<ORACLE_BASE>?$ORACLE_BASE?" init$ORACLE_SID.ora
sed -i -e"s?ORCL?$ORACLE_SID?i" init$ORACLE_SID.ora
sed -i -e"s?^compatible?#&?" init$ORACLE_SID.ora
# using ASMM instead of AMM (because I don't like it)
sed -i -e"s?^memory_target=?sga_target=?" init$ORACLE_SID.ora
sed -i -e"s?ora_control.?$ORACLE_BASE/oradata/CDB/&.dbf?g" init$ORACLE_SID.ora
sed -i -e"$" init$ORACLE_SID.ora
echo enable_pluggable_database=true >> init$ORACLE_SID.ora
cat init$ORACLE_SID.ora

In case I can choose the OMF example, I set the destinations

echo db_create_file_dest=$ORACLE_BASE/oradata/CDB >> init$ORACLE_SID.ora
echo db_create_online_log_dest_1=$ORACLE_BASE/oradata/CDB >> init$ORACLE_SID.ora
echo db_create_online_log_dest_2=$ORACLE_BASE/oradata/CDB >> init$ORACLE_SID.ora

From the documentation you can choose the CREATE DATABASE statement for non-OMF or for OMF. I choose the first one, and once again, here it is with ‘sed’ replacements that fit my environment:

sed -e "s/newcdb/CDB/g" \
-e "s?/u0./logs/my?$ORACLE_BASE/oradata/CDB?g" \
-e "s?/u01/app/oracle/oradata?$ORACLE_BASE/oradata?g" \
-e "s/[^ ]*password/oracle/g" > /tmp/createCDB.sql <<END
CREATE DATABASE newcdb
USER SYS IDENTIFIED BY sys_password
USER SYSTEM IDENTIFIED BY system_password
LOGFILE GROUP 1 ('/u01/logs/my/redo01a.log','/u02/logs/my/redo01b.log')
SIZE 100M BLOCKSIZE 512,
GROUP 2 ('/u01/logs/my/redo02a.log','/u02/logs/my/redo02b.log')
SIZE 100M BLOCKSIZE 512,
GROUP 3 ('/u01/logs/my/redo03a.log','/u02/logs/my/redo03b.log')
SIZE 100M BLOCKSIZE 512
MAXLOGHISTORY 1
MAXLOGFILES 16
MAXLOGMEMBERS 3
MAXDATAFILES 1024
CHARACTER SET AL32UTF8
NATIONAL CHARACTER SET AL16UTF16
EXTENT MANAGEMENT LOCAL
DATAFILE '/u01/app/oracle/oradata/newcdb/system01.dbf'
SIZE 700M REUSE AUTOEXTEND ON NEXT 10240K MAXSIZE UNLIMITED
SYSAUX DATAFILE '/u01/app/oracle/oradata/newcdb/sysaux01.dbf'
SIZE 550M REUSE AUTOEXTEND ON NEXT 10240K MAXSIZE UNLIMITED
DEFAULT TABLESPACE deftbs
DATAFILE '/u01/app/oracle/oradata/newcdb/deftbs01.dbf'
SIZE 500M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED
DEFAULT TEMPORARY TABLESPACE tempts1
TEMPFILE '/u01/app/oracle/oradata/newcdb/temp01.dbf'
SIZE 20M REUSE AUTOEXTEND ON NEXT 640K MAXSIZE UNLIMITED
UNDO TABLESPACE undotbs1
DATAFILE '/u01/app/oracle/oradata/newcdb/undotbs01.dbf'
SIZE 200M REUSE AUTOEXTEND ON NEXT 5120K MAXSIZE UNLIMITED
ENABLE PLUGGABLE DATABASE
SEED
FILE_NAME_CONVERT = ('/u01/app/oracle/oradata/newcdb/',
'/u01/app/oracle/oradata/pdbseed/')
SYSTEM DATAFILES SIZE 125M AUTOEXTEND ON NEXT 10M MAXSIZE UNLIMITED
SYSAUX DATAFILES SIZE 100M
USER_DATA TABLESPACE usertbs
DATAFILE '/u01/app/oracle/oradata/pdbseed/usertbs01.dbf'
SIZE 200M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;
END

I've written it to /tmp/createCDB.sql and will run it later.

Create database

For whatever reason, in case you have to clean up a previous attempt that left shared memory behind:

ipcs -m | awk '/oracle/{print "ipcrm -m "$2}' | sh -x

Now I create the required directories, run the create database script prepared above, and follow the steps in the documentation:

mkdir -p $ORACLE_BASE/oradata/CDB $ORACLE_BASE/admin/$ORACLE_SID/adump
mkdir -p $ORACLE_BASE/oradata/CDB $ORACLE_BASE/oradata/pdbseed
mkdir -p $ORACLE_BASE/fast_recovery_area
PATH=$ORACLE_HOME/perl/bin/:$PATH sqlplus / as sysdba
startup pfile=initCDB.ora nomount
create spfile from pfile;
start /tmp/createCDB.sql
@?/rdbms/admin/catcdb.sql
oracle
oracle
temp
quit

Note that I've added $ORACLE_HOME/perl/bin to the PATH because this is required for catcdb.sql.

catcdb.sql is the long part here (it runs catalog and catproc on all containers, CDB$ROOT and PDB$SEED for the moment). This means that if there is an exam where I have to create a database, it's better to start that right away and read/prepare the other questions during that time.
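Once catcdb.sql has finished, a quick sanity check (my suggestion, not part of the documented steps) is to look at the registered components and the containers:

select con_id, comp_id, version, status from cdb_registry order by con_id, comp_id;
select con_id, name, open_mode from v$pdbs;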

Once done, you want to protect your database and run a backup. We will see that later.

Listener

I probably want a listener, and to see my service registered immediately:

lsnrctl start
sqlplus / as sysdba
alter system register;

EM Express

I’m not sure EM Express helps a lot, but let’s start it:

exec DBMS_XDB_CONFIG.SETHTTPPORT(5500);

And I can access it at http://localhost:5500/em
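To double-check which port is actually configured, a quick query (assuming the default XDB setup):

select dbms_xdb_config.gethttpport() from dual;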

oratab


echo CDB:$ORACLE_HOME:Y >> /etc/oratab

SQL Developer

If I have SQL Developer I'll use it, at least to generate SQL statements for which I don't know the exact syntax. It's easier than going to the documentation, copy/paste, change, etc.
I really hope that SQL Developer is there for the exam, as EM Express does not have all the features we had in the 11g dbconsole.

You can create local connections to your CDB with a simple click:
[screenshot: Capture12COCMU-CreatePDB-004]

Backup

Everything that takes time needs a backup, because you don't want to do it again in case of failure.
Let's put the database in archivelog mode and run a backup:

rman target /
report schema;
shutdown immediate;
startup mount;
alter database archivelog;
alter database open;
backup database;

It's an online backup, so there is no problem continuing with operations that don't need an instance restart.
The next part will be about creating pluggable databases.

 


OCM 12c preparation: Manage PDB


Let’s see the different ways to create a PDB, with different tools.
Same disclaimer here as in the first post of the series: don't expect these posts to be close to what you will get at the exam, but they cover important points that match the exam topics.

Documentation

Information about the exam says: Be prepared to use the non-searchable documentation during the exam, to help you with correct syntax.
Documentation about the ‘Create and manage pluggable databases’ topic is mostly in the Oracle® Database Administrator’s Guide. Search for ‘multitenant’, expand ‘Creating and Removing PDBs with SQL*Plus’

You find all examples there. Remember that creating a PDB is always done from another one:

  • from PDB$SEED
  • from another PDB in your CDB
  • from another PDB in a remote CDB (need to create a db link)
  • from an unplugged PDB
  • from a non-CDB

and then you will name your datafiles with a conversion from the original ones.

Don’t forget to create the directories if you are not in OMF.
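For reference, a minimal sketch of the simplest case, creating a PDB from PDB$SEED without OMF (the PDB name, the admin user and the paths below are just examples to adapt):

create pluggable database PDB1
  admin user pdbadmin identified by oracle
  file_name_convert=('/u01/app/oracle/oradata/CDB/pdbseed/',
                     '/u01/app/oracle/oradata/CDB/PDB1/');
alter pluggable database PDB1 open;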

SQL Developer

SQL Developer is your friend. It is designed to help you. I use it in the following way:

  • SQL Worksheet is a nice notepad. Even if you finally paste the statements into sqlplus, the SQL Worksheet is graphical, has colors, and can also run statements from there ;)
  • The SQL Reference documentation is organized by statement; SQL Developer is organized by object. The right-click context menu shows you what you can do on a table, on a materialized view, etc.
  • It shows what your options are and can show you the generated SQL statement if you finally want it

I’ll show you an example. You have several ways to name the files when you create a pluggable database, using the convert pairs. But if you have more than one pattern to replace, it’s not easy. Let’s use SQL Developer for that.

In the DBA tab, right click on the Container Database and you have all possible actions on it:

[screenshot: Capture12COCMU-CreatePDB-000]

Here are all the options for the CREATE PLUGGABLE DATABASE statement. Easier than going to the documentation:

[screenshot: Capture12COCMU-CreatePDB-001]

Above I’ve chosen ‘Custom Names’ to list all files. Then let’s get the SQL:

[screenshot: Capture12COCMU-CreatePDB-002]

Now, I prefer to continue in the SQL Worksheet, so I paste it there. I have a file_name_convert pair for each file, so that I can change what I want:

[screenshot: Capture12COCMU-CreatePDB-003]

SQL Developer is really a good tool.
When you unplug a PDB, it is still referenced by the original database. Then, if you plug it elsewhere without renaming the files, the risk is that you drop its datafiles from the original container database.
The best recommendation is to immediately remove it from the original CDB, and this is exactly what SQL Developer does:
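The generated statements look roughly like this (a sketch of the unplug-and-drop sequence with an example PDB name, not the literal SQL Developer output):

alter pluggable database PDB1 close immediate;
alter pluggable database PDB1 unplug into '/u01/app/oracle/oradata/PDB1.xml';
drop pluggable database PDB1 keep datafiles;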

dbca

DBCA is not my preferred tool to create a PDB, but let’s try it.

Let’s start by some troubleshooting (which is not what you want to do at an exam):
[screenshot: Capture12COCMU-CreatePDB-005]

Well, the database is open. Let's troubleshoot: the dbca log is in $ORACLE_BASE/cfgtoollogs/dbca and I found the following:

[pool-1-thread-1] [ 2015-11-29 19:22:42.910 CET ] [PluggableDatabaseUtils.isDatabaseOpen:303] Query to check if DB is open= select count(*) from v$database where upper(db_unique_name)=upper('CDB') and upper(open_mode)='READ WRITE'
...
[pool-1-thread-1] [ 2015-11-29 19:22:43.034 CET ] [PluggableDatabaseUtils.isDatabaseOpen:334] DB is not open

Actually, I’ve no DB_UNIQUE_NAME in v$database:

SQL> select db_unique_name from v$database;

DB_UNIQUE_NAME
------------------------------


I have the db_unique_name for the instance:

SQL> show parameter uniq
 
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_unique_name string CDB

but it’s the default (equals the db_name) as I didn’t set it in the init.ora when I created the CDB manually.
Let’s try to set it:

SQL> alter system set db_unique_name='CDB' scope=spfile;
alter system set db_unique_name='CDB' scope=spfile
*
ERROR at line 1:
ORA-32001: write to SPFILE requested but no SPFILE is in use

Ok, now I understand. I’ve created the spfile but didn’t restart the instance since then.

SQL> startup force
ORACLE instance started.

Total System Global Area 1073741824 bytes
Fixed Size 2932632 bytes
Variable Size 335544424 bytes
Database Buffers 729808896 bytes
Redo Buffers 5455872 bytes
Database mounted.
Database opened.
SQL> show spparameter unique
SQL> select db_unique_name from v$database;

DB_UNIQUE_NAME
------------------------------
CDB
 

Here it is. It's not set in the spfile, but it takes the default. When we start with a pfile where it's not set, it does not show up in V$DATABASE.

My conclusion for the moment is: if you didn’t create the database with DBCA there is no reason to try to use it later.

And the most important thing to remember when you create a PDB is written in the documentation.

 


Patching PostgreSQL to a new minor release


If you are used to patching Oracle databases you probably know how to use opatch to apply PSUs. How does PostgreSQL handle this? Do we need to patch the existing binaries to apply security fixes? The answer is: No.

Let's say you want to patch PostgreSQL from version 9.4.1 to version 9.4.5. What do you need to do?

For this little demo I’ll create a new database and a sample table in my 9.4.1 instance:

(postgres@[local]:5432) [postgres] > select version();
                                                         version                                                          
--------------------------------------------------------------------------------------------------------------------------
 PostgreSQL 9.4.1 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.8.3 20140911 (Red Hat 4.8.3-9), 64-bit
(1 row)

Time: 0.483 ms
(postgres@[local]:5432) [postgres] > create database patch;
CREATE DATABASE
Time: 2533.745 ms
(postgres@[local]:5432) [postgres] > \c patch
You are now connected to database "patch" as user "postgres".
(postgres@[local]:5432)  > create table test ( a int );
CREATE TABLE
Time: 2.430 ms
(postgres@[local]:5432)  > insert into test (a) values ( generate_series(1,100));
INSERT 0 100
Time: 0.959 ms

If I now want to bring this version to 9.4.5, the first step is to install the 9.4.5 binaries in a separate path. The binaries for my 9.4.1 installation are located here:

postgres@oel7:/home/postgres/ [PG1] ps -ef | grep PG1
postgres  2645     1  0 10:51 ?        00:00:00 /u01/app/postgres/product/94/db_1/bin/postgres -D /u02/pgdata/PG1
postgres 14439 11550  0 11:04 pts/1    00:00:00 grep --color=auto PG1

I already installed the 9.4.5 binaries here:

postgres@oel7:/home/postgres/ [PG1] ls /u01/app/postgres/product/94/db_5
bin  include  lib  share

The only tasks I need to do from here on are a) stop the 9.4.1 version:

postgres@oel7:/home/postgres/ [PG1] which pg_ctl
/u01/app/postgres/product/94/db_1/bin/pg_ctl
postgres@oel7:/home/postgres/ [PG1] pg_ctl -D /u02/pgdata/PG1 stop -m fast
waiting for server to shut down.... done
server stopped
postgres@oel7:/home/postgres/ [PG1] ps -ef | grep PG1
postgres 14452 11550  0 11:06 pts/1    00:00:00 grep --color=auto PG1
postgres@oel7:/home/postgres/ [PG1] 

Once the old version is down I can just b) restart with the new binaries:

postgres@oel7:/home/postgres/ [PG1] pg_ctl -D /u02/pgdata/PG1 start
server starting
postgres@oel7:/home/postgres/ [PG1] LOG:  database system was shut down at 2015-12-01 11:06:31 CET
LOG:  MultiXact member wraparound protections are now enabled
LOG:  database system is ready to accept connections
LOG:  autovacuum launcher started

That’s it. The new version is now 9.4.5:

(postgres@[local]:5432) [postgres] > select version();
                                                   version                                                    
--------------------------------------------------------------------------------------------------------------
 PostgreSQL 9.4.5 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.8.3 20140911 (Red Hat 4.8.3-9), 64-bit
(1 row)

Time: 20.725 ms
(postgres@[local]:5432) [postgres] > \c patch
You are now connected to database "patch" as user "postgres".
(postgres@[local]:5432)  > select count(*) from test;
 count 
-------
   100
(1 row)

Time: 104.297 ms

Usually, for minor versions, you can just install the new binaries and start the instance from there. But anyway, be sure to read the release notes before doing it.

 


Upgrading PostgreSQL to a new major release


The last post looked into how to patch PostgreSQL to a new minor version. In this post I'll look into how to upgrade PostgreSQL to a new major version. This is not as simple as just installing the binaries and starting the instance from there. For major upgrades there are two possibilities:

  • dump the old cluster (e.g. with pg_dumpall) and restore it into the new version
  • use pg_upgrade

I’ll only look into pg_upgrade for this post. For simplicity I’ll upgrade the 9.4.5 PostgreSQL instance from the last post to 9.5 beta2. The binaries for 9.5 beta2 are already there:

postgres@oel7:/u01/app/postgres/software/ [PG1] which pg_upgrade
/u01/app/postgres/product/95/db_b2/bin/pg_upgrade

Obviously we need to stop the current version before performing the upgrade:

postgres@oel7:/u01/app/postgres/software/ [PG1] pg_ctl stop -D /u02/pgdata/PG1
waiting for server to shut down.... done
server stopped

Then we need to create a new database cluster with the version we want to upgrade to (9.5 beta2 in this case):

postgres@oel7:/u01/app/postgres/software/ [PG1] mkdir /u02/pgdata/PG7
postgres@oel7:/u01/app/postgres/software/ [PG1] mkdir /u03/pgdata/PG7
postgres@oel7:/u01/app/postgres/software/ [PG1] mkdir /u90/arch/PG7
postgres@oel7:/u01/app/postgres/software/ [PG1] initdb -D /u02/pgdata/PG7 -X /u03/pgdata/PG7/ 
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locales
  COLLATE:  en_US.UTF-8
  CTYPE:    en_US.UTF-8
  MESSAGES: en_US.UTF-8
  MONETARY: de_CH.UTF-8
  NUMERIC:  de_CH.UTF-8
  TIME:     en_US.UTF-8
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

fixing permissions on existing directory /u02/pgdata/PG7 ... ok
fixing permissions on existing directory /u03/pgdata/PG7 ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
creating template1 database in /u02/pgdata/PG7/base/1 ... ok
initializing pg_authid ... ok
initializing dependencies ... ok
creating system views ... ok
loading system objects' descriptions ... ok
creating collations ... ok
creating conversions ... ok
creating dictionaries ... ok
setting privileges on built-in objects ... ok
creating information schema ... ok
loading PL/pgSQL server-side language ... ok
vacuuming database template1 ... ok
copying template1 to template0 ... ok
copying template1 to postgres ... ok
syncing data to disk ... ok

WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.

Success. You can now start the database server using:

    pg_ctl -D /u02/pgdata/PG7 -l logfile start

To verify the version we do a quick startup:

postgres@oel7:/u01/app/postgres/software/ [PG7] pg_ctl -D /u02/pgdata/PG7 start
server starting
postgres@oel7:/u01/app/postgres/software/ [PG7] LOG:  database system was shut down at 2015-12-01 12:10:02 CET
LOG:  MultiXact member wraparound protections are now enabled
LOG:  database system is ready to accept connections
LOG:  autovacuum launcher started

postgres@oel7:/u01/app/postgres/software/ [PG7] sqh
Null display is "NULL".
Timing is on.
psql (9.5beta2)
Type "help" for help.

(postgres@[local]:5449) [postgres] > 

Then shut it down again:

postgres@oel7:/u01/app/postgres/software/ [PG7] pg_ctl -D /u02/pgdata/PG7 stop -m fast
waiting for server to shut down....LOG:  received fast shutdown request
LOG:  aborting any active transactions
LOG:  autovacuum launcher shutting down
LOG:  shutting down
LOG:  database system is shut down
 done
server stopped

Now we can begin with the upgrade by specifying four environment variables:

postgres@oel7:/u01/app/postgres/software/ [PG7] export PGDATAOLD=/u02/pgdata/PG1
postgres@oel7:/u01/app/postgres/software/ [PG7] export PGDATANEW=/u02/pgdata/PG7
postgres@oel7:/u01/app/postgres/software/ [PG7] export PGBINOLD=/u01/app/postgres/product/94/db_5/bin
postgres@oel7:/u01/app/postgres/software/ [PG7] export PGBINNEW=/u01/app/postgres/product/95/db_b2/bin/
postgres@oel7:/u01/app/postgres/software/ [PG7] pg_upgrade
Performing Consistency Checks
-----------------------------
Checking cluster versions                                   ok
Checking database user is the install user                  ok
Checking database connection settings                       ok
Checking for prepared transactions                          ok
Checking for reg* system OID user data types                ok
Checking for contrib/isn with bigint-passing mismatch       ok
Creating dump of global objects                             ok
Creating dump of database schemas
                                                            ok
Checking for presence of required libraries                 ok
Checking database user is the install user                  ok
Checking for prepared transactions                          ok

If pg_upgrade fails after this point, you must re-initdb the
new cluster before continuing.

Performing Upgrade
------------------
Analyzing all rows in the new cluster                       ok
Freezing all rows on the new cluster                        ok
Deleting files from new pg_clog                             ok
Copying old pg_clog to new server                           ok
Setting next transaction ID and epoch for new cluster       ok
Deleting files from new pg_multixact/offsets                ok
Copying old pg_multixact/offsets to new server              ok
Deleting files from new pg_multixact/members                ok
Copying old pg_multixact/members to new server              ok
Setting next multixact ID and offset for new cluster        ok
Resetting WAL archives                                      ok
Setting frozenxid and minmxid counters in new cluster       ok
Restoring global objects in the new cluster                 ok
Restoring database schemas in the new cluster
                                                            ok
Creating newly-required TOAST tables                        ok
Copying user relation files
                                                            ok
Setting next OID for new cluster                            ok
Sync data directory to disk                                 ok
Creating script to analyze new cluster                      ok
Creating script to delete old cluster                       ok

Upgrade Complete
----------------
Optimizer statistics are not transferred by pg_upgrade so,
once you start the new server, consider running:
    ./analyze_new_cluster.sh

Running this script will delete the old cluster's data files:
    ./delete_old_cluster.sh

Quite easy, isn't it? As pointed out by pg_upgrade, optimizer statistics need to be gathered manually, as this is not done automatically:

postgres@oel7:/u01/app/postgres/software/ [PG7] pg_ctl -D /u02/pgdata/PG7 start
server starting
postgres@oel7:/u01/app/postgres/software/ [PG7] LOG:  database system was shut down at 2015-12-01 12:18:34 CET
LOG:  MultiXact member wraparound protections are now enabled
LOG:  database system is ready to accept connections
LOG:  autovacuum launcher started

postgres@oel7:/u01/app/postgres/software/ [PG7] ./analyze_new_cluster.sh 
This script will generate minimal optimizer statistics rapidly
so your system is usable, and then gather statistics twice more
with increasing accuracy.  When it is done, your system will
have the default level of optimizer statistics.

If you have used ALTER TABLE to modify the statistics target for
any tables, you might want to remove them and restore them after
running this script because they will delay fast statistics generation.

If you would like default statistics as quickly as possible, cancel
this script and run:
    "/u01/app/postgres/product/95/db_b2/bin/vacuumdb" --all --analyze-only

vacuumdb: processing database "bi": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "db1": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "patch": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "postgres": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "template1": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "bi": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "db1": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "patch": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "postgres": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "template1": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "bi": Generating default (full) optimizer statistics
vacuumdb: processing database "db1": Generating default (full) optimizer statistics
vacuumdb: processing database "patch": Generating default (full) optimizer statistics
vacuumdb: processing database "postgres": Generating default (full) optimizer statistics
vacuumdb: processing database "template1": Generating default (full) optimizer statistics

Done

The output above proves that the "patch" database is really there, but let's confirm it:

postgres@oel7:/u01/app/postgres/software/ [PG7] sqh
Null display is "NULL".
Timing is on.
psql (9.5beta2)
Type "help" for help.

(postgres@[local]:5449) [postgres] > \c patch
You are now connected to database "patch" as user "postgres".
(postgres@[local]:5449)  > select count(*) from test;
 count 
-------
   100
(1 row)

Time: 0.705 ms

And finally we can delete the old data files:

postgres@oel7:/u01/app/postgres/software/ [PG7] ./delete_old_cluster.sh 
postgres@oel7:/u01/app/postgres/software/ [PG7] ls -al /u02/pgdata/PG1
ls: cannot access /u02/pgdata/PG1: No such file or directory

That’s it.

 


SQL Plan Directive: disabling usage and column groups


Yesterday I came upon a comment on oracle-l while I was reviewing my slides for the UKOUG TECH15 SuperSunday. I have one slide and one demo about disabling SPD usage, but that's not enough to explain all the variations of what 'usage' means here.

The comment on Oracle-L was:
Seems like even after disabling sql plan directives they are still used by dbms_stats to create extended statistics
and my slide (with presenter notes) is:
[slide screenshot: CaptureSPDDISABLE]

Basically, when we talk about SPD usage we are talking about the dynamic sampling that is triggered by an SPD in MISSING_STATS or PERMANENT state. We are not talking about the column groups that are created by dbms_stats when it encounters a 'MISSING_STATS' SPD, in the hope of getting better estimations and finally having the SPD in HAS_STATS until it is purged by Auto Drop.
(if you find that sentence too long, please come to my presentation on Sunday, there’s a demo and a diagram about those states)

Let's see the different ways we can disable SQL Plan Directive usage, and what the consequences are for dynamic sampling and column groups.

Default behaviour

The default behaviour is that a misestimate creates an SPD in NEW, and future optimizations will 'use' the SPD, doing dynamic sampling. When the misestimate is confirmed, the state becomes MISSING_STATS. At that point, dbms_stats may create column groups. Future optimizations can then set the state to HAS_STATS or PERMANENT, depending on whether the misestimate is fixed by the column group statistics.
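You can follow those state transitions with a query similar to the one I use later in this post (here filtered on my DEMO schema):

select d.directive_id, d.type, d.state,
       extract(d.notes,'/spd_note/internal_state') internal_state
  from dba_sql_plan_directives d
 where d.directive_id in (select directive_id
                            from dba_sql_plan_dir_objects
                           where owner='DEMO');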
You may want to disable that normal behavior because:

  • You have a problem with dynamic sampling (long parse time, bugs, etc)
  • You have a problem with extended stats (bad execution plans)

You see the usage from the execution plan:

Note:
- dynamic statistics used: dynamic sampling (level=2)
- 1 Sql Plan Directive used for this statement

and you see the column group from:

SQL> select extension_name,extension from user_stat_extensions where table_name='DEMO';
EXTENSION_NAME EXTENSION
---------------------------------------- ----------------------------------------
SYS_STSPJNMIY_SDQK5$W04PFYKBIW ("A","B","C","D")

set optimizer_features_enable='11.2.0.4'

SQL Plan directives being a 12c feature, you can disable all 12c optimizer features.
Actually, this is the same as setting “_optimizer_dsdir_usage_control”=0 so you can see below that it doesn’t disable all SPD behavior.

set "_optimizer_dsdir_usage_control"=0

Look at the name and the description: controls optimizer usage of dynamic sampling directives
It disables only the usage, not the creation of SPDs (status NEW), and not the creation of column groups (if you already have an SPD in MISSING_STATS).
So if you have no SPD at all (all dropped) and you set "_optimizer_dsdir_usage_control"=0, then you will see SPDs created but not used, which means no dynamic sampling coming from SPDs. And because they are not used, the state remains NEW and no column groups are created.

However, if you already have SPD or set this at session level only, you may have unexpected behaviour.

set optimizer_adaptive_features=false

This disables all adaptive features of the optimizer and that’s probably too wide.
It achieves our goal as it even disables the creation of SPDs, but it also disables Adaptive Plans, which are a very nice feature. I haven't seen any bad effect of Adaptive Join until now (please comment if you have had a bad experience with it).

I tried to disable the adaptive features and then re-enable adaptive plans only:

SQL> alter session set optimizer_adaptive_features=false;
Session altered.
SQL> alter session set "_optimizer_adaptive_plans"=true;
Session altered.

but it doesn’t work. Adaptive Plan remains disabled.

set "_optimizer_gather_feedback"=false

SPDs are created when an Auto Re-optimization occurs, which is an evolution of Cardinality Feedback.
If you disable Cardinality Feedback, then no SPDs will be created.
You might think that I have the same problem as above because it disables more features than only SPDs, but actually I don't like cardinality feedback, so that's not a problem for me…

set "_optimizer_enable_extended_stats"=false

Ok, if your problem is not about dynamic sampling but only about the extended statistics that come from the created column groups, then you can disable extended statistics usage for your session or query.
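A minimal sketch of both options, using the column group shown earlier on my DEMO table (this touches an underscore parameter, so test it on your version first):

alter session set "_optimizer_enable_extended_stats"=false;
-- and/or drop the column group itself:
exec dbms_stats.drop_extended_stats(user,'DEMO','("A","B","C","D")');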

set "_column_tracking_level"=0

I was going to write "I don't yet know a way to disable column group creation", but then I remembered the 11g way to create column groups automatically with dbms_stats.seed_col_usage. What it actually does is set "_column_tracking_level" to 3.
Then I tried "_column_tracking_level"=0 and it is indeed a way to avoid column group creation by dbms_stats. But basic column usage will not be tracked either.

dbms_spd.alter_sql_plan_directive(:directive_id,'ENABLED','NO');

Yes, you can disable the directive, but once again this disables only the usage: not the creation of the SPD in NEW, and not the creation of column groups for MISSING_STATS. It is similar to "_optimizer_dsdir_usage_control"=0 but at the directive level.
This means that if the state is MISSING_STATS, column groups may be created anyway:

12:10:27 SQL> select directive_id,state,last_used,extract(notes,'/spd_note') from dba_sql_plan_directives where directive_id in(select directive_id from dba_sql_plan_dir_objects where owner='DEMO' ) order by type desc;
DIRECTIVE_ID STATE LAST_USED
------------------------------ ---------- ---------------------------------------------------------------------------
EXTRACT(NOTES,'/SPD_NOTE')
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
17377721491866490983 USABLE 03-DEC-15 12.10.26.000000000 PM
<spd_note><internal_state>MISSING_STATS</internal_state><redundant>NO</redundant><spd_text>{EC(DEMO.DEMO)[A, B, C, D]}</spd_text></spd_note>
 
12:10:27 SQL> exec dbms_spd.alter_sql_plan_directive(&d,'ENABLED','NO');
12:10:27 SQL> exec dbms_spd.alter_sql_plan_directive(&d,'STATE','NEW');
12:10:27 SQL> exec dbms_stats.gather_table_stats(user,'DEMO',options=>'GATHER AUTO',no_invalidate=>false);
12:10:27 SQL> select extension_name,extension from user_stat_extensions where table_name='DEMO';
 
EXTENSION_NAME EXTENSION
---------------------------------------- ----------------------------------------
SYS_STSPJNMIY_SDQK5$W04PFYKBIW ("A","B","C","D")

dbms_spd.alter_sql_plan_directive(:directive_id,'ENABLED','NO');
dbms_spd.alter_sql_plan_directive(:directive_id,'STATE','NEW');

Because of the problem above, the idea is to set the state back to NEW, because we observed that in NEW the column groups are not created. Unfortunately, they are still created here. I tried to set LAST_USAGE to null, but that was no better. It is probably easy to see what is different in the underlying tables, but that's enough for this blog post…

There is something else about disabling the SPDs: if you disable them in NEW status, they will be purged after 53 weeks and you end up in the next case, where they are dropped.

dbms_spd.drop_sql_plan_directive(:directive_id);

If you drop it (or if it is dropped by auto drop after the retention period), then it will probably reappear for the same reason it appeared the first time (a misestimate), so this is not a solution.
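If you nevertheless want to clean up, here is a minimal sketch (assuming DBA privileges; the schema name DEMO is simply the one used in this demo) that drops every directive referencing an object of that schema:

begin
  -- loop over all directives that reference at least one object owned by DEMO
  for d in (select distinct directive_id
              from dba_sql_plan_dir_objects
             where owner = 'DEMO') loop
    dbms_spd.drop_sql_plan_directive(d.directive_id);
  end loop;
end;
/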

set "_optimizer_ads_use_result_cache"=false

OK, last one. Maybe you have no problem with SPD, nor with extended stats, nor even with dynamic sampling. If your issue is the fact that dynamic sampling uses the result cache, then you can disable that point: "_optimizer_ads_use_result_cache"=false removes the RESULT_CACHE(snapshot=3600) hint from the DS_SVC queries. But try to increase the result cache size before doing that.

Conclusion

Don't disable all 12c features, and don't disable all adaptive features.
If you don't want SPDs at all, the most reliable approach is to drop all existing SPDs and set "_optimizer_dsdir_usage_control"=0.
If you want to manage them and disable only some of them, then look at dbms_spd and monitor their state.

 

Cet article SQL Plan Directive: disabling usage and column groups est apparu en premier sur Blog dbi services.

How to read XML database alert log?


Since Oracle 11g, Oracle maintains two copies of the database alert log in the ADR: a flat text file in the trace sub-directory and an XML-like file in the alert folder. I recently had a case at a customer where the log.xml had been moved to another place and compressed for archiving reasons. As the regular text file no longer contained the old data, the goal was to exploit the archived XML-like file.

When the file is still located in its normal location, it’s very easy to read it using the command “show alert” in ADRCI.

oracle@vmtestol6:/home/oracle/ [DB121] adrci
ADRCI: Release 12.1.0.2.0 - Production on Thu Dec 3 16:20:31 2015
Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

ADR base = "/u00/app/oracle"
adrci> show homes
ADR Homes: 
diag/rdbms/db121_site1/DB121
diag/tnslsnr/vmtestol6/listener
adrci> set home diag/rdbms/db121_site1/DB121
adrci> show alert

ADR Home = /u00/app/oracle/diag/rdbms/db121_site1/DB121:
*************************************************************
Output the results to file: /tmp/alert_3268_13985_DB121_1.ado

So ADRCI is able to parse all the <msg> tags and convert them into something readable; there is no need to write a parser.

To avoid losing information by overwriting the current file, we cannot simply put the archived file back into its original location.
The trick is to create a temporary diagnostic directory and use ADRCI from there to view the alert log.
There is no need to use the same DB name, but it is important to re-create the diagnostic folder hierarchy, otherwise you will get an error when trying to set the ADR base.

oracle@vmtestol6:/tmp/ [DB121] adrci
ADRCI: Release 12.1.0.2.0 - Production on Thu Dec 3 22:21:52 2015
Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

ADR base = "/u00/app/oracle"
adrci> set base /tmp
DIA-48447: The input path [/tmp] does not contain any ADR homes

Let’s create the hierarchy expected by ADRCI:

oracle@vmtestol6:/tmp/ [DB121] ls -l log_20151203.zip
-rw-r--r--. 1 oracle oinstall 162357  3 déc.  16:28 log_20151203.zip
oracle@vmtestol6:/tmp/ [DB121] mkdir -p diag/rdbms/db1/db1/alert
oracle@vmtestol6:/tmp/ [DB121] unzip log_20151203.zip -d diag/rdbms/db1/db1/alert
Archive:  log_20151203.zip
  inflating: diag/rdbms/db1/db1/alert/log.xml  
oracle@vmtestol6:/tmp/ [DB121] adrci
ADRCI: Release 12.1.0.2.0 - Production on Thu Dec 3 17:04:27 2015
Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

ADR base = "/tmp"
adrci> show alert
...
2015-12-02 16:00:26.469000 +01:00
Instance shutdown complete
:w alert_DB121.log
"alert_DB121.log" [New] 7676L, 321717C written

Then it is easy to save the file back to a flat text format! ADRCI also allows you to run some commands to look for errors and so on…
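For example, a possible variant (paths and predicate are just examples) is to filter the messages and spool the result directly from ADRCI:

adrci> set base /tmp
adrci> set home diag/rdbms/db1/db1
adrci> spool /tmp/alert_db1_errors.txt
adrci> show alert -term -p "MESSAGE_TEXT LIKE '%ORA-%'"
adrci> spool off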

 

Cet article How to read XML database alert log? est apparu en premier sur Blog dbi services.


Manipulate Stretch database feature by script


On November 30th, I presented the Stretch Database feature at "Les Journées SQL Server 2015" in Paris, where I explained how to manage this new SQL Server 2016 CTP 3.0 feature by script.
I decided to share my demonstration in this blog post.

 

I – Enabling the feature at the instance level

First, you need to enable the “Remote Data Archive” option at the instance level.
To check if the option is enabled:

sp_configure 'REMOTE DATA ARCHIVE';
GO

 

As it returns a “run-value” set to “0”, I need to enable the option as follows:

sp_configure 'remote data archive', '1';
RECONFIGURE
GO

 

II – Enabling the feature at the database level

I create a database named “Stretch_DB” for this demonstration:
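A minimal sketch, assuming default settings for the data and log files:

-- create the demo database used throughout this post
CREATE DATABASE Stretch_DB;
GO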

By default, the feature is obviously disabled for my database:

SELECT is_remote_data_archive_enabled FROM sys.databases WHERE name = 'Stretch_DB';

During the activation process of the feature, I need to link my local database to Azure, so I have to create an Azure SQL Database server. I will not detail this creation in this blog because you can find several sources on the web describing how to do it (cf. Getting Started with Azure SQL Database v12).

Do not forget to configure the remote firewall to accept the connection of the local SQL Server instance.

To access the remote SQL Database server, I also need to use the Server Admin login (provided during the SQL Database server creation step).

 

So I create the credential on my local instance:

USE Stretch_DB;
GO
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'pa$$w0rd';
CREATE DATABASE SCOPED CREDENTIAL StretchCred WITH IDENTITY = 'myLogin', SECRET = 'myPa$$w0rd';
GO

And I link my local database to my remote server:

ALTER DATABASE STRETCH_DB
    SET REMOTE_DATA_ARCHIVE = ON ( SERVER = 'JSS.DATABASE.WINDOWS.NET' , CREDENTIAL = STRETCHCRED ) ;
GO

 

I can check that my feature is enabled for my database:

SELECT IS_REMOTE_DATA_ARCHIVE_ENABLED FROM SYS.DATABASES
    WHERE NAME = 'STRETCH_DB';
GO

 

Also, I can see my remote database created in Azure:

SELECT * FROM SYS.REMOTE_DATA_ARCHIVE_DATABASES

 

III – Enabling the feature at the table level

I create a table named "ErrorLog" in the "dbo" schema (a possible definition is sketched below).
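A minimal sketch; the column layout is an assumption, only the table and schema names matter for the rest of the demo:

CREATE TABLE dbo.ErrorLog (
    ErrorLogID   INT IDENTITY(1,1) NOT NULL,
    ErrorTime    DATETIME2 NOT NULL,
    ErrorMessage NVARCHAR(1000) NULL
);
GO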

Then I enable the migration for this table:

ALTER TABLE DBO.ERRORLOG
    SET ( REMOTE_DATA_ARCHIVE ( MIGRATION_STATE = OUTBOUND )  ) ;
GO

 

I can check that my feature is enabled for my table:

SELECT IS_REMOTE_DATA_ARCHIVE_ENABLED  FROM SYS.TABLES
WHERE NAME = 'ERRORLOG'

 

Also, I can see the remote table created in Azure:

SELECT * FROM SYS.REMOTE_DATA_ARCHIVE_TABLES

 

I can pause the migration process for my table. Useful if I need to troubleshoot or monitor my feature.

ALTER TABLE dbo.ErrorLog
	SET ( REMOTE_DATA_ARCHIVE ( MIGRATION_STATE = PAUSED )  ) ;
GO

 

IV – Monitoring the Stretch Database feature

To check the migration state of the table:

SELECT REMOTE_DATA_ARCHIVE_MIGRATION_STATE_DESC FROM SYS.TABLES
WHERE NAME = 'ERRORLOG'

 

To list the migration batch of the table:

SELECT * FROM SYS.DM_DB_RDA_MIGRATION_STATUS ORDER BY START_TIME_UTC DESC;
GO

 

To obtain a data consumption of the table (locally and remotely):

SP_SPACEUSED 'ERRORLOG', 'TRUE', 'REMOTE_ONLY'
GO
SP_SPACEUSED 'ERRORLOG', 'TRUE', 'LOCAL_ONLY'
GO

 

As soon as the Stretch Database feature is enabled, an Extended Events session is created to monitor the Stretch Database ecosystem.
To see these events, I made a script:

SELECT CAST(EVENT_DATA AS XML) AS EVENTDATA
INTO #CAPTURE_EVENT_DATA
FROM SYS.FN_XE_FILE_TARGET_READ_FILE('C:\MSSQL13.MSSQLSERVER\MSSQL\LOG\STRETCHDATABASE_HEALTH*.XEL', 'C:\MSSQL13.MSSQLSERVER\MSSQL\LOG\METAFILE.XEM', NULL, NULL);

SELECT 
	EVENT_DATA.VALUE('(./@NAME)', 'NVARCHAR(50)') AS EVENT_NAME,
	EVENT_DATA.VALUE('(./@TIMESTAMP)', 'DATETIME') AS EVENT_DATE,
	EVENT_DATA.VALUE('(DATA[@NAME="DATABASE_ID"]/VALUE)[1]', 'INT') AS EVENT_DB_ID,
	EVENT_DATA.VALUE('(DATA[@NAME="TABLE_ID"]/VALUE)[1]', 'BIGINT') AS EVENT_TABLE_ID,
	EVENT_DATA.VALUE('(DATA[@NAME="IS_SUCCESS"]/VALUE)[1]', 'NVARCHAR(5)') AS EVENT_SUCCESS,
	EVENT_DATA.VALUE('(DATA[@NAME="DURATION_MS"]/VALUE)[1]', 'INT') AS EVENT_DURATION,
	EVENT_DATA.VALUE('(DATA[@NAME="ROWS"]/VALUE)[1]', 'BIGINT') AS EVENT_ROWS,
	EVENT_DATA.VALUE('(DATA[@NAME="ERROR_NUMBER"]/VALUE)[1]', 'BIGINT') AS EVENT_ERROR_NUMBER,
	EVENT_DATA.VALUE('(DATA[@NAME="SEVERITY"]/VALUE)[1]', 'INT') AS EVENT_SEVERITY,
	EVENT_DATA.VALUE('(DATA[@NAME="MESSAGE"]/VALUE)[1]', 'NVARCHAR(1000)') AS EVENT_MESSAGE
FROM #CAPTURE_EVENT_DATA
	CROSS APPLY EVENTDATA.NODES('//EVENT') XED (EVENT_DATA)

 

The Stretch Database feature is an interesting feature of SQL Server 2016. If you want more information, you should take a look at our other blogs:

SQL Server 2016 CTP2: Stretch database feature – Part 1
SQL Server 2016 CTP2: Stretch database feature – Part 2
SQL Server 2016 CTP3.0: Stretch Database enhancements

Cet article Manipulate Stretch database feature by script est apparu en premier sur Blog dbi services.

GoldenGate 12.2 new parameter ALLOWOUTPUTDIR


I will start a series of blog posts about the new features of GoldenGate 12.2.

This first post covers the new parameter ALLOWOUTPUTDIR.

When I tried GoldenGate 12.2 for the first time, I reused the same configuration as with GoldenGate 12.1: two virtual machines with OEL 6.5 and Oracle Database 12.1.0.2.4 Enterprise Edition.

First I configured my SCOTT extract process without problems, and the same for the REPSCOTT replicat process. But when I tried to configure the Data Pump extract process to transfer the trail files, I faced an error.

1. Configure Data Pump Extract process

GGSCI (goldengate122) 1> dblogin useridalias ggadmin
Successfully logged into database.

GGSCI (goldengate122 as ggadmin@DB1) 2> add extract DPSCOTT, EXTTRAILSOURCE /u04/app/goldengate/trail/DB1/sc
EXTRACT added.


GGSCI (goldengate122 as ggadmin@DB1) 3> add rmttrail /u05/ggtrail/DB2/tc, extract DPSCOTT
RMTTRAIL added.

GGSCI (goldengate122 as ggadmin@DB1) 4> edit params DPSCOTT



GGSCI (goldengate122 as ggadmin@DB1) 5> view params DPSCOTT

extract dpscott
useridalias ggadmin
DBOPTIONS ALLOWUNUSEDCOLUMN
rmthost 192.168.56.109, MGRPORT 7809
RMTTRAIL /u05/ggtrail/DB2/tc
TABLE scott.emp;


GGSCI (goldengate122 as ggadmin@DB1) 6> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
EXTRACT     STOPPED     DPSCOTT     00:00:00      00:01:25
EXTRACT     RUNNING     SCOTT       00:00:10      00:00:03


GGSCI (goldengate122 as ggadmin@DB1) 7> start extract DPSCOTT

Sending START request to MANAGER ...
EXTRACT DPSCOTT starting


GGSCI (goldengate122 as ggadmin@DB1) 8>

It is a standard Data Pump extract.

But when I started it, it always ended up in status "abended".

So I checked the error log (ggserr.log) to find the issue and this was reported there:

2015-12-02 09:40:42  INFO    OGG-00993  Oracle GoldenGate Capture for Oracle, dpscott.prm:  EXTRACT DPSCOTT started.
2015-12-02 09:40:44  INFO    OGG-00987  Oracle GoldenGate Command Interpreter for Oracle:  GGSCI command (oracle): info all.
2015-12-02 09:40:46  INFO    OGG-00987  Oracle GoldenGate Command Interpreter for Oracle:  GGSCI command (oracle): info all.
2015-12-02 09:40:47  INFO    OGG-00987  Oracle GoldenGate Command Interpreter for Oracle:  GGSCI command (oracle): info all.
2015-12-02 09:40:47  INFO    OGG-01226  Oracle GoldenGate Capture for Oracle, dpscott.prm:  Socket buffer size set to 27985 (flush size 27985).
2015-12-02 09:40:47  WARNING OGG-06591  Oracle GoldenGate Capture for Oracle, dpscott.prm:  Reading the output trail file /u05/ggtrail/DB2/tb000000 encounters an error from position 0, rescan from the file header to recover.
2015-12-02 09:40:47  ERROR   OGG-01031  Oracle GoldenGate Capture for Oracle, dpscott.prm:  There is a problem in network communication, a remote file problem, encryption keys for target and source do not match (if using ENCRYPT) or an unknown error. (Reply received is Output file /u05/ggtrail/DB2/tc000000 is not in any allowed output directories.).
2015-12-02 09:40:47  ERROR   OGG-01668  Oracle GoldenGate Capture for Oracle, dpscott.prm:  PROCESS ABENDING.

After some tests and checks with my colleagues, I tried to put the remote trail file in the ./dirdat/ directory, which is one of the directories of the GoldenGate home, and oddly it then worked.

So the problem was the directory where I tried to put the remote trail files. In the official GoldenGate 12.2 documentation, we find the new parameter ALLOWOUTPUTDIR.

2. Modify the GLOBALS parameter file

Take care, this parameter must be set on the target and not on the source.

GGSCI (goldengate1222) 1> dblogin useridalias ggadmin
Successfully logged into database.

GGSCI (goldengate1222 as ggadmin@DB2) 3> view params ./GLOBALS

GGSCHEMA ggadmin
CHECKPOINTTABLE ggadmin.checkpoint
ALLOWOUTPUTDIR /u05/ggtrail/DB2/


GGSCI (goldengate1222 as ggadmin@DB2) 4>

With this parameter, we can authorize GoldenGate to use a location other than its home directory to store trail files. If you need different directories for your trails, either use the same root (the parameter includes its sub-directories) or specify the ALLOWOUTPUTDIR parameter several times, as sketched below.
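A minimal GLOBALS sketch with several output directories (the /u06 path is just a hypothetical second location):

GGSCHEMA ggadmin
CHECKPOINTTABLE ggadmin.checkpoint
ALLOWOUTPUTDIR /u05/ggtrail/DB2/
ALLOWOUTPUTDIR /u06/ggtrail/DB3/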

Once the parameter setting is done, I can restart my Data Pump extract and now it works fine.

Conclusion

So we can see that GoldenGate 12.2 no longer allows the use of arbitrary directories by default, as previous versions did. We now have to tell GoldenGate explicitly where it is allowed to store the trails.

Cet article GoldenGate 12.2 new parameter ALLOWOUTPUTDIR est apparu en premier sur Blog dbi services.

SQL Monitoring in PostgreSQL (1) – the logging system


When developing an application as well as when the application is in production there is the need to identify long running queries. In Oracle one tool you might use for that is the SQL Monitor. In this post I’ll look into what PostgreSQL provides in this area.

PostgreSQL has a very strong logging system. This system can be used to log many, many server messages as well as information about SQL queries. To enable the background process that captures the server log messages and redirects them to log files, you need to set the logging_collector parameter to on as a first step:

(postgres@[local]:4448) [postgres] > alter system set logging_collector=on;
ALTER SYSTEM
Time: 30.390 ms
(postgres@[local]:4448) [postgres] > show logging_collector;
 logging_collector 
-------------------
 on
(1 row)

Once you have this enabled you need to tell PostgreSQL where you want to log to. This is done by setting the parameter log_directory:

(postgres@[local]:4448) [postgres] > show log_directory;
     log_directory      
------------------------
 /u02/pgdata/PG6/pg_log
(1 row)

In my case this is set to the pg_log directory which is located in my data directory. Additionally we can define how the log files will be named:

(postgres@[local]:4448) [postgres] > show log_filename;
   log_filename    
-------------------
 postgresql-%a.log
(1 row)

The placeholders which can be used are the same as in strftime. The default is 'postgresql-%Y-%m-%d_%H%M%S.log', which I set explicitly here:

(postgres@[local]:4448) [postgres] > alter system set log_filename='postgresql-%Y-%m-%d_%H%M%S.log';
ALTER SYSTEM
Time: 45.666 ms

I recommend setting the log_rotation_age or log_rotation_size parameter so that the log files are rotated:

(postgres@[local]:4448) [postgres] > show log_rotation_size;
 log_rotation_size 
-------------------
 10MB
(1 row)

Time: 1.015 ms
(postgres@[local]:4448) [postgres] > show log_rotation_age;
 log_rotation_age 
------------------
 8d
(1 row)

As we now have the basic settings available, let's check if we need to restart the server for the settings to take effect:

(postgres@[local]:4448) [postgres] > select name,pending_restart 
                                       from pg_settings 
                                      where name in ('log_filename','log_rotation_size'
                                                    ,'log_rotation_age','log_destination','logging_collector');
       name        | pending_restart 
-------------------+-----------------
 log_destination   | f
 log_filename      | f
 log_rotation_age  | f
 log_rotation_size | f
 logging_collector | f
(5 rows)

(postgres@[local]:4448) [postgres] > select pg_reload_conf();
 pg_reload_conf 
----------------
 t
(1 row)

OK, should be fine. Let's quickly check if there is a log file with some recent messages in the directory we specified:

postgres@oel7:/home/postgres/ [PG6] ls -altr /u02/pgdata/PG6/pg_log
total 68
drwx------. 19 postgres postgres 4096 Dec  5 11:01 ..
drwx------.  2 postgres postgres   45 Dec  5 11:01 .
-rw-------.  1 postgres postgres  384 Dec  5 11:01 postgresql-2015-12-05_100103.log

Looks fine. Back to what this post is about. How can we log sql statements? One parameter in this area is log_duration. When we set this to on:

(postgres@[local]:4448) [postgres] > alter system set log_duration=on;
ALTER SYSTEM
Time: 38.978 ms
(postgres@[local]:4448) [postgres] > select name,pending_restart 
                                       from pg_settings 
                                      where name in ('log_duration');
     name     | pending_restart 
--------------+-----------------
 log_duration | f
(1 row)

Time: 2.044 ms

(postgres@[local]:4448) [postgres] > select pg_reload_conf();
 pg_reload_conf 
----------------
 t
(1 row)

… the duration of every statement is logged to the log file:

(postgres@[local]:4448) [postgres] > create table tt ( a int );
CREATE TABLE
Time: 23.829 ms
(postgres@[local]:4448) [postgres] > insert into tt (a) values (generate_series(1,1000));
INSERT 0 1000
Time: 37.333 ms
(postgres@[local]:4448) [postgres] > select count(*) from tt;
 count 
-------
  1000
(1 row)

Having a look at the log file we can confirm that the duration is logged:

postgres@oel7:/home/postgres/ [PG6] tail /u02/pgdata/PG6/pg_log/postgresql-2015-12-05_100103.log 
2015-12-05 10:08:07.044 GMT - 4 - 4609 - postgres@postgres LOG:  statement: create table tt ( a int );
2015-12-05 10:08:07.067 GMT - 5 - 4609 - postgres@postgres LOG:  duration: 23.669 ms
2015-12-05 10:08:22.052 GMT - 6 - 4609 - postgres@postgres LOG:  duration: 37.163 ms
2015-12-05 10:08:25.519 GMT - 7 - 4609 - postgres@postgres LOG:  duration: 22.992 ms

Well, is the duration without the text of the statement very helpful? Not really, and this is where the log_min_duration_statement parameter comes into play. Setting it to any value greater than -1 logs each statement that runs longer than the value you specified. If you set it to zero, all statements will be logged:

(postgres@[local]:4448) [postgres] > alter system set log_min_duration_statement=0;
ALTER SYSTEM

Time: 0.188 ms
(postgres@[local]:4448) [postgres] > select pg_reload_conf();
 pg_reload_conf 
----------------
 t
(1 row)

(postgres@[local]:4448) [postgres] > select count(*) tt;
 tt 
----
  1
(1 row)

Time: 0.680 ms

Checking the logfile again:

postgres@oel7:/home/postgres/ [PG6] tail -1 /u02/pgdata/PG6/pg_log/postgresql-2015-12-05_100103.log
2015-12-05 10:13:48.392 GMT - 8 - 4651 - postgres@postgres LOG:  duration: 0.216 ms  statement: select count(*) tt;

Much better: we have the timestamp when the statement was executed, the session log line number (8), the operating system process id and the user and database that executed the statement.

That's it for now. Make yourself familiar with the various parameters of the logging system; there are plenty of things you can control and adjust.
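For example, the prefix of the log lines shown above is controlled by log_line_prefix. A minimal sketch (the exact format string is an assumption, adjust it to your needs):

-- timestamp - session line number - pid - user@database
alter system set log_line_prefix = '%m - %l - %p - %u@%d ';
select pg_reload_conf();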

The next post will look at another way to identify problematic statements.

Btw: The PostgreSQL version I used here is:

(postgres@[local]:4448) [postgres] > select version();
                                                     version                                                      
------------------------------------------------------------------------------------------------------------------
 PostgreSQL 9.5alpha2 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.8.3 20140911 (Red Hat 4.8.3-9), 64-bit
(1 row)

Time: 0.376 ms
 

Cet article SQL Monitoring in PostgreSQL (1) – the logging system est apparu en premier sur Blog dbi services.

GoldenGate 12.2 additional column on the target


My colleague Hervé posted a blog last week concerning a bug in GoldenGate 12.1. You can find the blog here.

In fact the problem is that GoldenGate works with the column position and not with the column name. To follow up on this bug, I tried to reproduce it with GoldenGate 12.2, which was released last week.

As Hervé did, I used the scott/tiger schema. The goal is not to test an initial load.

Source>@utlsampl.sql
Target>@utlsampl.sql

One small note: contrary to my colleague, I don't use a downstream server, just two virtual machines.

1. Configure SCOTT extract process on the source machine

GGSCI (goldengate122) 1> dblogin useridalias ggadmin
Successfully logged into database.

GGSCI (goldengate122 as ggadmin@DB1) 2> register extract scott database

2015-12-04 09:40:36  INFO    OGG-02003  Extract SCOTT successfully registered with database at SCN 558330.

GGSCI (goldengate122 as ggadmin@DB1) 3> add extract scott integrated tranlog, begin now
EXTRACT (Integrated) added.


GGSCI (goldengate122 as ggadmin@DB1) 4> add trandata scott.emp

Logging of supplemental redo data enabled for table SCOTT.EMP.
TRANDATA for scheduling columns has been added on table 'SCOTT.EMP'.
TRANDATA for instantiation CSN has been added on table 'SCOTT.EMP'.
GGSCI (goldengate122 as ggadmin@DB1) 5> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
EXTRACT     STOPPED     SCOTT       00:00:00      00:00:21


GGSCI (goldengate122 as ggadmin@DB1) 6> add exttrail /u04/app/goldengate/trail/DB1/sc, extract SCOTT
EXTTRAIL added.

GGSCI (goldengate122 as ggadmin@DB1) 7> edit params scott



GGSCI (goldengate122 as ggadmin@DB1) 8> view params scott

Extract scott
useridalias ggadmin
DDL INCLUDE MAPPED
TranlogOptions IntegratedParams (max_sga_size 256)
Exttrail /u04/app/goldengate/trail/DB1/sc
LOGALLSUPCOLS
UPDATERECORDFORMAT COMPACT
Table SCOTT.emp;


GGSCI (goldengate122 as ggadmin@DB1) 9> start extract scott

Sending START request to MANAGER ...
EXTRACT SCOTT starting


GGSCI (goldengate122 as ggadmin@DB1) 10>

2. Configure Data Pump extract process to transfer trail files

GGSCI (goldengate122) 1> dblogin useridalias ggadmin
Successfully logged into database.

GGSCI (goldengate122 as ggadmin@DB1) 2> add extract DPSCOTT, EXTTRAILSOURCE /u04/app/goldengate/trail/DB1/sc
EXTRACT added.


GGSCI (goldengate122 as ggadmin@DB1) 3> add rmttrail /u05/ggtrail/DB2/tc, extract DPSCOTT
RMTTRAIL added.

GGSCI (goldengate122 as ggadmin@DB1) 4> edit params DPSCOTT



GGSCI (goldengate122 as ggadmin@DB1) 5> view params DPSCOTT

extract dpscott
useridalias ggadmin
DBOPTIONS ALLOWUNUSEDCOLUMN
rmthost 192.168.56.109, MGRPORT 7809
RMTTRAIL /u05/ggtrail/DB2/tc
TABLE scott.emp;


GGSCI (goldengate122 as ggadmin@DB1) 6> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
EXTRACT     STOPPED     DPSCOTT     00:00:00      00:01:25
EXTRACT     RUNNING     SCOTT       00:00:10      00:00:03


GGSCI (goldengate122 as ggadmin@DB1) 7> start extract DPSCOTT

Sending START request to MANAGER ...
EXTRACT DPSCOTT starting


GGSCI (goldengate122 as ggadmin@DB1) 8>

3. Configure SCOTT replicat process on the target machine

GGSCI (goldengate1222) 1> dblogin useridalias ggadmin
Successfully logged into database.

GGSCI (goldengate1222 as ggadmin@DB2) 2> add replicat repscott, exttrail /u05/ggtrail/DB2/tc
REPLICAT added.


GGSCI (goldengate1222 as ggadmin@DB2) 3> edit params REPSCOTT



GGSCI (goldengate1222 as ggadmin@DB2) 4> view params REPSCOTT

REPLICAT REPSCOTT
ASSUMETARGETDEFS
USERIDALIAS ggadmin
DISCARDFILE /u04/app/goldengate/product/12.2.0.0/DB2/discard/REPSCOTT_discard.txt, append, megabytes 10
MAP SCOTT.emp, TARGET SCOTT.emp;


GGSCI (goldengate1222 as ggadmin@DB2) 5> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
REPLICAT    STOPPED     REPSCOTT    00:00:00      00:00:20


GGSCI (goldengate1222 as ggadmin@DB2) 6> start replicat repscott

Sending START request to MANAGER ...
REPLICAT REPSCOTT starting


GGSCI (goldengate1222 as ggadmin@DB2) 7>

Now we have a running GoldenGate replication for the table scott.emp including the DDL

======START DEMO =======

On the target database DB2 we create an additional column

SQL> connect scott/tiger
Connected.
SQL> alter table emp add TARGET_COL varchar(10) default null;

Table altered.

On the source database DB1 after that, we create an additional column on the source database, which will be replicated to the target database.

SQL> connect scott/tiger
Connected.
SQL> alter table emp add SOURCE_COL varchar(10) default null;

Table altered.

Now on target database DB2 we have the 2 additional columns, as described below:

SQL> select ename,target_col,source_col from emp;

ENAME      TARGET_COL SOURCE_COL
---------- ---------- ----------
SMITH
ALLEN
WARD
...

And on the source database DB1  there is only one additional column

SQL> select ename, source_col from emp;

ENAME      SOURCE_COL
---------- ----------
SMITH
ALLEN
WARD
...

Now, on the source database DB1, it's time to update the entries of the additional column:

SQL> update emp set source_col='change';

14 rows updated.

SQL> commit;

Commit complete.

SQL> select ename, source_col from emp;

ENAME      SOURCE_COL
---------- ----------
SMITH      change
ALLEN      change
WARD       change
...

Up to this point, everything behaves as in my colleague's test.

Now on the target database DB2 we check the updated entries of the scott.emp table:

SQL> select ename,target_col,source_col from emp;

ENAME      TARGET_COL SOURCE_COL
---------- ---------- ----------
SMITH                 change
ALLEN                 change
WARD                  change
...

And contrary to version 12.1, it is the right column that was updated.

Conclusion: in version 12.2, GoldenGate now works with column names and no longer with column positions. It is not a revolution (other products like Dbvisit Replicate have been able to do this for years), but at least it works now.

Cet article GoldenGate 12.2 additional column on the target est apparu en premier sur Blog dbi services.

SQL Monitoring in PostgreSQL (2) – pg_stat_statements


The last post looked into how you can monitor queries using the logging system. This post will introduce pg_stat_statements.

pg_stat_statements is a module that needs to be loaded and is not available in the default configuration. Loading it is quite easy. Create the extension as usual:

postgres@oel7:/home/postgres/ [PG6] sqh
Null display is "NULL".
Timing is on.
psql (9.5alpha2)
Type "help" for help.

(postgres@[local]:4448) [postgres] > create extension pg_stat_statements;
CREATE EXTENSION
Time: 281.765 ms
(postgres@[local]:4448) [postgres] > \dx
                                     List of installed extensions
        Name        | Version |   Schema   |                        Description                        
--------------------+---------+------------+-----------------------------------------------------------
 btree_gist         | 1.1     | public     | support for indexing common datatypes in GiST
 pg_stat_statements | 1.3     | public     | track execution statistics of all SQL statements executed
 plpgsql            | 1.0     | pg_catalog | PL/pgSQL procedural language
 postgres_fdw       | 1.0     | public     | foreign-data wrapper for remote PostgreSQL servers
(4 rows)

After the extension is available we need to adjust the shared_preload_libraries parameter:

(postgres@[local]:4448) [postgres] > show shared_preload_libraries;
 shared_preload_libraries 
--------------------------
 
(1 row)

(postgres@[local]:4448) [postgres] > alter system set shared_preload_libraries='pg_stat_statements';
ALTER SYSTEM
Time: 55.005 ms

(postgres@[local]:4448) [postgres] > select name,pending_restart 
                                       from pg_settings 
                                      where name in ('shared_preload_libraries');
           name           | pending_restart 
--------------------------+-----------------
 shared_preload_libraries | f
(1 row)

Time: 1.517 ms
(postgres@[local]:4448) [postgres] > select pg_reload_conf();
 pg_reload_conf 
----------------
 t
(1 row)

Basically, pg_stat_statements can be used from now on. But there are some parameters to look at if you want to fine-tune it; check the documentation for their description.

(postgres@[local]:4448) [postgres] > show pg_stat_statements.max;
-[ RECORD 1 ]----------+-----
pg_stat_statements.max | 5000

Time: 0.230 ms
(postgres@[local]:4448) [postgres] > show pg_stat_statements.track;
-[ RECORD 1 ]------------+----
pg_stat_statements.track | top

Time: 0.211 ms
(postgres@[local]:4448) [postgres] > show pg_stat_statements.track_utility;
-[ RECORD 1 ]--------------------+---
pg_stat_statements.track_utility | on

Time: 0.215 ms
(postgres@[local]:4448) [postgres] > show pg_stat_statements.save;
-[ RECORD 1 ]-----------+---
pg_stat_statements.save | on

Time: 0.212 ms
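
If you want to change them, a minimal sketch (the values are just examples; note that pg_stat_statements.max only takes effect after a restart):

alter system set pg_stat_statements.track = 'all';
alter system set pg_stat_statements.max = 10000;
select pg_reload_conf();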

When we installed the extension a view was created with the following columns:

(postgres@[local]:4448) [postgres] > \d pg_stat_statements
          View "public.pg_stat_statements"
       Column        |       Type       | Modifiers 
---------------------+------------------+-----------
 userid              | oid              | 
 dbid                | oid              | 
 queryid             | bigint           | 
 query               | text             | 
 calls               | bigint           | 
 total_time          | double precision | 
 min_time            | double precision | 
 max_time            | double precision | 
 mean_time           | double precision | 
 stddev_time         | double precision | 
 rows                | bigint           | 
 shared_blks_hit     | bigint           | 
 shared_blks_read    | bigint           | 
 shared_blks_dirtied | bigint           | 
 shared_blks_written | bigint           | 
 local_blks_hit      | bigint           | 
 local_blks_read     | bigint           | 
 local_blks_dirtied  | bigint           | 
 local_blks_written  | bigint           | 
 temp_blks_read      | bigint           | 
 temp_blks_written   | bigint           | 
 blk_read_time       | double precision | 
 blk_write_time      | double precision | 

We can now query the view for information we are interested in, e.g.:

(postgres@[local]:4448) [postgres] > \x
Expanded display is on.
(postgres@[local]:4448) [postgres] > select userid,query,calls,total_time from pg_stat_statements;
-[ RECORD 1 ]
userid     | 10
query      | alter system set logging_collector=on;
calls      | 1
total_time | 30.13
-[ RECORD 2 ]
userid     | 10
query      | create extension pg_stat_statements;
calls      | 2
total_time | 250.54
-[ RECORD 3 ]
userid     | 10
query      | select name,pending_restart from pg_settings where name in (?,?,?,?,?);
calls      | 1
total_time | 0.627
-[ RECORD 4 ]
userid     | 10
query      | show log_rotation_size;
calls      | 1
total_time | 0.006

Additionally we can call a function which is named exactly the same as the view:

(postgres@[local]:4448) [postgres] > select * from pg_stat_statements(true);
-[ RECORD 1 ]
userid              | 10
dbid                | 13295
queryid             | 780340104
query               | alter system set logging_collector=on;
calls               | 1
total_time          | 30.13
min_time            | 30.13
max_time            | 30.13
mean_time           | 30.13
stddev_time         | 0
rows                | 0
shared_blks_hit     | 0
shared_blks_read    | 0
shared_blks_dirtied | 0
shared_blks_written | 0
local_blks_hit      | 0
local_blks_read     | 0
local_blks_dirtied  | 0
local_blks_written  | 0
temp_blks_read      | 0
temp_blks_written   | 0
blk_read_time       | 0
blk_write_time      | 0
-[ RECORD 2 ]
userid              | 10
dbid                | 13295
queryid             | 1392856018
query               | create extension pg_stat_statements;
calls               | 2
total_time          | 250.54
min_time            | 1.489
max_time            | 249.051
mean_time           | 125.27
stddev_time         | 123.781
rows                | 0
shared_blks_hit     | 1150
shared_blks_read    | 90
Time: 0.742 ms

On top of either the view or the function we can now start to troubleshoot issues with the queries the server executes, for example with a query like the one below. Hope this helps.
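A possible starting point (a sketch, not an official report): the five most time-consuming statements since the last reset:

-- total_time and the derived average are in milliseconds
select query, calls, total_time, rows,
       round((total_time / calls)::numeric, 3) as avg_ms
  from pg_stat_statements
 order by total_time desc
 limit 5;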

In the next post I’ll introduce pg_activity.

 

Cet article SQL Monitoring in PostgreSQL (2) – pg_stat_statements est apparu en premier sur Blog dbi services.

SQL Monitoring in PostgreSQL (3) – pg_activity


The last posts looked at how the logging system and the pg_stat_statements extension can be used to monitor sql statements in PostgreSQL. This post will introduce pg_activity which is very similar to htop.

There are some dependencies which need to be installed before we can start installing pg_activity. The first one is python. As I am on a redhat based distribution this is quite easy:

[root@oel7 ~] yum install -y python

Then we need to install psycopg, the PostgreSQL database adapter for the Python language (note: if you did not install PostgreSQL in the default location, edit the setup.cfg script and provide the path to pg_config, otherwise the install will fail):

postgres@oel7:/var/tmp/ [dummy] tar -axf psycopg2-2.6.1.tar.gz
postgres@oel7:/var/tmp/ [dummy] cd psycopg2-2.6.1
postgres@oel7:/var/tmp/psycopg2-2.6.1/ [dummy] python setup.py build
postgres@oel7:/var/tmp/psycopg2-2.6.1/ [dummy] sudo python setup.py install

The next (and last) thing we need to have available is psutil, a Python library for querying OS statistics:

postgres@oel7:/var/tmp/ [dummy] tar -axf psutil-3.3.0.tar.gz
postgres@oel7:/var/tmp/ [dummy] cd psutil-3.3.0
postgres@oel7:/var/tmp/ [dummy] sudo python setup.py install

That’s it. Now we can install pg_activity:

postgres@oel7:/var/tmp/ [dummy] unzip pg_activity-master.zip
postgres@oel7:/var/tmp/ [dummy] cd pg_activity-master
postgres@oel7:/var/tmp/ [dummy] sudo python setup.py install

Quite easy. Let's see what we can do with it. If you are locally on the server where your PostgreSQL instance runs you can just start pg_activity (I fired an SQL statement so that you can see at least one statement in the screenshot):

postgres@oel7:/home/postgres/ [PG3] pg_activity

pg_activity_1

There is a nice summary on the top like in top/htop. The different statements which are currently executing are displayed below.

Hitting “h” for help shows you the various options:
pg_activity_2

The "F1/2/3" switches are very nice when you want to display only blocking queries, only running queries or only waiting queries. Another great feature is that you do not need to install pg_activity on the server where PostgreSQL is running: the same connection options as in e.g. psql are available, so you can connect to any remote PostgreSQL instance you have access to:

postgres@oel7:/home/postgres/ [PG3] pg_activity --help
Usage: pg_activity [options]

htop like application for PostgreSQL server activity monitoring.

Options:
  --version             show program's version number and exit
  -U USERNAME, --username=USERNAME
                        Database user name (default: "postgres").
  -p PORT, --port=PORT  Database server port (default: "5432").
  -h HOSTNAME, --host=HOSTNAME
                        Database server host or socket directory
                        (default: "localhost").
  -d DBNAME, --dbname=DBNAME
                        Database name to connect to (default: "postgres").
  -C, --no-color        Disable color usage.
  --blocksize=BLOCKSIZE
                        Filesystem blocksize (default: 4096)
  --rds                 Enable support for AWS RDS
  --help                Show this help message and exit.
  --debug               Enable debug mode for traceback tracking.

  Display Options, you can exclude some columns by using them :
    --no-database       Disable DATABASE.
    --no-user           Disable USER.
    --no-client         Disable CLIENT.
    --no-cpu            Disable CPU%.
    --no-mem            Disable MEM%.
    --no-read           Disable READ/s.
    --no-write          Disable WRITE/s.
    --no-time           Disable TIME+.
    --no-wait           Disable W.

Conclusion: pg_activity is a small but very useful tool for monitoring a PostgreSQL instance. In the next posts I'll look into some more feature-rich monitoring solutions that are around for PostgreSQL instances.

 

Cet article SQL Monitoring in PostgreSQL (3) – pg_activity est apparu en premier sur Blog dbi services.

Control-M Application Integrator manages Microsoft PowerShell code & credentials


What are the advantages of "Control-M Application Integrator"?

At a customer running Control-M, I came across a large number of Windows and PowerShell scripts on a Windows server. Some of these scripts searched a log file for a specific keyword. All of them had a fixed CIFS path hard-coded in the script. To make this work, a share was mapped on the Windows server, the file was searched for the keyword, and the share was unmapped again; two scripts were used for each of these tasks. In one of the scripts the credentials were stored directly! The first script (a Windows script) was started regularly by a Control-M job, and the second script (PowerShell) was called directly from the first one.

So what are the advantages of the "Control-M Application Integrator" module?

  • No more local scripts needed on the servers
  • Security: no credentials in the scripts
  • Significant simplification
  • Better maintainability
  • Reusability of the new "Control-M Application Integrator" job type "String finder"

 

What does a "Control-M Application Integrator" implementation look like?

A new job type is created directly in Control-M's "Control-M Application Integrator".

The new job type is called "String finder"; as a first step, the share is mapped to drive X with "net use".

2015-11-18_09h51_25

if exist x:\ (
	net use x: /delete /yes
)
net use x: "{{Path}}" {{ShareUserPW}} /USER:{{ShareUserName}}

 

In the main part "Execution #1 – #3" we then have the PowerShell code:

2015-11-18_09h51_46

powershell.exe -nologo -ExecutionPolicy Bypass -NoProfile -Command \
"& {$COUNTES=@(Get-ChildItem -Path {{Path}} -Include {{Filename}} {{Recourse}} \
| Select-String '{{Pattern}}').count; echo "Hits:$COUNTES"; exit $COUNTES}" < NUL

In the code above, the occurrences of the pattern are counted ($COUNTES) and then printed (Hits:$COUNTES). The variables used by "Control-M Application Integrator" ({{Path}}, {{Filename}}, {{Recourse}} and {{Pattern}}) are substituted at runtime. The output (Hits:$COUNTES) is reused later to decide whether a mail is sent or not. The exit code is also checked: if it is different from zero, "Execution #2" and "Execution #3" are executed.

 

In the next two steps, a global variable and the text for the mail notification are created.

ctmvar -action set -var "%%%%\Text" -varexpr \
"The following pattern [{{Pattern}}] was found [{{HITS}}] times on the [{{Path}}\{{Filename}}]."

Here the global Control-M variable "Text" is used to define the mail text from within the code.

IF {{HITS}} GTR 0 (set MSG1=Hits& set MSG2=found!)
IF {{HITS}} GTR 0 (echo %MSG1% %MSG2% [{{HITS}}]) ELSE (echo Nothing to do.)

Here the text that Control-M will later search for was deliberately split into two variables! Otherwise the text filter, which searches the output, would already find the text "Hits found!" in its definition.

 

In the Post-Execution step, the mapping is removed again.

if exist x:\ (
	net use x: /delete /yes
)

 

If we now want to create the job in Control-M, we have to use the new job type "String finder".

2015-11-18_09h52_57

In the new job we then specify the new attributes for this job:

  • Job Name: 1 -> The name of the job
  • Connection Profile: 2 -> Defined in the Connection Manager; it holds the credentials (username and password for the mapping)
  • Filename: 3 -> Filter "*.log" (only these files are searched)
  • Path: 4 -> The CIFS (Common Internet File System) path to be used
  • Pattern: 5 -> What we are searching for in the files

2015-11-18_09h53_46

To receive a notification by mail when the searched pattern is found, we also configure the following:

2015-11-18_09h54_13

2015-11-18_09h54_31

Creating the connection profile in the "Configuration Manager"

So that we do not have to define the credentials in the script or in the Control-M job, we use a dedicated "Connection Profile".

2015-11-18_14h04_00

Now we still need the credentials:

2015-11-18_13h59_38

 

Conclusion

We now have the possibility to search for various patterns in different files on different CIFS drives, and to reuse the code and the methodology. Furthermore, there are no more Windows and PowerShell scripts stored locally on the servers, which means that scheduling, code and credentials are all managed by Control-M, which also increases security.

Control-M Application Integrator is only one of many modules of Control-M.

I hope this post could shed some light on this module :-) .

 

Cet article Control-M Application Integrator verwaltet Microsoft PowerShell Code & Credentials est apparu en premier sur Blog dbi services.


UKOUG Tech15 – Day 1- API Management


This year, the UKOUG (Tech15 for me of course) is held in Birmingham. This is my first time in this city and I must say that the ICC Birmingham, where the UKOUG takes place, is really an impressive building, well served by bars, which is an excellent point.

This is also my first time at an Oracle event. I'm not a DBA guy: when talking about Oracle, I'm mainly working with WebLogic and Java in our Middleware team, and the first thing I noticed when I took a look at the agenda of this Tech15 event is that there are, in my opinion, not enough presentations around WebLogic. I wasn't able to find a single session about WebLogic in the agenda of the first two days, which is, for me, a small negative point because I was hoping to see something around that (monitoring, logging, performance, Security Store management, aso…). Of course there are many sessions that talk about integration, SOA, Cloud services, aso… That's also very interesting, but that's not my main priority I would say.

Session of the day

The first session I attended talked about the Oracle API Catalog (OAC) and the API Manager, presented by Robert van Mölken (AMIS) and Simone Greib (Oracle). I think this session was the most interesting of this first day for me because we always talk about development best practices, development tools, architecture, aso… but we almost never talk about the management of what has been developed, and I think that having this kind of information and metadata in the OAC Console (also accessible from JDeveloper) can clearly help to keep a clean and performant development phase and ease the management of what has already been done.

You don't want to create something that is already available, especially if it is working! This API Manager exists on-premises or in the cloud and can be used to harvest internal or external APIs, reference them and manage them within your enterprise, which actually looks like a great and useful tool, and it can also be integrated with the SOA Suite, the Oracle Identity Manager, aso…

The interesting thing in this API Manager is that the "curator" and "admin" groups are able to decide whether or not to publish the APIs, because you probably don't want all your APIs to be available to everybody within your enterprise. Moreover, it can also be used to protect your APIs with a security layer, for example by requiring a secret key so that only authorized access is possible.

The second session of the day that I found very interesting was the one presented by Franck Pachot, my colleague from dbi services that you probably already know, but talking about this session may have sounded a little bit too much like self-promotion, so I chose the other one ;).

After this interesting first day, I must say that I’m quite impatient to see what will come tomorrow.

 

Cet article UKOUG Tech15 – Day 1- API Management est apparu en premier sur Blog dbi services.

UKOUG 2015 Day 2: Oracle In-Memory, Table Locks, Dbvisit, Oracle Multitenant and Open Source Tuning tools.


UKOUG 2015 Day 2 (Monday): a short overview.

Ref: Oracle In-Memory, all about table locks, Dbvisit, Oracle Multitenant and Open Source Tuning tools.

Today was my first day at the UKOUG 2015 conference.

My first presentation focused on "Oracle Database In-Memory Option: Challenges & Possibilities" by Christian Antognini.
This option was introduced in Oracle version 12.1.0.2 and promises to deliver in-memory performance without modifying the application's code. :-)
After a short explanation of the general concepts of the Oracle Database In-Memory option, the aim of the presentation was to review what you can expect from this new technology. We received (with the help of demos) a good overview of situations where you can take advantage of it and what kind of benefits you should expect when enabling it. But take care: "it depends" on each situation, and when you enable it you have to clearly define what you want to populate in memory (the memory is not unlimited)!!

Also this morning, another good presentation by Franck Pachot (dbi services) focused on "All About Table Locks: DML, DDL, Foreign Key, Online Operations,…".
Some topics and questions (listed below) were explained really well with the help of demos:
- Do you know exactly what is locked when you do an online or offline DDL?
- Do you know the meaning of the lock modes (RS, RX, S, SRX, X)?
- Do you know when and why you need to index foreign keys?
This afternoon, after lunch…

Presentation about Oracle Standard Edition by Chris Lawless (Dbvisit).
It was a good refresher and review of Standard Edition 2 (SE2): SE comes with some limitations, but we received clear answers about what you are able to do with SE vs. EE.
We talked about SE and High Availability (different 3rd-party standby solutions, not only Dbvisit), Disaster Recovery as well as Backup and Recovery (RPO+RTO).

After that, a completely different presentation by Mike Dietrich (Oracle Master Product Manager) about "How Oracle Single/Multitenant will Change a DBA's Life"!
Interesting for our future DBA life! It seems that Oracle has announced the future deprecation of the traditional stand-alone database architecture, which will be replaced by Oracle Multitenant databases (at least as a single-PDB environment). Mike strongly recommended starting the evaluation and testing of PDBs in your environment so that you get a good overview of the future migration strategies into the pluggable database world. Some features will clearly change the database concepts and the life of a DBA.

And a last good presentation by Bjoern Rost about "Open-Source Database Tuning Tools and Life Without EM12c".
Clearly, not every customer has the appropriate license or database edition to perform tuning tasks with EM Cloud Control 12c.
We saw (with the help of demos) a list of free open-source database tuning tools such as Rlsqlplus, web-ash, Snapper, oraSASH, SQLT, sDB360 and sqld360 that can be used by DBAs to gather and review metrics and wait events from the command line or graphically.

This day in Birmingham was really intensive but very interesting.
The UKOUG conference is a truly technical event allowing us to share technical information with other DBAs.

 

Cet article UKOUG 2015 Day 2: Oracle In-Memory, Table Locks, Dbvisit, Oracle Multitenant and Open Source Tuning tools. est apparu en premier sur Blog dbi services.

Monitoring tools for PostgreSQL – pgcluu


The last posts introduced the logging system, pg_stat_statements and pg_activity. All of these can be used to monitor sql statements the PostgreSQL server is executing. In this post I’ll look into pgcluu: PostgreSQL Cluster utilization! This is a more complete monitoring solution as it is not only focused on sql statements but gives you information about the database cluster itself and other useful stuff.

All you need to run pgcluu is a modern Perl distribution, which should be available if you are on a recent operating system. If you not only want statistics about a PostgreSQL instance but also OS statistics, you'll need the sysstat package in addition (this should be available for your distribution). You can install pgcluu on the server where the PostgreSQL instance you want to monitor runs (as I will do for this post) or on a remote host. Installation is quite easy:

postgres@oel7:/var/tmp/ [dummy] tar -axf pgcluu-2.4.tar.gz
postgres@oel7:/var/tmp/ [dummy] cd pgcluu-2.4
postgres@oel7:/var/tmp/ [dummy] perl Makefile.PL
postgres@oel7:/var/tmp/ [dummy] make && sudo make install

pgcluu is divided into two parts:

  • The collector which is responsible for collecting the statistics: pgcluu_collectd
  • The report generator which generates the reports out of the files the collector generated: pgcluu

To collect statistics start the pgcluu_collectd script as deamon:

postgres@oel7:/home/postgres/ [PG2] mkdir /var/tmp/test_stats
postgres@oel7:/home/postgres/ [PG2] pgcluu_collectd -D -i 60 /var/tmp/test_stats/
postgres@oel7:/home/postgres/ [PG2] LOG: Detach from terminal with pid: 10423

This will collect statistics every 60 seconds for the PostgreSQL instance your environment points to and store the results in the /var/tmp/test_stats/ directory:

postgres@oel7:/var/tmp/pgcluu-2.4/ [postgres] ls -la /var/tmp/test_stats/
total 196
drwxrwxr-x. 2 postgres postgres  4096 Dec  7 16:16 .
drwxrwxrwt. 4 root     root        64 Dec  7 16:05 ..
-rw-rw-r--. 1 postgres postgres  8280 Dec  7 16:16 pg_class_size.csv
-rw-rw-r--. 1 postgres postgres   274 Dec  7 16:16 pg_database_size.csv
-rw-rw-r--. 1 postgres postgres  4214 Dec  7 16:15 pg_hba.conf
-rw-rw-r--. 1 postgres postgres  1636 Dec  7 16:15 pg_ident.conf
-rw-rw-r--. 1 postgres postgres 30694 Dec  7 16:16 pg_settings.csv
-rw-rw-r--. 1 postgres postgres     0 Dec  7 16:15 pg_stat_connections.csv
-rw-rw-r--. 1 postgres postgres   333 Dec  7 16:16 pg_stat_database.csv
-rw-rw-r--. 1 postgres postgres  2682 Dec  7 16:16 pg_statio_user_indexes.csv
-rw-rw-r--. 1 postgres postgres  1040 Dec  7 16:16 pg_statio_user_sequences.csv
-rw-rw-r--. 1 postgres postgres  1582 Dec  7 16:16 pg_statio_user_tables.csv
-rw-rw-r--. 1 postgres postgres  1004 Dec  7 16:16 pg_stat_locks.csv
-rw-rw-r--. 1 postgres postgres   764 Dec  7 16:16 pg_stat_unused_indexes.csv
-rw-rw-r--. 1 postgres postgres  2682 Dec  7 16:16 pg_stat_user_indexes.csv
-rw-rw-r--. 1 postgres postgres  1430 Dec  7 16:16 pg_stat_user_tables.csv
-rw-rw-r--. 1 postgres postgres     0 Dec  7 16:15 pg_tablespace_size.csv
-rw-rw-r--. 1 postgres postgres   343 Dec  7 16:15 postgresql.auto.conf
-rw-rw-r--. 1 postgres postgres 24821 Dec  7 16:15 postgresql.conf
-rw-rw-r--. 1 postgres postgres 56896 Dec  7 16:16 sar_stats.dat
-rw-rw-r--. 1 postgres postgres  3111 Dec  7 16:15 sysinfo.txt

After some time, when hopefully there was some activity in the PostgreSQL instance, stop the daemon:

postgres@oel7:/home/postgres/ [PG2] pgcluu_collectd -k
OK: pgcluu_collectd exited with value 0
postgres@oel7:/home/postgres/ [PG2] 

Once we have some statistics collected we can generate a report:

postgres@oel7:/home/postgres/ [PG2] mkdir /var/tmp/test_report/
postgres@oel7:/home/postgres/ [PG2] pgcluu -o /var/tmp/test_report/ /var/tmp/test_stats/

The report can be viewed in any modern browser that supports javascript and css. Once you open the index.html you are presented with an overview of the system:

pgcluu1

On the top there is a menu which allows you to navigate to various reports of your operating system and the PostgreSQL instance, e.g. the cluster:

pgcluu2

pgcluu3

Reports for various OS statistics are available through the system menu:

pgcluu4
pgcluu5

To get the best out of this you should probably leave the collector running all the time and use the built-in rotation functionality:

postgres@oel7:/home/postgres/ [PG2] pgcluu_collectd -D -i 60 --rotate-daily /var/tmp/test_stats/  

Having statistics available for each day of the week helps a lot in troubleshooting. Report generation can then be scheduled automatically by cron or any other scheduler, as sketched below.
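A hypothetical crontab entry (the paths correspond to the ones used above, the binary location and schedule are just examples) that rebuilds the report every hour:

# regenerate the pgcluu report every hour from the collected statistics
0 * * * * /usr/local/bin/pgcluu -o /var/tmp/test_report/ /var/tmp/test_stats/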

Conclusion: pgcluu is an easy-to-set-up and easy-to-use monitoring solution for PostgreSQL instances. Spend some time thinking about how to collect, how to report and how to archive the reports, and you'll have plenty of information to troubleshoot and to plan capacity.

 

Cet article Monitoring tools for PostgreSQL – pgcluu est apparu en premier sur Blog dbi services.

UKOUG Tech15 – Day 2 – focus on APEX 5


This second day of the UKOUG just ended and now is the time for the b… blog! On the first day, I really focused my agenda on presentations regarding integration between products and solutions provided by Oracle: the SOA Suite, the new ICS (Integration Cloud Services), the API Manager, aso… For this second day, I wanted to see something else, a little bit closer to what dbi services can provide, so I mainly attended sessions about APEX.

Some of my colleagues in the Middleware team are able to develop applications using APEX and do a lot of stuff with that. I can’t say that I understand everything they are doing and how they are doing it but I always found the concept of having an application actually running in a database quite interesting so I’m following the news around this Oracle product from time to time.

Sessions of the day

For this second day, I attended a very interesting session about some really important things not to forget when creating/managing Interactive Reports in APEX, because getting them wrong can simply make your IRs unusable. The session was presented by Peter Raganitsch (FOEX), who clearly explained what actually happens in the background when you create or use IRs and what can be done to debug and improve the user experience. Basically, you must check, verify and validate which query is executed for the different kinds of actions and make sure that it is exactly what you want. By default, APEX uses a pre-defined configuration that makes the SQL a bit more complex, which can lead to really bad performance on thousands (or millions) of rows.

After that, I attended another session that explained why your development will be much faster with APEX 5.x compared to 4.x. This session was presented by Anthony Rayner (Oracle), a member of the APEX development team, who demonstrated the use of the new interfaces/features compared to the previous versions: this was all about the Page Designer (the new browser-based IDE introduced in 5.0) and the new Code Editor.

Another really interesting session I attended was presented by Hilary Farrell (Oracle), who mainly explored the charting capabilities of APEX 5.0 and 5.1. She quickly went through what has changed regarding charts since APEX 3.1: new features, the new versions of AnyChart incorporated into APEX, the new Oracle JET Charts (JavaScript Extension Toolkit) that replace AnyChart, and so on. She also talked about the new elements and charts that come with APEX 5.1 and I must say that it looks pretty cool!

The only small negative point of this second day is that almost all presentations around APEX were given by Oracle employees (the only exception being the first session I mentioned earlier). That’s maybe not the best way to be objective, but at least Oracle did a good job because all their sessions were very interesting and instructive.

I don’t know yet what I will do tomorrow, but I hope I will finally be able to see a WebLogic Administration Console somewhere (or at least a command prompt, please! :D).

The article UKOUG Tech15 – Day 2 – focus on APEX 5 first appeared on the dbi services Blog.

Monitoring tools for PostgreSQL – POWA


The last posts introduced the logging system, pg_stat_statements, pg_activity and pgcluu. This post will look at POWA: PostgreSQL Workload Analyzer.

To get the most out of POWA the following extensions should be installed in the PostgreSQL instance you want to monitor (a quick way to check what is already available is sketched right after the list):

  • pg_stat_statements (see last post)
  • pg_stat_kcache: gathers statistics about reads and writes done by the file system layer
  • pg_qualstats: gathers statistics about predicates found in WHERE and JOIN clauses
  • btree_gist: provides GiST index operator classes that implement B-tree equivalent behavior for various data types
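
Just as a quick sanity check (not required), the pg_available_extensions catalog view shows which of these already ship with the installation; extensions built from source will only show up there after their make install:

(postgres@[local]:4445) [postgres] > select name, default_version from pg_available_extensions where name in ('pg_stat_statements','pg_stat_kcache','pg_qualstats','btree_gist');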

As pg_stat_statements is already installed in my PostgreSQL instance, let’s start by installing the pg_stat_kcache extension.

postgres@oel7:/var/tmp/ [PG3] unzip pg_stat_kcache-master.zip
postgres@oel7:/var/tmp/ [PG3] cd pg_stat_kcache-master
postgres@oel7:/var/tmp/pg_stat_kcache-master/ [PG3] make
postgres@oel7:/var/tmp/pg_stat_kcache-master/ [PG3] make install
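
A note on the build: these extensions use PGXS, so the pg_config of the instance you want to build against has to be the one found first in the PATH. A quick check before running make (just a sanity check, the output will differ on your system):

postgres@oel7:/var/tmp/pg_stat_kcache-master/ [PG3] which pg_config
postgres@oel7:/var/tmp/pg_stat_kcache-master/ [PG3] pg_config --pgxs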

As usual: quite easy. As with pg_stat_statements we need to adjust the shared_preload_libraries parameter to have the extension loaded:

(postgres@[local]:4445) [postgres] > show shared_preload_libraries;
 shared_preload_libraries 
--------------------------
 pg_stat_statements
(1 row)

Time: 0.230 ms
(postgres@[local]:4445) [postgres] > alter system set shared_preload_libraries=pg_stat_statements,pg_stat_kcache;
ALTER SYSTEM
Time: 2.995 ms

After the PostgreSQL instance was restarted:

postgres@oel7:/var/tmp/pg_stat_kcache-master/ [PG3] pg_ctl -D /u02/pgdata/PG3/ stop -m fast
postgres@oel7:/var/tmp/pg_stat_kcache-master/ [PG3] pg_ctl -D /u02/pgdata/PG3/ start

… the extension can be created:

(postgres@[local]:4445) [postgres] > show shared_preload_libraries;
     shared_preload_libraries      
-----------------------------------
 pg_stat_statements,pg_stat_kcache
(1 row)

(postgres@[local]:4445) [postgres] > create extension pg_stat_kcache;
CREATE EXTENSION
Time: 68.483 ms

(postgres@[local]:4445) [postgres] > \dx
                                     List of installed extensions
        Name        | Version |   Schema   |                        Description                        
--------------------+---------+------------+-----------------------------------------------------------
 pg_stat_kcache     | 2.0.2   | public     | Kernel cache statistics gathering
 pg_stat_statements | 1.2     | public     | track execution statistics of all SQL statements executed
 plpgsql            | 1.0     | pg_catalog | PL/pgSQL procedural language
(3 rows)
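
To verify that the extension is actually gathering data, its view can be queried directly (a simple sanity check, assuming the pg_stat_kcache view created by the extension):

(postgres@[local]:4445) [postgres] > select * from pg_stat_kcache;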

The next extension to be installed is pg_qualstats. The procedure is almost the same:

postgres@oel7:/var/tmp/ [PG3] unzip pg_qualstats-master.zip
postgres@oel7:/var/tmp/ [PG3] cd pg_qualstats-master
postgres@oel7:/var/tmp/pg_qualstats-master/ [PG3] make
postgres@oel7:/var/tmp/pg_qualstats-master/ [PG3] make install

Again we’ll need to adjust shared_preload_libraries:

(postgres@[local]:4445) [postgres] > show shared_preload_libraries;
     shared_preload_libraries      
-----------------------------------
 pg_stat_statements,pg_stat_kcache
(1 row)

Time: 0.215 ms
(postgres@[local]:4445) [postgres] > alter system set shared_preload_libraries=pg_stat_statements,pg_stat_kcache,pg_qualstats;
ALTER SYSTEM
Time: 4.692 ms

Then restart the server:

postgres@oel7:/var/tmp/pg_stat_kcache-master/ [PG3] pg_ctl -D /u02/pgdata/PG3/ stop -m fast
postgres@oel7:/var/tmp/pg_stat_kcache-master/ [PG3] pg_ctl -D /u02/pgdata/PG3/ start

Finally create the extension:

(postgres@[local]:4445) [postgres] > show shared_preload_libraries;
             shared_preload_libraries             
--------------------------------------------------
 pg_stat_statements, pg_stat_kcache, pg_qualstats
(1 row)

Time: 0.285 ms
(postgres@[local]:4445) [postgres] > create extension pg_qualstats;
CREATE EXTENSION
Time: 143.439 ms
(postgres@[local]:4445) [postgres] > \dx
                                     List of installed extensions
        Name        | Version |   Schema   |                        Description                        
--------------------+---------+------------+-----------------------------------------------------------
 pg_qualstats       | 0.0.7   | public     | An extension collecting statistics about quals
 pg_stat_kcache     | 2.0.2   | public     | Kernel cache statistics gathering
 pg_stat_statements | 1.2     | public     | track execution statistics of all SQL statements executed
 plpgsql            | 1.0     | pg_catalog | PL/pgSQL procedural language
(4 rows)
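
Again a quick sanity check that predicates are being collected (assuming the pg_qualstats view created by the extension); run a few queries with WHERE clauses first, otherwise the view will be empty:

(postgres@[local]:4445) [postgres] > select * from pg_qualstats;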

The btree_gist extension ships with the contrib modules and is available by default, so we just need to create it:

postgres@[local]:4445) [postgres] > \dx
                                     List of installed extensions
        Name        | Version |   Schema   |                        Description                        
--------------------+---------+------------+-----------------------------------------------------------
 pg_qualstats       | 0.0.7   | public     | An extension collecting statistics about quals
 pg_stat_kcache     | 2.0.2   | public     | Kernel cache statistics gathering
 pg_stat_statements | 1.2     | public     | track execution statistics of all SQL statements executed
 plpgsql            | 1.0     | pg_catalog | PL/pgSQL procedural language
(4 rows)

(postgres@[local]:4445) [postgres] > create extension btree_gist;
CREATE EXTENSION
Time: 21.112 ms
(postgres@[local]:4445) [postgres] > \dx
                                     List of installed extensions
        Name        | Version |   Schema   |                        Description                        
--------------------+---------+------------+-----------------------------------------------------------
 btree_gist         | 1.0     | public     | support for indexing common datatypes in GiST
 pg_qualstats       | 0.0.7   | public     | An extension collecting statistics about quals
 pg_stat_kcache     | 2.0.2   | public     | Kernel cache statistics gathering
 pg_stat_statements | 1.2     | public     | track execution statistics of all SQL statements executed
 plpgsql            | 1.0     | pg_catalog | PL/pgSQL procedural language
(5 rows)

With all the requirements in place we can now install powa-archivist. This is the POWA extension that gathers the performance statistics. The usual steps apply:

postgres@oel7:/var/tmp/ [PG3] unzip powa-archivist-master.zip
postgres@oel7:/var/tmp/ [PG3] cd powa-archivist-master
postgres@oel7:/var/tmp/powa-archivist-master/ [PG3] make
postgres@oel7:/var/tmp/powa-archivist-master/ [PG3] make install

Again, adjust shared_preload_libraries:

(postgres@[local]:4445) [postgres] > show shared_preload_libraries;
             shared_preload_libraries             
--------------------------------------------------
 pg_stat_statements, pg_stat_kcache, pg_qualstats
(1 row)

Time: 0.243 ms
(postgres@[local]:4445) [postgres] > alter system set shared_preload_libraries=pg_stat_statements, pg_stat_kcache, pg_qualstats, powa;
ALTER SYSTEM
Time: 69.219 ms

Restart the server:

postgres@oel7:/var/tmp/pg_stat_kcache-master/ [PG3] pg_ctl -D /u02/pgdata/PG3/ stop -m fast
postgres@oel7:/var/tmp/pg_stat_kcache-master/ [PG3] pg_ctl -D /u02/pgdata/PG3/ start

Create the extension:

(postgres@[local]:4445) [postgres] > \dx
                                     List of installed extensions
        Name        | Version |   Schema   |                        Description                        
--------------------+---------+------------+-----------------------------------------------------------
 btree_gist         | 1.0     | public     | support for indexing common datatypes in GiST
 pg_qualstats       | 0.0.7   | public     | An extension collecting statistics about quals
 pg_stat_kcache     | 2.0.2   | public     | Kernel cache statistics gathering
 pg_stat_statements | 1.2     | public     | track execution statistics of all SQL statements executed
 plpgsql            | 1.0     | pg_catalog | PL/pgSQL procedural language
(5 rows)

(postgres@[local]:4445) [postgres] > create extension powa;
CREATE EXTENSION
Time: 742.831 ms
(postgres@[local]:4445) [postgres] > \dx
                                     List of installed extensions
        Name        | Version |   Schema   |                        Description                        
--------------------+---------+------------+-----------------------------------------------------------
 btree_gist         | 1.0     | public     | support for indexing common datatypes in GiST
 pg_qualstats       | 0.0.7   | public     | An extension collecting statistics about quals
 pg_stat_kcache     | 2.0.2   | public     | Kernel cache statistics gathering
 pg_stat_statements | 1.2     | public     | track execution statistics of all SQL statements executed
 plpgsql            | 1.0     | pg_catalog | PL/pgSQL procedural language
 powa               | 3.0.0   | public     | PostgreSQL Workload Analyser-core
(6 rows)

At this point it is advisable to create a dedicated database for the powa repository and to add all the extensions there:

(postgres@[local]:4445) [postgres] > create database powa;
CREATE DATABASE
Time: 1664.653 ms
(postgres@[local]:4445) [postgres] > \c powa
You are now connected to database "powa" as user "postgres".
(postgres@[local]:4445) [powa] > create extension pg_stat_statements;
CREATE EXTENSION
Time: 25.448 ms
(postgres@[local]:4445) [powa] > create extension btree_gist;
CREATE EXTENSION
Time: 134.281 ms
(postgres@[local]:4445) [powa] > create extension pg_qualstats;
CREATE EXTENSION
Time: 25.683 ms
(postgres@[local]:4445) [powa] > create extension pg_stat_kcache;
CREATE EXTENSION
Time: 53.798 ms
(postgres@[local]:4445) [powa] > create extension powa;
CREATE EXTENSION
Time: 98.410 ms
(postgres@[local]:4445) [powa] > \dx
                                     List of installed extensions
        Name        | Version |   Schema   |                        Description                        
--------------------+---------+------------+-----------------------------------------------------------
 btree_gist         | 1.0     | public     | support for indexing common datatypes in GiST
 pg_qualstats       | 0.0.7   | public     | An extension collecting statistics about quals
 pg_stat_kcache     | 2.0.2   | public     | Kernel cache statistics gathering
 pg_stat_statements | 1.2     | public     | track execution statistics of all SQL statements executed
 plpgsql            | 1.0     | pg_catalog | PL/pgSQL procedural language
 powa               | 3.0.0   | public     | PostgreSQL Workload Analyser-core
(6 rows)

There are some configuration parameters that allow you to control the powa extension:

(postgres@[local]:4445) [postgres] > show powa.frequency;
 powa.frequency 
----------------
 5min
(1 row)

Time: 0.319 ms
(postgres@[local]:4445) [postgres] > show powa.retention;
 powa.retention 
----------------
 1d
(1 row)

Time: 0.241 ms
(postgres@[local]:4445) [postgres] > show powa.database;
 powa.database 
---------------
 powa
(1 row)

Time: 0.241 ms
(postgres@[local]:4445) [postgres] > show powa.coalesce;
 powa.coalesce 
---------------
 100
(1 row)

Time: 0.362 ms
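
These settings can be changed like any other server parameter. A small sketch, assuming you want a shorter sampling interval and a full week of history (the values are just examples; depending on the parameter a reload or even an instance restart may be required before the change takes effect):

(postgres@[local]:4445) [postgres] > alter system set powa.frequency = '1min';
(postgres@[local]:4445) [postgres] > alter system set powa.retention = '7d';
(postgres@[local]:4445) [postgres] > select pg_reload_conf();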

That’s it for the work to be done inside the PostgreSQL instance. Now we need the web interface. In general you can install the web interface anywhere; I’ll be doing it on the same host, installing the requirements first:

postgres@oel7: [PG3] sudo yum install python-pip python-devel

After that pip can be used to install the web interface:

postgres@oel7: [PG3] sudo pip install powa-web

We need to create a small configuration file for the web interface; make sure host and port point to the instance that holds the powa database (port 4445 in this setup):

postgres@oel7: [PG3] sudo tee /etc/powa-web.conf <<'EOF'
servers={
  'main': {
    'host': 'localhost',
    'port': '4445',
    'database': 'powa'
  }
}
cookie_secret="A_SECRET"
EOF

Once this is available the web interface can be started:

postgres@oel7:/var/tmp/powa-archivist-master/ [PG3] powa-web

You should be able to access the interface at port 8888:

powa1

powa2

powa3

powa4

After a while (you’ll need to give POWA some time to collect statistics) the dashboard will be populated:

powa5

If you select a database you can scroll down to the list of SQL statements:
powa6

Clicking on one of these gives nice graphs (the following are all graphs for one statement):

powa7
powa8
powa9
powa10
powa11

Conclusion: POWA is a very nice tool for gathering and displaying statistics about a PostgreSQL instance. Especially the fact that you can store all the statistics in a separate database and control how long you want to keep them makes it a very good choice. Travelling back in time to troubleshoot issues becomes very easy.

The article Monitoring tools for PostgreSQL – POWA first appeared on the dbi services Blog.
