Channel: dbi Blog

Documentum – MigrationUtil – 2 – Change Docbase Name


You are reading the second episode of the MigrationUtil series; today we will change the Docbase Name. If you missed the first one, you can find it here. I made this change on Documentum CS 16.4 with an Oracle database, on the same docbase I already used to change the docbase ID.
My goal is to make both changes on the same docbase, because that's what I will need in the future.

So, we will be working with the docbase RepoTemplate, to change its name to repository1.

1. Migration preparation

I will not give an overview of the MigrationUtil, as I already did that in the previous blog.
1.a Update the config.xml file
Below is the updated version of config.xml file to change the Docbase Name:

[dmadmin@vmtestdctm01 ~]$ cat $DOCUMENTUM/product/16.4/install/external_apps/MigrationUtil/config.xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
<comment>Database connection details</comment>
<entry key="dbms">oracle</entry> <!-- This would be either sqlserver, oracle, db2 or postgres -->
<entry key="tgt_database_server">vmtestdctm01</entry> <!-- Database Server host or IP -->
<entry key="port_number">1521</entry> <!-- Database port number -->
<entry key="InstallOwnerPassword">install164</entry>
<entry key="isRCS">no</entry>    <!-- set it to yes, when running the utility on secondary CS -->

<!-- <comment>List of docbases in the machine</comment> -->
<entry key="DocbaseName.1">RepoTemplate</entry>

<!-- <comment>docbase owner password</comment> -->
<entry key="DocbasePassword.1">install164</entry>
...
<entry key="ChangeDocbaseName">yes</entry>
<entry key="NewDocbaseName.1">repository1</entry>
...

Set all other entries to no.
The tool will use the above information and load more from the server.ini file.
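As a sanity check before running the tool, you can verify that only the intended switch is enabled. The snippet below is a hypothetical helper (not part of MigrationUtil) that parses a properties XML like the one above and lists the Change* entries set to yes:

```python
import xml.etree.ElementTree as ET

SAMPLE = """<properties>
<entry key="ChangeDocbaseID">no</entry>
<entry key="ChangeServerName">no</entry>
<entry key="ChangeDocbaseName">yes</entry>
</properties>"""

def enabled_changes(xml_text):
    # list the Change* switches that are set to "yes"
    root = ET.fromstring(xml_text)
    return [e.get("key") for e in root.findall("entry")
            if e.get("key", "").startswith("Change")
            and (e.text or "").strip() == "yes"]

print(enabled_changes(SAMPLE))  # ['ChangeDocbaseName']
```

If more than one Change* entry comes back, review the config.xml before launching the migration.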

2. Before the migration (optional)

– Get docbase map from the docbroker:

[dmadmin@vmtestdctm01 ~]$ dmqdocbroker -t vmtestdctm01 -c getdocbasemap
dmqdocbroker: A DocBroker Query Tool
dmqdocbroker: Documentum Client Library Version: 16.4.0000.0185
Targeting port 1489
**************************************************
**     D O C B R O K E R    I N F O             **
**************************************************
Docbroker host            : vmtestdctm01
Docbroker port            : 1490
Docbroker network address : INET_ADDR: 02 5d2 c0a87a01 vmtestdctm01 192.168.122.1
Docbroker version         : 16.4.0000.0248  Linux64
**************************************************
**     D O C B A S E   I N F O                  **
**************************************************
--------------------------------------------
Docbase name        : RepoTemplate
Docbase id          : 1000600
Docbase description : Template Repository
Govern docbase      : 
Federation name     : 
Server version      : 16.4.0000.0248  Linux64.Oracle
Docbase Roles       : Global Registry
...

– Create a document in the docbase:
Create an empty file

touch /home/dmadmin/DCTMChangeDocbaseExample.docx

Create document in the repository using idql

create dm_document object
SET title = 'DCTM Change Docbase Document Example',
SET subject = 'DCTM Change Docbase Document Example',
set object_name = 'DCTMChangeDocbaseExample.docx',
SETFILE '/home/dmadmin/DCTMChangeDocbaseExample.docx' with CONTENT_FORMAT= 'msw12';

Result:

object_created  
----------------
090f449880001125
(1 row affected)

Note the r_object_id.
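As an aside, the r_object_id itself encodes the docbase: the first two hex digits are the object type tag (09 for dm_document) and the next six are the docbase ID in hexadecimal. A quick sketch to check that against our docbase ID 1000600:

```python
def decode_object_id(r_object_id):
    # first 2 hex digits: object type tag (09 = dm_document),
    # next 6 hex digits: the docbase ID in hexadecimal
    return r_object_id[:2], int(r_object_id[2:8], 16)

print(decode_object_id("090f449880001125"))  # ('09', 1000600)
```

This is also why changing the docbase ID (previous episode) is far more invasive than changing the name: the ID is baked into every object ID.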

3. Execute the migration

3.a Stop the Docbase and the Docbroker

$DOCUMENTUM/dba/dm_shutdown_RepoTemplate
$DOCUMENTUM/dba/dm_stop_DocBroker

3.b Update the database name in the server.ini file
This is a workaround to avoid the error below:

Database Details:
Database Vendor:oracle
Database Name:DCTMDB
Databse User:RepoTemplate
Database URL:jdbc:oracle:thin:@vmtestdctm01:1521/DCTMDB
ERROR...Listener refused the connection with the following error:
ORA-12514, TNS:listener does not currently know of service requested in connect descriptor

In fact, the tool treats the database name as a database service name and builds the JDBC URL with a "/" instead of a ":". The best workaround I found is to update the database_conn value in the server.ini file and put the service name there instead of the database name.
Check the tnsnames.ora and note the service name; in my case it is dctmdb.local.

[dmadmin@vmtestdctm01 ~]$ cat $ORACLE_HOME/network/admin/tnsnames.ora 
DCTMDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = vmtestdctm01)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = dctmdb.local)
    )
  )
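To picture the problem: the tool builds a service-name style JDBC URL (host:port/name), so whatever is in database_conn lands after the slash. The helper below only illustrates the URL seen in the logs; it is not actual MigrationUtil code:

```python
def jdbc_url(host, port, database_conn):
    # service-name syntax: the part after "/" must be an Oracle
    # service name; a plain database name here triggers ORA-12514
    return f"jdbc:oracle:thin:@{host}:{port}/{database_conn}"

print(jdbc_url("vmtestdctm01", 1521, "DCTMDB"))        # rejected by the listener
print(jdbc_url("vmtestdctm01", 1521, "dctmdb.local"))  # accepted
```

With the SID-style syntax (host:port:DCTMDB) the original database_conn value would have worked, which is why putting the service name in server.ini sidesteps the issue.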

Make the change in the server.ini file:

[dmadmin@vmtestdctm01 ~]$ vi $DOCUMENTUM/dba/config/RepoTemplate/server.ini
...
[SERVER_STARTUP]
docbase_id = 1000600
docbase_name = RepoTemplate
server_config_name = RepoTemplate
database_conn = dctmdb.local
database_owner = RepoTemplate
database_password_file = /app/dctm/product/16.4/dba/config/RepoTemplate/dbpasswd.txt
service = RepoTemplate
root_secure_validator = /app/dctm/product/16.4/dba/dm_check_password
install_owner = dmadmin
...

Don't worry, we will roll back this change before starting the docbase ;)

3.c Execute the MigrationUtil script

[dmadmin@vmtestdctm01 ~]$ $DM_HOME/install/external_apps/MigrationUtil/MigrationUtil.sh

Welcome... Migration Utility invoked.
 
Skipping Docbase ID Changes...
Skipping Host Name Change...
Skipping Install Owner Change...
Skipping Server Name Change...

Changing Docbase Name...
Created new log File: /app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/DocbaseNameChange.log
Finished changing Docbase Name...

Skipping Docker Seamless Upgrade scenario...
Migration Utility completed.

No error was encountered here, but that doesn't mean everything is OK… Please check the log file:

[dmadmin@vmtestdctm01 ~]$ cat /app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/DocbaseNameChange.log
Start: 2019-02-01 19:32:10.631
Changing Docbase Name
=====================

DocbaseName: RepoTemplate
New DocbaseName: repository1
Retrieving server.ini path for docbase: RepoTemplate
Found path: /app/dctm/product/16.4/dba/config/RepoTemplate/server.ini

Database Details:
Database Vendor:oracle
Database Name:dctmdb.local
Databse User:RepoTemplate
Database URL:jdbc:oracle:thin:@vmtestdctm01:1521/dctmdb.local
Successfully connected to database....

Processing Database Changes...
Created database backup File '/app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/DocbaseNameChange_DatabaseRestore.sql'
select r_object_id,object_name from dm_sysobject_s where r_object_type = 'dm_docbase_config' and object_name = 'RepoTemplate'
update dm_sysobject_s set object_name = 'repository1' where r_object_id = '3c0f449880000103'
select r_object_id,docbase_name from dm_docbaseid_map_s where docbase_name = 'RepoTemplate'
update dm_docbaseid_map_s set docbase_name = 'repository1' where r_object_id = '440f449880000100'
select r_object_id,file_system_path from dm_location_s where file_system_path like '%RepoTemplate%'
update dm_location_s set file_system_path = '/app/dctm/product/16.4/data/repository1/content_storage_01' where r_object_id = '3a0f44988000013f'
...
update dm_job_s set target_server = 'repository1.RepoTemplate@vmtestdctm01' where r_object_id = '080f4498800003e0'
...
select i_stamp from dmi_vstamp_s where i_application = 'dmi_dd_attr_info'
...
Successfully updated database values...
...
Backed up '/app/dctm/product/16.4/dba/dm_start_RepoTemplate' to '/app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/dm_start_RepoTemplate_docbase_RepoTemplate.backup'
Updated dm_startup script.
Renamed '/app/dctm/product/16.4/dba/dm_start_RepoTemplate' to '/app/dctm/product/16.4/dba/dm_start_repository1'
Backed up '/app/dctm/product/16.4/dba/dm_shutdown_RepoTemplate' to '/app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/dm_shutdown_RepoTemplate_docbase_RepoTemplate.backup'
Updated dm_shutdown script.
Renamed '/app/dctm/product/16.4/dba/dm_shutdown_RepoTemplate' to '/app/dctm/product/16.4/dba/dm_shutdown_repository1'
WARNING...File /app/dctm/product/16.4/dba/config/RepoTemplate/rkm_config.ini doesn't exist. RKM is not configured
Finished processing File changes...

Processing Directory Changes...
Renamed '/app/dctm/product/16.4/data/RepoTemplate' to '/app/dctm/product/16.4/data/repository1'
Renamed '/app/dctm/product/16.4/dba/config/RepoTemplate' to '/app/dctm/product/16.4/dba/config/repository1'
Renamed '/app/dctm/product/16.4/dba/auth/RepoTemplate' to '/app/dctm/product/16.4/dba/auth/repository1'
Renamed '/app/dctm/product/16.4/share/temp/replicate/RepoTemplate' to '/app/dctm/product/16.4/share/temp/replicate/repository1'
Renamed '/app/dctm/product/16.4/share/temp/ldif/RepoTemplate' to '/app/dctm/product/16.4/share/temp/ldif/repository1'
Renamed '/app/dctm/product/16.4/server_uninstall/delete_db/RepoTemplate' to '/app/dctm/product/16.4/server_uninstall/delete_db/repository1'
Finished processing Directory Changes...
...
Processing Services File Changes...
Backed up '/etc/services' to '/app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/services_docbase_RepoTemplate.backup'
ERROR...Couldn't update file: /etc/services (Permission denied)
ERROR...Please update services file '/etc/services' manually with root account
Finished changing docbase name 'RepoTemplate'

Finished changing docbase name....
End: 2019-02-01 19:32:23.791

This error is expected… Let's change the service entries manually.

3.d Change the service
As root, change the service name:

[root@vmtestdctm01 ~]$ vi /etc/services
...
repository1				49402/tcp               # DCTM repository native connection
repository1_s       	49403/tcp               # DCTM repository secure connection
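Since the tool cannot write /etc/services without root, the manual edit boils down to renaming the two docbase entries, native and secure ("_s"). A hedged sketch of that substitution (hypothetical helper, operating on the file's text):

```python
import re

def rename_docbase_services(services_text, old, new):
    # rename the "<docbase>" and "<docbase>_s" service entries at the
    # start of each line, leaving ports and comments untouched
    return re.sub(rf"^{re.escape(old)}(_s)?\b",
                  lambda m: new + (m.group(1) or ""),
                  services_text, flags=re.M)

sample = "RepoTemplate\t49402/tcp\nRepoTemplate_s\t49403/tcp\n"
print(rename_docbase_services(sample, "RepoTemplate", "repository1"))
```

The port numbers stay the same; only the service names change, which is exactly what the MigrationUtil attempted before hitting the permission error.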

3.e Change back the Database name in the server.ini file

[dmadmin@vmtestdctm01 ~]$ vi $DOCUMENTUM/dba/config/repository1/server.ini
...
[SERVER_STARTUP]
docbase_id = 1000600
docbase_name = repository1
server_config_name = RepoTemplate
database_conn = DCTMDB
...

3.f Start the Docbroker and the Docbase

$DOCUMENTUM/dba/dm_launch_DocBroker
$DOCUMENTUM/dba/dm_start_repository1

3.g Check the docbase log

[dmadmin@vmtestdctm01 ~]$ tail -5 $DOCUMENTUM/dba/log/RepoTemplate.log
...
2019-02-01T19:43:15.677455	16563[16563]	0000000000000000	[DM_WORKFLOW_I_AGENT_START]info:  "Workflow agent master (pid : 16594, session 010f449880000007) is started sucessfully."
IsProcessAlive: Process ID 0 is not > 0
2019-02-01T19:43:15.677967	16563[16563]	0000000000000000	[DM_WORKFLOW_I_AGENT_START]info:  "Workflow agent worker (pid : 16595, session 010f44988000000a) is started sucessfully."
IsProcessAlive: Process ID 0 is not > 0
2019-02-01T19:43:16.680391	16563[16563]	0000000000000000	[DM_WORKFLOW_I_AGENT_START]info:  "Workflow agent worker (pid : 16606, session 010f44988000000b) is started sucessfully." 

You're thinking that the log name is still RepoTemplate.log ;) Yes! That's because in my case the docbase name and the server name were the same before I changed the docbase name:

[dmadmin@vmtestdctm01 ~]$ vi $DOCUMENTUM/dba/config/repository1/server.ini
[SERVER_STARTUP]
docbase_id = 1000600
docbase_name = repository1
server_config_name = RepoTemplate
database_conn = DCTMDB
database_owner = RepoTemplate
database_password_file = /app/dctm/product/16.4/dba/config/repository1/dbpasswd.txt
service = repository1
root_secure_validator = /app/dctm/product/16.4/dba/dm_check_password
install_owner = dmadmin

Be patient, in the next episode we will see how we can change the server name :)

4. After the migration (optional)

Get docbase map from the docbroker:

[dmadmin@vmtestdctm01 ~]$ dmqdocbroker -t vmtestdctm01 -c getdocbasemap
dmqdocbroker: A DocBroker Query Tool
dmqdocbroker: Documentum Client Library Version: 16.4.0000.0185
Targeting port 1489
**************************************************
**     D O C B R O K E R    I N F O             **
**************************************************
Docbroker host            : vmtestdctm01
Docbroker port            : 1490
Docbroker network address : INET_ADDR: 02 5d2 c0a87a01 vmtestdctm01 192.168.122.1
Docbroker version         : 16.4.0000.0248  Linux64
**************************************************
**     D O C B A S E   I N F O                  **
**************************************************
--------------------------------------------
Docbase name        : repository1
Docbase id          : 1000600
Docbase description : Template Repository
Govern docbase      : 
Federation name     : 
Server version      : 16.4.0000.0248  Linux64.Oracle
Docbase Roles       : Global Registry
...

It's not very nice to keep the old description of the docbase… Use the IDQL request below to change it:

Update dm_docbase_config object set title='Renamed Repository' where object_name='repository1';

Check after change:

[dmadmin@vmtestdctm01 ~]$ dmqdocbroker -t vmtestdctm01 -c getdocbasemap
...
Docbase name        : repository1
Docbase id          : 1000600
Docbase description : Renamed Repository
...

Check the document created before the migration:
r_object_id : 090f449880001125

API> dump,c,090f449880001125
...
USER ATTRIBUTES

  object_name                     : DCTMChangeDocbaseExample.docx
  title                           : DCTM Change Docbase Document Example
  subject                         : DCTM Change Docbase Document Example
...

5. Conclusion

Well, the tool works, but as you saw, we needed a workaround to make the change, which is not great. I hope it will be fixed in future versions.
In the next episode I will change the server config name, see you there ;)

This article Documentum – MigrationUtil – 2 – Change Docbase Name appeared first on Blog dbi services.


What are custom and generic plans in PostgreSQL?


I have already written a post about prepared statements in PostgreSQL some time ago. What I did not mention in that post is the concept of generic and custom plans. So let's have a look at those.

As always, we start with creating a demo table and populate that table with some sample data:

pgbench=# create table demo ( a int, b text );
CREATE TABLE
pgbench=# insert into demo select i, 'aaa' from generate_series (1,100) i;
INSERT 0 100
pgbench=# insert into demo select i, 'bbb' from generate_series (101,200) i;
INSERT 0 100
pgbench=# insert into demo select i, 'ccc' from generate_series (201,300) i;
INSERT 0 100
pgbench=# analyze demo;
ANALYZE

Now that we have some data we can prepare a statement we would like to execute with various values:

pgbench=# prepare my_stmt as select * from demo where b = $1;
PREPARE

Btw: You can check for the currently available prepared statements in your session by querying the pg_prepared_statements catalog view:

pgbench=# select * from pg_prepared_statements;
  name   |                      statement                      |         prepare_time          | parameter_types | from_sql 
---------+-----------------------------------------------------+-------------------------------+-----------------+----------
 my_stmt | prepare my_stmt as select * from demo where b = $1; | 2019-02-05 13:15:39.232521+01 | {text}          | t

When we explain(analyze) that statement what do we see?

pgbench=# explain (analyze) execute my_stmt ( 'aaa' );
                                            QUERY PLAN                                            
--------------------------------------------------------------------------------------------------
 Seq Scan on demo  (cost=0.00..5.75 rows=100 width=8) (actual time=0.111..0.230 rows=100 loops=1)
   Filter: (b = 'aaa'::text)
   Rows Removed by Filter: 200
 Planning time: 0.344 ms
 Execution time: 0.285 ms
(5 rows)

In the “Filter” line of the execution plan we can see the real value (‘aaa’) we passed to our prepared statement. When you see that, it is a so-called custom plan. When PostgreSQL goes for a custom plan, the statement is re-planned for the provided set of parameters. When you execute it a few more times:

pgbench=# explain (analyze) execute my_stmt ( 'aaa' );
                                            QUERY PLAN                                            
--------------------------------------------------------------------------------------------------
 Seq Scan on demo  (cost=0.00..5.75 rows=100 width=8) (actual time=0.045..0.158 rows=100 loops=1)
   Filter: (b = 'aaa'::text)
   Rows Removed by Filter: 200
 Planning time: 0.243 ms
 Execution time: 0.225 ms
(5 rows)

pgbench=# explain (analyze) execute my_stmt ( 'aaa' );
                                            QUERY PLAN                                            
--------------------------------------------------------------------------------------------------
 Seq Scan on demo  (cost=0.00..5.75 rows=100 width=8) (actual time=0.035..0.123 rows=100 loops=1)
   Filter: (b = 'aaa'::text)
   Rows Removed by Filter: 200
 Planning time: 0.416 ms
 Execution time: 0.173 ms
(5 rows)

pgbench=# explain (analyze) execute my_stmt ( 'aaa' );
                                            QUERY PLAN                                            
--------------------------------------------------------------------------------------------------
 Seq Scan on demo  (cost=0.00..5.75 rows=100 width=8) (actual time=0.036..0.124 rows=100 loops=1)
   Filter: (b = 'aaa'::text)
   Rows Removed by Filter: 200
 Planning time: 0.195 ms
 Execution time: 0.178 ms
(5 rows)

pgbench=# explain (analyze) execute my_stmt ( 'aaa' );
                                            QUERY PLAN                                            
--------------------------------------------------------------------------------------------------
 Seq Scan on demo  (cost=0.00..5.75 rows=100 width=8) (actual time=0.035..0.126 rows=100 loops=1)
   Filter: (b = 'aaa'::text)
   Rows Removed by Filter: 200
 Planning time: 0.192 ms
 Execution time: 0.224 ms
(5 rows)

pgbench=# explain (analyze) execute my_stmt ( 'aaa' );
                                            QUERY PLAN                                            
--------------------------------------------------------------------------------------------------
 Seq Scan on demo  (cost=0.00..5.75 rows=100 width=8) (actual time=0.038..0.130 rows=100 loops=1)
   Filter: (b = $1)
   Rows Removed by Filter: 200
 Planning time: 0.191 ms
 Execution time: 0.183 ms
(5 rows)

… you will see that the “Filter” line changes from displaying the actual parameter to a placeholder. Now we have a generic plan. This generic plan will not change anymore for the lifetime of the prepared statement, no matter which value you pass to it:

pgbench=# explain (analyze) execute my_stmt ( 'bbb' );
                                            QUERY PLAN                                            
--------------------------------------------------------------------------------------------------
 Seq Scan on demo  (cost=0.00..5.75 rows=100 width=8) (actual time=0.096..0.219 rows=100 loops=1)
   Filter: (b = $1)
   Rows Removed by Filter: 200
 Planning time: 0.275 ms
 Execution time: 0.352 ms
(5 rows)

pgbench=# explain (analyze) execute my_stmt ( 'ccc' );
                                            QUERY PLAN                                            
--------------------------------------------------------------------------------------------------
 Seq Scan on demo  (cost=0.00..5.75 rows=100 width=8) (actual time=0.090..0.132 rows=100 loops=1)
   Filter: (b = $1)
   Rows Removed by Filter: 200
 Planning time: 0.084 ms
 Execution time: 0.204 ms
(5 rows)

pgbench=# explain (analyze) execute my_stmt ( null );
                                           QUERY PLAN                                           
------------------------------------------------------------------------------------------------
 Seq Scan on demo  (cost=0.00..5.75 rows=100 width=8) (actual time=0.033..0.033 rows=0 loops=1)
   Filter: (b = $1)
   Rows Removed by Filter: 300
 Planning time: 0.018 ms
 Execution time: 0.051 ms
(5 rows)

When you take a look at the source code (src/backend/utils/cache/plancache.c) you will see why the plan changes after 5 executions:

/*
 * choose_custom_plan: choose whether to use custom or generic plan
 *
 * This defines the policy followed by GetCachedPlan.
 */
static bool
choose_custom_plan(CachedPlanSource *plansource, ParamListInfo boundParams)
{
        double          avg_custom_cost;

        /* One-shot plans will always be considered custom */
        if (plansource->is_oneshot)
                return true;

        /* Otherwise, never any point in a custom plan if there's no parameters */
        if (boundParams == NULL)
                return false;
        /* ... nor for transaction control statements */
        if (IsTransactionStmtPlan(plansource))
                return false;

        /* See if caller wants to force the decision */
        if (plansource->cursor_options & CURSOR_OPT_GENERIC_PLAN)
                return false;
        if (plansource->cursor_options & CURSOR_OPT_CUSTOM_PLAN)
                return true;

        /* Generate custom plans until we have done at least 5 (arbitrary) */
        if (plansource->num_custom_plans < 5)
                return true;

Even if we change the data and analyze the table again, we will still get a generic plan once PostgreSQL has settled on it:

pgbench=# insert into demo select i, 'ddd' from generate_series (201,210) i;
INSERT 0 10
pgbench=# insert into demo select i, 'ee' from generate_series (211,211) i;
INSERT 0 1
pgbench=# analyze demo;
ANALYZE
pgbench=# select b,count(*) from demo group by b order by b;
  b  | count 
-----+-------
 aaa |   100
 bbb |   100
 ccc |   100
 ddd |    10
 ee  |     1
(5 rows)

pgbench=# explain (analyze) execute my_stmt ( 'ddd' );
                                           QUERY PLAN                                           
------------------------------------------------------------------------------------------------
 Seq Scan on demo  (cost=0.00..5.88 rows=78 width=8) (actual time=0.147..0.151 rows=10 loops=1)
   Filter: (b = $1)
   Rows Removed by Filter: 300
 Planning time: 0.293 ms
 Execution time: 0.190 ms
(5 rows)

The situation changes when we have much more data, the data is not uniformly distributed, and we have an index on the column “b”:

pgbench=# truncate demo;
TRUNCATE TABLE
pgbench=# insert into demo select i, 'aaa' from generate_series (1,1000000) i;
INSERT 0 1000000
pgbench=# insert into demo select i, 'bbb' from generate_series (1000001,2000000) i;
INSERT 0 1000000
pgbench=# insert into demo select i, 'ccc' from generate_series (2000001,3000000) i;
INSERT 0 1000000
pgbench=# insert into demo select i, 'eee' from generate_series (3000001,3000010) i;
INSERT 0 10
pgbench=# create index i1 on demo (b);
CREATE INDEX
pgbench=# select b,count(*) from demo group by b order by b;
  b  |  count  
-----+---------
 aaa | 1000000
 bbb | 1000000
 ccc | 1000000
 eee |      10
(4 rows)

pgbench=# prepare my_stmt as select * from demo where b = $1;
PREPARE

No matter how often we execute the following statement (which asks for ‘eee’), we never get a generic plan:

pgbench=# explain (analyze) execute my_stmt ('eee');
                                                QUERY PLAN                                                
----------------------------------------------------------------------------------------------------------
 Index Scan using i1 on demo  (cost=0.43..4.45 rows=1 width=8) (actual time=0.054..0.061 rows=10 loops=1)
   Index Cond: (b = 'eee'::text)
 Planning time: 0.249 ms
 Execution time: 0.106 ms
(4 rows)

-----> REPEAT THAT HOW OFTEN YOU WANT BUT AT LEAST 10 TIMES

pgbench=# explain (analyze) execute my_stmt ('eee');
                                                QUERY PLAN                                                
----------------------------------------------------------------------------------------------------------
 Index Scan using i1 on demo  (cost=0.43..4.45 rows=1 width=8) (actual time=0.054..0.061 rows=10 loops=1)
   Index Cond: (b = 'eee'::text)
 Planning time: 0.249 ms
 Execution time: 0.106 ms

This is because the custom plan (which includes the costs for re-planning) is always cheaper than the generic plan (which does not include the costs for re-planning) when we have a data distribution like that. The documentation is very clear about that: “Using EXECUTE values which are rare in columns with many duplicates can generate custom plans that are so much cheaper than the generic plan, even after adding planning overhead, that the generic plan might never be used”.
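The decision can be modeled in a few lines. This is a simplified sketch of the plancache.c policy quoted above (the real code also folds the planning overhead into the custom plan's average cost); the cost numbers are illustrative, taken from the plans shown in this post:

```python
def use_custom_plan(num_custom_plans, avg_custom_cost, generic_cost):
    # the first five executions always get a custom plan
    if num_custom_plans < 5:
        return True
    # afterwards, stick with custom plans only while they stay
    # cheaper than the cached generic plan
    return avg_custom_cost < generic_cost

# 'aaa' on the small table: custom and generic plans cost the same
# seq scan (5.75), so the generic plan takes over after 5 runs
print(use_custom_plan(5, 5.75, 5.75))   # False -> generic plan
# 'eee' on the big table: the index scan (~4.45) keeps beating a
# generic seq scan, so the custom plan wins forever
print(use_custom_plan(10, 4.45, 5.88))  # True -> custom plan
```

That is the whole story behind the two experiments above: for uniformly distributed data the plans cost the same and the generic plan wins on planning overhead; for the rare 'eee' value the custom plan stays cheaper and is never abandoned.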

Hope that helps.


OBUG 2019 – First event from Oracle Users Group in Belgium!


Introduction

This is the first edition of the Techdays, the Oracle community event in Belgium. The event took place in Antwerp over the past two days, and many speakers came from around the world to talk about their experience on focused subjects. It was really amazing to hear such high-level conferences.
And it was a great pleasure for me, because I worked in Belgium for several years.

I will try to give you a glimpse of what I found most interesting among the sessions I chose.

Cloud: you cannot ignore it anymore!

Until now, I did not have much interest in the cloud, because my job is actually helping customers build on-premises (that means not in the cloud) environments. Even if you can live without the cloud, you cannot ignore it anymore, because it touches budget, infrastructure optimization, strategy, flexibility and scalability.
The cloud brings a modern pay-for-what-you-need-now model, compared to monolithic and costly infrastructures you have to pay off over years. The cloud brings a service for a problem.

Cloud providers have understood that, sooner or later, customers will move at least parts of their infrastructure into the cloud.

Going into the cloud is not a simple yes-or-no question. It's a real project that you'll have to study, as it requires rethinking nearly everything. Migrating your current infrastructure to the cloud without any changes would probably be a mistake. Don't consider the cloud as just putting your servers elsewhere.

I learned that Oracle datacenters are actually not dedicated datacenters: most of the time, their cloud machines are located in existing datacenters from different providers, making your connection to the cloud sometimes only meters away from your actual servers!

And for those who still don't want to move their critical data outside, Oracle brings another solution named Cloud at Customer. It's basically the same as the pure cloud in terms of management, but Oracle delivers the servers into your own datacenter, keeping your data secured. At least for those who are thinking that way :-)

EXADATA

You probably know that EXADATA is the best database machine you can buy, and it's true. But EXADATA is not the machine every customer needs (actually the ODA is much more affordable and popular); only very demanding databases can benefit from this platform.

Gurmeet Goindi, the EXADATA product manager at Oracle, told us that EXADATA will keep increasing the gap over classic technologies.

For example, I heard that EXADATA's maximum number of PDBs will increase to 4'000 in one CDB. Even if you'll probably never reach this limit, it's quite an amazing number.

While I didn't hear about new hardware coming shortly, major Exadata enhancements will come with the 19c software stack release.

19c is coming

Maria Colgan from Oracle introduced the 19c new features and enhancements.

We were quite used to previous Oracle releases, R1 and R2: R1 being the very first release with low customer adoption, and R2 being the mature release with longer-term support. But after the big gap between 12cR1 and 12cR2, Oracle changed the rules in favour of a more Microsoft-like versioning: the version number is now the year of product delivery. Is there still a longer-term release like 11gR2 was? For sure, and 19c will be the one you were waiting for. You may know that 18c was some kind of 12.2.0.2; 19c will be the latest version of the 12.2 kernel, 12.2.0.3. If you plan to migrate your databases from previous releases this year, you should know that 19c will be available shortly, and that it could be worth the wait.

19c will bring a bunch of new features, like automatic indexing, which could become part of a new option. PDBs can now have separate encryption keys, instead of only one for the whole CDB.

The In-Memory option will receive enhancements and now supports storing objects in both column and row format. In-Memory content can now differ between the primary and the active standby, making the distribution of read-only statements more efficient. If your memory is not big enough to store all your In-Memory data, it will soon be possible (on Exadata only) to put the columnar data into flash to keep the benefit of columnar scans outside memory. It makes sense.

There was also a brief overview of the new “runaway queries” feature: there will be a kind of quarantine for statements that reach resource plan limits. The goal is to avoid the DBA having to connect and kill the session to free up system resources.

The Autonomous Database will also be there in 19c, but at first limited to Cloud at Customer Exadatas. It will take some years for all databases to become autonomous :-)

Zero Downtime Migration tool

What you've been dreaming of for years is now nearly there: a new automatic migration tool without downtime. But don't dream too much, because it seems to be limited to migrations to the cloud, and source and target should be in the same version (11g, 12c or 18c). I actually don't know whether it will support migrations from on-premises to on-premises.

Ricardo Gonzalez told us that with this new tool you will be able to migrate to the cloud very easily. It's a one-button approach, with a lot of intelligence inside the tool to maximize the security of the operation. As described, it looks great, and everything is considered: pre-checks, preparation, migration, post-migration, post-checks and so on. You'll still have to do the final switchover yourself, and yes, it's based on Data Guard, so you can trust the tool, as it relies on something reliable. If something goes wrong, you can still move back to the on-premises database.

Autoupgrade tool

Another great tool, presented by Mike Dietrich, is coming with 19c. It's a Java-based tool able to plan and manage database upgrades with a single input file describing the environment. It seems very useful if you have a lot of databases to upgrade. Refer to MOS Note 2485457.1 if you're interested.

Conclusion

So many interesting things to learn in these 2 days! Special thanks to Philippe Fierens and Pieter Van Puymbroeck for the organization.


How To Create A New Container Group Using Azure CLI


Steps To Create A New Container Group Using Azure CLI

Before starting, an MSDN account is needed to use the Azure platform; please refer to your manager for further information.

In this tutorial, the Azure CLI command-line tool will be installed on Windows and we will deploy a website running the open-source Ghost platform.

Choose the right operating system to install the command-line tool; the information can be found here.

When the installation is done, launch the command prompt and type az login to authenticate on the Azure platform:

az login

Wait until a browser is launched so that you can log in. You have now logged into Microsoft Azure!

To create a new container, the following commands will be used:

  • az group create
  • az container create
  • az container show
  • az container logs
  • az group delete

Open the command prompt and type powershell to start.

Define the variable resource group that will hold the container group:

  • $resourceGroup = "TutoGhost" (hit enter)

Define the variable location:

  • $location = "westeurope" (hit enter)

Create the resource group in the specified location:

  • az group create -n $resourceGroup -l $location (hit enter)

The resource group is now created.

Define the variable container group (only lowercase letters, numbers and dashes):

  • $containerGroupName = "site1-ghost" (hit enter)

Create the container group:

  • az container create -g $resourceGroup -n $containerGroupName `
  • --image ghost `
  • --ports 2368 `
  • --ip-address public `
  • --dns-name-label testtutoghost

The dns-name-label should be unique.

Hit enter to execute the commands.
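The container group name constraint mentioned above (only lowercase letters, numbers and dashes) can be sketched as a quick check in plain Python. This is my own illustrative helper, not part of the Azure CLI, and Azure enforces additional rules (e.g. on name length) not modeled here:

```python
import re

# Pattern for the rule stated above: lowercase letters,
# numbers and dashes only.
NAME_PATTERN = re.compile(r"^[a-z0-9-]+$")

def is_valid_container_group_name(name):
    """Return True if the name uses only lowercase letters, numbers and dashes."""
    return bool(NAME_PATTERN.match(name))

print(is_valid_container_group_name("site1-ghost"))   # True
print(is_valid_container_group_name("Site1_Ghost"))   # False: uppercase and underscore
```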

During the execution process, use the following command-line to check the status of the container group:

  • az container show `
  • -g $resourceGroup -n $containerGroupName

The provisioningState property displays the three creation states of the container group: pending, creating and succeeded.
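Since az container show prints JSON, the provisioningState can also be extracted programmatically, for instance to poll until the deployment finishes. A minimal sketch in Python (the JSON below is a hand-made sample, not real az output):

```python
import json

# Hand-made sample of the JSON printed by `az container show`;
# only the field discussed above is included.
sample_output = '{"name": "site1-ghost", "provisioningState": "Succeeded"}'

def provisioning_state(az_show_json):
    """Extract the provisioningState field from the command's JSON output."""
    return json.loads(az_show_json).get("provisioningState")

print(provisioning_state(sample_output))  # Succeeded
```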

A random public IP address is assigned, along with a fully qualified domain name, which is testtutoghost.westeurope.azurecontainer.io.

To check if the container is up and running, open a browser and type the FQDN followed by :2368.

To check the container group logs, type the following command:

  • az container logs `
  • -n $containerGroupName -g $resourceGroup

As this is a test environment, to avoid paying extra costs make sure to delete the resource group (command-line below) when it is no longer used.

  • az group delete -n $resourceGroup -y

Cet article How To Create A New Container Group Using Azure CLI est apparu en premier sur Blog dbi services.

Docker EE – UCP 3.1.2 installation failed in Azure


Docker EE 2.1 is a Containers-as-a-Service (CaaS) platform for on-premises or cloud deployments. To manage your cluster of containers, Docker EE allows you to install UCP (Universal Control Plane).

When you plan to install UCP 3.1.2 in Azure, the installation fails during the deployment of the UCP service, and you will get the following error message.

(screenshot: error message shown during the UCP service deployment)

[docker@docker-manager000 ~]$ docker logs ucp-reconcile

(screenshot: ucp-reconcile container logs)

If you google the error message you will get a lot of answers, but almost none of them solve the issue. According to the Docker community, the issue affects UCP 3.1.0, 3.1.1 and 3.1.2 in certain cases.

The best solution provided by the community is to install a previous version of UCP (3.0.7) and upgrade it from within the UCP dashboard.
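Only the 3.1.0 to 3.1.2 releases are reported as affected. A quick sketch of that version check (my own helper for illustration, not a Docker tool):

```python
# Releases reported by the community to fail during the
# UCP service deployment in Azure.
AFFECTED_UCP_VERSIONS = {"3.1.0", "3.1.1", "3.1.2"}

def ucp_install_is_affected(version):
    """Return True if the given UCP version is one of the affected releases."""
    return version in AFFECTED_UCP_VERSIONS

print(ucp_install_is_affected("3.1.2"))  # True
print(ucp_install_is_affected("3.0.7"))  # False: the suggested fallback version
```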

Before installing another version of UCP, remove the current installation:

[docker@docker-manager000 ~]$ sudo docker swarm leave --force
[docker@docker-manager000 ~]$ sudo docker stop $(sudo docker ps -aq)
[docker@docker-manager000 ~]$ sudo docker rm $(sudo docker ps -aq) --force
[docker@docker-manager000 ~]$ sudo docker rmi $(sudo docker images -aq) --force
[docker@docker-manager000 ~]$ sudo docker network prune
[docker@docker-manager000 ~]$ sudo docker system prune --force

Retry the installation with UCP version 3.0.7:

[docker@docker-manager000 ~]$ docker container run --rm -it \
>   --name ucp \
>   -v /var/run/docker.sock:/var/run/docker.sock \
>   docker/ucp:3.0.7 install \
>   --host-address 52.170.5.187 \
>   --pod-cidr 10.0.0.0/24 \
>   --cloud-provider Azure \
>   --interactive

Output:

Unable to find image 'docker/ucp:3.0.7' locally

3.0.7: Pulling from docker/ucp

ff3a5c916c92: Pull complete

a52011fa0ead: Pull complete

dbfd538ef7b5: Pull complete

Digest: sha256:21219684bfafd78c49669865b769c775f132c5b0915a5ec5fd8b41d9a6f52597

Status: Downloaded newer image for docker/ucp:3.0.7

INFO[0000] Your engine version 18.09.2, build 1ac774d (3.10.0-862.11.6.el7.x86_64) is compatible with UCP 3.0.7 (a422a90)

Admin Username: test

Admin Password:

Confirm Admin Password:

INFO[0016] Pulling required images... (this may take a while)

INFO[0016] Pulling docker/ucp-cfssl:3.0.7

INFO[0017] Pulling docker/ucp-dsinfo:3.0.7

INFO[0036] Pulling docker/ucp-metrics:3.0.7

INFO[0039] Pulling docker/ucp-hyperkube:3.0.7

INFO[0046] Pulling docker/ucp-kube-compose:3.0.7

INFO[0074] Pulling docker/ucp-kube-dns-dnsmasq-nanny:3.0.7

INFO[0078] Pulling docker/ucp-controller:3.0.7

INFO[0084] Pulling docker/ucp-etcd:3.0.7

INFO[0085] Pulling docker/ucp-auth:3.0.7

INFO[0087] Pulling docker/ucp-calico-cni:3.0.7

INFO[0090] Pulling docker/ucp-kube-dns-sidecar:3.0.7

INFO[0092] Pulling docker/ucp-interlock-extension:3.0.7

INFO[0093] Pulling docker/ucp-auth-store:3.0.7

INFO[0095] Pulling docker/ucp-agent:3.0.7

INFO[0096] Pulling docker/ucp-compose:3.0.7

INFO[0097] Pulling docker/ucp-swarm:3.0.7

INFO[0099] Pulling docker/ucp-pause:3.0.7

INFO[0100] Pulling docker/ucp-calico-kube-controllers:3.0.7

INFO[0101] Pulling docker/ucp-calico-node:3.0.7

INFO[0110] Pulling docker/ucp-kube-dns:3.0.7

INFO[0135] Pulling docker/ucp-interlock:3.0.7

INFO[0135] Pulling docker/ucp-interlock-proxy:3.0.7

WARN[0139] None of the hostnames we'll be using in the UCP certificates [docker-manager000 127.0.0.1 172.17.0.1 52.170.5.187] contain a domain component.  Your generated certs may fail TLS validation unless you only use one of these shortnames or IPs to connect.  You can use the --san flag to add more aliases

You may enter additional aliases (SANs) now or press enter to proceed with the above list.

Additional aliases:

INFO[0000] Initializing a new swarm at 52.170.5.187

INFO[0020] Installing UCP with host address 52.170.5.187 - If this is incorrect, please specify an alternative address with the '--host-address' flag

INFO[0020] Deploying UCP Service...

INFO[0120] Installation completed on docker-manager000 (node 5nhn0oqi9h8tjo99t15v3ckr5)

INFO[0120] UCP Instance ID: ccraf0c9vbrcnkxix11r7bdue

INFO[0120] UCP Server SSL: SHA-256 Fingerprint=9D:E3:7D:4C:12:03:B9:03:21:C6:5A:0C:87:C3:AA:A2:48:B0:C1:91:46:D4:12:A1:6F:05:64:C3:72:13:4B:6F

INFO[0120] Login to UCP at https://52.170.5.187:443

INFO[0120] Username: test

INFO[0120] Password: (your admin password)

StackOverflow post : https://stackoverflow.com/questions/54189511/docker-unable-to-reconcile-state-of-kubernetes-addons-component-error-while-dep/54646555#54646555

Cet article Docker EE – UCP 3.1.2 installation failed in Azure est apparu en premier sur Blog dbi services.

A few scripting languages for Documentum


Beside the obsolete dmbasic, the cryptic dmawk, the verbose java with the DfCs, and the limited iapi (for API) and idql (for DQL) command-line tools, Documentum does not offer any scripting language for the administrator, and the out-of-the-box experience is quite frustrating in this respect. It has been so even before the java trend, so it is not a maneuver to force the use of the DfCs or to rely on them for administrative tasks. It looks more like an oversight, or like this was considered a low-priority need.
Of course, people didn’t stop at this situation and developed their own bindings with their preferred scripting language. I found db:Documentum for perl, yours truly’s DctmAPI.py for python (refer to the article here), dmgawk for gawk (see here), and of course all the JVM-based languages that leverage the DfCs such as groovy, beanshell, jython, jruby, etc… Such JVM-based scripting languages actually only need to import the DfCs library and off they go for the next killer script. In this article, I’ll show how to set up the binding for a few of those languages under the linux O/S.

db::Documentum

This is a perl v5 module that permits access to the Documentum api from the perl interpreter. It was developed by M. Scott Roth, see here, originally only for the Windows O/S and EDMS v3.1.5. Thanks to other contributors, it is now compilable under several flavors of Unices, including Linux. It is downloadable from here.
You’ll need the GNU C compiler to generate the module. Here is a detailed, step by step description of the installation procedure.

# download the archive Db-Documentum-1.64.tar.gz from here http://www.perl.com/CPAN/modules/by-module/Db/
# decompress it in, say, db::Documentum
tar -zxvf Db-Documentum-1.64.tar.gz
 
# move to the newly created directory Db-Documentum-1.64;
cd Db-Documentum-1.64
 
# prepare the following needed paths;
# DM_HOME
# path to the Documentum installation, e.g. /home/dmadmin/documentum
# DM_LIB
# path to the Documentum libdmcl.so library, e.g. ${DM_HOME}/product/7.3/bin
# note: there is also the obsolescent libdmcl40.so but I've encountered problems with it, mostly "Segmentation fault (core dumped)" crashes, so use the JNI-based libdmcl.so instead; it starts more slowly as it uses java but it is more reliable and is still supported;
# DM_INCLUDE
# path to the include file dmapp.h, e.g. ${DM_HOME}/share/sdk/include
 
# edit the linux section in Makefile.PL and provide the above paths;
# also, move up the $DM_LIB initialization before the dmcl.so test and comment the line beginning with $DM_CLIENT_LIBS =
# here is how that section looks like after editing it:

elsif ( $OS =~ /linux/i ) {
 
# The path to your Documentum client installation.
$DM_HOME = '/home/dmadmin/documentum';
 
# This is kinda a gottcha, the Linux stuff is in unix/linux
# You may have to tweak these.
 
# Path to documentum client libraries.
#$DM_LIB = "$DM_HOME/unix/linux";
$DM_LIB = "$DM_HOME/product/7.3/bin";
 
# dmcl.so file
if (! -e "$DM_LIB/libdmcl.so") {
warn "\n*** WARNING *** Could not find $DM_LIB/libdmcl.so.\nThe module will not make without " .
"libdmcl.so.\n";
}
 
# Path to directory where dmapp.h lives.
#$DM_INCLUDE = "-I/documentum/share/sdk/include/";
$DM_INCLUDE = "-I$DM_HOME/share/sdk/include/";
 
#$DM_CLIENT_LIBS = "-L$DM_LIB -ldmapi -ldmupper -ldmlower -ldmcommon -ldmupper -lcompat";
$DM_RPC_LIBS = "-L$DM_LIB -lnwrpc -lnwstcp";
$OS_LIBS = "-lsocket -lnsl -lintl";
$CPP_LIBS = "-lC";
$LD_LIBS = "-ldl";
$CPP_INC = "";
$CCFLAGS = "";
}

 
# execute the Makefile.PL;
perl Makefile.PL
 
# if the error below occurs, you must install the perl-devel module using the native package deployment tool for your distribution,
# e.g. sudo yum install perl-devel for centos:
# Can't locate ExtUtils/MakeMaker.pm in @INC (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at Makefile.PL line 1.
#BEGIN failed--compilation aborted at Makefile.PL line 1.
 
# a Makefile file has been generated; correct the 2 lines below as showed;
EXTRALIBS = -L/home/dmadmin/documentum/product/7.3/bin -ldmcl
LDLOADLIBS = -L/home/dmadmin/documentum/product/7.3/bin -ldmcl -lc
 
# use the newly produced Makefile;
make
 
# run some tests to check the new module;
make test
 
# the test completes successfully but, sometimes, it is followed by SIGSEGV in the JVM;
# as it occurs at program termination, it can be ignored;
 
# install the new perl module system-wide;
sudo make install

Now that we have the module, let's use it in a simple test case: dumping all the dm_sysobjects linked in the cabinet /dmadmin (its id is 0c00c35080000105) in the out-of-the-box dmtest repository.

cat my-test.pl
#!/usr/bin/perl

use Db::Documentum qw(:all);
use Db::Documentum::Tools qw(:all);

# print version
Db::Documentum::version;

$docbase = "dmtest";
$username = "dmadmin";
$password = "dmadmin";

# connect;
$result = dm_Connect($docbase, $username, $password) || die("could not connect in " . $docbase . " as " . $username . " with password " . $password);

# run the query;
$status = dmAPIExec("execquery,c,,select r_object_id, r_object_type, object_name from dm_sysobject where folder(ID('0c00c35080000105'))");
if (1 != $status) {
   $err_mess = dmAPIGet("getmessage,c");
   print $err_mess;
   die();
}
$query_id = dmAPIGet("getlastcoll,c");
printf "%-16s  %-20s  %s\n", "r_object_id", "r_object_type", "object_name";
while (dmAPIExec("next,c," . $query_id)) {
   $r_object_id = dmAPIGet("get,c," . $query_id . ",r_object_id");
   $r_object_type = dmAPIGet("get,c," . $query_id . ",r_object_type");
   $object_name = dmAPIGet("get,c," . $query_id . ",object_name");
   printf "%16s  %-20s  %s\n", $r_object_id, $r_object_type, $object_name;
}
dmAPIExec("close,c," . $query_id);

# disconnect;
dmAPIExec("disconnect,c");
exit;

The script is very trivial and needs little explanation. Note the new functions dm_Connect, dmAPIExec and dmAPIGet. dmAPISet, dmAPIInit and dmAPIDeInit are also available, but the last two don't need to be invoked explicitly, for they are called automatically at module load-time.
Example of execution:

perl my-test.pl
 
Perl version: 5.016003
Db::Documentum version: 1.64
DMCL version: 7.3.0000.0205
 
r_object_id r_object_type object_name
0800c3508000019b dm_job dm_PropagateClientRights
0800c3508000019c dm_job dm_PostUpgradeAction
0800c35080000408 dmc_wfsd_type_info integer
0800c35080000409 dmc_wfsd_type_info boolean
0800c3508000040a dmc_wfsd_type_info double
0800c3508000040b dmc_wfsd_type_info string
0800c3508000040c dmc_wfsd_type_info date
0800c3508000040d dmc_wfsd_type_info repeating integer
0800c3508000040e dmc_wfsd_type_info repeating boolean
0800c3508000040f dmc_wfsd_type_info repeating double
0800c35080000410 dmc_wfsd_type_info repeating string
0800c35080000411 dmc_wfsd_type_info repeating date
0800c35080000426 dm_sysobject dm_indexAgentAcquireLock
0800c35080000587 dm_client_rights dfc_localhost_c0XP4a
0800c35080001065 dm_jms_config JMS dmtest:9080 for dmtest.dmtest
0800c35080001066 dm_jms_config JMS localhost.localdomain:9080 for dmtest.dmtest
0b00c35080000233 dm_folder Workspace Customizations
1700c3508000015d dm_outputdevice Default LP Printer
3a00c3508000013f dm_location storage_01
3a00c35080000140 dm_location common
3a00c35080000141 dm_location events
3a00c35080000142 dm_location log
3a00c35080000143 dm_location config
3a00c35080000144 dm_location dm_dba
3a00c35080000145 dm_location auth_plugin
3a00c35080000146 dm_location ldapcertdb_loc
3a00c35080000147 dm_location temp
3a00c35080000148 dm_location dm_ca_store_fetch_location
3a00c35080000153 dm_location convert
3a00c35080000154 dm_location dsearch
3a00c35080000155 dm_location nls_chartrans
3a00c35080000156 dm_location check_signature
3a00c35080000157 dm_location validate_user
3a00c35080000158 dm_location assume_user
3a00c35080000159 dm_location secure_common_area_writer
3a00c3508000015a dm_location change_password_local
3a00c3508000015b dm_location thumbnail_storage_01
3a00c3508000015c dm_location streaming_storage_01
3a00c35080000226 dm_location replicate_location
3a00c35080000227 dm_location replica_storage_01
3e00c35080000149 dm_mount_point share
6700c35080000100 dm_plugin CSEC Plugin
6700c35080000101 dm_plugin Snaplock Connector

Now, the power of perl and its more than 25’000 modules are at our fingertips to help us tackle those hairy administrative tasks !

groovy

Being a JVM-based language, groovy runs on top of a JVM and therefore benefits from all its advantages, such as automatic garbage collection (although this is not an exclusivity of java) and portability (ditto), and can tap into the countless existing java libraries (ditto).
groovy is used in Documentum's xPlore.
groovy is a powerful, yet easy to learn, programming language still actively maintained by the Apache foundation, cf. here. Similar to java but without its verbosity, it should instantly appeal to java developers doing Documentum administrative tasks.
groovy comes with groovysh, a comfortable and powerful interactive shell for trying out statements and experimenting with the language.
By importing the DfCs, we can use groovy to access Documentum repositories. Knowledge of the DfCs is required of course.
To install groovy, use the distribution’s package manager; e.g. on my Ubuntu, I’ve used:

sudo apt-get install groovy

while on Centos, the following command will do it:

sudo yum install groovy

To test it, let’s use the same program as for perl, but rewritten a la groovy:

#! /usr/bin/groovy

import java.io.*;

import com.documentum.fc.client.*;
import com.documentum.fc.common.*;

   static void main(String[] args) {
      docbroker = "dmtest";
      docbase = "dmtest";
      username = "dmadmin";
      password = "dmadmin";
   
      println("attempting to connect to " + docbase + " as " + username + "/" + password + " via docbroker " + docbroker);
   
      try {
         client = DfClient.getLocalClient();
      
         config = client.getClientConfig();
         config.setString ("primary_host", docbroker);
      
         logInfo = new DfLoginInfo();
         logInfo.setUser(username);
         logInfo.setPassword(password);
         docbase_session = client.newSession(docbase, logInfo);
      
         if (docbase_session != null) {
            println("Got a session");
      
            // do something in the session;
            folderId = new DfId("0c00c35080000105");
            folder = docbase_session.getObject(folderId);
            attrList = "r_object_id,r_object_type,object_name";
            coll = folder.getContents(attrList);
      
            while (coll.next())
               System.out.printf("ObjectId=%-16s r_object_type=%-20s ObjectName=%s\n",
                                 coll.getString("r_object_id"),
                                 coll.getString("r_object_type"),
                                 coll.getString("object_name"));
            println("Finished");
            docbase_session.disconnect();
         }
         else
            println("Didn't get a session");
      }
      catch (e) {
         println("Exception was: " + e);
      }
   }

The two com.documentum import statements load the DfCs, so don't forget to add them to the CLASSPATH; normally they already are there, because the execution environment is a Documentum client, e.g.:

export JAVA_HOME=/home/dmadmin/documentum/shared/java64/1.8.0_77
export CLASSPATH=/home/dmadmin/documentum/shared/dfc/dfc.jar
export PATH=$JAVA_HOME/bin:$PATH

Note also that besides its own built-in functions such as println, groovy can directly use the equivalent functions from the java libraries, such as System.out.printf.
To invoke the script:

groovy tgroovy.gry
# or make it executable and call it:
chmod +x tgroovy.gry
./tgroovy.gry

Here is its output:

attempting to connect to dmtest as dmadmin/dmadmin via docbroker dmtest
Got a session
ObjectId=0800c3508000019b r_object_type=dm_job ObjectName=dm_PropagateClientRights
ObjectId=0800c3508000019c r_object_type=dm_job ObjectName=dm_PostUpgradeAction
ObjectId=0800c35080000408 r_object_type=dmc_wfsd_type_info ObjectName=integer
ObjectId=0800c35080000409 r_object_type=dmc_wfsd_type_info ObjectName=boolean
ObjectId=0800c3508000040a r_object_type=dmc_wfsd_type_info ObjectName=double
...
ObjectId=3a00c35080000227 r_object_type=dm_location ObjectName=replica_storage_01
ObjectId=3e00c35080000149 r_object_type=dm_mount_point ObjectName=share
ObjectId=6700c35080000100 r_object_type=dm_plugin ObjectName=CSEC Plugin
ObjectId=6700c35080000101 r_object_type=dm_plugin ObjectName=Snaplock Connector
Finished

jython

jython is a python implementation written in java, see here.
As such, it offers most of the features of the powerful python language, although it stays behind the latest python version (v2.5.3 vs. 3.7).
Like java, groovy, jruby, scala, etc., jython runs inside a JVM and can use all the java libraries such as the DfCs, becoming a Documentum client with no changes except adding the DfCs to the $CLASSPATH.
jython appeals especially to people who already know python; like for groovy, a basic level knowledge of the DfCs is required.
To install jython, use your distribution’s package manager, e.g.

# for ubuntu:
sudo apt-get install jython

Make sure the DfCs are present in $CLASSPATH, otherwise add them:

export CLASSPATH=/home/dmadmin/documentum/shared/dfc/dfc.jar:$CLASSPATH
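If you are unsure whether the dfc.jar is already part of a classpath string, a trivial check is possible (plain Python, just for illustration):

```python
def has_dfc(classpath):
    """Return True if one of the colon-separated classpath entries ends with dfc.jar."""
    return any(entry.endswith("dfc.jar") for entry in classpath.split(":"))

print(has_dfc("/home/dmadmin/documentum/shared/dfc/dfc.jar:/usr/share/java"))  # True
print(has_dfc("/usr/share/java"))                                              # False
```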

When running the test script below, the DfCs complain about a bad date format:

20:42:05,482 ERROR [File Watcher] com.documentum.fc.common.DfPreferences - [DFC_PREFERENCE_BAD_VALUE] Bad value for preference "dfc.date_format", value="M/d/yyyy, h:mm:ss a"
com.documentum.fc.common.DfException: Illegal syntax found in the date format 'M/d/yyyy, h:mm:ss a'. The default localized short date format will be used.
at com.documentum.fc.common.DfException.newIllegalDateFormatException(DfException.java:109)

Evidently, they are unhappy with the default date format. The work-around is to force one in the dfc.properties file by adding the line below (choose a date format that best suits you):

dfc.date_format=dd.MM.yyyy HH:mm:ss

Since the error did not occur with groovy (nor in the provided JNI-enabled command-line tools such as iapi, idql and dmawk), it is not the DfCs per se that have this problem but the combination of java + jython + DfCs.
Here comes the test script:

#!/usr/bin/env jython

# install jython via your O/S package manager;
# export CLASSPATH=/home/dmadmin/documentum/shared/dfc/dfc.jar:$CLASSPATH
# edit the documentum/shared/config/dfc.properties and add a dfc.date property, e.g.:
# dfc.date_format=dd.MM.yyyy HH:mm:ss
# execute:
#   jython test.jy
# or:
#   chmod +x test.jy; ./test.jy
# can also be execute interactively as follows:
# start jython:
#    jython
# call the test script;
#    execfile("/home/dmadmin/test.jy")

import traceback
import com.documentum.fc.client as DFCClient
import com.documentum.fc.common as DFCCommon

docbroker = "dmtest"
docbase = "dmtest"
username = "dmadmin"
password = "dmadmin"
print("attempting to connect to " + docbase + " as " + username + "/" + password + " via docbroker " + docbroker)
try:
  client = DFCClient.DfClient.getLocalClient()

  config = client.getClientConfig()
  config.setString ("primary_host", docbroker)

  logInfo = DFCCommon.DfLoginInfo()
  logInfo.setUser(username)
  logInfo.setPassword(password)
  docbase_session = client.newSession(docbase, logInfo)

  if docbase_session is not None:
    print("Got a session")
    # do something in the session;
    folderId = DFCCommon.DfId("0c00c35080000105");
    folder = docbase_session.getObject(folderId);
    attrList = "r_object_id,r_object_type,object_name";
    coll = folder.getContents(attrList);
    while(coll.next()):
       print("ObjectId=" + "%-16s" % coll.getString("r_object_id") + 
             " r_object_type=" + "%-20s" % coll.getString("r_object_type") +
             " ObjectName=" + coll.getString("object_name"));
    print("Finished");
    docbase_session.disconnect()
  else:
    print("Didn't get a session")
except Exception:
    print(Exception)

Execution:

jython test.jy
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.python.core.PySystemState (file:/usr/share/java/jython-2.5.3.jar) to method java.io.Console.encoding()
WARNING: Please consider reporting this to the maintainers of org.python.core.PySystemState
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
attempting to connect to dmtest as dmadmin/dmadmin via docbroker dmtest
Got a session
ObjectId=0800c3508000019b r_object_type=dm_job ObjectName=dm_PropagateClientRights
ObjectId=0800c3508000019c r_object_type=dm_job ObjectName=dm_PostUpgradeAction
ObjectId=0800c35080000408 r_object_type=dmc_wfsd_type_info ObjectName=integer
ObjectId=0800c35080000409 r_object_type=dmc_wfsd_type_info ObjectName=boolean
ObjectId=0800c3508000040a r_object_type=dmc_wfsd_type_info ObjectName=double
...
ObjectId=3a00c35080000227 r_object_type=dm_location ObjectName=replica_storage_01
ObjectId=3e00c35080000149 r_object_type=dm_mount_point ObjectName=share
ObjectId=6700c35080000100 r_object_type=dm_plugin ObjectName=CSEC Plugin
ObjectId=6700c35080000101 r_object_type=dm_plugin ObjectName=Snaplock Connector
Finished

Ironically, jython's launcher is a perl script; it basically initializes java and python environment variables such as the classpath, java options, and the jython home and path. If the initial WARNINGs are disruptive, edit that script and correct the problem, or just redirect stderr to /dev/null, e.g.:

jython test.jy 2> /dev/null

So, which one to choose?

To summarize, the decision tree below may help choosing one scripting language among the preceding ones.

DfCs knowledge ?:
    java proficiency ?:
        choose groovy with the DfCs
    else python proficiency ?:
        choose jython with the DfCs
    else select one of the following ones:
        get acquainted with one of the above languages
        | choose another JVM-based language
        | give up the DfCs and use DQL/API with perl, python or gawk instead (see below)
else perl proficiency ?:
    choose db::Documentum
else python proficiency ?:
    choose python and DctmAPI.py
else nawk/gawk proficiency ?:
    choose gawk and dmgawk binding
else select one of the following:
    learn one of the above scripting languages
    | develop a Documentum binding for your preferred scripting language not in the list
    | hire dbi-services for your administrative tasks or projects ;-)
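The same decision tree can be written down as a small helper function, purely illustrative (the tool names are those discussed above):

```python
def pick_scripting_language(knows_dfcs, knows_java=False, knows_python=False,
                            knows_perl=False, knows_awk=False):
    """Return a suggested Documentum scripting option, following the
    decision tree above."""
    if knows_dfcs:
        if knows_java:
            return "groovy with the DfCs"
        if knows_python:
            return "jython with the DfCs"
        return "learn one of the above, or use DQL/API from perl/python/gawk"
    if knows_perl:
        return "db::Documentum"
    if knows_python:
        return "python and DctmAPI.py"
    if knows_awk:
        return "gawk and the dmgawk binding"
    return "learn one of the above, write your own binding, or call dbi-services"

print(pick_scripting_language(knows_dfcs=True, knows_python=True))  # jython with the DfCs
print(pick_scripting_language(knows_dfcs=False, knows_perl=True))   # db::Documentum
```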

DfCs are clearly indicated for java programmers. They are still supported and new features are always accessible from them. There are tasks which cannot be done through the API or DQL but only through the DfCs, but generally those are out of the scope of an administrator. Note that even the non-java, non-DfCs languages still end up invoking the DfCs in the background, because they are linked with the libdmcl.so library, and that one makes JNI behind-the-scenes calls to the DfCs for them, thus hiding their complexity. Hopefully, this shared library will stay with us for some time still, otherwise our scripting language choice will be seriously restricted to JVM-based languages and the DfCs.

Cet article A few scripting languages for Documentum est apparu en premier sur Blog dbi services.

New search function for Oracle database features


Oracle released a new way to search for database features, and it is …

..really great.
Try out the cool new search application for Oracle database features.

What is special about it? The new site gives you a very smart overview of the database features and in which versions they are available (at least from 11 to 19).

As example, when you search for “compression” it shows you this output:

With a click on the feature you are interested in, you can see a short description and jump directly into the documentation.

Oh, and if somebody from Oracle is reading this:
Please also add the license information for all the features. Thank you :-)

Playing around with the new site, I already found some features I did not know existed.
Or did you know there is an ASM Cloning feature starting with Oracle 18c and Flex Disk groups?

Have a nice weekend.

P.S.
Thanks to Thomas Teske from Oracle who showed me this feature!

Cet article New search function for Oracle database features est apparu en premier sur Blog dbi services.

PostgreSQL : barman rsync method vs streaming method


Barman is a tool to perform backup and recovery for PostgreSQL databases. It can do backups using two methods:
- rsync/ssh
- streaming
In this blog I am going to explain how to use these two methods to back up a PostgreSQL database. Generally it is a good practice to dedicate a server to barman instead of installing it on the database server. My environment is described below:
- PostgreSQL server: dbi-pg-essentials 192.168.22.101 Centos 7
- barman server: pgservertools 192.168.22.104 Oracle Linux 7
- PostgreSQL version: 11.1
- barman version: 2.6
The first step is to install barman on the barman server pgservertools.

[root@pgservertools network-scripts]# yum install  barman.noarch barman-cli.noarch
Loaded plugins: langpacks, ulninfo
Resolving Dependencies
--> Running transaction check
---> Package barman.noarch 0:2.6-1.rhel7 will be installed
---> Package barman-cli.noarch 0:1.3-1.rhel7.1 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package        Arch       Version             Repository                  Size
================================================================================
Installing:
 barman         noarch     2.6-1.rhel7         pgdg10-updates-testing     300 k
 barman-cli     noarch     1.3-1.rhel7.1       pgdg10-updates-testing      14 k

Transaction Summary
================================================================================
...
...
Installed:
  barman.noarch 0:2.6-1.rhel7         barman-cli.noarch 0:1.3-1.rhel7.1

Complete!
[root@pgservertools network-scripts]#

The installation will create a linux user named barman.
As the rsync method needs passwordless connections between the two servers for the barman user, we have to configure ssh keys.
On the server pgservertools (the barman server), let's create keys with the user barman and then copy the public key to the database server dbi-pg-essentials for the user postgres.

-bash-4.2$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/var/lib/barman/.ssh/id_rsa):
Created directory '/var/lib/barman/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /var/lib/barman/.ssh/id_rsa.
Your public key has been saved in /var/lib/barman/.ssh/id_rsa.pub.
The key fingerprint is:
f4:b7:6b:6e:38:25:ae:be:7f:9a:34:03:a1:1c:a0:ac barman@pgservertools
The key's randomart image is:
+--[ RSA 2048]----+
|    .            |
| . . .           |
|  o   . o        |
| .   . + o       |
|E     o S . .    |
|         .....   |
|         .++.    |
|         .+++.   |
|       .+++Bo    |
+-----------------+
-bash-4.2$ 


-bash-4.2$ cd .ssh/
-bash-4.2$ ls
id_rsa  id_rsa.pub


-bash-4.2$ ssh-copy-id postgres@dbi-pg-essentials
The authenticity of host 'dbi-pg-essentials (192.168.22.101)' can't be established.
ECDSA key fingerprint is 33:65:38:f4:eb:5b:f4:10:d3:36:7b:ea:5a:70:33:18.
Are you sure you want to continue connecting (yes/no)? yes
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
postgres@dbi-pg-essentials's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'postgres@dbi-pg-essentials'"
and check to make sure that only the key(s) you wanted were added.

If everything is fine, barman should be able to connect to the database server as the postgres linux user without a password:

-bash-4.2$ hostname
pgservertools.localdomain
-bash-4.2$ id
uid=994(barman) gid=992(barman) groups=992(barman) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
-bash-4.2$ ssh postgres@dbi-pg-essentials hostname
dbi-pg-essentials
-bash-4.2$

On the database server I also installed the package barman-cli.noarch, which will allow us to use the command barman-wal-archive. We will talk about this later.

[root@dbi-pg-essentials ~]# yum install barman-cli.noarch
Loaded plugins: fastestmirror
dbipgessentials                                          | 3.6 kB     00:00
edb-repos                                                | 2.4 kB     00:00
…
…
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : barman-cli-1.3-1.rhel7.1.noarch                              1/1
  Verifying  : barman-cli-1.3-1.rhel7.1.noarch                              1/1

Installed:
  barman-cli.noarch 0:1.3-1.rhel7.1

Complete!
[root@dbi-pg-essentials ~]# 

rsync backup

As specified earlier, in this case the backup is done using rsync. But there are several ways to ship WALs to the barman server, so before talking about the barman configuration, let's take a moment to look at WAL shipping.
1- With WAL archiving
To better understand, I put here this picture I got from the barman documentation. As we can see, the backup is done via rsync and the WALs are sent via barman-wal-archive, a utility that comes with barman 2.6.
Before barman 2.6, the rsync command was used to send WALs to barman.
In the documentation we can find that using barman-wal-archive instead of rsync/SSH reduces the risk of data corruption of the shipped WAL file on the Barman server.
rsync_1
The use of barman_wal_archive is done via the postgresql.conf file on the database server. It appears on the value of the parameter archive_command. Below values in my postgresql.conf file.

[postgres@dbi-pg-essentials PG1]$ grep -E  "archive_mode|wal_level|archive_command" postgresql.conf
wal_level = replica                    # minimal, replica, or logical
archive_mode = on               # enables archiving; off, on, or always
archive_command = 'barman-wal-archive 192.168.22.104 pgserver11 %p'             # command to use to archive a logfile segment

Before barman 2.6, we would use the following archive_command to send WALs to the barman server:

archive_command = 'rsync -a  %p  barman@pgservertools:/var/lib/barman/pgserver11/incoming/%f' # command to use to archive a logfile segment
[postgres@dbi-pg-essentials PG1]$
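The contract behind archive_command is simple: PostgreSQL invokes the command with %p replaced by the path of the finished WAL segment and only considers the segment archived (and recyclable) when the command exits with 0. The following self-contained sketch mocks this contract with a local cp instead of rsync/SSH or barman-wal-archive; the function name and the paths are purely illustrative:

```shell
# Mock of the archive_command contract (illustration only).
# A real setup would ship the segment with barman-wal-archive or
# rsync over SSH instead of a local cp.
archive_wal() {
    wal_path="$1"
    incoming_dir="$2"
    [ -f "$wal_path" ] || return 1          # refuse to "archive" a missing segment
    mkdir -p "$incoming_dir" || return 1
    cp "$wal_path" "$incoming_dir/" || return 1
    echo "archived $(basename "$wal_path")"  # exit 0 only after the copy succeeded
}

# Simulate PostgreSQL handing over a finished segment (content faked):
tmp=$(mktemp -d)
printf 'wal-bytes' > "$tmp/000000010000000000000001"
archive_wal "$tmp/000000010000000000000001" "$tmp/incoming"
# -> archived 000000010000000000000001
```

The important point is that the command must never return 0 unless the file is safely stored; otherwise PostgreSQL may recycle a segment that was in fact lost.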

2- With WAL archiving and WAL streaming
The following picture from the barman documentation will help to better understand:
(Figure: backup via rsync with both WAL archiving and WAL streaming)
To use WAL streaming to the barman server, pg_receivewal (pg_receivexlog up to PostgreSQL 10) must be installed on the barman server. Be careful about the version of pg_receivewal: in my case I installed version 11.1, as my PostgreSQL is 11.1.

[postgres@pgservertools bin]$ /usr/pgsql-11/bin/pg_receivewal -V
pg_receivewal (PostgreSQL) 11.1
[postgres@pgservertools bin]$

A streaming connection should also be configured, and the parameter streaming_archiver should be set to on.
To summarize, let's say that I want to configure barman with:
- the rsync backup method
- barman-wal-archive and WAL streaming
The barman configuration file /etc/barman.conf should then look like this:

[postgres@pgservertools bin]$ cat /etc/barman.conf | grep -v  ^\;

[barman]
barman_user = barman
configuration_files_directory = /etc/barman.d
barman_home = /var/lib/barman
log_file = /var/log/barman/barman.log
log_level = INFO
compression = gzip
retention_policy = REDUNDANCY 2

[pgserver11]
description = "Main DB Server"
ssh_command = ssh postgres@dbi-pg-essentials
streaming_conninfo=host=192.168.22.101 user=postgres
conninfo=host=192.168.22.101 user=postgres
backup_method = rsync
streaming_archiver = on
archiver = on
path_prefix=/usr/pgsql-11/bin/
[postgres@pgservertools bin]$
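The retention_policy above keeps the last two backups regardless of their age. As an alternative (purely illustrative here), barman also accepts a time-based policy, which it enforces during its maintenance runs:

```
; alternative, time-based retention (illustrative value)
retention_policy = RECOVERY WINDOW OF 7 DAYS
```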

And the postgresql.conf should contain the following entries:

[postgres@dbi-pg-essentials PG1]$ grep -E  "archive_mode|wal_level|archive_command" postgresql.conf
wal_level = replica                    # minimal, replica, or logical
archive_mode = on               # enables archiving; off, on, or always
archive_command = 'barman-wal-archive 192.168.22.104 pgserver11 %p'             # command to use to archive a logfile segment

The first thing is to test that the barman configuration is fine for the PostgreSQL database: the check command should not return any errors. Run it on the barman server as the barman user:

-bash-4.2$ barman check pgserver11
Server pgserver11:
        PostgreSQL: OK
        is_superuser: OK
        PostgreSQL streaming: OK
        wal_level: OK
        directories: OK
        retention policy settings: OK
        backup maximum age: OK (no last_backup_maximum_age provided)
        compression settings: OK
        failed backups: OK (there are 0 failed backups)
        minimum redundancy requirements: OK (have 0 backups, expected at least 0)
        ssh: OK (PostgreSQL server)
        not in recovery: OK
        archive_mode: OK
        archive_command: OK
        continuous archiving: OK
        pg_receivexlog: OK
        pg_receivexlog compatible: OK
        receive-wal running: OK
        archiver errors: OK
-bash-4.2$
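Several of the checks above (receive-wal running, processing of incoming WAL files) depend on barman's maintenance task being scheduled regularly. The barman package normally installs a cron entry similar to the following (user and path are the packaging defaults; adjust to your installation):

```
# /etc/cron.d/barman -- runs barman's maintenance task every minute
* * * * * barman [ -x /usr/bin/barman ] && /usr/bin/barman cron
```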

Now we can launch a backup using the backup command, on the barman server as the barman user:

-bash-4.2$ barman backup pgserver11
Starting backup using rsync-exclusive method for server pgserver11 in /var/lib/barman/pgserver11/base/20190215T153350
Backup start at LSN: 0/2E000060 (00000005000000000000002E, 00000060)
This is the first backup for server pgserver11
Starting backup copy via rsync/SSH for 20190215T153350
Copy done (time: 12 seconds)
This is the first backup for server pgserver11
WAL segments preceding the current backup have been found:
        00000005000000000000002D from server pgserver11 has been removed
Asking PostgreSQL server to finalize the backup.
Backup size: 74.1 MiB
Backup end at LSN: 0/2E000168 (00000005000000000000002E, 00000168)
Backup completed (start time: 2019-02-15 15:33:52.392144, elapsed time: 15 seconds)
Processing xlog segments from file archival for pgserver11
        00000005000000000000002E
        00000005000000000000002E.00000060.backup
-bash-4.2$

We can list the existing backups on the barman server as the barman user:

-bash-4.2$ barman list-backup pgserver11
pgserver11 20190215T153350 - Fri Feb 15 15:34:08 2019 - Size: 74.1 MiB - WAL Size: 0 B
-bash-4.2$

Streaming backup

Since version 2.0, barman supports streaming replication for backups. This method uses the native pg_basebackup.
1- Streaming-only backup
This picture from the barman documentation may help:
(Figure: streaming-only backup, WALs shipped via the streaming protocol)
As we can see, in this case backups are done via streaming, and the WALs also use the streaming protocol.
2- WAL archiving and WAL streaming
Once again, the following picture may help:
(Figure: streaming backup combined with WAL archiving and WAL streaming)
In this case we configure standard archiving as well, to implement a more robust architecture.

For example, to implement a barman configuration with streaming backups, WAL streaming and WAL archiving, /etc/barman.conf should look like this:

[postgres@pgservertools bin]$ cat /etc/barman.conf | grep -v  ^\;

[barman]
barman_user = barman
configuration_files_directory = /etc/barman.d
barman_home = /var/lib/barman
log_file = /var/log/barman/barman.log
log_level = INFO
compression = gzip
retention_policy = REDUNDANCY 2

[pgserver11]
description = "Main DB Server"
ssh_command = ssh postgres@dbi-pg-essentials
streaming_conninfo=host=192.168.22.101 user=postgres
conninfo=host=192.168.22.101 user=postgres
backup_method = postgres
streaming_archiver = on
archiver = on
slot_name=barman
path_prefix=/usr/pgsql-11/bin/

and the postgresql.conf:

[postgres@dbi-pg-essentials PG1]$ grep -E  "archive_mode|wal_level|archive_command" postgresql.conf
wal_level = replica                    # minimal, replica, or logical
archive_mode = on               # enables archiving; off, on, or always
archive_command = 'barman-wal-archive 192.168.22.104 pgserver11 %p'             # command to use to archive a logfile segment
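Because slot_name=barman is set, the physical replication slot must exist before the configuration checks can pass. barman can create it itself; a sketch of the command, to be run as the barman user on the barman server (the message printed varies by version, so it is omitted):

```
-bash-4.2$ barman receive-wal --create-slot pgserver11
```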

So the check should not return any errors:

-bash-4.2$ barman check pgserver11
Server pgserver11:
        PostgreSQL: OK
        is_superuser: OK
        PostgreSQL streaming: OK
        wal_level: OK
        replication slot: OK
        directories: OK
        retention policy settings: OK
        backup maximum age: OK (no last_backup_maximum_age provided)
        compression settings: OK
        failed backups: OK (there are 0 failed backups)
        minimum redundancy requirements: OK (have 1 backups, expected at least 0)
        pg_basebackup: OK
        pg_basebackup compatible: OK
        pg_basebackup supports tablespaces mapping: OK
        archive_mode: OK
        archive_command: OK
        continuous archiving: OK
        pg_receivexlog: OK
        pg_receivexlog compatible: OK
        receive-wal running: OK
        archiver errors: OK
-bash-4.2$

When we launch a backup, we can see that barman uses pg_basebackup:

-bash-4.2$ barman backup pgserver11
Starting backup using postgres method for server pgserver11 in /var/lib/barman/pgserver11/base/20190215T160757
Backup start at LSN: 0/2F0001A8 (00000005000000000000002F, 000001A8)
Starting backup copy via pg_basebackup for 20190215T160757
Copy done (time: 11 seconds)
Finalising the backup.
Backup size: 73.9 MiB
Backup end at LSN: 0/31000060 (000000050000000000000031, 00000060)
Backup completed (start time: 2019-02-15 16:07:57.919595, elapsed time: 11 seconds)
Processing xlog segments from file archival for pgserver11
        000000050000000000000030
        000000050000000000000031
Processing xlog segments from streaming for pgserver11
        000000050000000000000030
-bash-4.2$

The available backups are now:

-bash-4.2$ barman list-backup pgserver11
pgserver11 20190215T160757 - Fri Feb 15 16:08:09 2019 - Size: 73.9 MiB - WAL Size: 0 B
pgserver11 20190215T153350 - Fri Feb 15 15:34:08 2019 - Size: 74.1 MiB - WAL Size: 48.3 KiB

To restore with barman, we use the recover command. For example, the following command will restore the backup 20190215T160757 onto the server dbi-pg-essentials_3 in the directory /u02/pgdata/PGRESTORE:

-bash-4.2$ barman recover --remote-ssh-command "ssh postgres@dbi-pg-essentials_3" pgserver11 20190215T160757 /u02/pgdata/PGRESTORE
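The recover command also supports point-in-time recovery targets; for example, with a purely illustrative timestamp:

```
-bash-4.2$ barman recover --remote-ssh-command "ssh postgres@dbi-pg-essentials_3" \
      --target-time "2019-02-15 16:00:00" pgserver11 20190215T153350 /u02/pgdata/PGRESTORE
```

The backup chosen must of course predate the requested target time, which is why the earlier 20190215T153350 backup is used here.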

Conclusion

In this blog I have tried to explain the different scenarios for using barman: the rsync method and the streaming method for backups. A choice must be made before starting the setup. You can check the documentation for more information.

Ref: http://docs.pgbarman.org/

This article PostgreSQL : barman rsync method vs streaming method first appeared on the dbi services Blog.


DOAG day “database migration” in Mannheim at 19.02.2019


Yesterday I attended the DOAG conference in Mannheim about migrating Oracle databases.

The first presentation was about the challenges of migrating to multitenant databases. With Oracle 20 it will probably no longer be possible to create a non-CDB database or to upgrade from a non-CDB database, so in the next years all databases will have to be migrated to the multitenant architecture. Problems with licensing and different character sets were covered, and some migration methods to PDBs were shown.
In the second lecture, causes of database migration failures were shown that stemmed not from the database technology itself but from surrounding technologies and human errors.
In the third talk, a new migration method combining RMAN incremental backups with transportable tablespaces was shown; this is very interesting for the migration of big databases.
A migration method using the duplicate command with the noopen option was also presented.

Last but not least, an entertaining talk about migration projects was held; the lessons learned were presented as Haiku (a Japanese poem form).

This article DOAG day “database migration” in Mannheim at 19.02.2019 first appeared on the dbi services Blog.

Documentum – MigrationUtil – 3 – Change Server Config Name


In the previous blog I changed the Docbase Name to repository1 instead of RepoTemplate using the MigrationUtil; in this blog it is the Server Config Name's turn to be changed.

In general, the repository name and the server config name are the same, except in a High Availability setup.
You can find the Server Config Name in the server.ini file:

[dmadmin@vmtestdctm01 ~]$ cat $DOCUMENTUM/dba/config/repository1/server.ini
[SERVER_STARTUP]
docbase_id = 1000600
docbase_name = repository1
server_config_name = RepoTemplate
database_conn = DCTMDB
...

1. Migration preparation

To change the server config name to repository1, you first need to update the configuration file of the MigrationUtil, like below:

[dmadmin@vmtestdctm01 ~]$ cat $DM_HOME/install/external_apps/MigrationUtil/config.xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
<comment>Database connection details</comment>
<entry key="dbms">oracle</entry> <!-- This would be either sqlserver, oracle, db2 or postgres -->
<entry key="tgt_database_server">vmtestdctm01</entry> <!-- Database Server host or IP -->
<entry key="port_number">1521</entry> <!-- Database port number -->
<entry key="InstallOwnerPassword">install164</entry>
<entry key="isRCS">no</entry>    <!-- set it to yes, when running the utility on secondary CS -->

<!-- <comment>List of docbases in the machine</comment> -->
<entry key="DocbaseName.1">repository1</entry>

<!-- <comment>docbase owner password</comment> -->
<entry key="DocbasePassword.1">install164</entry>

...

<entry key="ChangeServerName">yes</entry>
<entry key="NewServerName.1">repository1</entry>

Set all other entries to no.
The tool will use the above information and load more from the server.ini file.

2. Execute the migration

Use the below script to execute the migration:

[dmadmin@vmtestdctm01 ~]$ cat $DM_HOME/install/external_apps/MigrationUtil/MigrationUtil.sh
#!/bin/sh
CLASSPATH=${CLASSPATH}:MigrationUtil.jar
export CLASSPATH
java -cp "${CLASSPATH}" MigrationUtil

Update it if you need to overload the CLASSPATH only during the migration.

2.a Stop the Docbase and the DocBroker

$DOCUMENTUM/dba/dm_shutdown_repository1
$DOCUMENTUM/dba/dm_stop_DocBroker

2.b Update the database name in the server.ini file
As during the Docbase Name change, this is a workaround to avoid the error below:

...
Database URL:jdbc:oracle:thin:@vmtestdctm01:1521/DCTMDB
ERROR...Listener refused the connection with the following error:
ORA-12514, TNS:listener does not currently know of service requested in connect descriptor

Check the tnsnames.ora and note the service name; in my case it is dctmdb.local.

[dmadmin@vmtestdctm01 ~]$ cat $ORACLE_HOME/network/admin/tnsnames.ora 
DCTMDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = vmtestdctm01)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = dctmdb.local)
    )
  )

Make the change in the server.ini file:

[dmadmin@vmtestdctm01 ~]$ vi $DOCUMENTUM/dba/config/repository1/server.ini
...
[SERVER_STARTUP]
docbase_id = 1000600
docbase_name = repository1
server_config_name = RepoTemplate
database_conn = dctmdb.local
...

2.c Execute the migration script

[dmadmin@vmtestdctm01 ~]$ $DM_HOME/install/external_apps/MigrationUtil/MigrationUtil.sh

Welcome... Migration Utility invoked.
 
Skipping Docbase ID Change...
Skipping Host Name Change...
Skipping Install Owner Change...

Created log File: /app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/ServerNameChange.log
Changing Server Name...
Database owner password is read from config.xml
Finished changing Server Name...

Skipping Docbase Name Change...
Skipping Docker Seamless Upgrade scenario...

Migration Utility completed.

All changes have been recorded in the log file:

[dmadmin@vmtestdctm01 ~]$ cat /app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/ServerNameChange.log
Start: 2019-02-02 19:55:52.531
Changing Server Name
=====================

DocbaseName: repository1
Retrieving server.ini path for docbase: repository1
Found path: /app/dctm/product/16.4/dba/config/repository1/server.ini
ServerName: RepoTemplate
New ServerName: repository1

Database Details:
Database Vendor:oracle
Database Name:dctmdb.local
Databse User:RepoTemplate
Database URL:jdbc:oracle:thin:@vmtestdctm01:1521/dctmdb.local
Successfully connected to database....

Validating Server name with existing servers...
select object_name from dm_sysobject_s where r_object_type = 'dm_server_config'

Processing Database Changes...
Created database backup File '/app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/ServerNameChange_DatabaseRestore.sql'
select r_object_id,object_name from dm_sysobject_s where r_object_type = 'dm_server_config' and object_name = 'RepoTemplate'
update dm_sysobject_s set object_name = 'repository1' where r_object_id = '3d0f449880000102'
select r_object_id,object_name from dm_sysobject_s where r_object_type = 'dm_jms_config' and object_name like '%repository1.RepoTemplate%'
update dm_sysobject_s set object_name = 'JMS vmtestdctm01:9080 for repository1.repository1' where r_object_id = '080f4498800010a9'
select r_object_id,object_name from dm_sysobject_s where r_object_type = 'dm_cont_transfer_config' and object_name like '%repository1.RepoTemplate%'
update dm_sysobject_s set object_name = 'ContTransferConfig_repository1.repository1' where r_object_id = '080f4498800004ba'
select r_object_id,target_server from dm_job_s where target_server like '%repository1.RepoTemplate%'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f4498800010d3'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f44988000035e'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f44988000035f'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000360'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000361'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000362'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000363'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000364'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000365'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000366'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000367'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000372'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000373'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000374'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000375'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000376'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000377'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000378'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000379'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f44988000037a'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f44988000037b'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000386'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000387'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000388'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000389'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000e42'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000cb1'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000d02'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000d04'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000d05'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f4498800003db'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f4498800003dc'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f4498800003dd'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f4498800003de'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f4498800003df'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f4498800003e0'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f4498800003e1'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f4498800003e2'
Successfully updated database values...

Processing File changes...
Backed up '/app/dctm/product/16.4/dba/config/repository1/server.ini' to '/app/dctm/product/16.4/dba/config/repository1/server.ini_server_RepoTemplate.backup'
Updated server.ini file:/app/dctm/product/16.4/dba/config/repository1/server.ini
Backed up '/app/dctm/product/16.4/wildfly9.0.1/server/DctmServer_MethodServer/deployments/acs.ear/lib/configs.jar/config/acs.properties' to '/app/dctm/product/16.4/wildfly9.0.1/server/DctmServer_MethodServer/deployments/acs.ear/lib/configs.jar/config/acs.properties_server_RepoTemplate.backup'
Updated acs.properties: /app/dctm/product/16.4/wildfly9.0.1/server/DctmServer_MethodServer/deployments/acs.ear/lib/configs.jar/config/acs.properties
Finished processing File changes...
Finished changing server name 'repository1'

Processing startup and shutdown scripts...
Backed up '/app/dctm/product/16.4/dba/dm_start_repository1' to '/app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/dm_start_repository1_server_RepoTemplate.backup'
Updated dm_startup script.
Backed up '/app/dctm/product/16.4/dba/dm_shutdown_repository1' to '/app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/dm_shutdown_repository1_server_RepoTemplate.backup'
Updated dm_shutdown script.

Finished changing server name....
End: 2019-02-02 19:55:54.687

2.d Reset the value of database_conn in the server.ini file

[dmadmin@vmtestdctm01 ~]$ vi $DOCUMENTUM/dba/config/repository1/server.ini
...
[SERVER_STARTUP]
docbase_id = 1000600
docbase_name = repository1
server_config_name = repository1
database_conn = DCTMDB
...

3. Check after update

Start the Docbroker and the Docbase:

$DOCUMENTUM/dba/dm_launch_DocBroker
$DOCUMENTUM/dba/dm_start_repository1

Check the log to be sure that the repository has been started correctly. Notice that the log name has been changed from RepoTemplate.log to repository1.log:

[dmadmin@vmtestdctm01 ~]$ tail -5 $DOCUMENTUM/dba/log/repository1.log
...
IsProcessAlive: Process ID 0 is not > 0
2019-02-02T20:00:09.807613	29293[29293]	0000000000000000	[DM_WORKFLOW_I_AGENT_START]info:  "Workflow agent worker (pid : 29345, session 010f44988000000b) is started sucessfully."
IsProcessAlive: Process ID 0 is not > 0
2019-02-02T20:00:10.809686	29293[29293]	0000000000000000	[DM_WORKFLOW_I_AGENT_START]info:  "Workflow agent worker (pid : 29362, session 010f44988000000c) is started sucessfully."

4. Is a manual rollback possible?

In fact, in the MigrationUtilLogs folder you can find the logs, backups of the start/stop scripts, and also the SQL file for a manual rollback:

[dmadmin@vmtestdctm01 ~]$ ls -rtl $DM_HOME/install/external_apps/MigrationUtil/MigrationUtilLogs
total 980
-rw-rw-r-- 1 dmadmin dmadmin   4323 Feb  2 19:55 ServerNameChange_DatabaseRestore.sql
-rwxrw-r-- 1 dmadmin dmadmin   2687 Feb  2 19:55 dm_start_repository1_server_RepoTemplate.backup
-rwxrw-r-- 1 dmadmin dmadmin   3623 Feb  2 19:55 dm_shutdown_repository1_server_RepoTemplate.backup
-rw-rw-r-- 1 dmadmin dmadmin   6901 Feb  2 19:55 ServerNameChange.log

Let's see the content of the SQL file:

[dmadmin@vmtestdctm01 ~]$ cat $DM_HOME/install/external_apps/MigrationUtil/MigrationUtilLogs/ServerNameChange_DatabaseRestore.sql
update dm_sysobject_s set object_name = 'RepoTemplate' where r_object_id = '3d0f449880000102';
update dm_sysobject_s set object_name = 'JMS vmtestdctm01:9080 for repository1.RepoTemplate' where r_object_id = '080f4498800010a9';
update dm_sysobject_s set object_name = 'ContTransferConfig_repository1.RepoTemplate' where r_object_id = '080f4498800004ba';
update dm_job_s set target_server = 'repository1.RepoTemplate@vmtestdctm01' where r_object_id = '080f4498800010d3';
update dm_job_s set target_server = 'repository1.RepoTemplate@vmtestdctm01' where r_object_id = '080f44988000035e';
update dm_job_s set target_server = 'repository1.RepoTemplate@vmtestdctm01' where r_object_id = '080f44988000035f';
update dm_job_s set target_server = 'repository1.RepoTemplate@vmtestdctm01' where r_object_id = '080f449880000360';
update dm_job_s set target_server = 'repository1.RepoTemplate@vmtestdctm01' where r_object_id = '080f449880000361';
update dm_job_s set target_server = 'repository1.RepoTemplate@vmtestdctm01' where r_object_id = '080f449880000362';
update dm_job_s set target_server = 'repository1.RepoTemplate@vmtestdctm01' where r_object_id = '080f449880000363';
...

I already noticed that a manual rollback is possible after the Docbase ID and Docbase Name changes, but I didn't test it… I would like to try this one.
So, to roll back:
Stop the Docbase and the Docbroker:

$DOCUMENTUM/dba/dm_shutdown_RepoTemplate
$DOCUMENTUM/dba/dm_stop_DocBroker

Execute the SQL:

[dmadmin@vmtestdctm01 ~]$ cd $DM_HOME/install/external_apps/MigrationUtil/MigrationUtilLogs
[dmadmin@vmtestdctm01 MigrationUtilLogs]$ sqlplus /nolog
SQL*Plus: Release 12.1.0.2.0 Production on Sun Feb 17 19:53:12 2019
Copyright (c) 1982, 2014, Oracle.  All rights reserved.

SQL> conn RepoTemplate@DCTMDB
Enter password: 
Connected.
SQL> @ServerNameChange_DatabaseRestore.sql
1 row updated.
1 row updated.
1 row updated.
...

The DB user is still RepoTemplate; it wasn't changed when I changed the docbase name.

Copy back the saved files; you can find the list of files updated and saved in the log:

cp /app/dctm/product/16.4/dba/config/repository1/server.ini_server_RepoTemplate.backup /app/dctm/product/16.4/dba/config/repository1/server.ini
cp /app/dctm/product/16.4/wildfly9.0.1/server/DctmServer_MethodServer/deployments/acs.ear/lib/configs.jar/config/acs.properties_server_RepoTemplate.backup /app/dctm/product/16.4/wildfly9.0.1/server/DctmServer_MethodServer/deployments/acs.ear/lib/configs.jar/config/acs.properties
cp /app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/dm_start_repository1_server_RepoTemplate.backup /app/dctm/product/16.4/dba/dm_start_repository1
cp /app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/dm_shutdown_repository1_server_RepoTemplate.backup /app/dctm/product/16.4/dba/dm_shutdown_repository1

Think about changing back the database connection in /app/dctm/product/16.4/dba/config/repository1/server.ini (see step 2.d).

Then start the DocBroker and the Docbase:

$DOCUMENTUM/dba/dm_launch_DocBroker
$DOCUMENTUM/dba/dm_start_repository1

Check the repository log:

[dmadmin@vmtestdctm01 ~]$ tail -5 $DOCUMENTUM/dba/log/RepoTemplate.log
...
2019-02-02T20:15:59.677595	19200[19200]	0000000000000000	[DM_WORKFLOW_I_AGENT_START]info:  "Workflow agent worker (pid : 19232, session 010f44988000000a) is started sucessfully."
IsProcessAlive: Process ID 0 is not > 0
2019-02-02T20:16:00.679566	19200[19200]	0000000000000000	[DM_WORKFLOW_I_AGENT_START]info:  "Workflow agent worker (pid : 19243, session 010f44988000000b) is started sucessfully."
IsProcessAlive: Process ID 0 is not > 0
2019-02-02T20:16:01.680888	19200[19200]	0000000000000000	[DM_WORKFLOW_I_AGENT_START]info:  "Workflow agent worker (pid : 19255, session 010f44988000000c) is started sucessfully."

Yes, the rollback works correctly! :D Despite this, I hope you will not have to do it on a production environment. ;)

This article Documentum – MigrationUtil – 3 – Change Server Config Name first appeared on the dbi services Blog.

Create a primary database using the backup of a standby database on 12cR2


The scope of this blog is to show how to create a primary role database based on a backup of a standby database on 12cR2.

Step 1: We assume that an auxiliary instance has been created and started in nomount mode.

rman target /
restore primary controlfile from 'backup_location_directory/control_.bkp';
exit;

Specifying “restore primary” modifies the flag in the controlfile, so that a primary role instance is mounted instead of a standby one.

Step 2: Once the instance is mounted, we restore the backup of the standby database.

run
{
catalog start with 'backup_location_directory';
restore database;
alter database flashback off;
recover database;
}

If you specified the recovery destination and size parameters in the pfile used to start the instance, it will try to enable flashback.
Enabling flashback before or during the recovery is not allowed, so we deactivate it for the moment.
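The two pfile parameters in question are the recovery area settings; the values below are purely illustrative:

```
# pfile entries that make the instance attempt to enable flashback (illustrative values)
db_recovery_file_dest      = '/u90/fast_recovery_area'
db_recovery_file_dest_size = 50G
```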

Step 3: The restore/recover completed successfully; we try to open the database, but get some errors:

alter database open;

ORA-03113: end-of-file on communication channel
Process ID: 2588
Session ID: 1705 Serial number: 5

Step 4: Fix the errors and try to open the database again:

--normal redo log groups
alter database clear unarchived logfile group YYY;

--standby redo log groups
alter database clear unarchived logfile group ZZZ;
alter database drop logfile group ZZZ;

This is not enough. Looking at the database alert log file, we can see:

LGWR: Primary database is in MAXIMUM AVAILABILITY mode 
LGWR: Destination LOG_ARCHIVE_DEST_2 is not serviced by LGWR 
LGWR: Destination LOG_ARCHIVE_DEST_1 is not serviced by LGWR 

Errors in file /<TRACE_DESTINATION>_lgwr_1827.trc: 
ORA-16072: a minimum of one standby database destination is required 
LGWR: terminating instance due to error 16072 
Instance terminated by LGWR, pid = 1827

Step 5: Complete the opening procedure:

alter system set log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST';
alter database set standby database to maximize performance;

SQL> select name,open_mode,protection_mode from v$database;

NAME      OPEN_MODE            PROTECTION_MODE
--------- -------------------- --------------------
NAME      MOUNTED              MAXIMUM PERFORMANCE

SQL> alter database flashback on;

Database altered.

SQL> alter database open;

Database altered.

SQL> select name,db_unique_name,database_role from v$database;

NAME      DB_UNIQUE_NAME                 DATABASE_ROLE
--------- ------------------------------ ----------------
NAME      NAME_UNIQUE                    PRIMARY

This article Create a primary database using the backup of a standby database on 12cR2 first appeared on the dbi services Blog.

Working with files on the filesystem in PostgreSQL


PostgreSQL comes with various helper functions that support you in working with files on the filesystem of the host PostgreSQL is running on. You might ask yourself why that is important, but there are use cases: maybe you want to list the contents of a directory because new files that showed up since the last check trigger something, or maybe you want to load a file into the database (which you also can, and even should, do using copy if it is text based and somehow well formatted, but that is not the scope of this post).

For listing files in a directory there is this one:

postgres=# select * from pg_ls_dir('.');
      pg_ls_dir       
----------------------
 pg_wal
 global
 pg_commit_ts
 pg_dynshmem
 pg_notify
 pg_serial
 pg_snapshots
 pg_subtrans
 pg_twophase
 pg_multixact
 base
 pg_replslot
 pg_tblspc
 pg_stat
 pg_stat_tmp
 pg_xact
 pg_logical
 PG_VERSION
 postgresql.conf
 postgresql.auto.conf
 pg_hba.conf
 pg_ident.conf
 pg_log
 postmaster.opts
 autoprewarm.blocks
 postmaster.pid
 current_logfiles
(27 rows)

By default the ‘.’ and ‘..’ entries are omitted, but you can control this:

postgres=# select * from pg_ls_dir('.',true,true);
      pg_ls_dir       
----------------------
 .
 ..
 pg_wal
 global
 pg_commit_ts
 pg_dynshmem
 pg_notify
 pg_serial
 pg_snapshots
 pg_subtrans
 pg_twophase
 pg_multixact
 base
 pg_replslot
 pg_tblspc
 pg_stat
 pg_stat_tmp
 pg_xact
 pg_logical
 PG_VERSION
 postgresql.conf
 postgresql.auto.conf
 pg_hba.conf
 pg_ident.conf
 pg_log
 postmaster.opts
 autoprewarm.blocks
 postmaster.pid
 current_logfiles
(29 rows)

There is no option to control sorting but of course you can add a where clause to do this:

postgres=# select * from pg_ls_dir('.',true,true) order by 1;
      pg_ls_dir       
----------------------
 .
 ..
 autoprewarm.blocks
 base
 current_logfiles
 global
 pg_commit_ts
 pg_dynshmem
 pg_hba.conf
 pg_ident.conf
 pg_log
 pg_logical
 pg_multixact
 pg_notify
 pg_replslot
 pg_serial
 pg_snapshots
 pg_stat
 pg_stat_tmp
 pg_subtrans
 pg_tblspc
 pg_twophase
 PG_VERSION
 pg_wal
 pg_xact
 postgresql.auto.conf
 postgresql.conf
 postmaster.opts
 postmaster.pid
(29 rows)
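As a cross-check from the OS side, the behaviour of pg_ls_dir can be approximated in plain Python. This is a sketch only — the function name and arguments below mirror pg_ls_dir(dirname, missing_ok, include_dot_dirs), but it runs with your shell user’s permissions, not those of the PostgreSQL OS user:

```python
import os

def ls_dir(path, missing_ok=False, include_dot_dirs=False):
    """Rough Python equivalent of pg_ls_dir(dirname, missing_ok, include_dot_dirs)."""
    try:
        entries = os.listdir(path)  # never includes '.' and '..'
    except FileNotFoundError:
        if missing_ok:
            return []  # pg_ls_dir returns an empty result instead of an error
        raise
    if include_dot_dirs:
        entries = ['.', '..'] + entries
    return entries

# Like the SQL example, sorting is up to the caller (ORDER BY 1):
print(sorted(ls_dir('.', include_dot_dirs=True)))
```

Note that Python sorts bytewise, while the ORDER BY above used the database collation — which is why PG_VERSION lands between pg_twophase and pg_wal in the SQL output but would sort before the lowercase names here.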

You could load the result into an array and then do whatever you want with it for further processing:

postgres=# \x
Expanded display is on.
postgres=# with dirs as (select pg_ls_dir('.'::text,true,true) dir order by 1)
                select array_agg(dir) from dirs;
-[ RECORD 1 ]----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
array_agg | {.,..,autoprewarm.blocks,base,current_logfiles,global,pg_commit_ts,pg_dynshmem,pg_hba.conf,pg_ident.conf,pg_log,pg_logical,pg_multixact,pg_notify,pg_replslot,pg_serial,pg_snapshots,pg_stat,pg_stat_tmp,pg_subtrans,pg_tblspc,pg_twophase,PG_VERSION,pg_wal,pg_xact,postgresql.auto.conf,postgresql.conf,postmaster.opts,postmaster.pid}

When you try to list the files of a directory you do not have permission on, that of course fails:

postgres=# select pg_ls_dir('/root');
ERROR:  could not open directory "/root": Permission denied

All other directories the PostgreSQL operating system user has access to can be listed:

postgres=# \x
Expanded display is off.
postgres=# select pg_ls_dir('/var/tmp');
                                pg_ls_dir                                
-------------------------------------------------------------------------
 yum-postgres-uSpYMT
 systemd-private-f706224b798a404a8b1b7efbbb7137c9-chronyd.service-saK1Py
 systemd-private-bcd40d1946c94f1fbcb73d1047ee2fc2-chronyd.service-Fr7WgV
 systemd-private-798725e073664df6bbc5c6041151ef61-chronyd.service-kRvvJa
(4 rows)

When you need to get some statistics about a file there is pg_stat_file:

postgres=# select pg_stat_file('postgresql.conf');
                                     pg_stat_file                                      
---------------------------------------------------------------------------------------
 (26343,"2019-02-21 17:35:22+01","2019-02-05 15:41:11+01","2019-02-05 15:41:11+01",,f)
(1 row)
postgres=# select pg_size_pretty((pg_stat_file('postgresql.conf')).size);
 pg_size_pretty 
----------------
 26 kB
(1 row)
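pg_stat_file is essentially a stat() call surfaced in SQL. Here is a hedged Python sketch of the same information — the field names are mine, chosen to mirror the SQL output, and the pretty-size helper only imitates pg_size_pretty, whose exact rounding rules may differ in edge cases:

```python
import os
from datetime import datetime, timezone

def stat_file(path):
    """Rough equivalent of pg_stat_file: size, timestamps, is-directory flag."""
    st = os.stat(path)
    return {
        'size': st.st_size,
        'access': datetime.fromtimestamp(st.st_atime, tz=timezone.utc),
        'modification': datetime.fromtimestamp(st.st_mtime, tz=timezone.utc),
        'isdir': os.path.isdir(path),
    }

def size_pretty(n):
    """Small imitation of pg_size_pretty for byte counts."""
    for unit in ('bytes', 'kB', 'MB', 'GB'):
        if n < 10240 or unit == 'GB':
            return f"{round(n)} {unit}"
        n /= 1024

# A 26343-byte file reports "26 kB", like the SQL example above.
print(size_pretty(26343))
```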

Loading a file into the database is possible as well:

postgres=# create table t1 ( a text );
CREATE TABLE
postgres=# insert into t1 select pg_read_file('postgresql.conf');
INSERT 0 1
postgres=# select * from t1;
                                                        a                                                        
-----------------------------------------------------------------------------------------------------------------
 # -----------------------------                                                                                +
 # PostgreSQL configuration file                                                                                +
 # -----------------------------                                                                                +
 #                                                                                                              +
 # This file consists of lines of the form:                                                                     +
 #                                                                                                              +
 #   name = value                                                                                               +
...

This works even with binary files (but do you really want to have binary files in the database?):

postgres=# create table t2 ( a bytea );
CREATE TABLE
postgres=# insert into t2 select pg_read_binary_file('/bin/cp');
INSERT 0 1
postgres=# select * from t2;
                                                                                                                                                                                                                   
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 \x7f454c4602010100000000000000000002003e0001000000293e4000000000004000000000000000c0560200000000000000000040003800090040001f001e000600000005000000400000000000000040004000000000004000400000000000f801000000000000
(1 row)
postgres=# drop table t1,t2;
DROP TABLE
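The split between pg_read_file and pg_read_binary_file mirrors the usual text/binary distinction: the text variant must yield valid text in the server encoding and errors out otherwise, while the binary variant returns raw bytes. A Python sketch of that distinction (illustrative only — these helpers are not PostgreSQL APIs):

```python
def read_file(path, encoding='utf-8'):
    """Like pg_read_file: decodes the content, fails on invalid text."""
    with open(path, 'rb') as fh:
        return fh.read().decode(encoding)

def read_binary_file(path):
    """Like pg_read_binary_file: returns raw bytes, never fails on encoding."""
    with open(path, 'rb') as fh:
        return fh.read()
```

Feeding a binary such as /bin/cp to the text variant raises a decode error, just as pg_read_file would reject content that is not valid in the server encoding.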

As usual this is all very well documented in the PostgreSQL documentation.

The article Working with files on the filesystem in PostgreSQL first appeared on Blog dbi services.

MySQL 8 and Security – Encryption of binary logs


As I discussed in some of my recent talks at conferences (at the DOAG for example), MySQL 8 came out with new features which bring lots of improvements in terms of security.

“At-Rest” encryption has been available for several releases now:
– InnoDB Tablespace Encryption: since 5.7.11
– Redo and Undo Log Data Encryption: since 8.0.1
Now, starting with version 8.0.14, you can also encrypt binary and relay log files. In this blog post we will see how to configure that and we will do some tests.

Case 1: Binary log files are not encrypted

Binary log file encryption is disabled by default:

mysql> show variables like 'binlog_encryption';
+-------------------+-------+
| Variable_name     | Value |
+-------------------+-------+
| binlog_encryption | OFF   |
+-------------------+-------+
1 row in set (0.02 sec)

With this configuration, we can extract sensitive information with simple OS commands, without connecting to the database. This means that if an OS user account is compromised, we could face serious security issues.
First of all, I create a database and a table, and insert some sensitive information into it:

mysql> create database cards;
Query OK, 1 row affected (0.01 sec)
mysql> use cards;
Database changed
mysql> CREATE TABLE cards.banking_card (id int (128), day int(2), month int(2), year int(4), type varchar(128), code varchar(128));
Query OK, 0 rows affected (0.04 sec)
mysql> INSERT INTO cards.banking_card VALUES (1, 8, 3, 1984, 'secret code', '01-234-5678');
Query OK, 1 row affected (0.01 sec)

I check which binary log file is currently in use:

mysql> SHOW BINARY LOGS;
+--------------------+-----------+-----------+
| Log_name           | File_size | Encrypted |
+--------------------+-----------+-----------+
| mysqld8-bin.000001 |      1384 | No        |
| mysqld8-bin.000002 |       178 | No        |
| mysqld8-bin.000003 |       974 | No        |
+--------------------+-----------+-----------+
mysql> SHOW BINLOG EVENTS IN 'mysqld8-bin.000003';
+--------------------+-----+----------------+-----------+-------------+---------------------------------------------------------------------------------------------------------------------------------------------------------+
| Log_name           | Pos | Event_type     | Server_id | End_log_pos | Info                                                                                                                                                    |
+--------------------+-----+----------------+-----------+-------------+---------------------------------------------------------------------------------------------------------------------------------------------------------+
| mysqld8-bin.000003 |   4 | Format_desc    |         8 |         124 | Server ver: 8.0.15, Binlog ver: 4                                                                                                                       |
| mysqld8-bin.000003 | 124 | Previous_gtids |         8 |         155 |                                                                                                                                                         |
| mysqld8-bin.000003 | 155 | Anonymous_Gtid |         8 |         232 | SET @@SESSION.GTID_NEXT= 'ANONYMOUS'                                                                                                                    |
| mysqld8-bin.000003 | 232 | Query          |         8 |         341 | create database cards /* xid=17 */                                                                                                                      |
| mysqld8-bin.000003 | 341 | Anonymous_Gtid |         8 |         420 | SET @@SESSION.GTID_NEXT= 'ANONYMOUS'                                                                                                                    |
| mysqld8-bin.000003 | 420 | Query          |         8 |         635 | use `cards`; CREATE TABLE cards.banking_card (id int (128), day int(2), month int(2), year int(4), type varchar(128), code varchar(128)) /* xid=23 */ |
| mysqld8-bin.000003 | 635 | Anonymous_Gtid |         8 |         714 | SET @@SESSION.GTID_NEXT= 'ANONYMOUS'                                                                                                                    |
| mysqld8-bin.000003 | 714 | Query          |         8 |         790 | BEGIN                                                                                                                                                   |
| mysqld8-bin.000003 | 790 | Table_map      |         8 |         865 | table_id: 66 (cards.banking_card)                                                                                                                     |
| mysqld8-bin.000003 | 865 | Write_rows     |         8 |         943 | table_id: 66 flags: STMT_END_F                                                                                                                          |
| mysqld8-bin.000003 | 943 | Xid            |         8 |         974 | COMMIT /* xid=24 */                                                                                                                                     |
+--------------------+-----+----------------+-----------+-------------+---------------------------------------------------------------------------------------------------------------------------------------------------------+
11 rows in set (0.00 sec)

With the two following commands, for example (od to dump files in octal and other formats, and xxd to make a hex dump), I can easily read the content of my binary log file:

# od -c mysqld8-bin.000003
0001440         a 003           B           001
0001460   005   c   a   r   d   s   016   b   a   n   k   i   n   g
0001500   _   c   a   r   d   ?     006 003 003 003 003 017 017 004
0001520   002   002   ? 001 001   002 003 374 377   005 224 022
0001540 332 375   ~   n   \ 036  \b         N       257 003
0001560           B           001   002   006 377
0001600   001        \b       003       300  \a  
0001620    \v     s   e   c   r   e   t       c   o   d   e  \v  
0001640   0   1   -   2   3   4   -   5   6   7   8   9   N   _ 312 375

# xxd mysqld8-bin.000003
00001f0: ff00 1300 6361 7264 7300 4352 4541 5445  ....cards.CREATE
0000200: 2054 4142 4c45 2063 6172 6473 2e63 6172   TABLE cards.ban
0000210: 645f 656e 6372 7970 7465 6420 2869 6420  king_card    (id
0000220: 696e 7420 2831 3238 292c 2064 6179 2069  int (128), day i
0000230: 6e74 2832 292c 206d 6f6e 7468 2069 6e74  nt(2), month int
0000240: 2832 292c 2079 6561 7220 696e 7428 3429  (2), year int(4)
0000250: 2c20 7479 7065 2076 6172 6368 6172 2831  , type varchar(1
0000260: 3238 292c 2063 6f64 6520 7661 7263 6861  28), code varcha
0000270: 7228 3132 3829 29f5 771f aafd 7e6e 5c22  r(128)).w...~n\"
0000280: 0800 0000 4f00 0000 ca02 0000 0000 0000  ....O...........
0000290: 0000 0000 0000 0000 0000 0000 0000 0000  ................
00002a0: 0000 0000 0000 0002 0200 0000 0000 0000  ................
00002b0: 0300 0000 0000 0000 e692 3509 6582 05fc  ..........5.e...
00002c0: 5301 8f38 0100 9993 56a3 fd7e 6e5c 0208  S..8....V..~n\..
00002d0: 0000 004c 0000 0016 0300 0008 000b 0000  ...L............
00002e0: 0000 0000 0005 0000 1d00 0000 0000 0001  ................
00002f0: 2000 a045 0000 0000 0603 7374 6404 ff00   ..E......std...
0000300: ff00 ff00 12ff 0063 6172 6473 0042 4547  .......cards.BEG
0000310: 494e 0567 d2c2 fd7e 6e5c 1308 0000 004b  IN.g...~n\.....K
0000320: 0000 0061 0300 0000 0042 0000 0000 0001  ...a.....B......
0000330: 0005 6361 7264 7300 0e63 6172 645f 656e  ..cards..banking
0000340: 6372 7970 7465 6400 0603 0303 030f 0f04  _card...........
0000350: 0002 0002 3f01 0100 0203 fcff 0005 9412  ....?...........
0000360: dafd 7e6e 5c1e 0800 0000 4e00 0000 af03  ..~n\.....N.....
0000370: 0000 0000 4200 0000 0000 0100 0200 06ff  ....B...........
0000380: 0001 0000 0008 0000 0003 0000 00c0 0700  ................
0000390: 000b 0073 6563 7265 7420 636f 6465 0b00  ...secret code..
00003a0: 3031 2d32 3334 2d35 3637 3839 4e5f cafd  01-234-56789N_..
00003b0: 7e6e 5c10 0800 0000 1f00 0000 ce03 0000  ~n\.............
00003c0: 0000 1800 0000 0000 0000 f9d0 d057       .............W

Yes, that’s not good news.
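The manual od/xxd inspection can be automated: a few lines that pull printable ASCII runs out of a file — the idea behind the Unix strings utility — are all it takes to harvest secrets from an unencrypted binary log. A minimal sketch:

```python
import re

def extract_strings(data: bytes, min_len: int = 6) -> list:
    """Return runs of at least min_len printable ASCII characters, like strings(1)."""
    return [m.group().decode('ascii')
            for m in re.finditer(rb'[ -~]{%d,}' % min_len, data)]

# On an unencrypted binlog this surfaces things like 'secret code'
# and '01-234-5678' without any database access.
```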

Case 2: Binary log files are encrypted

Now let’s try to activate encryption through the following steps:
1) Activate a keyring plugin for the master key management (for the Community Edition, it’s called keyring_file and it stores keyring data in a local file):

# mysqld_multi stop 8
mysqld_multi log file version 2.16; run: Thu Feb 21 10:54:19 2019
Stopping MySQL servers
# mysqld_multi --defaults-file=/u01/app/mysql/admin/mysqld8/etc/my.cnf start 8 > /dev/null 2>&1
# ps -eaf|grep mysqld
mysql     3362     1 13 10:58 pts/0    00:00:00 /u01/app/mysql/product/mysql-8.0.15/bin/mysqld --port=33008 --socket=/u01/app/mysql/admin/mysqld8/socket/mysqld8.sock --pid-file=/u01/app/mysql/admin/mysqld8/socket/mysqld8.pid --log-error=/u01/app/mysql/admin/mysqld8/log/mysqld8.err --datadir=/u02/mysqldata/mysqld8/ --basedir=/u01/app/mysql/product/mysql-8.0.15/ --slow_query_log=0 --slow_query_log_file=/u01/app/mysql/admin/mysqld8/log/mysqld8-slow-query.log --log-bin=/u01/app/mysql/admin/mysqld8/binlog/mysqld8-bin --innodb_flush_log_at_trx_commit=1 --sync_binlog=1 --local-infile=0 --general_log=0 --general_log_file=/u01/app/mysql/admin/mysqld8/log/mysqld8.log --lc_messages_dir=/u01/app/mysql/product/mysql-8.0.15/share/ --lc_messages=en_US --server-id=8 --log_timestamps=SYSTEM --early-plugin-load=keyring_file.so

The /u01/app/mysql/admin/mysqld8/etc/my.cnf file is defined as follows:

[mysqld8]
port                           = 33008
mysqladmin                     = /u01/app/mysql/product/mysql-8.0.15/bin/mysqladmin
...
server-id                      = 8
early-plugin-load              = keyring_file.so

2) Turn on the binlog_encryption variable:

mysql> SET PERSIST binlog_encryption=ON;
Query OK, 0 rows affected (0.03 sec)

At this point, encryption is enabled:

mysql> flush logs;
Query OK, 0 rows affected (0.05 sec)
mysql> SHOW BINARY LOGS;
+--------------------+-----------+-----------+
| Log_name           | File_size | Encrypted |
+--------------------+-----------+-----------+
| mysqld8-bin.000001 |      1384 | No        |
| mysqld8-bin.000002 |       178 | No        |
| mysqld8-bin.000003 |      1023 | No        |
| mysqld8-bin.000004 |       716 | Yes       |
| mysqld8-bin.000005 |       667 | Yes       |
+--------------------+-----------+-----------+

I insert again some data into my table:

mysql> use cards;
Database changed
mysql> INSERT INTO cards.banking_card VALUES (2, 5, 9, 1986, 'secret code', '01-234-5678');
Query OK, 1 row affected (0.02 sec)

I can try to extract some information from the binary log files on disk:

# xxd mysqld8-bin.000005
...
0000250: 2796 8d0c 9171 7109 df65 2434 9d0e 4f40  '....qq..e$4..O@
0000260: e024 07e8 9db7 ae84 f0d5 5728 90d4 905f  .$........W(..._
0000270: 9cc4 6c33 d4e1 5839 aa1f 97bb af04 b24d  ..l3..X9.......M
0000280: e36d dd05 3d0c f9d8 fbee 2379 2b85 2744  .m..=.....#y+.'D
0000290: efe4 29cb 3eff 03b8 b934 ec6b 4e9c 9189  ..).>....4.kN...
00002a0: d14b 402c 3d80 effe c34d 0d27 3be7 b427  .K@,=....M.';..'
00002b0: 5389 3208 b199 7da6 acf6 d98a 7ac3 299c  S.2...}.....z.).
00002c0: 3de0 5e12 3ed6 5849 f907 3d2c da66 f1a1  =.^.>.XI..=,.f..
00002d0: 7556 c62b b88f a3da 1a47 230b aae8 c63c  uV.+.....G#....<
00002e0: 6751 4f31 2d14 66e9 5a17 a980 4d37 2067  gQO1-.f.Z...M7 g
00002f0: 034c e0d7 b8ad 8cb4 b6d0 16e9 f6a5 3f90  .L............?.
0000300: 95aa 008e 79e1 7fda d74e ada2 f602 cc3b  ....y....N.....;
0000310: 1b61 c657 b656 3840 712d 2bb3 61b9 3c44  .a.W.V8@q-+.a..
0000390: 2a6b e68f e14c 6b3d b6ac e4cf 4f75 a828  *k...Lk=....Ou.(
00003a0: 0e21 24ad 27c7 e970 37a2 c883 46b0 ff26  .!$.'..p7...F..&
00003b0: 7c2a cf9f 9845 e4ca c067 f763 cd80 b1b3  |*...E...g.c....
00003c0: 74b8 6066 b1c0 634e fabc 9312 d0c4 ed8d  t.`f..cN........
00003d0: 880d 41b7 a1d4 3c59 bea3 63e7 ab61 11b7  ..A...<Y..c..a..
00003e0: 9f40 4555 f469 38b8 1add 1336 f03d       .@EU.i8....6.=

Well done, no clear-text data is readable anymore!
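As a side note, you can tell encrypted from plain binary log files by their header alone: plain files start with the magic bytes 0xFE 62 69 6E (“þbin”), while encrypted ones, to my knowledge, start with 0xFD 62 69 6E — check the documentation of your exact version before relying on this. A quick check in Python:

```python
PLAIN_MAGIC = b'\xfebin'      # 0xFE 0x62 0x69 0x6E: unencrypted binary log
ENCRYPTED_MAGIC = b'\xfdbin'  # 0xFD 0x62 0x69 0x6E: encrypted binary log

def binlog_kind(path):
    """Classify a binary log file by its 4-byte header magic."""
    with open(path, 'rb') as fh:
        magic = fh.read(4)
    if magic == PLAIN_MAGIC:
        return 'plain'
    if magic == ENCRYPTED_MAGIC:
        return 'encrypted'
    return 'unknown'
```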
One additional note: when encryption is turned on, to display the content of the binary log files with mysqlbinlog, we have to use the --read-from-remote-server (-R) option, otherwise mysqlbinlog has no access to them:

# cd /u01/app/mysql/admin/mysqld8/binlog
# mysqlbinlog mysqld8-bin.000005
/*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=1*/;
/*!50003 SET @OLD_COMPLETION_TYPE=@@COMPLETION_TYPE,COMPLETION_TYPE=0*/;
DELIMITER /*!*/;
ERROR: Reading encrypted log files directly is not supported.
SET @@SESSION.GTID_NEXT= 'AUTOMATIC' /* added by mysqlbinlog */ /*!*/;
DELIMITER ;
#End of log file
/*!50003 SET COMPLETION_TYPE=@OLD_COMPLETION_TYPE*/;
/*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=0*/;

# mysqlbinlog -R --protocol TCP --host 192.168.25.2 --port 33008 --user root -p mysqld8-bin.000005
Enter password:
/*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=1*/;
/*!50003 SET @OLD_COMPLETION_TYPE=@@COMPLETION_TYPE,COMPLETION_TYPE=0*/;
DELIMITER /*!*/;
#at 4
#190221 12:01:55 server id 8  end_log_pos 124 CRC32 0x92413637  Start: binlog v 4, server v 8.0.15 created 190221 12:01:55
BINLOG '
I4VuXA8IAAAAeAAAAHwAAAAAAAQAOC4wLjE1AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAEwANAAgAAAAABAAEAAAAYAAEGggAAAAICAgCAAAACgoKKioAEjQA
CgE3NkGS
'/*!*/;
#at 124
#190221 12:01:55 server id 8  end_log_pos 155 CRC32 0x7d47d67e  Previous-GTIDs
#[empty]
#at 155
...

Stay tuned with MySQL 8! ;)

The article MySQL 8 and Security – Encryption of binary logs first appeared on Blog dbi services.

Documentum – Documents not transferred to WebConsumer


Receiving an incident is not always a pleasure, but sharing the solution always is!
A few days ago, I received an incident regarding WebConsumer on a production environment, saying that documents are not transferred as expected to WebConsumer.

The issue didn’t happen for all documents, which is why I directly suspected the High Availability configuration of this environment. Moreover, I knew that the IDS is installed only on CS1 (as designed). So I checked the JMS logs:
CS1 : No errors found there.

CS2 : Errors found :

2019-02-11 04:05:39,097 UTC INFO  [stdout] (default task-60) [DEBUG] - c.e.d.a.c.m.lifecycle.D2LifecycleConfig       : D2LifecycleConfig::applyMethod start 'WCPublishDocumentMethod'
2019-02-11 04:05:39,141 UTC INFO  [stdout] (default task-60) [DEBUG] - c.e.d.a.c.m.lifecycle.D2LifecycleConfig       : D2LifecycleConfig::applyMethod before session apply 'WCPublishDocumentMethod' time: 0.044s
2019-02-11 04:05:39,773 UTC INFO  [stdout] (default task-89) 2019-02-11 04:05:39,773 UTC ERROR [com.domain.repository1.dctm.methods.WCPublishDoc] (default task-89) DfException:: THREAD: default task-89; 
MSG: [DM_METHOD_E_JMS_APP_SERVER_NAME_NOTFOUND]error:  "The app_server_name/servlet_name 'WebCache' is not specified in dm_server_config/dm_jms_config."; ERRORCODE: 100; NEXT: null

To cross check:

On CS1:

[dmadmin@CONTENT_SERVER1 ~]$ cd $DOCUMENTUM/shared/wildfly9.0.1/server/DctmServer_MethodServer/log
[dmadmin@CONTENT_SERVER1 log]$ grep DM_METHOD_E_JMS_APP_SERVER_NAME_NOTFOUND server.log | wc -l
0

On CS2:

[dmadmin@CONTENT_SERVER2 ~]$ cd $DOCUMENTUM/shared/wildfly9.0.1/server/DctmServer_MethodServer/log
[dmadmin@CONTENT_SERVER2 log]$ grep DM_METHOD_E_JMS_APP_SERVER_NAME_NOTFOUND server.log | wc -l
60

So I checked the app servers list configured in the dm_server_config:

On CS1:

API> retrieve,c,dm_server_config
...
3d01e24080000102
API> dump,c,3d01e24080000102
...
USER ATTRIBUTES

  object_name                     : repository1
...
  app_server_name              [0]: do_method
                               [1]: do_mail
                               [2]: FULLTEXT_SERVER1_PORT_IndexAgent
                               [3]: WebCache
                               [4]: FULLTEXT_SERVER2_PORT_IndexAgent
  app_server_uri               [0]: https://CONTENT_SERVER1:9082/DmMethods/servlet/DoMethod
                               [1]: https://CONTENT_SERVER1:9082/DmMail/servlet/DoMail
                               [2]: https://FULLTEXT_SERVER1:PORT/IndexAgent/servlet/IndexAgent
                               [3]: https://CONTENT_SERVER1:6679/services/scs/publish
                               [4]: https://FULLTEXT_SERVER2:PORT/IndexAgent/servlet/IndexAgent
...

Good, WebCache is configured here.

On CS2:

API> retrieve,c,dm_server_config
...
3d01e24080000255
API> dump,c,3d01e24080000255
...
USER ATTRIBUTES

  object_name                     : repository1
...
  app_server_name              [0]: do_method
                               [1]: do_mail
                               [2]: FULLTEXT_SERVER1_PORT_IndexAgent
                               [3]: FULLTEXT_SERVER2_PORT_IndexAgent
  app_server_uri               [0]: https://CONTENT_SERVER1:9082/DmMethods/servlet/DoMethod
                               [1]: https://CONTENT_SERVER1:9082/DmMail/servlet/DoMail
                               [2]: https://FULLTEXT_SERVER1:PORT/IndexAgent/servlet/IndexAgent
                               [3]: https://FULLTEXT_SERVER2:PORT/IndexAgent/servlet/IndexAgent
...

Ok! The root cause of this error is clear now.

The concerned method is WCPublishDocumentMethod, but when is it applied, and by whom?

I noticed that in the log above:

D2LifecycleConfig::applyMethod start 'WCPublishDocumentMethod'

So, WCPublishDocumentMethod is applied by the D2LifecycleConfig, but again: when, and by whom?
I searched in the server.log file and found:

2019-02-11 04:05:04,490 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : User  : repository1
2019-02-11 04:05:04,490 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : New session manager creation.
2019-02-11 04:05:04,491 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : Session manager set identity.
2019-02-11 04:05:04,491 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : Session manager get session.
2019-02-11 04:05:06,006 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : Workitem ID: 4a01e2408002bd3d
2019-02-11 04:05:06,023 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : Searching workflow tracker...
2019-02-11 04:05:06,031 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : Searching workflow config...
2019-02-11 04:05:06,032 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : Get packaged documents...
2019-02-11 04:05:06,067 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : Apply on masters...
2019-02-11 04:05:06,068 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : Workitem acquire...
2019-02-11 04:05:06,098 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : Applying lifecycle (Target state : On Approved / Transition :promote
2019-02-11 04:05:06,098 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : No workflow properties
2019-02-11 04:05:06,098 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : Searching target state name and/or transition type.
2019-02-11 04:05:06,099 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : Target state name :On Approved
2019-02-11 04:05:06,099 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : Target transition type :promote
2019-02-11 04:05:06,099 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : Performing D2 lifecycle on :FRM-8003970 (0901e240800311cd)
2019-02-11 04:05:06,099 UTC INFO  [stdout] (default task-79) [INFO ] - com.emc.d2.api.methods.D2WFLifeCycleMethod    : Searching associated D2 lifecycle...
2019-02-11 04:05:06,099 UTC INFO  [stdout] (default task-79) [DEBUG] - c.e.d.a.c.m.lifecycle.D2LifecycleConfig       : D2LifecycleConfig::getInstancesForObject start time 0.000s
...
2019-02-11 04:05:39,097 UTC INFO  [stdout] (default task-60) [DEBUG] - c.e.d.a.c.m.lifecycle.D2LifecycleConfig       : D2LifecycleConfig::applyMethod start 'WCPublishDocumentMethod'
...

Hummmm, the D2WFLifeCycleMethod is applied by the job D2JobLifecycleBatch. I checked the target server of this job:

1> SELECT target_server FROM dm_job WHERE object_name='D2JobLifecycleBatch';
2> go
target_server                                                                                                                                                                               
-------------
 
(1 row affected)

As I suspected, no target server is defined! That means the job can be executed on “Any Running Server”, which is why this method was executed on CS2… while CS2 is not configured to do so.

Now, two solutions are possible:
1. Change the target_server to use only CS1 (idql):

UPDATE dm_job OBJECTS SET target_server='repository1.repository1@CONTENT_SERVER1' WHERE object_name='D2JobLifecycleBatch';

2. Add the app server WebCache to CS2, pointing to CS1 (iapi):

API> fetch,c,dm_server_config
API> append,c,l,app_server_name
WebCache
API> append,c,l,app_server_uri
https://CONTENT_SERVER1:6679/services/scs/publish
API> save,c,l

Check after update:
API> retrieve,c,dm_server_config
...
3d01e24080000255
API> dump,c,3d01e24080000255
...
USER ATTRIBUTES

  object_name                     : repository1
...
  app_server_name              [0]: do_method
                               [1]: do_mail
                               [2]: FULLTEXT_SERVER1_PORT_IndexAgent
                               [3]: FULLTEXT_SERVER2_PORT_IndexAgent
                               [4]: WebCache
  app_server_uri               [0]: https://CONTENT_SERVER1:9082/DmMethods/servlet/DoMethod
                               [1]: https://CONTENT_SERVER1:9082/DmMail/servlet/DoMail
                               [2]: https://FULLTEXT_SERVER1:PORT/IndexAgent/servlet/IndexAgent
                               [3]: https://FULLTEXT_SERVER2:PORT/IndexAgent/servlet/IndexAgent
                               [4]: https://CONTENT_SERVER1:6679/services/scs/publish
...

We chose the second option, because:
– The job is handled by the application team,
– Modifying the job to run only on CS1 would resolve this case, but if the method is applied by another job or manually on CS2, we would get the same error again.

After this update, no errors have been recorded in the log file:

...
2019-02-12 04:06:10,948 UTC INFO  [stdout] (default task-81) [DEBUG] - c.e.d.a.c.m.lifecycle.D2LifecycleConfig       : D2LifecycleConfig::applyMethod start 'WCPublishDocumentMethod'
2019-02-12 04:06:10,955 UTC INFO  [stdout] (default task-81) [DEBUG] - c.e.d.a.c.m.lifecycle.D2LifecycleConfig       : D2LifecycleConfig::applyMethod before session apply 'WCPublishDocumentMethod' time: 0.007s
2019-02-12 04:06:10,955 UTC INFO  [stdout] (default task-81) [DEBUG] - com.emc.d2.api.methods.D2Method               : No ARG_RETURN_ID in mapArguments
2019-02-12 04:06:10,956 UTC INFO  [stdout] (default task-81) [DEBUG] - com.emc.d2.api.methods.D2Method               : newObject created, user session used: 0801e2408023f714
2019-02-12 04:06:10,956 UTC INFO  [stdout] (default task-81) [DEBUG] - com.emc.d2.api.D2SysObject                    : getFolderIdFromCache: got folder: /System/D2/Data/c6_method_return, object id: 0b01e2408000256b, docbase: repository1
2019-02-12 04:06:11,016 UTC INFO  [stdout] (default task-81) [DEBUG] - com.emc.d2.api.methods.D2Method               : mapArguments: {-method_return_id=0801e2408023f714}
2019-02-12 04:06:11,016 UTC INFO  [stdout] (default task-81) [DEBUG] - com.emc.d2.api.methods.D2Method               : origArguments: {-id=0901e24080122a59}
2019-02-12 04:06:11,017 UTC INFO  [stdout] (default task-81) [DEBUG] - com.emc.d2.api.methods.D2Method               : methodName: WCPublishDocumentMethod
2019-02-12 04:06:11,017 UTC INFO  [stdout] (default task-81) [DEBUG] - com.emc.d2.api.methods.D2Method               : methodParams: -id 0901e24080122a59 -user_name dmadmin -docbase_name repository1
2019-02-12 04:06:11,017 UTC INFO  [stdout] (default task-81) [DEBUG] - com.emc.d2.api.methods.D2Method               : Start WCPublishDocumentMethod method with JMS (Java Method Services) runLocally hint set is false
2019-02-12 04:06:11,017 UTC INFO  [stdout] (default task-81) [DEBUG] - com.emc.d2.api.methods.D2Method               : key: -method_return_id, and value: 0801e2408023f714
...

I hope this blog will help you to quickly resolve this kind of incident.

The article Documentum – Documents not transferred to WebConsumer first appeared on Blog dbi services.

How workload policies is making your life easy


As Control-M administrators, we are often asked to hold or stop jobs during a given period of time. Holding jobs and updating the job configuration by hand might not be the most efficient way to do this. Workload policies make your life much easier: you can configure such a modification in advance and let the system act on its own, whether or not the NDP (New Day Processing) falls in the middle of the period range.

Job configuration and execution

In my lab, I have a cyclic job called OS_Job#test4. Every 2 minutes from its end, this job executes the following command: hostname && cd $HOME && ls && cat installed-versions.txt && sleep 30.

The output of the job would be the following.
Output of the job

The job is executing every 2 minutes.
Cyclic execution

The job is waiting for the next execution time window.
Waiting next time execution window

Workload policies Manager

Let’s configure a workload policy. In Monitoring, Tools, click on the Workload Policy Monitor button.
Workload policy button

Open Workload Policies Manager in order to create a policy.
Workload Policy Manager

Workload policy-2

Let’s configure the policy not to have that specific job running from 23:15 to 23:30.
workload configuration

The recommendation is to specify the application, sub-application and folder, in order to limit the impact if two jobs share the same name.
Using the * character as the job name would include all jobs from the specified folder.

Set the running jobs value to 0 so that no job is expected to run.

When setting the time period, you need to take into account when the NDP happens. For example, if the NDP happens every day at 12pm, this means that you would set up:

  • a period from 08.02.2019 9am to 10.02.2019 9am to stop the job from 09.02.2019 9am until 11.02.2019 9am in calendar date/time
  • a period from 08.02.2019 9am to 09.02.2019 5pm to stop the job from 09.02.2019 9am until 09.02.2019 5pm in calendar date/time

The workload policy needs to be activated.
Activate workload policy

Monitor Workload Policy

The Workload Policies monitor dashboard will show one job in wait status. Using a viewpoint, we can monitor the jobs concerned by the newly created policy.
Monitoring
monitoring viewpoints

For the run between 23:15 and 23:30, the job will not be executed because it belongs to an active workload policy.
waiting information

Logs are showing the same.
Logs

You need to be aware that:

  • The workload policy will only act on a job once its current execution is over.
  • You don’t have to take care of the NDP. If the NDP falls within the period range, the current job will be removed, a new one will be ordered, and it will only start once the workload policy time range is over.

This workload policy feature is quite interesting and helpful. Before implementing it and using it for the first time in production, I still recommend testing it in a lab to get used to it.

The article How workload policies are making your life easy appeared first on the dbi services Blog.


How to disable ACLs in Oracle 12c


A common problem in Oracle 12c is that ACLs are often not wanted, because they limit network connections from the database to other hosts. There is no simple switch to disable ACLs, so the best approach is to allow connections and name resolution to all hosts for all users, as in the following example:


BEGIN
  DBMS_NETWORK_ACL_ADMIN.APPEND_HOST_ACE(
    host => '*',
    ace  => xs$ace_type(privilege_list => xs$name_list('resolve'),
                        principal_name => 'PUBLIC',
                        principal_type => xs_acl.ptype_db));
  DBMS_NETWORK_ACL_ADMIN.APPEND_HOST_ACE(
    host => '*',
    ace  => xs$ace_type(privilege_list => xs$name_list('connect'),
                        principal_name => 'PUBLIC',
                        principal_type => xs_acl.ptype_db));
END;
/
COMMIT;

The “*” in the host parameter allows connections to all hosts; the principal_name “PUBLIC” allows all users of the database to initiate network connections.
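To check that the access control entries are actually in place, you can query the DBA_HOST_ACES view afterwards (a quick sanity check; assumes DBA privileges):

```sql
-- Lists every host ACE; after the block above you should see the
-- 'resolve' and 'connect' privileges granted to PUBLIC on host '*'.
SELECT host, principal, privilege
FROM   dba_host_aces
ORDER  BY host, privilege;
```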

The article How to disable ACLs in Oracle 12c appeared first on the dbi services Blog.

PostgreSQL 12: New option --exclude-database for pg_dumpall


Up to PostgreSQL 11, pg_dumpall is used to dump all PostgreSQL databases of a cluster into one script file. The script file contains SQL commands that can be used as input to psql to restore the databases. It does this by calling pg_dump for each database in the cluster.
PostgreSQL 12 is under development, but we can already test the new features that will be implemented. With PostgreSQL 12 there is a new option that allows excluding specific databases with pg_dumpall. This option is called --exclude-database.
It can be useful where a database name is visible but the database is not dumpable by the user.
We can see this new option in the help output:

[postgres@dbi-pg-essentials log]$ /u01/app/postgres/product/12dev/bin/pg_dumpall --help | grep exclude
--exclude-database=PATTERN   exclude databases whose name matches PATTERN
[postgres@dbi-pg-essentials log]$

In this blog I am going to test this new option.

Let’s first list the databases in my cluster

postgres=# \l
                                     List of databases
      Name       |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges
-----------------+----------+----------+-------------+-------------+-----------------------
 db1             | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 db2             | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 mydb            | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 postgres        | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 postgres_sample | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 template0       | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
                 |          |          |             |             | postgres=CTc/postgres
 template1       | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
                 |          |          |             |             | postgres=CTc/postgres
(7 rows)

postgres=#

First let’s dump all user databases

[postgres@dbi-pg-essentials ~]$  /u01/app/postgres/product/12dev/bin/pg_dumpall   > with_alldb.sql   

If we do a grep in the output file, we can see that the dump contains all databases

[postgres@dbi-pg-essentials ~]$ grep "CREATE DATABASE" with_alldb.sql
CREATE DATABASE db1 WITH TEMPLATE = template0 ENCODING = 'UTF8' LC_COLLATE = 'en_US.UTF-8' LC_CTYPE = 'en_US.UTF-8';
CREATE DATABASE db2 WITH TEMPLATE = template0 ENCODING = 'UTF8' LC_COLLATE = 'en_US.UTF-8' LC_CTYPE = 'en_US.UTF-8';
CREATE DATABASE mydb WITH TEMPLATE = template0 ENCODING = 'UTF8' LC_COLLATE = 'en_US.UTF-8' LC_CTYPE = 'en_US.UTF-8';
CREATE DATABASE postgres_sample WITH TEMPLATE = template0 ENCODING = 'UTF8' LC_COLLATE = 'en_US.UTF-8' LC_CTYPE = 'en_US.UTF-8';
[postgres@dbi-pg-essentials ~]$

Now let’s dump all the user databases except db1 and db2.

[postgres@dbi-pg-essentials ~]$ /u01/app/postgres/product/12dev/bin/pg_dumpall  --exclude-database=db2 --exclude-database=db1 > without_db1_db2.sql

We can indeed verify that the dump file only contains instructions to create the databases mydb and postgres_sample. That’s the expected result.

[postgres@dbi-pg-essentials ~]$ grep "CREATE DATABASE" without_db1_db2.sql
CREATE DATABASE mydb WITH TEMPLATE = template0 ENCODING = 'UTF8' LC_COLLATE = 'en_US.UTF-8' LC_CTYPE = 'en_US.UTF-8';
CREATE DATABASE postgres_sample WITH TEMPLATE = template0 ENCODING = 'UTF8' LC_COLLATE = 'en_US.UTF-8' LC_CTYPE = 'en_US.UTF-8';
[postgres@dbi-pg-essentials ~]$

What is interesting is that we can use wildcard characters. The result above can be obtained as follows:

[postgres@dbi-pg-essentials ~]$  /u01/app/postgres/product/12dev/bin/pg_dumpall  --exclude-database=db* > without_db_star.sql
[postgres@dbi-pg-essentials ~]$ grep "CREATE DATABASE" without_db_star.sql
CREATE DATABASE mydb WITH TEMPLATE = template0 ENCODING = 'UTF8' LC_COLLATE = 'en_US.UTF-8' LC_CTYPE = 'en_US.UTF-8';
CREATE DATABASE postgres_sample WITH TEMPLATE = template0 ENCODING = 'UTF8' LC_COLLATE = 'en_US.UTF-8' LC_CTYPE = 'en_US.UTF-8';
[postgres@dbi-pg-essentials ~]$

Let’s say we want to dump all user databases except those whose names contain ‘db’

[postgres@dbi-pg-essentials ~]$  /u01/app/postgres/product/12dev/bin/pg_dumpall  --exclude-database=*db* > without_star_db_star.sql
[postgres@dbi-pg-essentials ~]$ grep "CREATE DATABASE" without_star_db_star.sql                    
 CREATE DATABASE postgres_sample WITH TEMPLATE = template0 ENCODING = 'UTF8' LC_COLLATE = 'en_US.UTF-8' LC_CTYPE = 'en_US.UTF-8';
[postgres@dbi-pg-essentials ~]$
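The patterns accepted by --exclude-database follow the same rules as psql's \d commands; for simple patterns like db* and *db*, the behaviour matches ordinary shell globbing, which this small sketch illustrates (shell globbing is used here as a stand-in for pg_dumpall's own matching, with the database names from this cluster):

```shell
# Emulate which databases a pattern like *db* would exclude, using a
# shell glob in a case statement as a rough analogy for the pattern
# matching done by pg_dumpall:
for db in db1 db2 mydb postgres_sample; do
  case "$db" in
    *db*) echo "$db: excluded" ;;
    *)    echo "$db: dumped" ;;
  esac
done
```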

Conclusion:

In this blog we looked at the new --exclude-database option for pg_dumpall in PostgreSQL 12.

The article PostgreSQL 12: New option --exclude-database for pg_dumpall appeared first on the dbi services Blog.

How To Create A Recovery Services Vault In Azure


Creating A Recovery Services Vault In Azure

To create a recovery services vault, we first need to create a resource group, using either the CLI or the Azure portal.

Creating a resource group using the command line interface

$resourceGroup = "recovvault"
$location = "westeurope"
az group create -n $resourceGroup -l $location

The resource group recovvault has been created successfully.

Now we need to set up the recovery services vault.

From the Azure portal, select the resource group we just created, click on Add, search for Backup and Site Recovery (OMS), click on it and hit Create.
Specify the following information as described in the image below:

Recovery Services Backup

Hit create.

The recovery services vault has been created successfully.

Set the vault storage redundancy to Locally-redundant

  • Select the recovery vault created
  • From the options column, select Properties, then Backup configuration
  • Choose Locally-redundant (more cost-effective than Geo-redundant)
  • Hit save

Configuring the Azure Backup Agent.

We need to create a virtual machine in order to simulate an on-premises server, so click on Backup from the left menu.
For the purpose of this example, we choose On-premises as the workload location and Files and folders as what to back up.
Hit Prepare the infrastructure.

Then download the agent for windows server and the vault credentials by clicking on the link provided (see below):

Recovery_Services_Agent

Run the Azure Recovery Services Agent setup, leave the Installation Settings and Proxy Configuration sections at their defaults, then click on Install.

When the installation is done, click on Proceed to Registration.

Let’s associate the vault credentials with the agent.

Browse to select the vault credentials file in order to associate the server with the backup vault in Azure.
Enter and confirm the passphrase, choose a location to save it, then hit Finish.

The server is now registered with Azure Backup and the agent is properly installed.

Schedule backup for Files and Folders.

  • Click on close to launch Microsoft Azure Recovery Services Agent.
  • Click on Schedule Backup from the right column
  • Select items to backup (my docs folder)
  • Specify backup schedule (outside normal working hours)
  • Leave by default the retention policy
  • In our example, we choose Over the network due to our small data volume.
  • Hit Finish.

If you have a small amount of data, choose the option Over the network; otherwise choose Offline for large data volumes.

The backup schedule is now successful.

Executing the backup.

Click on Back up Now from the right column
Hit back up and close (see below):

Backup_R

Recovering Data

  • Click on Recover data from the right column of the window backup
  • Choose This server
  • Choose Volume
  • Select the volume and date
  • Choose another recovery destination
  • Hit recover

The recovery is now successful, the folder has been properly recovered.

Information related to the cost of resources used during the implementation of the recovery vault:

Use the PowerShell command below to get an overview of the cost of the resources used:

Get-AzConsumptionUsageDetail -BillingPeriodName 201903 -ResourceGroup recovvault

If you are in a test environment, make sure to delete the resource group (command line below) when it is no longer used, to avoid paying extra costs.

  • az group delete -n $resourceGroup -y

For further details about managing and monitoring a recovery services vault, click here.
For further details about deleting a recovery services vault, click here.

The article How To Create A Recovery Services Vault In Azure appeared first on the dbi services Blog.

How to extract sql statements with awk


A common problem is to extract SQL statements out of a SQL file, e.g. one produced by a full metadata impdp. Often these files are too big to be edited with vi.

The following statement uses awk to extract all SQL statements beginning with “CREATE MATERIALIZED VIEW”:


awk 'BEGIN{ FLAG = 0 } { if( $0 ~ /CREATE MATERIALIZED VIEW/ ) FLAG=1; if(FLAG != 0) print $0; if ($0 ~ /;/ ) FLAG=0; }' impdp_full_metadata.sql

Explanation: in the BEGIN block, FLAG is set to 0. When a line matches the statement pattern, FLAG is set to 1, which causes that line and all subsequent lines to be printed. As soon as a printed line contains a semicolon, FLAG is reset to 0, so printing stops after that line. This also works for statements spanning more than one line.
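As a quick sanity check, the state machine can be exercised on a small hand-made sample file (the file content below is made up for the demo):

```shell
# Create a tiny sample file with one multi-line materialized view
# definition surrounded by other statements (made-up content):
cat > /tmp/sample_metadata.sql <<'EOF'
CREATE TABLE t1 (id NUMBER);
CREATE MATERIALIZED VIEW mv1
  AS SELECT id
  FROM t1;
CREATE INDEX i1 ON t1 (id);
EOF

# Extract only the materialized view statement; the table and index
# statements are skipped because FLAG never becomes 1 for them:
awk 'BEGIN{ FLAG = 0 } { if( $0 ~ /CREATE MATERIALIZED VIEW/ ) FLAG=1; if(FLAG != 0) print $0; if ($0 ~ /;/ ) FLAG=0; }' /tmp/sample_metadata.sql
```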

The article How to extract sql statements with awk appeared first on the dbi services Blog.

SQL Konferenz 2019 – Day 1


For my first “dbi visit”, I went to Darmstadt, Germany, to attend SQL Konferenz 2019.

There were more than 50 sessions over two days (20 & 21.02.2019) in several areas such as SQL administration, BI, DevOps, Azure and Big Data.

I decided to write a short glimpse of the most appreciated sessions.

Keynote

To start the event, the German SQL Server community (PASS.de) was introduced along with its upcoming meetings this year, with the well-known community people present on stage. This year the conference celebrated its 15th anniversary.

The next dates

 

Afterwards, the organization team staged a small sketch based on “Back to the SQL Future”, about the evolution of SQL Server over the years:

Dr Emmett Brown & Marty McFly are back

 

Next, there were a lot of talks about the Azure cloud and Power BI, including a demo of Azure Data Factory.

And to finish the keynote, Kusto Explorer was presented through a funny demo. Kusto Explorer is a program that lets you explore your data with a lightweight SQL-style query syntax and create results with graphs, quickly and easily, with a focus on Big Data.

See details on Microsoft website

Explore your data

 

Azure Managed Instance

This session was presented by Björn Peters, who listed the main traditional recurring tasks of a DBA: backup, ETL, monitoring, patching, reorgs/rebuilds, DBCC CHECKDB. These tasks are greatly simplified in Azure today, and he showed us the functions available in Azure Managed Instance:

  •  Data migration: Native backup/restore
  • Security: Azure AD, TDE, SQL Audit, Dynamic Data Masking
  • Programmability: Global temp tables, Linked servers, CLR module
  • Operational: DMVs & XEvents, Query Store, DB Mail
  • Scenario enablers: Service Broker, Transactional Replication

Azure Data Studio – The new Kid in Town

Presented by Frank Geisler, a member of the PASS.de community

This is a new tool to manage our databases. It is not intended to replace SSMS, but is rather a new tool more oriented towards developers and DevOps.

  • Multi-platform: Windows, Mac & Linux
  • Based on a Visual Studio Code fork
  • A modern architecture
  • For developer and DBA
  • Extensions
    • SQL Server Agent
    • SQL Server Import
    • Server Reports
    • WhoIsActive
ADS will kill Microsoft SSMS?

 

I invite you to test Azure Data Studio => Download here

Frank’s podcasts => https://www.pleasetalkdatatome.de/

SQL Agent in the cloud

This session was presented by Sam Cogan. As you know, the SQL Agent doesn’t exist in Azure SQL, and we have 3 alternatives:

  • Azure SQL Elastics Jobs
  • Azure Automation
  • Azure Functions
You have the choice

 

Trends in databases

It was a presentation about statistics on popular databases, based on the following website => https://db-engines.com/en/ranking

SQL Server VS Oracle?

 

After: 15th anniversary

An evening event was organized with a meal, entertainment and games to win prizes.

We played “Bingo Monsters”

 

Finally, it was a good opportunity to meet some people who know dbi services, but in the context of the DOAG ;)

The article SQL Konferenz 2019 – Day 1 appeared first on the dbi services Blog.
