
DOAG 2017


As a consultant at dbi services, I spent most of the last two years on consolidation projects based on the Oracle Engineered Systems. The talks about the new generation of the Oracle Database Appliance, the X7-2, were therefore of particular interest to me.

In my view, Oracle took the right step by reducing the variety of ODA systems that still existed in the X6-2 generation (Small/Medium/Large and HA).

Going forward, the ODA X7-2 will only be available in 3 models (S, M and HA). It has to be said clearly that the smaller systems were upgraded in terms of performance and that the HA is finally available again in a configuration that also allows the consolidation of larger database and application systems:

  • the ODA X7-2 S as an entry-level system with one 10-core CPU, up to 384 GB RAM and 12.8 TB of NVMe storage
  • the ODA X7-2 M now rather corresponds to the former X6 L systems, with 2×18 cores, up to 768 GB RAM and up to 51.2 TB of NVMe storage
  • the ODA X7-2 HA is of course the flagship of the ODA class: 2 servers with 2×18 cores each, up to 768 GB RAM per server and various storage expansions up to 150 TB give you the old feeling of working with an X5 (or maybe even with an Exadata)

For me, the most interesting new features are not so much on the hardware side but rather in the possible deployments of the systems:

  • all systems support SE/SE1/SE2/EE 11.2.0.4, 12.1.0.2 and 12.2.0.1
  • all systems support a virtualized setup, the small systems with KVM and the X7-2 HA with KVM and OVM; hard partitioning with KVM is not possible yet, but it is planned
  • on the X7-2 HA, the storage expansions can be either High Performance (SSD) or High Capacity (HDD), and even mixed configurations are possible with some restrictions

Savings were made on the network interfaces, however: there are now only 2 interfaces instead of the 4 available on the X5-2 (besides the private interfaces for the interconnect). It is possible to configure an additional interface after the deployment, but only with 1 GbE. VLAN configurations on the public interface are planned for the bare metal setup as well, but Oracle could still have given the systems (especially the HA) two additional interfaces, since the slots are available.

The performance figures of the NVMe and SSD storage are highly interesting: up to 100,000 IOPS are possible, and on the HA I see the SAS bus rather than the SSDs as the limiting factor. What is really nice is that the storage for the redo logs was extended to 4x800 GB SSD; on the earlier systems you always had to be a bit economical there…

All in all, I am looking forward to working with the X7-2, because Oracle provides a good piece of hardware here that also remains reasonably priced.


The article DOAG 2017 appeared first on the dbi services blog.


DOAG 2017: avg_row_len with virtual columns


At the DOAG I attended the session "Top-level DB design for Big Data in ATLAS Experiment at CERN" given by Gancho Dimitrov. The presentation was actually very interesting. As part of Gancho's improvement activities to reduce the space used by a table, he stored data in a 16-byte RAW format (instead of a 36-byte string representing the hex values) and used virtual columns to compute the original hex string.

So the original value is e.g. 21EC2020-3AEA-4069-A2DD-08002B30309D, which is reduced to 16 bytes by removing the '-' and converting the resulting hex string to RAW:

HEXTORAW(REPLACE('21EC2020-3AEA-4069-A2DD-08002B30309D', '-', ''))
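
Just to illustrate the space saving, here is a minimal check (a sketch using the example value above) that compares the byte sizes of the two representations with VSIZE; it returns 36 for the string and 16 for the RAW value:

select vsize('21EC2020-3AEA-4069-A2DD-08002B30309D') as string_bytes,
       vsize(hextoraw(replace('21EC2020-3AEA-4069-A2DD-08002B30309D','-',''))) as raw_bytes
from dual;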

The problem was that the longer virtual columns added to the average row length statistic in Oracle. Here is a simple test case:


create table cern_test
(
GUID0 RAW(16)
,GUID1 RAW(16)
,GUID2 RAW(16)
,GUID0_CHAR as (SUBSTR(RAWTOHEX(GUID0),1,8)||'-'||
SUBSTR(RAWTOHEX(GUID0),9,4)||'-'||
SUBSTR(RAWTOHEX(GUID0),13,4)||'-'||
SUBSTR(RAWTOHEX(GUID0),17,4)||'-'||
SUBSTR(RAWTOHEX(GUID0),21,12))
,GUID1_CHAR as (SUBSTR(RAWTOHEX(GUID1),1,8)||'-'||
SUBSTR(RAWTOHEX(GUID1),9,4)||'-'||
SUBSTR(RAWTOHEX(GUID1),13,4)||'-'||
SUBSTR(RAWTOHEX(GUID1),17,4)||'-'||
SUBSTR(RAWTOHEX(GUID1),21,12))
,GUID2_CHAR as (SUBSTR(RAWTOHEX(GUID2),1,8)||'-'||
SUBSTR(RAWTOHEX(GUID2),9,4)||'-'||
SUBSTR(RAWTOHEX(GUID2),13,4)||'-'||
SUBSTR(RAWTOHEX(GUID2),17,4)||'-'||
SUBSTR(RAWTOHEX(GUID2),21,12))
);
 
insert into cern_test (guid0,guid1,guid2)
select HEXTORAW('21EC20203AEA4069A2DD08002B30309D'),
HEXTORAW('31DC20203AEA4069A2DD08002B30309D'),
HEXTORAW('41CC20203AEA4069A2DD08002B30309D')
from xmltable('1 to 10000');
commit;
 
exec dbms_stats.gather_table_stats(user,'CERN_TEST',estimate_percent=>100,method_opt=>'FOR ALL COLUMNS SIZE 1');
 
select avg_row_len from tabs where table_name='CERN_TEST';
 
AVG_ROW_LEN
-----------
162
 
select sum(avg_col_len) from user_tab_columns
where table_name='CERN_TEST' and column_name in ('GUID0','GUID1','GUID2');
 
SUM(AVG_COL_LEN)
----------------
51
 
select sum(avg_col_len) from user_tab_columns
where table_name='CERN_TEST' and column_name in ('GUID0_CHAR','GUID1_CHAR','GUID2_CHAR');
 
SUM(AVG_COL_LEN)
----------------
111

The question is whether Oracle's computation of the average row length is correct, i.e. should the virtual columns, which do not exist physically, be considered?
Physically they take no space, so the physical average row length in the example above is 51, but logically it is 162. Which one is correct?

To answer that question, we have to check what the average row length is used for. That is not documented, but my assumption is that it is actually only used to calculate the Bytes required when doing a "select *", i.e. selecting all columns. That number, however, may become important later on when calculating the memory required for e.g. a hash join.

Anyway, the basic question is how Oracle treats virtual columns in execution plans: does it compute the value of the virtual column when the table is accessed, or only when it actually needs it (e.g. when fetching the row or when using it as a join column)? According to the "Bytes" figure in the execution plan, the value is computed when the table is accessed:


SQL> explain plan for
2 select a.*, b.guid0 b_guid0 from cern_test a, cern_test b
3 where a.guid0_char=b.guid0_char;
 
Explained.
 
SQL> select * from table(dbms_xplan.display(format=>'+PROJECTION'));
 
PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------
Plan hash value: 3506643611
 
--------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 100M| 20G| 267 (83)| 00:00:01 |
|* 1 | HASH JOIN | | 100M| 20G| 267 (83)| 00:00:01 |
| 2 | TABLE ACCESS FULL| CERN_TEST | 10000 | 527K| 23 (0)| 00:00:01 |
| 3 | TABLE ACCESS FULL| CERN_TEST | 10000 | 1582K| 23 (0)| 00:00:01 |
--------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
1 - access("A"."GUID0_CHAR"="B"."GUID0_CHAR")
 
Column Projection Information (identified by operation id):
-----------------------------------------------------------
 
1 - (#keys=1) "A"."GUID0_CHAR"[VARCHAR2,132], "GUID0"[RAW,16],
    "GUID0"[RAW,16], "GUID1"[RAW,16], "GUID2"[RAW,16]
2 - "GUID0"[RAW,16]
3 - "GUID0"[RAW,16], "GUID1"[RAW,16], "GUID2"[RAW,16]
23 rows selected.
 
SQL> select column_name, avg_col_len from user_tab_columns
2 where table_name='CERN_TEST' and column_name in ('GUID0','GUID0_CHAR');
 
COLUMN_NAME AVG_COL_LEN
----------------------------------- -----------
GUID0 17
GUID0_CHAR 37
 
SQL> select (10000*17)/1024 from dual;
 
(10000*17)/1024
---------------
166.015625
 
SQL> select ((10000*17)+(10000*37))/1024 from dual;
 
((10000*17)+(10000*37))/1024
----------------------------
527.34375

So according to the projection at step 2 of the plan we use B.GUID0 only, but the Bytes value of 527K accounts for GUID0 and the virtual column GUID0_CHAR. The calculation of Bytes is therefore done when the table is accessed and not when the virtual column is actually needed (during the hash join).

In that regard the calculation of the avg_row_len by dbms_stats with the virtual columns considered is correct.

The only issue I see is with old scripts people wrote long ago, which try to compute the amount of data in a table from its avg_row_len statistic using something like:


SELECT table_name, num_rows * avg_row_len actual_size_of_data
FROM user_tables order by 2;

If there are virtual columns in the table, such a select may return values for "actual_size_of_data" that are too high.
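
If such a script has to keep working with virtual columns around, one possible workaround (just a sketch, assuming up-to-date statistics) is to sum avg_col_len over the non-virtual columns only, using the VIRTUAL_COLUMN flag of USER_TAB_COLS:

SELECT t.table_name, t.num_rows * SUM(c.avg_col_len) approx_size_of_stored_data
FROM user_tables t
JOIN user_tab_cols c ON c.table_name = t.table_name
WHERE c.virtual_column = 'NO'
GROUP BY t.table_name, t.num_rows
ORDER BY 2;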

REMARK: Using the old ANALYZE command to gather statistics results in an avg_row_len value that considers only "real" columns. However, ANALYZE should of course not be used anymore.

 

The article DOAG 2017: avg_row_len with virtual columns appeared first on the dbi services blog.

#DOAG2017



The discussions about the technologies we love. With Bryn about my tests on the MLE and the fact that I compared very different things, running a recursive function on different datatypes (integer vs. number). With Mike about the way RUs will be recommended and RURs only for very special cases. With Nigel about the ODC Database Ideas, with Stefan about what is documented or not, with… Discussions about community also, and user groups.

The trip, where meeting fellow speakers start in the plane,…

The dinners with ACEs, with Speakers, with friends…

The beers, thanks to Pieter & Philippe for sharing Belgian beers & cheese & mustard & celery salt & your good mood

The sessions of course. Kamil’s tool to show tablespace fragmentation visually, Jan’s comparison between Oracle and EDB, Philippe & Pieter’s technical view on GDPR, Adam’s research on NFS for his appliance,…

The party for sure,…

My session, and the very interesting questions I got… I was lucky to speak on the first day. And proud to speak on the Oak Table stream for the first time. I was happy to see many people already with a CDB, and even in production. Adoption is slow, but people come to it and finally notice that it is not a big change for the daily job.

And colleagues of course. This is the conference where dbi services has a booth and several speakers. We are passionate and like to share. At the booth, we did some demos of Dbvisit Standby 8, Orachrome Lighty, and also the OpenDB Appliance. We met customers and candidates, talked about the technologies we love, and explained how we do our training workshops. It is also a great place to discuss among ourselves. Even if we have internal projects, and two ‘dbi xChange’ events every year, we are mainly at customers and have so much to share.

DOAG is an amazing conference. Intense time compressed into 3 days. This incredibly friendly ambiance is hard to quit at the end of the conference. Fortunately, persistence and durability are guaranteed thanks to Kamil’s snapshots:

When you see how Kamil highlights each personality with a simple camera, can you imagine what he can do when organizing a conference? Keep an eye on POUG website.

 

The article #DOAG2017 appeared first on the dbi services blog.

Documentum – DFC traces setup & investigation


When working with Documentum, you will most probably have to enable the DFC traces one day or another and then work with these traces to analyze them. The purpose of this blog is simply to show how the DFC traces can be enabled, which tools can be used to quickly process them and what the limitations of such tools are.

Enabling the DFC traces can be done very easily by updating the dfc.properties file of the client. This client can be a DA, D2, JMS, Index Agent, and so on. The change is applied directly (if dfc.tracing.enable=true) and tracing is disabled by default (if the property is commented out or set to false). If your dfc.properties is inside a war file (for DA/D2 for example) and you deployed your application as a war file (not exploded), then disabling the tracing might need a restart of your application. To avoid that, you can have a dfc.properties inside the war file that just points to another one outside of the war file; enabling/disabling the traces from this second file will then work properly. There are a lot of options to customize how the traces should be generated. Here is a first example with only a few properties that you can use and reuse every time you need traces:

dfc.tracing.enable=true
dfc.tracing.verbose=true
dfc.tracing.max_stack_depth=0
dfc.tracing.mode=compact
dfc.tracing.dir=/tmp/dfc_tracing

 

Another example with more properties to really specify what you want to see:

dfc.tracing.enable=true
dfc.tracing.verbose=true
dfc.tracing.max_stack_depth=4
dfc.tracing.include_rpcs=true
dfc.tracing.mode=standard
dfc.tracing.include_session_id=true
dfc.tracing.user_name_filter[0]=dmadmin
dfc.tracing.user_name_filter[1]=myuser
dfc.tracing.thread_name_filter[0]=Thread-3
dfc.tracing.thread_name_filter[1]=Thread-25
dfc.tracing.timing_style=milliseconds_from_start
dfc.tracing.dir=/tmp/dfc_tracing
dfc.tracing.file_prefix=mydfc
dfc.tracing.max_backup_index=10
dfc.tracing.max_file_size=104857600
...

 

All these properties are quite easy to understand even without explanation, but you can probably find more information and all the possible options in the official Documentum documentation. It’s not the main purpose of this blog, so I’m just mentioning a few properties to get started. By default, the name of the generated files will be something like “dfctrace.timestamp.log”; you can change that by setting “dfc.tracing.file_prefix”, for example. Adding and customizing the properties will change the display format and style inside the files, so if you want a consistent way to analyze these DFC traces, it is better to always use more or less the same set of options. For the example below, OTX asked me to use these properties only:

dfc.tracing.enable=true
dfc.tracing.verbose=true
dfc.tracing.max_stack_depth=4
dfc.tracing.include_rpcs=true
dfc.tracing.mode=compact
dfc.tracing.include_session_id=true
dfc.tracing.dir=/tmp/dfc_tracing

 

When you have your DFC traces, you need a way to analyze them. They are pretty much readable, but it will be complicated to get something out of them without spending a certain amount of time (unless you already know what you are looking for), simply because there is a lot of information inside… For that purpose, Ed Bueche developed, more than 10 years ago, some AWK scripts to parse the DFC trace files: traceD6.awk and trace_rpc_histD6.awk. You can find these scripts at the following locations (all EMC links, so they might stop working at some point in the future):

As you can see above, it is not really maintained and the same scripts or a mix of several versions can be found at several locations so it can be a little bit confusing. All the old links are about the awk scripts but since 2013, there is now a python script too (also developed by Ed Bueche).

In this blog, I mainly wanted to talk about the AWK scripts. Earlier this month, I was working with OTX on some performance tuning tasks and for that, I gathered the DFC traces for several scenarios, in different, well-separated log files. Then I provided them to OTX for analysis. OTX came back to me a few minutes later saying that most of the traces were corrupted and asking me to regenerate them. I wasn’t quite OK with that, simply because it takes time and because there was some testing in progress on this environment, so gathering clean DFC traces for several scenarios would have forced the tests to be stopped, and so on… (OK OK, you got me, I’m just lazy ;))

The content of the DFC traces looked correct to me, and after a quick verification, I saw that OTX was using the AWK scripts (traceD6.awk and trace_rpc_histD6.awk) to analyze the logs but was apparently getting an error. The files didn’t look corrupted to me, so I mentioned to OTX that the issue might very well be with the AWK scripts they were using. They didn’t really listen to what I said and stayed focused on getting a new set of DFC traces. I had already used these scripts but never really looked inside, so it was the perfect reason to take some time for that:

[dmadmin@content_server_01 ~]$ cd /tmp/dfc_tracing/
[dmadmin@content_server_01 dfc_tracing]$
[dmadmin@content_server_01 dfc_tracing]$ ls -l trace* dfctrace.*.log
-rw-r-----. 1 dmadmin dmadmin 92661060 Nov 3 09:24 dfctrace.1510220481672.log
-rw-r-----. 1 dmadmin dmadmin 3240 Nov 4 14:10 traceD6.awk
-rw-r-----. 1 dmadmin dmadmin 7379 Nov 4 14:10 traceD6.py
-rw-r-----. 1 dmadmin dmadmin 5191 Nov 4 14:10 trace_rpc_histD6.awk
[dmadmin@content_server_01 dfc_tracing]$
[dmadmin@content_server_01 dfc_tracing]$ awk -f traceD6.awk < dfctrace.1510220481672.log > output_awk_1.log
[dmadmin@content_server_01 dfc_tracing]$
[dmadmin@content_server_01 dfc_tracing]$ wc -l output_awk_1.log
2 output_awk_1.log
[dmadmin@content_server_01 dfc_tracing]$
[dmadmin@content_server_01 dfc_tracing]$
[dmadmin@content_server_01 dfc_tracing]$ awk -f trace_rpc_histD6.awk < dfctrace.1510220481672.log > output_awk_2.log
awk: trace_rpc_histD6.awk:203: (FILENAME=- FNR=428309) fatal: division by zero attempted
[dmadmin@content_server_01 dfc_tracing]$
[dmadmin@content_server_01 dfc_tracing]$ wc -l output_awk_2.log
4 output_awk_2.log
[dmadmin@content_server_01 dfc_tracing]$

 

As you can see above, the first script generated a log file that contains only 2 lines, which is already suspicious even if there are no errors. The second script generated an error and its log file contains only 4 lines… The input DFC trace file has a size of 90 MB, so it’s clear that there is something wrong, and that’s why OTX said that the DFC traces were corrupted. The error message shows line 203 as the origin of the issue, as well as a “division by zero attempted” message. This obviously means that somewhere on this line there is a division and that the divisor is equal to 0 or not set at all. Since I love all kinds of UNIX scripting, I would rather fix the bug in the script than generate a new set of DFC traces (the new set would still be impacted by the issue anyway…)! So, checking inside the trace_rpc_histD6.awk file, line 203 is the following one:

[dmadmin@content_server_01 dfc_tracing]$ grep -n -C1 "TIME SPENT" trace_rpc_histD6.awk
202-    printf ("DURATION (secs):\t%17.3f\n", ((curr_tms - st_tms)) );
203:    printf ("TIME SPENT EXECUTING RPCs (secs):%8.3f (which is %3.2f percent of total time)\n", total_rpc_time, 100*total_rpc_time/(curr_tms - st_tms));
204-    printf ("Threads :\t%25d\n", thread_cnt);
[dmadmin@content_server_01 dfc_tracing]$

 

The only division on this line is the total time taken to execute the RPCs divided by the duration of the log file (timestamp of the last message minus the first one). So the value of “curr_tms - st_tms” is 0. Potentially, both variables could have the exact same value, but since the first and last messages of the DFC traces don’t have the same timestamp, this isn’t possible; therefore both variables are actually 0 or not set. Let’s check where, how and in which function these variables are defined:

[dmadmin@content_server_01 dfc_tracing]$ grep -n -C15 -E "curr_tms|st_tms" trace_rpc_histD6.awk | grep -E "curr_tms|st_tms|^[0-9]*[:-][^[:space:]]"
144-/ .RPC:/ {
159:                    st_tms = $1;
162:            curr_tms = $1;
175-}
177-/obtained from pool/ {
--
187-}
188-/.INFO: Session/ {
193-}
197-END {
202:    printf ("DURATION (secs):\t%17.3f\n", ((curr_tms - st_tms)) );
203:    printf ("TIME SPENT EXECUTING RPCs (secs):%8.3f (which is %3.2f percent of total time)\n", total_rpc_time, 100*total_rpc_time/(curr_tms - st_tms));
[dmadmin@content_server_01 dfc_tracing]$

 

This shows that the only location where these two variables are set is inside the matching pattern “/ .RPC:/” (st_tms is set to $1 only on the first execution). So this portion of code is never executed; in other words, this pattern is never found in the DFC trace file. Why is that? Well, that’s pretty simple: the DFC trace file contains a lot of RPC calls, but these lines never contain ” .RPC:”, there are always at least two dots (something like ” ..RPC:”, ” …RPC:” or ” ….RPC:”). The reason why there are several dots is simply that the RPCs appear at the stack depth where they are called… In this case, OTX asked us to use “dfc.tracing.max_stack_depth=4″, so this is what I did, and it is the reason why the AWK scripts cannot work by default: they need “dfc.tracing.max_stack_depth=0″, as written in the comment sections at the beginning of the scripts.
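
Before modifying anything, you can quickly verify how the RPC lines look in your own trace file, i.e. how many dots precede “RPC:”. A small sketch with grep on the trace file used above (the counts will obviously depend on your max_stack_depth):

grep -o " \.*RPC:" dfctrace.1510220481672.log | sort | uniq -c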

So a simple way to fix the AWK scripts is to remove the space at the beginning of the pattern in both traceD6.awk and trace_rpc_histD6.awk; after doing that, they will work for all max_stack_depth values:

[dmadmin@content_server_01 dfc_tracing]$ grep -n ".RPC:/" *.awk
traceD6.awk:145:/ .RPC:/ {
trace_rpc_histD6.awk:144:/ .RPC:/ {
[dmadmin@content_server_01 dfc_tracing]$
[dmadmin@content_server_01 dfc_tracing]$ sed -i 's,/ .RPC:/,/.RPC:/,' *.awk
[dmadmin@content_server_01 dfc_tracing]$
[dmadmin@content_server_01 dfc_tracing]$
[dmadmin@content_server_01 dfc_tracing]$ grep -n ".RPC:/" *.awk
traceD6.awk:145:/.RPC:/ {
trace_rpc_histD6.awk:144:/.RPC:/ {
[dmadmin@content_server_01 dfc_tracing]$
[dmadmin@content_server_01 dfc_tracing]$ awk -f traceD6.awk < dfctrace.1510220481672.log > output_awk_1.log
[dmadmin@content_server_01 dfc_tracing]$
[dmadmin@content_server_01 dfc_tracing]$ wc -lc output_awk_1.log
 1961 163788 output_awk_1.log
[dmadmin@content_server_01 dfc_tracing]$
[dmadmin@content_server_01 dfc_tracing]$
[dmadmin@content_server_01 dfc_tracing]$ awk -f trace_rpc_histD6.awk < dfctrace.1510220481672.log > output_awk_2.log
[dmadmin@content_server_01 dfc_tracing]$
[dmadmin@content_server_01 dfc_tracing]$ wc -lc output_awk_2.log
 367 49050 output_awk_2.log
[dmadmin@content_server_01 dfc_tracing]$

 

That looks much, much better… Basically, the first script lists all RPCs with their thread, name and timings, while the second script creates a sorted list of the queries that took the most time to execute, as well as a list of calls and occurrences per type/name.

The AWK and Python scripts, even if they globally work, might have some issues with commas, parentheses and the like (again, it depends on which dfc.tracing options you selected). This is why I mentioned above that there is actually both an AWK and a Python version of these scripts. Sometimes the AWK scripts will contain the right information, sometimes it is the Python version that will, but in all cases the latter will run much faster. So if you want to work with these scripts, you will have to juggle a little bit:

[dmadmin@content_server_01 dfc_tracing]$ python traceD6.py dfctrace.1510220481672.log > output_py_1.log
[dmadmin@content_server_01 dfc_tracing]$
[dmadmin@content_server_01 dfc_tracing]$ wc -lc output_py_1.log
 1959 194011 output_py_1.log
[dmadmin@content_server_01 dfc_tracing]$
[dmadmin@content_server_01 dfc_tracing]$
[dmadmin@content_server_01 dfc_tracing]$ python traceD6.py dfctrace.1510220481672.log -profile > output_py_2.log
[dmadmin@content_server_01 dfc_tracing]$
[dmadmin@content_server_01 dfc_tracing]$ wc -lc output_py_2.log
 342 65917 output_py_2.log
[dmadmin@content_server_01 dfc_tracing]$
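
If you have several trace files or scenarios to process, a small shell loop (just a sketch reusing the commands shown above) avoids typing the four commands for each file:

# parse every DFC trace of the current directory with both the AWK and the Python scripts
for f in dfctrace.*.log; do
  awk -f traceD6.awk < "${f}" > "${f%.log}_awk_rpc.log"
  awk -f trace_rpc_histD6.awk < "${f}" > "${f%.log}_awk_hist.log"
  python traceD6.py "${f}" > "${f%.log}_py_rpc.log"
  python traceD6.py "${f}" -profile > "${f%.log}_py_profile.log"
done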

 

As you can see, there are fewer lines in the Python output files, but that’s because some unnecessary headers have been removed in the Python version, so it’s actually normal. However, there are many more characters, which shows that, in this case, the extracted DQL queries contain more characters; it does not mean that these characters are actually part of the DQL queries: you will see below references to “, FOR_UPDATE=F, BATCH_HINT=50, BOF_DQL=T]],50,true,true)” => this is NOT part of the DQL, but it is present in the output of the Python script while it is not in the AWK one:

[dmadmin@content_server_01 dfc_tracing]$ head -15 output_awk_1.log
analysis program version 2 based on DFC build 6.0.0.76
68354.130 & 0.005 & [http--0.0.0.0-9082-3] & EXEC_QUERY  select r_object_id from dm_sysobject where folder ('/Home') and object_name = 'Morgan Patou'
68354.135 & 0.000 & [http--0.0.0.0-9082-3] & multiNext
68354.136 & 0.005 & [http--0.0.0.0-9082-3] & SysObjFullFetch  0b0f12345004f0de
68354.165 & 0.002 & [http--0.0.0.0-9082-4] & EXEC_QUERY  select r_object_id from dm_user where user_name = 'Morgan Patou'
68354.167 & 0.000 & [http--0.0.0.0-9082-4] & multiNext
68354.167 & 0.002 & [http--0.0.0.0-9082-4] & IsCurrent
68354.170 & 0.003 & [http--0.0.0.0-9082-4] & EXEC_QUERY  SELECT COUNT(*) AS items FROM dm_group WHERE group_name = 'report_user' AND ANY i_all_users_names = 'Morgan Patou'
68354.173 & 0.001 & [http--0.0.0.0-9082-4] & multiNext
68354.175 & 0.003 & [http--0.0.0.0-9082-4] & EXEC_QUERY  select r_object_id from dm_sysobject where folder ('/myInsight') and object_name = 'myInsight.license'
68354.178 & 0.001 & [http--0.0.0.0-9082-4] & multiNext
68354.179 & 0.001 & [http--0.0.0.0-9082-4] & IsCurrent
68354.165 & 0.010 & [http--0.0.0.0-9082-3] & SysObjGetPermit
68354.175 & 0.006 & [http--0.0.0.0-9082-3] & SysObjGetXPermit
68354.181 & 0.006 & [http--0.0.0.0-9082-4] & MAKE_PULLER
[dmadmin@content_server_01 dfc_tracing]$
[dmadmin@content_server_01 dfc_tracing]$
[dmadmin@content_server_01 dfc_tracing]$ head -15 output_py_1.log
68354.130 & 0.005 & [http--0.0.0.0-9082-3] & EXEC_QUERY & select r_object_id from dm_sysobject where folder ('/Home') and object_name = 'Morgan Patou', FOR_UPDATE=F, BATCH_HINT=50, BOF_DQL=T]],50,true,true)
68354.135 & 0.000 & [http--0.0.0.0-9082-3] & multiNext &
68354.136 & 0.005 & [http--0.0.0.0-9082-3] & SysObjFullFetch & 0b0f12345004f0de
68354.165 & 0.002 & [http--0.0.0.0-9082-4] & EXEC_QUERY & select r_object_id from dm_user where user_name = 'Morgan Patou', FOR_UPDATE=F, BATCH_HINT=50, BOF_DQL=T]],50,true,true)
68354.167 & 0.000 & [http--0.0.0.0-9082-4] & multiNext &
68354.167 & 0.002 & [http--0.0.0.0-9082-4] & IsCurrent & 110f123450001d07
68354.170 & 0.003 & [http--0.0.0.0-9082-4] & EXEC_QUERY & SELECT COUNT(*) AS items FROM dm_group WHERE group_name = 'report_user' AND ANY i_all_users_names = 'Morgan Patou', FOR_UPDATE=T, BATCH_HINT=50, BOF_DQL=T, FLUSH_BATCH=-1]],50,true,true)
68354.173 & 0.001 & [http--0.0.0.0-9082-4] & multiNext &
68354.175 & 0.003 & [http--0.0.0.0-9082-4] & EXEC_QUERY & select r_object_id from dm_sysobject where folder ('/myInsight') and object_name = 'myInsight.license', FOR_UPDATE=F, BATCH_HINT=50, BOF_DQL=T]],50,true,true)
68354.178 & 0.001 & [http--0.0.0.0-9082-4] & multiNext &
68354.179 & 0.001 & [http--0.0.0.0-9082-4] & IsCurrent & 090f123450023f63
68354.165 & 0.010 & [http--0.0.0.0-9082-3] & SysObjGetPermit & 0b0f12345004f0de
68354.175 & 0.006 & [http--0.0.0.0-9082-3] & SysObjGetXPermit & 0b0f12345004f0de
68354.181 & 0.006 & [http--0.0.0.0-9082-4] & MAKE_PULLER & null
68354.187 & 0.000 & [http--0.0.0.0-9082-4] & getBlock &
[dmadmin@content_server_01 dfc_tracing]$
[dmadmin@content_server_01 dfc_tracing]$
[dmadmin@content_server_01 dfc_tracing]$
[dmadmin@content_server_01 dfc_tracing]$
[dmadmin@content_server_01 dfc_tracing]$ head -35 output_py_2.log

****** PROFILE OF rpc CALLS *****
     3.273           0.080              41      AUTHENTICATE_USER
     0.032           0.002              17      BEGIN_TRANS
     0.001           0.000              14      END_PUSH_V2
     0.202           0.012              17      END_TRANS
    21.898           0.071             310      EXEC_QUERY
     0.028           0.005               6      FETCH_CONTENT
     0.011           0.000              55      GET_ERRORS
     0.117           0.004              27      GET_LOGIN
     0.290           0.002             163      IsCurrent
     0.013           0.000              82      KILL_PULLER
     0.003           0.000              14      KILL_PUSHER
     0.991           0.012              82      MAKE_PULLER
     0.005           0.000              14      MAKE_PUSHER
     0.002           0.000               5      NEXT_ID_LIST
     0.083           0.002              38      NORPC
     0.015           0.005               3      RelationCopy
     0.446           0.032              14      SAVE
     0.274           0.014              20      SAVE_CONT_ATTRS
     0.140           0.010              14      START_PUSH
     0.134           0.045               3      SysObjCheckin
     0.048           0.016               3      SysObjCheckout
     2.199           0.009             240      SysObjFullFetch
     0.913           0.006             141      SysObjGetPermit
     0.764           0.005             141      SysObjGetXPermit
     0.642           0.046              14      SysObjSave
     0.033           0.000              82      getBlock
     1.454           0.004             399      multiNext

**** QUERY RESPONSE SORTED IN DESCENDING ORDER ****

10.317  select distinct wf.object_name as workflow_name, pr.object_name as process_name, i.name as Performer_Name, i.task_name as Task_Name, i.date_sent as Date_Task_Sent, i.actual_start_date as Date_Task_Acquired, wf.r_creator_name as Workflow_Initiator, cd.primary_group as "group", cd.subgroup as subgroup, cd.artifact_name as Artifact_Name, cd.object_name as document_name, cd.r_version_label as version_label, cd.title as Document_Title, cd.r_object_id as object_id from cd_common_ref_model(all) cd, dmi_package p, dmi_queue_item i, dm_workflow wf, dm_process pr
0.607   select r_object_id from dm_sysobject where folder ('/myInsight/Presentations/Standard Presentations/Graphical Reports') and object_name = 'FusionInterface.xsl', FOR_UPDATE=F, BATCH_HINT=50, BOF_DQL=T]],50,true,true)
0.505   select r_object_id from dm_sysobject where folder ('/myInsight/Presentations/Life Sciences') and object_name = 'Unique Templates.xsl', FOR_UPDATE=F, BATCH_HINT=50, BOF_DQL=T]],50,true,true)
[dmadmin@content_server_01 dfc_tracing]$

 

To conclude this blog on a more philosophical note: always question what other people ask you to do and think twice before doing the same thing over and over again. ;)

 

 

The article Documentum – DFC traces setup & investigation appeared first on the dbi services blog.

WebLogic – SSO/Atn/Atz – How to enable debug logs


Let’s say that you have an application deployed on a WebLogic Server and that the SSO/Atn/Atz (SSO/Authentication/Authorization) isn’t working or is throwing a 4xx error code like ‘401 - Unauthorized’ or ‘403 - Forbidden’. How would you find out what is happening and how to solve it? Well, the answer is always the same: enable the debug logs. It is just a matter of choosing the right scope and a way to do it because yes, it is possible to enable/disable them in several ways…

In this blog, I will be using a WebLogic Server configured with SAML2 Single Sign-On (SSO) and hosting a Documentum D2 application. I will therefore enable and disable the debug logs for a Managed Server named ‘msD2-01′, which is obviously the MS hosting D2 (this is just for the background; it does not matter which application is running there!).

In the case of the SAML2 SSO, it’s pretty easy to recognize some patterns of errors: you just have to access the SSO URL and then watch closely the redirection from the WLS to the IdP Partner and back to the WLS. Depending on when the issue appears, you can find out where it is. But in case you don’t know anything about all that, or if you are completely new to this, you can and should always do the same thing: enable the debug logs.

 

I. Administration Console

WebLogic provides several ways to enable debug logs but the one that makes it really easy, even for beginners, is the WebLogic Administration Console. In this blog, I will only talk about SAML2 SSO, Authentication and Authorization. So enabling the debug logs for these three elements can be done using these steps:

  1. Login to the Admin Console using your weblogic account
  2. Navigate to the correct page: DOMAIN > Environment > Servers > ServerName (msD2-01 for me) > Debug
  3. Click on the ‘Lock & Edit’ button
  4. Expand the scope ‘weblogic’ and then ‘security’
  5. Check the correct checkboxes, depending on which element you want to enable:
    • atn
    • atz
    • ldap
    • saml
    • saml2
  6. Go back to the top and click on the ‘Enable’ button
  7. Click on the ‘Activate Changes’ button -> All changes have been activated. No restarts are necessary.

The correct page on the WebLogic Admin Console (step 2) is this one:

[Screenshot: the ‘Debug’ tab of the Managed Server in the Administration Console]

From this moment, you can check the Managed Server log file (its name is defined in the ‘Logging’ tab) which is $DOMAIN_HOME/servers/ServerName/logs/ServerName.log by default ($DOMAIN_HOME/servers/msD2-01/logs/msD2-01.log in my case) and it will start to log the messages for the elements that you enabled.
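
To follow the debug messages live while reproducing the issue, a simple tail/grep is usually enough (just a sketch; the debug subsystem names like SecuritySAML2Service or SecurityAtn normally appear in the messages):

tail -f $DOMAIN_HOME/servers/msD2-01/logs/msD2-01.log | grep -E "SecuritySAML2|SecurityAtn|SecurityAtz"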

Do NOT enable too many elements! I usually only enable saml2 when I know the issue is linked to the SSO, or atn+atz when it is more linked to how WebLogic manages the Authentication/Authorization. These are the three main scopes that you should work with when debugging an issue related to SSO/Authentication/Authorization.

Also, do not forget to disable the debug logs when you are done with your analysis. It is pretty obvious, but they can write a lot of information to the log files so…

Please also note that if you expand the ‘atn’ scope, for example, you will see that it actually contains ‘DebugSecurityAtn’, ‘DebugSecuritySAML2Atn’ and ‘DebugSecuritySAMLAtn’ -> these are attributes (not expandable). Therefore, enabling the whole ‘atn’ scope will already activate some of the SAML2 debug logs. So there is a notion of group (=scope) in the Admin Console to simplify the selection, which you will also find in the other solutions below.

 

II. config.xml

If you have already worked with WebLogic, you probably know the purpose of the config.xml file, and you probably also know that it can be a little bit dangerous to mess with this file (not really, if you know what you are doing). However, it is still possible to do some configuration there, so proceed at your own risk :).

By default, this file will NOT contain any information regarding the debug logging, unless you already enabled it at least once in the past, using the Admin Console for example. Since there is, by default, no debug logging information, you need to add it manually if you want to configure the logging through the config.xml file. Be aware that the location where you put the configuration matters!

[weblogic@weblogic_server_01 ~]$ cd $DOMAIN_HOME/config
[weblogic@weblogic_server_01 config]$
[weblogic@weblogic_server_01 config]$ grep -C1 -E "debug|server>" config.xml
  </log>
  <server>
    <name>AdminServer</name>
--
    <custom-trust-key-store-pass-phrase-encrypted>{AES}QARbQAV1ul1u3tJcsGAhdATQrNJe1YlVnQmY9d1jWQFx4aM=</custom-trust-key-store-pass-phrase-encrypted>
  </server>
  <server>
    <name>msD2-01</name>
--
    </single-sign-on-services>
  </server>
  <production-mode-enabled>true</production-mode-enabled>
[weblogic@weblogic_server_01 config]$

 

In the example above, there is no debug information in the config.xml file, so let’s add the SAML2, atn and atz logging. You can put these entries between the ‘<listen-port-enabled>’ and ‘<listen-address>’ parameters, for example. This is what it looks like afterwards:

[weblogic@weblogic_server_01 config]$ grep -C1 -E "debug|server>" config.xml
  </log>
  <server>
    <name>AdminServer</name>
--
    <custom-trust-key-store-pass-phrase-encrypted>{AES}QARbQAV1ul1u3tJcsGAhdATQrNJe1YlVnQmY9d1jWQFx4aM=</custom-trust-key-store-pass-phrase-encrypted>
  </server>
  <server>
    <name>msD2-01</name>
--
    <listen-port-enabled>false</listen-port-enabled>
    <server-debug>
      <debug-scope>
        <name>weblogic.security.atn</name>
        <enabled>true</enabled>
      </debug-scope>
      <debug-scope>
        <name>weblogic.security.atz</name>
        <enabled>true</enabled>
      </debug-scope>
      <debug-scope>
        <name>weblogic.security.saml2</name>
        <enabled>true</enabled>
      </debug-scope>
    </server-debug>
    <listen-address>10.10.10.10</listen-address>
--
    </single-sign-on-services>
  </server>
  <production-mode-enabled>true</production-mode-enabled>
[weblogic@weblogic_server_01 config]$

 

As you can see above, the debug-scope name that needs to be added in the config.xml file is simply the concatenation of the different elements you expanded in the WebLogic Administration Console separated by dots (E.g.: weblogic.security.saml2).

Above, I again only talked about scopes. If you want to set the attributes directly, that is also possible by adding, for example, ‘<debug-security-saml2-service>true</debug-security-saml2-service>’ or ‘<debug-security-atz>true</debug-security-atz>’, but let’s try to keep it simple… I’m only showing the minimum needed to debug issues, which is obviously not everything you can do.

The problem with configuring something in the config.xml: you need to restart the Managed Server for the change to be applied… Indeed, all changes are written to this file as soon as they are activated, but the file is only loaded at boot time, so a restart is needed. This makes this solution a little bit less effective when we are talking about debug logging, because we usually want to enable/disable it on the fly!

 

III. WLST

You all know what WLST is, so let’s just get to it. This solution is probably the best one if you want to script/automate things, because that is the whole purpose of the WebLogic Scripting Tool, yeah yeah I know, unbelievable ;)! Below, I will not use the config and key files to connect to the AdminServer, but keep in mind that these files contain the weblogic username and password in encrypted form, so if you want to automate something you will most probably need them to avoid clear-text passwords (see the sketch at the end of this section)! So first, the connection with WLST:

[weblogic@weblogic_server_01 config]$ source $MW_HOME/wlserver/server/bin/setWLSEnv.sh
CLASSPATH=$JAVA_HOME/lib/tools.jar:$MW_HOME/wlserver/modules/features/wlst.wls.classpath.jar:

PATH=$MW_HOME/wlserver/server/bin:$MW_HOME/wlserver/../oracle_common/modules/org.apache.ant_1.9.2/bin:$JAVA_HOME/jre/bin:$JAVA_HOME/bin:$JAVA_HOME/bin:/usr/lib64/qt-3.3/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/weblogic/bin:$MW_HOME/wlserver/../oracle_common/modules/org.apache.maven_3.2.5/bin

Your environment has been set.
[weblogic@weblogic_server_01 config]$
[weblogic@weblogic_server_01 config]$ java -Djava.security.egd=file:///dev/./urandom -Dwlst.offline.log=disable weblogic.WLST

Initializing WebLogic Scripting Tool (WLST) ...

Welcome to WebLogic Server Administration Scripting Shell

Type help() for help on available commands

wls:/offline> connect('weblogic','P4sSw0rd','t3s://weblogic_server_01:8443')
Connecting to t3s://weblogic_server_01:8443 with userid weblogic ...
<Nov 11, 2017 2:50:12 PM UTC> <Info> <Security> <BEA-090908> <Using the default WebLogic SSL Hostname Verifier implementation.>
Successfully connected to Admin Server "AdminServer" that belongs to domain "DOMAIN".

wls:/DOMAIN/serverConfig/> edit()
Location changed to edit tree.
This is a writable tree with DomainMBean as the root.
To make changes you will need to start an edit session via startEdit().
For more help, use help('edit').

wls:/DOMAIN/edit/> cd('Servers/msD2-01/ServerDebug/msD2-01')
wls:/DOMAIN/edit/Servers/msD2-01/ServerDebug/msD2-01> startEdit()
Starting an edit session ...
Started edit session, be sure to save and activate your changes once you are done.
wls:/DOMAIN/edit/Servers/msD2-01/ServerDebug/msD2-01 !> set('DebugSecurityAtn','true')
wls:/DOMAIN/edit/Servers/msD2-01/ServerDebug/msD2-01 !> set('DebugSecurityAtz','true')
wls:/DOMAIN/edit/Servers/msD2-01/ServerDebug/msD2-01 !> set('DebugSecuritySAML2Atn','true')
wls:/DOMAIN/edit/Servers/msD2-01/ServerDebug/msD2-01 !> set('DebugSecuritySAML2Service','true')
wls:/DOMAIN/edit/Servers/msD2-01/ServerDebug/msD2-01 !> set('DebugSecuritySAMLCredMap','true')
wls:/DOMAIN/edit/Servers/msD2-01/ServerDebug/msD2-01 !> save()
Saving all your changes ...
Saved all your changes successfully.
wls:/DOMAIN/edit/Servers/msD2-01/ServerDebug/msD2-01 !> activate()
Activating all your changes, this may take a while ...
The edit lock associated with this edit session is released once the activation is completed.
Activation completed
wls:/DOMAIN/edit/Servers/msD2-01/ServerDebug/msD2-01>

 

The commands above enable the attributes only. In the first section above (related to the Admin Console), I talked about groups (=scopes), and in that first solution I actually only enabled the groups themselves (scopes = ‘atn’, ‘atz’ and ‘saml2′). So if you enabled the scope ‘atn’ in the Admin Console, for example, you will probably see the attribute ‘DebugSecurityAtn’ (in the WLST session) set to false (unless you already changed it using the WLST commands above). If you want to enable the scopes directly, they are not at the same location. Here is an example for the SAML2 scope:

wls:/DOMAIN/edit/Servers/msD2-01/ServerDebug/msD2-01> cd('DebugScopes/weblogic.security.saml2')
wls:/DOMAIN/edit/Servers/msD2-01/ServerDebug/msD2-01/DebugScopes/weblogic.security.saml2> ls()

-r--   DynamicallyCreated                           false
-rw-   Enabled                                      false
-r--   Id                                           0
-rw-   Name                                         weblogic.security.saml2
-rw-   Notes                                        null
-rw-   Tags                                         null
-r--   Type                                         DebugScope

-r-x   addTag                                       Boolean : String(tag)
-r-x   freezeCurrentValue                           Void : String(attributeName)
-r-x   getInheritedProperties                       String[] : String[](propertyNames)
-r-x   isInherited                                  Boolean : String(propertyName)
-r-x   isSet                                        Boolean : String(propertyName)
-r-x   removeTag                                    Boolean : String(tag)
-r-x   restoreDefaultValue                          Void : String(attributeName)
-r-x   unSet                                        Void : String(propertyName)

wls:/DOMAIN/edit/Servers/msD2-01/ServerDebug/msD2-01/DebugScopes/weblogic.security.saml2> startEdit()
Starting an edit session ...
Started edit session, be sure to save and activate your changes once you are done.
wls:/DOMAIN/edit/Servers/msD2-01/ServerDebug/msD2-01/DebugScopes/weblogic.security.saml2 !> set('Enabled','true')
wls:/DOMAIN/edit/Servers/msD2-01/ServerDebug/msD2-01/DebugScopes/weblogic.security.saml2 !> ls()

-r--   DynamicallyCreated                           false
-rw-   Enabled                                      true
-r--   Id                                           0
-rw-   Name                                         weblogic.security.saml2
-rw-   Notes                                        null
-rw-   Tags                                         null
-r--   Type                                         DebugScope

-r-x   addTag                                       Boolean : String(tag)
-r-x   freezeCurrentValue                           Void : String(attributeName)
-r-x   getInheritedProperties                       String[] : String[](propertyNames)
-r-x   isInherited                                  Boolean : String(propertyName)
-r-x   isSet                                        Boolean : String(propertyName)
-r-x   removeTag                                    Boolean : String(tag)
-r-x   restoreDefaultValue                          Void : String(attributeName)
-r-x   unSet                                        Void : String(propertyName)

wls:/DOMAIN/edit/Servers/msD2-01/ServerDebug/msD2-01/DebugScopes/weblogic.security.saml2 !> save()
Saving all your changes ...
Saved all your changes successfully.
wls:/DOMAIN/edit/Servers/msD2-01/ServerDebug/msD2-01/DebugScopes/weblogic.security.saml2 !> activate()
Activating all your changes, this may take a while ...
The edit lock associated with this edit session is released once the activation is completed.
Activation completed
wls:/DOMAIN/edit/Servers/msD2-01/ServerDebug/msD2-01/DebugScopes/weblogic.security.saml2> disconnect()
Disconnected from weblogic server: AdminServer
wls:/offline> exit()

Exiting WebLogic Scripting Tool.

[weblogic@weblogic_server_01 config]$
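
As mentioned at the beginning of this section, for real automation you would rather rely on the config and key files instead of a clear-text password. Here is a minimal sketch of how they can be created and reused inside WLST (the file paths are just examples):

# store the encrypted credentials once, from a connected WLST session
connect('weblogic','P4sSw0rd','t3s://weblogic_server_01:8443')
storeUserConfig('/home/weblogic/.wlst/wls.config','/home/weblogic/.wlst/wls.key')
disconnect()

# later, connect without any clear-text password
connect(userConfigFile='/home/weblogic/.wlst/wls.config', keyFile='/home/weblogic/.wlst/wls.key', url='t3s://weblogic_server_01:8443')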

 

Pretty cool, right?

 

IV. Command line

The last way to do this is via the command line, but just like with the config.xml, this requires a restart of the concerned Managed Server. Here are some examples of command line arguments that can be added to the JAVA_OPTIONS to enable the debug logs at the attribute level (see the sketch after this list for how to add them):

  • -Dweblogic.debug.DebugSecurityAtn=true
  • -Dweblogic.debug.DebugSecurityAtz=true
  • -Dweblogic.debug.DebugSecuritySAML2Atn=true
  • -Dweblogic.debug.DebugSecuritySAML2Service=true

With all that, you should be well armed to face any issue!

 

 

The article WebLogic – SSO/Atn/Atz – How to enable debug logs appeared first on the dbi services blog.

DBVISIT from Oracle to Postgres


As I regularly work on Oracle and PostgreSQL, I decided to test the replication from Oracle to PostgreSQL using the Dbvisit Replicate tool.

Dbvisit Replicate does not use Oracle LogMiner or triggers but its own mining processes to get the changes when they are written to the redo logs. When a change appears in the redo log, an external file called PLOG is generated and transferred to the target.

[Diagram: Dbvisit Replicate architecture]

The architecture is quite easy to understand: you have a MINE process on the source server, looking at the redo logs for changed data, and an APPLY process which applies the SQL on the target database.

The configuration is easy to implement but must not be underestimated :-)

My configuration is the following:

  • Oracle server named cloud13c, with PSI database version 12.2.0.1
  • Postgres server named pg_essentials_p1 with Postgres version 9.6

First, we create a user in the Postgres database:

postgres@pg_essentials_p1:/home/postgres/ [PG1] createuser -d -e -E -l -P -r -s dbvrep_admin
Enter password for new role: 
Enter it again: 
CREATE ROLE dbvrep_admin ENCRYPTED 
PASSWORD 'md5e3c4e8f1b4f8e388eef4fe890d6bdb36' SUPERUSER CREATEDB 
CREATEROLE INHERIT LOGIN;

We edit the configuration file /u02/pgdata/PG1/postgresql.conf in order to allow non-localhost connections:

postgres@pg1:/u02/pgdata/PG1/ [PG1] cp postgresql.conf postgresql.conf.save
postgres@pg1:/u02/pgdata/PG1/ [PG1] sed -i "s/^#\(listen_addresses = '\)localhost'/\1*'\t/" postgresql.conf

We also allow remote client connections in pg_hba.conf:

postgres@pg1:/u02/pgdata/PG1/ [PG1] cp pg_hba.conf pg_hba.conf.save
postgres@p1:/u02/pgdata/PG1/ [PG1] echo -e "host\tall\t\tall\t\t0.0.0.0/0\t\tmd5" >> pg_hba.conf

cat pg_hba.conf:

 # TYPE  DATABASE        USER            ADDRESS                 METHOD
# "local" is for Unix domain socket connections only
local   all             all                                     md5
# IPv4 local connections:
host    all             all             127.0.0.1/32            md5
host    all             all             192.168.22.201/24       md5
 # IPv6 local connections:
host    all             all             ::1/128                 md5
 host    all             barman          192.168.1.101/24       md5
host    replication     barman_streaming 192.168.1.101/24       md5
# Allow replication connections from localhost, by a user with the
# replication privilege.
local   replication     postgres                                md5
host    replication     postgres        127.0.0.1/32            md5
host    replication     postgres        ::1/128                 md5
host all       all       0.0.0.0/0      md5

We have to restart the postgres server:

postgres@pg1:/u02/pgdata/PG1/ [PG1] pgrestart
waiting for server to shut down.... done
server stopped
server starting
postgres@pg1:/u02/pgdata/PG1/ [PG1] 2017-07-17 13:52:52.350 CEST
 - 1 - 3106 - 
 - @ LOG:  redirecting log output to logging collector process
2017-07-17 13:52:52.350 CEST - 2 - 3106 - 
 - @ HINT:  Future log output will appear in directory 
"/u01/app/postgres/admin/PG1/pg_log".
 
postgres@pg_essentials_p1:/u02/pgdata/PG1/ [PG1] 
postgres@pg_essentials_p1:/u02/pgdata/PG1/ [PG1] alias | grep pgrestart
alias pgrestart='pg_ctl -D ${PGDATA} restart -m fast'

Then we install Dbvisit Replicate:

We download dbvisit_replicate-2.9.00-el5.x86_64.rpm and install it:

[root@localhost software]# rpm -ivh dbvisit_replicate-2.9.00-el5.x86_64.rpm 
Preparing...                       ################################# [100%]
Updating / installing...
   1:dbvisit_replicate-2.9.00-el5  ################################# [100%]

To make it work properly, I had to modify the sqlnet.ora file as follows in order to avoid the following error message:

ERR-11: Could not connect as dbvrep to database PSI, 
error is ORA-24327: need explicit attach
before authenticating a user (DBD ERROR: OCISessionBegin)
sqlnet.ora:
SQLNET.SQLNET_ALLOWED_LOGON_VERSION=11
SQLNET.ALLOWED_LOGON_VERSION_CLIENT =11
SQLNET.ALLOWED_LOGON_VERSION_SERVER =11

Before running dbvrep, be sure you can connect with psql from the Oracle server to the PostgreSQL server! I needed to install a PostgreSQL client on the Oracle host and to define the PATH properly.
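
A quick way to validate this prerequisite from the Oracle host (just a sketch using the user and host of this setup):

psql -h pg_essentials_p1 -U dbvrep_admin -d postgres -c "select version();"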

Finally, by running dbvrep on the Oracle server, you start the setup wizard and enter your configuration settings; the menu is quite easy to understand. The setup wizard consists of 4 steps:

– Step 1: describe databases

– Step 2: Replicate Pairs

– Step 3: Replicated tables

– Step 4: Process Configuration

oracle@localhost:/home/oracle/ora2pg/ [PSI] dbvrep
Initializing......done
Dbvisit Replicate version 2.9.02
Copyright (C) Dbvisit Software Limited. All rights reserved.
No DDC file loaded.
Run "setup wizard" to start the configuration wizard or try "help" 
to see all commands available.
dbvrep> setup wizard                                                                              
This wizard configures Dbvisit Replicate.
 
The setup wizard creates configuration scripts, which need to be run 
after the wizard ends. Nochanges to the databases are made before that.
 
The progress is saved every time a list of databases, replications, etc. 
is shown. It will bere-read if wizard is restarted and the same DDC 
name and script path is selected.
Run the wizard now? [Yes]                                                   
Accept end-user license agreement? (View/Yes/No) [No] yes                  
Before starting the actual configuration, some basic information is needed. The DDC name and
script path determines where all files created by the wizard go 
(and where to reread them ifwizard is rerun) and the license key 
determines which options are available for this
configuration.
(DDC_NAME) - Please enter a name for this replication: [] ora2pg                                  
(LICENSE_KEY) - Please enter your license key: [(trial)]                                          
Which Replicate edition do you want to trial (LTD/XTD/MAX): [MAX]                                 
(SETUP_SCRIPT_PATH) - Please enter a directory for location of 
configuration scripts on this     
machine: [/home/oracle/Documents/ora2pg] /home/oracle/ora2pg                                 
 
Network configuration files were detected on this system in these locations:
/u00/app/oracle/network/admin
/u00/app/oracle/product/12.2.0.1/dbhome_1/network/admin
(TNS_ADMIN) - Please enter TNS configuration directory for this machine:                          [/u00/app/oracle/network/admin]                                          
Read 2 described databases from previous wizard run.
 
Step 1 - Describe databases
========================================
The first step is to describe databases used in the replication. 
There are usually two of them
(source and target).
 
Following databases are now configured:
1: Oracle PSI, SYS/***, SYSTEM/***, dbvrep/***, USERS/TEMP, dbvrep/, 
ASM:No, TZ: +02:00
2: Postgres postgres, dbvrep_admin/***, dbvrep_admin/***, dbvrep/***, 
/, dbvrep/, ASM:n/a, TZ: 
Enter the number of the database to modify it, or "add", or "done": [done]                        
Read 1 replication pairs from previous wizard run.
 
Step 2 - Replication pairs
========================================
The second step is to set source and targets for each replication pair.
 
Enter number of replication pair to modify it, or "add", or "done": [done] 1                  
Do you want to "edit" the replication pair or "delete" it? [edit] edit                        
Let's configure the replication pair, selecting source and target.
Following databases are described:
1: PSI#DBVREP (Oracle)
2: DBNAME=POSTGRES;HOST=PG1#DBVREP (Postgres) 
(cannot be source: not an Oracle database)
Select source database: [1]                                                                   
Select target database: [2]                                                                   
Will limited DDL replication be enabled? (Yes/No) [Yes]                                       
Use fetcher to offload the mining to a different server? (Yes/No) [No]                        
Should where clauses (and Event Streaming) include all columns, 
not just changed and PK?      (Yes/No) [No]                                                                            
Would you like to encrypt the data across the network? (Yes/No) [No]                          
Would you like to compress the data across the network? (Yes/No) [No]                         
How long do you want to set the network timeouts. 
Recommended range between 60-300 seconds    [60]                                                                                       
Lock and copy the data initially one-by-one or at a single SCN?
one-by-one : Lock tables one by one and capture SCN
single-scn : One SCN for all tables
ddl-only   : Only DDL script for target objects
resetlogs  : Use SCN from last resetlogs operation
(standby activation, rman incomplete
recovery)
no-lock    : Do not lock tables. Captures previous SCN of oldest active 
transaction. Requires pre-requisite running of pre-all.sh script            (one-by-one/single-scn/ddl-only/resetlogs/no-lock) [single-scn] 
 
What data instantiation script to create?
ddl_file       : DDL file created (APPLY.sql)
ddl_run        : DDL is automatically executed on target
load           : All replicated data is created and loaded automatically
none                                                                                          (ddl_file/ddl_run/load/none) [ddl_run] ddl-file
 
 
Following replication pairs are now configured:
1: PSI (Oracle) ==> postgres (Postgres), DDL: Yes, fetcher: No, 
process suffix: (no suffix),
compression: No, encryption: No, network timeout: 60, prepare type: 
single-scn,:
ddl-run
Enter number of replication pair to modify it, or "add", or "done": [done]                        
Read 1 replication pairs from previous wizard run.
 
Step 3 - Replicated tables
========================================
The third step is to choose the schemas and tables to be replicated. 
If the databases arereachable, the tables are checked for existence, 
datatype support, etc., schemas are queried for tables. 
Note that all messages are merely hints/warnings and may be ignored 
if issues are rectified before the scripts are actually executed.
 
Following tables are defined for replication pairs:
1: PSI (Oracle) ==> postgres (Postgres), DDL: Yes, suffix: (no suffix), 
prepare: single-scn
  PSI(tables)
Enter number of replication pair to modify it, or "done": [done]                                  
Read 2 replication pairs from previous wizard run.
 
Step 4 - Process configuration
========================================
The fourth step is to configure the replication processes for each 
replication.
 
Following processes are defined:
1: MINE on PSI
  Host: cloud13c, SMTP: No, SNMP: No
2: APPLY on postgres
  Host: pg1, SMTP: No, SNMP: No
Enter number of process to modify it, or "done": [done] 1                                         
Fully qualified name of the server for the process (usually co-located 
with the database, unless  mine is offloaded using fetcher): [cloud13c]                                                     
Server type (Windows/Linux/Unix): [Linux]                                                         
Enable email notifications about problems? (Yes/No) [No]                                          
Enable SNMP traps/notifications about problems? (Yes/No) [No]                                     
Directory with DDC file and default where to create log files etc. 
(recommended: same as global   setting, if possible)? [/home/oracle/ora2pg]                                                    
Following settings were pre-filled with defaults or your reloaded settings:
----------------------------------------
[MINE_REMOTE_INTERFACE]: Network remote interface: cloud13c:7901 
[MINE_DATABASE]: Database TNS: PSI 
[TNS_ADMIN]: tnsnames.ora path: /u00/app/oracle/network/admin 
[MINE_PLOG]: Filemask for generated plogs: /home/oracle/ora2pg/mine/%S.%E 
(%S is sequence, %T thread, %F original filename (stripped extension), 
%P process type, %N process name, %E default extension)
[LOG_FILE]: General log file: /home/oracle/ora2pg/log/dbvrep_%N_%D.%E 
[LOG_FILE_TRACE]: Error traces: 
/home/oracle/ora2pg/log/trace/dbvrep_%N_%D_%I_%U.%E 
 
Checking that these settings are valid...
Do you want to change any of the settings? [No]                                                   
Following processes are defined:
1: MINE on PSI
  Host: cloud13c, SMTP: No, SNMP: No
2: APPLY on postgres
  Host: pg1, SMTP: No, SNMP: No
Enter number of process to modify it, or "done": [done] 2                                         
Fully qualified name of the server for the process (usually co-located 
with the database, unless  mine is offloaded using fetcher): [pg1]                                                          
Server type (Windows/Linux/Unix): [Linux]                                                         
Enable email notifications about problems? (Yes/No) [No]                                          
Enable SNMP traps/notifications about problems? (Yes/No) [No]                                     
Directory with DDC file and default where to create log files etc. 
(recommended: same as global   setting, if possible)? [/home/oracle/ora2pg]                                                    
Following settings were pre-filled with defaults or your reloaded settings:
----------------------------------------
[APPLY_REMOTE_INTERFACE]: Network remote interface: pg1:7902 
[APPLY_DATABASE]: Database Postgres connection string: dbname=postgres;
host=pg1 
[TNS_ADMIN]: tnsnames.ora path: /u00/app/oracle/network/admin 
[APPLY_SCHEMA]: Dbvisit Replicate database (schema): dbvrep 
[APPLY_STAGING_DIR]: Directory for received plogs: /home/oracle/ora2pg/apply 
[LOG_FILE]: General log file: /home/oracle/ora2pg/log/dbvrep_%N_%D.%E 
[LOG_FILE_TRACE]: Error traces: 
/home/oracle/ora2pg/log/trace/dbvrep_%N_%D_%I_%U.%E 
 
Checking that these settings are valid...
Do you want to change any of the settings? [No]                                                   
Following processes are defined:
1: MINE on PSI
  Host: cloud13c, SMTP: No, SNMP: No
2: APPLY on postgres
  Host: pg1, SMTP: No, SNMP: No
Enter number of process to modify it, or "done": [done]                                           
Created file /home/oracle/ora2pg/ora2pg-APPLY.ddc.
Created file /home/oracle/ora2pg/ora2pg-MINE.ddc.
Created file /home/oracle/ora2pg/config/ora2pg-setup.dbvrep.
Created file /home/oracle/ora2pg/config/ora2pg-dbsetup_PSI_DBVREP.sql.
Created file /home/oracle/ora2pg/config/ora2pg-dbsetup_DBNAME_POSTGRES_HOST_PG1_DBVREP.sql.
Created file /home/oracle/ora2pg/config/ora2pg-grants_PSI_DBVREP.sql.
Created file /home/oracle/ora2pg/config/ora2pg-grants_DBNAME_POSTGRES_HOST_PG1_DBVREP.sql.
Created file /home/oracle/ora2pg/config/ora2pg-onetime.ddc.
Created file /home/oracle/ora2pg/start-console.sh.
Created file /home/oracle/ora2pg/ora2pg-run-cloud13c.sh.
Created file /home/oracle/ora2pg/scripts/ora2pg-cloud13c-start-MINE.sh.
Created file /home/oracle/ora2pg/scripts/ora2pg-cloud13c-stop-MINE.sh.
Created file /home/oracle/ora2pg/scripts/ora2pg-cloud13c-dbvrep-MINE.sh.
Created file /home/oracle/ora2pg/scripts/systemd-dbvrep-MINE_ora2pg.service.
Created file /home/oracle/ora2pg/scripts/upstart-dbvrep-MINE_ora2pg.conf.
Created file /home/oracle/ora2pg/ora2pg-run-pg1.sh.
Created file /home/oracle/ora2pg/scripts/ora2pg-pg1-start-APPLY.sh.
Created file /home/oracle/ora2pg/scripts/ora2pg-pg1-stop-APPLY.sh.
Created file /home/oracle/ora2pg/scripts/ora2pg-pg1-dbvrep-APPLY.sh.
Created file /home/oracle/ora2pg/scripts/systemd-dbvrep-APPLY_ora2pg.service.
Created file /home/oracle/ora2pg/scripts/upstart-dbvrep-APPLY_ora2pg.conf.
Created file /home/oracle/ora2pg/Nextsteps.txt.
Created file /home/oracle/ora2pg/ora2pg-all.sh.
============================================================================
Dbvisit Replicate wizard completed
Script /home/oracle/ora2pg/ora2pg-all.sh created. 
This runs all the above created scripts. Please exit out of dbvrep, 
review and run script as current user to setup and start Dbvisit Replicate.
============================================================================
Optionally, the script can be invoked now by this wizard.
Run this script now? (Yes/No) [No]                                          dbvrep> exit

As asked at the end of the setup wizard, we run the ora2pg-all.sh script:

oracle@localhost:/home/oracle/ora2pg/ [PSI] . ora2pg-all.sh 
Setting up Dbvisit Replicate configuration
Configure database PSI...
This check fails if the DBID is not the expected one...
Ok, check passed.
Configure database dbname=postgres
Object grants for database PSI...
Object grants for database dbname=postgres
Setting up the configuration
Initializing......done
DDC loaded from database (0 variables).
Dbvisit Replicate version 2.9.02
Copyright (C) Dbvisit Software Limited. All rights reserved.
DDC file /home/oracle/ora2pg/config/ora2pg-onetime.ddc loaded.
MINE: Cannot determine Dbvisit Replicate dictionary version. (no
dictionary exists)
APPLY: Cannot determine Dbvisit Replicate dictionary version. (no
dictionary exists)
dbvrep> #clear the no-DDC-DB-available warning
dbvrep> process clear previous warnings
dbvrep> set ON_WARNING SKIP
Variable ON_WARNING set to SKIP for process *.
dbvrep> set ON_ERROR EXIT
Variable ON_ERROR set to EXIT for process *.
dbvrep> 
dbvrep> # Configuring default processes
dbvrep> choose process MINE
Process type MINE set to: MINE.
dbvrep> choose process APPLY
Process type APPLY set to: APPLY.
dbvrep> PROCESS SWITCH_REDOLOG
Redo log switch requested.
dbvrep> PROCESS SETUP MINE DROP DICTIONARY
0 dictionary objects dropped.
dbvrep> PROCESS SETUP MINE CREATE DICTIONARY
dbvrep> PROCESS SETUP MINE LOAD DICTIONARY
Oldest active transaction SCN: 2054212 (no active transaction)
Supplemental logging on database set.
dbvrep> PROCESS SETUP APPLY DROP DICTIONARY
0 dictionary objects dropped.
dbvrep> PROCESS SETUP APPLY CREATE DICTIONARY
dbvrep> PROCESS SETUP APPLY LOAD DICTIONARY
dbvrep> PROCESS SETUP PAIR MINE AND APPLY
Applier SCN set (start=2054228, current=2054228).
dbvrep> SET APPLY.INSTANTIATE_SCN NOW
Variable INSTANTIATE_SCN set to NOW for process APPLY.
dbvrep> SET MINE._PREPARE_SUPLOG_TYPE PK
Variable _PREPARE_SUPLOG_TYPE set to PK for process MINE.
dbvrep> EXCLUDE CREATE TABLE %.DBMS_TABCOMP_TEMP_UNCMP #Ignore tables
created by Compression Advisor
Exclude rule created.
dbvrep> EXCLUDE CREATE TABLE %.DBMS_TABCOMP_TEMP_CMP #Ignore tables
created by Compression Advisor
Exclude rule created.
dbvrep> EXCLUDE CREATE TABLE %.SCHEDULER$_% #Ignore tables created by
Oracle scheduler (also used by schema/full expdp/impdp)
Exclude rule created.
dbvrep> EXCLUDE CREATE TABLE %.CMP1$% #Ignore tables created by
Compression Advisor since 11.2.0.4
Exclude rule created.
dbvrep> EXCLUDE CREATE TABLE %.CMP2$% #Ignore tables created by
Compression Advisor since 11.2.0.4
Exclude rule created.
dbvrep> EXCLUDE CREATE TABLE %.CMP3$% #Ignore tables created by
Compression Advisor since 11.2.0.4
Exclude rule created.
dbvrep> EXCLUDE CREATE TABLE %.CMP4$% #Ignore tables created by
Compression Advisor since 11.2.0.4
Exclude rule created.
dbvrep> memory_set IGNORE_APPLY_DDL_DIFFERENCES Yes
Variable IGNORE_APPLY_DDL_DIFFERENCES set to YES for process *.
dbvrep> SET PREPARE_SCHEMA_EXCEPTIONS none
Variable PREPARE_SCHEMA_EXCEPTIONS set to none for process *.
dbvrep> PROCESS SUPPLEMENTAL LOGGING SCHEMA "PSI" ENABLE PRIMARY KEY
dbvrep> PROCESS SWITCH_REDOLOG
Redo log switch requested.
dbvrep> PROCESS WAIT_SCN_FLIP
Waited 1 seconds until scn_to_timestamp changed.
dbvrep> #single-scn instantiation: lock all tables and schemas
dbvrep> PROCESS LOCK SCHEMAS "PSI"
Locking all schemas.
...locked 2 of 2 tables from PSI schema.
Lock done.
dbvrep> #single-scn instantiation: unlock all tables and schemas, but
keep the SCN
dbvrep> PROCESS LOCK RELEASE LOCKS
Engine locks released.
dbvrep> 
dbvrep> #prepare the tables (we use OFFLINE as neither MINE nor APPLY
is running; with OFFLINE we won't wait on network timeout)
dbvrep> PREPARE OFFLINE SCHEMA "PSI"
Table PSI.EMPLOYE instantiated at SCN 2056800
Table PSI.OFFICE instantiated at SCN 2056800
dbvrep> 
dbvrep> #single-scn instantiation: unlock all tables and schemas,
forget the SCN (so it does not affect any further PREPARE statements)
dbvrep> PROCESS LOCK CLEAR SCN
dbvrep> PROCESS SWITCH_REDOLOG
Redo log switch requested.
dbvrep> #prepare script for instantiation
dbvrep> PROCESS PREPARE_DP WRITE DDL_FILE FILE
/home/oracle/ora2pg/APPLY.sql USERID SYSTEM/manager@PSI
File /home/oracle/ora2pg/APPLY.sql has been written successfully.
Created DDL script /home/oracle/ora2pg/APPLY.sql.
dbvrep> create ddcdb from ddcfile
DDC loaded into database (430 variables).
dbvrep> load ddcdb
DDC loaded from database (430 variables).
dbvrep> set ON_WARNING SKIP
Variable ON_WARNING set to SKIP for process *.
dbvrep> set ON_ERROR SKIP
Variable ON_ERROR set to SKIP for process *.
OK-0: Completed successfully.
WARN-1850: No DDC DB available, dictionary table does not exist.
These steps are required after the ora2pg-all.sh script runs:
 
1) Create the necessary directory(ies) on the servers:
cloud13c: /home/oracle/ora2pg
pg1: /home/oracle/ora2pg
 
2) Copy the DDC files to the server(s) where the processes will run:
pg1: /home/oracle/ora2pg/ora2pg-APPLY.ddc
cloud13c: /home/oracle/ora2pg/ora2pg-MINE.ddc
 
Ensure that the parameter TNS_ADMIN (in the ddc file) is pointing to the correct TNS_ADMIN path on each of the servers.
 
3) Review that path to dbvrep executable is correct in the run scripts:
/home/oracle/ora2pg/ora2pg-run-cloud13c.sh
/home/oracle/ora2pg/ora2pg-run-pg1.sh
 
4) Copy the run script to the server(s) where the processes will run:
cloud13c: /home/oracle/ora2pg/ora2pg-run-cloud13c.sh
pg1: /home/oracle/ora2pg/ora2pg-run-pg1.sh
 
5) Ensure firewall is open for listen interfaces 0.0.0.0:7902, 0.0.0.0:7901 used by the processes.
 
6) Make sure the data on apply are in sync as of time when setup was run.
Scripts for Data Pump/export/DDL were created as requested:
 
Create referenced database links (if any) before running the scripts.
/home/oracle/ora2pg/APPLY.sql
 
7) Start the replication processes on all servers:
cloud13c: /home/oracle/ora2pg/ora2pg-run-cloud13c.sh
pg1: /home/oracle/ora2pg/ora2pg-run-pg1.sh
 
8) Start the console to monitor the progress:
/home/oracle/ora2pg/start-console.sh

As explained, you have to copy two files to the postgres server: /home/oracle/ora2pg/ora2pg-APPLY.ddc and /home/oracle/ora2pg/ora2pg-run-pg1.sh
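
For example, this copy could be done with scp from the Oracle server (just a sketch; the remote user and the write permissions on /home/oracle/ora2pg on pg1 are assumptions):

scp /home/oracle/ora2pg/ora2pg-APPLY.ddc \
    /home/oracle/ora2pg/ora2pg-run-pg1.sh \
    oracle@pg1:/home/oracle/ora2pg/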

As I chose the ddl_file option, we first have to create the tables on the postgres server. In order to do this we can use the APPLY.sql file from the Oracle server.

The next step consists in running the MINE process on the Oracle server:

oracle@localhost:/home/oracle/ora2pg/ [PSI] . ora2pg-run-cloud13c.sh 
Initializing......done
DDC loaded from database (430 variables).
Dbvisit Replicate version 2.9.02
Copyright (C) Dbvisit Software Limited. All rights reserved.
DDC file /home/oracle/ora2pg/ora2pg-MINE.ddc loaded.
Starting process MINE...started

And we launch the APPLY process on the postgres server:

postgres@pg_essentials_p1:/home/oracle/ora2pg/ [PG1] . ora2pg-run-pg1.sh 
Initializing......done
DDC loaded from database (431 variables).
Dbvisit Replicate version 2.9.02
Copyright (C) Dbvisit Software Limited. All rights reserved.
DDC file /home/oracle/ora2pg/ora2pg-APPLY.ddc loaded.
Starting process APPLY...Created directory /home/oracle/ora2pg/ddc_backup
Created directory /home/oracle/ora2pg/log/
Created directory /home/oracle/ora2pg/log/trace/
Created directory /home/oracle/ora2pg/apply
started

Initially I had two tables in my PSI Oracle database belonging to the psi schema: EMPLOYE and OFFICE. I used the APPLY.sql script to create the tables in the postgres environment.

To visualize the activity we run start-console.sh on the Oracle server:

oracle@localhost:/home/oracle/ora2pg/ [PSI] . start-console.sh 
Initializing......done
DDC loaded from database (431 variables).
Dbvisit Replicate version 2.9.02
Copyright (C) Dbvisit Software Limited. All rights reserved.
 
| Dbvisit Replicate 2.9.02(MAX edition) - Evaluation License expires in 30 days
MINE is running. Currently at plog 120 and SCN 2060066 (11/07/2017 15:27:57).
APPLY is running. Currently at plog 120 and SCN 2060021 (11/07/2017 15:27:45).
Progress of replication ora2pg:MINE->APPLY: total/this execution
-------------------------------------------------------------------------------------------------
PSI.EMPLOYE/psi.employe:      100%  Mine:1/1             Unrecov:0/0         Applied:1/1         Conflicts:0/0       Last:07/11/2017 15:20:06/OK
PSI.OFFICE/psi.office:        100%  Mine:1/1             Unrecov:0/0         Applied:1/1         Conflicts:0/0       Last:07/11/2017 15:21:36/OK
-------------------------------------------------------------------------------------------------
2 tables listed.

And we can validate that each insert in the employe or office table is replicated on the postgres server:

From the postgres database:

(postgres@[local]:5432) [postgres] > select * from psi.employe;
 name  | salary 
-------+--------
 Larry |  10000
 Bill  |   2000
(2 rows)

From the Oracle server:

SQL> insert into employe values ('John', 50000);
 
1 row created.
 
SQL> commit;
 
Commit complete.

The console gives us the correct information:

/ Dbvisit Replicate 2.9.02(MAX edition) - Evaluation License expires in 30 days
MINE is running. Currently at plog 120 and SCN 2075526 (11/07/2017 16:44:17).
APPLY is running. Currently at plog 120 and SCN 2075494 (11/07/2017 16:44:08).
Progress of replication ora2pg:MINE->APPLY: total/this execution
-------------------------------------------------------------------------------------------------
PSI.EMPLOYE/psi.employe:      100%  Mine:3/3             Unrecov:0/0         Applied:3/3         Conflicts:0/0       Last:07/11/2017 16:18:41/OK
PSI.OFFICE/psi.office:        100%  Mine:3/3             Unrecov:0/0         Applied:3/3         Conflicts:0/0       Last:07/11/2017 15:37:02/OK
-------------------------------------------------------------------------------------------------
2 tables listed.

And the result is applied on the postgres database:
(postgres@[local]:5432) [postgres] > select * from psi.employe;
 name  | salary 
-------+--------
 Larry |  10000
 Bill  |   2000
 John  |  50000
(3 rows)
 
As we previously chose the single-scn and ddl_file options, we had to run the APPLY.sql script from the Oracle server in order to create the tables on the postgres side. Alternatively, in Step 2 of the configuration wizard you can choose the load option (all replicated data is created and loaded automatically):
Lock and copy the data initially one-by-one or at a single SCN?
one-by-one : Lock tables one by one and capture SCN
single-scn : One SCN for all tables
ddl-only   : Only DDL script for target objects
resetlogs  : Use SCN from last resetlogs operation (standby activation, rman incomplete
recovery)
no-lock    : Do not lock tables. Captures previous SCN of oldest active transaction. Requires
pre-requisite running of pre-all.sh script                                                    (one-by-one/single-scn/ddl-only/resetlogs/no-lock) [single-scn] 
 
What data instantiation script to create?
ddl_file       : DDL file created (APPLY.sql)
ddl_run        : DDL is automatically executed on target
load           : All replicated data is created and loaded automatically
none                                                                                          (ddl_file/ddl_run/load/none) [ddl_run] load
Do you want to (re-)create the tables on target or keep them (they are already created)?      (create/keep) [keep] create

In this case you can see that each Oracle table is replicated to the Postgres server.

From the oracle server:

SQL> create table salary (name varchar2(10)); 
 
Table created.
 
SQL> insert into salary values ('Larry');
 
1 row created.
 
SQL> commit;
 
Commit complete.

The Dbvisit console displays the correct information:

\ Dbvisit Replicate 2.9.02(MAX edition) - Evaluation License expires in 30 days
MINE is running. Currently at plog 135 and SCN 2246259 (11/27/2017 14:44:24).
APPLY is running. Currently at plog 135 and SCN 2246237 (11/27/2017 14:44:18).
Progress of replication replic:MINE->APPLY: total/this execution
--------------------------------------------------------------------------------------
REP.SALARY:                   100%  Mine:1/1             Unrecov:0/0         Applied:1/1         Conflicts:0/0       Last:27/11/2017 14:01:25/OK
---------------------------------------------------------------------------------------------
1 tables listed.

From the postgres server:

(postgres@[local]:5432) [postgres] > select * from rep.salary;
 name  
-------
 Larry
(1 row)

The plog files generated on the postgres server contain the strings we need.

The plog files are located in the directory /home/oracle/replic/apply:

-bash-4.2$ ls
122.plog.gz  124.plog.gz  126.plog  128.plog.gz  130.plog.gz  132.plog.gz  134.plog
123.plog.gz  125.plog.gz  127.plog  129.plog.gz  131.plog.gz  133.plog.gz  135.plog
-bash-4.2$ strings 135.plog | grep -l larry
-bash-4.2$ strings 135.plog | grep -i larry
Larry
-bash-4.2$ strings 135.plog | grep -i salary
SALARY
create table salary (name varchar2(10))
SALARY
SALARY

Despite some problems at the beginning of my tests, the replication from Oracle to PostgreSQL is working fine and fast. There are many possibilities with Dbvisit Replicate that I will try to test in the following weeks.

 

Cet article DBVISIT from Oracle to Postgres est apparu en premier sur Blog dbi services.

Are statistics immediately available after creating a table or an index in PostgreSQL?


While giving the last PostgreSQL DBA Essentials workshop this question came up: when we create a table or an index, are the statistics available automatically? To be more precise: when we create and load a table in one step and create an index on that table afterwards, do we have the statistics available by default or do we need to wait for autovacuum to kick in or analyze manually? Let’s see …

First of all, let’s disable autovacuum so it does not kick off analyze in the background:

postgres=# \! ps -ef | grep autov | grep -v grep
postgres  1641  1635  0 07:08 ?        00:00:00 postgres: MY_CLUSTER: autovacuum launcher process   
postgres=# alter system set autovacuum=off;
ALTER SYSTEM
postgres=# select * from pg_reload_conf();
 pg_reload_conf 
----------------
 t
(1 row)

postgres=# \! ps -ef | grep autov | grep -v grep

Create and populate the table:

postgres=# \! cat a.sql
drop table if exists t;
create table t
as select a.*, md5(a::varchar) from generate_series(1,5000000) a;
postgres=# \i a.sql
psql:a.sql:1: NOTICE:  table "t" does not exist, skipping
DROP TABLE
SELECT 5000000

Create an index:

postgres=# create index i1 on t(a);
CREATE INDEX
postgres=# \d+ t
                                     Table "public.t"
 Column |  Type   | Collation | Nullable | Default | Storage  | Stats target | Description 
--------+---------+-----------+----------+---------+----------+--------------+-------------
 a      | integer |           |          |         | plain    |              | 
 md5    | text    |           |          |         | extended |              | 
Indexes:
    "i1" btree (a)

Do we have statistics already? Let’s check:

postgres=# select stanullfrac,stawidth,stadistinct from pg_statistic where starelid = 't'::regclass;
 stanullfrac | stawidth | stadistinct 
-------------+----------+-------------
(0 rows)

No, at least not for the table. What about the index?

postgres=# select stanullfrac,stawidth,stadistinct from pg_statistic where starelid = 'i1'::regclass;
 stanullfrac | stawidth | stadistinct 
-------------+----------+-------------
(0 rows)

No. Let’s analyze:

postgres=# analyze t;
ANALYZE
postgres=# analyze i1;
WARNING:  skipping "i1" --- cannot analyze non-tables or special system tables
ANALYZE

Apparently we can not analyze an index. What do we see now?

postgres=# select stanullfrac,stawidth,stadistinct from pg_statistic where starelid = 't'::regclass;
 stanullfrac | stawidth | stadistinct 
-------------+----------+-------------
           0 |        4 |          -1
           0 |       33 |          -1
(2 rows)

postgres=# select stanullfrac,stawidth,stadistinct from pg_statistic where starelid = 'i1'::regclass;
 stanullfrac | stawidth | stadistinct 
-------------+----------+-------------
(0 rows)

We do see statistics for the table but not for the index. The reason is that “analyze” works on tables, but not on indexes. For regular indexes there will be nothing in pg_statistic because that information would be redundant with the underlying table columns. But there will be statistics for function-based indexes:

postgres=# create index i2 on t(lower(a::text));
CREATE INDEX
postgres=# select stanullfrac,stawidth,stadistinct from pg_statistic where starelid = 'i2'::regclass;
 stanullfrac | stawidth | stadistinct 
-------------+----------+-------------
(0 rows)

postgres=# analyze t;
ANALYZE
postgres=# select stanullfrac,stawidth,stadistinct from pg_statistic where starelid = 'i2'::regclass;
 stanullfrac | stawidth | stadistinct 
-------------+----------+-------------
           0 |       10 |          -1
(1 row)

So, when autovacuum is off we do not get statistics when we do not kick off a manual analyze (which is not a surprise). What happens when autovacuum is on?

postgres=# alter system set autovacuum=on;
ALTER SYSTEM
postgres=# select * from pg_reload_conf();
 pg_reload_conf 
----------------
 t
(1 row)
postgres=# \i a.sql
DROP TABLE
SELECT 5000000
postgres=# select stanullfrac,stawidth,stadistinct from pg_statistic where starelid = 't'::regclass;
 stanullfrac | stawidth | stadistinct 
-------------+----------+-------------
(0 rows)

Nope, same picture here. But a few seconds later:

postgres=# select stanullfrac,stawidth,stadistinct from pg_statistic where starelid = 't'::regclass;
 stanullfrac | stawidth | stadistinct 
-------------+----------+-------------
           0 |        4 |          -1
           0 |       33 |          -1
(2 rows)

… statistics are there. Conclusion: when you require current statistics directly after loading a table you’d better kick off a manual analyze right after. Otherwise autovacuum will take care of that, but not immediately.
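
A minimal sketch of that, reusing the same kind of demo table as above (the table name t2 is just an example):

postgres=# create table t2 as select a.*, md5(a::varchar) from generate_series(1,100000) a;
SELECT 100000
postgres=# analyze t2;
ANALYZE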

 

Cet article Are statistics immediately available after creating a table or an index in PostgreSQL? est apparu en premier sur Blog dbi services.

Dataguard: QUIZZ on Snapshot Standby


In the Oracle documentation we can find this about the snapshot standby: “A snapshot standby database is a fully updatable standby database that provides disaster recovery and data protection benefits that are similar to those of a physical standby database.”

The concept of a snapshot standby is that we can temporarily convert a physical standby to an updatable database for different purposes and then convert it back to a physical standby. During the time that the database is a snapshot standby, it can be used as a normal read/write database. Afterwards, flashback technology combined with the archived logfiles is used to convert the snapshot back to a physical standby.
In this blog I have tested some common tasks on a snapshot standby and I describe the results below.
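
For reference, outside of a Data Guard Broker configuration the same conversions can be done directly in SQL*Plus (just a sketch; in this blog the Broker commands are used instead). The first statement is run on the mounted physical standby with redo apply stopped, the second one later on the mounted snapshot standby to convert it back:

SQL> alter database convert to snapshot standby;
SQL> alter database convert to physical standby;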

Here is our configuration, Oracle 12.2 is used:
ORCL_SITE: Primary
ORCL_SITE1: Physical Standby
ORCL_SITE2: Physical Standby
ORCL_SITE3: Logical Standby


DGMGRL> show configuration;
.
Configuration - ORCL_DR
.
Protection Mode: MaxPerformance
Members:
ORCL_SITE - Primary database
ORCL_SITE1 - Physical standby database
ORCL_SITE2 - Physical standby database
ORCL_SITE3 - Logical standby database
.
Fast-Start Failover: DISABLED
.
Configuration Status:
SUCCESS (status updated 42 seconds ago)
.
DGMGRL>

The first question we can ask is which type of standby can be converted to a snapshot standby.

1- Can we convert a logical standby to a snapshot standby
Let’s convert our logical standby to a snapshot standby

DGMGRL> CONVERT DATABASE 'ORCL_SITE3' TO SNAPSHOT STANDBY;
Converting database "ORCL_SITE3" to a Snapshot Standby database, please wait...
Error: ORA-16831: operation not allowed on this member
.
Failed.
Failed to convert database "ORCL_SITE3"
DGMGRL>

Answer: NO we cannot convert a logical standby to a snapshot standby

2- Can we convert a physical standby to a snapshot standby
Let’s convert our physical standby to a snapshot standby

DGMGRL> CONVERT DATABASE 'ORCL_SITE2' TO SNAPSHOT STANDBY;
Converting database "ORCL_SITE2" to a Snapshot Standby database, please wait...
Database "ORCL_SITE2" converted successfully
DGMGRL>

And we can verify the new status of the database ‘ORCL_SITE2′

DGMGRL> show database 'ORCL_SITE2';
.
Database - ORCL_SITE2
.
Role: SNAPSHOT STANDBY
Transport Lag: 0 seconds (computed 1 second ago)
Apply Lag: 1 minute 33 seconds (computed 1 second ago)
Instance(s):
ORCL
.
Database Status:
SUCCESS
.
DGMGRL>
.

Answer: Yes we can convert a physical standby to a snapshot standby.

Now that the physical is converted to a snapshot let’s continue our quizz.

3- Can we open a snapshot standby on a read only mode
Let’s shutdown our standby snapshot and let’s open it on read only mode

SQL> startup open read only;
ORACLE instance started.
.
Total System Global Area 943718400 bytes
Fixed Size 8627440 bytes
Variable Size 348130064 bytes
Database Buffers 583008256 bytes
Redo Buffers 3952640 bytes
Database mounted.
Database opened.

The status is now open read only

SQL> select db_unique_name,database_role,open_mode from v$database;
.
DB_UNIQUE_NAME DATABASE_ROLE OPEN_MODE
------------------------------ ---------------- --------------------
ORCL_SITE2 SNAPSHOT STANDBY READ ONLY

Answer: Yes a snapshot standby can be opened in a READ ONLY mode

4- Can we create a tablespace on a snapshot standby
Connected to the standby database let’s create a tablespace

SQL> create tablespace mytab_snap datafile '/u01/app/oracle/oradata/ORCL/mytab_snap01.dbf' size 2M autoextend on maxsize 10M;
.
Tablespace created.

We can verify in the table dba_tablespaces

SQL> select tablespace_name from dba_tablespaces;
.
TABLESPACE_NAME
------------------------------
SYSTEM
SYSAUX
UNDOTBS1
TEMP
USERS
MYTAB_SNAP
TAB_TEST

Answer: Yes we can see that the tablespace MYTAB_SNAP was created.

5- Can we drop a tablespace on a snapshot standby
Let’s drop a tablespace
SQL> drop tablespace TAB_TEST including contents and datafiles;
drop tablespace TAB_TEST including contents and datafiles
*
ERROR at line 1:
ORA-38881: Cannot drop tablespace TAB_TEST on primary database due to
guaranteed restore points.

Answer: No due to guaranteed restore point, we cannot drop a tablespace on a snapshot database.

6- Can we extend a datafile on a snapshot standby
We are going to consider two types of datafiles.
• One from tablespace MYTAB_SNAP created on the snapshot standby
• Another from tablespace TAB_TEST created on the primary

FILE_NAME TABLESPACE_NAME
-------------------------------------------------- ------------------------------
/u01/app/oracle/oradata/ORCL/users01.dbf USERS
/u01/app/oracle/oradata/ORCL/undotbs01.dbf UNDOTBS1
/u01/app/oracle/oradata/ORCL/system01.dbf SYSTEM
/u01/app/oracle/oradata/ORCL/sysaux01.dbf SYSAUX
/u01/app/oracle/oradata/ORCL/stab_test.dbf TAB_TEST
/u01/app/oracle/oradata/ORCL/mytab_snap01.dbf MYTAB_SNAP

Let’s extend first the datafile created on the snapshot standby

SQL> alter database datafile '/u01/app/oracle/oradata/ORCL/mytab_snap01.dbf' resize 20M;
.
Database altered.

And then let’s do the same operation on the datafile created on the primary

SQL> alter database datafile '/u01/app/oracle/oradata/ORCL/stab_test.dbf' resize 20M;
.
Database altered.

We can verify the new sizes

SQL> select FILE_NAME,sum(BYTES) from dba_data_files group by FILE_NAME;
.
FILE_NAME SUM(BYTES)
-------------------------------------------------- ----------
/u01/app/oracle/oradata/ORCL/users01.dbf 5242880
/u01/app/oracle/oradata/ORCL/undotbs01.dbf 73400320
/u01/app/oracle/oradata/ORCL/system01.dbf 870318080
/u01/app/oracle/oradata/ORCL/sysaux01.dbf 587202560
/u01/app/oracle/oradata/ORCL/stab_test.dbf 20971520
/u01/app/oracle/oradata/ORCL/mytab_snap01.dbf 20971520
.
6 rows selected.

Answer: Yes we can extend datafiles on a snapshot standby.

Just note that when the snapshot standby is converted back to a physical standby, the datafile is shrunk back to its previous size.
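
For completeness, converting the snapshot standby back to a physical standby is a single Broker command (a sketch, not executed as part of this test):

DGMGRL> CONVERT DATABASE 'ORCL_SITE2' TO PHYSICAL STANDBY;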

7- Can we reduce a datafile on a snapshot standby
Let’s now reduce the size of the datafile created on the snapshot

SQL> alter database datafile '/u01/app/oracle/oradata/ORCL/mytab_snap01.dbf' resize 5M;
.
Database altered.

And let’s do the same operation on the datafile created on the primary

SQL> alter database datafile '/u01/app/oracle/oradata/ORCL/stab_test.dbf' resize 5M;
alter database datafile '/u01/app/oracle/oradata/ORCL/stab_test.dbf' resize 5M
*
ERROR at line 1:
ORA-38883: Cannot shrink data file /u01/app/oracle/oradata/ORCL/stab_test.dbf
on primary database due to guaranteed restore points.

Answer: Yes we see that we can only reduce size for datafiles created on the snapshot standby.

8- Can we do a switchover to a snapshot standby
As a snapshot is a physical standby which was converted, one may ask if a switchover is possible to a snapshot standby.

DGMGRL> switchover to 'ORCL_SITE2';
Performing switchover NOW, please wait...
Error: ORA-16831: operation not allowed on this member
.
Failed.
Unable to switchover, primary database is still "ORCL_SITE"
DGMGRL>

Answer: No we cannot do a switchover to a snapshot standby.

9- Can we do a failover to a snapshot standby
The same question can be asked about failover.

DGMGRL> connect sys/root@ORCL_SITE2
Connected to "ORCL_SITE2"
Connected as SYSDBA.
DGMGRL> failover to 'ORCL_SITE2';
Converting database "ORCL_SITE2" to a Physical Standby database, please wait...
Operation requires shut down of instance "ORCL" on database "ORCL_SITE2"
Shutting down instance "ORCL"...
Connected to "ORCL_SITE2"
Database closed.
Database dismounted.
ORACLE instance shut down.
Operation requires start up of instance "ORCL" on database "ORCL_SITE2"
Starting instance "ORCL"...
ORACLE instance started.
Database mounted.
Connected to "ORCL_SITE2"
Connected to "ORCL_SITE2"
Continuing to convert database "ORCL_SITE2" ...
Database "ORCL_SITE2" converted successfully
Performing failover NOW, please wait...
Failover succeeded, new primary is "ORCL_SITE2"
DGMGRL>

Answer: Yes, we can do a failover to a snapshot standby, but the failover takes longer than a failover to a physical standby. Indeed Oracle has to:

• Convert the snapshot back to a physical standby
• Apply the archived logs to the physical standby
• And then perform the failover

Conclusion: In this blog, we tried to explain some behaviors of the snapshot standby. Hope that this article may help.

 

Cet article Dataguard: QUIZZ on Snapshot Standby est apparu en premier sur Blog dbi services.


impdp logtime=all metrics=y and 12cR2 parallel metadata


A quick post to show why you should always use LOGTIME=ALL METRICS=Y when using Data Pump. Just look at an example showing the timestamp in front of each line and a message about each task completed by the worker:

15-NOV-17 11:48:32.305: W-5 Processing object type DATABASE_EXPORT/SCHEMA/PROCACT_SCHEMA
15-NOV-17 11:48:34.439: W-13 Completed 28 PROCACT_SCHEMA objects in 1 seconds
15-NOV-17 11:48:34.439: W-13 Completed by worker 1 28 PROCACT_SCHEMA objects in 1 seconds
15-NOV-17 11:48:34.440: W-13 Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE
15-NOV-17 11:48:35.472: W-17 Startup took 70 seconds
15-NOV-17 11:48:35.596: W-18 Startup took 70 seconds
15-NOV-17 11:48:35.719: W-20 Startup took 70 seconds
15-NOV-17 11:48:35.841: W-19 Startup took 70 seconds
15-NOV-17 11:48:43.520: W-5 Completed 1714 TABLE objects in 7 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 1 39 TABLE objects in 2 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 2 113 TABLE objects in 7 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 3 85 TABLE objects in 6 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 4 111 TABLE objects in 6 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 5 25 TABLE objects in 1 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 6 113 TABLE objects in 7 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 7 113 TABLE objects in 7 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 8 111 TABLE objects in 6 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 9 89 TABLE objects in 5 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 10 74 TABLE objects in 4 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 11 113 TABLE objects in 7 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 12 113 TABLE objects in 7 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 13 34 TABLE objects in 2 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 14 111 TABLE objects in 6 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 15 108 TABLE objects in 7 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 16 90 TABLE objects in 4 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 17 82 TABLE objects in 4 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 18 40 TABLE objects in 3 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 19 97 TABLE objects in 6 seconds
15-NOV-17 11:48:43.520: W-5 Completed by worker 20 53 TABLE objects in 3 seconds
15-NOV-17 11:48:44.266: W-1 Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA

Here, I was running an import with PARALLEL=20 and I can see exactly how many tables were processed by each worker. You see it is ‘TABLE’ and not ‘TABLE_DATA’, which is the proof that 12cR2 can import metadata in parallel.

I see no reason not to always use LOGTIME=ALL METRICS=Y, and you will be happy to have this detail if something goes wrong.
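
For reference, a corresponding impdp call could look like this (just a sketch; the connection, directory and dump file names are placeholders, the relevant parameters here are LOGTIME, METRICS and PARALLEL):

impdp system@ORCL full=y directory=DATA_PUMP_DIR dumpfile=full_%U.dmp \
      logfile=imp_full.log parallel=20 logtime=all metrics=y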

 

Cet article impdp logtime=all metrics=y and 12cR2 parallel metadata est apparu en premier sur Blog dbi services.

How to rename an existing Fusion Middleware WebLogic Domain


Sometimes it happens that we need to rename an existing Fusion Middleware WebLogic domain. I was asked to do this on a Fusion Middleware Reports & Forms environment.
I took some time to check how this can be done and did some testing to confirm it works as expected. The difficulty is not the WebLogic domain itself, as a WebLogic domain can be created quickly, but it is time consuming if the complete configuration has to be redone (SSL, logging settings, etc.), and the system components have to be handled as well.

I used pack and unpack to rename the FMW WebLogic Domain.

Let’s say I wanted to rename a Fusion Middleware Forms & Reports WebLogic domain named fr_domain in fr_domain_new

First I used pack to create the domain archive:

cd $MW_HOME/oracle_common/common/bin
./pack.sh -domain /u01/config/domains/fr_domain -template $HOME/fr_domain.jar -template_name full_fr_domain

Then using unpack, I changed the domain directory path thus the domain name.

./unpack.sh -domain /u01/config/domains/fr_domain_new -template /home/oracle/fr_domain.jar -user_name weblogic -password Welcome1 -server_start_mode prod -app_dir /u01/config/applications/fr_domain_new -java_home $JAVA_HOME

Of course the JAVA_HOME environment variable needs to be set before.

This simply worked, but I had to recreate the security files (boot.properties) for the Administration Server and Managed Servers if needed, and those for the system components.

To create the security files for the System Components, the Node Manager needs to be started

export WEBLOGIC_DOMAIN_HOME=/u01/config/domains/fr_domain_new/
nohup ${WEBLOGIC_DOMAIN_HOME}/bin/startNodeManager.sh >> ${WEBLOGIC_DOMAIN_HOME}/nodemanager/nohup-NodeManager.out 2>&1 &

And then start the system components once with the storeUserConfig option, for example:

cd /u01/config/domains/fr_domain_new/bin 
./startComponent.sh ohs1 storeUserConfig
./startComponent.sh vm01_reportsServer storeUserConfig

This was for a simple WebLogic domain on a single machine. For clustered WebLogic domains installed on several hosts, pack and unpack need to be used again to dispatch the WebLogic Managed Servers to the targeted machines.

As an example, to create the archive file for the Managed Servers to be installed on remote machines:

$MW_HOME/oracle_common/common/bin/pack.sh -managed=true -domain /u01/config/domains/fr_domain_new -template /home/oracle/fr_domain_new.jar -template_name fr_domain_new
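
On each remote machine, the corresponding unpack would then be run against this managed-server template, for example (a sketch using the same paths as above):

$MW_HOME/oracle_common/common/bin/unpack.sh -domain /u01/config/domains/fr_domain_new -template /home/oracle/fr_domain_new.jar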

 

Cet article How to rename an existing Fusion Middleware WebLogic Domain est apparu en premier sur Blog dbi services.

Transfer redo in async-mode to the Gold/Master copy of the Production DB for ACFS snapshots


If you store your databases on the cluster filesystem ACFS you may use the provided Perl-script gDBClone from OTN to clone databases or create snapshot databases. It is an interesting approach to create clones from the Production DB in minutes regardless of the production DB size. What you do is to create a standby DB from your production DB on a separate cluster and use that standby DB as a Gold/Master copy for ACFS snapshots.

In a Production environment with Data Guard Broker we wanted to use that technique, but were confronted with an issue:

The Production DB already had a physical standby DB with the Data Guard Broker running. The protection mode was MaxAvailability, which means transport of the redo in sync mode. The master/gold copy to take the snapshots from should receive the redo data in async mode. How to achieve that?

Two not very commonly used parameters in a Broker configuration are:


ExternalDestination1
ExternalDestination2

With those parameters (which are available in 11.2.0.4 and 12.1.0.2 onwards) you actually can send your redo to a destination in async mode. The parameters are documented as follows:

The ExternalDestination1 configuration property is used to specify a redo transport destination that can receive redo data from the current primary database. To set up transport of redo data to the specified destination, the broker uses the values specified for this parameter to define a LOG_ARCHIVE_DEST_n initialization parameter on the primary database. The broker also monitors the health of the transport to the specified destination.

After a role change, the broker automatically sets up a LOG_ARCHIVE_DEST_n initialization parameter on the new primary database to ship redo data to the specified destination.

I.e. you can set the parameter the same as LOG_ARCHIVE_DEST_n, but the following options are not allowed:

ALTERNATE
DELAY
LOCATION
MANDATORY
MAX_FAILURE
NET_TIMEOUT
SYNC
VALID_FOR

So let’s assume I created my DB GOLDCOP as a standby DB using the rman duplicate command


RMAN> duplicate target database for standby from active database dorecover nofilenamecheck;

or alternatively using


# ./gDBClone clone -sdbname PRIM -sdbscan scoda7 -tdbname GOLDCOP -tdbhome OraDb11g_home1 -dataacfs /cloudfs -standby

In the broker configuration I added the DB GOLDCOP as follows:

DGMGRL> show configuration;
 
Configuration - MYPROD
 
Protection Mode: MaxAvailability
Databases:
PRIM - Primary database
STBY - Physical standby database
 
Fast-Start Failover: DISABLED
 
Configuration Status:
SUCCESS
 
DGMGRL> edit configuration set property ExternalDestination1 = 'service=goldcop db_unique_name=GOLDCOP noaffirm async';
Property "externaldestination1" updated
DGMGRL> show configuration;
 
Configuration - MYPROD
 
Protection Mode: MaxAvailability
Databases:
PRIM - Primary database
STBY - Physical standby database
GOLDCOP - External destination 1
 
Fast-Start Failover: DISABLED
 
Configuration Status:
SUCCESS

Let’s check if I really do NOAFFIRM ASYNC redo transport on PRIM:

SQL> select DEST_NAME, DB_UNIQUE_NAME, AFFIRM, TRANSMIT_MODE from v$archive_dest where dest_id in (2,3);
 
DEST_NAME DB_UNIQUE_NAME AFF TRANSMIT_MOD
-------------------------------- ------------------------------ --- ------------
LOG_ARCHIVE_DEST_2 STBY YES PARALLELSYNC
LOG_ARCHIVE_DEST_3 GOLDCOP NO ASYNCHRONOUS

The external destination is not a database in the configuration:

DGMGRL> show database "GOLDCOP";
Object "GOLDCOP" was not found

But the transport to its destination is monitored by the Broker. I.e. when shutting down the DB GOLDCOP I do get an error:

DGMGRL> show configuration;
 
Configuration - MYPROD
 
Protection Mode: MaxAvailability
Databases:
PRIM - Primary database
Error: ORA-16778: redo transport error for one or more databases
 
STBY - Physical standby database
GOLDCOP - External destination 1
 
Fast-Start Failover: DISABLED
 
Configuration Status:
ERROR
 
DGMGRL> show instance "PRIM";
 
Instance 'PRIM' of database 'PRIM'
 
Instance Error(s):
ORA-16737: the redo transport service for standby database "GOLDCOP" has an error
 
Instance Status:
ERROR

As the External destination 1 is not “a database” in the broker configuration, it actually also does not matter if the broker is started (dg_broker_start=TRUE) at the external destination GOLDCOP or not.
To start applying redo on the external destination, you have to start managed recovery as you would without a broker configuration:

alter database recover managed standby database using current logfile disconnect from session;

And redo real time apply is happening on GOLDCOP:

SQL> select name,value
2 from v$dataguard_stats
3 where name in ('apply lag','transport lag');
 
NAME VALUE
-------------------- --------------------
transport lag +00 00:00:00
apply lag +00 00:00:00
 
SQL>
SQL> select inst_id,process,pid,status,thread#,sequence#, block#
2 from gv$managed_standby
3 where process like 'MRP%';
 
INST_ID PROCESS PID STATUS THREAD# SEQUENCE# BLOCK#
---------- --------- ---------- ------------ ---------- ---------- ----------
1 MRP0 5155 APPLYING_LOG 1 50 420

To make the external destination self-managing I set the archivelog deletion policy on GOLDCOP to

CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;

in rman so that applied archives become reclaimable automatically in the fast recovery area. In addition I set

fal_server='PRIM'

on GOLDCOP to ensure that archive gaps can be resolved.

The pros of the above configuration are:
– the GOLDCOP-DB does not cause much overhead for my Production DB (async redo transport).
– Decoupling my GOLDCOP DB from Primary (temporarily) is fast and easy:
edit configuration set property ExternalDestination1 = '';

REMARK: Of course I do also have the other advantages of the gDBClone-approach:
– A production copy on a separate cluster which serves as a gold-copy to take snapshots from for testing or development purposes.
– Creating a snapshot database takes minutes regardless of the DB size.

Cons:
– I have to take care to start managed standby database recovery on my GOLDCOP DB myself, i.e. the same as when running Data Guard without the Broker.

To create a snapshot DB I just do something like:


# ./gDBClone snap -sdbname GOLDCOP -tdbname PRODCOP1

Et voilà, a production copy in 2 minutes.

If PRODCOP1 is no longer needed I can delete it:


# ./gDBClone deldb -tdbname PRODCOP1

Besides using the configuration property ExternalDestination1 there are other possibilities in 12c to run a standby DB as a master copy for snapshots without affecting the production system (like e.g. the support of cascaded standby DBs in the Broker), but I still think that the external destinations feature offers a good possibility to run a master copy.

 

Cet article Transfer redo in async-mode to the Gold/Master copy of the Production DB for ACFS snapshots est apparu en premier sur Blog dbi services.

WebLogic – SSO/Atn/Atz – 403 Forbidden, a first issue


In a previous blog, I explained how it was possible to enable the SSO/Atn/Atz (SSO/Authentication/Authorization) debug logs in order to troubleshoot an issue. In this blog, I will show the logs generated by an issue that I had to deal with last month at one of our customers. This issue will probably not occur very often but it is a pretty funny one so I wanted to share it!

So the issue I will talk about in this blog happened on an environment configured with a SAML2 SSO. With a fully working SAML2 SSO, the WebLogic Server hosting the application is supposed to redirect the end-user to the IdP Partner (with a SAML2 request), which processes it and then redirects the end-user back to the WebLogic Server (with the SAML2 response), which processes the response and finally grants access to the Application. In this case, both redirections were apparently happening properly, but then for an unknown reason the WebLogic Server was blocking the access to the application with a “403 – Forbidden” message.

Obviously the first thing I did was to enable the debug logs and then replicate the issue. These are the logs that I could see in the Managed Server log file:

<Nov 12, 2017 12:23:41 PM UTC> <Debug> <SecuritySAML2Service> <SAML2Servlet: Processing request on URI '/saml2/sp/acs/post'>
<Nov 12, 2017 12:23:41 PM UTC> <Debug> <SecuritySAML2Service> <getServiceTypeFromURI(): request URI is '/saml2/sp/acs/post'>
<Nov 12, 2017 12:23:41 PM UTC> <Debug> <SecuritySAML2Service> <getServiceTypeFromURI(): service URI is '/sp/acs/post'>
<Nov 12, 2017 12:23:41 PM UTC> <Debug> <SecuritySAML2Service> <getServiceTypeFromURI(): returning service type 'ACS'>
<Nov 12, 2017 12:23:41 PM UTC> <Debug> <SecuritySAML2Service> <Assertion consumer service: processing>
<Nov 12, 2017 12:23:41 PM UTC> <Debug> <SecuritySAML2Service> <get SAMLResponse from http request:PBvbnNSJ1cXMlFtZzaXM6bmHhtbG5zm46NhwOlJlOnNhbWxHbWxIb2Fc3wP6dGM6
U0FiB4bWxuczp4NTAwPSJ1cm46b2FzNTDoyLjA6YXNzZXJ0aW9uIaXM6bmFtZXM6
U0FdG9jb2wiIHhtbG5zOmRzaWc9Imh0dHA6Ly93NTDoyLjA6cHJvd3cudzMub3Jn
aHR0cDoa5vcmcvMjAwMS9W5zdGFuY2vL3d3dy53MyYTUxTY2hlbWEtUiIERlc3Rp
MWNxM2FjNzI1ZDjYmIVhNDM1Zlzc3VlSW5zdGFudD0ijhlNjc3OTkiIEMjAxNy0x
LzINpZyMiIHhtwMDAvMDkveG1sZHbG5DovL3d3dy53My5vczOmVuYz0iaHR0cmcv
MMS8wNC94bWxjAwlbmWxuczpzYW1sPMjIiB4bSJ1aXM6bmFtZXM6dcm46b2FzGM6
dGdXRlOM6U0FNZXM6YXR0cmliTDoyLjAAiIHhtbG5zOnhzaT6cHJvZmlslg1MD0i
bmF0aW9JodHRwczovL5ldS5uPSub3Zhc3BoY2hicy1DIyMinRpcyzdDIyM5uZXQ6
ODA4NSMvcG9zdCI9zYW1sMi9zcC3SUhwRHRuN3I1WH9hY3gSUQ9ImlkLUhhTTFha
Z3hBiIEmVzcWW5URXhybHJlG9uc2VUbz0RVFGbWt1VkRaNC0iXzB4ZluUGM1Mjk2
MS0xNw6SXNzdWjo0OTFZlcnNpVyIEZvo1MloiIlQxMb249IjIuMCI+PHNhbWcm1h
...
LTExIgTDEyPSIyMDE3LFjQ525PcLTE2VETExLTOk2VDOjUyWimdGVym90TEyOjU0
OjUyWiI+PHNh8c2FtbDpBdWRpZW5bWw6QXVkabj4jWVuY2VSZXN0cmljdGlvZT5T
c3NGVkVHJhb3b3JkUHJvdG1sOkF1dGhuQ29udGV4VjdnNwb3J0PC9zYWdENsYXNz
QXV0aG5gQXV0TdGF0LTExLTE2VDEaG5JZW1lbnQSIyMDE3bnN0YW50PyOjQ5OjUy
aEV25PcVucnhPSIyMDE3LTExEZmZ2IiBkFmdGVyLTEJWTTZXNzaW9uTm90T2VDEz
WivUWuZGV4PSJpZC13UlVMWGRYOXd6xWzc2lvbklRThFZDJwRDdIgU2VR210OUc0
dWJXYSUQ8L3NhRE1PQ19XlfREVWTQU1MMl9FbnbWw6QXVkaWVuY2U+RpdHlfPC9z
YWRpb24+P1sOkF1ZGllbHJpY3mPHNhNlUmVzdC9zYW1sOkNvbmRpdGlvbnM+bWw6
YXNzUzpjbYW1YXNpczpuIuMlmVmPnVybjpvDphYxzp0YzpTQU1MOjGFzc2VzOlBh
OjAXh0Pj0OjUyWiI+PHNsOkF1dGhuQhxzYW129udGV4bWw6QXV0aG5Db250ZdENs
UmVnRlepBdXRobkNvbzYWmPjwvc2FtbDF1dGhuU1sOkHQ+PC93RhdGVtZW50Pjwv
c2F9zY25zZtbDpBcW1scDpSZXNwb3NlcnRpb24+PCT4=
>
<Nov 12, 2017 12:23:41 PM UTC> <Debug> <SecuritySAML2Service> <BASE64 decoded saml message:<samlp:Response xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" xmlns:dsig="http://www.w3.org/2000/09/xmldsig#" xmlns:enc="http://www.w3.org/2001/04/xmlenc#" xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion" xmlns:x500="urn:oasis:names:tc:SAML:2.0:profiles:attribute:X500" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" Destination="https://weblogic_server_01/saml2/sp/acs/post" ID="id-HpDtn7r5XxxAQFYnwSLXZmkuVgIHTExrlreEDZ4-" InResponseTo="_0x7258edc52961ccbd5a435fb13ac67799" IssueInstant="2017-11-12T12:23:42Z" Version="2.0"><saml:Issuer Format="urn:oasis:names:tc:SAML:2.0:nameid-format:entity">https://idp_partner_01/fed/idp</saml:Issuer><dsig:Signature><dsig:SignedInfo><dsig:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/><dsig:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/><dsig:Reference URI="#id-HpDtn7r5XxxAQFYnwSLXZmkuVgIHTExrlreEDZ4-"><dsig:Transforms><dsig:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/><dsig:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/></dsig:Transforms><dsig:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/><dsig:DigestValue>YGtUZvsfo3z51AsBo7UDhbd6Ts=</dsig:DigestValue></dsig:Reference></dsig:SignedInfo><dsig:SignatureValue>al8sJwbqzjh1qgM3Sj0QrX1aZjwyI...JB6l4jmj91BdQrYQ7VxFzvNLczZ2brJSdLLig==</dsig:SignatureValue><dsig:KeyInfo><dsig:X509Data><dsig:X509Certificate>MIwDQUg+nhYqGZ7pCgBQAwTTELMAkGA1UEBhMCQk1ZhQ...aATPRCd113tVqsvCkUwpfQ5zyUHaKw4FkXmiT2nzxxHA==</dsig:X509Certificate></dsig:X509Data></dsig:KeyInfo></dsig:Signature><samlp:Status><samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success"/></samlp:Status><saml:Assertion ID="id-0WrMNbOz6wsuZdFPhfjnw7WIXXQ6k89-1AgHZ9Oi" IssueInstant="2017-11-12T12:23:42Z" Version="2.0"><saml:Issuer Format="urn:oasis:names:tc:SAML:2.0:nameid-format:entity">https://idp_partner_01/fed/idp</saml:Issuer><dsig:Signature><dsig:SignedInfo><dsig:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/><dsig:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/><dsig:Reference URI="#id-0WrMNbOz6wsuZdFPhfjnw7WIXXQ6k89-1AgHZ9Oi"><dsig:Transforms><dsig:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/><dsig:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/></dsig:Transforms><dsig:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/><dsig:DigestValue>7+jZtq8SpY3BKVaFjIFeEJm51cA=</dsig:DigestValue></dsig:Reference></dsig:SignedInfo><dsig:SignatureValue>GIlXt4B4rVFoDJRxidpZO73gXB68Dd+mcpoV9DKrjBBjLRz...zGTDcEYY2MG8FgtarZhVQGc4zxkkSg8GRT6Wng3NEuTUuA==</dsig:SignatureValue><dsig:KeyInfo><dsig:X509Data><dsig:X509Certificate>MIwDQUg+nhYqGZ7pCgBQAwTTELMAkGA1UEBhMCQk1ZhQ...aATPRCd113tVqsvCkUwpfQ5zyUHaKw4FkXmiT2nzxxHA==</dsig:X509Certificate></dsig:X509Data></dsig:KeyInfo></dsig:Signature><saml:Subject><saml:NameID Format="urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified">PATOU_MORGAN</saml:NameID><saml:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer"><saml:SubjectConfirmationData InResponseTo="_0x7258edc52961ccbd5a435fb13ac67799" NotOnOrAfter="2017-11-12T12:28:42Z" Recipient="https://weblogic_server_01/saml2/sp/acs/post"/></saml:SubjectConfirmation></saml:Subject><saml:Conditions NotBefore="2017-11-12T12:23:42Z" 
NotOnOrAfter="2017-11-12T12:28:42Z"><saml:AudienceRestriction><saml:Audience>SAML2_Entity_ID_01</saml:Audience></saml:AudienceRestriction></saml:Conditions><saml:AuthnStatement AuthnInstant="2017-11-12T12:23:42Z" SessionIndex="id-oX9wXdpGmt9GQlVffvY4hEIRULEd25nrxDzE8D7w" SessionNotOnOrAfter="2017-11-12T12:38:42Z"><saml:AuthnContext><saml:AuthnContextClassRef>urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport</saml:AuthnContextClassRef></saml:AuthnContext></saml:AuthnStatement></saml:Assertion></samlp:Response>>
<Nov 12, 2017 12:23:41 PM UTC> <Debug> <SecuritySAML2Service> <<samlp:Response> is signed.>
<Nov 12, 2017 12:23:41 PM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.IdentityAssertionServiceImpl.assertIdentity(SAML2.Assertion.DOM)>
<Nov 12, 2017 12:23:41 PM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.IdentityAssertionTokenServiceImpl.assertIdentity(SAML2.Assertion.DOM)>
<Nov 12, 2017 12:23:41 PM UTC> <Debug> <SecuritySAML2Atn> <SAML2IdentityAsserterProvider: start assert SAML2 token>
<Nov 12, 2017 12:23:41 PM UTC> <Debug> <SecuritySAML2Atn> <SAML2IdentityAsserterProvider: SAML2IdentityAsserter: tokenType is 'SAML2.Assertion.DOM'>
<Nov 12, 2017 12:23:41 PM UTC> <Debug> <SecuritySAML2Atn> <SAML2Assert: Start verify assertion signature>
<Nov 12, 2017 12:23:41 PM UTC> <Debug> <SecuritySAML2Atn> <SAML2Assert: The assertion is signed.>
<Nov 12, 2017 12:23:41 PM UTC> <Debug> <SecuritySAML2Atn> <SAML2Assert: End verify assertion signature>
<Nov 12, 2017 12:23:41 PM UTC> <Debug> <SecuritySAML2Atn> <SAML2Assert: Start verify assertion attributes>
<Nov 12, 2017 12:23:41 PM UTC> <Debug> <SecuritySAML2Atn> <SAML2Assert: End verify assertion attributes>
<Nov 12, 2017 12:23:41 PM UTC> <Debug> <SecuritySAML2Atn> <SAML2Assert: Start verify assertion issuer>
<Nov 12, 2017 12:23:41 PM UTC> <Debug> <SecuritySAML2Atn> <SAML2Assert: End verify assertion issuer>
<Nov 12, 2017 12:23:41 PM UTC> <Debug> <SecuritySAML2Atn> <SAML2Assert: Start verify assertion conditions>
<Nov 12, 2017 12:23:41 PM UTC> <Debug> <SecurityAtn> <com.bea.common.security.internal.service.IdentityAssertionTokenServiceImpl.assertIdentity - IdentityAssertionException>
<Nov 12, 2017 12:23:41 PM UTC> <Debug> <SecuritySAML2Service> <[Security:090377]Identity Assertion Failed, weblogic.security.spi.IdentityAssertionException: [Security:090377]Identity Assertion Failed, weblogic.security.spi.IdentityAssertionException: [Security:096537]Assertion is not yet valid (NotBefore condition).>
<Nov 12, 2017 12:23:41 PM UTC> <Debug> <SecuritySAML2Service> <exception info
javax.security.auth.login.LoginException: [Security:090377]Identity Assertion Failed, weblogic.security.spi.IdentityAssertionException: [Security:090377]Identity Assertion Failed, weblogic.security.spi.IdentityAssertionException: [Security:096537]Assertion is not yet valid (NotBefore condition).
        at com.bea.common.security.internal.service.IdentityAssertionServiceImpl.assertIdentity(IdentityAssertionServiceImpl.java:89)
        at sun.reflect.GeneratedMethodAccessor1410.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at com.bea.common.security.internal.utils.Delegator$ProxyInvocationHandler.invoke(Delegator.java:64)
		...
>

 

I cut some of the strings above (like all the signatures, the SSL certificates, and so on) because it was really too big and not really important. What is important above is the Java exception. Indeed, the Identity Assertion failed because of the following: ‘Assertion is not yet valid (NotBefore condition)’. This message might seem a little bit mystical but it actually points you right to the issue: the ‘NotBefore’ condition is causing the validation to fail.

So why is that? Well, when you have a SAML2 SSO like I said above, you first have a request and then a response. For security reasons, there are some conditions that apply to them and that need to be fulfilled for the SSO to work. To understand that a little bit better, I took the decoded SAML2 response from the logs above (line 32) and reformatted it into XML so it is more readable:

<samlp:Response xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" xmlns:dsig="http://www.w3.org/2000/09/xmldsig#" xmlns:enc="http://www.w3.org/2001/04/xmlenc#" xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion" xmlns:x500="urn:oasis:names:tc:SAML:2.0:profiles:attribute:X500" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" Destination="https://weblogic_server_01/saml2/sp/acs/post" ID="id-HpDtn7r5XxxAQFYnwSLXZmkuVgIHTExrlreEDZ4-" InResponseTo="_0x7258edc52961ccbd5a435fb13ac67799" IssueInstant="2017-11-12T12:23:42Z" Version="2.0">
	<saml:Issuer Format="urn:oasis:names:tc:SAML:2.0:nameid-format:entity">https://idp_partner_01/fed/idp</saml:Issuer>
	<dsig:Signature>
		<dsig:SignedInfo>
			<dsig:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
			<dsig:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/>
			<dsig:Reference URI="#id-HpDtn7r5XxxAQFYnwSLXZmkuVgIHTExrlreEDZ4-">
				<dsig:Transforms>
					<dsig:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/>
					<dsig:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
				</dsig:Transforms>
				<dsig:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/>
				<dsig:DigestValue>YGtUZvsfo3z51AsBo7UDhbd6Ts=</dsig:DigestValue>
			</dsig:Reference>
		</dsig:SignedInfo>
		<dsig:SignatureValue>al8sJwbqzjh1qgM3Sj0QrX1aZjwyI...JB6l4jmj91BdQrYQ7VxFzvNLczZ2brJSdLLig==</dsig:SignatureValue>
		<dsig:KeyInfo>
			<dsig:X509Data>
				<dsig:X509Certificate>MIwDQUg+nhYqGZ7pCgBQAwTTELMAkGA1UEBhMCQk1ZhQ...aATPRCd113tVqsvCkUwpfQ5zyUHaKw4FkXmiT2nzxxHA==</dsig:X509Certificate>
			</dsig:X509Data>
		</dsig:KeyInfo>
	</dsig:Signature>
	<samlp:Status>
		<samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success"/>
	</samlp:Status>
	<saml:Assertion ID="id-0WrMNbOz6wsuZdFPhfjnw7WIXXQ6k89-1AgHZ9Oi" IssueInstant="2017-11-12T12:23:42Z" Version="2.0">
		<saml:Issuer Format="urn:oasis:names:tc:SAML:2.0:nameid-format:entity">https://idp_partner_01/fed/idp</saml:Issuer>
		<dsig:Signature>
			<dsig:SignedInfo>
				<dsig:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
				<dsig:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/>
				<dsig:Reference URI="#id-0WrMNbOz6wsuZdFPhfjnw7WIXXQ6k89-1AgHZ9Oi">
					<dsig:Transforms>
						<dsig:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/>
						<dsig:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
					</dsig:Transforms>
					<dsig:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/>
					<dsig:DigestValue>7+jZtq8SpY3BKVaFjIFeEJm51cA=</dsig:DigestValue>
				</dsig:Reference>
			</dsig:SignedInfo>
			<dsig:SignatureValue>GIlXt4B4rVFoDJRxidpZO73gXB68Dd+mcpoV9DKrjBBjLRz...zGTDcEYY2MG8FgtarZhVQGc4zxkkSg8GRT6Wng3NEuTUuA==</dsig:SignatureValue>
			<dsig:KeyInfo>
				<dsig:X509Data>
					<dsig:X509Certificate>MIwDQUg+nhYqGZ7pCgBQAwTTELMAkGA1UEBhMCQk1ZhQ...aATPRCd113tVqsvCkUwpfQ5zyUHaKw4FkXmiT2nzxxHA==</dsig:X509Certificate>
				</dsig:X509Data>
			</dsig:KeyInfo>
		</dsig:Signature>
		<saml:Subject>
			<saml:NameID Format="urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified">PATOU_MORGAN</saml:NameID>
			<saml:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer">
				<saml:SubjectConfirmationData InResponseTo="_0x7258edc52961ccbd5a435fb13ac67799" NotOnOrAfter="2017-11-12T12:28:42Z" Recipient="https://weblogic_server_01/saml2/sp/acs/post"/>
			</saml:SubjectConfirmation>
		</saml:Subject>
		<saml:Conditions NotBefore="2017-11-12T12:23:42Z" NotOnOrAfter="2017-11-12T12:28:42Z">
			<saml:AudienceRestriction>
				<saml:Audience>SAML2_Entity_ID_01</saml:Audience>
			</saml:AudienceRestriction>
		</saml:Conditions>
		<saml:AuthnStatement AuthnInstant="2017-11-12T12:23:42Z" SessionIndex="id-oX9wXdpGmt9GQlVffvY4hEIRULEd25nrxDzE8D7w" SessionNotOnOrAfter="2017-11-12T12:38:42Z">
			<saml:AuthnContext>
				<saml:AuthnContextClassRef>urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport</saml:AuthnContextClassRef>
			</saml:AuthnContext>
		</saml:AuthnStatement>
	</saml:Assertion>
</samlp:Response>

 

As you can see on the XML, there are two conditions that apply on the SAML2 response:

  • the response must not be used before the ‘NotBefore’ timestamp (here the time at which the IdP issued it)
  • the response must not be used on or after the ‘NotOnOrAfter’ timestamp (here the issue time + 5 minutes)

In this case, the NotBefore is set to ‘2017-11-12T12:23:42Z’, which is the current time on the IdP Partner Server. However, you can see in the logs that the WebLogic Server hosting the application is actually one second behind this time (Nov 12, 2017 12:23:41 PM UTC). Therefore the NotBefore restriction applies and the WebLogic Server hosting the application has no other choice than to return a ‘403 – Forbidden’ message because the SAML2 response is NOT valid.

In this case, the NTP daemon (Network Time Protocol) on the IdP Partner Linux server was restarted and the time on this server was resynchronized, which solved the issue. Having a server living in the future can cause some interesting behaviors :).
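By the way, this kind of clock skew can be verified quickly from the command line on both the IdP and the SP hosts before changing anything. A small sketch, assuming RHEL-like servers running either ntpd or chronyd (the pool server below is only an example):

# current time and whether the clock is considered synchronized
timedatectl

# offset against the configured peers (ntpd)...
ntpq -p

# ...or against the configured sources (chronyd)
chronyc tracking

# one-shot query of the offset against a public server, without changing anything
ntpdate -q pool.ntp.org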

 

 

Cet article WebLogic – SSO/Atn/Atz – 403 Forbidden, a first issue est apparu en premier sur Blog dbi services.

WebLogic – SSO/Atn/Atz – 403 Forbidden, another issue


Earlier today, I posted another blog with almost the same title but the similarities stop here. The thing is that when you access your application, there aren’t many possible error codes, so if there is an issue, it’s often the same generic message that is displayed in the browser.

To get some background on the issue that I will present below, we are usually setting up the WebLogic Server, enabling the SSO on the WLS level, aso… Then there are different Application Teams that prepare their specific application war files and deploy them into our WLS. Since all teams are using standard procedures and IQs to deploy all that, the applications are properly working in SSO 99% of the time, but human errors can happen, especially in the dev environments where there is less verification…

So after deploying their customized war files, an Application Team tried to access the application using the SSO URL but then got a ‘403 – Forbidden’ error message, crap. As we are responsible for the whole platform, they usually come directly to us so that we can check what is wrong. So as always: enable the debug logs, find out what the issue is, where a mistake was made and how to solve it. In this case (and contrary to the previous blog), the SAML2 response was correct and accepted by WebLogic, so the SSO process was already going one step further. This is why I will just skip the first part of the logs (as well as the LDAP authentication & retrieval of groups) and only show what is happening afterwards (so no Atn but only Atz).
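As a reminder, this security debug output is not enabled by default. It can be activated from the Admin Console (Servers > [server] > Debug) or, as a rough sketch, by adding the corresponding debug scopes to the JVM options of the managed server hosting the application, for example in the setDomainEnv.sh (the list below is the one I usually use for SSO analysis, adapt it to your needs):

# example JAVA_OPTIONS for the managed server hosting the application
JAVA_OPTIONS="${JAVA_OPTIONS} -Dweblogic.debug.DebugSecuritySAML2Service=true"
JAVA_OPTIONS="${JAVA_OPTIONS} -Dweblogic.debug.DebugSecuritySAML2Atn=true"
JAVA_OPTIONS="${JAVA_OPTIONS} -Dweblogic.debug.DebugSecurityAtn=true"
JAVA_OPTIONS="${JAVA_OPTIONS} -Dweblogic.debug.DebugSecurityAtz=true"
export JAVA_OPTIONS

With that in place, here is the relevant Atz extract of the logs: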

<Nov 12, 2017, 5:43:25,15 PM UTC> <Debug> <SecurityAtz> <AuthorizationManager will use common security for ATZ>
<Nov 12, 2017, 5:43:25,15 PM UTC> <Debug> <SecurityAtz> <weblogic.security.service.WLSAuthorizationServiceWrapper.isAccessAllowed>
<Nov 12, 2017, 5:43:25,15 PM UTC> <Debug> <SecurityAtz> <com.bea.common.security.internal.service.AccessDecisionServiceImpl.isAccessAllowed Identity=Subject: 3
        Principal = class weblogic.security.principal.WLSUserImpl("PATOU_MORGAN")
        Principal = class weblogic.security.principal.WLSGroupImpl("readers")
        Principal = class weblogic.security.principal.WLSGroupImpl("superusers")
>
<Nov 12, 2017, 5:43:25,16 PM UTC> <Debug> <SecurityAtz> <com.bea.common.security.internal.service.AccessDecisionServiceImpl.isAccessAllowed Roles=[ "Anonymous" ]>
<Nov 12, 2017, 5:43:25,16 PM UTC> <Debug> <SecurityAtz> <com.bea.common.security.internal.service.AccessDecisionServiceImpl.isAccessAllowed Resource=type=<url>, application=D2, contextPath=/D2, uri=/X3_Portal.jsp, httpMethod=GET>
<Nov 12, 2017, 5:43:25,16 PM UTC> <Debug> <SecurityAtz> <com.bea.common.security.internal.service.AccessDecisionServiceImpl.isAccessAllowed Direction=ONCE>
<Nov 12, 2017, 5:43:25,16 PM UTC> <Debug> <SecurityAtz> <XACML Authorization isAccessAllowed(): input arguments:>
<Nov 12, 2017, 5:43:25,16 PM UTC> <Debug> <SecurityAtz> < Subject: 3
        Principal = weblogic.security.principal.WLSUserImpl("PATOU_MORGAN")
        Principal = weblogic.security.principal.WLSGroupImpl("readers")
        Principal = weblogic.security.principal.WLSGroupImpl("superusers")
>
<Nov 12, 2017, 5:43:25,16 PM UTC> <Debug> <SecurityAtz> < Roles:Anonymous>
<Nov 12, 2017, 5:43:25,16 PM UTC> <Debug> <SecurityAtz> < Resource: type=<url>, application=D2, contextPath=/D2, uri=/X3_Portal.jsp, httpMethod=GET>
<Nov 12, 2017, 5:43:25,16 PM UTC> <Debug> <SecurityAtz> < Direction: ONCE>
<Nov 12, 2017, 5:43:25,16 PM UTC> <Debug> <SecurityAtz> < Context Handler: >
<Nov 12, 2017, 5:43:25,16 PM UTC> <Debug> <SecurityAtz> <Accessed Subject: Id=urn:oasis:names:tc:xacml:2.0:subject:role, Value=[Anonymous]>
<Nov 12, 2017, 5:43:25,16 PM UTC> <Debug> <SecurityAtz> <Evaluate urn:oasis:names:tc:xacml:1.0:function:string-at-least-one-member-of([consumer,consumer],[Anonymous]) -> false>
<Nov 12, 2017, 5:43:25,16 PM UTC> <Debug> <SecurityAtz> <primary-rule evaluates to NotApplicable because of Condition>
<Nov 12, 2017, 5:43:25,16 PM UTC> <Debug> <SecurityAtz> <urn:bea:xacml:2.0:entitlement:resource:type@E@Furl@G@M@Oapplication@ED2@M@OcontextPath@E@UD2@M@Ouri@E@U@K@M@OhttpMethod@EGET, 1.0 evaluates to Deny>
<Nov 12, 2017, 5:43:25,17 PM UTC> <Debug> <SecurityAtz> <XACML Authorization isAccessAllowed(): returning DENY>
<Nov 12, 2017, 5:43:25,17 PM UTC> <Debug> <SecurityAtz> <com.bea.common.security.internal.service.AccessDecisionServiceImpl.isAccessAllowed AccessDecision returned DENY>
<Nov 12, 2017, 5:43:25,17 PM UTC> <Debug> <SecurityAtz> <com.bea.common.security.internal.service.AuthorizationServiceImpl.isAccessAllowed returning adjudicated: false>

 

As you can see at the end of the log, the access is denied (‘isAccessAllowed returning adjudicated: false’) and if you check the previous lines, you can see that this is actually because the function ‘string-at-least-one-member-of’ expects the user to have the role ‘consumer’ but here the user only has the role ‘Anonymous’. There is a mismatch, therefore the access is denied, which caused the ‘403 – Forbidden’ message. So where are those roles assigned? It is actually not on the IdP Partner or LDAP side but on the WebLogic side directly, or rather on the Application side to be more precise…

When working with SSO solutions, there are some additional configurations that you will need to incorporate into your application war file, like for example what the transport mode is for all communications, which URLs (/context) are not to be authenticated through SSO and which ones are, which roles the users need, aso… This is all done (for WebLogic) inside the web.xml and weblogic.xml.

In this case, the ‘consumer’ role shown above is defined in the web.xml file inside the ‘<auth-constraint>’ (Authorization constraint) so it basically says that this is the only authorized role to perform constrained requests. This is an example of configuration that you can put in your web.xml (this one is an extract of what can be done for Documentum D2 with SAML2 SSO):

[weblogic@weblogic_server_01 D2]$ tail -50 WEB-INF/web.xml
  <security-constraint>
    <web-resource-collection>
      <web-resource-name>SSO Public</web-resource-name>
      <description>Non-authenticated resources</description>
      <url-pattern>/help/en/*</url-pattern>
      <url-pattern>/resources/*</url-pattern>
	  ...
      <http-method>GET</http-method>
      <http-method>POST</http-method>
    </web-resource-collection>
  </security-constraint>

  <security-constraint>
    <web-resource-collection>
      <web-resource-name>SSO Private</web-resource-name>
      <description>Authenticated resources</description>
      <url-pattern>/*</url-pattern>
      <http-method>GET</http-method>
      <http-method>POST</http-method>
    </web-resource-collection>
    <auth-constraint>
      <description>Authorized role</description>
      <role-name>consumer</role-name>
    </auth-constraint>
    <user-data-constraint>
      <description>User Data</description>
      <transport-guarantee>CONFIDENTIAL</transport-guarantee>
    </user-data-constraint>
  </security-constraint>

  <security-role>
    <role-name>consumer</role-name>
  </security-role>

  ...
[weblogic@weblogic_server_01 D2]$

 

So the above configuration exactly matches what WebLogic requires and what is shown in the logs. The user must have the role ‘consumer’ to be able to access the Application. The only question left is why the users aren’t assigned to this role if they have been authenticated via the SAML2 SSO and LDAP, and that’s where the issue is in this case. In the web.xml, you can define what security should apply to your application, but for the assignment to the security roles, you should rather take a look at the weblogic.xml file. You can assign users using their principals (coming from the WLS Security Realm) but it is usually better to use groups instead to avoid managing users individually (more information there). So you already understood it: the issue in this case was simply that the Application Team configured the security roles in the web.xml file but forgot the assignments in the weblogic.xml, and therefore the issue was solved by simply adding the following lines to this file:

[weblogic@weblogic_server_01 D2]$ tail -5 WEB-INF/weblogic.xml
  <security-role-assignment>
      <role-name>consumer</role-name>
      <principal-name>users</principal-name>
  </security-role-assignment>
</weblogic-web-app>
[weblogic@weblogic_server_01 D2]$

 

With these four simple lines, all existing users from the LDAP (that is configured in our WebLogic) are automatically granted the role ‘consumer’ and are therefore allowed to access the application.

From the WebLogic Administration Console, you can check what is the assignment of a security role using these steps:

  1. Login to the Admin Console using your weblogic account
  2. Navigate to the correct page: DOMAIN > Deployments > YourApp (click on the name) > Security > URL Patterns > Roles > YourRole (click on the name)

The list of roles can be seen here:

SecurityRole1

Then after clicking on the name of the role, you can see the conditions for the assignments:

SecurityRole2

To compare the logs before/after, this is what is being printed to the logs after the correction:

<Nov 12, 2017, 6:30:55,10 PM UTC> <Debug> <SecurityAtz> <AuthorizationManager will use common security for ATZ>
<Nov 12, 2017, 6:30:55,10 PM UTC> <Debug> <SecurityAtz> <weblogic.security.service.WLSAuthorizationServiceWrapper.isAccessAllowed>
<Nov 12, 2017, 6:30:55,10 PM UTC> <Debug> <SecurityAtz> <com.bea.common.security.internal.service.AccessDecisionServiceImpl.isAccessAllowed Identity=Subject: 3
        Principal = class weblogic.security.principal.WLSUserImpl("PATOU_MORGAN")
        Principal = class weblogic.security.principal.WLSGroupImpl("readers")
        Principal = class weblogic.security.principal.WLSGroupImpl("superusers")
>
<Nov 12, 2017, 6:30:55,11 PM UTC> <Debug> <SecurityAtz> <com.bea.common.security.internal.service.AccessDecisionServiceImpl.isAccessAllowed Roles=[ "Anonymous" "consumer" ]>
<Nov 12, 2017, 6:30:55,11 PM UTC> <Debug> <SecurityAtz> <com.bea.common.security.internal.service.AccessDecisionServiceImpl.isAccessAllowed Resource=type=<url>, application=D2, contextPath=/D2, uri=/X3_Portal.jsp, httpMethod=GET>
<Nov 12, 2017, 6:30:55,11 PM UTC> <Debug> <SecurityAtz> <com.bea.common.security.internal.service.AccessDecisionServiceImpl.isAccessAllowed Direction=ONCE>
<Nov 12, 2017, 6:30:55,11 PM UTC> <Debug> <SecurityAtz> <XACML Authorization isAccessAllowed(): input arguments:>
<Nov 12, 2017, 6:30:55,11 PM UTC> <Debug> <SecurityAtz> < Subject: 3
        Principal = weblogic.security.principal.WLSUserImpl("PATOU_MORGAN")
        Principal = weblogic.security.principal.WLSGroupImpl("readers")
        Principal = weblogic.security.principal.WLSGroupImpl("superusers")
>
<Nov 12, 2017, 6:30:55,11 PM UTC> <Debug> <SecurityAtz> < Roles:Anonymous, consumer>
<Nov 12, 2017, 6:30:55,11 PM UTC> <Debug> <SecurityAtz> < Resource: type=<url>, application=D2, contextPath=/D2, uri=/X3_Portal.jsp, httpMethod=GET>
<Nov 12, 2017, 6:30:55,11 PM UTC> <Debug> <SecurityAtz> < Direction: ONCE>
<Nov 12, 2017, 6:30:55,11 PM UTC> <Debug> <SecurityAtz> < Context Handler: >
<Nov 12, 2017, 6:30:55,12 PM UTC> <Debug> <SecurityAtz> <Accessed Subject: Id=urn:oasis:names:tc:xacml:2.0:subject:role, Value=[Anonymous,consumer]>
<Nov 12, 2017, 6:30:55,12 PM UTC> <Debug> <SecurityAtz> <Evaluate urn:oasis:names:tc:xacml:1.0:function:string-at-least-one-member-of([consumer,consumer],[Anonymous,consumer]) -> true>
<Nov 12, 2017, 6:30:55,12 PM UTC> <Debug> <SecurityAtz> <primary-rule evaluates to Permit>
<Nov 12, 2017, 6:30:55,12 PM UTC> <Debug> <SecurityAtz> <urn:bea:xacml:2.0:entitlement:resource:type@E@Furl@G@M@Oapplication@ED2@M@OcontextPath@E@UD2@M@Ouri@E@U@K@M@OhttpMethod@EGET, 1.0 evaluates to Permit>
<Nov 12, 2017, 6:30:55,12 PM UTC> <Debug> <SecurityAtz> <XACML Authorization isAccessAllowed(): returning PERMIT>
<Nov 12, 2017, 6:30:55,12 PM UTC> <Debug> <SecurityAtz> <com.bea.common.security.internal.service.AccessDecisionServiceImpl.isAccessAllowed AccessDecision returned PERMIT>
<Nov 12, 2017, 6:30:55,12 PM UTC> <Debug> <SecurityAtz> <com.bea.common.security.internal.service.AuthorizationServiceImpl.isAccessAllowed returning adjudicated: true>

 

Access is allowed. :)

 

 

Cet article WebLogic – SSO/Atn/Atz – 403 Forbidden, another issue est apparu en premier sur Blog dbi services.

Alfresco – Unable to move/rename a file/folder using AOS


When playing with the AOS implementation, a colleague of mine faced an interesting issue where he just wasn’t able to move or rename any files or folders. The creation and deletion were working properly but he was unable to move or rename anything when using a Network Location or a Network Drive. This environment was freshly installed with a front-end (Apache HTTPD) set up in SSL, so we worked together to find out what the issue was. All workstations were impacted no matter what OS was used (Windows 7, 8, 10, aso…).

The Network Location or Drive were mounted using the following:

  • URL Style => https://alfresco_server_01.domain.com/alfresco/aos/
  • WebDAV Style => \\alfresco_server_01.domain.com@SSL\DavWWWRoot\alfresco\aos\

In all cases, the workstations were able to connect and create nodes in the Alfresco Server (through AOS), it meant that the parameters/configuration on the Alfresco and Apache HTTPD sides were pretty much OK but nevertheless we still checked them to be sure:

alfresco@alfresco_server_01:~$ grep -E "^alfresco\.|^share\." $CATALINA_HOME/shared/classes/alfresco-global.properties
alfresco.context=alfresco
alfresco.host=alfresco_server_01.domain.com
alfresco.port=443
alfresco.protocol=https
share.context=share
share.host=alfresco_server_01.domain.com
share.port=443
share.protocol=https
alfresco@alfresco_server_01:~$
alfresco@alfresco_server_01:~$
alfresco@alfresco_server_01:~$ grep aos $CATALINA_HOME/shared/classes/alfresco-global.properties
aos.baseUrlOverwrite=https://alfresco_server_01.domain.com:443/alfresco/aos
#aos.sitePathOverwrite=/alfresco/aos
alfresco@alfresco_server_01:~$

 

For me, the parameters above seemed correct at first sight. Since the Apache HTTPD is in SSL on port 443 and redirects everything to the Tomcat using mod_jk, it is normal for the alfresco and share parameters to use https and the port 443 (even if the Tomcat itself is actually not in SSL) because these values should reflect the front-end. The “aos.baseUrlOverwrite” is normally to be used only in case you have a proxy server in front of your Alfresco and this proxy isn’t an Apache HTTPD. Since my colleague was using Apache, this parameter wasn’t really needed, but having it set to the correct value shouldn’t hurt either. The correct value for this parameter is also the front-end URL and that is the current value, or so it seemed.

With the above parameters, we were able to create any kind of files and folders in our Network Locations/Drives. I took some screenshots for this blog and I used a simple folder to demonstrate the issue and the solution. So creating a folder with the default name is working properly:

RenameFolder1

At this point, I had a new folder in my Alfresco Server which I could clearly see and manage via the Share client. So renaming it from Share wasn’t a problem but doing the same thing through AOS (Network Location or Drive) resulted in this:

RenameFolder2

At the same time, the following logs were generated on Alfresco side:

2017-11-14 08:34:50,342  ERROR [aoservices-err.StandardWebdavService] [ajp-nio-8009-exec-7] doMove: BAD REQUEST: Destination malformed
2017-11-14 08:34:50,442  ERROR [aoservices-err.StandardWebdavService] [ajp-nio-8009-exec-8] doMove: BAD REQUEST: Destination malformed
2017-11-14 08:34:50,544  ERROR [aoservices-err.StandardWebdavService] [ajp-nio-8009-exec-9] doMove: BAD REQUEST: Destination malformed
2017-11-14 08:34:50,647  ERROR [aoservices-err.StandardWebdavService] [ajp-nio-8009-exec-1] doMove: BAD REQUEST: Destination malformed

 

With the default log level, that’s not particularly helpful… From the Apache logs, it is pretty easy to see when the folder “New Folder” has been created (MKCOL @ 08:34:21) as well as when I tried to rename it (MOVE @ 08:34:50):

alfresco_server_01.domain.com:443 10.20.30.40 - - [14/Nov/2017:08:34:21 +0100] "MKCOL /alfresco/aos/Sites/my-site/documentLibrary/New%20folder HTTP/1.1" 201 640 "-" "Microsoft-WebDAV-MiniRedir/6.3.9600"
alfresco_server_01.domain.com:443 10.20.30.40 - - [14/Nov/2017:08:34:21 +0100] "PROPPATCH /alfresco/aos/Sites/my-site/documentLibrary/New%20folder HTTP/1.1" 207 971 "-" "Microsoft-WebDAV-MiniRedir/6.3.9600"
alfresco_server_01.domain.com:443 10.20.30.40 - - [14/Nov/2017:08:34:21 +0100] "PROPFIND /alfresco/aos/Sites/my-site/documentLibrary/New%20folder/desktop.ini HTTP/1.1" 404 588 "-" "Microsoft-WebDAV-MiniRedir/6.3.9600"
alfresco_server_01.domain.com:443 10.20.30.40 - - [14/Nov/2017:08:34:50 +0100] "PROPFIND /alfresco/aos HTTP/1.1" 207 5473 "-" "Microsoft-WebDAV-MiniRedir/6.3.9600"
alfresco_server_01.domain.com:443 10.20.30.40 - - [14/Nov/2017:08:34:50 +0100] "PROPFIND /alfresco/aos/Sites/my-site/documentLibrary HTTP/1.1" 207 1784 "-" "Microsoft-WebDAV-MiniRedir/6.3.9600"
alfresco_server_01.domain.com:443 10.20.30.40 - - [14/Nov/2017:08:34:50 +0100] "PROPFIND /alfresco/aos/Sites/my-site/documentLibrary/New%20folder HTTP/1.1" 207 1803 "-" "Microsoft-WebDAV-MiniRedir/6.3.9600"
alfresco_server_01.domain.com:443 10.20.30.40 - - [14/Nov/2017:08:34:50 +0100] "PROPFIND /alfresco/aos/Sites/my-site/documentLibrary HTTP/1.1" 207 1784 "-" "Microsoft-WebDAV-MiniRedir/6.3.9600"
alfresco_server_01.domain.com:443 10.20.30.40 - - [14/Nov/2017:08:34:50 +0100] "MOVE /alfresco/aos/Sites/my-site/documentLibrary/New%20folder HTTP/1.1" 400 1606 "-" "Microsoft-WebDAV-MiniRedir/6.3.9600"
alfresco_server_01.domain.com:443 10.20.30.40 - - [14/Nov/2017:08:34:50 +0100] "MOVE /alfresco/aos/Sites/my-site/documentLibrary/New%20folder HTTP/1.1" 400 1711 "-" "Microsoft-WebDAV-MiniRedir/6.3.9600"
alfresco_server_01.domain.com:443 10.20.30.40 - - [14/Nov/2017:08:34:50 +0100] "MOVE /alfresco/aos/Sites/my-site/documentLibrary/New%20folder HTTP/1.1" 400 1711 "-" "Microsoft-WebDAV-MiniRedir/6.3.9600"
alfresco_server_01.domain.com:443 10.20.30.40 - - [14/Nov/2017:08:34:50 +0100] "MOVE /alfresco/aos/Sites/my-site/documentLibrary/New%20folder HTTP/1.1" 400 1711 "-" "Microsoft-WebDAV-MiniRedir/6.3.9600"
alfresco_server_01.domain.com:443 10.20.30.40 - - [14/Nov/2017:08:34:50 +0100] "PROPFIND /alfresco/aos/Sites/my-site/documentLibrary/New%20folder HTTP/1.1" 207 1909 "-" "Microsoft-WebDAV-MiniRedir/6.3.9600"

 

As you can see above, the renaming failed and the Apache responded with a 400 error code. The error “doMove: BAD REQUEST” on the Alfresco side reminded me of this JIRA but the outcome of this ticket was that the parameter “aos.baseUrlOverwrite” was wrongly set… In our case, its value was “https://alfresco_server_01.domain.com:443/alfresco/aos” (as shown above) and this seemed to be the correct URL… But in fact it wasn’t.

Just to remove any uncertainty, we changed the value to “https://alfresco_server_01.domain.com/alfresco/aos” (so just removing the port :443, which technically should be optional…) and then restarted Alfresco… After doing that, the rename was actually working:

RenameFolder3

So magically the issue was gone… The associated Apache HTTPD logs showed this time a 201 return code:

alfresco_server_01.domain.com:443 10.20.30.40 - - [14/Nov/2017:08:48:04 +0100] "PROPFIND /alfresco/aos HTTP/1.1" 207 5347 "-" "Microsoft-WebDAV-MiniRedir/6.3.9600"
alfresco_server_01.domain.com:443 10.20.30.40 - - [14/Nov/2017:08:48:04 +0100] "PROPFIND /alfresco/aos/Sites/my-site/documentLibrary/New%20folder HTTP/1.1" 207 1799 "-" "Microsoft-WebDAV-MiniRedir/6.3.9600"
alfresco_server_01.domain.com:443 10.20.30.40 - - [14/Nov/2017:08:48:05 +0100] "PROPFIND /alfresco/aos/Sites/my-site/documentLibrary HTTP/1.1" 207 1780 "-" "Microsoft-WebDAV-MiniRedir/6.3.9600"
alfresco_server_01.domain.com:443 10.20.30.40 - - [14/Nov/2017:08:48:05 +0100] "PROPFIND /alfresco/aos/Sites HTTP/1.1" 207 1736 "-" "Microsoft-WebDAV-MiniRedir/6.3.9600"
alfresco_server_01.domain.com:443 10.20.30.40 - - [14/Nov/2017:08:48:05 +0100] "PROPFIND /alfresco HTTP/1.1" 302 224 "-" "Microsoft-WebDAV-MiniRedir/6.3.9600"
alfresco_server_01.domain.com:443 10.20.30.40 - - [14/Nov/2017:08:48:05 +0100] "PROPFIND /alfresco/ HTTP/1.1" 207 1858 "-" "Microsoft-WebDAV-MiniRedir/6.3.9600"
alfresco_server_01.domain.com:443 10.20.30.40 - - [14/Nov/2017:08:48:05 +0100] "PROPFIND / HTTP/1.1" 302 572 "-" "Microsoft-WebDAV-MiniRedir/6.3.9600"
alfresco_server_01.domain.com:443 10.20.30.40 - - [14/Nov/2017:08:48:05 +0100] "PROPFIND /share/page/repository HTTP/1.1" 501 1490 "-" "Microsoft-WebDAV-MiniRedir/6.3.9600"
alfresco_server_01.domain.com:443 10.20.30.40 - - [14/Nov/2017:08:48:06 +0100] "MOVE /alfresco/aos/Sites/my-site/documentLibrary/New%20folder HTTP/1.1" 201 640 "-" "Microsoft-WebDAV-MiniRedir/6.3.9600"
alfresco_server_01.domain.com:443 10.20.30.40 - - [14/Nov/2017:08:48:07 +0100] "PROPFIND /alfresco/aos/Sites/my-site/documentLibrary/Renamed%202 HTTP/1.1" 207 1797 "-" "Microsoft-WebDAV-MiniRedir/6.3.9600"

 

Conclusion for this blog? Take care when you set the “aos.baseUrlOverwrite”: do not add the port if it is not really necessary! Another solution to this issue would be to simply comment out the “aos.baseUrlOverwrite” parameter since it is not needed when using Apache HTTPD. I personally never use this parameter (I keep it commented) because I’m always using Apache :).
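As a follow-up, this kind of rename issue can also be reproduced without a Windows client, which makes it much quicker to validate a configuration change. Below is a small sketch using curl to send the same WebDAV requests that the Windows MiniRedir client sends (URL, credentials and folder names are the ones of this example and must be adapted):

# create a test folder through AOS
curl -k -u admin:password -X MKCOL "https://alfresco_server_01.domain.com/alfresco/aos/Sites/my-site/documentLibrary/New%20folder"

# try to rename it: this is the MOVE request that failed with a 400 while the port was part of aos.baseUrlOverwrite
curl -k -u admin:password -X MOVE \
     -H "Destination: https://alfresco_server_01.domain.com/alfresco/aos/Sites/my-site/documentLibrary/Renamed%202" \
     "https://alfresco_server_01.domain.com/alfresco/aos/Sites/my-site/documentLibrary/New%20folder"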

 

 

Cet article Alfresco – Unable to move/rename a file/folder using AOS est apparu en premier sur Blog dbi services.

DOAG 2017: Automation in progress


DOAG2017_dbi

A week ago, I had the chance to be a speaker at the DOAG Konferenz 2017 in Nürnberg. It’s sometimes hard to find time to attend conferences because the end of the year is quite busy at customers. But it’s also important because it’s time for sharing: I can share what I’m working on about automation and patching and I can also see how other people are doing it.

And it was great for me this year: I started to work with Ansible to automate some repetitive tasks, and I saw a lot of interesting presentations, either about Ansible itself or where Ansible was used in the demo.

The session “Getting Started with Ansible and Oracle” by Ron Ekins from Pure Storage showed a very interesting use case demonstrating the strength of Ansible: a live demo where he cloned 1 production database to 6 different demo environments for the developers. Done this way, with a playbook, we are sure that the 6 environments are built without human errors because Ansible plays the same tasks across all nodes.

The previous day, I attended the session “Practical database administration automation with Ansible” by Mikael Sandström and Ilmar Kerm from Kindred Group. They presented some modules they wrote to interact with the database using Ansible. The modules can be used to validate some parameters or create users, etc… I had found the code while I was working on my project but I did not dive into the details. The code is available on GitHub and I will definitely have a closer look.

One could think that Ansible is not designed to manage databases, but using modules you can extend Ansible to do a lot of things.

Next week, I have the chance to also be at Tech17, organised by the UKOUG. Let’s hope I can continue to learn and share!
Speaker_UKOUG2017

 

Cet article DOAG 2017: Automation in progress est apparu en premier sur Blog dbi services.


No journal messages available before the last reboot of your CentOS/RHEL system?


As you probably noticed, RedHat as well as CentOS switched to systemd with version 7 of their operating system release. This also means that instead of looking at /var/log/messages you are supposed to use journalctl to browse the messages of the operating system. One issue with that is that messages from before the last reboot of your system will not be available, which is probably not what you want.

Let’s say I started my RedHat Linux system just now:

Last login: Tue Dec  5 09:12:34 2017 from 192.168.22.1
[root@rhel7 ~]$ uptime
 09:14:14 up 1 min,  1 user,  load average: 0.33, 0.15, 0.05
[root@rhel7 ~]$ date
Die Dez  5 09:14:15 CET 2017

Asking for any journal logs before that will not show anything:

[root@rhel7 ~]$ journalctl --help  | grep "\-\-since"
  -S --since=DATE          Show entries not older than the specified date
[root@rhel7 ~]$ journalctl --since "2017-12-04 00:00:00"
-- Logs begin at Die 2017-12-05 09:13:07 CET, end at Die 2017-12-05 09:14:38 CET. --
Dez 05 09:13:07 rhel7.localdomain systemd-journal[86]: Runtime journal is using 6.2M (max allowed 49.6M, trying to 
Dez 05 09:13:07 rhel7.localdomain kernel: Initializing cgroup subsys cpuset
Dez 05 09:13:07 rhel7.localdomain kernel: Initializing cgroup subsys cpu
Dez 05 09:13:07 rhel7.localdomain kernel: Initializing cgroup subsys cpuacct

Nothing for yesterday, which is bad. The issue here is the default configuration:

[root@rhel7 ~]$ cat /etc/systemd/journald.conf 
#  This file is part of systemd.
#
#  systemd is free software; you can redistribute it and/or modify it
#  under the terms of the GNU Lesser General Public License as published by
#  the Free Software Foundation; either version 2.1 of the License, or
#  (at your option) any later version.
#
# Entries in this file show the compile time defaults.
# You can change settings by editing this file.
# Defaults can be restored by simply deleting this file.
#
# See journald.conf(5) for details.

[Journal]
#Storage=auto
#Compress=yes
#Seal=yes
#SplitMode=uid
#SyncIntervalSec=5m
#RateLimitInterval=30s
#RateLimitBurst=1000
#SystemMaxUse=
#SystemKeepFree=
#SystemMaxFileSize=
#RuntimeMaxUse=
#RuntimeKeepFree=
#RuntimeMaxFileSize=
#MaxRetentionSec=
#MaxFileSec=1month
#ForwardToSyslog=yes
#ForwardToKMsg=no
#ForwardToConsole=no
#ForwardToWall=yes
#TTYPath=/dev/console
#MaxLevelStore=debug
#MaxLevelSyslog=debug
#MaxLevelKMsg=notice
#MaxLevelConsole=info
#MaxLevelWall=emerg

“Storage=auto” means that the journal will only be persistent if this directory exists (it does not in the default setup):

[root@rhel7 ~]$ ls /var/log/journal
ls: cannot access /var/log/journal: No such file or directory

As soon as this is created and the service is restarted the journal will be persistent and will survive a reboot:

[root@rhel7 ~]$ mkdir /var/log/journal
[root@rhel7 ~]$ systemctl restart systemd-journald.service
[root@rhel7 ~]$ ls -al /var/log/journal/
total 4
drwxr-xr-x.  3 root root   46  5. Dez 09:15 .
drwxr-xr-x. 10 root root 4096  5. Dez 09:15 ..
drwxr-xr-x.  2 root root   28  5. Dez 09:15 a473db3bada14e478442d99da55345e0
[root@rhel7 ~]$ ls -al /var/log/journal/a473db3bada14e478442d99da55345e0/
total 8192
drwxr-xr-x. 2 root root      28  5. Dez 09:15 .
drwxr-xr-x. 3 root root      46  5. Dez 09:15 ..
-rw-r-----. 1 root root 8388608  5. Dez 09:15 system.journal

Of course you should also have a look at the other parameters that control the size of the journal as well as its rotation.
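A minimal sketch of what that could look like, assuming you want a persistent journal capped at 500M (the values are only examples, see journald.conf(5) for the details of Storage, SystemMaxUse, MaxRetentionSec and friends). Since [Journal] is the only section of the default file, simply appending the keys works:

# make the journal persistent and limit its size
mkdir -p /var/log/journal
cat >> /etc/systemd/journald.conf <<EOF
Storage=persistent
SystemMaxUse=500M
EOF
systemctl restart systemd-journald.service

# check how much space the journal currently uses
journalctl --disk-usage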

 

Cet article No journal messages available before the last reboot of your CentOS/RHEL system? est apparu en premier sur Blog dbi services.

Naming of archivelog files with non existing top level archivelog directory


In Oracle 12.2, an archive log destination is accepted even if the last directory level of the path does not exist:

oracle@localhost:/u01/app/oracle/product/12.2.0/dbhome_1/dbs/ [DMK] ls -l /u02/oradata/DMK/
 total 2267920
 drwxr-xr-x. 2 oracle dba        96 Dec  6 05:36 arch ...

Now the database accepts this non-existing archivelog destination:

SQL> alter system set log_archive_dest_3='LOCATION=/u02/oradata/DMK/arch/arch2';
System altered.

But not this:

SQL> alter system set log_archive_dest_4='LOCATION=/u02/oradata/DMK/arch/arch2/arch4';
 alter system set log_archive_dest_4='LOCATION=/u02/oradata/DMK/arch/arch2/arch4'
 *
 ERROR at line 1:
 ORA-02097: parameter cannot be modified because specified value is invalid
 ORA-16032: parameter LOG_ARCHIVE_DEST_4 destination string cannot be translated
 ORA-07286: sksagdi: cannot obtain device information.
 Linux-x86_64 Error: 2: No such file or directory

The log archive format is set as follows:

SQL> show parameter log_archive_format;
NAME                                 TYPE        VALUE
 ------------------------------------ ----------- ------------------------------
 log_archive_format                   string      %t_%s_%r.dbf
 SQL>

 

Now let’s see what the archive log files look like in log_archive_dest_3:

oracle@localhost:/u01/app/oracle/product/12.2.0/dbhome_1/dbs/ [DMK] ls -l /u02/oradata/DMK/arch/arch2*
 -rw-r-----. 1 oracle dba 3845120 Dec  6 05:36 /u02/oradata/DMK/arch/arch21_5_960106002.dbf

So Oracle just prepends the name of the non-existing directory (arch2) to the archive log filename and writes the file into the existing parent directory.
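If you want the files to land where the destination says, the simplest is probably to create the missing directory and reset the parameter so that the destination is re-evaluated. A small sketch, reusing the paths of this example:

mkdir -p /u02/oradata/DMK/arch/arch2
sqlplus / as sysdba <<EOF
alter system set log_archive_dest_3='LOCATION=/u02/oradata/DMK/arch/arch2';
alter system switch logfile;
EOF
ls -l /u02/oradata/DMK/arch/arch2/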

 

Cet article Naming of archivelog files with non existing top level archivelog directory est apparu en premier sur Blog dbi services.

How to reduce the size of an LVM partition formatted with xfs filesystem on CentOS7?


DISCLAIMER: I know other solutions exist to do this

Pre-requisites:
– a virtual machine (or not) with CentOS7 installed
– a free disk or partition

I use a VBox machine to which I added a new hard disk (/dev/sdb below); I will create a 5 GiB partition on it.

We list the disks and partitions to check if our new hard disk was added.

[root@deploy ~]$ lsblk
NAME                       MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                          8:0    0   20G  0 disk
├─sda1                       8:1    0    1G  0 part /boot
└─sda2                       8:2    0   19G  0 part
  ├─cl-root                253:0    0   21G  0 lvm  /
  └─cl-swap                253:1    0    2G  0 lvm  [SWAP]
sdb                          8:16   0   10G  0 disk

Good, we can continue..

Let’s partition the disk using fdisk

[root@deploy ~]$ fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x76a98fa2.

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-20971519, default 2048): 
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-20971519, default 20971519): +5G
Partition 1 of type Linux and of size 5 GiB is set

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

Now, we need to inform the kernel that the partition table has changed. To do that, either we reboot the server or we run partprobe

[root@deploy ~]$ partprobe /dev/sdb1
[root@deploy ~]$

We create a physical volume

[root@deploy ~]$ pvcreate /dev/sdb1
Physical volume "/dev/sdb1" successfully created.
[root@deploy ~]$ pvs
  PV         VG Fmt  Attr PSize  PFree
  /dev/sda2  cl lvm2 a--  19.00g       0
  /dev/sdb1     lvm2 ---   5.00g    5.00g
  /dev/sdc2  cl lvm2 a--   5.00g 1020.00m

We create a volume group

[root@deploy ~]$ vgcreate  vg_deploy /dev/sdb1
  Volume group "vg_deploy" successfully created

We check that the volume group was created properly

[root@deploy ~]$ vgdisplay vg_deploy
  --- Volume group ---
  VG Name               vg_deploy
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               5.00 GiB
  PE Size               4.00 MiB
  Total PE              1279
  Alloc PE / Size       0 / 0
  Free  PE / Size       1279 / 5.00 GiB
  VG UUID               5ZhlvC-lpor-Ti8x-mS9P-bnxW-Gdtw-Gynocl

Here, I will set the size of the logical volume in PEs (Physical Extents). One PE represents 4.00 MiB.
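So the 1000 PEs requested below correspond to 1000 x 4.00 MiB = 4000 MiB, which is the 3.91 GiB reported by lvdisplay afterwards.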

We create a logical volume on our volume group

[root@deploy ~]$ lvcreate -l 1000 -n lv_deploy vg_deploy
  Logical volume "lv_deploy" created.

Let’s have a look at what our logical volume “lv_deploy” looks like

[root@deploy ~]$ lvdisplay /dev/vg_deploy/lv_deploy
  --- Logical volume ---
  LV Path                /dev/vg_deploy/lv_deploy
  LV Name                lv_deploy
  VG Name                vg_deploy
  LV UUID                2vxcDv-AHfB-7c2x-1PM8-nbn3-38M5-c1QoNS
  LV Write Access        read/write
  LV Creation host, time deploy.example.com, 2017-12-05 08:15:59 -0500
  LV Status              available
  # open                 0
  LV Size                3.91 GiB
  Current LE             1000
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:3

Let’s create our file system on the new logical volume

[root@deploy ~]$ mkfs.xfs  /dev/vg_deploy/lv_deploy
meta-data=/dev/vg_deploy/lv_deploy isize=512    agcount=4, agsize=256000 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=1024000, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

We now create a new directory “mysqldata” for example

[root@deploy ~]$ mkdir /mysqldata

We add the new entry for our new logical volume

[root@deploy ~]$ echo "/dev/mapper/vg_deploy-lv_deploy     /mysqldata      xfs     defaults      0 0" >> /etc/fstab

We mount it

[root@deploy ~]$ mount -a

We check the filesystem is mounted properly

[root@deploy ~]$ df -hT
Filesystem                      Type      Size  Used Avail Use% Mounted on
/dev/mapper/cl-root             xfs        21G  8.7G   13G  42% /
devtmpfs                        devtmpfs  910M     0  910M   0% /dev
tmpfs                           tmpfs     920M     0  920M   0% /dev/shm
tmpfs                           tmpfs     920M  8.4M  912M   1% /run
tmpfs                           tmpfs     920M     0  920M   0% /sys/fs/cgroup
/dev/sda1                       xfs      1014M  227M  788M  23% /boot
tmpfs                           tmpfs     184M     0  184M   0% /run/user/0
/dev/loop2                      iso9660   4.3G  4.3G     0 100% /media/iso
/dev/mapper/vg_deploy-lv_deploy xfs       3.9G   33M  3.9G   1% /mysqldata

We add some files to the /mysqldata directory (a for loop will help us)

[root@deploy mysqldata]$ for i in 1 2 3 4 5; do dd if=/dev/zero  of=/mysqldata/file0$i bs=1024 count=10; done
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.000282978 s, 36.2 MB/s
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.000202232 s, 50.6 MB/s
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.000255617 s, 40.1 MB/s
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.000195752 s, 52.3 MB/s
10+0 records in
10+0 records out
10240 bytes (10 kB) copied, 0.000183672 s, 55.8 MB/s
[root@deploy mysqldata]$ ls -l
total 60
-rw-r--r--. 1 root root 10240 Dec  5 08:28 file01
-rw-r--r--. 1 root root 10240 Dec  5 08:28 file02
-rw-r--r--. 1 root root 10240 Dec  5 08:28 file03
-rw-r--r--. 1 root root 10240 Dec  5 08:28 file04
-rw-r--r--. 1 root root 10240 Dec  5 08:28 file05

NOW the interesting part is coming because we are going to reduce our /mysqldata filesystem.
But first, let’s make a backup of our current /mysqldata FS.

[root@deploy mysqldata]$ yum -y install xfsdump
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile

Bad news! We cannot reduce an XFS partition directly, so we need to:
– back up our filesystem
– unmount the filesystem && delete the logical volume
– re-create a smaller logical volume with a new XFS filesystem
– restore our data
The whole sequence is sketched right below and then detailed step by step.
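For reference, here is what the whole procedure looks like once put together as a non-interactive sketch (the labels passed with -L/-M simply avoid the xfsdump prompts; sizes and paths are the ones of this example):

# 1. backup the current filesystem
xfsdump -L mysqldata_bkp -M mysqldata_bkp -f /tmp/mysqldata.dump /mysqldata

# 2. unmount it and drop the logical volume
umount /mysqldata
lvremove -y /dev/vg_deploy/lv_deploy

# 3. recreate a smaller logical volume with a fresh XFS filesystem and remount it
lvcreate -y -l 800 -n lv_deploy vg_deploy
mkfs.xfs -f /dev/vg_deploy/lv_deploy
mount -a

# 4. restore the data
xfsrestore -f /tmp/mysqldata.dump /mysqldata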

Backup the file system

[root@deploy mysqldata]$ xfsdump -f /tmp/mysqldata.dump /mysqldata
xfsdump: using file dump (drive_simple) strategy
xfsdump: version 3.1.4 (dump format 3.0) - type ^C for status and control

 ============================= dump label dialog ==============================

please enter label for this dump session (timeout in 300 sec)
 -> test
session label entered: "test"

 --------------------------------- end dialog ---------------------------------

xfsdump: level 0 dump of deploy.example.com:/mysqldata
xfsdump: dump date: Tue Dec  5 08:36:20 2017
xfsdump: session id: f010d421-1a34-4c70-871f-48ffc48c29f2
xfsdump: session label: "test"
xfsdump: ino map phase 1: constructing initial dump list
xfsdump: ino map phase 2: skipping (no pruning necessary)
xfsdump: ino map phase 3: skipping (only one dump stream)
xfsdump: ino map construction complete
xfsdump: estimated dump size: 83840 bytes

 ============================= media label dialog =============================

please enter label for media in drive 0 (timeout in 300 sec)
 -> test
media label entered: "test"

 --------------------------------- end dialog ---------------------------------

xfsdump: creating dump session media file 0 (media 0, file 0)
xfsdump: dumping ino map
xfsdump: dumping directories
xfsdump: dumping non-directory files
xfsdump: ending media file
xfsdump: media file size 75656 bytes
xfsdump: dump size (non-dir files) : 51360 bytes
xfsdump: dump complete: 5 seconds elapsed
xfsdump: Dump Summary:
xfsdump:   stream 0 /tmp/mysqldata.dump OK (success)
xfsdump: Dump Status: SUCCESS

Then, we unmount the filesystem and delete the logical volume

[root@deploy ~]$ umount /mysqldata/

[root@deploy ~]$ df -hT
Filesystem          Type      Size  Used Avail Use% Mounted on
/dev/mapper/cl-root xfs        21G  8.7G   13G  42% /
devtmpfs            devtmpfs  910M     0  910M   0% /dev
tmpfs               tmpfs     920M     0  920M   0% /dev/shm
tmpfs               tmpfs     920M  8.4M  912M   1% /run
tmpfs               tmpfs     920M     0  920M   0% /sys/fs/cgroup
/dev/sda1           xfs      1014M  227M  788M  23% /boot
tmpfs               tmpfs     184M     0  184M   0% /run/user/0
/dev/loop2          iso9660   4.3G  4.3G     0 100% /media/iso

[root@deploy ~]$ lvremove /dev/vg_deploy/lv_deploy
Do you really want to remove active logical volume vg_deploy/lv_deploy? [y/n]: y
  Logical volume "lv_deploy" successfully removed

We recreate the logical volume with a smaller size (from 1000 PEs down to 800 PEs)

[root@deploy ~]$ lvcreate -l 800 -n lv_deploy vg_deploy
WARNING: xfs signature detected on /dev/vg_deploy/lv_deploy at offset 0. Wipe it? [y/n]: y
  Wiping xfs signature on /dev/vg_deploy/lv_deploy.
  Logical volume "lv_deploy" created.

We build the XFS filesystem

[root@deploy ~]$ mkfs.xfs /dev/mapper/vg_deploy-lv_deploy
meta-data=/dev/mapper/vg_deploy-lv_deploy isize=512    agcount=4, agsize=204800 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=819200, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

We remount the filesystem

[root@deploy ~]$ mount -a
[root@deploy ~]$
[root@deploy ~]$
[root@deploy ~]$ df -hT
Filesystem                      Type      Size  Used Avail Use% Mounted on
/dev/mapper/cl-root             xfs        21G  8.7G   13G  42% /
devtmpfs                        devtmpfs  910M     0  910M   0% /dev
tmpfs                           tmpfs     920M     0  920M   0% /dev/shm
tmpfs                           tmpfs     920M  8.4M  912M   1% /run
tmpfs                           tmpfs     920M     0  920M   0% /sys/fs/cgroup
/dev/sda1                       xfs      1014M  227M  788M  23% /boot
tmpfs                           tmpfs     184M     0  184M   0% /run/user/0
/dev/loop2                      iso9660   4.3G  4.3G     0 100% /media/iso
/dev/mapper/vg_deploy-lv_deploy xfs       3.2G   33M  3.1G   2% /mysqldata

We list the content of the /mysqldata directory

[root@deploy ~]$ ls -l /mysqldata
total 0

Let’s restore our data

[root@deploy ~]$ xfsrestore -f /tmp/mysqldata.dump /mysqldata
xfsrestore: using file dump (drive_simple) strategy
xfsrestore: version 3.1.4 (dump format 3.0) - type ^C for status and control
xfsrestore: searching media for dump
xfsrestore: examining media file 0
xfsrestore: dump description:
xfsrestore: hostname: deploy.example.com
xfsrestore: mount point: /mysqldata
xfsrestore: volume: /dev/mapper/vg_deploy-lv_deploy
xfsrestore: session time: Tue Dec  5 08:36:20 2017
xfsrestore: level: 0
xfsrestore: session label: "test"
xfsrestore: media label: "test"
xfsrestore: file system id: 84832e04-e6b8-473a-beb4-f4d59ab9e73c
xfsrestore: session id: f010d421-1a34-4c70-871f-48ffc48c29f2
xfsrestore: media id: 8fda43c1-c7de-4331-b930-ebd88199d0e7
xfsrestore: using online session inventory
xfsrestore: searching media for directory dump
xfsrestore: reading directories
xfsrestore: 1 directories and 5 entries processed
xfsrestore: directory post-processing
xfsrestore: restoring non-directory files
xfsrestore: restore complete: 0 seconds elapsed
xfsrestore: Restore Summary:
xfsrestore:   stream 0 /tmp/mysqldata.dump OK (success)
xfsrestore: Restore Status: SUCCESS

Our data are back

[root@deploy ~]$ ls -l /mysqldata/
total 60
-rw-r--r--. 1 root root 10240 Dec  5 08:28 file01
-rw-r--r--. 1 root root 10240 Dec  5 08:28 file02
-rw-r--r--. 1 root root 10240 Dec  5 08:28 file03
-rw-r--r--. 1 root root 10240 Dec  5 08:28 file04
-rw-r--r--. 1 root root 10240 Dec  5 08:28 file05

Hope this helps :-)

 

Cet article How to reduce the size of an LVM partition formatted with xfs filesystem on CentOS7? est apparu en premier sur Blog dbi services.

SQL Server Tips: an orphan user owns a database role


A few days ago, I conducted an audit to detect all orphan Windows accounts in a database and I was surprised to get an error when running the DROP USER query.

 

The first step is to find all orphan Windows accounts in the database

USE [dbi_database]
GO

/*Step1: Search the orphan user */
SELECT * FROM  sys.database_principals a
LEFT OUTER JOIN sys.server_principals b ON a.sid = b.sid
WHERE b.sid IS NULL
AND   a.type In ('U', 'G')
AND   a.principal_id > 4

 

I find the user called “dbi\orphan_user” and run the query to drop it

/*Drop Orphan User*/
DROP USER [dbi\orphan_user]
GO

orphan_user01

But as you can see, I receive the error message:

Msg 15421, Level 16, State 1, Line4

“The database principal owns a database role and cannot be dropped.”

 

This user is the owner of database roles…

Be careful it is not this error message:

Msg 15138, Level 16, State 1, Line 4

The database principal owns a schema in the database, and cannot be dropped.

In this case, the user is owner on schema.

Do not confuse these two error messages:

  • Msg 15421 is for database role
  • Msg 15138 is for schema

 

The goal is to search for all database roles owned by the user dbi\orphan_user

/*Search database roles owned by this orphan user*/
SELECT dp2.name, dp1.name FROM sys.database_principals AS dp1
JOIN sys.database_principals AS dp2
ON dp1.owning_principal_id = dp2.principal_id
WHERE dp1.type = 'R' AND dp2.name = 'dbi\orphan_user';

As you can see in my SELECT, I use the view sys.database_principals twice in order to cross-check the owning_principal_id against the principal_id.

orphan_user02

After that, I change the owner of these roles to the right one (dbo by default).

/*Change the owner of these database roles*/
ALTER AUTHORIZATION ON ROLE::<database role> TO dbo;

orphan_user03

And I drop the orphan user without problems…

/*Drop Orphan User*/
DROP USER [dbi\orphan_user]
GO

orphan_user04

To finish, I give you a Santa Claus gift:

I also rewrote the query to generate the “ALTER AUTHORIZATION” statement directly in the SELECT. You just have to copy/paste the result and execute it:

SELECT dp2.name, dp1.name, 'ALTER AUTHORIZATION ON ROLE::' + dp1.name + ' TO dbo;' as query
FROM sys.database_principals AS dp1
JOIN sys.database_principals AS dp2
ON dp1.owning_principal_id = dp2.principal_id
WHERE dp1.type = 'R' AND dp2.name = 'dbi\orphan_user';
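If you have many roles (or many databases) to fix, the generated statements can also be produced and executed from a Linux shell with sqlcmd. This is only a sketch; the server name and database are placeholders to adapt:

# generate the ALTER AUTHORIZATION statements into a file...
sqlcmd -S MyServer -d dbi_database -h -1 -W -Q "SET NOCOUNT ON; SELECT 'ALTER AUTHORIZATION ON ROLE::' + dp1.name + ' TO dbo;' FROM sys.database_principals dp1 JOIN sys.database_principals dp2 ON dp1.owning_principal_id = dp2.principal_id WHERE dp1.type = 'R' AND dp2.name = 'dbi\orphan_user';" -o fix_roles.sql

# ...review the file, then execute it
sqlcmd -S MyServer -d dbi_database -i fix_roles.sql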

 

Et voila! 8-)

 

Cet article SQL Server Tips: an orphan user owns a database role est apparu en premier sur Blog dbi services.

#UKOUG_TECH17


Award

ukoug_tech17_award_paul_fitton

I’ve received an award for an article I wrote last year: the Most Read Oracle Scene Article in 2016 – Technology. I like to write for Oracle Scene magazine. It is rare today to have a magazine that is both available in a paper version and publicly available on the web. And I must say that, as an author, all the people behind it are doing great work. Thanks to them and thanks for the award. Seeing that what I write helps is the motivation to keep doing it.

The article is: Is your AWR/Statspack Report Relevant.

This is the occasion to link to other articles I’ve written for the UKOUG Magazine. Statspack, because not everybody has Enterprise Edition with optional Diagnostic Pack: Improving Statspack Experience. One on the parameter I hate the most: CBO Choice between Index & Full Scan: the Good, the Bad & the Ugly parameters. Another about the statistics that should replace this awful parameter: Demystifying WORKLOAD System Statistics Gathering.

ukoug_tech17_award_keynote

Two interesting features in 12c: 12c Online Statistics Gathering & Session Private Statistics. A big change that happened in 12.2.0.2 concerning availability: I/O Error on Datafile: Instance Crash or Datafile Offline?. My SLOB performance tests when ODA Lite came with NVMe SSD: Performance for All Editions on ODA X6-2S/M. And finally, the article on the great Oracle features for physical transport/clone/migration: From Transportable Tablespaces to Pluggable Databases.

I’ve stolen a few pictures to illustrate this blog post, from UKOUG photo albums, and from friends.

Presentations

Pres1

The article on Transportable Tablespaces and Pluggable Databases is actually based on one presentation I did. I was in that big Hall 1 [I realize that some may make a joke about this, but I’m talking about the room] where I look like Ant-Man from the back of the room. But doing live demos is great on this huge screen.

The idea for this presentation came 1 year ago when preparing a 5-minute talk for the Oracle Open World 2016 ACED talks (blog post about this) and this subject is so exciting that I wanted to share more about it. Online PDB clone and relocate will be the features you will like the most when going to Multitenant.

pres2

My second presentation was more developer focused, exposing all the Join Methods that can magically construct your query result in record time or, when a bad Join Method is chosen, leave your 2-second query still running after one hour. I explained the join methods in 3 ways: the theory with .ppt animations, the execution plan with dbms_xplan, and live while running queries with SQL Monitor, showing the different operations and the A-Rows increasing in real time.

I was also co-presenting in a roundtable on Oracle Cloud PaaS, sharing my little experience with DBaaS. Everybody talks about the Cloud and it is good to talk about the problems encountered and how to deal with them.

Round tables

Despite the tough competition from good sessions, I also attended a few round tables. Those conferences are a good opportunity to meet and share with other users and product managers. Oracle is a big company, and we sometimes think that they only care about their biggest customers, but that is not accurate. There are several product managers who really listen to customers. A great one was the discussion about something that has been slowly changing for a few years: the documentation, which was really precise in the past – explaining the When, Why, and How – is now more vague.
IMG_4806

Community

Geeking out at the Oracle Developer Community lounge, eating something decent near the conference center, drinking while discussing technical stuff, the ACE dinner (and the first ACED briefing outside the US), posing with award winners, and the amazing chocolate tasting…


 

Cet article #UKOUG_TECH17 est apparu en premier sur Blog dbi services.
