
Deploy WebLogic docker images using Docker Toolbox and Virtual Box on Windows


I was interested in running Docker on my Windows machine and found Docker Toolbox for Windows, which configures itself against the already installed VirtualBox at installation time.

Once installed, you can start the Docker QuickStart shell, preconfigured for a Docker command-line environment. At startup it starts a VM named default and is then ready to work with Docker.
Starting "default"...
(default) Check network to re-create if needed...
(default) Waiting for an IP...
Machine "default" was started.
Waiting for SSH to be available...
Detecting the provisioner...
Started machines may have new IP addresses. You may need to re-run the `docker-machine env` command.
Regenerate TLS machine certs? Warning: this is irreversible. (y/n): Regenerating TLS certificates
Waiting for SSH to be available...
Detecting the provisioner...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...

                        ##         .
                  ## ## ##        ==
               ## ## ## ## ##    ===
           /"""""""""""""""""\___/ ===
      ~~~ {~~ ~~~~ ~~~ ~~~~ ~~~ ~ /  ===- ~~~
           \______ o           __/
             \    \         __/
              \____\_______/

docker is configured to use the default machine with IP 192.168.99.100
For help getting started, check out the docs at https://docs.docker.com

Start interactive shell
$
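The VM itself can also be managed directly with the docker-machine command line; a quick reminder of the usual subcommands:

$ docker-machine ls
$ docker-machine status default
$ docker-machine stop default
$ docker-machine start default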
The “docker-machine env” command displays the machine environment that has been created:
$ docker-machine env
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="C:\Users\PBR\.docker\machine\machines\default"
export DOCKER_MACHINE_NAME="default"
export COMPOSE_CONVERT_WINDOWS_PATHS="true"
# Run this command to configure your shell:
# eval $("C:\Program Files\Docker Toolbox\docker-machine.exe" env)

Here is how to directly set the environment from it:

$ eval $("C:\Program Files\Docker Toolbox\docker-machine.exe" env)

Once the environment is set, the Docker daemon information can be displayed as follows:


$ docker info
Containers: 9
Running: 0
Paused: 0
Stopped: 9
Images: 2
Server Version: 18.06.0-ce
Storage Driver: aufs
Root Dir: /mnt/sda1/var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 34
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: d64c661f1d51c48782c9cec8fda7604785f93587
runc version: 69663f0bd4b60df09991c08812a60108003fa340
init version: fec3683
Security Options:
seccomp
Profile: default
Kernel Version: 4.9.93-boot2docker
Operating System: Boot2Docker 18.06.0-ce (TCL 8.2.1); HEAD : 1f40eb2 - Thu Jul 19 18:48:09 UTC 2018
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 1.955GiB
Name: default
ID: AV7B:Z7GA:ZWLU:SNMY:ALYL:WTCT:2X2F:NHPY:2TRP:VK27:JY3L:PHJO
Docker Root Dir: /mnt/sda1/var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Username: pbrand
Registry: https://index.docker.io/v1/
Labels:
provider=virtualbox
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false

I will use a Docker image provided by Oracle on the Docker Store: the Oracle WebLogic 12.2.1.3 image. First, I need to sign in to the Docker Store:

docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username: pbrand
Password:
Login Succeeded

Then I can pull the Oracle WebLogic 12.2.1.3 image


docker pull store/oracle/weblogic:12.2.1.3
12.2.1.3: Pulling from store/oracle/weblogic
9fd8609e6e4d: Pull complete
eac7b4a33e34: Pull complete
b6f7d13c859b: Pull complete
e0ca246b2272: Pull complete
7ba4d6bfba43: Pull complete
5e3b8c4731f0: Pull complete
97623ceb6339: Pull complete
Digest: sha256:4c7ce451c093329784a2808a55cd4fc4f1e93d8444b8492a24148d283936add9
Status: Downloaded newer image for store/oracle/weblogic:12.2.1.3

Display all images now present in my local Docker:


$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
store/oracle/weblogic 12.2.1.3 c6bb22ff0ea8 2 weeks ago 1.14GB

In the Docker repository, for the Oracle WebLogic 12.2.1.3 image, it is documented that the administrator user should be provided through a domain.properties file with the format below, mounted in the command line used to start the Docker image.
The format of the domain.properties file is one key-value pair per line:

username=myadminusername
password=myadminpassword
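On Windows, this file can be prepared from the Docker Quickstart (MinGW) shell before starting the container. A minimal sketch, assuming the same C:\Users\PBR\docker_weblogic folder that is used in the run command further down:

$ mkdir -p /c/Users/PBR/docker_weblogic
$ cat > /c/Users/PBR/docker_weblogic/domain.properties <<'EOF'
username=myadminusername
password=myadminpassword
EOF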

The command line suggested is the following:


$ docker run -d -p 7001:7001 -p 9002:9002 -v $PWD/domain.properties:/u01/oracle/properties/domain.properties store/oracle/weblogic:12.2.1.3

This run command is fine on Linux but does not suit a Windows environment: on Windows the domain.properties file was created on the C: drive, and the volume mapping can't use environment variables like PWD.
In my case, the Docker run command to execute is the following:

$ docker run -d --name wls12213 -p 7001:7001 -p 9002:9002 -v //c/Users/PBR/docker_weblogic/domain.properties:/u01/oracle/properties/domain.properties store/oracle/weblogic:12.2.1.3
670fc3bd2c8131b71ecc6a182181d1f03a4832a4c0e8d9d530e325e759afe151

With the -d option, only the container ID is displayed, no logs.

Checking the logs using the docker logs command:

$ docker logs wls12213

Initializing WebLogic Scripting Tool (WLST) ...

Welcome to WebLogic Server Administration Scripting Shell

Type help() for help on available commands

domain_name : [base_domain]
admin_listen_port : [7001]
domain_path : [/u01/oracle/user_projects/domains/base_domain]
production_mode : [prod]
admin name : [AdminServer]
administration_port_enabled : [true]
administration_port : [9002]

I noticed from the logs that the Administration channel is enabled and listening on HTTPS Port 9002. The URL to browse to the WebLogic Administration Console is then:
https://192.168.99.100:9002/console
wls12213_console_servers1
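To double-check that the container is running and which ports are published, the standard Docker commands can be used:

$ docker ps --filter name=wls12213
$ docker port wls12213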

 

The article Deploy WebLogic docker images using Docker Toolbox and Virtual Box on Windows appeared first on Blog dbi services.


RMAN PITR recover table Oracle 12c


At one client's site, I had to restore a table someone had partially deleted one week before. Before Oracle 12c, we had to duplicate the target database to another server and then export and import the data back into the target database. Depending on the database size, this could take a lot of time, and as nobody knew when the delete had happened, it was more practical to use the RMAN recover table command in order to get multiple versions of the table content.

First, as a precaution, we save the application table:

SQL> create table appuser.employe_save as select * from appuser.employe;

Table created.

My backups are configured on SBT_TAPE with DD Boost, so I thought I only had to run a command such as:

run {
ALLOCATE CHANNEL C1 DEVICE TYPE SBT_TAPE PARMS 'BLKSIZE=1048576, 
SBT_LIBRARY=/opt/dpsapps/dbappagent/lib/lib64/libddboostora.so, 
SBT_PARMS=(CONFIG_FILE=/opt/dpsapps/dbappagent/config/oracle_ddbda_proddb.cfg)' 
FORMAT '%d_%U' ;
recover table appuser.employe
until time "to_date('16-AUG-2018 08:00:00','DD-MON-YYYY HH24:MI:SS')"
auxiliary destination '/tmp/proddb/aux';
}

But I got this error message:

RMAN-03002: failure of recover command at 08/23/2018 10:50:04
RMAN-03015: error occurred in stored script Memory Script
RMAN-06026: some targets not found - aborting restore
RMAN-06101: no channel to restore a backup or copy of the control file

The problem is documented in bug 17089942: the table recovery fails when channels are allocated manually within a run block. The solution consists in defining the channel device type in the RMAN configuration:

rman target /

Recovery Manager: Release 12.1.0.2.0 - Production on Thu Aug 23 13:52:39 2018

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: PRODDB (DBID=271333692)
connected to recovery catalog database

RMAN> configure channel device type sbt_tape parms 'BLKSIZE=1048576, 
SBT_LIBRARY=/opt/dpsapps/dbappagent/lib/lib64/libddboostora.so, 
SBT_PARMS=(CONFIG_FILE=/opt/dpsapps/dbappagent/config/oracle_ddbda_proddb.cfg)';

starting full resync of recovery catalog
full resync complete
new RMAN configuration parameters:
CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS  'BLKSIZE=1048576, 
SBT_LIBRARY=/opt/dpsapps/dbappagent/lib/lib64/libddboostora.so, 
SBT_PARMS=(CONFIG_FILE=/opt/dpsapps/dbappagent/config/oracle_ddbda_proddb.cfg)';
new RMAN configuration parameters are successfully stored
starting full resync of recovery catalog
full resync complete

Then, connected with RMAN, we can run the following recover command in order to restore the employe table under the new name employe_16082018:

RMAN> run {
 recover table appuser.employe
until time "to_date('16-AUG-2018 08:00:00','DD-MON-YYYY HH24:MI:SS')"
auxiliary destination '/tmp/proddb/aux'
remap table appuser.employe:employe_16082018;
}

What happens? Oracle creates an auxiliary database under /tmp/proddb/aux with the SYSTEM, SYSAUX, TEMP, UNDO and application data tablespaces, then restores the appuser.employe table as of the specified time and imports it under the specified new name. Finally, Oracle deletes the auxiliary database.

RMAN> run {
2> recover table appuser.employe
3> until time "to_date('16-AUG-2018 08:00:00','DD-MON-YYYY HH24:MI:SS')"
4> auxiliary destination '/tmp/PRODDB/aux'
5> remap table appuser.employe:employe_16082018;
6> }

Starting recover at 23-AUG-2018 14:03:05
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=765 device type=DISK
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: SID=2562 device type=SBT_TAPE
channel ORA_SBT_TAPE_1: database app agent Oracle v4.5.0.0

Creating automatic instance, with SID='ecvh'

initialization parameters used for automatic instance:
db_name=PRODDB
db_unique_name=ecvh_pitr_PRODDB
compatible=12.1.0.2.0
db_block_size=8192
db_files=200
diagnostic_dest=/u00/app/oracle
_system_trig_enabled=FALSE
sga_target=2560M
processes=200
db_create_file_dest=/tmp/PRODDB/aux
log_archive_dest_1='location=/tmp/PRODDB/aux'
#No auxiliary parameter file used

…..

Performing import of tables...
   IMPDP> Master table "SYS"."TSPITR_IMP_ecvh_cesy" successfully loaded/unloaded
   IMPDP> Starting "SYS"."TSPITR_IMP_ecvh_cesy":
   IMPDP> Processing object type TABLE_EXPORT/TABLE/TABLE
   IMPDP> Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
   IMPDP> . . imported "APPUSER"."EMPLOYE_16082018"           7.137 MB   16173 rows
   IMPDP> Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
   IMPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
   IMPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
   IMPDP> Job "SYS"."TSPITR_IMP_ecvh_cesy" successfully completed at Thu Aug 23 14:10:28 2018 elapsed 0 00:00:10
Import completed


Removing automatic instance
Automatic instance removed
auxiliary instance file /tmp/PRODDB/aux/PRODDB_CI/datafile/o1_mf_temp_fqx8w89p_.tmp deleted
auxiliary instance file /tmp/PRODDB/aux/ECVH_PITR_PRODDB/onlinelog/o1_mf_3_fqx90jyn_.log deleted
auxiliary instance file /tmp/PRODDB/aux/ECVH_PITR_PRODDB/onlinelog/o1_mf_2_fqx90hyd_.log deleted
auxiliary instance file /tmp/PRODDB/aux/ECVH_PITR_PRODDB/onlinelog/o1_mf_1_fqx90gwo_.log deleted
auxiliary instance file /tmp/PRODDB/aux/ECVH_PITR_PRODDB/datafile/o1_mf_affac_1_fqx8xybx_.dbf deleted
auxiliary instance file /tmp/PRODDB/aux/PRODDB_CI/datafile/o1_mf_sysaux_fqx8p1p7_.dbf deleted
auxiliary instance file /tmp/PRODDB/aux/PRODDB_CI/datafile/o1_mf_undotbs1_fqx8nskn_.dbf deleted
auxiliary instance file /tmp/PRODDB/aux/PRODDB_CI/datafile/o1_mf_system_fqx8olyx_.dbf deleted
auxiliary instance file /tmp/PRODDB/aux/PRODDB_CI/controlfile/o1_mf_fqx8nb57_.ctl deleted
auxiliary instance file tspitr_ecvh_63884.dmp deleted
Finished recover at 23-AUG-2018 14:10:29

The recovery was quite fast, so I was able to run multiple recoveries at different points in time, which allowed me to narrow down when the delete command had happened.
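For instance, an earlier point in time can be tried by reusing the same run block with another timestamp and remap name. This is only a sketch based on the command above; the exact time is illustrative, chosen to match the employe_22072018 copy listed below:

RMAN> run {
 recover table appuser.employe
 until time "to_date('22-JUL-2018 08:00:00','DD-MON-YYYY HH24:MI:SS')"
 auxiliary destination '/tmp/proddb/aux'
 remap table appuser.employe:employe_22072018;
}

The different table versions can then be compared: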

SQL> select table_name from all_tables where owner = 'APPUSER' and table_name like 'EMPLOYE%'

TABLE_NAME
--------------------------------------------------------------------------------
EMPLOYE
EMPLOYE_16082018
EMPLOYE_22072018
EMPLOYE_SAVE

SQL> select count(*) from appuser.employe_22072018;

  COUNT(*)
----------
     16141

SQL> r
  1* select count(*) from appuser.employe_16082018

  COUNT(*)
----------
     16173

SQL> select count(*) from appuser.employe;

  COUNT(*)
----------
     16226

I had already tested this recover feature on my own virtual machine against a test database. Running this recover command on a production database allowed me to discover the Oracle bug that appears when your backups are on tape. Finally, DD Boost with RMAN is so fast that you should not hesitate to restore tables with Oracle 12c, even with large data volumes.

 

The article RMAN PITR recover table Oracle 12c appeared first on Blog dbi services.

COMMIT


By Franck Pachot

.
COMMIT is the SQL statement that ends a transaction, with two goals: persistence (changes are durable) and sharing (changes are visible to others). That's a weird title and introduction for the 499th blog post I write on the dbi-services blog. 499 posts in nearly 5 years, roughly two blog posts per week. This activity was mainly motivated by the will to persist and share what I learn every day.

Persistence is primarily for myself: writing a test case with a little explanation is a good way to remember an issue encountered, and Google helps to get back to it when the problem is encountered again later. Sharing is partly for others: I learn a lot from what others are sharing (blogs, forums, articles, mailing lists,…) and it makes sense to also share what I learn. But in addition to that, publishing an idea is also a good way to validate it. If something is partially wrong or badly explained, or just benefits from exchanging ideas, then I'll get feedback through comments, tweets, or e-mails.

This high throughput of things I learn every day has multiple sources. In a consulting company, going from one customer to another means different platforms, versions, editions, different requirements, different approaches. Our added value is our experience. From all the problems seen in all those environments, we have built knowledge, best practices and tools (this is the idea of DMK) to bring a reliable and efficient solution to customers' projects. But dbi services also invests a lot in research and training, in order to build this knowledge pro-actively, before encountering the problems at customers. A lot of blog posts were motivated by lab problems only (beta testing, learning new features, setting up a proof of concept before proposing it to a customer), which were then encountered later at customers, with faster solutions as they had been investigated before. dbi services also provides workshops for all technologies, and preparing the training exercises, as well as giving the workshops, was another great source of blog posts.

I must say that dbi services is an amazing company in this area. Five years ago, I blogged in French on developpez.com and answered forums such as dba-village.com, and wrote a few articles for SOUG. But as soon as I started at dbi services, I passed the OCM, I presented for the first time in public, at DOAG, and then at many local and international conferences. I attended my first Oracle Open World. I became ACE and later ACE Director. The blogging activity is one aspect only. What the dbi services Technology Organization produces is amazing, for the benefit of the customers and the consultants.

You may have heard that I’m going to work in the database team at CERN, which means quiescing my consulting and blogging activity here. For sure I’ll continue to share, but probably differently. Maybe on the Databases at CERN blog, and probably posting on Medium. Blogs will be also replicated to http://www.oaktable.net/ of course. Anyway, it is easy to find me on LinkedIn or Twitter. For sure I’ll be at conferences and probably not only Oracle ones.

I encourage you to continue to follow the dbi services blog, as I'll do. Many colleagues are already sharing on all technologies, and new ones are coming. Even if my goal was the opposite, I'm aware that publishing so often may have discouraged other authors from doing so; I'm now releasing some bandwidth to them. The dbi services blog is in 9th position in the Top-100 Oracle blogs and in 27th position in the Top-60 Database blogs, with 6 blog posts a week on average. And a lot of non-database topics are covered as well. So stay tuned on https://blog.dbi-services.com/.

 

The article COMMIT appeared first on Blog dbi services.

SQL Server Tips: Drop a database-user attached to a service…


A few weeks ago, I had a little issue when I tried to drop a database user without login.

Unfortunately, I made a little mistake at the beginning…
Every morning I receive a report, built on the useful command sp_validatelogins, telling me whether all the AD logins (computers, groups, users) registered on the SQL Server instances still exist in the AD.
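For reference, the check itself is simply:

EXEC sp_validatelogins;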
This report indicated that a computer account dbi\server_name$ was no longer in the AD.
I dropped the login without problem and without checking whether database users were mapped to it (that was my mistake…). :-?

The day after, I received another alert telling me that I had an orphan database user in the SCOM database OperationsManager12.

My reaction was to connect to the instance and drop the user, as I usually do when I get this alert.
error_drop_server01
As you can see, I receive the error message:
Msg 15284, Level 16, State 1, Line 15
The database principal has granted or denied permissions to objects in the database and cannot be dropped.

I googled the error and found some explanations.
The user has granted permissions on Service Broker services, and I use this query to find them:

select * from sys.database_permissions where grantor_principal_id = user_id('dbi\server_name$')

error_drop_server02
The user is linked to the service with ID 65536. I now search for the service behind this ID.
error_drop_server03
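The lookup can be done against the Service Broker catalog view, for example:

SELECT name, service_id
FROM sys.services
WHERE service_id = 65536;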
With the name of the service, I can revoke the SEND permission from this user.
error_drop_server04
And I receive this error:
Cannot grant, deny, or revoke permissions to sa, dbo, entity owner, information_schema, sys, or yourself.
With Google's help, I retried using the EXECUTE AS command with the computer account as the user:

EXECUTE AS USER= 'dbi\server_name$'
REVOKE SEND ON SERVICE::Service_mid10_39_40_55_pid4288_adid2_r479087710 FROM [dbi\server_name$]
REVERT

error_drop_server05
As expected, I received a new error:
Msg 15404, Level 16, State 11, Line 37
Could not obtain information about Windows NT group/user ‘BISAD\WBNSS55$’, error code 0x534

The login no longer exists, so it is normal to get this error.
And now, what to do?
The only workaround I found is to drop the service, drop the user and recreate the service with dbo as owner (before dropping it, script out the CREATE SERVICE statement):

DROP SERVICE [Service_mid10_39_40_55_pid4288_adid2_r479087710]
GO

error_drop_server06

USE [OperationsManager12]
GO
DROP USER [dbi\server_name$]
GO

error_drop_server07

CREATE SERVICE [Service_mid10_39_40_55_pid4288_adid2_r479087710]  ON QUEUE [dbo].[Queue_mid10_39_40_55_pid4288_adid2_r479087710] ([http://schemas.microsoft.com/SQL/Notifications/PostQueryNotification])
GO

error_drop_server08
Et voila! It was a little bit tricky to find a solution, but this one works! 8-)

 

The article SQL Server Tips: Drop a database-user attached to a service… appeared first on Blog dbi services.

Install Dbvisit 8 on Windows


In a previous blog we described how to install Dbvisit Standby on a Linux box. In this article I am going to describe the installation on a Windows machine. We are using Dbvisit 8 and Windows Server 2016. The names of my servers are winserver1 and winserver2.
The first thing you have to do is to download the Dbvisit Standby package here. A trial key will be sent to you. Before starting the installation we create a user named dbvisit (feel free to change the name) with the following properties:
img1
The dbvisit user also needs the privilege to log on as a service.
img2
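Creating that account can also be scripted. A minimal sketch from an elevated command prompt (the "Log on as a service" right itself still has to be granted, for example through the Local Security Policy, secpol.msc):

net user dbvisit * /add
net user dbvisit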
The installation is very easy: just log in with a privileged user and run the executable. Below is the installation on the server winserver2. Note that Dbvisit Standby needs to be installed on both servers, and note also that we have turned off the Windows User Account Control (UAC).

Dbvnet, Dbvagent and Standby CLI components install

img3
Click on Next.
img4
Click on I Agree (anyway, we don't have the choice).
img5
Choose the components to install. Note that the central console is not installed at this stage; we will install it later. It is recommended to install the console on a separate server (this can be a VM on Windows or Linux).
img6
Click on Next.
img7
Here we provide the user we created at the beginning.
img8
We provide the password.
img9
And then the installation starts.
img10
The final step is to answer some configuration questions:


-----------------------------------------------------------
About to configure DBVISIT DBVNET
-----------------------------------------------------------

>>> Please specify the Dbvnet Passphrase to be used for secure connections.

The passphrase provided must be the same in both the local and remote
Dbvnet installations. It is used to establish a secure (encrypted)
Dbvnet connections

Enter a custom value:
> XXXXXXXXXXXXXXXXX

>>> Please specify the Local host name to be used by Dbvnet on this server.

Dbvnet will be listening on the local IP Address on this server which
resolve to the host name specified here.
If using a cluster or virtual IP make sure the host name or alias
specified here resolve to the IP address local to where dbvnet is
installed. The host name should resolve to IPv4 address, if not
you can use an IPv4 IP address instead of host name.

Enter a custom value or press ENTER to accept default [winserver2]:
>

>>> Please specify the Local Dbvnet PORT to be used.

Dbvnet will be listening on the specified port for incoming connections
from remote dbvnet connections. Please make sure that this port is not
already in use or blocked by any firewall. You may choose any value
between 1024 and 65535, however the default of 7890 is recommended.

Enter a custom value or press ENTER to accept default [7890]:
>

>>> Please specify the Remote host name to be used by Dbvnet.

By default Dbvnet will use this remote hostname for any remote
connections. Dbvnet must be installed and configured on the specified
remote host. If using a cluster or virtual IP make sure the host name
or alias specified here resolve to the IP address local to where dbvnet
is installed.
If you are unsure about the remote host name during installation, use
the default value which will be the current local hostname.
The host name should resolve to IPv4 address, if not
you can use an IPv4 IP address instead of host name.

Enter a custom value or press ENTER to accept default [winserver2]:
> winserver1

>>> Please specify the Remote Dbvnet PORT to be used.

Dbvnet will connect to the remote server on this specified port.
On the remote host Dbvnet will be listening on the specified port for
incoming connections. Please make sure that this port is not already in
use or blocked by any firewall. You may choose any value between 1024
and 65535, however the default of 7890 is recommended.

Enter a custom value or press ENTER to accept default [7890]:
>

-----------------------------------------------------------
Summary of the Dbvisit DBVNET configuration
-----------------------------------------------------------
DBVISIT_BASE C:\Program Files\Dbvisit
DBVNET_PASSPHRASE XXXXXXXXXXXX
DBVNET_LOCAL_HOST winserver2
DBVNET_LOCAL_PORT 7890
DBVNET_REMOTE_HOST winserver1
DBVNET_REMOTE_PORT 7890

Press ENTER to continue

-----------------------------------------------------------
About to configure DBVISIT DBVAGENT
-----------------------------------------------------------

>>> Please specify the host name to be used for the Dbvisit Agent.

The Dbvisit Agent (Dbvagent) will be listening on this local address.
If you are using the Dbvserver (GUI) - connections from the GUI will be
established to the Dbvisit Agent. The Dbvisit Agent address must be
visible from the Dbvserver (GUI) installation.
If using a cluster or virtual IP make sure the host name or alias
specified here resolve to the IP address local to where dbvnet is
installed.
The host name should resolve to IPv4 address, if not you can use
an IPv4 IP address instead of host name.

Enter a custom value or press ENTER to accept default [winserver2]:
>

>>> Please specify the listening PORT number for Dbvagent.

The Dbvisit Agent (Dbvagent) will listening on the specified port for
incoming requests from the GUI (Dbvserver). Please make sure that this
port is not already in use or blocked by any firewall. You may choose
any value between 1024 and 65535, however the default of 7891 is
recommended.

Enter a custom value or press ENTER to accept default [7891]:
>

>>> Please specify passphrase for Dbvagent

Each Dbvisit Agent must have a passpharse specified. This passphrase
does not have to match between all the servers. It will be used to
establish a secure connection between the GUI (Dbvserver) and the
Dbvisit Agent.

Enter a custom value:
> XXXXXXXXXXXXXXXXXXXX

-----------------------------------------------------------
Summary of the Dbvisit DBVAGENT configuration
-----------------------------------------------------------
DBVISIT_BASE C:\Program Files\Dbvisit
DBVAGENT_LOCAL_HOST winserver2
DBVAGENT_LOCAL_PORT 7891
DBVAGENT_PASSPHRASE XXXXXXXXXXXXXXXXXXX

Press ENTER to continue

No need to configure standby, skipped.
Copied file C:\Program Files\Dbvisit\dbvnet\conf\dbvnetd.conf to C:\Program Files\Dbvisit\dbvnet\conf\dbvnetd.conf.201808260218
DBVNET config file updated
Copied file C:\Program Files\Dbvisit\dbvagent\conf\dbvagent.conf to C:\Program Files\Dbvisit\dbvagent\conf\dbvagent.conf.201808260218
DBVAGENT config file updated

-----------------------------------------------------------
Component Installed Version
-----------------------------------------------------------
standby 8.0.22_36_gb602000a
dbvnet 8.0.22_36_gb602000a
dbvagent 8.0.22_36_gb602000a
dbvserver not installed

-----------------------------------------------------------

-----------------------------------------------------------
About to start service DBVISIT DBVNET
-----------------------------------------------------------
Successfully started service DBVISIT DBVNET

-----------------------------------------------------------
About to start service DBVISIT DBVAGENT
-----------------------------------------------------------
Successfully started service DBVISIT DBVAGENT

>>> Installation completed
Install log C:\Users\dbvisit\AppData\Local\Temp\dbvisit_install.log.201808260214.

Press ENTER to continue

We can then finish the installation.
img11
Dbvserver console install

The installation of the Dbvserver (central console) is done in the same way. We will not show screenshots, only the questions we have to answer. In our case it is installed on the primary server winserver1.

-----------------------------------------------------------
Welcome to the Dbvisit software installer.
-----------------------------------------------------------

Installing dbvserver...

-----------------------------------------------------------
About to configure DBVISIT DBVSERVER
-----------------------------------------------------------

>>> Please specify the host name to be used for Dbvserver

The Dbvisit Web Server (Dbvserver) will be listening on this local
address. If using a cluster or virtual IP make sure the host name or
alias specified here resolve to the IP address local to where Dbvserver
is installed.
If you are unsure about the remote host name during installation, use
the default value which will be the current local hostname.
The host name should resolve to IPv4 address, if not you can use
an IPv4 IP address instead of host name.

Enter a custom value or press ENTER to accept default [winserver1]:
>
>>> Please specify the listening port number for Dbvserver on the local server

You may choose any value between 1024 and 65535. The default recommended
value is 4433.

Note: if you can not access this port after the installation has
finished, then please double-check your server firewall settings
to ensure the selected port is open.
Enter a custom value or press ENTER to accept default [4433]:
>
>>> Please specify the host name (or IPv4 address) to be used for Dbvserver public interface

In most cases this will be the same as the listener address, if not sure
use the same value as the listener address.

The Dbvisit Web Server (Dbvserver) will be listening on the local
listener address. The public address can be set to an external IP
example a firewall address in case the Central Console (Dbvserver)
and agents (Primary and Standby Database servers) have a firewall
inbetween them. The public interface address will be passed to
the agents during communication for sending information back.
If you are unsure about the public host address, use
the default value which will be the current local hostname.
The host name should resolve to IPv4 address, if not you can use
an IPv4 IP address instead of host name.
Enter a custom value or press ENTER to accept default [winserver1]:
>

-----------------------------------------------------------
Summary of the Dbvisit DBVSERVER configuration
-----------------------------------------------------------
DBVISIT_BASE C:\Program Files\Dbvisit
DBVSERVER_LOCAL_HOST winserver1
DBVSERVER_LOCAL_PORT 4433
DBVSERVER_PUBLIC_HOST winserver1

Press ENTER to continue

-----------------------------------------------------------
Summary of the Dbvisit DBVSERVER configuration
-----------------------------------------------------------
DBVISIT_BASE C:\Program Files\Dbvisit
DBVSERVER_LOCAL_HOST winserver1
DBVSERVER_LOCAL_PORT 4433
DBVSERVER_PUBLIC_HOST winserver1

Press ENTER to continue

Once the console is installed, we can log in using the following URL:

https://winserver1:4433

with the default credentials admin/admin (note that you have to change them once logged in).

Conclusion: in this blog we have shown how to install Dbvisit on a Windows server. In a coming blog we will see how to create a standby database on a Windows server and how to schedule the log shipping and log apply.

 

The article Install Dbvisit 8 on Windows appeared first on Blog dbi services.

Dbvisit 8 Standby Daemon on Windows


In this previous blog, we installed Dbvisit Standby for Windows on both servers. We suppose that the database is created and that the standby is configured (see this blog). The steps are the same as on Linux, except that the commands are launched in a traditional DOS terminal or a PowerShell. We will not describe these steps here (see here for more details).
By default, Dbvisit will neither send nor apply the archived log files. On Windows we can use the Windows Scheduler or, since Dbvisit 8, the Daemon.
To use the Windows Scheduler, just remember that the command to be scheduled is:

dbvctl.exe -d PROD

- dbvctl.exe is located in the Dbvisit install directory
- PROD is the name of the database
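If you go the Windows Scheduler route, the task can also be created from the command line. This is only a sketch, assuming Dbvisit Standby was installed under C:\Program Files\Dbvisit and that a 5-minute interval is wanted; adjust the path, interval and credentials to your setup:

schtasks /Create /SC MINUTE /MO 5 /TN "Dbvisit_PROD" /TR "\"C:\Program Files\Dbvisit\standby\dbvctl.exe\" -d PROD" /RU dbvisit /RP *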

Since Dbvisit 8, it is no longer necessary to use the Windows Scheduler to send and apply archived logs. We can use the new background process option, which runs Dbvisit Standby for each DDC (database) in the background, and manage it via the Central Console.
When using Windows, the background process runs as a Windows service: a service is created on the primary and on the standby server for each database. Below is an example of how to create the service for the database PROD.
img1
And choose the DATABASE ACTIONS tab
img2
Choose Daemon Actions
img3
When we select a host, we see the status of the daemon on this host. We then create the daemon using the INSTALL button.
img4
We provide the credentials of the user dbvisit. This user will be the owner of the service created for the daemon. And let's submit.
img5
Everything is OK. If we click again on DAEMON ACTIONS and select the host winserver1, we now see:
img6
And we can start the daemon.
img7
The same steps have to be done on both servers.
We should then have a Windows service configured with automatic startup.
img8
And now Dbvisit will send and apply the archived logs as soon as they are generated.

 

The article Dbvisit 8 Standby Daemon on Windows appeared first on Blog dbi services.

SQL Server Tips: How many different datetime are in my column and what the delta?


A few months ago, a customer asked me to find out, for one column, how many rows share the same date & time and what the delta is between these dates & times.
The column is populated by the function CURRENT_TIMESTAMP and used as a key.
I know it's not good to use it as a key, but some developers do not develop their SQL correctly (no need to comment on that!)…

This usage produces a lot of duplicate keys, and the customer wanted to know how many there are and the delta between each date & time.

To reproduce this, I create a little example with a temporary table that has a single datetime column:

CREATE TABLE [#tmp_time_count] (dt datetime not null)

I insert the CURRENT_TIMESTAMP in the table a thousand times to have data to play with:
INSERT INTO [#tmp_time_count] SELECT CURRENT_TIMESTAMP
Go 1000

To see how many different datetime values I have, I just need to use DISTINCT inside a COUNT:

SELECT COUNT(DISTINCT dt) as [number_of_time_diff] from [#tmp_time_count]

datetime_diff_01
In my test, I found 36 different times for 1000 rows.
The question now is how many rows share the same date & time and what the gap between consecutive values is…
To get this information, I tried a lot of things and finally wrote this query, with a LEFT JOIN on the same table and a DATEPART on the datetime column:

SELECT DISTINCT [current].dt as [Date&Time],
       DATEPART(MILLISECOND, ISNULL([next].dt, 0) - [current].dt) as [time_diff]
FROM [#tmp_time_count] as [current]
LEFT JOIN [#tmp_time_count] as [next]
       ON [next].dt = (SELECT MIN(dt) FROM [#tmp_time_count] WHERE dt > [current].dt)

datetime_diff_02
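If only the number of rows per identical datetime value is needed (without the delta), a simple GROUP BY gives it directly:

SELECT dt, COUNT(*) AS nb_rows
FROM [#tmp_time_count]
GROUP BY dt
ORDER BY dt;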
Don't forget to drop the table at the end:

DROP TABLE [#tmp_time_count];

Et voila! I hope this little query can help you in a similar situation…

 

The article SQL Server Tips: How many different datetime are in my column and what the delta? appeared first on Blog dbi services.

Documentum – Silent Install – Remote Docbases/Repositories (HA)


In previous blogs, we installed, in silent mode, the Documentum binaries, a docbroker (+ licence(s) if needed), several repositories and finally D2. In this one, we will see how to install remote docbases/repositories in order to have a High Availability environment with the docbases/repositories we already installed.

As mentioned in the first blog of this series, there is a utility under "$DM_HOME/install/silent/silenttool" that can be used to generate a skeleton for a CFS/Remote CS, but some parameters are still missing, so I will describe them in this blog.

In this blog, I will also configure the Global Repository (GR) in HA so that you have it available even if the first node fails… This is particularly important if, like me, you prefer to set the GR as the crypto repository (so it is the repository used for encryption/decryption).

 

1. Documentum Remote Global Registry repository installation

The properties file for a Remote GR installation is as follows (it assumes that you already have the binaries and a docbroker installed on this Remote CS):

[dmadmin@content_server_02 ~]$ vi /tmp/dctm_install/CFS_Docbase_GR.properties
[dmadmin@content_server_02 ~]$ cat /tmp/dctm_install/CFS_Docbase_GR.properties
### Silent installation response file for a Remote Docbase (GR)
INSTALLER_UI=silent
KEEP_TEMP_FILE=true

### Action to be executed
SERVER.COMPONENT_ACTION=CREATE

### Docbase parameters
common.aek.passphrase.password=a3kP4ssw0rd
common.aek.key.name=CSaek
common.aek.algorithm=AES_256_CBC
SERVER.ENABLE_LOCKBOX=true
SERVER.LOCKBOX_FILE_NAME=lockbox.lb
SERVER.LOCKBOX_PASSPHRASE.PASSWORD=l0ckb0xP4ssw0rd

SERVER.DOCUMENTUM_DATA=
SERVER.DOCUMENTUM_SHARE=
SERVER.FQDN=content_server_02.dbi-services.com

SERVER.DOCBASE_NAME=gr_docbase
SERVER.PRIMARY_SERVER_CONFIG_NAME=gr_docbase
CFS_SERVER_CONFIG_NAME=content_server_02_gr_docbase
SERVER.DOCBASE_SERVICE_NAME=gr_docbase
SERVER.REPOSITORY_USERNAME=dmadmin
SERVER.SECURE.REPOSITORY_PASSWORD=dm4dm1nP4ssw0rd
SERVER.REPOSITORY_USER_DOMAIN=
SERVER.REPOSITORY_USERNAME_WITH_DOMAIN=dmadmin
SERVER.REPOSITORY_HOSTNAME=content_server_01.dbi-services.com

SERVER.USE_CERTIFICATES=false

SERVER.PRIMARY_CONNECTION_BROKER_HOST=content_server_01.dbi-services.com
SERVER.PRIMARY_CONNECTION_BROKER_PORT=1489
SERVER.PROJECTED_CONNECTION_BROKER_HOST=content_server_02.dbi-services.com
SERVER.PROJECTED_CONNECTION_BROKER_PORT=1489

SERVER.DFC_BOF_GLOBAL_REGISTRY_VALIDATE_OPTION_IS_SELECTED=true
SERVER.PROJECTED_DOCBROKER_HOST_OTHER=content_server_01.dbi-services.com
SERVER.PROJECTED_DOCBROKER_PORT_OTHER=1489
SERVER.GLOBAL_REGISTRY_REPOSITORY=gr_docbase
SERVER.BOF_REGISTRY_USER_LOGIN_NAME=dm_bof_registry
SERVER.SECURE.BOF_REGISTRY_USER_PASSWORD=dm_b0f_reg1s7ryP4ssw0rd

[dmadmin@content_server_02 ~]$
[dmadmin@content_server_02 ~]$
[dmadmin@content_server_02 ~]$ sed -i "s,SERVER.DOCUMENTUM_DATA=.*,SERVER.DOCUMENTUM_DATA=$DOCUMENTUM/data," /tmp/dctm_install/CFS_Docbase_GR.properties
[dmadmin@content_server_02 ~]$ sed -i "s,SERVER.DOCUMENTUM_SHARE=.*,SERVER.DOCUMENTUM_SHARE=$DOCUMENTUM/share," /tmp/dctm_install/CFS_Docbase_GR.properties
[dmadmin@content_server_02 ~]$

 

Just like in the previous blog, I will let you set the DATA and SHARE folders as you wish.

A short description of these properties:

  • INSTALLER_UI: The mode to use for the installation, here it is obviously silent
  • KEEP_TEMP_FILE: Whether or not you want to keep the temporary files created by the installer. These files are generated under the /tmp folder. I usually keep them because I want to be able to check them if something went wrong
  • SERVER.COMPONENT_ACTION: The action to be executed, it can be either CREATE, UPGRADE or DELETE. You can upgrade a Documentum environment in silent even if the source doesn’t support the silent installation/upgrade as long as the target version (CS 7.3, CS 16.4, …) does
  • common.aek.passphrase.password: The password used for the AEK on the Primary CS
  • common.aek.key.name: The name of the AEK key used on the Primary CS. This is usually something like “CSaek”
  • common.aek.algorithm: The algorithm used for the AEK key. I would recommend the strongest one, if possible: “AES_256_CBC”
  • SERVER.ENABLE_LOCKBOX: Whether or not you used a Lockbox to protect the AEK key on the Primary CS. If set to true, the lockbox will be downloaded from the Primary CS, that’s why you don’t need the “common.use.existing.aek.lockbox” property
  • SERVER.LOCKBOX_FILE_NAME: The name of the Lockbox used on the Primary CS. This is usually something like “lockbox.lb”
  • SERVER.LOCKBOX_PASSPHRASE.PASSWORD: The password used for the Lockbox on the Primary CS
  • SERVER.DOCUMENTUM_DATA: The path to be used to store the Documentum documents, accessible from all Content Servers which will host this docbase/repository
  • SERVER.DOCUMENTUM_SHARE: The path to be used for the share folder
  • SERVER.FQDN: The Fully Qualified Domain Name of the current host the docbase/repository is being installed on
  • SERVER.DOCBASE_NAME: The name of the docbase/repository created on the Primary CS (dm_docbase_config.object_name)
  • SERVER.PRIMARY_SERVER_CONFIG_NAME: The name of the dm_server_config object created on the Primary CS
  • CFS_SERVER_CONFIG_NAME: The name of dm_server_config object to be created for this Remote CS
  • SERVER.DOCBASE_SERVICE_NAME: The name of the service to be used
  • SERVER.REPOSITORY_USERNAME: The name of the Installation Owner. I believe it can be any superuser account but I didn’t test it
  • SERVER.SECURE.REPOSITORY_PASSWORD: The password of the above account
  • SERVER.REPOSITORY_USER_DOMAIN: The domain of the above account. If using an inline user like the Installation Owner, you should keep it empty
  • SERVER.REPOSITORY_USERNAME_WITH_DOMAIN: Same value as the REPOSITORY_USERNAME if the USER_DOMAIN is kept empty
  • SERVER.REPOSITORY_HOSTNAME: The Fully Qualified Domain Name of the Primary CS
  • SERVER.USE_CERTIFICATES: Whether or not to use SSL Certificate for the docbase/repository (it goes with the SERVER.CONNECT_MODE). If you set this to true, you will have to add the usual additional parameters, just like for the Primary CS
  • SERVER.PRIMARY_CONNECTION_BROKER_HOST: The Fully Qualified Domain Name of the Primary CS
  • SERVER.PRIMARY_CONNECTION_BROKER_PORT: The port used by the docbroker/connection broker on the Primary CS
  • SERVER.PROJECTED_CONNECTION_BROKER_HOST: The hostname to be used for the [DOCBROKER_PROJECTION_TARGET] in the server.ini file, meaning the docbroker/connection broker the docbase/repository should project to by default
  • SERVER.PROJECTED_CONNECTION_BROKER_PORT: The port to be used for the [DOCBROKER_PROJECTION_TARGET] in the server.ini file, meaning the docbroker/connection broker the docbase/repository should project to by default
  • SERVER.DFC_BOF_GLOBAL_REGISTRY_VALIDATE_OPTION_IS_SELECTED: Whether or not you want to validate the GR on the Primary CS. I always set this to true for the first docbase/repository installed on the Remote CS (in other words: for the GR installation). If you set this to true, you will have to provide some additional parameters:
    • SERVER.PROJECTED_DOCBROKER_HOST_OTHER: The Fully Qualified Domain Name of the docbroker/connection broker that the GR on the Primary CS projects to so this is usually the Primary CS…
    • SERVER.PROJECTED_DOCBROKER_PORT_OTHER: The port of the docbroker/connection broker that the GR on the Primary CS projects to
    • SERVER.GLOBAL_REGISTRY_REPOSITORY: The name of the GR repository
    • SERVER.BOF_REGISTRY_USER_LOGIN_NAME: The name of the BOF Registry account created on the Primary CS inside the GR repository. This is usually something like “dm_bof_registry”
    • SERVER.SECURE.BOF_REGISTRY_USER_PASSWORD: The password used by the BOF Registry account

 

Once the properties file is ready, first make sure the gr_docbase is running on the “Primary” CS (content_server_01) and then start the CFS installer using the following commands:

[dmadmin@content_server_02 ~]$ dmqdocbroker -t content_server_01.dbi-services.com -p 1489 -c getservermap gr_docbase
dmqdocbroker: A DocBroker Query Tool
dmqdocbroker: Documentum Client Library Version: 7.3.0040.0025
Using specified port: 1489
**************************************************
**     D O C B R O K E R    I N F O             **
**************************************************
Docbroker host            : content_server_01.dbi-services.com
Docbroker port            : 1490
Docbroker network address : INET_ADDR: 12 123 12345678 content_server_01.dbi-services.com 123.123.123.123
Docbroker version         : 7.3.0050.0039  Linux64
**************************************************
**           S E R V E R     M A P              **
**************************************************
Docbase gr_docbase has 1 server:
--------------------------------------------
  server name         :  gr_docbase
  server host         :  content_server_01.dbi-services.com
  server status       :  Open
  client proximity    :  1
  server version      :  7.3.0050.0039  Linux64.Oracle
  server process id   :  12345
  last ckpt time      :  6/12/2018 14:23:35
  next ckpt time      :  6/12/2018 14:28:35
  connect protocol    :  TCP_RPC
  connection addr     :  INET_ADDR: 12 123 12345678 content_server_01.dbi-services.com 123.123.123.123
  keep entry interval :  1440
  docbase id          :  1010101
  server dormancy status :  Active
--------------------------------------------
[dmadmin@content_server_02 ~]$
[dmadmin@content_server_02 ~]$ $DM_HOME/install/dm_launch_cfs_server_config_program.sh -f /tmp/dctm_install/CFS_Docbase_GR.properties

 

Don’t forget to check the logs once done to make sure it went without issue!

 

2. Other Remote repository installation

Once you have the Remote Global Registry repository installed, you can install the Remote repository that will be used by the end-users (so not a GR this time). The properties file for an additional remote repository is as follows:

[dmadmin@content_server_02 ~]$ vi /tmp/dctm_install/CFS_Docbase_Other.properties
[dmadmin@content_server_02 ~]$ cat /tmp/dctm_install/CFS_Docbase_Other.properties
### Silent installation response file for a Remote Docbase
INSTALLER_UI=silent
KEEP_TEMP_FILE=true

### Action to be executed
SERVER.COMPONENT_ACTION=CREATE

### Docbase parameters
common.aek.passphrase.password=a3kP4ssw0rd
common.aek.key.name=CSaek
common.aek.algorithm=AES_256_CBC
SERVER.ENABLE_LOCKBOX=true
SERVER.LOCKBOX_FILE_NAME=lockbox.lb
SERVER.LOCKBOX_PASSPHRASE.PASSWORD=l0ckb0xP4ssw0rd

SERVER.DOCUMENTUM_DATA=
SERVER.DOCUMENTUM_SHARE=
SERVER.FQDN=content_server_02.dbi-services.com

SERVER.DOCBASE_NAME=Docbase1
SERVER.PRIMARY_SERVER_CONFIG_NAME=Docbase1
CFS_SERVER_CONFIG_NAME=content_server_02_Docbase1
SERVER.DOCBASE_SERVICE_NAME=Docbase1
SERVER.REPOSITORY_USERNAME=dmadmin
SERVER.SECURE.REPOSITORY_PASSWORD=dm4dm1nP4ssw0rd
SERVER.REPOSITORY_USER_DOMAIN=
SERVER.REPOSITORY_USERNAME_WITH_DOMAIN=dmadmin
SERVER.REPOSITORY_HOSTNAME=content_server_01.dbi-services.com

SERVER.USE_CERTIFICATES=false

SERVER.PRIMARY_CONNECTION_BROKER_HOST=content_server_01.dbi-services.com
SERVER.PRIMARY_CONNECTION_BROKER_PORT=1489
SERVER.PROJECTED_CONNECTION_BROKER_HOST=content_server_02.dbi-services.com
SERVER.PROJECTED_CONNECTION_BROKER_PORT=1489

[dmadmin@content_server_02 ~]$
[dmadmin@content_server_02 ~]$
[dmadmin@content_server_02 ~]$ sed -i "s,SERVER.DOCUMENTUM_DATA=.*,SERVER.DOCUMENTUM_DATA=$DOCUMENTUM/data," /tmp/dctm_install/CFS_Docbase_Other.properties
[dmadmin@content_server_02 ~]$ sed -i "s,SERVER.DOCUMENTUM_SHARE=.*,SERVER.DOCUMENTUM_SHARE=$DOCUMENTUM/share," /tmp/dctm_install/CFS_Docbase_Other.properties
[dmadmin@content_server_02 ~]$

 

I won't list all these parameters again because, as you can see above, it is exactly the same except for the docbase/repository name; only the last section regarding the GR validation isn't needed anymore. Once the properties file is ready, you can install the additional remote repository in the same way:

[dmadmin@content_server_02 ~]$ dmqdocbroker -t content_server_01.dbi-services.com -p 1489 -c getservermap Docbase1
dmqdocbroker: A DocBroker Query Tool
dmqdocbroker: Documentum Client Library Version: 7.3.0040.0025
Using specified port: 1489
**************************************************
**     D O C B R O K E R    I N F O             **
**************************************************
Docbroker host            : content_server_01.dbi-services.com
Docbroker port            : 1490
Docbroker network address : INET_ADDR: 12 123 12345678 content_server_01.dbi-services.com 123.123.123.123
Docbroker version         : 7.3.0050.0039  Linux64
**************************************************
**           S E R V E R     M A P              **
**************************************************
Docbase Docbase1 has 1 server:
--------------------------------------------
  server name         :  Docbase1
  server host         :  content_server_01.dbi-services.com
  server status       :  Open
  client proximity    :  1
  server version      :  7.3.0050.0039  Linux64.Oracle
  server process id   :  23456
  last ckpt time      :  6/12/2018 14:46:42
  next ckpt time      :  6/12/2018 14:51:42
  connect protocol    :  TCP_RPC
  connection addr     :  INET_ADDR: 12 123 12345678 content_server_01.dbi-services.com 123.123.123.123
  keep entry interval :  1440
  docbase id          :  1010102
  server dormancy status :  Active
--------------------------------------------
[dmadmin@content_server_02 ~]$
[dmadmin@content_server_02 ~]$ $DM_HOME/install/dm_launch_cfs_server_config_program.sh -f /tmp/dctm_install/CFS_Docbase_Other.properties

 

At this point, you will have the second dm_server_config object created for each docbase/repository, but that's pretty much all you get… For a correct/working HA solution, you will still need to configure the jobs for HA support (is_restartable, method_verb, …), maybe change the checkpoint_interval, configure the projections, trust the needed DFC clients (JMS applications), and so on…
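As a purely illustrative example, and only an assumption on my side since the exact attributes and values depend on your Documentum version, such a job flag could be flipped with a DQL statement through idql, for instance:

-- hypothetical sketch: mark a job as restartable for HA (verify the attribute name and type for your version first)
UPDATE dm_job OBJECTS
SET is_restartable = TRUE
WHERE object_name = 'dm_ConsistencyChecker';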

 

You now know how to install and configure a Global Registry repository as well as any other docbase/repository on a “Remote” Content Server (CFS) using the silent installation provided by Documentum.

 

The article Documentum – Silent Install – Remote Docbases/Repositories (HA) appeared first on Blog dbi services.


Documentum – Silent Install – xPlore binaries & Dsearch


In previous blogs, we installed, in silent mode, the Documentum binaries (CS), a docbroker (+ licence(s) if needed), several repositories (here and here) and finally D2. I believe I only have two blogs left and they are both related to xPlore. In this one, we will see how to install the xPlore binaries as well as how to configure a first instance (a Dsearch here) on top of them.

Just like other Documentum components, you can find some silent installation files or at least a template for the xPlore part. On the Full Text side, it is actually easier to find these silent files because they are included directly into the tar installation package so you will be able to find the following files as soon as you extract the package (xPlore 1.6):

  • installXplore.properties: Contains the template to install the FT binaries
  • configXplore.properties: Contains the template to install a FT Dsearch (primary, secondary) or a CPS only
  • configIA.properties: Contains the template to install a FT IndexAgent

 

In addition to that, and contrary to most Documentum components, you can actually find documentation about most of the xPlore silent parameters, so if you have questions, you can check the documentation.

 

1. Documentum xPlore binaries installation

The properties file for the xPlore binaries installation is really simple:

[xplore@full_text_server_01 ~]$ cd /tmp/xplore_install/
[xplore@full_text_server_01 xplore_install]$ tar -xvf xPlore_1.6_linux-x64.tar
[xplore@full_text_server_01 xplore_install]$
[xplore@full_text_server_01 xplore_install]$ chmod 750 setup.bin
[xplore@full_text_server_01 xplore_install]$ rm xPlore_1.6_linux-x64.tar
[xplore@full_text_server_01 xplore_install]$
[xplore@full_text_server_01 xplore_install]$ ls *.properties
configIA.properties  configXplore.properties  installXplore.properties
[xplore@full_text_server_01 xplore_install]$
[xplore@full_text_server_01 xplore_install]$ vi FT_Installation.properties
[xplore@full_text_server_01 xplore_install]$ cat FT_Installation.properties
### Silent installation response file for FT binary
INSTALLER_UI=silent
KEEP_TEMP_FILE=true

### Installation parameters
common.installLocation=/opt/xPlore
SMTP_HOST=localhost
ADMINISTRATOR_EMAIL_ADDRESS=xplore@full_text_server_01.dbi-services.com

[xplore@full_text_server_01 xplore_install]$

 

A short description of these properties:

  • INSTALLER_UI: The mode to use for the installation, here it is obviously silent
  • KEEP_TEMP_FILE: Whether or not you want to keep the temporary files created by the installer. These files are generated under the /tmp folder. I usually keep them because I want to be able to check them if something went wrong
  • common.installLocation: The path you want to install xPlore on. This will be the base folder under which the binaries will be installed. I put here /opt/xPlore but you can use whatever you want
  • SMTP_HOST: The host to target for the SMTP (emails)
  • ADMINISTRATOR_EMAIL_ADDRESS: The email address to be used for the watchdog. If you do not specify the SMTP_HOST and ADMINISTRATOR_EMAIL_ADDRESS properties, the watchdog configuration will end up with a non-fatal error, meaning that the binaries installation will still work without issue but you will have to add these manually afterwards if you want to use the watchdog. If you don't want to use it, you can go ahead without it; the Dsearch and IndexAgents will work properly, but you obviously lose the features that the watchdog brings

 

Once the properties file is ready, you can install the Documentum xPlore binaries in silent using the following command:

[xplore@full_text_server_01 xplore_install]$ ./setup.bin -f FT_Installation.properties
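A quick sanity check after the binaries installation is to list the install location; the paths below simply reuse the values from the properties file above:

[xplore@full_text_server_01 xplore_install]$ ls /opt/xPlore
[xplore@full_text_server_01 xplore_install]$ ls /opt/xPlore/setup/dsearch
[xplore@full_text_server_01 xplore_install]$ ls /opt/xPlore/java64/JAVA_LINK/bin/java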

 

2. Documentum xPlore Dsearch installation

I will use the word "Dsearch" a lot below, but this section can actually be used to install any instance type: Primary Dsearch, Secondary Dsearch or even CPS only. Once you have the binaries installed, you can install a first Dsearch (usually named PrimaryDsearch or PrimaryEss) that will be used for the Full Text indexing. The properties file for this component is as follows:

[xplore@full_text_server_01 xplore_install]$ vi FT_Dsearch_Installation.properties
[xplore@full_text_server_01 xplore_install]$ cat FT_Dsearch_Installation.properties
### Silent installation response file for Dsearch
INSTALLER_UI=silent
KEEP_TEMP_FILE=true

### Installation parameters
common.installLocation=/opt/xPlore
COMMON.DCTM_USER_DIR_WITH_FORWARD_SLASH=/opt/xPlore
common.64bits=true
COMMON.JAVA64_HOME=/opt/xPlore/java64/JAVA_LINK

### Configuration mode
ess.configMode.primary=1
ess.configMode.secondary=0
ess.configMode.upgrade=0
ess.configMode.delete=0
ess.configMode.cpsonly=0

### Other configurations
ess.primary=true
ess.sparenode=0

ess.data_dir=/opt/xPlore/data
ess.config_dir=/opt/xPlore/config

ess.primary_host=full_text_server_01.dbi-services.com
ess.primary_port=9300
ess.xdb-primary-listener-host=full_text_server_01.dbi-services.com
ess.xdb-primary-listener-port=9330
ess.transaction_log_dir=/opt/xPlore/config/wal/primary

ess.name=PrimaryDsearch
ess.FQDN=full_text_server_01.dbi-services.com

ess.instance.password=ds34rchAdm1nP4ssw0rd
ess.instance.port=9300

ess.ess.active=true
ess.cps.active=false
ess.essAdmin.active=true

ess.xdb-listener-port=9330
ess.admin-rmi-port=9331
ess.cps-daemon-port=9321
ess.cps-daemon-local-port=9322

common.installOwner.password=ds34rchAdm1nP4ssw0rd
admin.username=admin
admin.password=ds34rchAdm1nP4ssw0rd

[xplore@full_text_server_01 xplore_install]$

 

A short description of these properties:

  • INSTALLER_UI: The mode to use for the installation, here it is obviously silent
  • KEEP_TEMP_FILE: Whether or not you want to keep the temporary files created by the installer. These files are generated under the /tmp folder. I usually keep them because I want to be able to check them if something went wrong
  • common.installLocation: The path you installed xPlore on. I put here /opt/xPlore but you can use whatever you want
  • COMMON.DCTM_USER_DIR_WITH_FORWARD_SLASH: Same value as “common.installLocation” for linux but for Windows, you need to change double back-slash with forward slash
  • common.64bits: Whether or not the system supports a 64 bits architecture
  • COMMON.JAVA64_HOME: The path of the JAVA_HOME that has been installed with the binaries. If you installed xPlore under /opt/xPlore, then this value should be: /opt/xPlore/java64/JAVA_LINK
  • ess.configMode.primary: Whether or not you want to install a Primary Dsearch (binary value)
  • ess.configMode.secondary: Whether or not you want to install a Secondary Dsearch (binary value)
  • ess.configMode.upgrade: Whether or not you want to upgrade an instance (binary value)
  • ess.configMode.delete: Whether or not you want to delete an instance (binary value)
  • ess.configMode.cpsonly: Whether or not you want to install a CPS only and not a Primary/Secondary Dsearch (binary value)
  • ess.primary: Whether or not this instance is a primary instance (set this to true if installing a primary instance)
  • ess.sparenode: Whether or not the secondary instance is to be used as a spare node. This should be set to 1 only if “ess.configMode.secondary=1″ and you want it to be a spare node only
  • ess.data_dir: The path to be used to contain the instance data. For a single-node, this is usually /opt/xPlore/data and for a multi-node, it needs to be a shared folder between the different nodes of the multi-node
  • ess.config_dir: Same as “ess.data_dir” but for the config folder
  • ess.primary_host: The Fully Qualified Domain Name of the primary Dsearch this new instance will be linked to. Here we are installing a Primary Dsearch so it is the local host
  • ess.primary_port: The port that the primary Dsearch is/will be using
  • ess.xdb-primary-listener-host: The Fully Qualified Domain Name of the host where the xDB has been installed on for the primary Dsearch. This is usually the same value as “ess.primary_host”
  • ess.xdb-primary-listener-port: The port that the xDB is/will be using for the primary Dsearch. This is usually the value of “ess.primary_port” + 30
  • ess.transaction_log_dir: The path to be used to store the xDB transaction logs. This is usually under the “ess.config_dir” folder (E.g.: /opt/xPlore/config/wal/primary)
  • ess.name: The name of the instance to be installed. For a primary Dsearch, it is usually something like PrimaryDsearch
  • ess.FQDN: The Fully Qualified Domain Name of the current host the instance is being installed on
  • ess.instance.password: The password to be used for the new instance (xDB Administrator & superuser). Using the GUI installer, you can only set 1 password and it will be used for everything (JBoss admin, xDB Administrator, xDB superuser). In silent, you can separate them a little bit, if you want to
  • ess.instance.port: The port of the instance to be installed. For a primary Dsearch, it is usually 9300
  • ess.ess.active: Whether or not you want to enable/deploy the Dsearch (set this to true if installing a primary or secondary instance)
  • ess.cps.active: Whether or not you want to enable/deploy the CPS (already included in the Dsearch so set this to true only if installing a CPS Only)
  • ess.essAdmin.active: Whether or not you want to enable/deploy the Dsearch Admin
  • ess.xdb-listener-port: The port to be used by the xDB for the instance to be installed. For a primary Dsearch, it is usually “ess.instance.port” + 30
  • ess.admin-rmi-port: The port to be used by the RMI for the instance to be installed. For a primary Dsearch, it is usually “ess.instance.port” + 31
  • ess.cps-daemon-port: I’m not sure what this is used for because the correct port for the CPS daemon0 (on a primary Dsearch) is the next parameter, but I know that the default value is usually “ess.instance.port” + 21. It is possible that this parameter is only used when the new instance is a CPS Only, because this port (instance port + 21) is used on a CPS Only host as Daemon0, so it would make sense… To be confirmed!
  • ess.cps-daemon-local-port: The port to be used by the CPS daemon0 for the instance to be installed. For a primary Dsearch, it is usually “ess.instance.port” + 22. You need a few ports available after this one in case you are going to have several CPS daemons (9322, 9323, 9324, …)
  • common.installOwner.password: The password of the xPlore installation owner. I assume this is only used on Windows environments for the service setup because on linux, I always set a dummy password and there is no issue
  • admin.username: The name of the JBoss instance admin account to be created
  • admin.password: The password of the above-mentioned account

 

Once the properties file is ready, you can install the Documentum xPlore instance in silent using the following command:

[xplore@full_text_server_01 xplore_install]$ /opt/xPlore/setup/dsearch/dsearchConfig.bin LAX_VM "/opt/xPlore/java64/JAVA_LINK/bin/java" -f FT_Dsearch_Installation.properties
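If you want a quick sanity check once the installer returns (this is not part of the installer itself, just a minimal sketch), you could verify that the new instance is running and that the Dsearch admin application answers on the port defined above:

[xplore@full_text_server_01 xplore_install]$ ps -ef | grep -i PrimaryDsearch | grep -v grep
[xplore@full_text_server_01 xplore_install]$ curl -k -s -o /dev/null -w "%{http_code}\n" http://full_text_server_01.dbi-services.com:9300/dsearchadmin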

 

You now know how to install the Full Text binaries and a first instance on top of it using the silent installation provided by Documentum.

 

Cet article Documentum – Silent Install – xPlore binaries & Dsearch est apparu en premier sur Blog dbi services.

SCAN listener does not know about service


When trying to connect to a database via the SCAN listener in a RAC environment with sqlplus, an ORA-12514 error is thrown, although tnsping can resolve the connect string and connecting to the same database through the node listener with sqlplus succeeds.

One possible reason could be that the remote_listener parameter of the database you want to connect to is not set to the SCAN listener of the RAC cluster.

So try to set remote_listener to SCAN_LISTENER_HOST:SCAN_LISTENER_PORT (e.g. with host scan_host and port 1521):

alter system set remote_listener = 'scan_host:1521' scope=both;
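A minimal sketch of how you could verify it afterwards (LISTENER_SCAN1 is the usual default SCAN listener name, adapt it to your cluster):

show parameter remote_listener
alter system register;

-- then, with the grid infrastructure environment set on a cluster node:
$ lsnrctl status LISTENER_SCAN1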

 

Cet article SCAN listener does not know about service est apparu en premier sur Blog dbi services.

Documentum – Silent Install – xPlore IndexAgent


In previous blogs, we installed in silent the Documentum binaries (CS), a docbroker (+licence(s) if needed), several repositories (here and here), D2 and finally the xPlore binaries & Dsearch. This blog will be the last one of this series related to silent installation on Documentum and it will be about how to install an xPlore IndexAgent on the existing docbase/repository created previously.

So let’s start, as always, with the preparation of the properties file:

[xplore@full_text_server_01 ~]$ vi /tmp/xplore_install/FT_IA_Installation.properties
[xplore@full_text_server_01 ~]$ cat /tmp/xplore_install/FT_IA_Installation.properties
### Silent installation response file for Indexagent
INSTALLER_UI=silent
KEEP_TEMP_FILE=true

### Installation parameters
common.installLocation=/opt/xPlore
COMMON.DCTM_USER_DIR_WITH_FORWARD_SLASH=/opt/xPlore
common.64bits=true
COMMON.JAVA64_HOME=/opt/xPlore/java64/JAVA_LINK

### Configuration mode
indexagent.configMode.create=1
indexagent.configMode.upgrade=0
indexagent.configMode.delete=0
indexagent.configMode.create.migration=0

### Other configurations
indexagent.ess.host=full_text_server_01.dbi-services.com
indexagent.ess.port=9300

indexagent.name=Indexagent_Docbase1
indexagent.FQDN=full_text_server_01.dbi-services.com
indexagent.instance.port=9200
indexagent.instance.password=ind3x4g3ntAdm1nP4ssw0rd

indexagent.docbase.name=Docbase1
indexagent.docbase.user=dmadmin
indexagent.docbase.password=dm4dm1nP4ssw0rd

indexagent.connectionBroker.host=content_server_01.dbi-services.com
indexagent.connectionBroker.port=1489

indexagent.globalRegistryRepository.name=gr_docbase
indexagent.globalRegistryRepository.user=dm_bof_registry
indexagent.globalRegistryRepository.password=dm_b0f_reg1s7ryP4ssw0rd

indexagent.storage.name=default
indexagent.local_content_area=/opt/xPlore/wildfly9.0.1/server/DctmServer_Indexagent_Docbase1/data/Indexagent_Docbase1/export

common.installOwner.password=ind3x4g3ntAdm1nP4ssw0rd

[xplore@full_text_server_01 ~]$

 

A short description of these properties:

  • INSTALLER_UI: The mode to use for the installation, here it is obviously silent
  • KEEP_TEMP_FILE: Whether or not you want to keep the temporary files created by the installer. These files are generated under the /tmp folder. I usually keep them because I want to be able to check them if something went wrong
  • common.installLocation: The path you installed xPlore on. I put here /opt/xPlore but you can use whatever you want
  • COMMON.DCTM_USER_DIR_WITH_FORWARD_SLASH: Same value as “common.installLocation” for linux but for Windows, you need to change double back-slash with forward slash
  • common.64bits: Whether the below-mentioned Java is a 32 or 64 bits version
  • COMMON.JAVA64_HOME: The path of the JAVA_HOME that has been installed with the binaries. If you installed xPlore under /opt/xPlore, then this value should be: /opt/xPlore/java64/JAVA_LINK
  • indexagent.configMode.create: Whether or not you want to install an IndexAgent (binary value)
  • indexagent.configMode.upgrade: Whether or not you want to upgrade an IndexAgent (binary value)
  • indexagent.configMode.delete: Whether or not you want to delete an IndexAgent (binary value)
  • indexagent.configMode.create.migration: This isn’t used anymore in recent installer versions but I still don’t know what its usage was before… In any case, set this to 0 ;)
  • indexagent.ess.host: The Fully Qualified Domain Name of the primary Dsearch this new IndexAgent will be linked to
  • indexagent.ess.port: The port that the primary Dsearch is using
  • indexagent.name: The name of the IndexAgent to be installed. The default name is usually Indexagent_<docbase_name>
  • indexagent.FQDN: The Fully Qualified Domain Name of the current host the IndexAgent is being installed on
  • indexagent.instance.port: The port that the IndexAgent is/will be using (HTTP)
  • indexagent.instance.password: The password to be used for the new IndexAgent JBoss admin
  • indexagent.docbase.name: The name of the docbase/repository that this IndexAgent is being installed for
  • indexagent.docbase.user: The name of an account on the target docbase/repository to be used to configure the objects (updating the dm_server_config, dm_ftindex_agent_config, and so on) and that has the needed permissions for that
  • indexagent.docbase.password: The password of the above-mentioned account
  • indexagent.connectionBroker.host: The Fully Qualified Domain Name of the target docbroker/connection broker that is aware of the “indexagent.docbase.name” docbase/repository. This will be used in the dfc.properties
  • indexagent.connectionBroker.port: The port of the target docbroker/connection broker that is aware of the “indexagent.docbase.name” docbase/repository. This will be used in the dfc.properties
  • indexagent.globalRegistryRepository.name: The name of the GR repository
  • indexagent.globalRegistryRepository.user: The name of the BOF Registry account created on the CS inside the GR repository. This is usually something like “dm_bof_registry”
  • indexagent.globalRegistryRepository.password: The password used by the BOF Registry account
  • indexagent.storage.name: The name of the storage location to be created. The default one is “default”. If you intend to create new collections, you might want to give it a more meaningful name
  • indexagent.local_content_area: The path to be used to store the content temporarily on the file system. The value I used above is the default one but you can put it wherever you want. If you are using a multi-node, this path needs to be accessible from all nodes of the multi-node so you can put it under the “ess.data_dir” folder for example
  • common.installOwner.password: The password of the xPlore installation owner. I assume this is only used on Windows environments for the service setup because on linux, I always set a dummy password and there is no issue

 

Once the properties file is ready, make sure that the Dsearch this IndexAgent is linked to is currently running (http(s)://<indexagent.ess.host>:<indexagent.ess.port>/dsearchadmin), make sure that the Global Registry repository (gr_docbase) as well as the target repository (Docbase1) are running, and then you can install the Documentum IndexAgent in silent mode using the following command:

[xplore@full_text_server_01 ~]$ /opt/xPlore/setup/indexagent/iaConfig.bin LAX_VM "/opt/xPlore/java64/JAVA_LINK/bin/java" -f /tmp/xplore_install/FT_IA_Installation.properties
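As a quick sanity check afterwards (not part of the installer, and the exact context path may differ depending on your xPlore version), you could verify that the IndexAgent answers on its HTTP port:

[xplore@full_text_server_01 ~]$ curl -s -o /dev/null -w "%{http_code}\n" http://full_text_server_01.dbi-services.com:9200/IndexAgent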

 

This now concludes the series about Documentum silent installation. There are other components that support the silent installation like the Process Engine for example but usually they require only a few parameters (or even none) so that’s why I’m not including them here.

 

Cet article Documentum – Silent Install – xPlore IndexAgent est apparu en premier sur Blog dbi services.

Oracle 18c: Cluster With Oracle ASM Filter Driver


During the installation of Oracle Grid Infrastructure, you can optionally enable automated installation and configuration of Oracle ASM Filter Driver for your system with the Configure ASM Filter Driver check box on the Create ASM Disk Group wizard page. When you enable the Configure ASM Filter Driver box, an automated process for Oracle ASMFD is launched during Oracle Grid Infrastructure installation.

If Oracle ASMLIB exists on your Linux system, then deinstall Oracle ASMLIB before installing Oracle Grid Infrastructure, so that you can choose to install and configure Oracle ASMFD during an Oracle Grid Infrastructure installation.
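A minimal sketch of how you could check for and remove ASMLIB before starting (assuming the standard oracleasm packages are used; adapt the package names to your distribution):

[root@rac18ca ~]# rpm -qa | grep -i oracleasm
[root@rac18ca ~]# oracleasm status
[root@rac18ca ~]# rpm -e oracleasmlib oracleasm-support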
In this blog I install a two-node Oracle 18c cluster using Oracle ASMFD. Below are the disks we will use.

[root@rac18ca ~]# ls -l /dev/sd[d-f]
brw-rw----. 1 root disk 8, 48 Sep  8 22:09 /dev/sdd
brw-rw----. 1 root disk 8, 64 Sep  8 22:09 /dev/sde
brw-rw----. 1 root disk 8, 80 Sep  8 22:09 /dev/sdf
[root@rac18ca ~]#

[root@rac18cb ~]# ls -l /dev/sd[d-f]
brw-rw----. 1 root disk 8, 48 Sep  8 22:46 /dev/sdd
brw-rw----. 1 root disk 8, 64 Sep  8 22:46 /dev/sde
brw-rw----. 1 root disk 8, 80 Sep  8 22:46 /dev/sdf
[root@rac18cb ~]#

We assume that all prerequisites are in place (public IPs, private IPs, SCAN, shared disks, …). Also, we will not show all screenshots.
The first step is to unzip the Oracle software in the ORACLE_HOME for the grid infrastructure.

unzip -d /u01/app/grid/18.0.0.0 LINUX.X64_180000_grid_home.zip

Afterwards we have to use the ASMCMD afd_label command to provision the disk devices for use with Oracle ASM Filter Driver, as follows.

[root@rac18ca ~]# export ORACLE_HOME=/u01/app/oracle/18.0.0.0/grid
[root@rac18ca ~]# export ORACLE_BASE=/tmp                                       
[root@rac18ca ~]# /u01/app/oracle/18.0.0.0/grid/bin/asmcmd afd_label VOTOCR /dev/sde --init
[root@rac18ca ~]# /u01/app/oracle/18.0.0.0/grid/bin/asmcmd afd_label DATA /dev/sdd --init
[root@rac18ca ~]# /u01/app/oracle/18.0.0.0/grid/bin/asmcmd afd_label DIVERS /dev/sdf --init
[root@rac18ca ~]#

And then we can use the ASMCMD afd_lslbl command to verify the devices have been marked for use with Oracle ASMFD.

[root@rac18ca network-scripts]# /u01/app/oracle/18.0.0.0/grid/bin/asmcmd afd_lslbl /dev/sde
--------------------------------------------------------------------------------
Label                     Duplicate  Path
================================================================================
VOTOCR                                /dev/sde
[root@rac18ca network-scripts]# /u01/app/oracle/18.0.0.0/grid/bin/asmcmd afd_lslbl /dev/sdd
--------------------------------------------------------------------------------
Label                     Duplicate  Path
================================================================================
DATA                                  /dev/sdd
[root@rac18ca network-scripts]# /u01/app/oracle/18.0.0.0/grid/bin/asmcmd afd_lslbl /dev/sdf
--------------------------------------------------------------------------------
Label                     Duplicate  Path
================================================================================
DIVERS                                /dev/sdf
[root@rac18ca network-scripts]#

Now that disks are initialized for ASMFD, we can start the installation.

[oracle@rac18ca grid]$ ./gridSetup.sh

We will not show all the pictures.


And in the next window, we can choose the disks for the OCR and Voting files. We will also check Configure Oracle ASM Filter Driver.


And then continue the installation. We will have to run the orainstRoot.sh and the root.sh scripts. All these steps are not shown here.
At the end of the installation we can verify the status of the cluster

[oracle@rac18cb ~]$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [18.0.0.0.0]

[oracle@rac18ca ~]$ crsctl status resource -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       rac18ca                  STABLE
               ONLINE  ONLINE       rac18cb                  STABLE
ora.DG_DATA.dg
               ONLINE  ONLINE       rac18ca                  STABLE
               ONLINE  ONLINE       rac18cb                  STABLE
ora.DG_VOTOCR.dg
               ONLINE  ONLINE       rac18ca                  STABLE
               ONLINE  ONLINE       rac18cb                  STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac18ca                  STABLE
               ONLINE  ONLINE       rac18cb                  STABLE
ora.net1.network
               ONLINE  ONLINE       rac18ca                  STABLE
               ONLINE  ONLINE       rac18cb                  STABLE
ora.ons
               ONLINE  ONLINE       rac18ca                  STABLE
               ONLINE  ONLINE       rac18cb                  STABLE
ora.proxy_advm
               ONLINE  ONLINE       rac18ca                  STABLE
               ONLINE  ONLINE       rac18cb                  STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac18ca                  STABLE
ora.MGMTLSNR
      1        OFFLINE OFFLINE                               STABLE
ora.asm
      1        ONLINE  ONLINE       rac18ca                  Started,STABLE
      2        ONLINE  ONLINE       rac18cb                  Started,STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       rac18ca                  STABLE
ora.mgmtdb
      1        OFFLINE OFFLINE                               STABLE
ora.qosmserver
      1        ONLINE  ONLINE       rac18ca                  STABLE
ora.rac18ca.vip
      1        ONLINE  ONLINE       rac18ca                  STABLE
ora.rac18cb.vip
      1        ONLINE  ONLINE       rac18cb                  STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       rac18ca                  STABLE
--------------------------------------------------------------------------------
[oracle@rac18ca ~]$

We also can check that ASMFD is enabled.

[oracle@rac18ca ~]$ asmcmd afd_lsdsk
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
DATA                        ENABLED   /dev/sdd
DIVERS                      ENABLED   /dev/sdf
VOTOCR                      ENABLED   /dev/sde
[oracle@rac18ca ~]$


[oracle@rac18ca ~]$ asmcmd dsget
parameter:/dev/sd*, AFD:*
profile:/dev/sd*,AFD:*
[oracle@rac18ca ~]$

[oracle@rac18ca ~]$ asmcmd lsdsk
Path
AFD:DATA
AFD:DIVERS
AFD:VOTOCR
[oracle@rac18ca ~]$

Conclusion
In this blog we have seen how to install a cluster using ASMFD.

 

Cet article Oracle 18c: Cluster With Oracle ASM Filter Driver est apparu en premier sur Blog dbi services.

Connecting to Azure SQL Managed Instance from on-premise network


A couple of weeks ago, I wrote about my first immersion into SQL Server managed instances (SQL MIs), a new deployment model of Azure SQL Database which provides near 100% compatibility with the latest SQL Server on-premises Database Engine. In the previous blog post, to test a connection to this new service, I installed an Azure virtual machine with SQL Server Management Studio on the same VNET (172.16.0.0/16). For testing purposes we don’t need more, but in a real production scenario chances are your Azure SQL MI will be part of your on-premise network, with a more complex Azure network topology including VNETs, ExpressRoute or a site-to-site (S2S) VPN as well. Implementing such an infrastructure won’t likely be your concern if you are a database administrator. But you need to be aware of the underlying connectivity components, how to diagnose possible issues and how to interact with your network team, in order to avoid being put under pressure and feeling the wrath of your application users too quickly :)

So, I decided to implement this kind of infrastructure in my lab environment, but if you’re not a network guru (like me) you will likely face some difficulties configuring some components, especially when it comes to the S2S VPN. In addition, you have to understand several new notions about Azure networking before hoping to see your infrastructure work correctly. As an old sysadmin, I admit it was a great opportunity to turn my network learning into a concrete use case. Let’s first set the initial context. Here is the lab environment I’ve been using for a while for different purposes, such as internal testing and event presentations. It addresses a lot of testing scenarios including multi-subnet architectures with SQL Server FCIs and SQL Server availability groups.

blog 142 - 1 - lab environment

Obviously, some static routes are already set up to allow network traffic between my on-premise subnets. As you guessed, the game consisted in extending this on-premise network to my SQL MI network on Azure. As a reminder, SQL MI is not reachable from a public endpoint and you may connect only from an internal network (either directly from Azure or from your on-premise network). As said previously, one of my biggest challenges was to configure my remote access server as a VPN server to communicate with my SQL MI Azure network. Fortunately, you have plenty of pointers on the internet that may help you achieve this task.  This blog post is a good walk-through by the way. In my context, you will note I also had to apply special settings to my home router in order to allow IPsec Passthrough, as well as to add my RRAS server internal IP (192.168.0.101) to the DMZ. I also used the IKEv2 VPN protocol and a pre-shared key for authentication between my on-premise and Azure gateways. The S2S VPN configuration is environment specific, and this is probably why doing a presentation to a customer or at events is so difficult, especially if you’re outside of your own network.

Anyway, let’s talk about the Azure side configuration. My Azure network topology is composed of two distinct VNETs as follows:

blog 142 - 2 - VNET configuration

The connection between my on-premise and my Azure networks is defined as shown below:

$vpnconnection = Get-AzureRmVirtualNetworkGatewayConnection -ResourceGroupName dbi-onpremises-rg 
$vpnconnection.Name
$vpnconnection.VirtualNetworkGateway1.Id
$vpnconnection.LocalNetworkGateway2.Id 

dbi-vpn-connection
/subscriptions/xxxx/resourceGroups/dbi-onpremises-rg/providers/Microsoft.Network/virtualNetworkGateways/dbi-virtual-network-gw
/subscriptions/xxxx/resourceGroups/dbi-onpremises-rg/providers/Microsoft.Network/localNetworkGateways/dbi-local-network-gw

 

The first VNET (172.17.x.x) is used as a hub virtual network and owns my gateway. The second one (172.16.x.x) is the SQL MI VNET:

Get-AzureRmVirtualNetwork | Where-Object { $_.Name -like '*-vnet' } | % {

    Get-AzureRmVirtualNetworkSubnetConfig -VirtualNetwork $_ | Select Name, AddressPrefix
} 

Name          AddressPrefix  
----          -------------  
default       172.17.0.0/24  
GatewaySubnet 172.17.1.0/24  
vm-net        172.16.128.0/17
sql-mi-subnet 172.16.0.0/24

 

My Azure gateway subnet (GatewaySubnet) is part of the VPN connectivity with the related gateway connections:

$gatewaycfg = Get-AzureRmVirtualNetworkGatewayConnection -ResourceGroupName dbi-onpremises-rg -Name dbi-vpn-connection 
$gatewaycfg.VirtualNetworkGateway1.Id
$gatewaycfg.LocalNetworkGateway2.Id 

/subscriptions/xxxx/resourceGroups/dbi-onpremises-rg/providers/Microsoft.Network/virtualNetworkGateways/dbi-virtual-network-gw
/subscriptions/xxxx/resourceGroups/dbi-onpremises-rg/providers/Microsoft.Network/localNetworkGateways/dbi-local-network-gw

 

The dbi-local-network-gw local gateway includes the following address prefixes that correspond to my local lab environment network:

$gatewaylocal = Get-AzureRMLocalNetworkGateway -ResourceGroupName dbi-onpremises-rg -Name dbi-local-network-gw 
$gatewaylocal.LocalNetworkAddressSpace.AddressPrefixes 

192.168.0.0/16
192.168.5.0/24
192.168.40.0/24

 

Note that I’ve chosen a static configuration, but my guess is that I could turn to the BGP protocol instead to make things more dynamic. I will talk quickly about BGP and routing issues at the end of the write-up. But at this stage, some configuration steps are still missing before I can hope to reach my SQL MI instance from my lab environment network. Indeed, although my VPN connection status was ok, I was only able to reach my dbi-onpremises-vnet VNET and I needed a way to connect to the sql-mi-vnet VNET. So, I had to turn on both virtual network peering and the gateway transit mechanism. By the way, when 2 VNETs are peered, Azure automatically routes traffic between them.

blog 142 - 3 - VNET configuration peering

Here the peering configuration I applied to my dbi-onpremises-vnet VNET (first VNET):

Get-AzureRmVirtualNetworkPeering -ResourceGroupName dbi-onpremises-rg -VirtualNetworkName dbi-onpremises-vnet | `
Select-Object VirtualNetworkName, PeeringState, AllowVirtualNetworkAccess, AllowForwardedTraffic, AllowGatewayTransit, UseRemoteGateways 

$peering = Get-AzureRmVirtualNetworkPeering -ResourceGroupName dbi-onpremises-rg -VirtualNetworkName dbi-onpremises-vnet 
Write-Host "Remote virtual network peering"
$peering.RemoteVirtualNetwork.Id 

VirtualNetworkName        : dbi-onpremises-vnet
PeeringState              : Connected
AllowVirtualNetworkAccess : True
AllowForwardedTraffic     : True
AllowGatewayTransit       : True
UseRemoteGateways         : False

Remote virtual network peering
/subscriptions/xxxxx/resourceGroups/sql-mi-rg/providers/Microsoft.Network/virtualNetworks/sql-mi-vnet

 

And here the peering configuration of my sql-mi-vnet VNET (2nd VNET):

Get-AzureRmVirtualNetworkPeering -ResourceGroupName sql-mi-rg -VirtualNetworkName sql-mi-vnet | `
Select-Object VirtualNetworkName, PeeringState, AllowVirtualNetworkAccess, AllowForwardedTraffic, AllowGatewayTransit, UseRemoteGateways 

$peering = Get-AzureRmVirtualNetworkPeering -ResourceGroupName sql-mi-rg -VirtualNetworkName sql-mi-vnet
Write-Host "Remote virtual network peering"
$peering.RemoteVirtualNetwork.Id 

VirtualNetworkName        : sql-mi-vnet
PeeringState              : Connected
AllowVirtualNetworkAccess : True
AllowForwardedTraffic     : True
AllowGatewayTransit       : False
UseRemoteGateways         : True

Remote virtual network peering
/subscriptions/xxxxx/resourceGroups/dbi-onpremises-rg/providers/Microsoft.Network/virtualNetworks/dbi-onpremises-vnet

 

Note that to allow traffic coming from my on-premise network to go through my first VNET (dbi-onpremises-vnet) towards the second one (sql-mi-vnet), I needed to enable some configuration settings such as Allow Gateway Transit, Allow Forwarded Traffic and Use Remote Gateways on the concerned networks.

At this stage, I still faced a weird issue: I was able to connect to a virtual machine installed on the same VNET as my SQL MI, but had no luck with the SQL instance. In addition, the psping command output confirmed my connectivity issue, which made me think of a routing issue.

blog 142 - 5 - psping command output
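For reference, the test was done with psping from the Sysinternals suite; the command looked something like this (the SQL MI host name is a placeholder and the target port may differ in your setup):

psping mysqlmi.xxxxxxxx.database.windows.net:1433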

Routes from my on-premise network seemed to be well configured, as shown below. The VPN goes over a dial-up internet connection in my case.

blog 142 - 6 - local route

I also got confirmation from the Microsoft support team (particular thanks to Filipe Bárrios – support engineer Azure Networking) that my on-premise network packets were correctly sent through my Azure VPN gateway. In fact, I got stuck for a couple of days without figuring out exactly what was happening. Furthermore, checking effective routes did not seem to be a viable option in my case because there is no explicit network interface with SQL MI. Please feel free to comment if I am wrong on this point. Fortunately, I found a PowerShell script provided by Jovan Popovic (MSFT) which seems to have put me on the right track:

blog 142 - 4 - VNET PGP configuration

Referring to the Microsoft documentation, it seems BGP route propagation could be very helpful in my case.

Support transit routing between your on-premises networks and multiple Azure VNets

BGP enables multiple gateways to learn and propagate prefixes from different networks, whether they are directly or indirectly connected. This can enable transit routing with Azure VPN gateways between your on-premises sites or across multiple Azure Virtual Networks.

After enabling the corresponding option in the SQL MI route table and opening the SQL MI ports in my firewall, the connection was finally successful.
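For reference, here is a minimal PowerShell sketch of how this BGP route propagation setting can be checked and changed on a route table with the AzureRM module (the resource group and route table names are assumptions from my environment, and the DisableBgpRoutePropagation property requires a recent AzureRM.Network version):

$rt = Get-AzureRmRouteTable -ResourceGroupName sql-mi-rg -Name sql-mi-subnet-routetable
$rt.DisableBgpRoutePropagation          # $true means BGP route propagation is currently disabled
$rt.DisableBgpRoutePropagation = $false
Set-AzureRmRouteTable -RouteTable $rt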

blog 142 - 8 - sql mi route bgp

blog 142 - 7 - psping output 2

Hope it helps!

See you

 

 

Cet article Connecting to Azure SQL Managed Instance from on-premise network est apparu en premier sur Blog dbi services.

PDB lockdown with Oracle 18.3.0.0


The PDB lockdown feature offers you the possibility to restrict operations and functionality available from within a PDB, and might be very useful from a security perspective.

Some new features have been added to the 18.3.0.0 Oracle version:

  • You have the possibility to create PDB lockdown profiles in the application root, like in the CDB root. This makes it easier to have more precise access control for the applications associated with the application container.
  • You can create a PDB lockdown profile from another PDB lockdown profile.
  • Three default PDB lockdown profiles have been added : PRIVATE_DBAAS, SAAS and PUBLIC_DBAAS
  • v$lockdown_rules is a new view that allows you to display the contents of a PDB lockdown profile (see the query sketch right after this list).
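A minimal sketch of how this view can be queried from within a PDB (the exact column list may vary slightly between versions, so double-check it with DESC v$lockdown_rules first):

SQL> alter session set container=pdb;
SQL> select rule_type, rule, clause, status from v$lockdown_rules;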

Let’s make some tests:

At first we create a lockdown profile from the CDB (as we did with Oracle 12.2)

SQL> create lockdown profile psi;

Lockdown Profile created.

We alter the lockdown profile to disable any ALTER SYSTEM SET statement on the PDB side, except for open_cursors:

SQL> alter lockdown profile PSI disable statement=('ALTER SYSTEM') 
clause=('SET') OPTION ALL EXCEPT=('open_cursors');

Lockdown Profile altered.

Then we enable the lockdown profile:

SQL> alter system set PDB_LOCKDOWN=PSI;

System altered.

We can check the pdb_lockdown parameter value from the CDB side:

SQL> show parameter pdb_lockdown

NAME				     TYPE	 VALUE
------------------------------------ ----------- -------
pdb_lockdown			     string	 PSI

From the PDB side, what happens?

SQL> alter session set container=pdb;

Session altered.

SQL> alter system set cursor_sharing='FORCE';
alter system set cursor_sharing='FORCE'
*
ERROR at line 1:
ORA-01031: insufficient privileges

SQL> alter system set optimizer_mode='FIRST_ROWS_10';
alter system set optimizer_mode='FIRST_ROWS_10'
*
ERROR at line 1:
ORA-01031: insufficient privileges

This is a good feature, allowing a greater degree of separation between the different PDBs of the same instance.

We can create a lockdown profile disabling partitioned tables creation:

SQL> connect / as sysdba
Connected.
SQL> create lockdown profile psi;

Lockdown Profile created.

SQL> alter lockdown profile psi disable option=('Partitioning');

Lockdown Profile altered.

SQL> alter system set pdb_lockdown ='PSI';

System altered.

On the CDB side, we can create partitioned tables:

SQL> create table emp (name varchar2(10)) partition by hash(name);

Table created.

On the PDB side we cannot create partitioned tables:

SQL> alter session set container = pdb;

Session altered.

SQL> show parameter pdb_lockdown

NAME				     TYPE	 VALUE
------------------------------------ ----------- -------
pdb_lockdown			     string	 APP

SQL> create table emp (name varchar2(10)) partition by hash(name);
create table emp (name varchar2(10)) partition by hash(name)
*
ERROR at line 1:
ORA-00439: feature not enabled: Partitioning

We now have the possibility to create a lockdown profile from another one:

Remember we have the pdb lockdown profile app disabling partitioned tables creation, we can create a new app_hr lockdown profile from the app lockdown profile and add new features to the app_hr one:

SQL> create lockdown profile app_hr from app;

Lockdown Profile created.

The app_hr lockdown profile will not have the possibility to run alter system flush shared_pool:

SQL> alter lockdown profile app_hr disable STATEMENT = ('ALTER SYSTEM') 
clause = ('flush shared_pool');

Lockdown Profile altered.

We can query the dba_lockdown_profiles view:

SQL> SELECT profile_name, rule_type, rule, status 
     FROM   dba_lockdown_profiles order by 1;

PROFILE_NAME	   RULE_TYPE	RULE		 STATUS
---------------   -----------  --------------   --------
APP		   OPTION	PARTITIONING	 DISABLE
APP_HR		   STATEMENT	ALTER SYSTEM	 DISABLE
APP_HR		   OPTION	PARTITIONING	 DISABLE

SQL> alter system set pdb_lockdown=app_hr;

System altered.

SQL> alter session set container=pdb;

Session altered.

SQL> alter system flush shared_pool ;
alter system flush shared_pool
*
ERROR at line 1:
ORA-01031: insufficient privileges

If we reset the pdb_lockdown to app, we now can flush the shared pool:

SQL> alter system set pdb_lockdown=app;

System altered.

SQL> alter system flush shared_pool ;

System altered.

We now can create lockdown profiles in the application root, so let’s create an application PDB:

SQL> CREATE PLUGGABLE DATABASE apppsi 
AS APPLICATION CONTAINER ADMIN USER app_admin IDENTIFIED BY manager
file_name_convert=('/home/oracle/oradata/DB18', 
'/home/oracle/oradata/DB18/apppsi');  

Pluggable database created.

SQL> show pdbs

    CON_ID CON_NAME			  OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
	 2 PDB$SEED			  READ ONLY  NO
	 3 PDB				  READ WRITE NO
	 4 APPPSI			  MOUNTED

We open the application PDB:

SQL> alter pluggable database apppsi open;

Pluggable database altered.

We connect to the application container :

SQL> alter session set container=apppsi;

Session altered.

We have the possibility to create a lockdown profile:

SQL> create lockdown profile apppsi;

Lockdown Profile created.

And to disable some features:

SQL> alter lockdown profile apppsi disable option=('Partitioning');

Lockdown Profile altered.

But there is a problem if we try to enable the profile:

SQL> alter system set pdb_lockdown=apppsi;
alter system set pdb_lockdown=apppsi
*
ERROR at line 1:
ORA-65208: Lockdown profile APPPSI does not exist.

And surprise we cannot create a partitioned table:

SQL> create table emp (name varchar2(10)) partition by hash(name);
create table emp (name varchar2(10)) partition by hash(name)
*
ERROR at line 1:
ORA-00439: feature not enabled: Partitioning

Let’s do some more tests: we alter the lockdown profile like this:

SQL> alter lockdown profile apppsi disable statement=('ALTER SYSTEM') 
clause = ('flush shared_pool');

Lockdown Profile altered.

SQL> alter system flush shared_pool;
alter system flush shared_pool
*
ERROR at line 1:
ORA-01031: insufficient privileges

In fact we cannot use SYS to test lockdown profiles in the application root; we have to use an application user with privileges such as CREATE LOCKDOWN PROFILE or ALTER LOCKDOWN PROFILE in the application container. So, after creating an appuser in the application root:

SQL> connect appuser/appuser@apppsi

SQL> create lockdown profile appuser_hr;

Lockdown Profile created.

SQL> alter lockdown profile appuser_hr disable option=('Partitioning');

Lockdown Profile altered.

And now it works fine:

SQL> alter system set pdb_lockdown=appuser_hr;

System altered.

SQL> create table emp (name varchar2(10)) partition by hash (name);
create table emp (name varchar2(10)) partition by hash (name)
*
ERROR at line 1:
ORA-00439: feature not enabled: Partitioning

And can we now enable the partitioning option again for the appuser_hr profile in the APP root?

SQL> alter lockdown profile appuser_hr enable option = ('Partitioning');

Lockdown Profile altered.

SQL> create table emp (name varchar2(10)) partition by hash (name);
create table emp (name varchar2(10)) partition by hash (name)
*
ERROR at line 1:
ORA-00439: feature not enabled: Partitioning

It does not work as expected: the lockdown profile has been updated, but as before we cannot create a partitioned table.

Let’s do another test with the statement option: we alter the lockdown profile in order to disable all ALTER SYSTEM SET statements except for open_cursors:

SQL> alter lockdown profile appuser_hr disable statement=('ALTER SYSTEM') 
clause=('SET') OPTION ALL EXCEPT=('open_cursors');

Lockdown Profile altered.

SQL> alter system set open_cursors=500;

System altered.

This is a normal behavior.

Now we alter the lockdown profile in order to disable alter system flush shared_pool:

SQL> alter lockdown profile appuser_hr disable STATEMENT = ('ALTER SYSTEM') 
clause = ('flush shared_pool');

Lockdown Profile altered.

SQL> alter system flush shared_pool;
alter system flush shared_pool
*
ERROR at line 1:
ORA-01031: insufficient privileges

That’s fine :=)

Now we enable the statement:

SQL> alter lockdown profile appuser_hr enable STATEMENT = ('ALTER SYSTEM') 
clause = ('flush shared_pool');

Lockdown Profile altered.

SQL> alter system flush shared_pool;
alter system flush shared_pool
*
ERROR at line 1:
ORA-01031: insufficient privileges

And again this is not possible …

Let’s try in the CDB root:

SQL> connect / as sysdba
Connected.

SQL> alter lockdown profile app disable statement =('ALTER SYSTEM') 
clause=('SET') OPTION ALL EXCEPT=('open_cursors');

Lockdown Profile altered.

SQL> alter session set container=pdb;

Session altered.

SQL> alter system set cursor_sharing='FORCE';
alter system set cursor_sharing='FORCE'
*
ERROR at line 1:
ORA-01031: insufficient privileges

The behavior is correct, let’s try to enable it :

SQL> connect / as sysdba
Connected.

SQL> alter lockdown profile app enable statement=('ALTER SYSTEM') 
clause ALL;

Lockdown Profile altered.

SQL> alter session set container=pdb;

Session altered.

SQL> alter system set cursor_sharing='FORCE';

System altered.

This is correct again; it seems it does not work correctly in the APP root…

In conclusion, the lockdown profile new features are powerful and will be very useful for security reasons. They will allow DBAs to define restrictions with a finer granularity, limiting users to what they really need to access. But we have to be careful: with PDB lockdown profiles we can easily end up with a very complicated database administration.

Cet article PDB lockdown with Oracle 18.3.0.0 est apparu en premier sur Blog dbi services.

PII search using HCI


In a previous blog, we described how to install Hitachi Content Intelligence, the solution of Hitachi Vantara for data indexing and search. In this blog post, we will see how we can use Hitachi Content Intelligence to perform a basic search for personally identifiable information (PII).

Data Connections

HCI allows you to connect to multiple data sources using the default data connectors. The first step is to create a data connection. By default, multiple data connectors are available:

HCI_data_connectors

For our example, we will simply use the Local File System as the data repository. Note that the directory must be within the HCI install directory.

Below is the data connection configuration for our PII demo.

HCI_Data_Connection

Click on Test after adding the information and click on Create.

A new data connection will appear in your dashboard.

Processing Pipelines

After creating the data connection, we will build a processing pipeline for our PII example.
Click on Processing Pipelines > Create a Pipeline. Enter a name for your pipeline (optionally a description) and click on Create.
Click on Add Stages, and create your desired pipeline. For PII search we will use the following pipeline.
HCI_PII_Pipeline
After building your pipeline, you can test it by clicking on the Test Pipeline button at the top right of your page.

Index Collections

We should now create an index collection to specify how you want to index your data set.
First, click on Create Index inside the Index Collections button. Create an HCI Index and use the schemaless option.
HCI_Index

Content Classes

Then you should create your content classes to extract your desired information from your data set. For our PII example, we will create 3 content classes: one for American Express credit cards, one for Visa credit cards and one for Social Security Numbers. Example patterns are sketched after the screenshots below.
HCI_content_classes
For American Express credit cards, you should add the following pattern.
HCI_AMEX
Pattern for Visa credit card.
HCI_VISA
Pattern for Social Security Number.
HCI_SSN
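The screenshots above do not show the actual expressions, so here are illustrative patterns you could use as a starting point for these content classes (simplified regular expressions, not validated against every card number format):

American Express : 3[47][0-9]{13}
Visa             : 4[0-9]{12}([0-9]{3})?
SSN              : [0-9]{3}-[0-9]{2}-[0-9]{4}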

Start your workflow

When all steps are completed you can start your workflow and wait until it finishes.
HCI_Workflow_Start

HCI Search

Use the HCI Search application to visualize the results.
https://<hci_instance_ip>:8888
Select your index name in the search field, and navigate through the results.
You can also display the results in charts and graphics.
HCI_Search_Graph
This demo is also available in the Hitachi Ventara Community website: https://community.hitachivantara.com/thread/13804-pii-workflows
 

Cet article PII search using HCI est apparu en premier sur Blog dbi services.


[Oracle 18c new feature] Quick win to improve queries with Memoptimized Rowstore


With its 18th release, Oracle comes with many improvements. Some of them are obvious and some of them more discreet.
This is the case of the new buffer area (memory area) called the Memoptimize pool. This new area, part of the SGA, is used to store the data and metadata of standard Oracle tables (heap-organized tables) to significantly improve the performance of queries that filter on primary keys.

This new MEMOPTIMIZE POOL memory area is split into 2 parts:

  • Memoptimize buffer area: 75% of the space, reserved to store table buffers the same way as they are stored in the so-called buffer cache
  • Hash index: 25% of the space, reserved to store the hash index of the primary keys of the tables kept in the Memoptimize buffer area

To manage this space a new parameter, MEMOPTIMIZE_POOL_SIZE, is available, which is unfortunately not dynamic. It is fixed at instance startup and is not managed by the database automatic memory management. It takes its space from SGA_TARGET, so be careful when sizing it.

Before this new memory structure, a client who wanted to query a standard table with a filter on its PK (e.g. WHERE COL_PK = X) had to wait for I/Os from disk to memory until the value X was reached in the index, and then again for I/Os from disk to memory to fetch the table block containing the row where COL_PK = X. This mechanism consumes I/Os and also CPU cycles, because it involves other processes of the instance which need to perform some tasks.

Now, thanks to this new memory space, when a client runs the exact same query with WHERE COL_PK = X, it can directly hash the value and walk through the hash index to find the row location in the Memoptimize buffer area. The result is then directly picked up by the client process. This results in less CPU consumption and fewer disk I/Os in most cases, at the cost of memory space.

When to use it?

It is only useful when queries are run against a table with an equality filter on the PK. You can balance the need against the size of the requested table and the frequency of such queries.

4 steps activation

  1. Check that the COMPATIBLE parameter is set to 18.0.0 or higher
  2. Set the parameter MEMOPTIMIZE_POOL_SIZE to the desired value (restart required)
  3. Alter (or create) target table with the “MEMOPTIMIZE FOR READ” clause
  4. Then execute the procedure “DBMS_MEMOPTIMIZE.POPULATE( )” to populate the MEMOPTIMIZE POOL with the target table (a SQL sketch of these steps follows this list)
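Putting these four steps together, a minimal SQL sketch could look like this (the pool size, schema and table name are just placeholders, not values from a real system; the table needs an enabled primary key):

SQL> alter system set memoptimize_pool_size=1G scope=spfile;
-- restart the instance so the new pool is allocated
SQL> alter table app.orders memoptimize for read;
SQL> exec dbms_memoptimize.populate('APP','ORDERS');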

How to remove a table from the MEMOPTIMIZE POOL ?

With the procedure DROP_OBJECT() from the DBMS_MEMOPTIMIZE package.

You can disable the access to this new MEMOPTIMIZE POOL by using the clause “NO MEMOPTIMIZE FOR READ”.
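A minimal sketch of both removal options, reusing the same placeholder table as above:

SQL> exec dbms_memoptimize.drop_object('APP','ORDERS');
SQL> alter table app.orders no memoptimize for read;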

 

I hope this helps and please do not hesitate to contact us should you want more details.

Nicolas

 

Cet article [Oracle 18c new feature] Quick win to improve queries with Memoptimized Rowstore est apparu en premier sur Blog dbi services.

Redhat Forum 2018 – everything is OpenShift


I had an exciting and informative day at Redhat Forum Zurich.

After starting with a short welcome in the really full movie theater in Zurich Sihlcity, we had the great pleasure to listen to Jim Whitehurst. With humor he told us about the success of Redhat during the last 25 years.


The partner and success stories of Vorwerk / Microsoft / Accenture / Avaloq / Swisscom and SAP impressively showed the potential and the usage of OpenShift.

After the lunch break, which was great for networking and talking to some of the partners, the breakout sessions started.
The range of sessions showed the importance of OpenShift for agile businesses.

Here is a short summary of three sessions:
Philippe Bürgisser (acceleris) showed, using a practical example, his sticking points when bringing OpenShift into production.
PuzzleIT, adcubum and Helsana gave amazing insights into their journey of moving adcubum syrius to APPUiO.
RedHat and acceleris explained how Cloud/OpenShift simplifies and improves development cycles.

During the closing note, Redhat took up the cudgels for women in IT and their importance, a surprising and appreciated aspect – (Red)Hats off!
Thank you for that great event! Almost 900 participants this year can’t be wrong.

 

Cet article Redhat Forum 2018 – everything is OpenShift est apparu en premier sur Blog dbi services.

Masking Data With PostgreSQL


I was searching for a tool to anonymize data in a PostgreSQL database and I have tested the extension postgresql_anonymizer.
PostgreSQL Anonymizer is a set of SQL functions that remove personally identifiable values from a PostgreSQL table and replace them with random-but-plausible values. The goal is to avoid any identification from the data records while remaining suitable for testing, data analysis and data processing.
In this blog I am showing how this extension can be used. I am using a PostgreSQL 10 database.
The first step is to install the extension postgresql_anonymizer. In my case I did it with the pgxn client:

[postgres@pgserver2 ~]$ pgxn install postgresql_anonymizer --pg_config /u01/app/postgres/product/10/db_1/bin/pg_config
INFO: best version: postgresql_anonymizer 0.0.3
INFO: saving /tmp/tmpVf3psT/postgresql_anonymizer-0.0.3.zip
INFO: unpacking: /tmp/tmpVf3psT/postgresql_anonymizer-0.0.3.zip
INFO: building extension
gmake: Nothing to be done for `all'.
INFO: installing extension
/usr/bin/mkdir -p '/u01/app/postgres/product/10/db_1/share/extension'
/usr/bin/mkdir -p '/u01/app/postgres/product/10/db_1/share/extension/anon'
/usr/bin/install -c -m 644 .//anon.control '/u01/app/postgres/product/10/db_1/share/extension/'
/usr/bin/install -c -m 644 .//anon/anon--0.0.3.sql  '/u01/app/postgres/product/10/db_1/share/extension/anon/'
[postgres@pgserver2 ~]$

We can then verify that under /u01/app/postgres/product/10/db_1/share/extension we have a file anon.control and a directory named anon

[postgres@pgserver2 extension]$ ls -ltra anon*
-rw-r--r--. 1 postgres postgres 167 Sep 13 10:54 anon.control

anon:
total 18552
drwxrwxr-x. 3 postgres postgres    12288 Sep 13 10:54 ..
drwxrwxr-x. 2 postgres postgres       28 Sep 13 10:54 .
-rw-r--r--. 1 postgres postgres 18980156 Sep 13 10:54 anon--0.0.3.sql

Let’s create a database named prod and create the required extensions. tsm_system_rows should be delivered by the contrib packages.

prod=# \c prod
You are now connected to database "prod" as user "postgres".
prod=#
prod=# CREATE EXTENSION tsm_system_rows;;
CREATE EXTENSION
prod=#

prod=# CREATE EXTENSION anon;
CREATE EXTENSION
prod=#


prod=# \dx
                                    List of installed extensions
      Name       | Version |   Schema   |                        Description

-----------------+---------+------------+----------------------------------------------------
--------
 anon            | 0.0.3   | anon       | Data anonymization tools
 plpgsql         | 1.0     | pg_catalog | PL/pgSQL procedural language
 tsm_system_rows | 1.0     | public     | TABLESAMPLE method which accepts number of rows as
a limit
(3 rows)

prod=#

The extension will create following functions in the schema anon. These functions can be used to mask some data.

prod=# set search_path=anon;
SET
prod=# \df
                                                               List of functions
 Schema |           Name           |     Result data type     |                          Argu
ment data types                           |  Type
--------+--------------------------+--------------------------+------------------------------
------------------------------------------+--------
 anon   | random_city              | text                     |
                                          | normal
 anon   | random_city_in_country   | text                     | country_name text
                                          | normal
 anon   | random_company           | text                     |
                                          | normal
 anon   | random_country           | text                     |
                                          | normal
 anon   | random_date              | timestamp with time zone |
                                          | normal
 anon   | random_date_between      | timestamp with time zone | date_start timestamp with tim
e zone, date_end timestamp with time zone | normal
 anon   | random_email             | text                     |
                                          | normal
 anon   | random_first_name        | text                     |
                                          | normal
 anon   | random_iban              | text                     |
                                          | normal
 anon   | random_int_between       | integer                  | int_start integer, int_stop integer
                            | normal
 anon   | random_last_name         | text                     |
                            | normal
 anon   | random_phone             | text                     | phone_prefix text DEFAULT '0'::text
                            | normal
 anon   | random_region            | text                     |
                            | normal
 anon   | random_region_in_country | text                     | country_name text
                            | normal
 anon   | random_siren             | text                     |
                            | normal
 anon   | random_siret             | text                     |
                            | normal
 anon   | random_string            | text                     | l integer
                            | normal
 anon   | random_zip               | text                     |
                            | normal
(18 rows)

prod=#

Now in the database prod let’s create a table with some data.

prod=# \d customers
                      Table "public.customers"
   Column   |         Type          | Collation | Nullable | Default
------------+-----------------------+-----------+----------+---------
 first_name | character varying(30) |           |          |
 last_name  | character varying(30) |           |          |
 email_add  | character varying(30) |           |          |
 country    | character varying(60) |           |          |
 iban       | character varying(60) |           |          |
 amount     | integer               |           |          |

prod=#

prod=# table customers;
 first_name | last_name |        email_add        |   country    |            iban            |   amount
------------+-----------+-------------------------+--------------+----------------------------+------------
 Michel     | Delco     | michel.delco@yaaa.fr    | FRANCE       | FR763000600001123456890189 |    5000000
 Denise     | Blanchot  | denise.blanchot@yaaa.de | GERMANY      | DE91100000000123456789     | 1000000000
 Farid      | Dim       | farid.dim@yaaa.sa       | Saudi Arabia | SA4420000001234567891234   |    2500000
(3 rows)

prod=#

Let’s say that I want some people to have access to all data of this table, but I don’t want them to see the real email, the real country and the real iban of the customers.
One solution could be to create a view that returns anonymized data for these columns, replacing them with random-but-plausible values:

prod=# create view Customers_anon as select first_name as Firstname ,last_name  as Lastnmame,anon.random_email() as Email ,anon.random_country() as Country, anon.random_iban() as Iban ,amount as Amount from customers;
CREATE VIEW

And then grant the access privilege to the concerned people (a minimal sketch follows the sample output below).

prod=# select * from customers_anon ;
 firstname | lastnmame |             email             | country |            iban            |   amount
-----------+-----------+-------------------------------+---------+----------------------------+------------
 Michel    | Delco     | wlothean0@springer.com        | Spain   |  AD1111112222C3C3C3C3C3C3  |    5000000
 Denise    | Blanchot  | emoraloy@dropbox.com          | Myanmar |  AD1111112222C3C3C3C3C3C3  | 1000000000
 Farid     | Dim       | vbritlandkt@deliciousdays.com | India   |  AD1111112222C3C3C3C3C3C3  |    2500000
(3 rows)

prod=#
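A minimal sketch of the privilege part (the role name reporting and its password are assumptions, adapt them to your environment):

prod=# create role reporting login password 'secret';
CREATE ROLE
prod=# grant select on customers_anon to reporting;
GRANT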
 

Cet article Masking Data With PostgreSQL est apparu en premier sur Blog dbi services.

We are proud to announce:

EDB containers for OpenShift 2.3 – PEM integration


A few days ago EnterpriseDB announced the availability of version 2.3 of the EDB containers for OpenShift. The main new feature in this release is the integration of PEM (Postgres Enterprise Manager), so in this post we’ll look at how we can bring up a PEM server in OpenShift. If you did not follow the last posts about EDB containers in OpenShift, here is the summary:

The first step you need to do is to download the updated container images. You’ll notice that there are two new containers which have not been available before the 2.3 release:

  • edb-pemserver: Obviously this is the PEM server
  • admintool: a utility container for supporting database upgrades and launching PEM agents on the database containers

For downloading the latest release of the EDB container images for OpenShift, the procedure is the following:

docker run -d -p 5000:5000 --restart=always --name registry registry:2
docker login containers.enterprisedb.com

docker pull containers.enterprisedb.com/edb/edb-as:v10
docker tag containers.enterprisedb.com/edb/edb-as:v10 localhost:5000/edb/edb-as:v10
docker push localhost:5000/edb/edb-as:v10

docker pull containers.enterprisedb.com/edb/edb-pgpool:v3.6
docker tag containers.enterprisedb.com/edb/edb-pgpool:v3.6 localhost:5000/edb/edb-pgpool:v3.6
docker push localhost:5000/edb/edb-pgpool:v3.6

docker pull containers.enterprisedb.com/edb/edb-pemserver:v7.3
docker tag containers.enterprisedb.com/edb/edb-pemserver:v7.3 localhost:5000/edb/edb-pemserver:v7.3
docker push localhost:5000/edb/edb-pemserver:v7.3

docker pull containers.enterprisedb.com/edb/edb-admintool
docker tag containers.enterprisedb.com/edb/edb-admintool localhost:5000/edb/edb-admintool
docker push localhost:5000/edb/edb-admintool

docker pull containers.enterprisedb.com/edb/edb-bart:v2.1
docker tag containers.enterprisedb.com/edb/edb-bart:v2.1 localhost:5000/edb/edb-bart:v2.1
docker push localhost:5000/edb/edb-bart:v2.1
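If you want to double-check that the images really ended up in the local registry (assuming the registry container started above is still listening on port 5000), you can query the registry HTTP API as a quick sanity check:

docker@minishift:~$ curl -s http://localhost:5000/v2/_catalog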

In my case I have quite a few EDB containers available now (…and I could go ahead and delete the old ones, of course):

docker@minishift:~$ docker images | grep edb
containers.enterprisedb.com/edb/edb-as          v10                 1d118c96529b        45 hours ago        1.804 GB
localhost:5000/edb/edb-as                       v10                 1d118c96529b        45 hours ago        1.804 GB
containers.enterprisedb.com/edb/edb-admintool   latest              07fda249cf5c        10 days ago         531.6 MB
localhost:5000/edb/edb-admintool                latest              07fda249cf5c        10 days ago         531.6 MB
containers.enterprisedb.com/edb/edb-pemserver   v7.3                78954c316ca9        10 days ago         1.592 GB
localhost:5000/edb/edb-pemserver                v7.3                78954c316ca9        10 days ago         1.592 GB
containers.enterprisedb.com/edb/edb-bart        v2.1                e2410ed4cf9b        10 days ago         571 MB
localhost:5000/edb/edb-bart                     v2.1                e2410ed4cf9b        10 days ago         571 MB
containers.enterprisedb.com/edb/edb-pgpool      v3.6                e8c600ab993a        10 days ago         561.1 MB
localhost:5000/edb/edb-pgpool                   v3.6                e8c600ab993a        10 days ago         561.1 MB
containers.enterprisedb.com/edb/edb-as                              00adaa0d4063        3 months ago        979.3 MB
localhost:5000/edb/edb-as                                           00adaa0d4063        3 months ago        979.3 MB
localhost:5000/edb/edb-pgpool                   v3.5                e7efdb0ae1be        4 months ago        564.1 MB
containers.enterprisedb.com/edb/edb-pgpool      v3.5                e7efdb0ae1be        4 months ago        564.1 MB
localhost:5000/edb/edb-as                       v10.3               90b79757b2f7        4 months ago        842.7 MB
containers.enterprisedb.com/edb/edb-bart        v2.0                48ee2c01db92        4 months ago        590.6 MB
localhost:5000/edb/edb-bart                     2.0                 48ee2c01db92        4 months ago        590.6 MB
localhost:5000/edb/edb-bart                     v2.0                48ee2c01db92        4 months ago        590.6 MB
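
Cleaning up the outdated tags is a simple docker rmi per image reference; a sketch for the old BART and pgpool images from the listing above:

docker rmi containers.enterprisedb.com/edb/edb-bart:v2.0
docker rmi localhost:5000/edb/edb-bart:v2.0 localhost:5000/edb/edb-bart:2.0
docker rmi containers.enterprisedb.com/edb/edb-pgpool:v3.5
docker rmi localhost:5000/edb/edb-pgpool:v3.5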

The only bits I changed in the yaml file that describes my EDB AS deployment compared to the previous posts are the two image references, which now point to edb-pgpool:v3.6 and edb-as:v10 in the local registry:

apiVersion: v1
kind: Template
metadata:
   name: edb-as10-custom
   annotations:
    description: "Custom EDB Postgres Advanced Server 10.0 Deployment Config"
    tags: "database,epas,postgres,postgresql"
    iconClass: "icon-postgresql"
objects:
- apiVersion: v1 
  kind: Service
  metadata:
    name: ${DATABASE_NAME}-service 
    labels:
      role: loadbalancer
      cluster: ${DATABASE_NAME}
  spec:
    selector:                  
      lb: ${DATABASE_NAME}-pgpool
    ports:
    - name: lb 
      port: ${PGPORT}
      targetPort: 9999
    sessionAffinity: None
    type: LoadBalancer
- apiVersion: v1 
  kind: DeploymentConfig
  metadata:
    name: ${DATABASE_NAME}-pgpool
  spec:
    replicas: 2
    selector:
      lb: ${DATABASE_NAME}-pgpool
    strategy:
      resources: {}
      rollingParams:
        intervalSeconds: 1
        maxSurge: 25%
        maxUnavailable: 25%
        timeoutSeconds: 600
        updatePeriodSeconds: 1
      type: Rolling
    template:
      metadata:
        labels:
          lb: ${DATABASE_NAME}-pgpool
          role: queryrouter
          cluster: ${DATABASE_NAME}
      spec:
        containers:
        - name: edb-pgpool
          env:
          - name: DATABASE_NAME
            value: ${DATABASE_NAME} 
          - name: PGPORT
            value: ${PGPORT} 
          - name: REPL_USER
            value: ${REPL_USER} 
          - name: ENTERPRISEDB_PASSWORD
            value: 'postgres'
          - name: REPL_PASSWORD
            value: 'postgres'
          - name: ACCEPT_EULA
            value: ${ACCEPT_EULA}
          image: localhost:5000/edb/edb-pgpool:v3.6
          imagePullPolicy: IfNotPresent
          readinessProbe:
            exec:
              command:
              - /var/lib/edb/testIsReady.sh
            initialDelaySeconds: 60
            timeoutSeconds: 5
    triggers:
    - type: ConfigChange
- apiVersion: v1
  kind: DeploymentConfig
  metadata:
    name: ${DATABASE_NAME}-as10-0
  spec:
    replicas: 1
    selector:
      db: ${DATABASE_NAME}-as10-0 
    strategy:
      resources: {}
      rollingParams:
        intervalSeconds: 1
        maxSurge: 25%
        maxUnavailable: 25%
        timeoutSeconds: 600
        updatePeriodSeconds: 1
      type: Rolling
    template:
      metadata:
        creationTimestamp: null
        labels:
          db: ${DATABASE_NAME}-as10-0 
          cluster: ${DATABASE_NAME}
      spec:
        containers:
        - name: edb-as10 
          env:
          - name: DATABASE_NAME 
            value: ${DATABASE_NAME} 
          - name: DATABASE_USER 
            value: ${DATABASE_USER} 
          - name: DATABASE_USER_PASSWORD
            value: 'postgres'
          - name: ENTERPRISEDB_PASSWORD
            value: 'postgres'
          - name: REPL_USER
            value: ${REPL_USER} 
          - name: REPL_PASSWORD
            value: 'postgres'
          - name: PGPORT
            value: ${PGPORT} 
          - name: RESTORE_FILE
            value: ${RESTORE_FILE} 
          - name: LOCALEPARAMETER
            value: ${LOCALEPARAMETER}
          - name: CLEANUP_SCHEDULE
            value: ${CLEANUP_SCHEDULE}
          - name: EFM_EMAIL
            value: ${EFM_EMAIL}
          - name: NAMESERVER
            value: ${NAMESERVER}
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
          - name: POD_NODE
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName 
          - name: POD_IP
            valueFrom:
              fieldRef:
                fieldPath: status.podIP 
          - name: ACCEPT_EULA
            value: ${ACCEPT_EULA}
          image: localhost:5000/edb/edb-as:v10
          imagePullPolicy: IfNotPresent 
          readinessProbe:
            exec:
              command:
              - /var/lib/edb/testIsReady.sh
            initialDelaySeconds: 60
            timeoutSeconds: 5 
          livenessProbe:
            exec:
              command:
              - /var/lib/edb/testIsHealthy.sh
            initialDelaySeconds: 600 
            timeoutSeconds: 60 
          ports:
          - containerPort: ${PGPORT} 
          volumeMounts:
          - name: ${PERSISTENT_VOLUME}
            mountPath: /edbvolume
          - name: ${BACKUP_PERSISTENT_VOLUME}
            mountPath: /edbbackup
          - name: pg-initconf
            mountPath: /initconf
        dnsPolicy: ClusterFirst
        restartPolicy: Always
        volumes:
        - name: ${PERSISTENT_VOLUME}
          persistentVolumeClaim:
            claimName: ${PERSISTENT_VOLUME_CLAIM}
        - name: ${BACKUP_PERSISTENT_VOLUME}
          persistentVolumeClaim:
            claimName: ${BACKUP_PERSISTENT_VOLUME_CLAIM}
        - name: pg-initconf
          configMap:
            name: postgres-map
    triggers:
    - type: ConfigChange
parameters:
- name: DATABASE_NAME
  displayName: Database Name
  description: Name of Postgres database (leave edb for default)
  value: 'edb'
- name: DATABASE_USER
  displayName: Default database user (leave enterprisedb for default)
  description: Default database user
  value: 'enterprisedb'
- name: REPL_USER
  displayName: Repl user
  description: repl database user
  value: 'repl'
- name: PGPORT
  displayName: Database Port
  description: Database Port (leave 5444 for default)
  value: "5444"
- name: LOCALEPARAMETER
  displayName: Locale
  description: Locale of database
  value: ''
- name: CLEANUP_SCHEDULE
  displayName: Host Cleanup Schedule
  description: Standard cron schedule - min (0 - 59), hour (0 - 23), day of month (1 - 31), month (1 - 12), day of week (0 - 6) (0 to 6 are Sunday to Saturday, or use names; 7 is Sunday, the same as 0). Leave it empty if you dont want to cleanup.
  value: '0:0:*:*:*'
- name: EFM_EMAIL
  displayName: Email
  description: Email for EFM
  value: 'none@none.com'
- name: NAMESERVER
  displayName: Name Server for Email
  description: Name Server for Email
  value: '8.8.8.8'
- name: PERSISTENT_VOLUME
  displayName: Persistent Volume
  description: Persistent volume name
  value: ''
  required: true
- name: PERSISTENT_VOLUME_CLAIM 
  displayName: Persistent Volume Claim
  description: Persistent volume claim name
  value: ''
  required: true
- name: BACKUP_PERSISTENT_VOLUME
  displayName: Backup Persistent Volume
  description: Backup Persistent volume name
  value: ''
  required: false
- name: BACKUP_PERSISTENT_VOLUME_CLAIM
  displayName: Backup Persistent Volume Claim
  description: Backup Persistent volume claim name
  value: ''
  required: false
- name: RESTORE_FILE
  displayName: Restore File
  description: Restore file location
  value: ''
- name: ACCEPT_EULA
  displayName: Accept end-user license agreement (leave 'Yes' for default)
  description: Indicates whether user accepts the end-user license agreement
  value: 'Yes'
  required: true
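
Since the template object itself was already created in one of the previous posts, the updated yaml only needs to replace it. A sketch, assuming the file is saved as edb-as10-custom.yaml and you are in the right project:

oc replace -f edb-as10-custom.yaml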

As the template starts with one database replica I scaled that to three, so the setup we start with for PEM looks like this: one master and two replicas, which is the minimum you need for automated failover anyway (the scale command is sketched right after the pod listing):

dwe@dwe:~$ oc get pods -o wide -L role
NAME                 READY     STATUS    RESTARTS   AGE       IP           NODE        ROLE
edb-as10-0-1-4ptdr   1/1       Running   0          7m        172.17.0.5   localhost   standbydb
edb-as10-0-1-8mw7m   1/1       Running   0          5m        172.17.0.6   localhost   standbydb
edb-as10-0-1-krzpp   1/1       Running   0          8m        172.17.0.9   localhost   masterdb
edb-pgpool-1-665mp   1/1       Running   0          8m        172.17.0.8   localhost   queryrouter
edb-pgpool-1-mhgnq   1/1       Running   0          8m        172.17.0.7   localhost   queryrouter

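For reference, scaling the database deployment config to three replicas is a single command (a sketch, assuming the deployment config is named edb-as10-0 as in the listing above):

oc scale dc edb-as10-0 --replicas=3
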
Nothing special has happened so far: we downloaded the new container images, pushed them to the local registry and adjusted the deployment yaml to reference the latest version of the containers. What we want to do now is to create the PEM repository container so that we can add the database to PEM, which will give us monitoring and alerting. As PEM requires persistent storage as well, we need a new storage definition:

[Screenshot: Selection_016 – creating the storage definition in the OpenShift console]
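
As the screenshot only shows the console dialog, here is a roughly equivalent claim definition in yaml. This is a sketch: the actual claim was created through the web console, the access mode is an assumption, and only the name and size are taken from the oc output below:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: edb-pem-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi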

You can of course also get the storage definition using the “oc” command:

dwe@dwe:~$ oc get pvc
NAME                STATUS    VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
edb-bart-claim      Bound     pv0091    100Gi      RWO,ROX,RWX                   16h
edb-pem-claim       Bound     pv0056    100Gi      RWO,ROX,RWX                   50s
edb-storage-claim   Bound     pv0037    100Gi      RWO,ROX,RWX                   16h

The yaml file for the PEM server is this one (notice that the container images referenced come from the local registry):

apiVersion: v1
kind: Template
metadata:
   name: edb-pemserver
   annotations:
    description: "Standard EDB Postgres Enterprise Manager Server 7.3 Deployment Config"
    tags: "pemserver"
    iconClass: "icon-postgresql"
objects:
- apiVersion: v1
  kind: Service
  metadata:
    name: ${DATABASE_NAME}-webservice 
    labels:
      name: ${DATABASE_NAME}-webservice
  spec:
    selector:
      role: pemserver 
    ports:
    - name: https
      port: 30443
      nodePort: 30443
      protocol: TCP
      targetPort: 8443
    - name: http
      port: 30080
      nodePort: 30080
      protocol: TCP
      targetPort: 8080
    type: NodePort
- apiVersion: v1
  kind: DeploymentConfig
  metadata:
    name: edb-pemserver
  spec:
    replicas: 1
    selector:
      app: pemserver 
    strategy:
      resources: {}
      rollingParams:
        intervalSeconds: 1
        maxSurge: 25%
        maxUnavailable: 25%
        timeoutSeconds: 600
        updatePeriodSeconds: 1
      type: Rolling
    template:
      metadata:
        creationTimestamp: null
        labels:
          app: pemserver 
          cluster: ${DATABASE_NAME} 
      spec:
        containers:
        - name: pem-db
          env:
          - name: DATABASE_NAME
            value: ${DATABASE_NAME} 
          - name: DATABASE_USER
            value: ${DATABASE_USER}
          - name: ENTERPRISEDB_PASSWORD
            value: "postgres"
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
          - name: POD_NODE
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName
          - name: POD_IP
            valueFrom:
              fieldRef:
                fieldPath: status.podIP
          - name: PGPORT
            value: ${PGPORT}
          - name: RESTORE_FILE
            value: ${RESTORE_FILE}
          - name: ENABLE_HA_MODE
            value: "No"
          - name: ACCEPT_EULA
            value: ${ACCEPT_EULA}
          image: localhost:5000/edb/edb-as:v10
          imagePullPolicy: Always 
          volumeMounts:
          - name: ${PERSISTENT_VOLUME}
            mountPath: /edbvolume
        - name: pem-webclient 
          image: localhost:5000/edb/edb-pemserver:v7.3
          imagePullPolicy: Always 
          env:
          - name: DATABASE_NAME 
            value: ${DATABASE_NAME} 
          - name: DATABASE_USER 
            value: ${DATABASE_USER} 
          - name: ENTERPRISEDB_PASSWORD
            value: "postgres"
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
          - name: POD_NODE
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName 
          - name: POD_IP
            valueFrom:
              fieldRef:
                fieldPath: status.podIP 
          - name: PGPORT
            value: ${PGPORT}
          - name: CIDR_ADDR
            value: ${CIDR_ADDR}
          - name: ACCEPT_EULA
            value: ${ACCEPT_EULA}
          - name: DEBUG_MODE
            value: ${DEBUG_MODE}
          ports:
          - containerPort: ${PGPORT} 
          volumeMounts:
          - name: ${PERSISTENT_VOLUME}
            mountPath: /edbvolume
          - name: httpd-shm
            mountPath: /run/httpd
        volumes:
        - name: ${PERSISTENT_VOLUME}
          persistentVolumeClaim:
            claimName: ${PERSISTENT_VOLUME_CLAIM}
        - name: httpd-shm 
          emptyDir:
            medium: Memory 
        dnsPolicy: ClusterFirst
        restartPolicy: Always
    triggers:
    - type: ConfigChange
parameters:
- name: DATABASE_NAME
  displayName: Database Name
  description: Name of Postgres database (leave edb for default)
  value: 'pem'
  required: true
- name: DATABASE_USER
  displayName: Default database user (leave enterprisedb for default)
  description: Default database user
  value: 'enterprisedb'
- name: PGPORT
  displayName: Database Port
  description: Database Port (leave 5444 for default)
  value: '5444'
  required: true
- name: PERSISTENT_VOLUME
  displayName: Persistent Volume
  description: Persistent volume name
  value: 'edb-data-pv'
  required: true
- name: PERSISTENT_VOLUME_CLAIM 
  displayName: Persistent Volume Claim
  description: Persistent volume claim name
  value: 'edb-data-pvc'
  required: true
- name: RESTORE_FILE
  displayName: Restore File
  description: Restore file location
  value: ''
- name: CIDR_ADDR 
  displayName: CIDR address block for PEM 
  description: CIDR address block for PEM (leave '0.0.0.0/0' for default) 
  value: '0.0.0.0/0' 
- name: ACCEPT_EULA
  displayName: Accept end-user license agreement (leave 'Yes' for default)
  description: Indicates whether user accepts the end-user license agreement
  value: 'Yes'
  required: true

Again, don’t process the template right now, just save it as a template:
[Screenshot: Selection_001 – saving the yaml as a template in the console]
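
If you'd rather use the command line than the console, uploading the yaml as a template object works as well. A sketch, assuming the file was saved as edb-pemserver.yaml and you are logged in to the right project:

oc create -f edb-pemserver.yaml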

Once we have that available we can start to deploy the PEM server from the catalog:
[Screenshot: Selection_002 – selecting the PEM server template from the catalog]

[Screenshot: Selection_003 – template parameters]

Of course we need to reference the storage definition we created above:
[Screenshot: Selection_004 – referencing the storage definition]

Leave everything else at its defaults and create the deployment:
[Screenshot: Selection_005 – creating the deployment]
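
Deploying from the command line instead of the catalog is possible too; a sketch with oc new-app, where the template name and the claim come from above and edb-pem-pv is just an arbitrary name I picked for the volume entry inside the pod spec:

oc new-app --template=edb-pemserver \
  -p PERSISTENT_VOLUME=edb-pem-pv \
  -p PERSISTENT_VOLUME_CLAIM=edb-pem-claim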

A few minutes later you should have PEM ready:
[Screenshot: Selection_011 – PEM server up and running]
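
You can also follow the progress from the command line (a sketch; the app=pemserver label comes from the template above):

oc get pods -l app=pemserver -w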

To connect to PEM with your browser, have a look at the service definition to get the port:
[Screenshot: Selection_012 – service definition with the exposed port]
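
The same information is available with oc: with the template defaults the service is called pem-webservice and maps NodePort 30443 to the https port of the container. The /pem path below is where the PEM web client is usually served, so treat the URL as a sketch:

oc get svc pem-webservice
# then point your browser to https://<node-ip>:30443/pem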

Once you have that you can connect to PEM:
[Screenshot: Selection_013 – PEM login]
[Screenshot: Selection_014 – PEM dashboard]

In the next post we’ll look at how we can add our existing database deployment to the PEM server we just created, so we can monitor the instances and configure alerting.

 

This article, EDB containers for OpenShift 2.3 – PEM integration, appeared first on the dbi services blog.
