
User Session lost using ADF Application


In one of my missions, I was involved in a new Fusion Middleware 12c (12.2.1.2) installation with an ADF application and an Oracle Reports server instance deployed.
This infrastructure is protected using an Access Manager Single Sign-On server.
In production, the complete environment is fronted by a WAF server terminating HTTPS.
In TEST, the complete environment is fronted by an SSL reverse proxy terminating HTTPS.

In the chosen architecture, all Single Sign-On requests go directly through the reverse proxy to the OAM servers.
The application requests and the Reports requests are routed through an HTTP Server with the WebGate installed.

Below is an extract of the SSL part of the reverse Proxy configuration:
# SSL Virtual Host
<VirtualHost 10.0.1.51:443>
ServerName https://mySite.com
ErrorLog logs/ssl_errors.log
TransferLog logs/ssl_access.log
HostNameLookups off
ProxyPreserveHost On
ProxyPassReverse /oam http://appserver.example.com:14100/oam
ProxyPass /oam http://appserver.example.com:14100/oam
ProxyPassReverse /myCustom-sso-web http://appserver.example.com:14100/myCustom-sso-web
ProxyPass /myCustom-sso-web http://appserver.example.com:14100/myCustom-sso-web
ProxyPass /reports http://appserver.example.com:7778/reports
ProxyPassReverse /reports http://appserver.example.com:7778/reports
ProxyPass /myApplication http://appserver.example.com:7778/myApplication
ProxyPassReverse /myApplication http://appserver.example.com:7778/myApplication
# SSL configuration
SSLEngine on
SSLCertificateFile /etc/httpd/conf/ssl/mySite_com.crt
SSLCertificateKeyFile /etc/httpd/conf/ssl/mySite_com.key
</VirtualHost>

HTTP Server Virtual hosts:
# Local requests
Listen 7778
<VirtualHost *:7778>
ServerName http://appserver.example.com:7778
# Rewrite included for OAM logout redirection
RewriteRule ^/oam/(.*)$ http://appserver.example.com:14100/oam/$1
RewriteRule ^/myCustom-sso-web/(.*)$ http://appserver.example.com:14100/myCustom-sso-web/$1
</VirtualHost>

<VirtualHost *:7778>
ServerName https://mySite.com:443
</VirtualHost>
The ADF application and the Reports server mappings are done using custom configuration files included in the httpd.conf file:
#adf.conf
#----------
<Location /myApplication>
SetHandler weblogic-handler
WebLogicCluster appserver.example.com:9001,appserver1.example.com:9003
WLProxySSLPassThrough ON
</Location>

# Force caching for image files
<FilesMatch "\.(jpg|jpeg|png|gif|swf)$">
Header unset Surrogate-Control
Header unset Pragma
Header unset Cache-Control
Header unset Last-Modified
Header unset Expires
Header set Cache-Control "max-age=86400, public"
Header set Surrogate-Control "max-age=86400"
</FilesMatch>

#reports.conf
#-------------
<Location /reports>
SetHandler weblogic-handler
WebLogicCluster appserver.example.com:9004,appserver1.example.com:9004
DynamicServerList OFF
WLProxySSLPassThrough ON
</Location>
After configuring the ADF application and the Reports Server to be protected through the WebGate, the users can connect and work without any issue during the first 30 minutes.
Then they lose their sessions. We first thought it was related to the session timeout or inactivity timeout.
We increased the values of those timeouts without success.
We checked the logs and found out that the issue was related to the OAM and WebGate cookies.

The OAM Server gets and sets a cookie named OAM_ID.
Each WebGate gets and sets a cookie named OAMAuthnCookie_ + the host name and port.

The contents of the cookies are:

Authenticated User Identity (User DN)
Authentication Level
IP Address
SessionID (Reference to Server side session – OAM11g Only)
Session Validity (Start Time, Refresh Time)
Session InActivity Timeouts (Global Inactivity, Max Inactivity)
Validation Hash

The validity of a WebGate-handled user session is 30 minutes by default; after that, the WebGate checks the OAM cookies.
Those cookies are flagged as secure and were lost because they were not forwarded by the WAF or the reverse proxy, as HTTPS is terminated there.

We needed to change the SSL reverse proxy configuration to send the correct information to the WebLogic Server and HTTP Server about SSL being terminated at the reverse proxy level.
This was done by adding two HTTP headers to the requests before sending them to the Oracle Access Manager or Fusion Middleware HTTP Server.

# For the WebLogic Server to be informed about SSL ending at reverse proxy level
RequestHeader set WL-Proxy-SSL true
# For the Oracle HTTP Server to take the secure cookies in account
RequestHeader set X-Forwarded-Proto "https"
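
These RequestHeader directives (mod_headers) go into the SSL virtual host of the reverse proxy shown earlier. A quick, hedged way to verify the fix from a client is to check that the application answers through the proxy and that the OAM/WebGate cookies come back with the Secure flag (the hostname and context root are the fictitious ones used in this post):

# Request the protected application through the reverse proxy and show
# the response status and any cookies set (look for the "Secure" attribute)
curl -sk -D - -o /dev/null https://mySite.com/myApplication | grep -iE '^HTTP/|^set-cookie'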

The WAF had to be configured to add the same HTTP headers in the production environment.

After those changes, the issue was solved.

 

The article User Session lost using ADF Application appeared first on Blog dbi services.


Configure AFD with Grid Infrastructure software (SIHA & CRS) from the very beginning


Introduction :

Oracle ASM Filter Driver (Oracle ASMFD) simplifies the configuration and management of disk devices by eliminating the need to rebind disk devices used with Oracle ASM each time the system is restarted.
Oracle ASM Filter Driver (Oracle ASMFD) is a kernel module that resides in the I/O path of the Oracle ASM disks. Oracle ASM uses the filter driver to validate write I/O requests to Oracle ASM disks.

In this blog I will explain how to set up the Grid Infrastructure software with AFD on a SIHA or CRS architecture.

Case1. You want to configure AFD from the very beginning (no UDEV, no ASMLib) with SIHA, a Single Instance High Availability installation (formerly Oracle Restart)

Issue :

If we want to use the AFD driver from the very beginning, we should use Oracle AFD to prepare the disks for the ASM instance.
The issue comes from the fact that AFD only becomes available after the installation (it cannot be configured before the installation)!

Solution :

Step1. Install GI stack in software only mode

setup_soft_only

Step2. Run root.sh when prompted, without any other action (do not execute the generated script rootUpgrade.sh)

Step3. Run roothas.pl to setup your HAS stack

[root] /u01/app/grid/product/12.2.0/grid/perl/bin/perl -I /u01/app/grid/product/12.2.0/grid/perl/lib -I /u01/app/grid/product/12.2.0/grid/crs/install /u01/app/grid/product/12.2.0/grid/crs/install/roothas.pl

Step4. As root user proceed to configure AFD

 /u01/app/grid/product/12.2.0/grid/bin/crsctl stop has -f
/u01/app/grid/product/12.2.0/grid/bin/asmcmd afd_configure
/u01/app/grid/product/12.2.0/grid/bin/crsctl start has

Step5. Set up the AFD discovery string for the new devices, as grid user

 /u01/app/grid/product/12.2.0/grid/bin/asmcmd afd_dsset '/dev/sd*'

Step6. Label new disk as root

 /u01/app/grid/product/12.2.0/grid/bin/asmcmd afd_label DISK1 /dev/sdb1

Step7. As grid user, launch ASMCA to create your ASM instance, based on the disk group created on the newly labeled disk, DISK1

disk_AFD

disk_AFD

Step8. Display the AFD driver within the HAS stack.

check_res
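
The check_res screenshot above presumably corresponds to the output of commands such as the following (a sketch, run from the Grid home), which show the resources registered in the HAS stack and the AFD driver state:

# Show the resources registered in the HAS stack (ASM, disk group, listener, ...)
/u00/app/grid/product/12.2.0/grid/bin/crsctl stat res -t
# Show the current state of the ASM Filter Driver on this node
/u00/app/grid/product/12.2.0/grid/bin/asmcmd afd_state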

 

Case2. You want to configure AFD from the very beginning (no UDEV, no ASMLib) with CRS: Cluster Ready Services

Issue :

By installing in software-only mode, you just copy and relink the binaries.
No wrapper scripts (such as crsctl or clsecho) are created.
The issue is that AFD needs the wrapper scripts and not the binaries (crsctl.bin).

Solution :

Step1. Do it on all nodes.

Install Grid Infrastructure on all the nodes of the future cluster in the “Software-only Installation” mode.

setup_soft_only

Step2. Do it on all nodes.

After the installation, the wrapper scripts are not present. You can copy them from any other installation (a SIHA one too) or use a cloned home.

After getting the two scripts, modify the variables inside them so that they are aligned with the current system used for the installation:

ORA_CRS_HOME=/u01/app/grid/product/12.2.0/grid   # should be changed
MY_HOST=dbi1                                     # should be changed
ORACLE_USER=grid
ORACLE_HOME=$ORA_CRS_HOME
ORACLE_BASE=/u01/app/oracle
CRF_HOME=/u01/app/grid/product/12.2.0/grid       # should be changed
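
As a hedged sketch of what this step could look like in practice (the source host and user are assumptions; only the file names crsctl and clsecho and the MY_HOST variable come from the text above):

# Copy the crsctl and clsecho wrapper scripts from an existing Grid home (host/path assumed)
scp grid@otherhost:/u01/app/grid/product/12.2.0/grid/bin/crsctl  /u01/app/grid/product/12.2.0/grid/bin/
scp grid@otherhost:/u01/app/grid/product/12.2.0/grid/bin/clsecho /u01/app/grid/product/12.2.0/grid/bin/
# Adapt the host name inside the scripts to the local node (dbi1 in this example)
sed -i 's/^MY_HOST=.*/MY_HOST=dbi1/' /u01/app/grid/product/12.2.0/grid/bin/crsctl /u01/app/grid/product/12.2.0/grid/bin/clsecho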

Step3. Do it on all nodes

Configure AFD :

[root@dbi1 grid]# /u01/app/grid/product/12.2.0/grid/bin/asmcmd afd_configure
AFD-627: AFD distribution files found.
AFD-634: Removing previous AFD installation.
AFD-635: Previous AFD components successfully removed.
AFD-636: Installing requested AFD software.
AFD-637: Loading installed AFD drivers.
AFD-9321: Creating udev for AFD.
AFD-9323: Creating module dependencies - this may take some time.
AFD-9154: Loading 'oracleafd.ko' driver.
AFD-649: Verifying AFD devices.
AFD-9156: Detecting control device '/dev/oracleafd/admin'.
AFD-638: AFD installation correctness verified.
Modifying resource dependencies - this may take some time.

Step4. Do it only on the first node.

Scan & label the new disks using AFD.

/u01/app/grid/product/12.2.0/grid/bin/asmcmd afd_label DISK1 /dev/sdb1
/u01/app/grid/product/12.2.0/grid/bin/asmcmd afd_label DISK2 /dev/sdc1
/u01/app/grid/product/12.2.0/grid/bin/asmcmd afd_label DISK3 /dev/sdd1
[root@dbi1 grid]# /u01/app/grid/product/12.2.0/grid/bin/asmcmd afd_scan
[root@dbi1 grid]# /u01/app/grid/product/12.2.0/grid/bin/asmcmd afd_lsdsk
--------------------------------------------------------------------------------

Label                     Filtering   Path

================================================================================

DISK1                       ENABLED   /dev/sdb1

DISK2                       ENABLED   /dev/sdc1

DISK3                       ENABLED   /dev/sdd1

Step5. Do it on the other nodes.

Scan and display the disks on the other nodes of the future cluster. No need to label them again.

[root@dbi2 grid]# /u01/app/grid/product/12.2.0/grid/bin/asmcmd afd_scan
[root@dbi2 grid]# /u01/app/grid/product/12.2.0/grid/bin/asmcmd afd_lsdsk
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
DISK1                       ENABLED   /dev/sdb1
DISK2                       ENABLED   /dev/sdc1
DISK3                       ENABLED   /dev/sdd1

Step6. Do it on 1st node

Run the script config.sh as oracle/grid user

/u01/app/grid/product/12.2.0/grid/crs/config/config.sh

config_luster

Step7. Do it on 1st node

Setup the connectivity between all the future nodes of the cluster and follow the wizard.

conn_all_nodes

Step8. Do it on 1st node

You will be asked to create an ASM disk group.

Normally, without the previous steps, this would not be possible, as there is no UDEV, no ASMLib and no AFD configured, and therefore no labeled disks for that step.

create_asm_DG

But…….

Step9. Do it on 1st node

Change the discovery path to ‘AFD:*’ and you should retrieve the disks labeled in the previous step.

afd_path

Step10. Do it on 1st node

Provide the AFD labeled disks to create the ASM disk group for the OCR files. Uncheck “Configure Oracle ASM Filter Driver”.

CREATE_ASM_DG_2

Step11. Do it on 1st node

Finalize the configuration as per documentation.

 

Additionally, an easier way to install/configure the ASM Filter Driver can be found here:
https://blog.dbi-services.com/oracle-18c-cluster-with-oracle-asm-filter-driver/

Summary: Using the scenarios described above, we can configure the Grid Infrastructure stack with AFD on a SIHA or CRS architecture.

 

The article Configure AFD with Grid Infrastructure software (SIHA & CRS) from the very beginning appeared first on Blog dbi services.

Java9 new features


java9

Java9 is on its way now. In this blog I'll talk about the new features I found interesting, the performance, and so on.

Configure Eclipse for Java9

Prior to Eclipse Oxygen 4.7.1a, you'll have to configure Eclipse a little bit to make it run your Java9 projects.

Add the following in eclipse.ini after --launcher.appendVmargs:

-vm
C:\Program Files\Java\jdk-9.0.4\bin\javaw.exe

 

Still in eclipse.ini add:

--add-modules=ALL-SYSTEM

 

You should have something like this:

--launcher.appendVmargs
-vm
C:\Program Files\Java\jdk-9.0.4\bin\javaw.exe
-vmargs
-Dosgi.requiredJavaVersion=1.6
-Xms40m
-Xmx512m
--add-modules=ALL-SYSTEM

New Features

 Modules

Like a lot of other languages, and in order to obfuscate the code a little more, Java is going to use modules. It simply means that you'll be able to make your code require a specific library. This is quite helpful for small memory devices that do not need the whole JVM to be loaded. You can find a list of the available modules here.

When creating a module, you’ll generate a file called module-info.java which will be like:

module test.java9 {
	requires com.dbiservices.example.engines;
	exports com.dbiservices.example.car;
}

Here my module requires the “engines” module and exports the “car” package. This allows loading only the classes related to our business and not some side libraries; it helps manage memory more efficiently but also requires some understanding of the module system. In addition, it creates a real dependency system between jars and prevents using public classes that were not supposed to be exposed through the API. It prevents some strange behavior when you have duplicate entries, like several jar versions in the classpath. All non-exported packages are encapsulated by default.
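
To see the module system in action outside Eclipse, here is a hedged sketch of compiling and running such a project from the command line (the src/out directory layout and the Main class are assumptions; only the module names come from the example above):

# Compile the required module first, then the test.java9 module against it (layout assumed)
javac -d out/com.dbiservices.example.engines $(find src/com.dbiservices.example.engines -name "*.java")
javac --module-path out -d out/test.java9 $(find src/test.java9 -name "*.java")
# Run a (hypothetical) main class from the test.java9 module
java --module-path out -m test.java9/com.dbiservices.example.car.Main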

 JShell

Java9 now provides JShell: like in other languages, you can now execute Java code through an interactive shell prompt. Simply start jshell from the bin folder of the JDK:

jshell

This kind of tool can greatly improve productivity for small tests; you don't have to create small testing classes anymore. It is very useful for testing regular expressions, for example.

New HTTP API

The old HTTP API is finally being upgraded. It now supports WebSockets and the HTTP/2 protocol out of the box. For the moment the API is placed in an incubator module, which means it can still change a little, but you can start playing with it like the following:

import java.io.IOException;
import java.net.URI;
import jdk.incubator.http.*;

public class Run {

  public static void main(String[] args) throws IOException, InterruptedException {
    HttpClient client = HttpClient.newHttpClient();
    HttpRequest req = HttpRequest.newBuilder(URI.create("http://www.google.com"))
                                 .header("User-Agent", "Java")
                                 .GET()
                                 .build();
    HttpResponse<String> resp = client.send(req, HttpResponse.BodyHandler.asString());
    // Print the HTTP status code of the response
    System.out.println(resp.statusCode());
  }
}

You’ll have to setup module-info.java accordingly:

module test.java9 {
	requires jdk.incubator.httpclient;
}

 Private interface methods

Since Java 8, an interface can contain behavior instead of only method signatures. But if you have several default methods doing quite the same thing, you would usually refactor those methods into a private one. However, in Java 8 interface methods can't be private. In Java 9 you can add private helper methods to interfaces, which solves this issue:

public interface CarContract {

	void normalMethod();
	default void defaultMethod() {doSomething();}
	default void secondDefaultMethod() {doSomething();}
	
	private void doSomething(){System.out.println("Something");}
}

The private method “doSomething()” is hidden from the exposure of the interface.

 Unified JVM Logging

Java 9 adds a handy feature to debug the JVM thanks to logging. You can now enable logging for different tags like gc, compiler, threads and so on. You can set it thanks to the command line parameter -Xlog. Here’s an example of the configuration for the gc tag, using debug level without decoration:

-Xlog:gc=debug:file=log/gc.log:none

And the result:

ConcGCThreads: 2
ParallelGCThreads: 8
Initialize mark stack with 4096 chunks, maximum 16384
Using G1
GC(0) Pause Young (G1 Evacuation Pause) 24M->4M(254M) 5.969ms
GC(1) Pause Young (G1 Evacuation Pause) 59M->20M(254M) 21.708ms
GC(2) Pause Young (G1 Evacuation Pause) 50M->31M(254M) 20.461ms
GC(3) Pause Young (G1 Evacuation Pause) 84M->48M(254M) 30.398ms
GC(4) Pause Young (G1 Evacuation Pause) 111M->70M(321M) 31.902ms

We can even combine tags:

-Xlog:gc+heap=debug:file=log/heap.log:none

Which results in this:

Heap region size: 1M
Minimum heap 8388608  Initial heap 266338304  Maximum heap 4248829952
GC(0) Heap before GC invocations=0 (full 0):
GC(0)  garbage-first heap   total 260096K, used 24576K [0x00000006c2c00000, 0x00000006c2d007f0, 0x00000007c0000000)
GC(0)   region size 1024K, 24 young (24576K), 0 survivors (0K)
GC(0)  Metaspace       used 6007K, capacity 6128K, committed 6272K, reserved 1056768K
GC(0)   class space    used 547K, capacity 589K, committed 640K, reserved 1048576K
GC(0) Eden regions: 24->0(151)
GC(0) Survivor regions: 0->1(3)
GC(0) Old regions: 0->0
GC(0) Humongous regions: 0->0
GC(0) Heap after GC invocations=1 (full 0):
GC(0)  garbage-first heap   total 260096K, used 985K [0x00000006c2c00000, 0x00000006c2d007f0, 0x00000007c0000000)
GC(0)   region size 1024K, 1 young (1024K), 1 survivors (1024K)
GC(0)  Metaspace       used 6007K, capacity 6128K, committed 6272K, reserved 1056768K
GC(0)   class space    used 547K, capacity 589K, committed 640K, reserved 1048576K
GC(1) Heap before GC invocations=1 (full 0):
GC(1)  garbage-first heap   total 260096K, used 155609K [0x00000006c2c00000, 0x00000006c2d007f0, 0x00000007c0000000)
GC(1)   region size 1024K, 152 young (155648K), 1 survivors (1024K)
GC(1)  Metaspace       used 6066K, capacity 6196K, committed 6272K, reserved 1056768K
GC(1)   class space    used 548K, capacity 589K, committed 640K, reserved 1048576K
GC(1) Eden regions: 151->0(149)
GC(1) Survivor regions: 1->3(19)
...
...

There are other new features not detailed here, but you can find a list here.

 

The article Java9 new features appeared first on Blog dbi services.

MongoDB OPS Manager


MongoDB OPS Manager (MMS) is a tool for administering and managing MongoDB deployments, particularly large clusters. MongoDB Inc. qualified it as “the best way to manage your MongoDB data center”. OPS Manager also allows you to deploy a complete MongoDB cluster on multiple nodes and in several topologies. As you know, at dbi services the MongoDB installation is based on our best practices, especially the MFA (MongoDB Flexible Architecture); more information here.

Is OPS Manager compatible with our installation best practices and our MongoDB DMK? For this reason, I would like to post a guide for the installation and configuration of OPS Manager (MMS) based on the dbi services best practices.

In this installation guide, we'll use the latest version of OPS Manager, release 4.0.2. We'll install OPS Manager as a single instance, which is recommended for tests and proofs of concept.

Testing Environment

We'll use a Docker container provisioned in the Swiss public cloud Hidora. Below is the information about the container:

  • CentOS 7
  • Add a Public IP
  • Endpoints configuration for: MongoDB DB port 27017, FTP port 21, SSH port 22, OPS Manager interface port 8080

Hidora_Endpoints_MongoDB

MongoDB Installation

Once your container has been provisioned, you can start the installation of MongoDB. It's important to know that OPS Manager needs a MongoDB database in order to store the application information. That's why we need to install and start a mongod database first.

For more details about the MongoDB Installation, you can refer to a previous blog.

[root@node32605-env-4486959]# mkdir -p /u00/app/mongodb/{local,admin,product}

[root@node32605-env-4486959]# mkdir -p /u01/mongodbdata/
[root@node32605-env-4486959]# mkdir -p /u01/mongodbdata/{appdb,bckpdb}
[root@node32605-env-4486959]# mkdir -p /u02/mongodblog/
[root@node32605-env-4486959]# mkdir -p /u02/mongodblog/{applog,bckplog}
[root@node32605-env-4486959]# mkdir -p /u99/mongodbbackup/ 

[root@node32605-env-4486959]# chown -R mongodb:mongodb /u00/app/mongodb/ /u01/mongodbdata/ /u02/mongodblog/ /u99/mongodbbackup/

Let’s now download the latest MongoDB and OPS Manager releases from the MongoDB Download Center.

[root@node32605-env-4486959 opt]# wget https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-rhel70-4.0.2.tgz
[root@node32605-env-4486959 opt]# wget https://downloads.mongodb.com/on-prem-mms/tar/mongodb-mms-4.0.2.50187.20180905T1454Z-1.x86_64.tar.gz

Based on the MFA, move the software inside the /product folder.

[root@node32605-env-4486959 opt]# mv mongodb-linux-x86_64-rhel70-4.0.2.tgz mongodb-mms-4.0.2.50187.20180905T1454Z-1.x86_64.tar.gz /u00/app/mongodb/product/

Permissions and Extraction:

[root@node32605-env-4486959 product]# chown -R mongodb:mongodb /u00/app/mongodb/product/*
[root@node32605-env-4486959 product]# su - mongodb
[mongodb@node32605-env-4486959 product]$ tar -xzf mongodb-linux-x86_64-rhel70-4.0.2.tgz
[mongodb@node32605-env-4486959 product]$ tar -xzf mongodb-mms-4.0.2.50187.20180905T1454Z-1.x86_64.tar.gz

Run mongo databases for OPS Manager and Backup:

[mongodb@node32605-env-4486959 bin]$ ./mongod --port 27017 --dbpath /u01/mongodbdata/appdb/ --logpath /u02/mongodblog/applog/mongodb.log --wiredTigerCacheSizeGB 1 --fork
[mongodb@node32605-env-4486959 bin]$ ./mongod --port 27018 --dbpath /u01/mongodbdata/bckpdb/ --logpath /u02/mongodblog/bckplog/mongodb.log --wiredTigerCacheSizeGB 1 --fork
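
Before configuring OPS Manager, a quick sanity check that both instances answer can be done with the mongo shell from the same bin directory (a minimal sketch):

# Both instances should answer the ping command with { "ok" : 1 }
./mongo --port 27017 --quiet --eval 'db.runCommand({ping: 1})'
./mongo --port 27018 --quiet --eval 'db.runCommand({ping: 1})'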

Once the 2 databases have been successfully started, we can configure and start the OPS Manager application.

First, we need to configure the URL used to access OPS Manager.

[mongodb@node32605-env-4486959 ~]$ cd /u00/app/mongodb/product/mongodb-mms-4.0.2.50187.20180905T1454Z-1.x86_64/conf

Edit the conf-mms.properties file and add the following lines:

mongo.mongoUri=mongodb://127.0.0.1:27017/?maxPoolSize=150
mongo.ssl=false
mms.centralUrl=http://xxx.xxx.xx.xx:8080

Replace the xxx.xxx.xx.xx by your public IP or DNS name.

[mongodb@node32605-env-4486959 ~]$ cd /u00/app/mongodb/product/mongodb-mms-4.0.2.50187.20180905T1454Z-1.x86_64/bin
[mongodb@node32605-env-4486959 bin]$ ./mongodb-mms start

 OPS Manager configuration

Access to the OPS Manager application through the following URL:

http://public_ip:8080

MongoDB_UI

 

You need to register for the first time.

MongoDB_Register

 

Once your account has been created, configure the OPS Manager access URL.

MongoDB_URL

Then configure your email settings.

MongoDB_EmailSettings

Click on Continue and configure the User Authentication, Backup Snapshots, Proxy.

Finish with the OPS Manager versions configuration.

MongoDB_Version_Management

 

Congratulations, you have finished the installation. You can now start using OPS Manager and deploy a MongoDB cluster.

MongoDB_OPSManager

 

 

 

The article MongoDB OPS Manager appeared first on Blog dbi services.

Oracle 12.2 : Windows Virtual Account


With Oracle 12.2 we can use a Virtual Account during the Oracle installation on Windows. Virtual Accounts allow you to install an Oracle Database and create and manage database services without passwords. A Virtual Account can be used as the Oracle Home User for Oracle Database single-instance installations and does not require a user name or password during installation and administration.
In this blog I want to share an experience I had with Windows Virtual Accounts when installing Oracle.
I was setting up an Oracle environment on Windows Server 2016 for a client. During the installation I decided to use the Virtual Account option.
Capture1
After the installation of Oracle, I created a database named PROD, and everything was fine.

SQL*Plus: Release 12.2.0.1.0 Production on Wed Sep 19 05:43:05 2018

Copyright (c) 1982, 2016, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Standard Edition Release 12.2.0.1.0 - 64bit Production

SQL> select name,open_mode from v$database;

NAME      OPEN_MODE
--------- --------------------
PROD      READ WRITE

SQL>

SQL> show parameter spfile

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
spfile                               string      C:\APP\ORACLE\PRODUCT\12.2.0\D
                                                 BHOME_1\DATABASE\SPFILEPROD.ORA
                                                
SQL>

Looking into the properties of my spfile, I can see that there is a Windows group named ORA_OraDB12Home1_SVCACCTS
namedgroup
which has full control of the spfile. Indeed, as we used the Virtual Account to install the Oracle software, Oracle automatically creates this group and uses it for some tasks.
Capture2
After the first database, the client asked to create a second one. Using DBCA I created a second database, let's say ORCL.
After the creation of ORCL, I changed some configuration parameters of the first database PROD and decided to restart it. Then I was surprised by the following error.

SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORA-01078: failure in processing system parameters
LRM-00109: could not open parameter file 'C:\APP\ORACLE\PRODUCT\12.2.0\DBHOME_1\DATABASE\INITPROD.ORA'
SQL>

Wow!! What happened is that when using DBCA to create the second database ORCL, Oracle changed the properties of the spfile of the first database PROD (spfilePROD.ora). Yes, it's strange, but this is exactly what happened. The Virtual Group was replaced by OracleServiceORCL.
Capture3

On the other side, the ORCL spfile was fine.
Capture4

So I decided to remove OracleServiceORCL from the properties of the PROD spfile and to add back the Virtual Group.
Capture5
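
The same ACL fix done here through the file properties dialog could presumably also be scripted from an elevated command prompt with icacls; a hedged sketch (the spfile path and the group/service names are the ones from my environment):

REM Remove the OracleServiceORCL entry from the PROD spfile ACL
icacls "C:\app\oracle\product\12.2.0\dbhome_1\database\SPFILEPROD.ORA" /remove "NT SERVICE\OracleServiceORCL"
REM Grant full control back to the Oracle home virtual accounts group
icacls "C:\app\oracle\product\12.2.0\dbhome_1\database\SPFILEPROD.ORA" /grant "ORA_OraDB12Home1_SVCACCTS:(F)"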

And then I was able to start the PROD database:

SQL> startup
ORACLE instance started.

Total System Global Area  524288000 bytes
Fixed Size                  8748760 bytes
Variable Size             293601576 bytes
Database Buffers          213909504 bytes
Redo Buffers                8028160 bytes
Database mounted.
Database opened.
SQL>

But this issue means that every time I create a new database with DBCA, the properties of the spfiles of the other databases may be changed, and this is not normal.
While checking this strange issue I found the following Oracle Support note:
DBCA Using Virtual Account Incorrectly Sets The SPFILE Owner (Doc ID 2410452.1)

So I decided to apply the patches recommended by Oracle:
Oracle Database 12.2.0.1.180116BP
26615680

C:\Users\Administrator>c:\app\oracle\product\12.2.0\dbhome_1\OPatch\opatch lspatches
26615680;26615680:SI DB CREATION BY DBCA IN VIRTUAL ACCOUNT INCORRECTLY SETS THE ACL FOR FIRST DB
27162931;WINDOWS DB BUNDLE PATCH 12.2.0.1.180116(64bit):27162931

Then I created a new database TEST to see if the patches had corrected the issue.
Well, I was able to restart all databases without any errors. But looking into the properties of the 3 databases, we can see that the patch added back the Virtual Group, but the service of the last database is still present for the previous databases. I don't really understand why OracleServiceTest should be present in spfilePROD.ora and spfileORCL.ora.

Capture6

Capture7

Capture8

Conclusion: In this blog I shared an issue I experienced with Windows Virtual Accounts. I hope this will help.

 

The article Oracle 12.2 : Windows Virtual Account appeared first on Blog dbi services.

Upgrade Oracle Internet Directory from 11G (11.1.1.9) to 12C (12.2.1.3)


There is no in-place upgrade from OID 11.1.1.9 to OID 12c (12.2.1.3). The steps to follow are:

  1. Install the required JDK version
  2. Install the Fusion Middleware Infrastructure 12c (12.2.1.3)
  3. Install the OID 12C (12.2.1.3) in the Fusion Middleware Infrastructure Home
  4. Upgrade the existing OID database schemas
  5. Reconfigure the OID WebLogic Domain
  6. Upgrade the OID WebLogic Domain

1. Install JDK 1.8.0_131+

I have used JDK 1.8.0_161

cd /u00/app/oracle/product/Java
tar xvf ~/software/jdk1.8.0_161

Set JAVA_HOME and add $JAVA_HOME/bin to the PATH.
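
For example (assuming the JDK was extracted under /u00/app/oracle/product/Java as above):

# Point the environment to the freshly extracted JDK and verify it
export JAVA_HOME=/u00/app/oracle/product/Java/jdk1.8.0_161
export PATH=$JAVA_HOME/bin:$PATH
java -version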

2. Install Fusion Middleware Infrastructure 12.2.1.3  software

I will not go into the details, as this is a simple Fusion Middleware Infrastructure 12.2.1.3 software installation.
This software contains WebLogic 12.2.1.3; there is no need to install a separate WebLogic software.

I used MW_HOME set to /u00/app/oracle/product/oid12c

java -jar ~/software/fmw_12.2.1.3_infrastructure.jar

3. Install OID 12C software

This part is just a software installation; you just need to follow the steps in the installation wizard.

cd ~/software/
./fmw_12.2.1.3.0_oid_linux64.bin

4. Check the existing schemas:

In SQLPLUS connected as SYS run the following query

SET LINE 120
COLUMN MRC_NAME FORMAT A14
COLUMN COMP_ID FORMAT A20
COLUMN VERSION FORMAT A12
COLUMN STATUS FORMAT A9
COLUMN UPGRADED FORMAT A8
SELECT MRC_NAME, COMP_ID, OWNER, VERSION, STATUS, UPGRADED FROM SCHEMA_VERSION_REGISTRY ORDER BY MRC_NAME, COMP_ID ;

The results:

MRC_NAME       COMP_ID              OWNER                          VERSION      STATUS    UPGRADED
-------------- -------------------- ------------------------------ ------------ --------- --------
DEFAULT_PREFIX OID                  ODS                            11.1.1.9.0   VALID     N
IAM            IAU                  IAM_IAU                        11.1.1.9.0   VALID     N
IAM            MDS                  IAM_MDS                        11.1.1.9.0   VALID     N
IAM            OAM                  IAM_OAM                        11.1.2.3.0   VALID     N
IAM            OMSM                 IAM_OMSM                       11.1.2.3.0   VALID     N
IAM            OPSS                 IAM_OPSS                       11.1.1.9.0   VALID     N
OUD            IAU                  OUD_IAU                        11.1.1.9.0   VALID     N
OUD            MDS                  OUD_MDS                        11.1.1.9.0   VALID     N
OUD            OPSS                 OUD_OPSS                       11.1.1.9.0   VALID     N

9 rows selected.

I have an OID 11.1.1.9 and an IAM 11.1.2.3 installation using the same database as repository.

5. ODS Schema upgrade:

Take care to upgrade only the ODS schema and not the IAM schemas, or the Access Manager will not work any more.
Associated with OID 11.1.1.9, only the ODS schema was installed; the ODS upgrade requires creating new schemas.

cd /u00/app/oracle/product/oid12c/oracle_common/upgrade/bin/
./ua

Oracle Fusion Middleware Upgrade Assistant 12.2.1.3.0
Log file is located at: /u00/app/oracle/product/oid12c/oracle_common/upgrade/logs/ua2018-01-26-11-13-37AM.log
Reading installer inventory, this will take a few moments...
...completed reading installer inventory.

In the following, I provide the most important screenshots of the “ODS schema upgrade”:

ODS schema upgrade 1

ODS schema upgrade 2
Checked the schema validity:

ODS schema upgrade 3

ODS schema upgrade 4

ODS schema upgrade 5

ODS schema upgrade 6

ODS schema upgrade 7

ODS schema upgrade 8

In SQLPLUS connected as SYS run the following query

SET LINE 120
COLUMN MRC_NAME FORMAT A14
COLUMN COMP_ID FORMAT A20
COLUMN VERSION FORMAT A12
COLUMN STATUS FORMAT A9
COLUMN UPGRADED FORMAT A8
SELECT MRC_NAME, COMP_ID, OWNER, VERSION, STATUS, UPGRADED FROM SCHEMA_VERSION_REGISTRY ORDER BY MRC_NAME, COMP_ID;

MRC_NAME       COMP_ID              OWNER                          VERSION      STATUS    UPGRADED
-------------- -------------------- ------------------------------ ------------ --------- --------
DEFAULT_PREFIX OID                  ODS                            12.2.1.3.0   VALID     Y
IAM            IAU                  IAM_IAU                        11.1.1.9.0   VALID     N
IAM            MDS                  IAM_MDS                        11.1.1.9.0   VALID     N
IAM            OAM                  IAM_OAM                        11.1.2.3.0   VALID     N
IAM            OMSM                 IAM_OMSM                       11.1.2.3.0   VALID     N
IAM            OPSS                 IAM_OPSS                       11.1.1.9.0   VALID     N
OID12C         IAU                  OID12C_IAU                     12.2.1.2.0   VALID     N
OID12C         IAU_APPEND           OID12C_IAU_APPEND              12.2.1.2.0   VALID     N
OID12C         IAU_VIEWER           OID12C_IAU_VIEWER              12.2.1.2.0   VALID     N
OID12C         OPSS                 OID12C_OPSS                    12.2.1.0.0   VALID     N
OID12C         STB                  OID12C_STB                     12.2.1.3.0   VALID     N
OID12C         WLS                  OID12C_WLS                     12.2.1.0.0   VALID     N
OUD            IAU                  OUD_IAU                        11.1.1.9.0   VALID     N
OUD            MDS                  OUD_MDS                        11.1.1.9.0   VALID     N
OUD            OPSS                 OUD_OPSS                       11.1.1.9.0   VALID     N

15 rows selected.

I named the new OID repository schemas OID12C during the ODS upgrade.

6. Reconfigure the domain

cd /u00/app/oracle/product/oid12c/oracle_common/common/bin/
./reconfig.sh -log=/tmp/reconfig.log -log_priority=ALL

See screen shots “Reconfigure Domain”
Reconfigure Domain 1
Reconfigure Domain 2
Reconfigure Domain 3
Reconfigure Domain 4
Reconfigure Domain 5
Reconfigure Domain 6
Reconfigure Domain 7
Reconfigure Domain 8
Reconfigure Domain 9
Reconfigure Domain 10
Reconfigure Domain 11
Reconfigure Domain 12
Reconfigure Domain 13
Reconfigure Domain 14
Reconfigure Domain 15
Reconfigure Domain 16
Reconfigure Domain 17
Reconfigure Domain 18
Reconfigure Domain 19
Reconfigure Domain 20
Reconfigure Domain 21
Reconfigure Domain 22
Reconfigure Domain 23
Reconfigure Domain 24
Reconfigure Domain 25

7. Upgrading Domain Component Configurations

cd ../../upgrade/bin/
./ua

Oracle Fusion Middleware Upgrade Assistant 12.2.1.3.0
Log file is located at: /u00/app/oracle/product/oid12c/oracle_common/upgrade/logs/ua2018-01-26-12-18-12PM.log
Reading installer inventory, this will take a few moments…

The following are the screen shots of the upgrade of the WebLogic Domain configuration

upgrade domain component configuration 1
upgrade domain component configuration 2
upgrade domain component configuration 3
upgrade domain component configuration 4
upgrade domain component configuration 5
upgrade domain component configuration 6
upgrade domain component configuration 7

8. Start the domain

For this first start, I will use the normal start scripts installed when upgrading the domain, in separate PuTTY sessions, to see the traces.

Putty Session 1:

cd /u01/app/OID/user_projects/domains/IDMDomain/bin
# Start the Admin Server in the first putty
./startWebLogic.sh

Putty Session 2:

cd /u01/app/OID/user_projects/domains/IDMDomain/bin
# In an other shell session start the node Manager:
./startNodeManager.sh

Putty Session 3:

cd /u01/app/OID/user_projects/domains/IDMDomain/bin
./startComponent.sh oid1

Starting system Component oid1 ...

Initializing WebLogic Scripting Tool (WLST) ...

Welcome to WebLogic Server Administration Scripting Shell

Type help() for help on available commands

Reading domain from /u01/app/OID/user_projects/domains/IDMDomain

Please enter Node Manager password:
Connecting to Node Manager ...
<Jan 26, 2018 1:02:08 PM CET> <Info> <Security> <BEA-090905> <Disabling the CryptoJ JCE Provider self-integrity check for better startup performance. To enable this check, specify -Dweblogic.security.allowCryptoJDefaultJCEVerification=true.>
<Jan 26, 2018 1:02:08 PM CET> <Info> <Security> <BEA-090906> <Changing the default Random Number Generator in RSA CryptoJ from ECDRBG128 to HMACDRBG. To disable this change, specify -Dweblogic.security.allowCryptoJDefaultPRNG=true.>
<Jan 26, 2018 1:02:08 PM CET> <Info> <Security> <BEA-090909> <Using the configured custom SSL Hostname Verifier implementation: weblogic.security.utils.SSLWLSHostnameVerifier$NullHostnameVerifier.>
Successfully Connected to Node Manager.
Starting server oid1 ...
Successfully started server oid1 ...
Successfully disconnected from Node Manager.

Exiting WebLogic Scripting Tool.

Done

The ODSM application is now deployed in the WebLogic Administration Server, and the WLS_ODS1 WebLogic Server from the previous OID 11g administration domain is not used any more.

http://host01.example.com:7002/odsm

7002 is the Administration Server port for this domain.

 

The article Upgrade Oracle Internet Directory from 11G (11.1.1.9) to 12C (12.2.1.3) appeared first on Blog dbi services.

SQL Plan stability in 11G using stored outlines


A stored outline is a collection of hints associated with a specific SQL statement that allows a standard execution plan to be maintained, regardless of changes in the system environment or associated statistics. Plan stability is based on the preservation of execution plans at a point in time where the performance of a statement is considered acceptable. The outlines are stored in the OL$, OL$HINTS, and OL$NODES tables, but the [USER|ALL|DBA]_OUTLINES and [USER|ALL|DBA]_OUTLINE_HINTS views should be used to display information about existing outlines.

All of the caveats associated with optimizer hints apply equally to stored outlines. Under normal running, the optimizer chooses the most suitable execution plan for the current circumstances. By using a stored outline you may be forcing the optimizer to choose a substandard execution plan, so you should monitor the effects of your stored outlines over time to make sure this isn't happening. Remember, what works well today may not tomorrow.

Many times we are in the situation where the performance of a query regresses, or the optimizer is not able to choose the best execution plan.

In the next lines I will try to describe a scenario that needs the usage of a stored outline:

– We will identify the different plans that exist for our sql_id:

SQL> select hash_value,child_number,sql_id,executions from v$sql where sql_id='574gkc8gn7u0h';

HASH_VALUE CHILD_NUMBER SQL_ID        EXECUTIONS 
---------- ------------ ------------- ---------- 
 524544016            0 574gkc8gn7u0h          4 
 576321033            1 574gkc8gn7u0h          5

 

Between the two different plans, we know that the best one is the one with cost 15 and plan hash value 4013416232, but it is not always chosen by the optimizer, causing performance peaks.

SQL> select * from table(dbms_xplan.display_cursor('574gkc8gn7u0h',0));

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------

SQL_ID  574gkc8gn7u0h, child number 0
-------------------------------------
Select   m.msg_message_id,   m.VersionId,   m.Knoten_id,
m.Poly_User_Id,   m.State,   'U' as MutationsCode from
........................................................

Plan hash value: 4013416232

-------------------------------------------------------------------------------------------------------------
| Id  | Operation                      | Name                       | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |                            |       |       |    15 (100)|       |
|   1 |  UNION-ALL                     |                            |       |       |            |       |
|*  2 |   FILTER                       |                            |       |       |            |       |
|   3 |    NESTED LOOPS                |                            |       |       |            |       |
|   4 |     NESTED LOOPS               |                            |     1 |    76 |     7  (15)| 00:00:01 |
|   5 |      MERGE JOIN CARTESIAN      |                            |     1 |    52 |     5  (20)| 00:00:01 |
|   6 |       SORT UNIQUE              |                            |     1 |    26 |     2   (0)| 00:00:01 |
|*  7 |        TABLE ACCESS FULL       | TMPINTLIST                 |     1 |    26 |     2   (0)| 00:00:01 |
|   8 |       BUFFER SORT              |                            |     1 |    26 |     3  (34)| 00:00:01 |
|   9 |        SORT UNIQUE             |                            |     1 |    26 |     2   (0)| 00:00:01 |
|* 10 |         TABLE ACCESS FULL      | TMPINTLIST                 |     1 |    26 |     2   (0)| 00:00:01 |
|* 11 |      INDEX RANGE SCAN          | XAK2_MSG_MESSAGE_ENTRY     |     1 |       |     1   (0)| 00:00:01 |
|* 12 |     TABLE ACCESS BY INDEX ROWID| MSG_MESSAGE_ENTRY          |     1 |    24 |     2   (0)| 00:00:01 |
|* 13 |   FILTER                       |                            |       |       |            |       |
|  14 |    NESTED LOOPS                |                            |       |       |            |       |
|  15 |     NESTED LOOPS               |                            |     1 |    76 |     8  (13)| 00:00:01 |
|  16 |      MERGE JOIN CARTESIAN      |                            |     1 |    52 |     5  (20)| 00:00:01 |
|  17 |       SORT UNIQUE              |                            |     1 |    26 |     2   (0)| 00:00:01 |
|* 18 |        TABLE ACCESS FULL       | TMPINTLIST                 |     1 |    26 |     2   (0)| 00:00:01 |
|  19 |       BUFFER SORT              |                            |     1 |    26 |     3  (34)| 00:00:01 |
|  20 |        SORT UNIQUE             |                            |     1 |    26 |     2   (0)| 00:00:01 |
|* 21 |         TABLE ACCESS FULL      | TMPINTLIST                 |     1 |    26 |     2   (0)| 00:00:01 |
|* 22 |      INDEX RANGE SCAN          | XAK3_MSG_MESSAGE_ENTRY_DEL |     1 |       |     2   (0)| 00:00:01 |
|  23 |     TABLE ACCESS BY INDEX ROWID| MSG_MESSAGE_ENTRY_DEL      |     1 |    24 |     3   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(:LASTTREATEDVERSIONID<=:MAXVERSIONID)
   7 - filter("SERIAL#"=1999999999)
  10 - filter("SERIAL#"=1999999998)
----------------------------------------------

 

In order to fix this, we will create and enable an outline that should help the optimizer to always choose the best plan:

 BEGIN
      DBMS_OUTLN.create_outline(hash_value    =>524544016,child_number  => 0);
    END;
  /

PL/SQL procedure successfully completed.

SQL>
SQL> alter system set use_stored_outlines=TRUE;

System altered.

SQL> create or replace trigger trig_start_out after startup on database
  2  begin
  3  execute immediate 'alter system set use_stored_outlines=TRUE';
  4  end;
  5  /

Trigger created.

As the parameter USE_STORED_OUTLINES is a 'pseudo' parameter, it is not persistent across restarts of the instance; for that reason we had to create this startup trigger on the database.

Now we can check if the outline is used:
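
The output below presumably comes from a query against the DBA_OUTLINES view mentioned at the beginning of this post; a minimal sketch of such a check run from the shell:

sqlplus -s / as sysdba <<'EOF'
set lines 120
column name format a30
column owner format a15
column category format a15
select name, owner, category, used from dba_outlines;
EOF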

NAME                           OWNER                          CATEGORY                       USED
------------------------------ ------------------------------ ------------------------------ ------
SYS_OUTLINE_18092409295665701  TEST                         DEFAULT                        USED

And also check that the outline is taken into account in the execution plan:

SQL> select * from table(dbms_xplan.display_cursor('574gkc8gn7u0h',0));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------

SQL_ID  574gkc8gn7u0h, child number 0
-------------------------------------
Select   m.msg_message_id,   m.VersionId,   m.Knoten_id,
m.Poly_User_Id,   m.State,   'U' as MutationsCode from
msg_message_entry m where   m.VersionId between :LastTreatedVersionId
...................

Plan hash value: 4013416232

-------------------------------------------------------------------------------------------------------------
| Id  | Operation                      | Name                       | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |                            |       |       |    15 (100)|       |
|   1 |  UNION-ALL                     |                            |       |       |            |       |
|*  2 |   FILTER                       |                            |       |       |            |       |
|   3 |    NESTED LOOPS                |                            |       |       |            |       |
|   4 |     NESTED LOOPS               |                            |     1 |    76 |     7  (15)| 00:00:01 |
|   5 |      MERGE JOIN CARTESIAN      |                            |     1 |    52 |     5  (20)| 00:00:01 |
|   6 |       SORT UNIQUE              |                            |     1 |    26 |     2   (0)| 00:00:01 |
|*  7 |        TABLE ACCESS FULL       | TMPINTLIST                 |     1 |    26 |     2   (0)| 00:00:01 |
|   8 |       BUFFER SORT              |                            |     1 |    26 |     3  (34)| 00:00:01 |
|   9 |        SORT UNIQUE             |                            |     1 |    26 |     2   (0)| 00:00:01 |
|* 10 |         TABLE ACCESS FULL      | TMPINTLIST                 |     1 |    26 |     2   (0)| 00:00:01 |
|* 11 |      INDEX RANGE SCAN          | XAK2_MSG_MESSAGE_ENTRY     |     1 |       |     1   (0)| 00:00:01 |
|* 12 |     TABLE ACCESS BY INDEX ROWID| MSG_MESSAGE_ENTRY          |     1 |    24 |     2   (0)| 00:00:01 |
|* 13 |   FILTER                       |                            |       |       |            |       |
|  14 |    NESTED LOOPS                |                            |       |       |            |       |
|  15 |     NESTED LOOPS               |                            |     1 |    76 |     8  (13)| 00:00:01 |
|  16 |      MERGE JOIN CARTESIAN      |                            |     1 |    52 |     5  (20)| 00:00:01 |
|  17 |       SORT UNIQUE              |                            |     1 |    26 |     2   (0)| 00:00:01 |
|* 18 |        TABLE ACCESS FULL       | TMPINTLIST                 |     1 |    26 |     2   (0)| 00:00:01 |
|  19 |       BUFFER SORT              |                            |     1 |    26 |     3  (34)| 00:00:01 |
|  20 |        SORT UNIQUE             |                            |     1 |    26 |     2   (0)| 00:00:01 |
|* 21 |         TABLE ACCESS FULL      | TMPINTLIST                 |     1 |    26 |     2   (0)| 00:00:01 |
|* 22 |      INDEX RANGE SCAN          | XAK3_MSG_MESSAGE_ENTRY_DEL |     1 |       |     2   (0)| 00:00:01 |
|  23 |     TABLE ACCESS BY INDEX ROWID| MSG_MESSAGE_ENTRY_DEL      |     1 |    24 |     3   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(:LASTTREATEDVERSIONID<=:MAXVERSIONID)
   7 - filter("SERIAL#"=1999999999)
  10 - filter("SERIAL#"=1999999998)
  
Note
-----
   - outline "SYS_OUTLINE_18092409295665701" used for this statement

To use stored outlines when Oracle compiles a SQL statement, we need to enable them by setting the system parameter USE_STORED_OUTLINES to TRUE or to a category name. This parameter can also be set at the session level.
By setting this parameter to TRUE, the default category under which the outlines are created is DEFAULT.
If you prefer to specify a category when creating the outline, Oracle will use this outline category until you provide another category value or disable the usage of outlines by setting USE_STORED_OUTLINES to FALSE.
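
As a hedged sketch (the category name APP_FIXES is an assumption), creating the outline directly into a named category and enabling that category would look like this:

sqlplus -s / as sysdba <<'EOF'
-- Create the outline in a dedicated category instead of DEFAULT
exec DBMS_OUTLN.create_outline(hash_value => 524544016, child_number => 0, category => 'APP_FIXES');
-- Enable only the outlines of that category
alter system set use_stored_outlines = APP_FIXES;
EOF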

 

The article SQL Plan stability in 11G using stored outlines appeared first on Blog dbi services.

Oracle 18c : Active Data Guard and AWR Reports


Since Oracle Database 12c Release 2 (12.2), Automatic Workload Repository (AWR) data can be captured for Active Data Guard (ADG) standby databases. This feature enables analyzing any performance-related issues for ADG standby databases
AWR snapshots for ADG standby databases are called remote snapshots. A database node, called destination, is responsible for storing snapshots that are collected from remote ADG standby database nodes, called sources.
Capturing AWR data for Active Data Guard requires certain steps that I describe here. I am using Oracle 18c with the following configuration:

DGMGRL> show configuration

Configuration - NONC18C_DR

  Protection Mode: MaxPerformance
  Members:
  NONC18C_SITE1 - Primary database
    NONC18C_SITE2 - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 30 seconds ago)

DGMGRL>

The primary is opened in Read Write mode and the standby in Read Only Mode With Apply

SQL> select db_unique_name,open_mode, database_role from v$database;

DB_UNIQUE_NAME                 OPEN_MODE            DATABASE_ROLE
------------------------------ -------------------- ----------------
NONC18C_SITE2                  READ ONLY WITH APPLY PHYSICAL STANDBY

SQL> select db_unique_name,open_mode, database_role from v$database;

DB_UNIQUE_NAME                 OPEN_MODE            DATABASE_ROLE
------------------------------ -------------------- ----------------
NONC18C_SITE1                  READ WRITE           PRIMARY

SQL>

The feature uses the Remote Management Framework (RMF), which comes with a new Oracle built-in user called SYS$UMF. This user is locked by default and should be unlocked before configuring the RMF.

SQL> select username,common,account_status from dba_users where username like 'SYS%';

USERNAME                       COM ACCOUNT_STATUS
------------------------------ --- --------------------------------
SYS                            YES OPEN
SYSTEM                         YES OPEN
SYSBACKUP                      YES EXPIRED & LOCKED
SYSRAC                         YES EXPIRED & LOCKED
SYSKM                          YES EXPIRED & LOCKED
SYS$UMF                        YES EXPIRED & LOCKED
SYSDG                          YES EXPIRED & LOCKED

7 rows selected.

SQL> alter user sys$umf identified by root account unlock;

User altered.

For the configuration we need, in our case, 2 database links. Indeed, each source must have two database links: a destination-to-source database link and a source-to-destination database link. So, connecting on the primary, let's create the 2 database links:
=> prima_to_stand: from the primary to the standby
=> stand_to_prima: from the standby to the primary

SQL> create database link prima_to_stand CONNECT TO sys$umf IDENTIFIED BY root using 'STBY_NONC';

Database link created.

SQL> create database link stand_to_prima CONNECT TO sys$umf IDENTIFIED BY root using 'PRIMA_NONC';

Database link created.

SQL>
SQL> select * from dual@prima_to_stand;

D
-
X

SQL> select * from dual@stand_to_prima;

D
-
X

SQL>

The RMF topology is a centralized architecture that consists of all the participating database nodes along with their metadata and connection information. So let's configure the nodes. We will call them “site_prim” for the primary and “site_stby” for the standby.
While connected to the primary we execute:

SQL> exec dbms_umf.configure_node ('site_prim');

PL/SQL procedure successfully completed.

SQL>

On the standby side we do the same but here we give the database link. Be sure that the database links were created in the right direction, otherwise you will get errors later.

SQL> exec dbms_umf.configure_node('site_stby','stand_to_prima');

PL/SQL procedure successfully completed.

SQL>

And now from the primary, we can then create the RMF topology.

SQL> exec DBMS_UMF.create_topology ('Topology_1');

PL/SQL procedure successfully completed.

SQL>

To verify the status of the configuration, we can use the following UMF views on the primary:

SQL> select * from dba_umf_topology;

TOPOLOGY_NAME    TARGET_ID TOPOLOGY_VERSION TOPOLOGY
--------------- ---------- ---------------- --------
Topology_1      1530523744                1 ACTIVE

SQL> select * from dba_umf_registration;

TOPOLOGY_NAME   NODE_NAME          NODE_ID  NODE_TYPE AS_SO AS_CA STATE
--------------- --------------- ---------- ---------- ----- ----- --------------------
Topology_1      site_prim       1530523744          0 FALSE FALSE OK

SQL>

Everything seems fine, so we can register the standby in the topology. On the primary, let's execute the register_node procedure.

SQL> exec DBMS_UMF.register_node ('Topology_1', 'site_stby', 'prima_to_stand', 'stand_to_prima', 'FALSE', 'FALSE');

PL/SQL procedure successfully completed.

SQL>

If we do not have errors then we can enable the AWR service.

SQL> exec DBMS_WORKLOAD_REPOSITORY.register_remote_database(node_name=>'site_stby');

PL/SQL procedure successfully completed.

SQL>

Using UMF views, we can again verify our configuration.

SQL> select * from dba_umf_topology;

TOPOLOGY_NAME    TARGET_ID TOPOLOGY_VERSION TOPOLOGY
--------------- ---------- ---------------- --------
Topology_1      1530523744                4 ACTIVE

SQL> select * from dba_umf_registration;

TOPOLOGY_NAME   NODE_NAME          NODE_ID  NODE_TYPE AS_SO AS_CA STATE
--------------- --------------- ---------- ---------- ----- ----- --------------------
Topology_1      site_prim       1530523744          0 FALSE FALSE OK
Topology_1      site_stby       3265600723          0 FALSE FALSE OK

SQL> select * from dba_umf_service;

TOPOLOGY_NAME      NODE_ID SERVICE
--------------- ---------- -------
Topology_1      3265600723 AWR

SQL>

SQL> select * from dba_umf_link;

TOPOLOGY_NAME   FROM_NODE_ID TO_NODE_ID LINK_NAME
--------------- ------------ ---------- --------------------
Topology_1        1530523744 3265600723 PRIMA_TO_STAND
Topology_1        3265600723 1530523744 STAND_TO_PRIMA

SQL>

It's now time to generate remote snapshots for the standby, while connected to the primary. At least two snapshots are required to be able to generate an AWR report.

SQL> set time on
16:01:22 SQL> exec dbms_workload_repository.create_remote_snapshot('site_stby');

PL/SQL procedure successfully completed.

16:21:41 SQL> exec dbms_workload_repository.create_remote_snapshot('site_stby');

PL/SQL procedure successfully completed.

16:21:50 SQL>
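
The remote snapshots stored on the destination (the primary) can presumably be listed by querying DBA_HIST_SNAPSHOT for the remote DBID; a minimal sketch:

sqlplus -s / as sysdba <<'EOF'
-- List the remote snapshots stored for the standby (DBID as reported by awrrpti below)
select snap_id, dbid, to_char(begin_interval_time, 'DD-MON-YY HH24:MI') as begin_time
from   dba_hist_snapshot
where  dbid = 3265600723
order  by snap_id;
EOF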

And we can generate the report as we usually do

SQL> @?/rdbms/admin/awrrpti.sql

Specify the Report Type
~~~~~~~~~~~~~~~~~~~~~~~
AWR reports can be generated in the following formats.  Please enter the
name of the format at the prompt. Default value is 'html'.

   'html'          HTML format (default)
   'text'          Text format
   'active-html'   Includes Performance Hub active report

Enter value for report_type: text



Type Specified: text


Instances in this Workload Repository schema
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  DB Id      Inst Num   DB Name      Instance     Host
------------ ---------- ---------    ----------   ------
  2315634502     1      NONC18C      NONC18C      standserver1
  3265600723     1      NONC18C      NONC18C      standserver1
* 2315634502     1      NONC18C      NONC18C      primaserver.

Enter value for dbid: 3265600723
Using 3265600723 for database Id
Enter value for inst_num: 1
Using 1 for instance number


Specify the number of days of snapshots to choose from
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Entering the number of days (n) will result in the most recent
(n) days of snapshots being listed. Pressing <return> without
specifying a number lists all completed snapshots.


Enter value for num_days:

Listing all Completed Snapshots
Instance     DB Name      Snap Id       Snap Started    Snap Level
------------ ------------ ---------- ------------------ ----------

NONC18C      NONC18C              1  24 Sep 2018 16:01    1
                                  2  24 Sep 2018 16:21    1


Specify the Begin and End Snapshot Ids
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Enter value for begin_snap: 1
Begin Snapshot Id specified: 1

Enter value for end_snap: 2
End   Snapshot Id specified: 2



Specify the Report Name
~~~~~~~~~~~~~~~~~~~~~~~
The default report file name is awrrpt_1_1_2.txt.  To use this name,
press <return> to continue, otherwise enter an alternative.

Enter value for report_name:

And viewing the generated report, we can see that the database role is PHYSICAL STANDBY

WORKLOAD REPOSITORY report for

DB Name         DB Id    Unique Name DB Role          Edition Release    RAC CDB
------------ ----------- ----------- ---------------- ------- ---------- --- ---
NONC18C       3265600723 NONC18C_SIT PHYSICAL STANDBY   EE      18.0.0.0.0 NO  NO

Instance     Inst Num Startup Time
------------ -------- ---------------
NONC18C             1 24-Sep-18 15:49

Host Name        Platform                         CPUs Cores Sockets Memory(GB)
---------------- -------------------------------- ---- ----- ------- ----------
standserver1.loc Linux x86 64-bit                    1                     2.96

Conclusion
In this blog we have shown how to use UMF to generate AWR reports on an Active Data Guard instance. The UMF framework uses the DBMS_UMF package, which has many subprograms. The note How to Generate AWRs in Active Data Guard Standby Databases (Doc ID 2409808.1) and the Oracle documentation will help.

 

The article Oracle 18c : Active Data Guard and AWR Reports appeared first on Blog dbi services.


Documentum – Checking warnings&errors from an xPlore full re-index


When working with xPlore as a Full Text Server (indexing), there are a few ways to perform a full re-index. You can potentially do it from the IndexAgent UI, from the Dsearch UI, from the file system (with an ids.txt file for example; it is usually for a “small” number of r_object_ids, so that’s probably not an ideal way) or from the docbase (mass-queue, which isn’t really a good way to do it either). Performing a full re-index from the xPlore Server directly is faster because you remove a few layers where the Content Server asks for an index (the index queues) and expects an answer/result. That’s why in this blog I will only talk about the full re-index performed from the xPlore Server directly, and below I will use a full re-index from the IndexAgent UI. In each of these cases, there might be a few warnings or errors along the re-index, some of which might be normal (password-protected file) and some others not (a timeout because xPlore is heavily loaded).

The whole purpose of this blog is to show you how you can check these warnings/errors, because there is no information about them directly displayed on the UI: you need to go find that information manually. These warnings/errors aren’t shown in the index queues since they weren’t triggered from the docbase but from the xPlore Server directly.

So first of all, you need to trigger a re-index using the IndexAgent:

  • Open the IndexAgent UI (https://<hostname>:<ia_port>/IndexAgent)
  • Login with the installation owner’s account
  • Stop the IndexAgent if it is currently running in Normal mode and then launch a re-index operation

It should look like that (for xPlore 1.6):
IA1

On the above screenshot, the green represents the success count and the blue is for the filtered count. Once completed and as shown above, you might have a few warnings/errors but you don’t have any information about them as I mentioned previously. To narrow down and facilitate the check of the warnings/errors, you need to know (approximately) the start and end time of the re-index operation: 2018-06-12 11:55 UTC to 2018-06-12 12:05 UTC for the above example. From that point, the analysis of the warnings/errors can be done in two main ways:

 

1. Using the Dsearch Admin

I will start with the way that most of you probably already know: use the Dsearch reports to see the errors/warnings. That’s not the fastest way, clearly not the funniest way either but it is an easy way for sure…

Accessing the reports from the Dsearch Admin:

  • Open the Dsearch Admin UI (https://<hostname>:<ds_port>/dsearchadmin)
  • Login with the admin account (or any other valid account with xPlore 1.6+)
  • Navigate to: Home > Diagnostic and Utilities > Reports
  • Select the “Document Processing Error Summary” report and set the following:
    • Start from: 2018-06-12 11:55
    • To: 2018-06-12 12:05
    • Domain name (optional): leave empty if you only have one IndexAgent, otherwise you can specify the domain name (usually the same name as the docbase)
  • Click on Run to get the report

At this point, you will have a report with the number of warnings/errors per type, meaning that you do not have any information about the documents yet, you only know the number of errors for each of the pre-defined error types (=error code). For the above example, I had 8 warnings once the re-index was completed and I could see them all (seven warnings for ‘777’ and one warning for ‘770’):
IA2

Based on the information from this “Document Processing Error Summary” report, you can go deeper and find the details about the documents, but you can only do it for one type, one Error Code, at a time. Therefore, you will have to loop on all Error Codes returned:

  • For each Error Code:
    • Select the “Document Processing Error Detail” report and set the following:
      • Start from: 2018-06-12 11:55
      • To: 2018-06-12 12:05
      • Domain name (optional): leave empty if you only have 1 IndexAgent, otherwise you can specify the domain name (usually the same name as the docbase)
      • Processing Error Code: Select the Error Code you want to see (either 777 or 770 in my case)
      • Number of Results to Display: Set here the number of items you want to display, 10, 20, …
    • Click on Run to get the report

And there you finally have the details about the documents with warnings/errors that weren’t indexed properly because of the Error Code you chose. In my case, I selected 770 so I have only 1 document:
IA3

You can export this list to Excel if you want, to do some processing on these items for example, but you will need to do it for all Error Codes and then merge the results.

 

2. Using the logs

In the above example, I used the IndexAgent to perform the re-index so I will use the IndexAgent logs to find what happened exactly. This section is really the main purpose of this blog because I assume that most people are using the Dsearch Admin reports already but probably not the logs! If you want to script the check of warnings/errors after a re-index, or just if you want to play and have fun while doing your job, then this is what you need ;).

So let’s start simple: listing all errors and warnings and keeping only the lines that contain an r_object_id.

[xplore@full_text_server_01 ~]$ cd $JBOSS_HOME/server/DctmServer_Indexagent_DocBase1/logs/
[xplore@full_text_server_01 logs]$
[xplore@full_text_server_01 logs]$ echo; egrep -i "err|warn" Indexagent_*.log* \
                                   | egrep --color "[ (<][0-9a-z]{16}[>) ]"

Indexagent_DocBase1.log:2018-06-12 11:55:26,456 WARN PrepWorkItem [full_text_server_01_9200_IndexAgent-full_text_server_01.dbi-services.com-1-full_text_server_01.dbi-services.com-StatusUpdater][DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f12345007f40e message: DOCUMENT_WARNING CPS Warning [Corrupt file].
Indexagent_DocBase1.log:2018-06-12 12:01:00,752 WARN PrepWorkItem [full_text_server_01_9260_IndexAgent-full_text_server_01.dbi-services.com-1-full_text_server_01.dbi-services.com-StatusUpdater][DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa97 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
Indexagent_DocBase1.log:2018-06-12 12:01:00,752 WARN PrepWorkItem [full_text_server_01_9260_IndexAgent-full_text_server_01.dbi-services.com-1-full_text_server_01.dbi-services.com-StatusUpdater][DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa98 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
Indexagent_DocBase1.log:2018-06-12 12:01:00,754 WARN PrepWorkItem [full_text_server_01_9260_IndexAgent-full_text_server_01.dbi-services.com-1-full_text_server_01.dbi-services.com-StatusUpdater][DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aa9f6 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
Indexagent_DocBase1.log:2018-06-12 12:01:00,754 WARN PrepWorkItem [full_text_server_01_9260_IndexAgent-full_text_server_01.dbi-services.com-1-full_text_server_01.dbi-services.com-StatusUpdater][DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa9a message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
Indexagent_DocBase1.log:2018-06-12 12:01:01,038 WARN PrepWorkItem [full_text_server_01_9260_IndexAgent-full_text_server_01.dbi-services.com-1-full_text_server_01.dbi-services.com-StatusUpdater][DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa99 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
Indexagent_DocBase1.log:2018-06-12 12:01:01,038 WARN PrepWorkItem [full_text_server_01_9260_IndexAgent-full_text_server_01.dbi-services.com-1-full_text_server_01.dbi-services.com-StatusUpdater][DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa9b message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
Indexagent_DocBase1.log:2018-06-12 12:01:01,038 WARN PrepWorkItem [full_text_server_01_9260_IndexAgent-full_text_server_01.dbi-services.com-1-full_text_server_01.dbi-services.com-StatusUpdater][DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa9d message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
Indexagent_DocBase1.log:2018-06-12 12:01:27,518 INFO ReindexBatch [Worker:Finalization Action:#6][DM_INDEX_AGENT_REINDEX_BATCH] Updating queue item 1b0f1234501327f0 with message= Incomplete batch. From a total of 45, 44 done, 0 filtered, 0 errors, and 8 warnings.
[xplore@full_text_server_01 logs]$

 

As you can see above, there is also one queue item (1b0f1234501327f0) listed because I kept everything that is 16 characters long with 0-9 or a-z. If you want, you can instead select only the r_object_ids starting with 09 to have all dm_documents (using this: “[ (<]09[0-9a-z]{14}[>) ]”, as shown below) or you can just remove the r_object_ids starting with 1b, which are the queue items.
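
For example, keeping only the dm_document ids is the same command as the first one above, simply with the tighter regex (output not shown):

[xplore@full_text_server_01 logs]$ echo; egrep -i "err|warn" Indexagent_*.log* \
                                   | egrep --color "[ (<]09[0-9a-z]{14}[>) ]"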

In the above example, all the results are in the timeframe I expected them to be but it is possible that there are older or newer warnings/errors so you might want to apply another filter with the date. Since I want everything from 11:55 to 12:05 on the 12-Jun-2018, this is how I can do it (and removing the log file name too) using a time regex:

[xplore@full_text_server_01 logs]$ time_regex="2018-06-12 11:5[5-9]|2018-06-12 12:0[0-5]"
[xplore@full_text_server_01 logs]$ echo; egrep -i "err|warn" Indexagent_*.log* \
                                   | sed 's,^[^:]*:,,' \
                                   | egrep "${time_regex}" \
                                   | egrep --color "[ (<][0-9a-z]{16}[>) ]"

2018-06-12 11:55:26,456 WARN PrepWorkItem [full_text_server_01_9200_IndexAgent-full_text_server_01.dbi-services.com-1-full_text_server_01.dbi-services.com-StatusUpdater][DM_INDEX_AGNT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f12345007f40e message: DOCUMENT_WARNING CPS Warning [Corrupt file].
2018-06-12 12:01:00,752 WARN PrepWorkItem [full_text_server_01_9260_IndexAgent-full_text_server_01.dbi-services.com-1-full_text_server_01.dbi-services.com-StatusUpdater][DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa97 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
2018-06-12 12:01:00,752 WARN PrepWorkItem [full_text_server_01_9260_IndexAgent-full_text_server_01.dbi-services.com-1-full_text_server_01.dbi-services.com-StatusUpdater][DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa98 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
2018-06-12 12:01:00,754 WARN PrepWorkItem [full_text_server_01_9260_IndexAgent-full_text_server_01.dbi-services.com-1-full_text_server_01.dbi-services.com-StatusUpdater][DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aa9f6 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
2018-06-12 12:01:00,754 WARN PrepWorkItem [full_text_server_01_9260_IndexAgent-full_text_server_01.dbi-services.com-1-full_text_server_01.dbi-services.com-StatusUpdater][DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa9a message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
2018-06-12 12:01:01,038 WARN PrepWorkItem [full_text_server_01_9260_IndexAgent-full_text_server_01.dbi-services.com-1-full_text_server_01.dbi-services.com-StatusUpdater][DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa99 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
2018-06-12 12:01:01,038 WARN PrepWorkItem [full_text_server_01_9260_IndexAgent-full_text_server_01.dbi-services.com-1-full_text_server_01.dbi-services.com-StatusUpdater][DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa9b message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
2018-06-12 12:01:01,038 WARN PrepWorkItem [full_text_server_01_9260_IndexAgent-full_text_server_01.dbi-services.com-1-full_text_server_01.dbi-services.com-StatusUpdater][DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa9d message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
2018-06-12 12:01:27,518 INFO ReindexBatch [Worker:Finalization Action:#6][DM_INDEX_AGENT_REINDEX_BATCH] Updating queue item 1b0f1234501327f0 with message= Incomplete batch. From a total of 45, 44 done, 0 filtered, 0 errors, and 8 warnings.
[xplore@full_text_server_01 logs]$

 

Listing only the messages for each of these warnings/errors:

[xplore@full_text_server_01 logs]$ echo; egrep -i "err|warn" Indexagent_*.log* \
                                   | sed 's,^[^:]*:,,' \
                                   | egrep "${time_regex}" \
                                   | egrep "[ (<][0-9a-z]{16}[>) ]" \
                                   | sed 's,^[^]]*],,' \
                                   | sort -u

[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f12345007f40e message: DOCUMENT_WARNING CPS Warning [Corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aa9f6 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa97 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa98 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa99 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa9a message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa9b message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa9d message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_REINDEX_BATCH] Updating queue item 1b0f1234501327f0 with message= Incomplete batch. From a total of 45, 44 done, 0 filtered, 0 errors, and 1 warnings.
[xplore@full_text_server_01 logs]$

 

Listing only the r_object_id (to resubmit them via the ids.txt for example):

[xplore@full_text_server_01 logs]$ echo; egrep -i "err|warn" Indexagent_*.log* \
                                   | sed 's,^[^:]*:,,' \
                                   | egrep "${time_regex}" \
                                   | egrep "[ (<][0-9a-z]{16}[>) ]" \
                                   | sed 's,.*[ (<]\([0-9a-z]\{16\}\)[>) ].*,\1,' \
                                   | sort -u \
                                   | grep -v "^1b"

090f12345007f40e
090f1234500aa9f6
090f1234500aaa97
090f1234500aaa98
090f1234500aaa99
090f1234500aaa9a
090f1234500aaa9b
090f1234500aaa9d
[xplore@full_text_server_01 logs]$

 

If you want to generate the iapi commands to resubmit them all:

[xplore@full_text_server_01 logs]$ echo; egrep -i "err|warn" Indexagent_*.log* \
                                   | sed 's,^[^:]*:,,' \
                                   | egrep "${time_regex}" \
                                   | egrep "[ (<][0-9a-z]{16}[>) ]" \
                                   | sed 's,.*[ (<]\([0-9a-z]\{16\}\)[>) ].*,\1,' \
                                   | sort -u \
                                   | grep -v "^1b" \
                                   | sed 's/.*/queue,c,&,dm_fulltext_index_user/'

queue,c,090f12345007f40e,dm_fulltext_index_user
queue,c,090f1234500aa9f6,dm_fulltext_index_user
queue,c,090f1234500aaa97,dm_fulltext_index_user
queue,c,090f1234500aaa98,dm_fulltext_index_user
queue,c,090f1234500aaa99,dm_fulltext_index_user
queue,c,090f1234500aaa9a,dm_fulltext_index_user
queue,c,090f1234500aaa9b,dm_fulltext_index_user
queue,c,090f1234500aaa9d,dm_fulltext_index_user
[xplore@full_text_server_01 logs]$

 

Finally, to group the warnings/errors per types:

[xplore@full_text_server_01 logs]$ echo; IFS=$'\n'; \
                                   for type in `egrep -i "err|warn" Indexagent_*.log* \
                                     | sed 's,^[^:]*:,,' \
                                     | egrep "${time_regex}" \
                                     | egrep "[ (<][0-9a-z]{16}[>) ]" \
                                     | sed 's,^[^]]*],,' \
                                     | sort -u \
                                     | sed 's,.*\(\[[^\[]*\]\).*,\1,' \
                                     | sort -u`;
                                   do
                                     echo "  --  Listing warnings/errors with the following messages: ${type}";
                                     egrep -i "err|warn" Indexagent_*.log* \
                                       | sed 's,^[^:]*:,,' \
                                       | egrep "${time_regex}" \
                                       | egrep "[ (<][0-9a-z]{16}[>) ]" \
                                       | sed 's,^[^]]*],,' \
                                       | sort -u \
                                       | grep -F "${type}";
                                     echo;
                                   done

  --  Listing warnings/errors with the following messages: [Corrupt file]
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f12345007f40e message: DOCUMENT_WARNING CPS Warning [Corrupt file].

  --  Listing warnings/errors with the following messages: [DM_INDEX_AGENT_REINDEX_BATCH]
[DM_INDEX_AGENT_REINDEX_BATCH] Updating queue item 1b0f1234501327f0 with message= Incomplete batch. From a total of 45, 44 done, 0 filtered, 0 errors, and 1 warnings.

  --  Listing warnings/errors with the following messages: [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file]
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aa9f6 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa97 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa98 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa99 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa9a message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa9b message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa9d message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].

[xplore@full_text_server_01 logs]$
[xplore@full_text_server_01 logs]$ # Or to shorten a little bit the loop command:
[xplore@full_text_server_01 logs]$
[xplore@full_text_server_01 logs]$ command='egrep -i "err|warn" Indexagent_*.log* | sed "s,^[^:]*:,," |
                                   egrep "${time_regex}" |
                                   egrep "[ (<][0-9a-z]{16}[>) ]" |
                                   sed "s,^[^]]*],," |
                                   sort -u'
[xplore@full_text_server_01 logs]$
[xplore@full_text_server_01 logs]$ echo; IFS=$'\n'; \
                                   for type in `eval ${command} \
                                     | sed 's,.*\(\[[^\[]*\]\).*,\1,' \
                                     | sort -u`;
                                   do
                                     echo "  --  Listing warnings/errors with the following messages: ${type}";
                                     eval ${command} \
                                       | grep -F "${type}";
                                     echo;
                                   done

  --  Listing warnings/errors with the following messages: [Corrupt file]
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f12345007f40e message: DOCUMENT_WARNING CPS Warning [Corrupt file].

  --  Listing warnings/errors with the following messages: [DM_INDEX_AGENT_REINDEX_BATCH]
[DM_INDEX_AGENT_REINDEX_BATCH] Updating queue item 1b0f1234501327f0 with message= Incomplete batch. From a total of 45, 44 done, 0 filtered, 0 errors, and 1 warnings.

  --  Listing warnings/errors with the following messages: [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file]
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aa9f6 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa97 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa98 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa99 message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa9a message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa9b message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].
[DM_INDEX_AGENT_RECEIVED_FT_CALLBACK_WARN] Received warn callback: id: 090f1234500aaa9d message: DOCUMENT_WARNING CPS Warning [MIME-Type (application/vnd.openxmlformats-officedocument.wordprocessingml.), Unknown file format or corrupt file].

[xplore@full_text_server_01 logs]$

 

So the above was related to a very simple example where a full reindex took only a few minutes because it is a very small repository. But what about a full reindex that takes days because there are several million documents? Well, the truth is that checking the logs might actually surprise you because it is usually more accurate than checking the Dsearch Admin. Yes, I said more accurate!

 

3. Accuracy of the Dsearch Admin vs the Logs

Let’s take another example with a repository containing a few TB of documents. A full re-index took 2.5 days to complete and in the commands below, I will check the status of the indexing for the 1st day: from 2018-09-19 07:00:00 UTC to 2018-09-20 06:59:59 UTC. Here is what the Dsearch Admin is giving you:

IA4

So based on this, you would expect 1 230 + 63 + 51 = 1 344 warnings/errors. So what about the logs then? I included below the DM_INDEX_AGENT_REINDEX_BATCH lines, which are the “1b” object_ids (item_id) I was talking about earlier, but these aren’t document indexing issues, they are just batches:

[xplore@full_text_server_01 logs]$ time_regex="2018-09-19 0[7-9]|2018-09-19 [1-2][0-9]|2018-09-20 0[0-6]"
[xplore@full_text_server_01 logs]$ command='egrep -i "err|warn" Indexagent_*.log* | sed "s,^[^:]*:,," |
                                   egrep "${time_regex}" |
                                   egrep "[ (<][0-9a-z]{16}[>) ]" |
                                   sed "s,^[^]]*],," |
                                   sort -u'
[xplore@full_text_server_01 logs]$
[xplore@full_text_server_01 logs]$ echo; IFS=$'\n'; \
                                   for type in `eval ${command} \
                                     | sed 's,.*\(\[[^\[]*\]\).*,\1,' \
                                     | sort -u`;
                                   do
                                     echo "  --  Number of warnings/errors with the following messages: ${type}";
                                     eval ${command} \
                                       | grep -F "${type}" \
                                       | wc -l;
                                     echo;
                                   done

  --  Number of warnings/errors with the following messages: [Corrupt file]
51

  --  Number of warnings/errors with the following messages: [DM_INDEX_AGENT_REINDEX_BATCH]
293

  --  Number of warnings/errors with the following messages: [DM_STORAGE_E_BAD_TICKET]
7

  --  Number of warnings/errors with the following messages: [Password-protected or encrypted file]
63

  --  Number of warnings/errors with the following messages: [Unknown error during text extraction]
5

  --  Number of warnings/errors with the following messages: [Unknown error during text extraction(native code: 18, native msg: unknown error)]
1

  --  Number of warnings/errors with the following messages: [Unknown error during text extraction(native code: 257, native msg: handle is invalid)]
1053

  --  Number of warnings/errors with the following messages: [Unknown error during text extraction(native code: 30, native msg: out of memory)]
14

  --  Number of warnings/errors with the following messages: [Unknown error during text extraction(native code: 65534, native msg: unknown error)]
157

[xplore@full_text_server_01 logs]$

 

As you can see above, there is more granularity regarding the types of errors from the logs. Here are some key points in the comparison between the logs and the Dsearch Admin:

  1. In the Dsearch Admin, all messages that start with “Unknown error during text extraction” are considered as a single error type (N° 1023). Therefore, from the logs, you can add all of them: 5 + 1 + 1 053 + 14 + 157 = 1 230 to find the same number that was mentioned in the Dsearch Admin. You cannot separate them in the Dsearch Admin on the Error Summary report; it is only on the Error Details report that you will see the full message and can then separate them, kind of…
  2. You find exactly the same number of “Password-protected or encrypted file” (63) and “Corrupt file” (51) warnings in the logs and in the Dsearch Admin, so no difference here
  3. You can see 7 “DM_STORAGE_E_BAD_TICKET” warnings/errors in the logs but none in the Dsearch Admin… Why is that? That’s because the Dsearch Admin does not have any Error Code for them, so these errors aren’t shown! (a quick cross-check is shown just after this list)
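
As a quick cross-check of the totals, counting everything except the batch lines should add up to 1 351 here, i.e. the 1 344 reported by the Dsearch Admin plus the 7 DM_STORAGE_E_BAD_TICKET warnings it does not report (reusing the ${command} and ${time_regex} variables defined above):

[xplore@full_text_server_01 logs]$ eval ${command} | grep -vF "[DM_INDEX_AGENT_REINDEX_BATCH]" | wc -l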

So like I was saying at the beginning of this blog, using the Dsearch Admin is very easy but it’s not much fun and you might actually miss some information, while checking the logs is more fun and you are sure that you won’t miss anything (these 7 DM_STORAGE_E_BAD_TICKET errors for example)!

 

You could just as easily do the same thing in perl or using awk, that’s just a question of preference… Anyway, you got it: working with the logs allows you to do pretty much whatever you want, but it obviously requires some linux/scripting knowledge, while working with the Dsearch Admin is simple and easy, but you will have to work with what OTX gives you and with the restrictions that it has.

 

 

The article Documentum – Checking warnings&errors from an xPlore full re-index appeared first on Blog dbi services.

Adding a timeout in monitoring probes


A few months ago, as I was writing the documentation for a monitoring probe, I suddenly realized that that probe, along with others I had written during that time to monitor Documentum installations, all had a big, unexpected flaw. Indeed, it struck me that if it hung for some reason while running, it could stay there well after the next monitoring cycle had begun, which could be affected by the same problem too, and so on, until lots of such processes could be hanging and possibly hogging valuable repository sessions, causing their complete exhaustion with catastrophic consequences for the client applications. How ironic would it be if health checks actually endangered an application ?

A true story

I realized all this because it already happened once, years ago, at a different client’s. It was just after we migrated from Documentum content server v5.3 to v6.x. A big shift was introduced in that new version: the command-line tools iapi, idql, dmbasic and dmawk went java. More precisely, they switched from the native libdmcl40.so C library to the library libdmcl.so which calls the DfCs behind the scenes, with this sorcery made possible thanks to JNI. The front is still native code but all the Documentum stuff is henceforth delegated to the java DfCs.
What was the impact on those tools ? It was huge: all those tools that used to start in less than a second now took around 10 seconds or more to start because of all the bloatware initialization. We vaguely noticed it during the tests and supposed it was caused by a big load in that less powerful environment, so we went confidently to production one weekend.
The next Monday morning, panicked calls flooded the Help Desk; users were complaining that part of their applications did not work any more. A closer look at the application’s log showed that it had become impossible to open new sessions to some repositories. The process list on the server machine showed tens of documentum and idql processes running at once. Those idql processes were stuck instances of a monitoring probe that ran once per minute. Its job was just to connect to the target docbase, run a quick query and exit with a status. For some reason, it was probably waiting for a session, or idql was taking a lot more than the expected few seconds to do its job; therefore, the next monitoring cycle started before the previous one had completed and it too hung there, and so on until affected users became vocal. The real root cause was programmatic, since one developer had thought it was a good idea to periodically and too frequently connect to docbases from within Ajax code in the clients’ home page, without informing the docbases’ administrators of this new resource-hungry feature. This resulted in a saturation of the allowed sessions, stuck idql processes, weblogic threads waiting for a connection and, ultimately, application downtime.
Needless to say, the flashy Ajax feature was quickly removed, the number of allowed concurrent sessions was boosted up and we decided to keep around a copy of those fast, full binary v5.3 tools for low-level tasks such as our monitoring needs.
So let’s see how to protect the probes from themselves and from changing environments or well-meaning but ingenuous developers.

The requirements

1. If the monitoring cycles are tight, the probes shall obviously do very simple things; complex things can take time, and be fragile and buggy. Simple things complete quickly and are less subject to hazards.
2. As seen, unless the probe is started only once and runs constantly in the background, the probe’s interpreter shall start very quickly which excludes java code and its JVM; this also avoids recent issues such as the random number generator entropy that used to plague java programs for some time now and, I’m sarcastic but confident, the next ones still lurking around the corner. The interpreter that executes the probe shall be that of some well known scripting language such as the bash or ksh shells, python or perl with the needed binding to access the resource to be monitored, e.g. a Documentum repository, or some native binary tool that is part of the product to monitor, such as idql or sqlplus, launched by the shell, or even a custom compiled program.
3. While a probe is running, no other instance of it shall be allowed to start; i.e. the next instance shall not start until after the current one completes.
4. A probe shall only be allowed to execute during an allotted time; once this delay is elapsed, the probe shall be terminated manu militari with a distinct return status.
5. The probe’s cycles too shall be monitored, e.g. missing cycles should be reported.

Point 1 is easy to implement; e.g. to check the availability of a repository or a docbase, just try a connection to it and exit. If a more exhaustive test is required, a quick and simple query could be sent to the server. It all depends on how exhaustive we want to be. A SELECT query won’t be able to detect, say, unusable database indexes or indexes being rebuilt off-line, if it still completes within the allowed delay. Some neutral UPDATE could be attempted to detect those kinds of issues or, more straightforwardly yet, just query the state of the indexes. But whatever is monitored, let’s keep it quick and direct. The 3rd and 4th requirements can help detecting anomalies such as the preceding index problem (in an Oracle database, unusable indexes cause UPDATEs to hang, so timeout detection and forced termination are mandatory in such cases).

Point 2 is quite obvious: if the monitoring is so aggressive that it runs in one-minute cycles, the script that it executes shall complete in less than one minute; i.e. start time + execution time shall be less than the monitoring period, let’s say less than half that time to be safe. If the monitoring tools and script cannot keep up with the stringent timing, a different, more efficient approach shall be considered, unless the timing requirement is relaxed somewhat. For example, a reverse approach could be considered where, instead of pulling the status from a target, it’s the target that publishes its status, like a heartbeat; that would permit very tight monitoring cycles.

Point 3 requires a barrier to prevent the next cycle from starting. This does not need to be a fancy test-and-set semaphore because concurrency is practically nonexistent. A simple test of existence of a conventional file is enough. If the file exists, it means a cycle is in progress and the next cycle is not allowed in. If the file does not exist, create it and continue. There may be a race condition but it is unlikely to occur given that the monitoring cycles are quite widely spread apart, one minute at the minimum if defined in the crontab.

Point 4 means that a timer shall be set up upon starting the probe. This is easy to do from a shell, e.g. thanks to the “timeout” command. Some tools may have their own command-line option to run in batch mode within a timeout duration, which is even better. Nonetheless, an external timer offers a double protection and is still desirable.

Point 5: Obviously, this part is only possible from outside the probe. On some systems (e.g. nagios), the probe’s log file itself is monitored and, if not updated within some time interval, an alert is raised. This kind of passive or indirect heartbeat makes it possible to detect disabled or stuck probes, but doesn’t remove them. Resilience shall be auto-applied whenever possible in order to minimize human intervention. This check is useful to detect cases where the probe or the scheduler itself has been suspended abruptly or is no longer available on the file system (it can even happen that the file system itself has been unmounted by mistake or due to some technical problem or unscheduled intervention); a minimal sketch of such a freshness check is shown below.
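
Such a freshness check could be as simple as the following sketch (assumptions: the probe writes its status into /var/dctm/monitor_docbase_doctest as in the example probe further down, and a cycle runs every 5 minutes, so anything older than 2 cycles is suspect):

# alert if the probe's log file has not been refreshed for more than 2 cycles (10 minutes)
if [ -z "$(find /var/dctm/monitor_docbase_doctest -mmin -10 2>/dev/null)" ]; then
   echo "ALERT: the monitoring probe for doctest looks stuck, disabled or its log is missing"
fi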

An example

Let’s say that we want to monitor the availability of a docbase “doctest”. We propose to attempt a connection with idql as “dmadmin” from the server machine so trusted mode authentication is used and no password is needed. A response from the docbase shall arrive within 15s. The probe shall run with a periodicity of 5 minutes, i.e. 12 times per hour. Here is a no frills attempt:

#!/bin/bash

BARRIER=/tmp/sentinel_file
DOCBASE=doctest
LOG_FILE=/var/dctm/monitor_docbase_${DOCBASE}
TIMEOUT=15s
export DOCUMENTUM=/u01/app/documentum53/product/5.3

if [ -f $BARRIER ]; then
   echo "WARNING: previous $DOCBASE monitoring cycle still running" > $LOG_FILE
   exit 100
fi
touch $BARRIER
if [ $? -ne 0 ]; then
   echo "FATAL: monitoring of $DOCBASE failed while touch-ing barrier $BARRIER" > $LOG_FILE
   exit 101
fi

timeout $TIMEOUT $DOCUMENTUM/bin/idql $DOCBASE -Udmadmin -Pxx 2>&1 > /dev/null <<EoQ
   select * from dm_server_config;
   go
   quit
EoQ
rc=$?
if [ $rc -eq 124 ]; then
   echo "FATAL: monitoring of $DOCBASE failed in timeout of $TIMEOUT" > $LOG_FILE
elif [ $rc -eq 1 ]; then
  echo "FATAL: connection to $DOCBASE was unsuccessful"               > $LOG_FILE
else
   echo "OK: connection to $DOCBASE was successful"                   > $LOG_FILE
fi

rm $BARRIER
exit $rc

Line 3: the barrier is an empty file whose existence or inexistence simulates the state of the barrier; if the file exists, then the barrier is down and the access is forbidden; if the file does not exist, then the barrier is up and the access is allowed;
Line 7: we use the full native, DMCL-based idql utility for a quick start up;
Line 9: the barrier is tested by checking the file’s existence as written above; if the file already exists, it means that an older monitoring cycle is still running, so the new cycle aborts and returns an error message and an exit code;
Line 13: the barrier has been lowered to prevent the next cycle from executing the probe;
Line 19: the idql command is launched and monitored by the command timeout with a duration of $TIMEOUT;
Line 24: the timeout command’s return status is tested; if it is 124 (line 25), it means a timeout has occurred; the probe aborts with an appropriate error message; otherwise, it’s the command’s error code: if it is 1, idql could not connect; if it is 0, the connection was OK;
Lines 27 and 29: the connection attempt returned within the $TIMEOUT time interval, meaning the idql has a connection status;
Line 33: the barrier is removed so the next monitoring cycle has the green light;
Line 34: the exit code is returned; it should be 124 for timeout, 1 for no connection to the docbase, 0 if connection OK;

The timeout command belongs to the coreutils package so install that package through your linux distribution’s package manager if the command is missing.

If cron is used as a scheduler, the crontab entry could look like below (assuming the probe’s name is test-connection.sh):

0,5,10,15,20,25,30,35,40,45,50,55 * * * * /u01/app/documentum/monitoring/test-connection.sh 2>&1 > /dev/null

cron is sufficient most of the time, even though its time granularity is 1 minute.
The probe could be enhanced very easily in such a way that, once deployed, it optionally installs itself in dmadmin’s crontab, e.g.:

/u01/app/documentum/monitoring/test-connection.sh --install "0,5,10,15,20,25,30,35,40,45,50,55 * * * *"

for the maximum simplicity. But this is a different topic.
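
Without going into details, a minimal, hypothetical sketch of such an --install branch could look like this (the duplicate handling is an assumption, and $0 must be the script's absolute path for the crontab entry to be valid):

if [ "$1" = "--install" ]; then
   schedule="$2"
   # replace any previous entry for this script, then append the new one
   (crontab -l 2>/dev/null | grep -vF "$0"; echo "${schedule} $0 2>&1 > /dev/null") | crontab -
   exit 0
fi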

Some comments

On some large infrastructures, centralized scheduler and orchestration software may be in use (CTRL-M, Activeeon, Rundeck, Dkron, etc. Just check the web, they are plenty to shop for) which have their own management of rogue jobs and commands. Still, dedicated probes such as the preceding one have to be called but the timeout logic could be removed and externalized into the launcher. Better yet, only externalize the hard-coded timeout parameter so it is passed to the probe as a command-line parameter and the probe can still work independently from the launcher in use.
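For instance, in the probe above, the hard-coded timeout could be replaced by a positional parameter (a small sketch; the default value is kept at 15s):

# take the timeout from the 1st command-line argument, default to 15s when none is given
TIMEOUT=${1:-15s}

so that the launcher, or cron, can call test-connection.sh 30s without touching the script.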
Other systems use a centralized monitoring system (e.g. nagios, Icinga, BMC TrueSight, Cacti, etc. Again, search the web) but, whatever the software, be prepared to manually write probes because, unless the monitored product is as ubiquitous as apache, tomcat or mysql, it is unlikely to be supported out of the box, particularly specialized products such as Documentum.
Some of the above software need an agent process deployed and permanently running on each monitored or enrolled machine. There are pros and cons in this architecture but we won’t go there as the subject is out of scope.

Conclusion

Monitoring a target is interacting with it. Physics tells us that it is impossible to be totally invisible while observing but at least we can minimize the probes’ footprint. While it is impossible to anticipate every abnormality on a system, these few very simple guidelines can help a long way in making monitoring probes more robust, resilient and as unobtrusive as possible.

 

The article Adding a timeout in monitoring probes appeared first on Blog dbi services.

An SQLite extension for gawk (part I)


Quick: what is the most used database management system on our planet ? Oracle ? Wrong. SQL server ? Wrong again ! MySQL ? You’re almost there. It’s SQLite. Surprised ? I must confess that I was too. Actually, SQLite is special in that it is not the traditional 2-tiers client/server but one-tier and embedded, which means that it works as a library linked to an application. As such, it is used to fulfill the database needs of browsers, portable devices such as the iPods, iPhones, Android, etc… (see a short list of famous users here). Look also here for a succinct intro and here for a list of features. Here you’ll find distinctive features of SQLite and here common uses for it.
Let’s be clear from the start: although light, this is no toy software but a solid, time-proven, globally used, rigorously tested open-source product.
So, what is the relation with gawk ? Well, none. Until now. As you may know, gawk has had for some time now an easy way to be extended with useful libraries. I already talked about this in my previous blogs, e.g. here. In particular, it also has a binding for PostgreSQL (see here). So, I told myself, wouldn’t it be nice to make one for Oracle too ? Or for some key-value xDBMs such as Berkeley DB ? But fate decided otherwise: I had already used SQLite in the past as an almost no-installation SQL database and during the present research I landed fortuitously on SQLite’s web pages. I was immediately hooked. SQLite is petite (500 KiB in one unique source file for the whole RDBMS library and about 1 MB for the shared library object file, unbelievable !), easy to learn and use, and open-source. Actually, the SQLite creators propose such an unorthodox license agreement that I feel compelled to list it here:

May you do good and not evil
May you find forgiveness for yourself and forgive others
May you share freely, never taking more than you give.

So inspirational and quite a departure from the traditional, indigestible EULAs ! There is enough here to push the religion business to bankruptcy. And the lawyers. And make our sorry planet a paradise again.
In effect, the only data structure known to gawk is the associative array, or hashes in perl parlance, or dictionaries for pythonists. And they are entirely held in memory. I thought it would be a nice addition to gawk to be able to work with on-disk tables for those cases where huge amounts of textual data have to be processed in random order.
As this article is rather large, I’ve split it into 3 parts. Part I, the one you’re reading now, presents the requirements and proposes an API. Part II lists the extension code and shows how to compile and use it in gawk. Part III comments the code and describes a stress test for the interface and SQLite from within a gawk script. So, let’s see how I hammered that screw !

The Objectives

At the very minimum, the gawk interface to SQLite (for short sqlite_gawk henceforth) shall be able to open a database file (a SQLite database is entirely self-contained in a single file, I told you it’s light), send DML/DDL statements to it and run SELECT queries against it. The SELECT statement shall be able to present the result in a nice and flexible way to avoid messing up the screen with wrapped-around, illegible lines, i.e. the columns’ width shall be individually configurable and some truncation, with or without an ellipsis, or wrap-around be possible within those limits.
Also, as SQLite supports blobs, sqlite_gawk should be able to deal with them, i.e. insert, retrieve and display them if they are textual. SQLite comes with an interactive shell functionally akin to Oracle’s sqlplus and named sqlite3 which extends the collection of SQL functions (yes, SQLite allows that too). The SQL functions the shell adds are readfile() and writefile() to read the content of a file into a blob, respectively dump a blob into a file. We definitely want that functionality in the interface too.
Another requirement is that, along with displaying the retrieved rows on the screen, sqlite_select shall be able to store them into an integer-indexed gawk array of associative arrays. The goal is to permit further in-memory processing from within gawk.

The API

All the above requirements converge into the following API:

int do_sqlite_open(db_filename)
-- opens the database file db_filename;
-- in case of success, returns a non-negative integer which is a handle to the db;
-- returns -1 and sends to stderr an error message from SQLite if an error occurred;

int sqlite_close(db_handle)
-- closes the database with handle db_handle;
-- returns 0 on success and -1 plus an error message from SQLite to stderr in case of error;

int sqlite_exec(db_handle, sql_stmt)
-- sends the DML/DDL SQL statement sql_stmt to the database with db_handle handle;
-- returns -1 plus an error message from SQLite to stderr in case of error, a non-negative integer from SQLite if success;

int sqlite_select(db_handle, select_stmt [,"" | separator_string | list-of-columns-widths | , dummy, gawk_array])
-- sends the SELECT select_stmt statement with or without formatting options;
-- the output is either displayed or sent into a gawk array of associative arrays;

sqlite_exec takes INSERT, DELETE, UPDATE, i.e. all SQL statements different from SELECT. In addition, as said above, INSERT, UPDATE and SELECT also accept the SQL extensions readfile() and writefile() for blob I/Os from/to file. Here are a few examples of usage from gawk:

rc = sqlite_exec(my_db, "INSERT INTO test_with_blob(n1, my_blob) VALUES(1000, readfile('/home/dmadmin/setup_files/instantclient-basic-linux.x64-12.2.0.1.0.zip'))")

rc = sqlite_select(my_db, "SELECT n1, writefile('blob_2000.dmp', my_blob) FROM test_with_blob where n1 = 2000 limit 1")

rc = sqlite_exec(my_db, "UPDATE test_with_blob set my_blob = readfile('/home/dmadmin/dmgawk/gawk-4.2.1/extension/sqlite_gawk.c') where n1 = 1000")

sqlite_select()

As expected from functions that must produce human-readable output, this is the most feature rich, and complex, function of the interface:

int sqlite_select(db_handle, select_stmt [, "" | , "col_separator_string" | , "list-of-columns-formats" | , dummy, gawk_array])

This compact syntax can be split into the following five acceptations:

int sqlite_select(db_handle, select_stmt)
int sqlite_select(db_handle, select_stmt, "")
int sqlite_select(db_handle, select_stmt, "col_separator_string")
int sqlite_select(db_handle, select_stmt, "list-of-columns-formats")
int sqlite_select(db_handle, select_stmt, dummy, gawk_array)

The function sqlite_select is overloaded and takes from 2 to 4 parameters; when the arities are identical (variants 2 to 4), the format of the 3rd parameter makes the difference. Let’s see what service they provide.

int sqlite_select(db_handle, select_stmt)

The first select outputs its result as a table with fixed column-widths; those widths are from 8 to 15 characters wide. Its purpose is to give a quick overview of a query’s result without messing up the screen with long, wrapped around lines. Here is an example with fake data:

sqlite_select(my_db, 'SELECT * FROM test1')
n1       s1       s2
-------- -------- --------
100      hello1   hello...
200      hello2   hello...
300      hello3   hello...
400      hello4   hello...
400      hello... hello...
400      hello... hello...
6 rows selected
3 columns displayed

We can see that columns that are too wide are truncated and an ellipsis string (…) is appended to them to show this fact.
This quick overview terminates with a table of optimum column widths so that, if used later, screen width permitting, the columns can be displayed entirely without truncation.

Optimum column widths
=====================
for query: SELECT * FROM test1
n1 3
s1 18
s2 21

This variant is simple to use and handy for having a quick peek at the data and their display needs.
The 4th sqlite_select() variant lets us provide column formats. Here is an example of how to do that with the previous optimum column widths:

sqlite_select(my_db, 'SELECT * FROM test1', "3 18 21")
n1  s1                 s2
--- ------------------ ---------------------
100 hello1             hello0101
200 hello2             hello0102
300 hello3             hello0103
400 hello4             hello0104
400 hello5 with spaces hello0105 with spaces
400 hello6 with spaces hello0106
6 rows selected

The columns are now displayed without truncation.

int sqlite_select(db_handle, select_stmt, "")

The second variant takes an empty string as the 3rd parameter, which means “use | as a column separator”. Here is an example of output:

sqlite_select(my_db, 'SELECT * FROM test1', "")
n1|s1|s2
100|hello1|hello0101
200|hello2|hello0102
300|hello3|hello0103
400|hello4|hello0104
400|hello5 with spaces |hello0105 with spaces
400|hello6 with spaces |hello0106
6 rows selected

If such a default character is not appropriate, a more suitable string can be provided, which is the purpose of sqlite_select() 3rd variant, e.g. ‘||’ as shown below:

sqlite_select(my_db, 'SELECT * FROM test1', "||")
n1      ||s1      ||s2
--------||--------||--------
100     ||hello1  ||hello...
200     ||hello2  ||hello...
300     ||hello3  ||hello...
400     ||hello4  ||hello...
400     ||hello...||hello...
400     ||hello...||hello...
6 rows selected
3 columns displayed

Since this format is easy to parse, it is handy for exporting the data into a file and subsequently import them into a spreadsheet program.

int sqlite_select(db_handle, select_stmt, "list-of-columns-formats")

We’ve already seen the function’s 4th variant but there is more to the column formats.
The parameter “list-of-columns-formats” is a comma- or space-separated ordered list of numeric values, the column widths, one number for each column in the SELECT clause of that statement. If there are too many values, the superfluous ones are ignored. If there are fewer, the last one is extended to cover the missing values.
They can also end with one of t, e or w characters where t stands for t(runcation), e stands for e(llipsis) suffix if truncation and w stands for w(rap-around).
The minimal width is 3 characters if an ellipsis is requested to accommodate the suffix itself.
Here is an example of invocation and output:

sqlite_select(my_db, 'SELECT * FROM test1', "2e 15e 10w")
n1  s1              s2
--- --------------- ----------
100 hello1          hello0101
200 hello2          hello0102
300 hello3          hello0103
400 hello4          hello0104
400 hello5 with ... hello0105 
                    with space
                    s
400 hello6 with ... hello0106
400 hello6-with-... hello0106-
                    12345
7 rows selected

Here, we asked for the first column to be displayed in a 2-character wide field with truncation allowed and an ellipsis suffix; since the minimal width with an ellipsis is 3 characters, it is bumped up to 3. No truncation occurred in this column.
The second column should be displayed in a 15-character wide column also with ellipsis as suffix if truncation, which is visible here.
The third column is displayed in a 10-character wide column with wrapping-around.
This variant attempts to emulate some of the flexibility of Oracle sqlplus “col XX format YY” command.

int sqlite_select(db_handle, select_stmt, dummy, gawk_array)

Finally, the last acceptation below does not output anything on the screen but fills in a gawk array with the query’s result.
A dummy parameter has been introduced to allow resolving the overloaded function and invoking the expected code. It can be set to any value since this formal parameter is ignored or, as they say, is reserved for future use.
Here is an example of invocation:

sqlite_select(my_db, 'SELECT * FROM test1', 0, a_test)
6 rows selected

If we iterate and print the gawk array:

   for (row in a_test) {
      printf("row %d: ", row)
      for (col in a_test[row])
         printf("  %s = %s", col, a_test[row][col])
      printf "\n" 
   }
   printf "\n"

, we can guess its structure:

row 0: n1 = 100 s1 = hello1 s2 = hello0101
row 1: n1 = 200 s1 = hello2 s2 = hello0102
row 2: n1 = 300 s1 = hello3 s2 = hello0103
row 3: n1 = 400 s1 = hello4 s2 = hello0104
row 4: n1 = 400 s1 = hello5 with spaces s2 = hello0105 with spaces
row 5: n1 = 400 s1 = hello6 with spaces s2 = hello0106

As we can see, the array’s structure is the following (say the query returns a table of count rows and n columns):

array[0] = sub-array_0
array[1] = sub-array_1
...
array[count-1] = sub-array_(count-1)
where array is indexed by an integer and the sub-arrays are associative arrays with the following composition:
sub-array_0[col_0] = value_(0,0)
sub-array_0[col_1] = value_(0,1)
...
sub-array_0[col_(n-1)] = value_(0,n-1)
sub-array_1[col_0] = value_(1,0)
...
sub-array_1[col_(n-1)] = value_(1,n-1)
...
sub-array_(count-1)[col_0] = value_(count-1,0)
...
sub-array_(count-1)[col_(n-1)] = value_(count-1,n-1)

Said otherwise, the returned array is an integer-indexed array of associative arrays; its first dimension contains the rows and its second dimension contains the columns, i.e. it’s a table of database rows and value columns.
This feature is very interesting but comes with a limitation since all the data are held in memory. Maybe a future release of gawk will be able to transparently paginate gawk arrays to/from disk, providing a kind of virtual memory for them. Its implementation could even use SQLite. The problem boils down to mapping multi-dimensional associative arrays onto relational tables. Some performance degradation is expected, unless an efficient pagination mechanism is introduced. Documentum implemented the mapping partially for repeating attributes, i.e. flat arrays of values, with joins between _s and _r tables, but there is much more to it in gawk’s arrays. This would be a fascinating subject for another blog on its own, as one can imagine.
An implicit and free-of-charge benefit of this implementation would be persistence. Another would be serialization.
For the time being, if a query returns too many rows to be held in memory at once, it may be better to first print them into a file as delimited values (use variants 2 or 3 of sqlite_select for that, as described above) and later work on them sequentially from there, with as many passes as needed. Or use the SQLite shell and work on the data on the fly, e.g.:

cat - << EoQ | sqlite3 | gawk -v FS="|" '{
# do something, e.g.:
print "row", NR, $0
}'
.open my_db
.separator |
select * from test1;
.exit
EoQ
row 1 100|hello1|hello01001
row 2 200|hello2|hello01002
row 3 300|hello3|hello01003

How could Documentum benefit from SQLite?

Whereas it would not be realistic to use SQLite to store documents’ metadata as a replacement for a full two-tier RDBMS, it could still find a purpose within Documentum.
Documentum, like lots of other software, stores its configuration, such as the server.ini and the old dmcl.ini, in ini files, e.g.:

[SERVER_STARTUP]
docbase_id = 1000000
docbase_name = mydocbase
database_name = mydb
database_conn = dbconn
database_owner = dmadmin
database_password_file = /home/dmadmin/dba/mydocbase/dbpasswd.txt

[DOCBROKER_PROJECTION_TARGET]
host = myhost

[DOCBROKER_PROJECTION_TARGET_n]
#n can be 0-49
key=value

[FUNCTION_SPECIFIC_STORAGE]
#Oracle & DB2 only
key=value

[TYPE_SPECIFIC_STORAGE]
key=value
#Oracle & DB2 only

[FUNCTION_EXTENT_SIZE]
key=value
#Oracle only

[TYPE_EXTENT_SIZE]
key=value

There are also key-value files such as the dfc.properties.
By storing these data inside a SQLite table, in its own database for example (databases are so cheap in SQLite, just one file), common typos could be avoided. A classic error with ini files in particular consists in putting a setting in the wrong section; such errors are not easy to spot because Documentum silently ignores them.
Consider this snippet from dfcfull.properties for instance:

# ACS configuration
# =================
# Preferences prefixed dfc.acs are used by dfc for distributed content services 
# ACS.                                                                          


# Defines how often dfc verifies acs projection, specified in seconds.          
# min value:  0, max value: 10000000
# 
dfc.acs.avail.refresh_interval = 360


# Indicates whether to verify if ACS server projects.                           
# 
dfc.acs.check_availability = true


# Defines how often dfc verifies that docbase related config objects are 
# modified, specified in seconds.                                               
# min value:  0, max value: 10000000
# 
dfc.acs.config.refresh_interval = 120


# Defines how often dfc verifies that global registry related config objects are 
# modified, specified in seconds.                                               
# min value:  0, max value: 10000000
# 
dfc.acs.gr.refresh_interval = 120
...

Wouldn’t it be nice to move these settings into a SQLite table with the structure dfc_properties(key PRIMARY KEY, value, comment, is_enabled)? Both could still coexist and be merged at server startup time, with priority given to the file (i.e. if a given setting is present both in the table and in the file, the latter would prevail). We could have the classic dfc.properties file along with the database file dfc.properties.db.
Common operations on the parameters would be done through SQL statements, e.g.:

SELECT key, value, comment from dfc_properties where is_enabled = TRUE;
SELECT 'other_values', key, value from dfc_properties where is_enabled = FALSE and key = ... ORDER BY key, value;
SELECT 'all_values', is_enabled, key, value from dfc_properties ORDER BY is_enabled DESC, key, value;
UPDATE server_ini SET value = 'mydb.world' WHERE key = 'database_conn';

All the supported parameters would already be present in the table with correct names and default values. Alternative or deactivated values would have their is_enabled value set to FALSE.
But the main interest of using SQLite tables here would be the CHECK constraints (or FOREIGN KEY constraints) to prevent typos in the parameter names. And for a server.ini.db with structure server_ini(section, key, value, comment, is_enabled), the section + key pair would be a composite foreign key, backed by a domain CHECK constraint in the parent table, to prevent misplaced parameters. This would require a real database schema with some complexity, and it would be delivered by Documentum. The point here is that SQLite would be an excellent tool for this kind of data.
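As a minimal sketch of that idea, and only as an illustration (the table names, sections and keys below are made up, not an actual Documentum deliverable), such a schema could even be created through the gawk extension itself; note that foreign key enforcement must be switched on explicitly in SQLite:

# sketch only: a catalog of valid (section, key) pairs plus the actual server.ini settings;
@load "sqlite_gawk"
BEGIN {
   db = sqlite_open("server_ini.db")

   # foreign keys are off by default in SQLite;
   sqlite_exec(db, "PRAGMA foreign_keys = ON")

   # parent table: all supported sections and keys, with a domain CHECK on the sections;
   sqlite_exec(db, "CREATE TABLE IF NOT EXISTS valid_params(" \
                   " section TEXT CHECK (section IN ('SERVER_STARTUP', 'DOCBROKER_PROJECTION_TARGET'))," \
                   " key TEXT," \
                   " PRIMARY KEY (section, key))")
   sqlite_exec(db, "INSERT OR IGNORE INTO valid_params VALUES('SERVER_STARTUP', 'docbase_name')")

   # child table: the actual settings; a misspelled or misplaced key violates the foreign key;
   sqlite_exec(db, "CREATE TABLE IF NOT EXISTS server_ini(" \
                   " section TEXT, key TEXT, value TEXT, comment TEXT, is_enabled INTEGER DEFAULT 1," \
                   " FOREIGN KEY (section, key) REFERENCES valid_params(section, key))")

   # accepted, returns 0: docbase_name belongs to SERVER_STARTUP;
   print sqlite_exec(db, "INSERT INTO server_ini(section, key, value) VALUES('SERVER_STARTUP', 'docbase_name', 'mydocbase')")
   # rejected, returns -1: docbase_name is not a valid key of DOCBROKER_PROJECTION_TARGET;
   print sqlite_exec(db, "INSERT INTO server_ini(section, key, value) VALUES('DOCBROKER_PROJECTION_TARGET', 'docbase_name', 'mydocbase')")

   sqlite_close(db)
}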
But let’s leave it at that. Such a design and implementation could also deserve its own blog.
I hope this interface has caught your interest. In the next part, I’ll present its gory details, comment on the implementation and its limits, and show how to compile and use it within gawk. See you there!

 

The article An SQLite extension for gawk (part I) appeared first on the dbi services blog.

A SQLite extension for gawk (part II)


Welcome to part II of a three-part article on extending gawk with a SQLite binding. Part I is here. Part II is followed by Part III, which gives some explanations about the code presented here and shows how to use the extension with a stress test.
Here, I’ll list the source code of the extension and give instructions to compile and use it in gawk. Beware though that the code should be taken with several grains of salt, actually a whole wheelbarrow of them, because it has only been superficially tested. Some more serious testing is required before trusting it entirely. So, caveat emptor!
I assume the source code of gawk is already installed (see for example the instructions here). We still need the SQLite source code. Go here and download the amalgamation zip file. Unzip it somewhere and then copy the file sqlite3.c to the gawk extension directory, ~/dmgawk/gawk-4.2.1/extension. Compile it with the command below:

gcc -c sqlite3.c -DHAVE_READLINE -fPIC -lpthread -ldl

Now, edit the file Makefile.am and add the references to the new extension, as shown below:

vi Makefile.am
pkgextension_LTLIBRARIES = \
filefuncs.la \
...
sqlite_gawk.la <-----
 
noinst_LTLIBRARIES = \
...
time_la_SOURCES = time.c
time_la_LDFLAGS = $(MY_MODULE_FLAGS)
time_la_LIBADD = $(MY_LIBS)
sqlite_gawk_la_SOURCES = sqlite_gawk.c <-----
sqlite_gawk_la_LDFLAGS = $(MY_MODULE_FLAGS) <-----
sqlite_gawk_la_LIBADD = $(MY_LIBS) -lpthread -ldl -lreadline <-----

...

Save and quit; that’s all for the makefile.
We are still in the extension directory. Let’s now create the interface file sqlite_gawk.c and insert the code below:
vi sqlite_gawk.c

/*
 * sqlite_gawk.c - an interface to the sqlite3 library;
 * Cesare Cervini
 * dbi-services.com
 * 8/2018
*/
#ifdef HAVE_CONFIG_H
#include <config.h>
#endif

#include <stdio.h>
#include <assert.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#include <sys/types.h>
#include <sys/stat.h>

#include "gawkapi.h"

// extension;
#include <time.h>
#include <errno.h>
#include <limits.h>
#include <sys/time.h>
#include <sys/resource.h>
#include <sys/types.h>
#include <regex.h>
#include <sqlite3.h>

#include "gettext.h"
#define _(msgid)  gettext(msgid)
#define N_(msgid) msgid

static const gawk_api_t *api;   /* for convenience macros to work */
static awk_ext_id_t ext_id;
static const char *ext_version = "an interface to sqlite3: version 1.0";

int plugin_is_GPL_compatible;

/* internal structure and variables */
/*
internally stores the db handles so that an integer, the index into an array of handles, can be returned to gawk;
*/
#define MAX_DB 100
static unsigned nb_sqlite_free_handles = MAX_DB;
static sqlite3 *sqlite_handles[MAX_DB];

/* init_sqlite_handles */
/*
since gawk does not know pointers, we use integers as db handles, which are actually indexes into a table of sqlite db handles;
initializes the sqlite_handles array to null pointers and resets the number of free handles;
called at program start time;
*/
static awk_bool_t init_sqlite_handles(void) {
   for (unsigned i = 0; i < MAX_DB; i++)
      sqlite_handles[i] = NULL;
   nb_sqlite_free_handles = MAX_DB;

   register_ext_version(ext_version);

   return awk_true;
}

/*
readfile() and writefile() functions for blob I/Os from/to file into/from memory;
e.g.:
   INSERT INTO chickenhouse my_blob = readfile('chicken_run.mp4');
   SELECT writefile("'Mary Poppins Returns.mp4'", my_blob) FROM children_movies;
they are taken directly from the source file sqlite3.c, i.e. sqlite3's shell program;
those functions are later registered in do_sqlite_open() via a call to sqlite3_create_function() for use as extension SQL functions;
this is done while opening a db in do_sqlite_open(), for each opened db where the extended functions must be available, i.e., in short, all of them;
*/

// ------------------------------------------- begin of imported functions from sqlite3.c ---------------------------------------------
/*
** This function is used in place of stat().  On Windows, special handling
** is required in order for the included time to be returned as UTC.  On all
** other systems, this function simply calls stat().
*/
static int fileStat(
  const char *zPath,
  struct stat *pStatBuf
){
  return stat(zPath, pStatBuf);
}

/*
** Set the result stored by context ctx to a blob containing the 
** contents of file zName.
*/
static void readFileContents(sqlite3_context *ctx, const char *zName){
  FILE *in;
  long nIn;
  void *pBuf;

  in = fopen(zName, "rb");
  if( in==0 ) return;
  fseek(in, 0, SEEK_END);
  nIn = ftell(in);
  rewind(in);
  pBuf = sqlite3_malloc( nIn );
  if( pBuf && 1==fread(pBuf, nIn, 1, in) ){
    sqlite3_result_blob(ctx, pBuf, nIn, sqlite3_free);
  }else{
    sqlite3_free(pBuf);
  }
  fclose(in);
}

/*
** Implementation of the "readfile(X)" SQL function.  The entire content
** of the file named X is read and returned as a BLOB.  NULL is returned
** if the file does not exist or is unreadable.
*/
static void readfileFunc(
  sqlite3_context *context,
  int argc,
  sqlite3_value **argv
){
  const char *zName;
  (void)(argc);  /* Unused parameter */
  zName = (const char*)sqlite3_value_text(argv[0]);
  if( zName==0 ) return;
  readFileContents(context, zName);
}

/*
** This function does the work for the writefile() UDF. Refer to 
** header comments at the top of this file for details.
*/
static int writeFile(
  sqlite3_context *pCtx,          /* Context to return bytes written in */
  const char *zFile,              /* File to write */
  sqlite3_value *pData,           /* Data to write */
  mode_t mode,                    /* MODE parameter passed to writefile() */
  sqlite3_int64 mtime             /* MTIME parameter (or -1 to not set time) */
){
  if( S_ISLNK(mode) ){
    const char *zTo = (const char*)sqlite3_value_text(pData);
    if( symlink(zTo, zFile)<0 ) return 1;
  }else
  {
    if( S_ISDIR(mode) ){
      if( mkdir(zFile, mode) ){
        /* The mkdir() call to create the directory failed. This might not
        ** be an error though - if there is already a directory at the same
        ** path and either the permissions already match or can be changed
        ** to do so using chmod(), it is not an error.  */
        struct stat sStat;
        if( errno!=EEXIST
         || 0!=fileStat(zFile, &sStat)
         || !S_ISDIR(sStat.st_mode)
         || ((sStat.st_mode&0777)!=(mode&0777) && 0!=chmod(zFile, mode&0777))
        ){
          return 1;
        }
      }
    }else{
      sqlite3_int64 nWrite = 0;
      const char *z;
      int rc = 0;
      FILE *out = fopen(zFile, "wb");
      if( out==0 ) return 1;
      z = (const char*)sqlite3_value_blob(pData);
      if( z ){
        sqlite3_int64 n = fwrite(z, 1, sqlite3_value_bytes(pData), out);
        nWrite = sqlite3_value_bytes(pData);
        if( nWrite!=n ){
          rc = 1;
        }
      }
      fclose(out);
      if( rc==0 && mode && chmod(zFile, mode & 0777) ){
        rc = 1;
      }
      if( rc ) return 2;
      sqlite3_result_int64(pCtx, nWrite);
    }
  }

  if( mtime>=0 ){
#if defined(AT_FDCWD) && 0 /* utimensat() is not universally available */
    /* Recent unix */
    struct timespec times[2];
    times[0].tv_nsec = times[1].tv_nsec = 0;
    times[0].tv_sec = time(0);
    times[1].tv_sec = mtime;
    if( utimensat(AT_FDCWD, zFile, times, AT_SYMLINK_NOFOLLOW) ){
      return 1;
    }
#else
    /* Legacy unix */
    struct timeval times[2];
    times[0].tv_usec = times[1].tv_usec = 0;
    times[0].tv_sec = time(0);
    times[1].tv_sec = mtime;
    if( utimes(zFile, times) ){
      return 1;
    }
#endif
  }

  return 0;
}

/*
** Argument zFile is the name of a file that will be created and/or written
** by SQL function writefile(). This function ensures that the directory
** zFile will be written to exists, creating it if required. The permissions
** for any path components created by this function are set to (mode&0777).
**
** If an OOM condition is encountered, SQLITE_NOMEM is returned. Otherwise,
** SQLITE_OK is returned if the directory is successfully created, or
** SQLITE_ERROR otherwise.
*/
static int makeDirectory(
  const char *zFile,
  mode_t mode
){
  char *zCopy = sqlite3_mprintf("%s", zFile);
  int rc = SQLITE_OK;

  if( zCopy==0 ){
    rc = SQLITE_NOMEM;
  }else{
    int nCopy = (int)strlen(zCopy);
    int i = 1;

    while( rc==SQLITE_OK ){
      struct stat sStat;
      int rc2;

      for(; zCopy[i]!='/' && i<nCopy; i++);
      if( i==nCopy ) break;
      zCopy[i] = '\0';

      rc2 = fileStat(zCopy, &sStat);
      if( rc2!=0 ){
        if( mkdir(zCopy, mode & 0777) ) rc = SQLITE_ERROR;
      }else{
        if( !S_ISDIR(sStat.st_mode) ) rc = SQLITE_ERROR;
      }
      zCopy[i] = '/';
      i++;
    }

    sqlite3_free(zCopy);
  }

  return rc;
}

/*
** Set the error message contained in context ctx to the results of
** vprintf(zFmt, ...).
*/
static void ctxErrorMsg(sqlite3_context *ctx, const char *zFmt, ...){
  char *zMsg = 0;
  va_list ap;
  va_start(ap, zFmt);
  zMsg = sqlite3_vmprintf(zFmt, ap);
  sqlite3_result_error(ctx, zMsg, -1);
  sqlite3_free(zMsg);
  va_end(ap);
}

/*
** Implementation of the "writefile(W,X[,Y[,Z]]])" SQL function.  
** Refer to header comments at the top of this file for details.
*/
static void writefileFunc(
  sqlite3_context *context,
  int argc,
  sqlite3_value **argv
){
  const char *zFile;
  mode_t mode = 0;
  int res;
  sqlite3_int64 mtime = -1;

  if( argc<2 || argc>4 ){
    sqlite3_result_error(context, 
        "wrong number of arguments to function writefile()", -1
    );
    return;
  }

  zFile = (const char*)sqlite3_value_text(argv[0]);
  if( zFile==0 ) return;
  if( argc>=3 ){
    mode = (mode_t)sqlite3_value_int(argv[2]);
  }
  if( argc==4 ){
    mtime = sqlite3_value_int64(argv[3]);
  }

  res = writeFile(context, zFile, argv[1], mode, mtime);
  if( res==1 && errno==ENOENT ){
    if( makeDirectory(zFile, mode)==SQLITE_OK ){
      res = writeFile(context, zFile, argv[1], mode, mtime);
    }
  }

  if( argc>2 && res!=0 ){
    if( S_ISLNK(mode) ){
      ctxErrorMsg(context, "failed to create symlink: %s", zFile);
    }else if( S_ISDIR(mode) ){
      ctxErrorMsg(context, "failed to create directory: %s", zFile);
    }else{
      ctxErrorMsg(context, "failed to write file: %s", zFile);
    }
  }
}
// ------------------------------------------- end of imported functions from sqlite3.c ---------------------------------------------

/* get_free_sqlite_handle */
/*
looks for a free slot in sqlite_handles;
return its index if found, -1 otherwise;
*/
static unsigned get_free_sqlite_handle(void) {
   if (0 == nb_sqlite_free_handles) {
       fprintf(stderr, "maximum of open db [%d] reached, no free handles !\n", MAX_DB);
       return -1;
   }
   for (unsigned i = 0; i < MAX_DB; i++)
      if (NULL == sqlite_handles[i])
         return i;
   // should never come so far;
   return -1;
}

/* do_sqlite_open */
/* returns -1 if error, a db handle in the range 0 .. MAX_DB - 1 otherwise; */
static awk_value_t *
do_sqlite_open(int nargs, awk_value_t *result, struct awk_ext_func *unused) {
   awk_value_t db_name;
   short int ret;

   assert(result != NULL);

   unsigned int db_handle = get_free_sqlite_handle();
   if (-1 == db_handle)
      return make_number(-1, result);

   if (get_argument(0, AWK_STRING, &db_name)) {
      sqlite3 *db;

      ret = sqlite3_open(db_name.str_value.str, &db);

      if (ret) {
         char error_string[1000];
         sprintf(error_string, "sqlite3_open(): cannot open database [%s], error %s\n", db_name.str_value.str, sqlite3_errmsg(db));
         fprintf(stderr, "%s\n", error_string);
         update_ERRNO_string(_(error_string));
         ret = -1;
      }
      else {
         sqlite_handles[db_handle] = db;
         nb_sqlite_free_handles--;
         ret = db_handle;

         // register the extension functions readfile() and writefile() for blobs;
         ret = sqlite3_create_function(db, "readfile", 1, SQLITE_UTF8, 0, readfileFunc, 0, 0);
         if (ret == SQLITE_OK) {
            ret = sqlite3_create_function(db, "writefile", -1, SQLITE_UTF8, 0, writefileFunc, 0, 0);
            if (SQLITE_OK != ret)
               fprintf(stderr, "%s\n", "could not register function writefile()");
         }
         else if (SQLITE_OK != ret)
            fprintf(stderr, "%s\n", "could not register function readfile()");
      }
   }
   else {
      update_ERRNO_string(_("sqlite3_open(): missing parameter database name"));
      ret = -1;
   }

   return make_number(ret, result);
}

/* do_sqlite_close */
/* returns -1 if error, 0 otherwise; */
static awk_value_t *
do_sqlite_close(int nargs, awk_value_t *result, struct awk_ext_func *unused) {
   awk_value_t db_handle;
   int ret;

   assert(result != NULL);

   if (get_argument(0, AWK_NUMBER, &db_handle)) {
      sqlite3_close(sqlite_handles[(int) db_handle.num_value]);
      sqlite_handles[(int) db_handle.num_value] = NULL;
      nb_sqlite_free_handles++;
      ret = 0;
   }
   else {
      update_ERRNO_string(_("sqlite3_close(): missing parameter database handle"));
      ret = -1;
   }
   return make_number(ret, result);
}

/* do_sqlite_exec */
/*
returns -1 if error, 0 otherwise;
sqlite_exec is overloaded;
if 2 parameters, usual DML/DDL statements;
if 6 parameters, then incremental blob I/O;
sqlite_exec(db, db_name, table, column, rowid, readfile(file_name))
or
sqlite_exec(db, db_name, table, column, rowid, writefile(file_name))
implements sqlite3'c shell readfile()/writefile() syntax with incremental blob I/Os;
Example of usage:
first, get the rowid of the row that contains the blob to access;
   sqlite_select(db, "select rowid from <table> where <condition>", array)
then, call the sqlite_exec function with the tuple (<db_name>, <table>, <blob_column>, <rowid>) and the action to do, either readfile() or writefile();
   sqlite_exec(db, <db_name>, '<table>', '<blob_column>', array[0]["rowid"], readfile(file_name))
   sqlite_exec(db, <db_name>, '<table>', '<blob_column>', array[0]["rowid"], writefile(file_name))
e.g.:
   rc = sqlite_exec(my_db, "main", "test_with_blob", "my_blob", a_test[0]["rowid"], "readfile(/home/dmadmin/setup_files/documentum.tar)")
note how the file name is not quoted;
in case of readfile(), if the blob's size changes, the blob is first updated to a zeroblob() of the new size, then reopened;
see doc here for incremental blob I/Os: https://sqlite.org/c3ref/blob_open.html;
int sqlite3_blob_open(sqlite3*, const char *zDb, const char *zTable, const char *zColumn, sqlite3_int64 iRow, int flags, sqlite3_blob **ppBlob);
int sqlite3_blob_reopen(sqlite3_blob *, sqlite3_int64);
int sqlite3_blob_read(sqlite3_blob *, void *Z, int N, int iOffset);
int sqlite3_blob_write(sqlite3_blob *, const void *z, int n, int iOffset);
int sqlite3_blob_close(sqlite3_blob *);
*/
static awk_value_t *
do_sqlite_exec(int nargs, awk_value_t *result, struct awk_ext_func *unused) {
   awk_value_t db_handle;
   int ret = 1;

   assert(result != NULL);

   if (!get_argument(0, AWK_NUMBER, &db_handle)) {
      fprintf(stderr, "in do_sqlite_exec, cannot get the db handle argument\n");
      ret = -1;
      goto end;
   }
   if (2 == nargs) {
      awk_value_t sql_stmt;
      if (!get_argument(1, AWK_STRING, &sql_stmt))  {
         fprintf(stderr, "in do_sqlite_exec, cannot get the sql_stmt argument\n");
         ret = -1;
         goto end;
      }
      char *errorMessg = NULL;
      ret = sqlite3_exec(sqlite_handles[(int) db_handle.num_value], sql_stmt.str_value.str, NULL, NULL, &errorMessg);
      if (SQLITE_OK != ret) {
         fprintf(stderr, "in do_sqlite_exec, SQL error %s while executing [%s]\n", sqlite3_errmsg(sqlite_handles[(int) db_handle.num_value]), sql_stmt.str_value.str);
         sqlite3_free(errorMessg);
         ret = -1;
         goto end;
      }
   }
   else if (6 == nargs) {
      awk_value_t arg, number_value;
      char *db_name = NULL, *table_name = NULL, *column_name = NULL, *file_stmt = NULL, *file_name = NULL;

      if (!get_argument(1, AWK_STRING, &arg))  {
         fprintf(stderr, "in do_sqlite_exec, cannot get the db_name argument\n");
         ret = -1;
         goto abort;
      }
      db_name = strdup(arg.str_value.str);

      if (!get_argument(2, AWK_STRING, &arg))  {
         fprintf(stderr, "in do_sqlite_exec, cannot get the table_name argument\n");
         ret = -1;
         goto abort;
      }
      table_name = strdup(arg.str_value.str);

      if (!get_argument(3, AWK_STRING, &arg))  {
         fprintf(stderr, "in do_sqlite_exec, cannot get the column_name argument\n");
         ret = -1;
         goto abort;
      }
      column_name = strdup(arg.str_value.str);
      
      if (!get_argument(4, AWK_NUMBER, &number_value))  {
         fprintf(stderr, "in do_sqlite_exec, cannot get the rowid argument\n");
         ret = -1;
         goto abort;
      }
      long int rowid = number_value.num_value;
      
      if (!get_argument(5, AWK_STRING, &arg))  {
         fprintf(stderr, "in do_sqlite_exec, cannot get the readfile()/writefile() argument\n");
         ret = -1;
         goto abort;
      }
      file_stmt = strdup(arg.str_value.str);
      
      unsigned short bRead2Blob;
      char *RE_readfile = "^readfile\\(([^)]+)\\)$";
      char *RE_writefile = "^writefile\\(([^)]+)\\)$";
      regex_t RE;
      regmatch_t pmatches[2];

      if (regcomp(&RE, RE_readfile, REG_EXTENDED)) {
         fprintf(stderr, "in do_sqlite_exec, error compiling REs %s\n", RE_readfile);
         ret = -1;
         goto abort;
      }
      if (regexec(&RE, file_stmt, 2, pmatches, 0)) {
         // no call to readfile() requested, try writefile();
         regfree(&RE);
         if (regcomp(&RE, RE_writefile, REG_EXTENDED)) {
            fprintf(stderr, "in do_sqlite_exec, error compiling REs %s\n", RE_writefile);
            ret = -1;
            goto abort;
         }
         if (regexec(&RE, file_stmt, 2, pmatches, 0)) {
            fprintf(stderr, "in do_sqlite_exec, error executing RE %s and RE %s against %s;\nneither readfile(file_name) nor writefile(file_name) was found\n", RE_readfile, RE_writefile, file_stmt);
            ret = -1;
            goto abort;
         }
         else bRead2Blob = 0;
      }
      else bRead2Blob = 1;
      file_name = strndup(file_stmt + pmatches[1].rm_so, pmatches[1].rm_eo - pmatches[1].rm_so);
      regfree(&RE);
      sqlite3_blob *pBlob;
      if (bRead2Blob) {
         ret = sqlite3_blob_open(sqlite_handles[(int) db_handle.num_value], db_name, table_name, column_name, rowid, 1, &pBlob);
         if (SQLITE_OK != ret) {
            fprintf(stderr, "in do_sqlite_exec, at reading blob, with parameters: db_name=%s, table_name=%s, column_name=%s, rowid=%ld, file statement=%s, error in sqlite3_blob_open %s\n%s\n",
                            db_name, table_name, column_name, rowid, file_stmt,
                            sqlite3_errmsg(sqlite_handles[(int) db_handle.num_value]), sqlite3_errstr(ret));
            ret = -1;
            goto abort;
         }

         FILE *fs_in = fopen(file_name, "r");
         if (NULL == fs_in) {
            fprintf(stderr, "in do_sqlite_exec, error opening file %s for reading\n", file_name);
            ret = -1;
            goto local_abort_w;
         }

         // will the blob size change ?
         fseek(fs_in, 0, SEEK_END);
         unsigned long file_size = ftell(fs_in);
         rewind(fs_in);
         unsigned long blobSize = sqlite3_blob_bytes(pBlob);
         if (file_size != blobSize) {
            // yes, must first update the blob with the new size and reopen it;
            char stmt[500];
            char *errorMessg = NULL;
            sprintf(stmt, "update %s set %s = zeroblob(%ld) where rowid = %ld", table_name, column_name, file_size, rowid);
            ret = sqlite3_exec(sqlite_handles[(int) db_handle.num_value], stmt, NULL, NULL, &errorMessg);
            if (SQLITE_OK != ret) {
               fprintf(stderr, "in do_sqlite_exec, SQL error %s while changing the blob's size through [%s]:\n%s\n", sqlite3_errmsg(sqlite_handles[(int) db_handle.num_value]), stmt, errorMessg);
               sqlite3_free(errorMessg);
               ret = -1;
               goto local_abort_w;
            }
            ret = sqlite3_blob_reopen(pBlob, rowid);
            if (SQLITE_OK != ret) {
               fprintf(stderr, "in do_sqlite_exec, error while reopening the blob: %s\n%s\n", sqlite3_errmsg(sqlite_handles[(int) db_handle.num_value]), sqlite3_errstr(ret));
               ret = -1;
               goto local_abort_w;
            }
         }

         // let's work with a 10 MiB large buffer;
         unsigned long BUFFER_SIZE = 10 * 1024 * 1024;
         char *pBuffer = (char *) malloc(sizeof(char) * BUFFER_SIZE);
         unsigned long nbBytes;
         unsigned long offset = 0;
         while ((nbBytes = fread(pBuffer, sizeof(char), BUFFER_SIZE, fs_in)) > 0) {
            ret = sqlite3_blob_write(pBlob, pBuffer, nbBytes, offset);
            if (SQLITE_OK != ret) {
               fprintf(stderr, "in do_sqlite_exec, sqlite3_blob_write, error %s\n%s\n", sqlite3_errmsg(sqlite_handles[(int) db_handle.num_value]), sqlite3_errstr(ret));
               ret = -1;
               free(pBuffer);
               goto local_abort_w;
            }
            offset += nbBytes;
         }
         free(pBuffer);
local_abort_w:
         fclose(fs_in);
         ret = sqlite3_blob_close(pBlob);
         if (SQLITE_OK != ret) {
            fprintf(stderr, "in do_sqlite_exec, sqlite3_blob_close, error %s\n%s\n", sqlite3_errmsg(sqlite_handles[(int) db_handle.num_value]), sqlite3_errstr(ret));
            ret = -1;
         }
      }
      else {
         ret = sqlite3_blob_open(sqlite_handles[(int) db_handle.num_value], db_name, table_name, column_name, rowid, 0, &pBlob);
         if (SQLITE_OK != ret) {
            fprintf(stderr, "in do_sqlite_exec at writing blob, error %d in sqlite3_blob_open with parameters: db_name=%s, table_name=%s, column_name=%s, rowid=%ld, file statement=%s\n",
                            ret,
                            db_name, table_name, column_name, rowid, file_stmt);
            ret = -1;
            goto abort;
         }
         unsigned long BUFFER_SIZE = 10 * 1024 * 1024;
         char *pBuffer = (char *) malloc(sizeof(char) * BUFFER_SIZE);
         unsigned long offset = 0;
         FILE *fs_out = fopen(file_name, "w");
         if (NULL == fs_out) {
            fprintf(stderr, "in do_sqlite_exec, error %d opening file %s for writing\n", errno, file_name);
            ret = -1;
            goto local_abort_r;
         }
         unsigned long blobSize = sqlite3_blob_bytes(pBlob);
         if (BUFFER_SIZE >= blobSize) {
            ret = sqlite3_blob_read(pBlob, pBuffer, blobSize, offset);
            if (SQLITE_OK != ret) {
               fprintf(stderr, "in do_sqlite_exec, sqlite3_blob_read, error %s\n%s\n", sqlite3_errmsg(sqlite_handles[(int) db_handle.num_value]), sqlite3_errstr(ret));
               ret = -1;
               goto local_abort_r;
            }
            unsigned long nbBytes = fwrite(pBuffer, sizeof(char), BUFFER_SIZE, fs_out);
            if (nbBytes < blobSize) {
               fprintf(stderr, "in do_sqlite_exec, error in fwrite()\n");
               ret = -1;
               goto local_abort_r;
            }
         }
         else {
            unsigned long nbBytes;
            while ((nbBytes = (blobSize <= BUFFER_SIZE ? blobSize : BUFFER_SIZE)) > 0) {
               ret = sqlite3_blob_read(pBlob, pBuffer, nbBytes, offset);
               if (SQLITE_OK != ret) {
                  fprintf(stderr, "in do_sqlite_exec, sqlite3_blob_read, error %s\n%s\n", sqlite3_errmsg(sqlite_handles[(int) db_handle.num_value]), sqlite3_errstr(ret));
                  ret = -1;
                  goto local_abort_r;
               }
               ret = fwrite(pBuffer, sizeof(char), nbBytes, fs_out);
               if (ret < nbBytes) {
                  fprintf(stderr, "in do_sqlite_exec, error in fwrite()\n");
                  ret = -1;
                  goto local_abort_r;
               }
               offset += nbBytes;
               blobSize -= nbBytes;
            }
         }
local_abort_r:
         fclose(fs_out);
         free(pBuffer);
         ret = sqlite3_blob_close(pBlob);
         if (SQLITE_OK != ret) {
            fprintf(stderr, "in do_sqlite_exec, processing of writefile(), sqlite3_blob_close, error %s\n%s\n", sqlite3_errmsg(sqlite_handles[(int) db_handle.num_value]), sqlite3_errstr(ret));
            ret = -1;
         }
      }
abort:
      free(db_name);
      free(table_name);
      free(column_name);
      free(file_stmt);
      free(file_name);
   }
   else {
      fprintf(stderr, "in do_sqlite_exec, unsupported number of parameters in statement while processing [%d]\n", nargs);
      ret = -1;
   }
end:
   return make_number(ret, result);
}

static unsigned max(unsigned n1, unsigned n2) {
   if (n1 > n2)
      return n1;
   else return n2;
}

// this struct is used to pass parameters to the callbacks;
typedef struct DISPLAYED_TABLE {
   char *sqlStmt;

   unsigned short bHeaderPrinted;
   char *COL_SEPARATOR;
   char *ELLIPSIS;
   unsigned short len_ellipsis;
   unsigned short MAX_WIDTH;
   unsigned short MIN_WIDTH;

   unsigned nb_columns;
   unsigned *max_col_widths;
   unsigned *actual_col_widths;
   char *col_overflow_action;
   char **headers;
   unsigned widest_column;

   char *size_list;  // list of blank- or comma-separated column widths;

   unsigned long NR;
   unsigned short bStoreOrDisplay;
   awk_array_t gawk_array;

   unsigned short bEpilog;
} DISPLAYED_TABLE;

void cleanup(DISPLAYED_TABLE *dt) {
   if (dt -> sqlStmt)
      free(dt -> sqlStmt);
   if (dt -> max_col_widths)
      free(dt -> max_col_widths);
   if (dt -> actual_col_widths)
      free(dt -> actual_col_widths);
   for (unsigned i = 0; i < dt -> nb_columns; i++) {
      if (dt -> headers)
         free(dt -> headers[i]);
   }
   if (dt -> headers)
      free(dt -> headers);
   if (dt -> size_list)
      free(dt -> size_list);
}

// strip the trailing blanks so they are not counted toward the column width;
// and returns the number of characters from the beginning;
// a former version attempted to insert a '\0' terminator in place, but it caused an error when the string resides in a read-only code area (e.g. SELECT sqlite_version()) because modifications are not allowed there for obvious reasons;
unsigned getUsefulLen(char * str) {
   unsigned len = strlen(str);
   char *p = str + len - 1;
   while (' ' == *p && p > str) p--;
   if (' ' != *p)
      len = p - str + 1;
   else
      len = 0;
   return(len);
}

char *fillStr(char *S, char ch, unsigned max_len) {
   S[max_len] = '\0';
   for (char *p = S; max_len; p++, max_len--)
      *p = ch;
   return S;
}

/* select_callback_raw */
// displays the data without truncation nor trailing-blank cleanup;
// columns are separated by dt -> COL_SEPARATOR, which is useful to import data as CSV;
static int select_callback_raw(void *vdt, int nb_columns, char **column_values, char **column_names) {
   DISPLAYED_TABLE *dt = (DISPLAYED_TABLE *) vdt;

   if (dt -> bEpilog) {
      printf("%ld rows selected\n", dt -> NR);
      cleanup(dt);
      return 0;
   }

   if (!dt -> bHeaderPrinted) {
      // header has not been printed yet, print it and afterwards print the first row;
      for (unsigned i = 0; i < nb_columns; i++)
         printf("%s%s", column_names[i], i  COL_SEPARATOR : "");
      printf("\n");
      dt -> bHeaderPrinted = 1;
   }

   for (unsigned i = 0; i < nb_columns; i++)
      printf("%s%s", column_values[i], i  COL_SEPARATOR : "");
   printf("\n");
   dt -> NR++;
   return 0;
}

/* select_callback_draft */
/*
display the data in maximum 15-character wide columns, with possible truncation, in which case an ellipsis (...) is appended;
at the end, the optimum widths for each column are listed so they can be passed as a string list in the call to sqlite_select(,, "....") to avoid truncation;
this output is convenient as a quick draft;
*/
static int select_callback_draft(void *vdt, int nb_columns, char **column_values, char **column_names) {
   DISPLAYED_TABLE *dt = (DISPLAYED_TABLE *) vdt;

   char col_str[dt -> MAX_WIDTH + 1];

   if (dt -> bEpilog) {
      printf("%ld rows selected\n", dt -> NR);

      printf("\nOptimum column widths\n");
      printf("=====================\n");
      printf("for query: %s\n", dt -> sqlStmt);
      for (unsigned i = 0; i < dt -> nb_columns; i++)
         printf("%-*s  %d\n", dt -> widest_column + 5, dt -> headers[i], dt -> max_col_widths[i]);

      cleanup(dt);
      return 0;
   }

   if (!dt -> bHeaderPrinted) {
      // header has not been printed yet, print it and afterwards print the first row;
      dt -> nb_columns = nb_columns; 
      dt -> max_col_widths = (unsigned *) malloc(sizeof(unsigned) * nb_columns);
      dt -> actual_col_widths = (unsigned *) malloc(sizeof(unsigned) * nb_columns);
      dt -> headers = (char **) malloc(sizeof(char *) * nb_columns);

      char *header_line = NULL;

      for (unsigned i = 0; i < nb_columns; i++) {
         char *tmp_s;
         unsigned len = strlen(column_names[i]);
         dt -> max_col_widths[i] = len;
         dt -> widest_column = max(dt -> widest_column, len);
         if (len > dt -> MAX_WIDTH) {
            // column overflow, apply a truncation with ellipsis;
            dt -> actual_col_widths[i] = dt -> MAX_WIDTH;
            strncpy(col_str, column_names[i], dt -> MAX_WIDTH - dt -> len_ellipsis);
            col_str[dt -> MAX_WIDTH - dt -> len_ellipsis] = '\0';
            strcat(col_str, dt -> ELLIPSIS);
            tmp_s = col_str;
         }
         else if (len < dt -> MIN_WIDTH) {
            dt -> actual_col_widths[i] = dt -> MIN_WIDTH;
            tmp_s = column_names[i];
         }
         else {
            dt -> actual_col_widths[i] = len;
            tmp_s = column_names[i];
         }
         printf("%-*s%s", dt -> actual_col_widths[i], tmp_s, i  COL_SEPARATOR : "");
         dt -> headers[i] = strdup(column_names[i]);
      }
      printf("\n");

      for (unsigned i = 0; i < nb_columns; i++) {
         header_line = (char *) realloc(header_line, sizeof(char) * dt -> actual_col_widths[i]);
         fillStr(header_line, '-', dt -> actual_col_widths[i]);
         printf("%s%s", header_line, i  COL_SEPARATOR : "");
      }
      printf("\n");
      free(header_line);

      dt -> bHeaderPrinted = 1;
   }
   // header has been printed, print the rows now;
   for (unsigned i = 0; i < nb_columns; i++) {
      char *tmp_s;
      unsigned len = getUsefulLen(column_values[i]);
      dt -> max_col_widths[i] = max(dt -> max_col_widths[i], len);
      if (len > dt -> actual_col_widths[i]) {
         strncpy(col_str, column_values[i], dt -> actual_col_widths[i] - dt -> len_ellipsis);
         col_str[dt -> actual_col_widths[i] - dt -> len_ellipsis] = '\0';
         strcat(col_str, dt -> ELLIPSIS);
         tmp_s = col_str;
      }
      else {
         tmp_s = column_values[i];
      }
      printf("%-*.*s%s", dt -> actual_col_widths[i], dt -> actual_col_widths[i], tmp_s, i  COL_SEPARATOR : "");
   }
   printf("\n");
   dt -> NR++;
   return 0;
}

/* printConstrained */
// prints the row's column in constrained column widths;
static void printConstrained(DISPLAYED_TABLE *dt, char **data, unsigned nb_columns) {
   // let's replicate the data because they will be modified locally;
   char **ldata = (char **) malloc(sizeof(char *) * nb_columns);
   for (unsigned i = 0; i < nb_columns; i++)
      ldata[i] = strndup(data[i], getUsefulLen(data[i]));
   unsigned bWrapOccured;
   do {
      bWrapOccured = 0;
      for (unsigned i = 0; i < nb_columns; i++) {
         char *col_str = NULL;
         unsigned len = strlen(ldata[i]);
         dt -> actual_col_widths[i] = dt -> max_col_widths[i];
         if (len > dt -> max_col_widths[i]) {
            // column width overflow, apply the requested action: either wrap-around, truncate with ellipsis or truncate without ellipsis;
            if ('e' == dt -> col_overflow_action[i]) {
               if (dt -> max_col_widths[i] < dt -> len_ellipsis)
                  dt -> actual_col_widths[i] = dt -> len_ellipsis;
               col_str = strndup(ldata[i], dt -> actual_col_widths[i] - dt -> len_ellipsis);
               col_str[dt -> actual_col_widths[i] - dt -> len_ellipsis] = '\0';
               strcat(col_str, dt -> ELLIPSIS);
               sprintf(ldata[i], "%*s", len, " ");
            }
            else if ('t' == dt -> col_overflow_action[i]) {
               col_str = strndup(ldata[i], dt -> actual_col_widths[i]);
               sprintf(ldata[i], "%*s", len, " ");
            }
            else if ('w' == dt -> col_overflow_action[i]) {
               col_str = strndup(ldata[i], dt -> actual_col_widths[i]);
               // shift the column names by as many printed characters;
               // the new column names will be printed at the next cycle of the inner loop 
               unsigned j;
               for (j = dt -> actual_col_widths[i]; j < len; j++)
                  ldata[i][j - dt -> actual_col_widths[i]] = ldata[i][j];
               ldata[i][len - dt -> actual_col_widths[i]] = '\0';
               bWrapOccured = 1;
            }
         }
         else {
            col_str = strdup(ldata[i]);
            // no wrap-around necessary here but prepare the str for the next cycle just in case;
            sprintf(ldata[i], "%*s", len, " ");
         }
         printf("%-*s%s", dt -> actual_col_widths[i], col_str, i  COL_SEPARATOR : "");
         free(col_str);
      }
      printf("\n");
   } while (bWrapOccured);
   for (unsigned i = 0; i < nb_columns; i++)
      free(ldata[i]);
   free(ldata);
}

/* select_callback_sized */
// displays the columns within predetermined sizes, wrap-around if overflow;
static int select_callback_sized(void *vdt, int nb_columns, char **column_values, char **column_names) {
   DISPLAYED_TABLE *dt = (DISPLAYED_TABLE *) vdt;

   if (dt -> bEpilog) {
      printf("%ld rows selected\n", dt -> NR);
      cleanup(dt);
      return 0;
   }

   if (!dt -> bHeaderPrinted) {
      // header has not been printed yet, print it and afterwards print the first row;
      dt -> actual_col_widths = (unsigned *) malloc(sizeof(unsigned) * nb_columns);
      if (dt -> nb_columns < nb_columns) {
         unsigned last_width = dt -> max_col_widths[dt -> nb_columns - 1];
         fprintf(stderr, "warning: missing column sizes, extending the last provided one %d\n", last_width);
         dt -> max_col_widths = (unsigned *) realloc(dt -> max_col_widths, sizeof(unsigned) * nb_columns);
         dt -> col_overflow_action = (char *) realloc(dt -> col_overflow_action, sizeof(char) * nb_columns);
         char last_overflow_action = dt -> col_overflow_action[dt -> nb_columns - 1];
         for (unsigned i = dt -> nb_columns; i < nb_columns; i++) {
            dt -> max_col_widths[i] = last_width;
            dt -> col_overflow_action[i] = last_overflow_action;
         }
         dt -> nb_columns = nb_columns;
      }
      else if (dt -> nb_columns > nb_columns) {
         fprintf(stderr, "warning: too many columns widths given, %d vs actual %d, ignoring the %d in excess\n", dt -> nb_columns, nb_columns, dt -> nb_columns - nb_columns);
         dt -> nb_columns = nb_columns;
      }
      printConstrained(dt, column_names, nb_columns);

      char *header_line = NULL;
      for (unsigned i = 0; i < nb_columns; i++) {
         header_line = (char *) realloc(header_line, sizeof(char) * dt -> actual_col_widths[i]);
         fillStr(header_line, '-', dt -> actual_col_widths[i]);
         printf("%s%s", header_line, i < nb_columns - 1 ? dt -> COL_SEPARATOR : "");
      }
      printf("\n");
      free(header_line);
      dt -> bHeaderPrinted = 1;
   }
   printConstrained(dt, column_values, nb_columns);
   dt -> NR++;
   return 0;
}

/* select_callback_array */
/*
returns the database rows into the gawk associative array passed as a parameter;
its structure is as follows:
array[0] = sub-array_0
array[1] = sub-array_1
...
array[count-1] = sub-array_count-1
where the sub-arrays are associative arrays too with structure:
sub-array0[col1] = value1
sub-array0[col2] = value2
...
sub-array0[coln] = valuen
sub-array1[col1] = value1
...
sub-array1[coln] = valuen
...
sub-arraym[col1] = value1
...
sub-arraym[coln] = valuen
Said otherwise, the returned array is an array of associative arrays whose first dimension contains the rows and whose second dimension contains the columns,
i.e. it's a table of database rows;
in Perl lingo, it's an array of hashes;
in Python, it would be a list of dictionaries;
*/
static int select_callback_array(void *vdt, int nb_columns, char **column_values, char **column_names) {
   DISPLAYED_TABLE *dt = (DISPLAYED_TABLE *) vdt;

   if (dt -> bEpilog) {
      printf("%ld rows selected\n", dt -> NR);
      cleanup(dt);
      return 0;
   }

   awk_array_t row;
   awk_value_t value;
   awk_value_t row_index;
   awk_value_t col_index, col_value;

   if (!dt -> bHeaderPrinted) {
      // create the main array once;
      // doesn't work; keep the code in case a fix is found;
      //db_table = create_array();
      //value.val_type = AWK_ARRAY;
      //value.array_cookie = db_table; 
   
      // add it to gawk's symbol table so it appear magically in gawk's script namespace;
      //if (!sym_update(dt -> array_name, &value)) 
      //   fatal(ext_id, "in select_callback_array, creation of table array %s failed\n", dt -> array_name);
      //db_table = value.array_cookie;

      // nothing special to do here;
      dt -> bHeaderPrinted = 1;
   }
   char index_str[50];
   unsigned len = sprintf(index_str, "%ld", dt -> NR);
   make_const_string(index_str, len, &row_index);

   // create the sub-array for each row;
   // indexes are the column names and values are the column values;
   row = create_array();
   value.val_type = AWK_ARRAY;
   value.array_cookie = row;
   if (! set_array_element(dt -> gawk_array, &row_index, &value))
      fatal(ext_id, "in select_callback_array, creation of row array %ld failed\n", dt -> NR);
   row  = value.array_cookie;

   for (unsigned i = 0; i  < nb_columns; i++) {
      make_const_string(column_names[i], strlen(column_names[i]), &col_index);
      make_const_string(column_values[i], strlen(column_values[i]), &col_value);
      if (! set_array_element(row, &col_index, &col_value))
         fatal(ext_id, "in select_callback_array, assigned value %s to index %s at row %ld failed\n", column_values[i], column_names[i], dt -> NR);
   }

   dt -> NR++;
   return 0;
}

/* do_sqlite_select */
/*
generic select entry point;
possible invocations:
Case: call profile:                                          --> action;
   0: sqlite_select(db, sql_stmt)                            --> draft output, default fixed width columns, with needed column widths list at the end;
   1: sqlite_select(db, sql_stmt, "")                        --> raw output, no truncation, | as default separator;
   2: sqlite_select(db, sql_stmt, "separator-string")        --> raw output, no truncation, use given string as separator;
   2: sqlite_select(db, sql_stmt, "list-of-columns-widths")  --> fixed sized column output, a w|t|e suffix is allowed for wrapping-around or truncating too large columns without or with ellipsis;
   3: sqlite_select(db, sql_stmt, dummy, gawk_array)         --> raw output into the gawk associative array gawk_array;
the appropriate callback will be called based on the invocation's profile;
returns -1 if error, 0 otherwise;
*/
static awk_value_t *
do_sqlite_select(int nargs, awk_value_t *result, struct awk_ext_func *unused) {
   awk_value_t db_handle, sql_stmt, col_sizes;
   int ret = 0;

   assert(result != NULL);

   if (!get_argument(0, AWK_NUMBER, &db_handle)) {
      fprintf(stderr, "in do_sqlite_select, cannot get the db handle argument\n");
      ret = -1;
      goto quit;
   }
   if (!get_argument(1, AWK_STRING, &sql_stmt)) {
      fprintf(stderr, "do_sqlite_select, cannot get the sql_stmt argument\n");
      ret = -1;
      goto quit;
   }
   DISPLAYED_TABLE dt;
   dt.sqlStmt = strdup(sql_stmt.str_value.str);
   dt.bHeaderPrinted = 0;
   dt.COL_SEPARATOR = "  ";
   dt.ELLIPSIS = "..."; dt.len_ellipsis = strlen(dt.ELLIPSIS); 
   dt.MAX_WIDTH = 15;
   dt.MIN_WIDTH = dt.len_ellipsis + 5;
   dt.nb_columns = 0;
   dt.max_col_widths = NULL;
   dt.actual_col_widths = NULL;
   dt.col_overflow_action = NULL;
   dt.headers = NULL;
   dt.widest_column = 0;
   dt.size_list = NULL;
   dt.NR = 0;
   dt.bStoreOrDisplay = 1;
   dt.gawk_array = NULL;
   dt.bEpilog = 0;

   unsigned short bCase;
   unsigned short bFoundSeparator = 0;
   char *errorMessg = NULL;

   if (4 == nargs) {
      bCase = 3;
      awk_value_t value;
      if (!get_argument(3, AWK_ARRAY, &value))
         fatal(ext_id, "in do_sqlite_select, accessing the gawk array parameter failed\n");
      dt.gawk_array = value.array_cookie;
      clear_array(dt.gawk_array);
   }
   else if (get_argument(2, AWK_STRING, &col_sizes)) {
      if (0 == strlen(col_sizes.str_value.str))
         // raw, unformatted output;
         bCase = 1;
      else {
         // columns are output with constrained widths and possible wrapping-around or truncation with/without ellipsis;
         bCase = 2;
         char *width_str, *tmp_str, *next_tok_iter;
         long width_value;
         tmp_str = strdup(col_sizes.str_value.str);
         next_tok_iter = tmp_str;
         while ((width_str = strtok(next_tok_iter, " ,/"))) {
            errno = 0;
            char *overflow_action_suffix;
            width_value = strtol(width_str, &overflow_action_suffix, 10);
            if ((errno == ERANGE && (width_value == LONG_MAX || width_value == LONG_MIN)) ||
                (errno != 0 && width_value == 0) ||
                (width_value < 0)) {
               if (0 == dt.nb_columns) {
                  // let's take this as a separator for select_callback_raw();
                  dt.COL_SEPARATOR = width_str;
                  bFoundSeparator = 1;
                  bCase = 0;
               }
               else {
                  fprintf(stderr, "invalid number in size string [%s], exiting ...\n", width_str);
                  if (dt.nb_columns > 0) {
                     free(dt.max_col_widths);
                     free(dt.col_overflow_action);
                  }
                  free(tmp_str);
                  ret = -1;
                  goto quit;
               }
            }
            else if (bFoundSeparator) {
               // nothing else is accepted after a separator;
               fprintf(stderr, "separator [%s] must be the only parameter in raw output, exiting ...\n", dt.COL_SEPARATOR);
               free(tmp_str);
               ret = -1;
               goto quit;
            }
            dt.max_col_widths = (unsigned *) realloc(dt.max_col_widths, sizeof(unsigned) * (dt.nb_columns + 1));
            dt.col_overflow_action = (char *) realloc(dt.col_overflow_action, sizeof(char) * (dt.nb_columns + 1));
            dt.max_col_widths[dt.nb_columns] = width_value;
            if (NULL == overflow_action_suffix || ! *overflow_action_suffix)
               dt.col_overflow_action[dt.nb_columns] = 'e';
            else if ('t' == *overflow_action_suffix || 'w' == *overflow_action_suffix || 'e' == *overflow_action_suffix)
               dt.col_overflow_action[dt.nb_columns] = *overflow_action_suffix;
            else if (0 == dt.nb_columns) {
               bCase = 0;
               dt.COL_SEPARATOR = strdup(width_str);
               bFoundSeparator = 1;
               dt.nb_columns++;
               break;      
            }
            else {
               // allowed overflow suffix is one of t, w or e;
               fprintf(stderr, "invalid overflow action suffix [%c]; it must be one of w (wrap-around), t (truncation without ellipsis) or e (truncation with ellipsis), exiting ...\n", *overflow_action_suffix);
               free(tmp_str);
               ret = -1;
               goto quit;
            }
            if ('e' == dt.col_overflow_action[dt.nb_columns] && width_value < dt.len_ellipsis) {
               fprintf(stderr, "column [%d] has maximum width [%ld] and requests a truncation with ellipsis [%s] but a minimum width of [%d] characters is necessary for this, assuming that minimum width\n", dt.nb_columns, width_value, dt.ELLIPSIS, dt.len_ellipsis);
               dt.max_col_widths[dt.nb_columns] = dt.len_ellipsis;
            }
            dt.nb_columns++;
            next_tok_iter = NULL;
         }
         free(tmp_str);
      }
   }
   else
      // draft output, i.e. default column width, possible truncation, optimal column widths listed at the end;
      bCase = 0;

   switch (bCase) {
      case 0: ret = sqlite3_exec(sqlite_handles[(int) db_handle.num_value], sql_stmt.str_value.str, select_callback_draft, &dt, &errorMessg);
              break;
      case 1: if (!bFoundSeparator)
                 // use default separator
                 dt.COL_SEPARATOR = "|";
              ret = sqlite3_exec(sqlite_handles[(int) db_handle.num_value], sql_stmt.str_value.str, select_callback_raw, &dt, &errorMessg);
              break;
      case 2: ret = sqlite3_exec(sqlite_handles[(int) db_handle.num_value], sql_stmt.str_value.str, select_callback_sized, &dt, &errorMessg);
              break;
      case 3: ret = sqlite3_exec(sqlite_handles[(int) db_handle.num_value], sql_stmt.str_value.str, select_callback_array, &dt, &errorMessg);
              break;
      default: fprintf(stderr, "programming error: did you not forget a case ?\n");
   }
   if (SQLITE_OK == ret) {
      dt.bEpilog = 1;
      0 == bCase ? select_callback_draft(&dt, 0, NULL, NULL) :
      1 == bCase ? select_callback_raw(&dt, 0, NULL, NULL) :
      2 == bCase ? select_callback_sized(&dt, 0, NULL, NULL) :
      3 == bCase ? select_callback_array(&dt, 0, NULL, NULL) :
      0;
   }
   else {
      fprintf(stderr, "do_sqlite_select, SQL error %s while executing [%s]\n", errorMessg, sql_stmt.str_value.str);
      sqlite3_free(errorMessg);
   }
quit:
   return make_number(ret, result);
}

/* these are the exported functions along with their max and min arities; */
   static awk_ext_func_t func_table[] = {
        {"sqlite_open", do_sqlite_open, 1, 1, awk_false, NULL},
        {"sqlite_close", do_sqlite_close, 1, 1, awk_false, NULL},
        {"sqlite_exec", do_sqlite_exec, 6, 2, awk_false, NULL},
        {"sqlite_select", do_sqlite_select, 4, 2, awk_false, NULL},
};

static awk_bool_t (*init_func)(void) = init_sqlite_handles;

/* define the dl_load function using the boilerplate macro */

dl_load_func(func_table, sqlite_gawk, "")

Quite the extension! Sorry for the lengthy listing, but there is a lot of stuff going on here.
Next, let’s build gawk and the new extension. Here are the incantations:

pwd
/home/dmadmin/dmgawk/gawk-4.2.1/extension
./configure
make
cd .libs; gcc -o sqlite_gawk.so -shared sqlite_gawk.o ../sqlite3.o -pthread

That’s it. As said elsewhere, an additional sudo make install will install the new gawk and its extension to their canonical locations, i.e. /usr/local/bin/gawk for gawk and /usr/local/lib/gawk for the extensions. But for the moment, let’s test it; for this, we still need a test gawk script.
vi tsqlite.awk

# test program for the sqlite_gawk, interface to sqlite3;
# Cesare Cervini
# dbi-services.com
# 8/2018

@load "sqlite_gawk"

BEGIN {
   my_db = sqlite_open("/home/dmadmin/sqlite-amalgamation-3240000/test.db")
   print "db opened:", my_db

   my_db2 = sqlite_open("/home/dmadmin/sqlite-amalgamation-3240000/test.db")
   print "db opened:", my_db2

   sqlite_close(my_db)
   sqlite_close(my_db2)

   my_db = sqlite_open("/home/dmadmin/sqlite-amalgamation-3240000/test.db")
   print "db opened:", my_db

   printf "\n"

   rc = sqlite_exec(my_db, "CREATE TABLE IF NOT EXISTS test1(n1 NUMBER, s1 TEXT, s2 CHAR(100))")
   print "return code = ", rc

   rc = sqlite_exec(my_db, "INSERT INTO test1(n1, s1, s2) VALUES(100, \"hello1\", \"hello0101\")")
   print "return code = ", rc
   rc = sqlite_exec(my_db, "INSERT INTO test1(n1, s1, s2) VALUES(200, \"hello2\", \"hello0102\")")
   print "return code = ", rc
   rc = sqlite_exec(my_db, "INSERT INTO test1(n1, s1, s2) VALUES(300, \"hello3\", \"hello0103\")")
   print "return code = ", rc
   rc = sqlite_exec(my_db, "INSERT INTO test1(n1, s1, s2) VALUES(400, \"hello4\", \"hello0104\")")
   print "return code = ", rc
   rc = sqlite_exec(my_db, "INSERT INTO test1(n1, s1, s2) VALUES(400, \"hello5 with spaces       \", \"hello0105 with spaces              \")")
   print "return code = ", rc
   rc = sqlite_exec(my_db, "INSERT INTO test1(n1, s1, s2) VALUES(400, \"hello6 with spaces        \", \"hello0106   \")")
   print "return code = ", rc
   printf "\n"

   stmt = "SELECT * FROM test1";
   split("", a_test)
   print "sqlite_select(my_db, " stmt ", 0, a_test)"
   rc = sqlite_select(my_db, stmt, 0, a_test)
   dumparray("a_test", a_test);
   for (row in a_test) {
      printf("row %d: ", row)
      for (col in a_test[row])
         printf("  %s = %s", col, a_test[row][col])
      printf "\n"
   }
   printf "\n"

   # print in draft format;
   stmt = "SELECT name FROM sqlite_master WHERE type IN ('table','view') AND name NOT LIKE 'sqlite_%' ORDER BY 1"
   print "sqlite_select(my_db, \"" stmt "\")"
   rc = sqlite_select(my_db, stmt)
   print "return code = ", rc
   printf "\n"

   # print in draft format;
   stmt = "SELECT sql FROM sqlite_master ORDER BY tbl_name, type DESC, name"
   print "sqlite_select(my_db, \"" stmt "\", \"100\")"
   rc = sqlite_select(my_db, stmt , "100")
   print "return code = ", rc
   printf "\n"

   # print in draft format;
   stmt = "SELECT * FROM test1"
   print "sqlite_select(my_db, " stmt ")"
   rc = sqlite_select(my_db, stmt)
   print "return code = ", rc
   printf "\n"

   # print in raw format with non default separator;
   stmt = "SELECT * FROM test1"
   print "sqlite_select(my_db, " stmt ", \"||\")"
   rc = sqlite_select(my_db, stmt, "||")
   print "return code = ", rc
   printf "\n"

   # now that we know the needed column widths, let's use them;
   # trailing spaces are removed to compact the column somewhat;
   stmt = "SELECT * FROM test1" 
   print "sqlite_select(my_db, " stmt ", \"3 18 21\")"
   rc = sqlite_select(my_db, stmt, "3 18 21")
   print "return code = ", rc
   printf "\n"

   # print in raw format, with default | separator;
   stmt = "SELECT * FROM test1"
   print "sqlite_select(my_db, " stmt ", \"\")"
   rc = sqlite_select(my_db, stmt, "")
   print "return code = ", rc
   printf "\n"

   stmt = "INSERT INTO test1(n1, s1, s2) VALUES(400, \"hello6-with-spaces        \", \"hello0106-12345\")" 
   print "sqlite_exec(my_db, " stmt ")"
   rc = sqlite_exec(my_db, stmt)
   print "return code = ", rc
   printf "\n"

   stmt = "SELECT * FROM test1"
   print "sqlite_select(my_db, " stmt ", \"2e 15e 10w\")"
   rc = sqlite_select(my_db, stmt, "2e 15e 10w")
   print "return code = ", rc
   printf "\n"

   stmt = "SELECT count(*) FROM test1"
   print "sqlite_select(my_db," stmt ")"
   rc = sqlite_select(my_db, stmt)
   print "return code = ", rc
   printf "\n"

   stmt = "DELETE FROM test1"
   print "sqlite_exec(my_db, " stmt ")"
   rc = sqlite_exec(my_db, stmt)
   print "return code = ", rc
   printf "\n"

   stmt = "SELECT count(*) FROM test1"
   print "sqlite_select(my_db," stmt ")"
   rc = sqlite_select(my_db, stmt)
   print "return code = ", rc
   printf "\n"

   rc = sqlite_exec(my_db, "CREATE TABLE IF NOT EXISTS test_with_blob(n1 NUMBER, my_blob BLOB)")
   print "return code = ", rc

   rc = sqlite_exec(my_db, "DELETE FROM test_with_blob")
   print "return code = ", rc

   stmt = "INSERT INTO test_with_blob(n1, my_blob) VALUES(1, readfile(\"gawk-4.2.1.tar.gz\"))" 
   print "sqlite_exec(my_db," stmt ")"
   #rc = sqlite_exec(my_db, stmt)
   print "return code = ", rc
   printf "\n"

   
   stmt = "SELECT n1, writefile('yy' || rowid, my_blob) FROM test_with_blob" 
   print "sqlite_select(my_db, " stmt ")"
   rc = sqlite_exec(my_db, stmt)
   print "return code = ", rc
   printf "\n"

   # file too large, > 3 Gb, fails silently;
   # so don't do it;
   # stmt = "INSERT INTO test_with_blob(n1, my_blob) VALUES(1000, readfile(\"/home/dmadmin/setup_files/documentum.tar\"))" 

   # this one is OK at 68 Mb;
   stmt = "INSERT INTO test_with_blob(n1, my_blob) VALUES(1000, readfile(\"/home/dmadmin/setup_files/instantclient-basic-linux.x64-12.2.0.1.0.zip\"))" 
   print "sqlite_exec(my_db," stmt ")"
   rc = sqlite_exec(my_db, stmt)
   print "return code = ", rc
   printf "\n"

   stmt = "SELECT n1, writefile('\"yy' || rowid || '\"', my_blob) FROM test_with_blob where n1 = 1000" 
   print "sqlite_select(my_db, " stmt ")"
   #rc = sqlite_exec(my_db, stmt)
   print "return code = ", rc
   printf "\n"

   stmt = "INSERT INTO test_with_blob(n1, my_blob) VALUES(5000, readfile('/home/dmadmin/dmgawk/gawk-4.2.1/extension/sqlite_gawk.c'))" 
   print "sqlite_exec(my_db," stmt ")"
   rc = sqlite_exec(my_db, stmt)
   print "return code = ", rc
   printf "\n"

   stmt = "UPDATE test_with_blob set my_blob = readfile('/home/dmadmin/dmgawk/gawk-4.2.1/extension/sqlite_gawk.c') where n1 = 1000" 
   print "sqlite_exec(my_db," stmt ")"
   rc = sqlite_exec(my_db, stmt)
   print "return code = ", rc
   printf "\n"

   # xx is a 999'000'000 bytes file; the import using a memory buffer with that size takes some time to complete;
   # the incremental blob I/Os below seem faster;
   # to make one, use: dd if=/dev/zero of=xx count=990 bs=1000000
   stmt = "UPDATE test_with_blob set my_blob = readfile('/home/dmadmin/dmgawk/xx') where n1 = 1000" 
   print "sqlite_exec(my_db," stmt ")"
   rc = sqlite_exec(my_db, stmt)
   print "return code = ", rc
   printf "\n"

   # this is needed to enforce typing of a_test to array;
   # split("", a_test)
   delete(a_test)
   print "sqlite_select(db, \"select rowid from test_with_blob where n1 = 1000 limit 1\", 0, a_test)"
   sqlite_select(db, "select rowid from test_with_blob where n1 = 1000 limit 1", 0, a_test)
   print "after getting blob"
   dumparray("a_test", a_test)
   print "sqlite_exec(my_db, 'main', 'test_with_blob', 'my_blob', " a_test[0]["rowid"] ", writefile(~/dmgawk/my_blob_" a_test[0]["rowid"] "))"
   rc = sqlite_exec(my_db, "main", "test_with_blob", "my_blob", a_test[0]["rowid"], "writefile(/home/dmadmin/dmgawk/my_blob_" a_test[0]["rowid"] ")")
   print "return code = ", rc
   printf "\n"

   #print "sqlite_exec(my_db, 'main', 'test_with_blob', 'my_blob', " a_test[0]["rowid"] ", readfile(/home/dmadmin/setup_files/documentum.tar))"
   #rc = sqlite_exec(my_db, "main", "test_with_blob", "my_blob", a_test[0]["rowid"], "readfile(/home/dmadmin/setup_files/documentum.tar)")
   #rc = sqlite_exec(my_db, "main", "test_with_blob", "my_blob", a_test[0]["rowid"], "readfile(/home/dmadmin/setup_files/patch.bin)")
   rc = sqlite_exec(my_db, "main", "test_with_blob", "my_blob", a_test[0]["rowid"], "readfile(/home/dmadmin/dmgawk/xx)")
   print "return code = ", rc
   printf "\n"

   stmt = "SELECT n1, hex(my_blob) FROM test_with_blob where n1 = 2000 limit 1" 
   stmt = "SELECT n1, my_blob FROM test_with_blob where n1 = 2000 limit 1" 
   stmt = "SELECT n1, substr(my_blob, 1) FROM test_with_blob where n1 = 2000 limit 1" 
   rc = sqlite_select(my_db, stmt)
   rc = sqlite_select(my_db, stmt, "10 100w")
   print "return code = ", rc
   printf "\n"

   stmt = "SELECT n1, replace(my_blob, '\n', '\\n') as 'noLF' FROM test_with_blob where n1 = 5000 limit 2" 
   print "sqlite_select(my_db," stmt ", 10, 100w)"
   rc = sqlite_select(my_db, stmt, "10, 100w")
   print "return code = ", rc
   printf "\n"

   sqlite_close(my_db)

   exit(0)
}

function dumparray(name, array, i) {
   for (i in array)
      if (isarray(array[i]))
         dumparray(name "[\"" i "\"]", array[i])
      else
         printf("%s[\"%s\"] = %s\n", name, i, array[i])
      }

To execute the test:

AWKLIBPATH=gawk-4.2.1/extension/.libs gawk-4.2.1/gawk -f tsqlite.awk
db opened: 0
db opened: 0
db opened: 0
 
return code = 0
return code = 0
return code = 0
return code = 0
return code = 0
return code = 0
 
sqlite_select(my_db, SELECT * FROM test1, 0, a_test)
6 rows selected
a_test["0"]["n1"] = 100
a_test["0"]["s1"] = hello1
a_test["0"]["s2"] = hello0101
a_test["1"]["n1"] = 200
a_test["1"]["s1"] = hello2
a_test["1"]["s2"] = hello0102
a_test["2"]["n1"] = 300
a_test["2"]["s1"] = hello3
a_test["2"]["s2"] = hello0103
a_test["3"]["n1"] = 400
a_test["3"]["s1"] = hello4
a_test["3"]["s2"] = hello0104
a_test["4"]["n1"] = 400
a_test["4"]["s1"] = hello5 with spaces
a_test["4"]["s2"] = hello0105 with spaces
a_test["5"]["n1"] = 400
a_test["5"]["s1"] = hello6 with spaces
a_test["5"]["s2"] = hello0106
row 0: n1 = 100 s1 = hello1 s2 = hello0101
row 1: n1 = 200 s1 = hello2 s2 = hello0102
row 2: n1 = 300 s1 = hello3 s2 = hello0103
row 3: n1 = 400 s1 = hello4 s2 = hello0104
row 4: n1 = 400 s1 = hello5 with spaces s2 = hello0105 with spaces
row 5: n1 = 400 s1 = hello6 with spaces s2 = hello0106
 
sqlite_select(my_db, "SELECT name FROM sqlite_master WHERE type IN ('table','view') AND name NOT LIKE 'sqlite_%' ORDER BY 1")
name
--------
test
test1
test_...
3 rows selected
1 columns displayed
 
Optimum column widths
=====================
for query: SELECT name FROM sqlite_master WHERE type IN ('table','view') AND name NOT LIKE 'sqlite_%' ORDER BY 1
name 14
return code = 0
 
sqlite_select(my_db, "SELECT sql FROM sqlite_master ORDER BY tbl_name, type DESC, name", "100")
sql
----------------------------------------------------------------------------------------------------
CREATE TABLE test(a1 number)
CREATE TABLE test1(n1 NUMBER, s1 TEXT, s2 CHAR(100))
CREATE TABLE test_with_blob(n1 NUMBER, my_blob BLOB)
3 rows selected
return code = 0
 
sqlite_select(my_db, SELECT * FROM test1)
n1 s1 s2
-------- -------- --------
100 hello1 hello...
200 hello2 hello...
300 hello3 hello...
400 hello4 hello...
400 hello... hello...
400 hello... hello...
6 rows selected
3 columns displayed
 
Optimum column widths
=====================
for query: SELECT * FROM test1
n1 3
s1 18
s2 21
return code = 0
 
sqlite_select(my_db, SELECT * FROM test1, "||")
n1 ||s1 ||s2
--------||--------||--------
100 ||hello1 ||hello...
200 ||hello2 ||hello...
300 ||hello3 ||hello...
400 ||hello4 ||hello...
400 ||hello...||hello...
400 ||hello...||hello...
6 rows selected
3 columns displayed
 
Optimum column widths
=====================
for query: SELECT * FROM test1
n1 3
s1 18
s2 21
return code = 0
 
sqlite_select(my_db, SELECT * FROM test1, "3 18 21")
n1 s1 s2
--- ------------------ ---------------------
100 hello1 hello0101
200 hello2 hello0102
300 hello3 hello0103
400 hello4 hello0104
400 hello5 with spaces hello0105 with spaces
400 hello6 with spaces hello0106
6 rows selected
return code = 0
 
sqlite_select(my_db, SELECT * FROM test1, "")
n1|s1|s2
100|hello1|hello0101
200|hello2|hello0102
300|hello3|hello0103
400|hello4|hello0104
400|hello5 with spaces |hello0105 with spaces
400|hello6 with spaces |hello0106
6 rows selected
return code = 0
 
sqlite_exec(my_db, INSERT INTO test1(n1, s1, s2) VALUES(400, "hello6-with-spaces ", "hello0106-12345"))
return code = 0
 
sqlite_select(my_db, SELECT * FROM test1, "2e 15e 10w")
column [0] has maximum width [2] and requests a truncation with ellipsis [...] but a minimum width of [3] characters is necessary for this, assuming that minimum width
n1 s1 s2
--- --------------- ----------
100 hello1 hello0101
200 hello2 hello0102
300 hello3 hello0103
400 hello4 hello0104
400 hello5 with ... hello0105
with space
s
400 hello6 with ... hello0106
400 hello6-with-... hello0106-
12345
7 rows selected
return code = 0
 
sqlite_select(my_db,SELECT count(*) FROM test1)
count(*)
--------
7
1 rows selected
1 columns displayed
 
Optimum column widths
=====================
for query: SELECT count(*) FROM test1
count(*) 8
return code = 0
 
sqlite_exec(my_db, DELETE FROM test1)
return code = 0
 
sqlite_select(my_db,SELECT count(*) FROM test1)
count(*)
--------
0
1 rows selected
1 columns displayed
 
Optimum column widths
=====================
for query: SELECT count(*) FROM test1
count(*) 8
return code = 0
 
return code = 0
return code = 0
sqlite_exec(my_db,INSERT INTO test_with_blob(n1, my_blob) VALUES(1, readfile("gawk-4.2.1.tar.gz")))
return code = 0
 
sqlite_select(my_db, SELECT n1, writefile('yy' || rowid, my_blob) FROM test_with_blob)
return code = 0
 
sqlite_exec(my_db,INSERT INTO test_with_blob(n1, my_blob) VALUES(1000, readfile("/home/dmadmin/setup_files/instantclient-basic-linux.x64-12.2.0.1.0.zip")))
return code = 0
 
sqlite_select(my_db, SELECT n1, writefile('"yy' || rowid || '"', my_blob) FROM test_with_blob where n1 = 1000)
return code = 0
 
sqlite_exec(my_db,INSERT INTO test_with_blob(n1, my_blob) VALUES(5000, readfile('/home/dmadmin/dmgawk/gawk-4.2.1/extension/sqlite_gawk.c')))
return code = 0
 
sqlite_exec(my_db,UPDATE test_with_blob set my_blob = readfile('/home/dmadmin/dmgawk/gawk-4.2.1/extension/sqlite_gawk.c') where n1 = 1000)
return code = 0
 
sqlite_exec(my_db,UPDATE test_with_blob set my_blob = readfile('/home/dmadmin/dmgawk/xx') where n1 = 1000)
return code = 0
 
sqlite_select(db, "select rowid from test_with_blob where n1 = 1000 limit 1", 0, a_test)
1 rows selected
after getting blob
a_test["0"]["rowid"] = 1
sqlite_exec(my_db, 'main', 'test_with_blob', 'my_blob', 1, writefile(~/dmgawk/my_blob_1))
return code = 0
 
sqlite_exec(my_db, 'main', 'test_with_blob', 'my_blob', 1, readfile(/home/dmadmin/setup_files/documentum.tar))
return code = 0
 
sqlite_select(my_db)
return code = 0
 
sqlite_select(my_db,SELECT n1, replace(my_blob, '\n', '\n') as 'noLF' FROM test_with_blob where n1 = 5000 limit 2, 10, 100w)
n1 noLF
---------- ----------------------------------------------------------------------------------------------------
5000 /*\n * sqlite-gawk.c - an interface to sqlite() library;\n * Cesare Cervini\n * dbi-services.com\n *
8/2018\n*/\n#ifdef HAVE_CONFIG_H\n#include \n#endif\n\n#include \n#include \n#include \n#include \n#include \n\n#include \n#inc
lude \n\n#include "gawkapi.h"\n\n// extension;\n#include \n#include \n#
include \n#include \n#include \n#include \n#inclu
de \n#include \n\n#include "gettext.h"\n#define _(msgid) gettext(msgid)\n#defi
ne N_(msgid) msgid\n\nstatic const gawk_api_t *api; /* for convenience macros to work */\nstatic a
wk_ext_id_t ext_id;\nstatic const char *ext_version = "an interface to sqlite3: version 1.0";\n\nint
plugin_is_GPL_compatible;\n\n/* internal structure and variables */\n/*\ninternally stores the db h
...
c)(void) = init_sqlite_handles;\n\n/* define the dl_load function using the boilerplate macro */\n\n
dl_load_func(func_table, sqlite_gawk, "")\n\n
1 rows selected
return code = 0

That was a very long second part. If you are still there, please turn now to Part III for some explanation of all this.

 

Cet article A SQLite extension for gawk (part II) est apparu en premier sur Blog dbi services.

PDB Snapshot Carousel Oracle 18.3


A new feature with Oracle 18c is the PDB snapshot carousel. As indicated by its name, a PDB snapshot is a copy of a PDB at a specific point in time. You have the possibility to create up to eight snapshots; when you reach the maximum number of snapshots, the oldest snapshot is overwritten. The snapshot carousel is simply the name for the set of all your PDB snapshots.

We have the possibility to create automatic snapshots using the “snapshot mode every” clause when creating or altering a PDB. For example, you can change the snapshot mode of a PDB to every 3 hours:

SQL> alter session set container=pdb;

Session altered.

SQL> select snapshot_mode,snapshot_interval/60 from dba_pdbs;

SNAPSH SNAPSHOT_INTERVAL/60
------ --------------------
MANUAL

SQL> alter pluggable database snapshot mode every 3 hours;

Pluggable database altered.

SQL> select snapshot_mode,snapshot_interval/60 from dba_pdbs;

SNAPSH SNAPSHOT_INTERVAL/60
------ --------------------
AUTO			  3

To return to manual mode, just type:

SQL> alter pluggable database snapshot mode manual;

Pluggable database altered.

We can create PDB snapshots manually, you can use a specific name or not:

SQL> alter pluggable database snapshot pdb_snap;

Pluggable database altered.

SQL> alter pluggable database snapshot;

Pluggable database altered.

We can query the dba_pdb_snapshots view to display the PDB snapshots location:

SQL> SELECT CON_ID, CON_NAME, SNAPSHOT_NAME, 
SNAPSHOT_SCN AS snap_scn, FULL_SNAPSHOT_PATH 
FROM   DBA_PDB_SNAPSHOTS ORDER BY SNAP_SCN;

CON_ID CON_NAME SNAPSHOT_NAME SNAP_SCN

FULL_SNAPSHOT_PATH

3        PDB	  PDB_SNAP    1155557
/home/oracle/oradata/DB18/pdb/snap_2263384607_1155557.pdb

3        PDB	  SNAP_2263384607_987432172  1155823
/home/oracle/oradata/DB18/pdb/snap_2263384607_1155823.pdb

If you want to drop a snapshot, you have two methods:

You delete the snapshot with the following alter pluggable statement:

SQL> alter pluggable database drop snapshot SNAP_2263384607_987432172;

Pluggable database altered.

Otherwise you set the MAX_PDB_SNAPSHOTS property to zero in the PDB:

You can query the CDB_PROPERTIES and CDB_PDBS to display the parameter value:

SELECT r.CON_ID, p.PDB_NAME, PROPERTY_NAME,
  	PROPERTY_VALUE AS value, DESCRIPTION
  	FROM   CDB_PROPERTIES r, CDB_PDBS p
  	WHERE  r.CON_ID = p.CON_ID
  	AND    PROPERTY_NAME LIKE 'MAX_PDB%'
  	AND    description like 'maximum%'
  	ORDER BY PROPERTY_NAME

CON_ID	PDB_NAME	PROPERTY_NAME	VALUE	           DESCRIPTION
  3		  PDB     MAX_PDB_SNAPSHOTS    8    maximum number of snapshots for a given PDB

And if you set it to zero, all your PDB snapshots will be dropped:

SQL> alter session set container=pdb;

Session altered.

SQL> alter pluggable database set max_pdb_snapshots = 0;

Pluggable database altered.

SQL> SELECT CON_ID, CON_NAME, SNAPSHOT_NAME, 
SNAPSHOT_SCN AS snap_scn, FULL_SNAPSHOT_PATH 
FROM   DBA_PDB_SNAPSHOTS
ORDER BY SNAP_SCN;

no rows selected

But the main interest of the snapshot PDBS is to create new PDBS from a productive environment based on a point in time of the production PDB.

So we create a PDB snapshot named PDB_SNAP:

SQL> alter pluggable database snapshot pdb_snap;

Pluggable database altered.

And now we create a PDB from the PDB_SNAP snapshot:

SQL> create pluggable database PDB2 from PDB using snapshot PDB_SNAP create_file_dest='/home/oracle/oradata/DB18/pdb2';

Pluggable database created.

SQL> show pdbs

    CON_ID CON_NAME			  OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
	 2 PDB$SEED			  READ ONLY  NO
	 3 PDB		                  READ WRITE NO
	 4 APPPSI			  READ WRITE NO
	 5 PDB2 			  READ WRITE NO

We have also the possibility to change the snapshot mode:

SQL> alter session set container=pdb;

Session altered.

SQL> SELECT SNAPSHOT_MODE "S_MODE", SNAPSHOT_INTERVAL/60 "SNAP_INT_HRS" 
     FROM DBA_PDBS;

S_MODE SNAP_INT_HRS
------ ------------
MANUAL


SQL> ALTER PLUGGABLE DATABASE SNAPSHOT MODE EVERY 1 HOURS;

Pluggable database altered.

SQL> SELECT SNAPSHOT_MODE "S_MODE", SNAPSHOT_INTERVAL/60 "SNAP_INT_HRS" 
     FROM DBA_PDBS;

S_MODE SNAP_INT_HRS
------ ------------
AUTO		  1

We also have the possibility to create a PDB that takes snapshots every 15 minutes:

SQL> create pluggable database pdb_new from pdb
  2  file_name_convert=('pdb','pdb_new')
  3  snapshot mode every 15 minutes;

Pluggable database created.

There is a prerequisite for configuring automatic PDB snapshots: the CDB must be in local undo mode.
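
A quick way to check this, and to switch the CDB to local undo mode if needed, is sketched below. This is the generic Oracle procedure rather than something taken from my test environment, and the switch requires a restart in upgrade mode:

SQL> SELECT property_value FROM database_properties WHERE property_name = 'LOCAL_UNDO_ENABLED';

-- if it returns FALSE, from the CDB root:
SQL> shutdown immediate
SQL> startup upgrade
SQL> alter database local undo on;
SQL> shutdown immediate
SQL> startup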

Finally the snapshots are correctly created in my environment every 15 minutes:

oracle@localhost:/home/oracle/oradata/DB183/pdb/ [DB183] ls -lrt snap*
-rw-r--r--. 1 oracle dba 65690276 Oct  1 15:04 snap_3893567541_798493.pdb
-rw-r--r--. 1 oracle dba 65740202 Oct  1 15:19 snap_3893567541_801189.pdb
-rw-r--r--. 1 oracle dba 65823279 Oct  1 15:34 snap_3893567541_803706.pdb

And to verify that this is correct, I created a location table in the psi schema of my pdb_new environment with two records at 15:20:

SQL> create table psi.location (name varchar2(10));

Table created.

SQL> insert into psi.location values ('London');

1 row created.

SQL> insert into psi.location values('Paris');

1 row created.

SQL> commit;

And we create a new PDB from the snapshot to verify that the data is correct:

SQL> create pluggable database pdb_psi from pdb_new 
     using snapshot SNAP_45745043_988386045 
     create_file_dest='/home/oracle/oradata/DB183/pdb_psi';

Pluggable database created.

We open pdb_psi and we check:

SQL> alter session set container=pdb_psi;

Session altered.

SQL> select * from psi.location;

NAME
----------
London
Paris

This feature might be very useful for testing purposes: imagine you have a production PDB; you only have to create a refreshable clone named PDB_MASTER and configure it to create daily snapshots. If you need a PDB for testing, you only have to create a clone from any snapshot, as sketched below.
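
A possible sketch of this workflow is shown below; the database link prod_link, the names and the intervals are assumptions on my side, so adapt them to your environment:

SQL> create pluggable database pdb_master from pdb@prod_link
     refresh mode every 60 minutes
     snapshot mode every 24 hours
     create_file_dest='/home/oracle/oradata/DB183/pdb_master';

-- when a test PDB is needed, clone it from one of the daily snapshots:
SQL> create pluggable database pdb_test from pdb_master using snapshot <snapshot_name>
     create_file_dest='/home/oracle/oradata/DB183/pdb_test';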

Conclusion

All those tests have been performed on a Linux x86-64 server with Oracle 18.3 Enterprise Edition. My DB183 database has been initialized with the "_exadata_feature_on" hidden parameter to avoid the “ORA-12754 Feature PDB Snapshot Carousel is disabled due to missing capability” error message.

If you have a look at the Database Licensing User Manual:

https://docs.oracle.com/en/database/oracle/oracle-database/18/dblic/Licensing-Information.html#GUID-B6113390-9586-46D7-9008-DCC9EDA45AB4

Feature / Option / Pack   SE2   EE   EE-ES   DBCS SE   DBCS EE   DBCS EE-HP   DBCS EE-EP   ExaCS   Notes
PDB Snapshot Carousel     N     N    Y       N         Y         Y            Y            Y

 

You will see that the PDB Snapshot Carousel (and a lot of interesting new features in Oracle 18.3) is only available on Engineered Systems or in the Cloud, and not for Enterprise Edition on third-party hardware. I really hope Oracle will change this in future releases.

 

Cet article PDB Snapshot Carousel Oracle 18.3 est apparu en premier sur Blog dbi services.

Foglight: Monitoring solution for databases [Part 01]


What is Foglight?

Foglight is a solution from Quest which promises to provide visibility into issues affecting the application and end user experience.

The solution also helps you to quickly find the root cause in the application, database, infrastructure, or network and to resolve issues by providing “intuitive workflows”.

Let’s give it a try!

Preparing the installation

Requirements for installing Foglight are:

  • A machine to host the Management Server. Ideally dedicated
  • Administrator or root access to all machines requiring a Foglight agent
  • An administrator password for Foglight
  • A user account on the machine where you are installing Foglight
  • The IATEMPDIR environment variable is set to a location with sufficient space for installer self-extraction

Architecture

Foglight requires 2 components:

  • a Management Server: the data collection and processing server
  • a database repository: can be a PostgreSQL embedded in the installation process (Standard installation) or a supported external database: MySQL, Oracle, PostgreSQL or Microsoft SQL server (Custom Installation)
    • If you chose the embedded database, it is automatically stopped or started with the Management Server
    • You can start with the embedded database and then migrate to an external one. The procedure is available here

Important considerations

  • For this test I chose to download and install Foglight with the embedded PostgreSQL database
  • I will use the 45 days trial license which is by default activated at the installation. It is possible to install the license during the installation if you perform the custom install
  • I will perform a silent installation, given that the Foglight installer can be started in command-line mode using either console mode or silent mode

Installation

After unzipping the downloaded file, we can adjust the installation parameters according to our needs. This is done by editing the installation parameter file (a description of each parameter can be found here):


[foglight@mgt-server Installers]$ egrep -v "^#|^$" fms_silent_install.properties
INSTALLER_UI=SILENT
USER_INSTALL_DIR=/foglight/app
FMS_LICENSE_AGREEMENT=yes
FMS_SERVICE=false
FMS_UPGRADE=1
FMS_ADMIN_PASSWORD=foglight
FMS_HTTPS_ONLY=0
FMS_HA_MODE=0
FMS_DB_USER=foglight
FMS_DB_USER_PASSWORD=foglight
FMS_DB=embedded
FMS_DB_HOST=127.0.0.1
FMS_DB_PORT=15432
FMS_DB_SETUPNOW=1
FMS_RUN_NOW=false
FMS_CLUSTER_MCAST_PORT=45566
FMS_HTTP_PORT=8080
FMS_HTTPS_PORT=8443
FMS_FEDERATION_PORT=1099
FMS_QP5APP_PORT=8448
FMS_SERVICE_LINUX_ENABLED=0
FMS_SERVICE_LINUX_VALID_PLATFORM=false

Then we can run the installation in silent mode as shown below:


[foglight@mgt-server Installers]$ ./foglight-5.9.2-foglightfordatabaseslinux-x86_64.bin -i silent -f fms_silent_install.properties
Preparing to install...
Extracting the JRE from the installer archive...
Unpacking the JRE...
Extracting the installation resources from the installer archive...
Configuring the installer for this system's environment...

Launching installer...

Preparing SILENT Mode Installation...

===============================================================================
Foglight 5.9.2 (created with InstallAnywhere by Macrovision)
-------------------------------------------------------------------------------

===============================================================================
Installing...
-------------

[==================|==================|==================|==================] [------------------|------------------|------------------|------------------]

Installation Complete.

At this stage the installation should have succeeded. If that is not the case, have a look at the log files below, located in the user home directory:

[foglight@mgt-server ~]$ ll ~
total 8
-rw-rw-r-- 1 foglight foglight 573 Oct 1 14:47 Foglight_5.9.3_Install_2018-10-01_144723_001.log
-rw-rw-r-- 1 foglight foglight 4026 Oct 1 14:47 Foglight_5.9.3_InstallLog.log

Start, Stop and login

Start

Now we can start our installation:

[foglight@mgt-server app]$ fms -d
2018-10-01 15:00:08.000 INFO [native] Attempting to start Foglight as a daemon.
The startup may take some time to complete. Please check the log file for more
information. Use the '--stop' command line option to shut down a running
daemon.
2018-10-01 15:00:08.000 INFO [native] Daemon process for 'Foglight' started.

And then let's check the running processes. There is one process for the Management Server and several for the PostgreSQL database, since we are using the embedded installation:

[foglight@mgt-server app]$ ps -ef | grep foglight
foglight 23601 1 74 23:01 pts/0 00:02:22 Foglight 5.9.2: Foglight Daemon
foglight 23669 1 0 23:01 pts/0 00:00:00 /foglight/app/postgresql/bin/postgres -D /foglight/app/state/postgresql-data --port=15432
foglight 23670 23669 0 23:01 ? 00:00:00 postgres: logger process
foglight 23672 23669 0 23:01 ? 00:00:00 postgres: checkpointer process
foglight 23673 23669 0 23:01 ? 00:00:00 postgres: writer process
foglight 23674 23669 0 23:01 ? 00:00:00 postgres: wal writer process
foglight 23675 23669 0 23:01 ? 00:00:00 postgres: autovacuum launcher process
foglight 23676 23669 0 23:01 ? 00:00:00 postgres: stats collector process
foglight 23687 23669 0 23:02 ? 00:00:00 postgres: foglight foglight 127.0.0.1(48463) idle
foglight 23688 23669 0 23:02 ? 00:00:00 postgres: foglight foglight 127.0.0.1(48464) idle
foglight 23689 23669 0 23:02 ? 00:00:00 postgres: foglight foglight 127.0.0.1(48465) idle
foglight 23690 23669 0 23:02 ? 00:00:00 postgres: foglight foglight 127.0.0.1(48466) idle
foglight 23691 23669 0 23:02 ? 00:00:00 postgres: foglight foglight 127.0.0.1(48467) idle
foglight 23692 23669 0 23:02 ? 00:00:00 postgres: foglight foglight 127.0.0.1(48468) idle
foglight 23693 23669 0 23:02 ? 00:00:00 postgres: foglight foglight 127.0.0.1(48469) idle
foglight 23694 23669 0 23:02 ? 00:00:00 postgres: foglight foglight 127.0.0.1(48470) idle
foglight 23695 23669 1 23:02 ? 00:00:03 postgres: foglight foglight 127.0.0.1(48471) idle
foglight 23853 23669 1 23:04 ? 00:00:00 postgres: foglight foglight 127.0.0.1(48474) idle
foglight 23868 23601 47 23:04 pts/0 00:00:11 FoglightAgentManager 5.9.2: FglAM /foglight/app/fglam/state/default on server
foglight 23876 23868 0 23:04 ? 00:00:00 Quest Application Watchdog 5.9.5: Monitoring PID 23868
foglight 23943 1 0 23:04 pts/0 00:00:00 Quest Application Relauncher 5.9.5: /foglight/app/fglam/bin/fglam

Another option is to start the Management Server using the initialization scripts. This option is particularly useful when you want the Management Server to start automatically after the server has been rebooted:

[root@mgt-server ~]# cp /foglight/app/scripts/init.d/Linux/foglight /etc/init.d/
[root@mgt-server ~]# ll /etc/init.d/foglight
-rwxr-xr-x 1 root root 2084 Oct 1 15:13 /etc/init.d/foglight
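
To have the script actually executed at boot time, it still has to be registered with the init system. On CentOS 7 this would typically look like the commands below; this step is not taken from the Foglight documentation, so double-check the recommended procedure:

[root@mgt-server ~]# chkconfig --add foglight
[root@mgt-server ~]# chkconfig foglight on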

Login

Given the parameters provided in the installation files, I can reach the web console here: https://192.168.56.100:8443
My user/password is the default foglight/foglight and here I am:

(Screenshot: Foglight web console after login)

Stop

[foglight@mgt-server ~]$ export PATH=$PATH:/foglight/app/bin

[foglight@mgt-server ~]$ fms --stop
2018-10-01 15:15:17.000 INFO [native] Sending stop request to 'Foglight'
process running in /foglight/app/state (pid 12570).
2018-10-01 15:15:17.000 INFO [native] Shutdown request transmitted.

After a few seconds you can observe that the PostgreSQL and Management Server processes are down:

[foglight@mgt-server ~]$ ps -ef | grep foglight
root 13182 2656 0 15:15 pts/0 00:00:00 su - foglight
foglight 13183 13182 0 15:15 pts/0 00:00:00 -bash
foglight 13251 13183 0 15:15 pts/0 00:00:00 ps -ef
foglight 13252 13183 0 15:15 pts/0 00:00:00 grep --color=auto foglight

I hope this helps and please do not hesitate to contact us for more details.

 

Cet article Foglight: Monitoring solution for databases [Part 01] est apparu en premier sur Blog dbi services.

First steps into SQL Server 2019 availability groups on K8s


A couple of weeks ago, Microsoft announced the first public CTP of the next SQL Server version (CTP 2.0). It is not a surprise: the SQL Server vNext becomes SQL Server 2019, and there are plenty of enhancements as well as new features to discover. But for now, let's start with likely one of my favorites: availability groups on Kubernetes (aka K8s). As far as I can see from customers and hear from my colleagues, we are witnessing a strong adoption of K8s, with OpenShift as a main driver. I would not be surprised to see some SQL Server pods at customer shops in the near future, especially with the support of availability groups on K8s. In my opinion, that is definitely something that was missing previously, for microservices architectures or not, in both QA and production environments.

(Diagram: SQL Server availability groups on Kubernetes)

Well, I decided to learn more about this new feature, but keep in mind that this write-up concerns the CTP 2.0 version and chances are things will change in the future. So, don't focus strictly on the words or commands I'm using in this blog post.

It has been some time since I last used the Azure Kubernetes Service (AKS), and I already wrote about it in a previous blog post. I used the same environment to deploy my first availability group on K8s. It was definitely an interesting experience because it involved acquiring technical skills about the K8s infrastructure.

So, let's briefly set the context with my K8s cluster on Azure, which is composed of 3 agent nodes as shown below:

$ kubectl get nodes -o wide
NAME                       STATUS    ROLES     AGE       VERSION   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
aks-nodepool1-78763348-0   Ready     agent     126d      v1.9.6    <none>        Ubuntu 16.04.4 LTS   4.13.0-1016-azure   docker://1.13.1
aks-nodepool1-78763348-1   Ready     agent     126d      v1.9.6    <none>        Ubuntu 16.04.4 LTS   4.13.0-1016-azure   docker://1.13.1
aks-nodepool1-78763348-2   Ready     agent     35d       v1.9.6    <none>        Ubuntu 16.04.5 LTS   4.15.0-1023-azure   docker://1.13.1

 

I also used a custom namespace – agdev – to scope my availability group resource names.

$ kubectl get ns
NAME           STATUS        AGE
ag1            Terminating   23h
agdev          Active        10h
azure-system   Active        124d
default        Active        124d
kube-public    Active        124d
kube-system    Active        124d

 

Referring to the Microsoft documentation, the SQL secrets (including master key and SA password secrets) are ready for use:

$ kubectl get secret sql-secrets -n agdev
NAME                   TYPE                                  DATA      AGE
sql-secrets            Opaque                                2         1d

$ kubectl describe secret sql-secrets -n agdev
Name:         sql-secrets
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
masterkeypassword:  14 bytes
sapassword:         14 bytes

 

  • The operator

The first component to deploy is the operator, which is a very important component in this infrastructure and which builds upon the basic Kubernetes resource and controller concepts. Kubernetes has a very pluggable way to add your own logic in the form of a controller, in addition to the existing built-in controllers such as the old-fashioned replication controller, the replica sets and the deployments. All of them are suitable for stateless applications, but the story is not the same when we have to deal with stateful systems like databases, because those systems require specific application-domain knowledge to correctly scale, upgrade and reconfigure while protecting against data loss or unavailability. For example, how do we deal correctly with availability groups during a crash of a pod? If we think about it, the work doesn't consist only in restarting the crashed pod; the system also has to execute custom tasks in the background, including electing a new primary (aka leader election), ensuring a safe transition during the failover period to avoid split-brain scenarios, etc.
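
Deploying the operator itself boils down to applying the operator manifest provided in the Microsoft documentation into the chosen namespace, something like the command below (the manifest file name is an assumption on my side):

$ kubectl apply -f mssql-operator.yaml --namespace agdev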

Deploying the mssql-operator includes the creation of a new pod:

$ kubectl get pods -n agdev -l app=mssql-operator
NAME                              READY     STATUS    RESTARTS   AGE
mssql-operator-67447c4bd8-s6tbv   1/1       Running   0          11h

 

Let’s go further by getting more details about this pod:

$ kubectl describe pod -n agdev mssql-operator-67447c4bd8-s6tbv
Name:           mssql-operator-67447c4bd8-s6tbv
Namespace:      agdev
Node:           aks-nodepool1-78763348-0/10.240.0.4
Start Time:     Mon, 01 Oct 2018 08:12:47 +0200
Labels:         app=mssql-operator
                pod-template-hash=2300370684
Annotations:    <none>
Status:         Running
IP:             10.244.1.56
Controlled By:  ReplicaSet/mssql-operator-67447c4bd8
Containers:
  mssql-operator:
    Container ID:  docker://148ba4b8ccd91159fecc3087dd4c0b7eb7feb36be4b3b5124314121531cd3a3c
    Image:         mcr.microsoft.com/mssql/ha:vNext-CTP2.0-ubuntu
    Image ID:      docker-pullable://mcr.microsoft.com/mssql/ha@sha256:c5d20c8b34ea096a845de0222441304a14ad31a447d79904bafaf29f898704d0
    Port:          <none>
    Host Port:     <none>
    Command:
      /mssql-server-k8s-operator
    State:          Running
      Started:      Mon, 01 Oct 2018 08:13:32 +0200
    Ready:          True
    Restart Count:  0
    Environment:
      MSSQL_K8S_NAMESPACE:  agdev (v1:metadata.namespace)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from mssql-operator-token-bd5gc (ro)
…
Volumes:
  mssql-operator-token-bd5gc:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  mssql-operator-token-bd5gc
    Optional:    false

 

Some interesting items to note here:

  • The SQL Server CTP image – mcr.microsoft.com/mssql/ha – comes from the new Microsoft Container Registry (MCR). The current tag is vNext-CTP2.0-ubuntu at the moment of this write-up
  • A secret volume is mounted to pass sensitive data concerning the K8s service account used by the pod. In fact, the deployment of availability groups implies the creation of multiple service accounts
$ kubectl describe secret -n agdev mssql-operator-token-bd5gc
Name:         mssql-operator-token-bd5gc
Namespace:    agdev
Labels:       <none>
Annotations:  kubernetes.io/service-account.name=mssql-operator
              kubernetes.io/service-account.uid=03cb111e-c541-11e8-a34a-0a09b8f01b34

Type:  kubernetes.io/service-account-token

Data
====
namespace:  5 bytes
token:      xxxx
ca.crt:     1720 bytes

 

The command is /mssql-server-k8s-operator, a binary like the other mssql-server* files packaged in the new SQL Server image, which are designed to respond to different events with appropriate actions such as updating K8s resources:

$ kubectl exec -ti -n agdev mssql-operator-67447c4bd8-s6tbv -- /bin/bash
root@mssql-operator-67447c4bd8-s6tbv:/# ll mssql*
-rwxrwxr-x 1 root root 32277998 Sep 19 16:00 mssql-server-k8s-ag-agent*
-rwxrwxr-x 1 root root 31848041 Sep 19 16:00 mssql-server-k8s-ag-agent-supervisor*
-rwxrwxr-x 1 root root 31336739 Sep 19 16:00 mssql-server-k8s-failover*
-rwxrwxr-x 1 root root 32203064 Sep 19 16:00 mssql-server-k8s-health-agent*
-rwxrwxr-x 1 root root 31683946 Sep 19 16:00 mssql-server-k8s-init-sql*
-rwxrwxr-x 1 root root 31422517 Sep 19 16:00 mssql-server-k8s-operator*
-rwxrwxr-x 1 root root 31645032 Sep 19 16:00 mssql-server-k8s-rotate-creds*

root@mssql-operator-67447c4bd8-s6tbv:/# file mssql-server-k8s-operator
mssql-server-k8s-operator: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, not stripped

 

  • The SQL Server instances and AGs

The next step consisted in running the SQL Server AG deployment. Looking at the manifest file, we may notice that we deploy custom SQL Server objects (kind: SqlServer) from the new mssql.microsoft.com API installed previously, as well as their corresponding services to expose the SQL Server pods to external traffic.
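
For illustration only, here is a heavily trimmed, hypothetical sketch of such a SqlServer object. The apiVersion value and the overall spec layout are assumptions on my side; only the kind, the namespace and the instanceRootVolume block are taken from the manifest excerpts discussed in this post:

apiVersion: mssql.microsoft.com/v1
kind: SqlServer
metadata:
  name: mssql1
  namespace: agdev
spec:
  # storage override discussed below; the claim name matches one of my existing PVCs
  instanceRootVolume:
    persistentVolumeClaim:
      claimName: mssql-data-1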

The deployment includes 3 StatefulSets that manage pods with 2 containers, respectively the SQL Server engine and its agent (HA supervisor). I was surprised not to see a deployment with kind: StatefulSet, but I got the confirmation that the “logic” is encapsulated in the SqlServer object definition. Why StatefulSets here? Well, because they are more suitable for applications like databases by providing, inter alia, stable and unique network identifiers as well as stable and persistent storage. Stateless pods do not provide such capabilities. To meet the StatefulSet prerequisites, we first need to define persistent volumes for each SQL Server pod. Recent versions of K8s allow the use of dynamic provisioning, and this is exactly what is used in the initial Microsoft deployment file with the instanceRootVolumeClaimTemplate:

instanceRootVolumeClaimTemplate:
   accessModes: [ReadWriteOnce]
   resources:
     requests: {storage: 5Gi}
   storageClass: default

 

However, in my context I already created persistent volumes for previous tests as shown below:

$ kubectl get pv -n agdev
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                STORAGECLASS   REASON    AGE
pvc-cb299d79-c5b4-11e8-a34a-0a09b8f01b34   10Gi       RWO            Delete           Bound     agdev/mssql-data-1   azure-disk               9h
pvc-cb4915b4-c5b4-11e8-a34a-0a09b8f01b34   10Gi       RWO            Delete           Bound     agdev/mssql-data-2   azure-disk               9h
pvc-cb67cd06-c5b4-11e8-a34a-0a09b8f01b34   10Gi       RWO            Delete           Bound     agdev/mssql-data-3   azure-disk               9h

$ kubectl get pvc -n agdev
NAME           STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mssql-data-1   Bound     pvc-cb299d79-c5b4-11e8-a34a-0a09b8f01b34   10Gi       RWO            azure-disk     9h
mssql-data-2   Bound     pvc-cb4915b4-c5b4-11e8-a34a-0a09b8f01b34   10Gi       RWO            azure-disk     9h
mssql-data-3   Bound     pvc-cb67cd06-c5b4-11e8-a34a-0a09b8f01b34   10Gi       RWO            azure-disk     9h
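
For reference, such claims can be created beforehand with a standard PersistentVolumeClaim manifest. Here is a minimal sketch, assuming dynamic provisioning through the azure-disk storage class shown above (one claim per replica):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mssql-data-1
  namespace: agdev
spec:
  accessModes:
    - ReadWriteOnce          # matches the RWO access mode listed above
  storageClassName: azure-disk
  resources:
    requests:
      storage: 10Gi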

 

So, I slightly changed the initial manifest file for each SqlServer object to use my existing persistent volume claims:

instanceRootVolume:
    persistentVolumeClaim:
      claimName: mssql-data-1

instanceRootVolume:
    persistentVolumeClaim:
      claimName: mssql-data-2

instanceRootVolume:
    persistentVolumeClaim:
      claimName: mssql-data-3

 

Furthermore, the next prerequisite for a StatefulSet is the use of a headless service, and this is exactly what we find with the creation of the ag1 service during the deployment:

$ kubectl get svc -n agdev
NAME          TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)             AGE
ag1           ClusterIP      None           <none>          1433/TCP,5022/TCP   1d
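
As a reminder, a headless service is simply a service with clusterIP set to None, so that DNS resolves directly to the pod IPs. A minimal sketch of what the ag1 service above could look like is shown below; the selector labels are an assumption on my side since they are actually set by the operator:

apiVersion: v1
kind: Service
metadata:
  name: ag1
  namespace: agdev
spec:
  clusterIP: None        # headless: no virtual IP
  selector:
    app: mssql           # assumed label
  ports:
    - name: sql
      port: 1433
    - name: hadr
      port: 5022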

 

I also noticed some other interesting items like extra pods in completed state:

$ kubectl get pods -n agdev -l app!=mssql-operator
NAME                            READY     STATUS      RESTARTS   AGE
mssql-initialize-mssql1-plh8l   0/1       Completed   0          9h
mssql-initialize-mssql2-l6z8m   0/1       Completed   0          9h
mssql-initialize-mssql3-wrbkl   0/1       Completed   0          9h
mssql1-0                        2/2       Running     0          9h
mssql2-0                        2/2       Running     0          9h
mssql3-0                        2/2       Running     0          9h

$ kubectl get sts -n agdev
NAME      DESIRED   CURRENT   AGE
mssql1    1         1         9h
mssql2    1         1         9h
mssql3    1         1         9h

 

In fact, those pods are related to jobs created and executed in the background during the deployment of the SQL Server AG:

$ kubectl get jobs -n agdev
NAME                      DESIRED   SUCCESSFUL   AGE
mssql-initialize-mssql1   1         1            22h
mssql-initialize-mssql2   1         1            22h
mssql-initialize-mssql3   1         1            22h

 

Let’s take a look at the mssql-initialize-mssql1 job:

$ kubectl describe job -n agdev mssql-initialize-mssql1
Name:           mssql-initialize-mssql1
Namespace:      agdev
Selector:       controller-uid=cd481f3c-c5b5-11e8-a34a-0a09b8f01b34
Labels:         controller-uid=cd481f3c-c5b5-11e8-a34a-0a09b8f01b34
                job-name=mssql-initialize-mssql1
Annotations:    <none>
Parallelism:    1
Completions:    1
Start Time:     Mon, 01 Oct 2018 22:08:45 +0200
Pods Statuses:  0 Running / 1 Succeeded / 0 Failed
Pod Template:
  Labels:           controller-uid=cd481f3c-c5b5-11e8-a34a-0a09b8f01b34
                    job-name=mssql-initialize-mssql1
  Service Account:  mssql-initialize-mssql1
  Containers:
   mssql-initialize:
    Image:      mcr.microsoft.com/mssql/ha:vNext-CTP2.0-ubuntu
    Port:       <none>
    Host Port:  <none>
    Command:
      /mssql-server-k8s-init-sql
    Environment:
      MSSQL_K8S_NAMESPACE:              (v1:metadata.namespace)
      MSSQL_K8S_SA_PASSWORD:           <set to the key 'sapassword' in secret 'sql-secrets'>  Optional: false
      MSSQL_K8S_NUM_SQL_SERVERS:       1
      MSSQL_K8S_SQL_POD_OWNER_UID:     cd13319a-c5b5-11e8-a34a-0a09b8f01b34
      MSSQL_K8S_SQL_SERVER_NAME:       mssql1
      MSSQL_K8S_SQL_POST_INIT_SCRIPT:
      MSSQL_K8S_MASTER_KEY_PASSWORD:   <set to the key 'masterkeypassword' in secret 'sql-secrets'>  Optional: false
    Mounts:                            <none>
  Volumes:                             <none>
Events:                                <none>

 

These jobs are one-time initialization code that is executed when SQL Server and the AG are bootstrapped (thanks to @MihaelaBlendea for giving more details on this topic), through the mssql-server-k8s-init-sql command. This is likely something you may remove according to your context (if you deal with a lot of K8s jobs on a daily basis, for example).
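
If you are curious about what exactly these jobs did, their output can be retrieved from the corresponding pod logs, for instance:

$ kubectl logs -n agdev job/mssql-initialize-mssql1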

Then, the deployment led to the creation of 3 StatefulSets with their respective pods mssql1-0, mssql2-0 and mssql3-0. Each pod contains 2 containers, as shown below for the mssql1-0 pod:

$ kubectl describe pod -n agdev mssql1-0
Name:           mssql1-0
Namespace:      agdev
Node:           aks-nodepool1-78763348-1/10.240.0.5
…
Status:         Running
IP:             10.244.0.38
Controlled By:  StatefulSet/mssql1
Containers:
  mssql-server:
    Container ID:   docker://8e23cec873ea3d1ebd98f8f4f0ab0b11b840c54c17557d23817b9c21a863bb42
    Image:          mcr.microsoft.com/mssql/server:vNext-CTP2.0-ubuntu
    Image ID:       docker-pullable://mcr.microsoft.com/mssql/server@sha256:87e691e2e5f738fd64a427ebe935e4e5ccd631be1b4f66be1953c7450418c8c8
    Ports:          1433/TCP, 5022/TCP
    Host Ports:     0/TCP, 0/TCP
    State:          Running
      Started:      Mon, 01 Oct 2018 22:11:44 +0200
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://:8080/healthz delay=60s timeout=1s period=2s #success=1 #failure=3
    Environment:
      ACCEPT_EULA:        y
      MSSQL_PID:          Developer
      MSSQL_SA_PASSWORD:  <set to the key 'initsapassword' in secret 'mssql1-statefulset-secret'>  Optional: false
      MSSQL_ENABLE_HADR:  1
    Mounts:
      /var/opt/mssql from instance-root (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from no-api-access (ro)
  mssql-ha-supervisor:
    Container ID:  docker://f5a0d4d51a459752a2c509eb3ec7874d94586a7499201f559c9ad8281751e514
    Image:         mcr.microsoft.com/mssql/ha:vNext-CTP2.0-ubuntu
    Image ID:      docker-pullable://mcr.microsoft.com/mssql/ha@sha256:c5d20c8b34ea096a845de0222441304a14ad31a447d79904bafaf29f898704d0
    Port:          8080/TCP
    Host Port:     0/TCP
    Command:
      /mssql-server-k8s-ag-agent-supervisor
    State:          Running
      Started:      Mon, 01 Oct 2018 22:11:45 +0200
    Ready:          True
    Restart Count:  0
    Environment:
      MSSQL_K8S_NAMESPACE:                         agdev (v1:metadata.namespace)
      MSSQL_K8S_POD_NAME:                          mssql1-0 (v1:metadata.name)
      MSSQL_K8S_SQL_SERVER_NAME:                   mssql1
      MSSQL_K8S_POD_IP:                             (v1:status.podIP)
      MSSQL_K8S_NODE_NAME:                          (v1:spec.nodeName)
      MSSQL_K8S_MONITOR_POLICY:                    3
      MSSQL_K8S_HEALTH_CONNECTION_REBOOT_TIMEOUT:
      MSSQL_K8S_SKIP_AG_ANTI_AFFINITY:
      MSSQL_K8S_MONITOR_PERIOD_SECONDS:
      MSSQL_K8S_LEASE_DURATION_SECONDS:
      MSSQL_K8S_RENEW_DEADLINE_SECONDS:
      MSSQL_K8S_RETRY_PERIOD_SECONDS:
      MSSQL_K8S_ACQUIRE_PERIOD_SECONDS:
      MSSQL_K8S_SQL_WRITE_LEASE_PERIOD_SECONDS:
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from mssql1-token-5zlkq (ro)
….
Volumes:
  no-api-access:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
  instance-root:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  mssql-data-1
    ReadOnly:   false
  mssql1-token-5zlkq:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  mssql1-token-5zlkq
    Optional:    false
…

 

We recognize the mssql-server and mssql-ha-supervisor containers as stated in the Microsoft documentation. The mssql-server container is listening on port 1433 (SQL engine) and port 5022 (hadr endpoint). Note that the container includes an HTTP liveness probe (http-get http://:8080/healthz delay=60s timeout=1s period=2s #success=1 #failure=3) to determine its health. Moreover, the mssql-ha-supervisor container is self-explanatory and aims to monitor the SQL Server instance, if we refer to the environment variable names. I believe another blog post will be necessary to talk about it. Each SQL Server pod (meaning a SQL Server instance here that listens on port 1433) is exposed to external traffic by a dedicated service as shown below. External IPs are assigned to the K8s cluster load balancer services through the Azure Load Balancer (basic SKU).

$ kubectl get svc -n agdev
NAME                   TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)             AGE
ag1                    ClusterIP      None           <none>           1433/TCP,5022/TCP   23h
mssql1                 LoadBalancer   10.0.43.216    xx.xx.xx.xxx    1433:31674/TCP      23h
mssql2                 LoadBalancer   10.0.28.27     xx.xx.xx.xxx    1433:32681/TCP      23h
mssql3                 LoadBalancer   10.0.137.244   xx.xx.xxx.xxx    1433:31152/TCP      23h

 

  • The AG Services

Finally, I only deployed the service corresponding to ag1-primary, which connects to the primary replica. It is up to you to deploy the other ones according to your context. In fact, the ag1-primary service acts as the AG listener in this new infrastructure.

$ kubectl get svc -n agdev
NAME          TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)             AGE
ag1           ClusterIP      None           <none>          1433/TCP,5022/TCP   23h
ag1-primary   LoadBalancer   10.0.32.104    xxx.xx.xx.xxx       1433:31960/TCP      1m
mssql1        LoadBalancer   10.0.43.216    xx.xx.xx.xxx   1433:31674/TCP      23h
mssql2        LoadBalancer   10.0.28.27     xx.xx.xx.xxx   1433:32681/TCP      23h
mssql3        LoadBalancer   10.0.137.244   xx.xx.xxx.xxx   1433:31152/TCP      23h
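
From a client machine, a quick connectivity check against the ag1-primary external IP can be done with sqlcmd. This is just a sketch, with the IP and the sa password to be replaced by your own values:

$ sqlcmd -S <ag1-primary external IP>,1433 -U sa -P <sapassword> -Q "SELECT @@SERVERNAME;"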

 

So, it's time to connect to my availability group from the external IP of the ag1-primary service. I already added a test database to the availability group, and here is a picture of the situation:

-- groups info
SELECT 
	g.name as ag_name,
	rgs.primary_replica, 
	rgs.primary_recovery_health_desc as recovery_health, 
	rgs.synchronization_health_desc as sync_health
FROM sys.dm_hadr_availability_group_states as rgs
JOIN sys.availability_groups AS g
				 ON rgs.group_id = g.group_id

-- replicas info
SELECT 
	g.name as ag_name,
	r.replica_server_name,
	r.availability_mode_desc as [availability_mode],
	r.failover_mode_desc as [failover_mode],
	rs.is_local,
	rs.role_desc as role,
	rs.operational_state_desc as op_state,
	rs.connected_state_desc as connect_state,
	rs.synchronization_health_desc as sync_state,
	rs.last_connect_error_number,
	rs.last_connect_error_description
FROM sys.dm_hadr_availability_replica_states AS rs
JOIN sys.availability_replicas AS r
	ON rs.replica_id = r.replica_id
JOIN sys.availability_groups AS g
	ON g.group_id = r.group_id
ORDER BY r.replica_server_name, rs.is_local;

-- DB level          
SELECT 
	g.name as ag_name,
	r.replica_server_name,
	DB_NAME(drs.database_id) as [database_name],
	drs.is_local,
	drs.is_primary_replica,
	synchronization_state_desc as sync_state,
	synchronization_health_desc as sync_health,
	database_state_desc as db_state
FROM sys.dm_hadr_database_replica_states AS drs
		 JOIN sys.availability_replicas AS r
		  ON r.replica_id = drs.replica_id
		 JOIN sys.availability_groups AS g
		  ON g.group_id = drs.group_id
ORDER BY g.name, drs.is_primary_replica DESC;
GO

 

(Screenshot: query results showing the availability group, replica and database states)

This is a common picture we may get with a traditional availability group. Another way to identify the primary replica is to go through the kubectl get pods command and to filter by label as follows:

$ kubectl get pods -n agdev -l="role.ag.mssql.microsoft.com/ag1"="primary"
NAME       READY     STATUS    RESTARTS   AGE
mssql1-0   2/2       Running   0          1d

 

To finish, let’s simulate the crash of the pod mssql1-0 and let’s see what happens:

$ kubectl delete pod -n agdev mssql1-0
pod "mssql1-0" deleted
kubectl get pods -n agdev
NAME                              READY     STATUS        RESTARTS   AGE
mssql-initialize-mssql1-plh8l     0/1       Completed     0          1d
mssql-initialize-mssql2-l6z8m     0/1       Completed     0          1d
mssql-initialize-mssql3-wrbkl     0/1       Completed     0          1d
mssql-operator-67447c4bd8-s6tbv   1/1       Running       0          1d
mssql1-0                          0/2       Terminating   0          1d
mssql2-0                          2/2       Running       0          1d
mssql3-0                          2/2       Running       0          1d

...

$ kubectl get pods -n agdev
NAME                              READY     STATUS              RESTARTS   AGE
mssql-initialize-mssql1-plh8l     0/1       Completed           0          1d
mssql-initialize-mssql2-l6z8m     0/1       Completed           0          1d
mssql-initialize-mssql3-wrbkl     0/1       Completed           0          1d
mssql-operator-67447c4bd8-s6tbv   1/1       Running             0          1d
mssql1-0                          0/2       ContainerCreating   0          9s
mssql2-0                          2/2       Running             0          1d
mssql3-0                          2/2       Running             0          1d

...

$ kubectl get pods -n agdev
NAME                              READY     STATUS      RESTARTS   AGE
mssql-initialize-mssql1-plh8l     0/1       Completed   0          1d
mssql-initialize-mssql2-l6z8m     0/1       Completed   0          1d
mssql-initialize-mssql3-wrbkl     0/1       Completed   0          1d
mssql-operator-67447c4bd8-s6tbv   1/1       Running     0          1d
mssql1-0                          2/2       Running     0          2m
mssql2-0                          2/2       Running     0          1d
mssql3-0                          2/2       Running     0          1d

 

As expected, the controller detects the event and accordingly recreates another mssql1-0 pod, but that's not all. First, because we are dealing with a StatefulSet, the pod keeps the same identity. The controller also performs other tasks, including failing the availability group over to another pod and switching the primary role to the mssql3-0 pod, as shown below. The label of this pod is updated to identify the new primary.

$ kubectl get pods -n agdev -l="role.ag.mssql.microsoft.com/ag1"="primary"
NAME       READY     STATUS    RESTARTS   AGE
mssql3-0   2/2       Running   0          1d

 

This blog post was just an overview of what a SQL Server availability group on K8s could be. Obviously, there are plenty of other interesting items to cover and to dive into … probably in the near future. Stay tuned!

 

Cet article First steps into SQL Server 2019 availability groups on K8s est apparu en premier sur Blog dbi services.


Walking through the Zürich ZOUG Event – September the 18th


What a nice and interesting new experience… my first ZOUG event… an interesting opportunity to meet some great people and to listen to some great sessions. I had the chance to attend Markus Michalewicz's sessions. Markus is Senior Director, Database HA and Scalability Product Management at Oracle, and was the special guest of this event.

https://soug.ch/events/soug-day-september-2018/

The introduction session was given by Markus. He presented the different HA solutions in order to talk about MAA. Oracle Maximum Availability Architecture (MAA) is, from my understanding, more of a service delivered by Oracle to help customers find their best solution at the lowest cost and complexity according to their constraints.

I was really looking forward to hearing the next session, from Robert Bialek of Trivadis, about Oracle database service high availability with Data Guard. Bialek gave a nice presentation of Data Guard, how it works, and provided some good tips on the way it should be configured.

The best session was certainly the next one, given by my colleague Clemens Bleile, Oracle Technology Leader at dbi. What a great sharing of experience from his past years as one of the managers of the Oracle Support Performance team EMEA. Clemens talked about SQLTXPLAIN, a performance troubleshooting tool, its history and its future. Clemens also presented the SQLT tool.

The last session I followed was chaired by Markus. The subject was the autonomous database and all the automatic features that came along with the latest Oracle releases. Will this make databases able to manage themselves? The future will tell us. :-)

Thanks to the dbi management for having given me the opportunity to join this ZOUG event!

 

Cet article Walking through the Zürich ZOUG Event – September the 18th est apparu en premier sur Blog dbi services.

Deploy a MySQL Server in Docker containers


We hear about Docker every day. Working on MySQL Server, I was curious to test this platform, which makes it possible to create containers that are independent of the OS in order to deploy virtualized applications.
So let’s try to deploy a MySQL Server with Docker!


Here is the architecture we will put in place:
(Diagram: MySQL on Docker)
So we will run a Docker container for MySQL within a VM.

I’m working on a CentOS 7 installed on a VirtualBox Machine:

[root@node4 ~]# cat /etc/*release*
CentOS Linux release 7.5.1804 (Core)
Derived from Red Hat Enterprise Linux 7.5 (Source)

I install Docker on my VM and enable the Docker service:

[root@node4 ~]# yum install docker
[root@node4 ~]# systemctl enable docker.service

I check the status of the Docker service, stop and start it, and verify that it is running:

[root@node4 ~]# systemctl status docker.service
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: inactive (dead)
     Docs: http://docs.docker.com
[root@node4 ~]# systemctl stop docker.service
[root@node4 ~]# systemctl status docker.service
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: inactive (dead)
     Docs: http://docs.docker.com
[root@node4 ~]# systemctl start docker.service
[root@node4 ~]# systemctl status docker.service
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2018-10-05 16:42:33 CEST; 2s ago
     Docs: http://docs.docker.com
 Main PID: 1514 (dockerd-current)
   CGroup: /system.slice/docker.service
           ├─1514 /usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --exec-opt nati...
           └─1518 /usr/bin/docker-containerd-current -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --metrics-interval=0 --start-timeout 2...
Oct 05 16:42:33 node4 dockerd-current[1514]: time="2018-10-05T16:42:33.561072533+02:00" level=warning msg="Docker could not enable SELinux on the...t system"
Oct 05 16:42:33 node4 dockerd-current[1514]: time="2018-10-05T16:42:33.597927636+02:00" level=info msg="Graph migration to content-addressability... seconds"
Oct 05 16:42:33 node4 dockerd-current[1514]: time="2018-10-05T16:42:33.598407196+02:00" level=info msg="Loading containers: start."
Oct 05 16:42:33 node4 dockerd-current[1514]: time="2018-10-05T16:42:33.642465451+02:00" level=info msg="Firewalld running: false"
Oct 05 16:42:33 node4 dockerd-current[1514]: time="2018-10-05T16:42:33.710685631+02:00" level=info msg="Default bridge (docker0) is assigned with... address"
Oct 05 16:42:33 node4 dockerd-current[1514]: time="2018-10-05T16:42:33.762876995+02:00" level=info msg="Loading containers: done."
Oct 05 16:42:33 node4 dockerd-current[1514]: time="2018-10-05T16:42:33.780275247+02:00" level=info msg="Daemon has completed initialization"
Oct 05 16:42:33 node4 dockerd-current[1514]: time="2018-10-05T16:42:33.780294728+02:00" level=info msg="Docker daemon" commit="8633870/1.13.1" gr...on=1.13.1
Oct 05 16:42:33 node4 systemd[1]: Started Docker Application Container Engine.
Oct 05 16:42:33 node4 dockerd-current[1514]: time="2018-10-05T16:42:33.799371435+02:00" level=info msg="API listen on /var/run/docker.sock"
Hint: Some lines were ellipsized, use -l to show in full.

I check my network:

[root@node4 ~]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s3:  mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:f3:9e:fa brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global noprefixroute dynamic enp0s3
       valid_lft 85959sec preferred_lft 85959sec
    inet6 fe80::a00:27ff:fef3:9efa/64 scope link
       valid_lft forever preferred_lft forever
3: enp0s8:  mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:45:62:a7 brd ff:ff:ff:ff:ff:ff
    inet 192.168.56.204/24 brd 192.168.56.255 scope global noprefixroute enp0s8
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe45:62a7/64 scope link
       valid_lft forever preferred_lft forever
4: docker0:  mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:b0:bf:02:d6 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
[root@node4 ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
b32241ce8931        bridge              bridge              local
9dd4a24a4e61        host                host                local
f1490ec17c17        none                null                local

So I have a network bridge named docker0 to which an IP address is assigned.
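If you want more detail about that default bridge network (subnet, gateway, attached containers), you can inspect it; the output is a rather long JSON document, so I do not reproduce it here:

[root@node4 ~]# docker network inspect bridge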

To obtain some information about the system, I can run the following command:

[root@node4 ~]# docker info
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 0
Server Version: 1.13.1
Storage Driver: overlay2
 Backing Filesystem: xfs
 Supports d_type: false
 Native Overlay Diff: true
Logging Driver: journald
Cgroup Driver: systemd
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: docker-runc runc
Default Runtime: docker-runc
Init Binary: /usr/libexec/docker/docker-init-current
containerd version:  (expected: aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1)
runc version: 5eda6f6fd0c2884c2c8e78a6e7119e8d0ecedb77 (expected: 9df8b306d01f59d3a8029be411de015b7304dd8f)
init version: fec3683b971d9c3ef73f284f176672c44b448662 (expected: 949e6facb77383876aeff8a6944dde66b3089574)
Security Options:
 seccomp
  WARNING: You're not using the default seccomp profile
  Profile: /etc/docker/seccomp.json
Kernel Version: 3.10.0-862.3.2.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
Number of Docker Hooks: 3
CPUs: 1
Total Memory: 867.7 MiB
Name: node4
ID: 6FFJ:Z33K:PYG3:2N4B:MZDO:7OUF:R6HW:ES3D:H7EK:MFLA:CAJ3:GF67
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
Registries: docker.io (secure)

For the moment I have no containers:

[root@node4 ~]# docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

Now I can search the Docker Hub for MySQL images and pull the first one in my example (I normally choose an official build with the highest number of stars):

[root@node4 ~]# docker search --filter "is-official=true" --filter "stars=3" mysql
INDEX       NAME                DESCRIPTION                                     STARS     OFFICIAL   AUTOMATED
docker.io   docker.io/mysql     MySQL is a widely used, open-source relati...   7075      [OK]
docker.io   docker.io/mariadb   MariaDB is a community-developed fork of M...   2267      [OK]
docker.io   docker.io/percona   Percona Server is a fork of the MySQL rela...   376       [OK]
[root@node4 ~]# docker pull docker.io/mysql
Using default tag: latest
Trying to pull repository docker.io/library/mysql ...
latest: Pulling from docker.io/library/mysql
802b00ed6f79: Pull complete
30f19a05b898: Pull complete
3e43303be5e9: Pull complete
94b281824ae2: Pull complete
51eb397095b1: Pull complete
54567da6fdf0: Pull complete
bc57ddb85cce: Pull complete
d6cd3c7302aa: Pull complete
d8263dad8dbb: Pull complete
780f2f86056d: Pull complete
8e0761cb58cd: Pull complete
7588cfc269e5: Pull complete
Digest: sha256:038f5f6ea8c8f63cfce1bce9c057ab3691cad867e18da8ad4ba6c90874d0537a
Status: Downloaded newer image for docker.io/mysql:latest

I create my container for MySQL named mysqld1:

[root@node4 ~]# docker run -d --name mysqld1 docker.io/mysql
b058fba64c7e585caddfc75f5d96076edb3e80b31773f135d9e44a3487724914

But when I list it, I see that I have a problem: it has exited with an error:

[root@node4 ~]# docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS                      PORTS               NAMES
b058fba64c7e        docker.io/mysql     "docker-entrypoint..."   55 seconds ago      Exited (1) 54 seconds ago                       mysqld1
[root@node4 ~]# docker logs mysqld1
error: database is uninitialized and password option is not specified
  You need to specify one of MYSQL_ROOT_PASSWORD, MYSQL_ALLOW_EMPTY_PASSWORD and MYSQL_RANDOM_ROOT_PASSWORD

That means I forgot to assign a password to the ‘root’ user account of the MySQL Server. So I stop and remove the container, and create it again with some additional options:

[root@node4 ~]# docker stop b058fba64c7e
b058fba64c7e
[root@node4 ~]# docker rm b058fba64c7e
b058fba64c7e
[root@node4 ~]# docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
[root@node4 ~]# docker run --name mysqld1 -p 3306:3306 -e MYSQL_ROOT_PASSWORD=manager -d docker.io/mysql
46a2020f58740d5a87288073ab6292447fe600f961428307d2e2727454655504

Now my container is up and running:

[root@node4 ~]#  docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                               NAMES
46a2020f5874        docker.io/mysql     "docker-entrypoint..."   5 seconds ago       Up 5 seconds        0.0.0.0:3306->3306/tcp, 33060/tcp   mysqld1

I can execute a bash shell in the container in interactive mode to open a session on it:

[root@node4 ~]# docker exec -it mysqld1 bash
root@46a2020f5874:/#

And try to connect to MySQL Server:

root@46a2020f5874:/# mysql -uroot -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 8
Server version: 8.0.12 MySQL Community Server - GPL
Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
4 rows in set (0.02 sec)

Great news, everything works well! In a few minutes I have a MySQL Server in its latest version up and running in a Docker container.
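Because the container port has been published on the host (0.0.0.0:3306), the server can also be reached from outside the container. As a quick sketch, reusing the client shipped in the same image and the host IP seen earlier (any MySQL 8 compatible client installed on the host would work just as well):

[root@node4 ~]# docker run -it --rm docker.io/mysql mysql -h192.168.56.204 -P3306 -uroot -p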

 

The article Deploy a MySQL Server in Docker containers appeared first on the dbi services Blog.

From Oracle to Postgres with the EDB Postgres Migration Portal


EnterpriseDB is a valuable actor in the PostgreSQL world. In addition to providing support, they also deliver very useful tools to manage your Postgres environments easily. Among these we can mention EDB Enterprise Manager, EDB Backup & Recovery Tool, EDB Failover Manager, and so on…
With this post I will present one of the latest additions to the family, the EDB Postgres Migration Portal, a helpful tool to migrate from Oracle to Postgres.

To access the Portal, use your EDB account or create one if you don’t have one yet. By the way, with your account you can also connect to PostgresRocks, a very interesting community platform. Go take a look :) .

Once connected, click on “Create project”.

Fill in the fields and click on “Create”. Currently it is only possible to migrate from Oracle 11 or 12 to EDB Postgres Advanced Server 10.

All your projects are displayed at the bottom of the page. Click on the “Assess” link to continue.

The migration steps consist of the following :

  1. Extracting the DDL metadata from Oracle database using the EDB’s DDL Extractor script
  2. Running assessment
  3. Correcting conflicts
  4. Downloading and running the new DDL statements adapted to your EDB Postgres database
  5. Migrating data

1. Extracting the DDL metadata from Oracle database

The DDL Extractor script is easy to use. You just need to specify the schema name for which to extract the DDLs and the path where the DDL file will be stored. As you can guess, the script runs the Oracle dbms_metadata.get_ddl package to extract the object definitions:
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> select object_type, count(*) from dba_objects where owner='HR' and status='VALID' group by object_type order by 1;

OBJECT_TYPE COUNT(*)
----------------------- ----------
INDEX 19
PROCEDURE 2
SEQUENCE 3
TABLE 7
TRIGGER 2

SQL>

SQL> @edb_ddl_extractor.sql
# -- EDB DDL Extractor Version 1.2 for Oracle Database -- #
# ------------------------------------------------------- #
Enter SCHEMA NAME to extract DDLs : HR
Enter PATH to store DDL file : /home/oracle/migration

Writing HR DDLs to /home/oracle/migration_gen_hr_ddls.sql
####################################################################################################################
## DDL EXTRACT FOR EDB POSTGRES MIGRATION PORTAL CREATED ON 03-10-2018 21:41:27 BY DDL EXTRACTION SCRIPT VERSION 1.2
##
## SOURCE DATABASE VERSION: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
####################################################################################################################
Extracting SYNONYMS...
Extracting DATABASE LINKS...
Extracting TYPE/TYPE BODY...
Extracting SEQUENCES...
Extracting TABLEs...
Extracting PARTITION Tables...
Extracting CACHE Tables...
Extracting CLUSTER Tables...
Extracting KEEP Tables...
Extracting INDEX ORGANIZED Tables...
Extracting COMPRESSED Tables...
Extracting NESTED Tables...
Extracting EXTERNAL Tables..
Extracting INDEXES...
Extracting CONSTRAINTS...
Extracting VIEWs..
Extracting MATERIALIZED VIEWs...
Extracting TRIGGERs..
Extracting FUNCTIONS...
Extracting PROCEDURE...
Extracting PACKAGE/PACKAGE BODY...

DDLs for Schema HR have been stored in /home/oracle/migration_gen_hr_ddls.sql
Upload this file to the EDB Migration Portal to assess this schema for EDB Advanced Server Compatibility.

Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
oracle@vmrefdba01:/home/oracle/migration/ [DB1]
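For reference, if you ever need to extract a single object manually, the extractor essentially relies on dbms_metadata calls like the one below (a minimal sketch for tables only; the script handles many more object types and formatting options):

SQL> set long 200000 longchunksize 200000 pagesize 0
SQL> select dbms_metadata.get_ddl('TABLE', table_name, 'HR') from dba_tables where owner = 'HR';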

2. Assessment

Go back to your browser. It’s time to check whether the Oracle schema can be imported to Postgres or not. Upload the output file and click on “Run assessment” to start the check. The result is presented as an assessment report in the Portal.

3. Correcting conflicts

We can notice an issue in the assessment report… the bfile type is not supported by EDB PPAS. You can click on the concerned table to get more details about the issue. Tip: when you need to manage bfile columns in Postgres, you can use the external_file extension.
Of course several other conversion issues can happen. A very good point with the Portal is that it provides a knowledge base to solve conflicts. You will find all the necessary information and workarounds by navigating to the “Repair handler” and “Knowledge base” tabs. Moreover, you can apply the corrections directly from the Portal.

4. Creating the objects in Postgres database

Once you have corrected the conflicts and the assessment report shows a 100% success ratio, click on the “Export DDL” button at the top right to download the new creation script adapted for EDB Postgres.
Then connect to your Postgres instance and run the script :
postgres=# \i Demo_HR.sql
CREATE SCHEMA
SET
CREATE SEQUENCE
CREATE SEQUENCE
CREATE SEQUENCE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
CREATE PROCEDURE
CREATE PROCEDURE
CREATE TRIGGER
CREATE TRIGGER
postgres=#

Quick check :
postgres=# select object_type, count(*) from dba_objects where schema_name='HR' and status='VALID' group by object_type order by 1;
object_type | count
-------------+-------
INDEX | 19
PROCEDURE | 2
SEQUENCE | 3
TABLE | 7
TRIGGER | 2
(5 rows)

Sounds good ! All objects have been created successfully.

5. Migrating data

The Migration Portal doesn’t provide an embedded solution to import the data, so for that you can use the EDB Migration Toolkit (MTK).
Let’s see how it works…
You will find MTK in the edbmtk directory of the {PPAS_HOME}. Inside etc, the toolkit.properties file stores the connection parameters for the source and target databases:
postgres@ppas01:/u01/app/postgres/product/10edb/edbmtk/etc/ [PG10edb] cat toolkit.properties
SRC_DB_URL=jdbc:oracle:thin:@192.168.22.101:1521:DB1
SRC_DB_USER=system
SRC_DB_PASSWORD=manager

TARGET_DB_URL=jdbc:edb://localhost:5444/postgres
TARGET_DB_USER=postgres
TARGET_DB_PASSWORD=admin123
postgres@ppas01:/u01/app/postgres/product/10edb/edbmtk/etc/ [PG10edb]

MTK uses JDBC to connect to the Oracle database. You need to download the Oracle JDBC driver (ojdbc7.jar) and store it in the following location:
postgres@ppas01:/home/postgres/ [PG10edb] ll /etc/alternatives/jre/lib/ext/
total 11424
-rw-r--r--. 1 root root 4003800 Oct 20 2017 cldrdata.jar
-rw-r--r--. 1 root root 9445 Oct 20 2017 dnsns.jar
-rw-r--r--. 1 root root 48733 Oct 20 2017 jaccess.jar
-rw-r--r--. 1 root root 1204766 Oct 20 2017 localedata.jar
-rw-r--r--. 1 root root 617 Oct 20 2017 meta-index
-rw-r--r--. 1 root root 2032243 Oct 20 2017 nashorn.jar
-rw-r--r--. 1 root root 3699265 Jun 17 2016 ojdbc7.jar
-rw-r--r--. 1 root root 30711 Oct 20 2017 sunec.jar
-rw-r--r--. 1 root root 293981 Oct 20 2017 sunjce_provider.jar
-rw-r--r--. 1 root root 267326 Oct 20 2017 sunpkcs11.jar
-rw-r--r--. 1 root root 77962 Oct 20 2017 zipfs.jar
postgres@ppas01:/home/postgres/ [PG10edb]
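If the driver is not in place yet, copying it there as root is enough. A minimal sketch, assuming the jar was downloaded to /tmp (adapt the path to your environment):

[root@ppas01 ~]# cp /tmp/ojdbc7.jar /etc/alternatives/jre/lib/ext/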

As HR’s objects already exist, let’s start the data migration with the -dataOnly option :
postgres@ppas01:/u01/app/postgres/product/10edb/edbmtk/bin/ [PG10edb] ./runMTK.sh -dataOnly -truncLoad -logBadSQL HR
Running EnterpriseDB Migration Toolkit (Build 51.0.1) ...
Source database connectivity info...
conn =jdbc:oracle:thin:@192.168.22.101:1521:DB1
user =system
password=******
Target database connectivity info...
conn =jdbc:edb://localhost:5444/postgres
user =postgres
password=******
Connecting with source Oracle database server...
Connected to Oracle, version 'Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options'
Connecting with target EDB Postgres database server...
Connected to EnterpriseDB, version '10.5.12'
Importing redwood schema HR...
Loading Table Data in 8 MB batches...
Disabling FK constraints & triggers on hr.countries before truncate...
Truncating table COUNTRIES before data load...
Disabling indexes on hr.countries before data load...
Loading Table: COUNTRIES ...
[COUNTRIES] Migrated 25 rows.
[COUNTRIES] Table Data Load Summary: Total Time(s): 0.054 Total Rows: 25
Disabling FK constraints & triggers on hr.departments before truncate...
Truncating table DEPARTMENTS before data load...
Disabling indexes on hr.departments before data load...
Loading Table: DEPARTMENTS ...
[DEPARTMENTS] Migrated 27 rows.
[DEPARTMENTS] Table Data Load Summary: Total Time(s): 0.046 Total Rows: 27
Disabling FK constraints & triggers on hr.employees before truncate...
Truncating table EMPLOYEES before data load...
Disabling indexes on hr.employees before data load...
Loading Table: EMPLOYEES ...
[EMPLOYEES] Migrated 107 rows.
[EMPLOYEES] Table Data Load Summary: Total Time(s): 0.168 Total Rows: 107 Total Size(MB): 0.0087890625
Disabling FK constraints & triggers on hr.jobs before truncate...
Truncating table JOBS before data load...
Disabling indexes on hr.jobs before data load...
Loading Table: JOBS ...
[JOBS] Migrated 19 rows.
[JOBS] Table Data Load Summary: Total Time(s): 0.01 Total Rows: 19
Disabling FK constraints & triggers on hr.job_history before truncate...
Truncating table JOB_HISTORY before data load...
Disabling indexes on hr.job_history before data load...
Loading Table: JOB_HISTORY ...
[JOB_HISTORY] Migrated 10 rows.
[JOB_HISTORY] Table Data Load Summary: Total Time(s): 0.035 Total Rows: 10
Disabling FK constraints & triggers on hr.locations before truncate...
Truncating table LOCATIONS before data load...
Disabling indexes on hr.locations before data load...
Loading Table: LOCATIONS ...
[LOCATIONS] Migrated 23 rows.
[LOCATIONS] Table Data Load Summary: Total Time(s): 0.053 Total Rows: 23 Total Size(MB): 9.765625E-4
Disabling FK constraints & triggers on hr.regions before truncate...
Truncating table REGIONS before data load...
Disabling indexes on hr.regions before data load...
Loading Table: REGIONS ...
[REGIONS] Migrated 4 rows.
[REGIONS] Table Data Load Summary: Total Time(s): 0.025 Total Rows: 4
Enabling FK constraints & triggers on hr.countries...
Enabling indexes on hr.countries after data load...
Enabling FK constraints & triggers on hr.departments...
Enabling indexes on hr.departments after data load...
Enabling FK constraints & triggers on hr.employees...
Enabling indexes on hr.employees after data load...
Enabling FK constraints & triggers on hr.jobs...
Enabling indexes on hr.jobs after data load...
Enabling FK constraints & triggers on hr.job_history...
Enabling indexes on hr.job_history after data load...
Enabling FK constraints & triggers on hr.locations...
Enabling indexes on hr.locations after data load...
Enabling FK constraints & triggers on hr.regions...
Enabling indexes on hr.regions after data load...
Data Load Summary: Total Time (sec): 0.785 Total Rows: 215 Total Size(MB): 0.01

Schema HR imported successfully.
Migration process completed successfully.

Migration logs have been saved to /home/postgres/.enterprisedb/migration-toolkit/logs

******************** Migration Summary ********************
Tables: 7 out of 7

Total objects: 7
Successful count: 7
Failed count: 0
Invalid count: 0

*************************************************************
postgres@ppas01:/u01/app/postgres/product/10edb/edbmtk/bin/ [PG10edb]

Quick check :
postgres=# select * from hr.regions;
region_id | region_name
-----------+------------------------
1 | Europe
2 | Americas
3 | Asia
4 | Middle East and Africa
(4 rows)
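If you want to double-check the volume of migrated data, a quick comparison of the row counts per table can also be done on the Postgres side. A sketch (n_live_tup is only an estimate, use count(*) for exact figures; depending on how the schema name was folded you may need 'HR' instead of 'hr'):

postgres=# select relname, n_live_tup from pg_stat_user_tables where schemaname = 'hr' order by 1;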

Conclusion

Easy, isn’t it ?
Once again, EnterpriseDB provides a very practical, user-friendly and easy-to-handle tool. In my demo the HR schema is pretty simple; the migration of more complex schemas can be more challenging. Currently only migrations from Oracle are available, but SQL Server and other legacy databases should be supported in future versions. In the meantime, you must use the EDB Migration Toolkit for those.

That’s it. Have fun and… be ready to say goodbye to Oracle :-)

 

The article From Oracle to Postgres with the EDB Postgres Migration Portal appeared first on the dbi services Blog.

How to migrate Grid Infrastructure from release 12c to release 18c


Oracle Clusterware 18c builds on the innovations of the previous releases by further enhancing support for larger multi-cluster environments and improving the overall ease of use. Oracle Clusterware is leveraged in the cloud in order to provide enterprise-class resiliency where required, and dynamic as well as online allocation of compute resources where needed, when needed.
Oracle Grid Infrastructure provides the necessary components to manage high availability (HA) for any business critical application.
HA in consolidated environments is no longer simple active/standby failover.

In this blog we will see how to upgrade our Grid Infrastructure stack from 12cR2 to 18c.

Step 1: You are required to patch your existing GI home with patch 27006180

[root@dbisrv04 ~]# /u91/app/grid/product/12.2.0/grid/OPatch/opatchauto apply /u90/Kit/27006180/ -oh /u91/app/grid/product/12.2.0/grid/

Performing prepatch operations on SIHA Home........

Start applying binary patches on SIHA Home........

Performing postpatch operations on SIHA Home........

[finalize:finalize] OracleHomeLSInventoryGrepAction action completed on home /u91/app/grid/product/12.2.0/grid successfully
OPatchAuto successful.

Step 2: Check the list of patches applied

grid@dbisrv04:/u90/Kit/ [+ASM] /u91/app/grid/product/12.2.0/grid/OPatch/opatch lsinventory
Oracle Interim Patch Installer version 12.2.0.1.6
Copyright (c) 2018, Oracle Corporation.  All rights reserved.

Lsinventory Output file location : /u91/app/grid/product/12.2.0/grid/cfgtoollogs/opatch/lsinv/lsinventory2018-10-11_09-06-44AM.txt

--------------------------------------------------------------------------------
Oracle Grid Infrastructure 12c                                       12.2.0.1.0
There are 1 products installed in this Oracle Home.


Interim patches (1) :

Patch  27006180     : applied on Thu Oct 11 09:02:50 CEST 2018
Unique Patch ID:  21761216
Patch description:  "OCW Interim patch for 27006180"
   Created on 5 Dec 2017, 09:12:44 hrs PST8PDT
   Bugs fixed:
     13250991, 20559126, 22986384, 22999793, 23340259, 23722215, 23762756
........................
     26546632, 27006180

 

Step 3: Upgrade the binaries to release 18c
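To start the upgrade, unzip the 18c Grid home image into the new home and launch the installer from there, then follow the wizard. A minimal sketch, assuming the 18c image was downloaded to /u90/Kit as LINUX.X64_180000_grid_home.zip (adapt the file name and paths to your environment):

[grid@dbisrv04 ~]$ mkdir -p /u90/app/grid/product/18.3.0/grid
[grid@dbisrv04 ~]$ cd /u90/app/grid/product/18.3.0/grid
[grid@dbisrv04 grid]$ unzip -q /u90/Kit/LINUX.X64_180000_grid_home.zip
[grid@dbisrv04 grid]$ ./gridSetup.sh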

[Screenshots: in the installer wizard, select the “Upgrade Oracle Grid Infrastructure” option and confirm the new Grid home directory]

– it is recommended to run the rootupgrade.sh script manually when the installer prompts for the configuration scripts:

/u90/app/grid/product/18.3.0/grid/rootupgrade.sh
[root@dbisrv04 ~]# /u90/app/grid/product/18.3.0/grid/rootupgrade.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u90/app/grid/product/18.3.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u90/app/grid/product/18.3.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/oracle/crsdata/dbisrv04/crsconfig/roothas_2018-10-11_09-21-27AM.log

2018/10/11 09:21:29 CLSRSC-595: Executing upgrade step 1 of 12: 'UpgPrechecks'.
2018/10/11 09:21:30 CLSRSC-363: User ignored prerequisites during installation
2018/10/11 09:21:31 CLSRSC-595: Executing upgrade step 2 of 12: 'GetOldConfig'.
2018/10/11 09:21:33 CLSRSC-595: Executing upgrade step 3 of 12: 'GenSiteGUIDs'.
2018/10/11 09:21:33 CLSRSC-595: Executing upgrade step 4 of 12: 'SetupOSD'.
2018/10/11 09:21:34 CLSRSC-595: Executing upgrade step 5 of 12: 'PreUpgrade'.

ASM has been upgraded and started successfully.

2018/10/11 09:22:25 CLSRSC-595: Executing upgrade step 6 of 12: 'UpgradeAFD'.
2018/10/11 09:23:52 CLSRSC-595: Executing upgrade step 7 of 12: 'UpgradeOLR'.
Creating OCR keys for user 'grid', privgrp 'oinstall'..
Operation successful.
2018/10/11 09:23:57 CLSRSC-595: Executing upgrade step 8 of 12: 'UpgradeOCR'.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4664: Node dbisrv04 successfully pinned.
2018/10/11 09:24:00 CLSRSC-595: Executing upgrade step 9 of 12: 'CreateOHASD'.
2018/10/11 09:24:02 CLSRSC-595: Executing upgrade step 10 of 12: 'ConfigOHASD'.
2018/10/11 09:24:02 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.service'
2018/10/11 09:24:49 CLSRSC-595: Executing upgrade step 11 of 12: 'UpgradeSIHA'.
CRS-4123: Oracle High Availability Services has been started.


dbisrv04     2018/10/11 09:25:58     /u90/app/grid/product/18.3.0/grid/cdata/dbisrv04/backup_20181011_092558.olr     70732493   

dbisrv04     2018/07/31 15:24:14     /u91/app/grid/product/12.2.0/grid/cdata/dbisrv04/backup_20180731_152414.olr     0
2018/10/11 09:25:59 CLSRSC-595: Executing upgrade step 12 of 12: 'InstallACFS'.
CRS-4123: Oracle High Availability Services has been started.

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'dbisrv04'
CRS-2673: Attempting to stop 'ora.driver.afd' on 'dbisrv04'
CRS-2677: Stop of 'ora.driver.afd' on 'dbisrv04' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'dbisrv04' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2018/10/11 09:27:54 CLSRSC-327: Successfully configured Oracle Restart for a standalone server

– you can ignore the installer warning related to the memory resources; the upgrade then completes successfully

– once the installation is finished, verify what has been done

[root@dbisrv04 ~]# /u90/app/grid/product/18.3.0/grid/bin/crsctl query has softwareversion
Oracle High Availability Services version on the local node is [18.0.0.0.0]

[root@dbisrv04 ~]# /u90/app/grid/product/18.3.0/grid/bin/crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       dbisrv04                 STABLE
ora.DATA2.dg
               ONLINE  ONLINE       dbisrv04                 STABLE
ora.DATA3.dg
               ONLINE  ONLINE       dbisrv04                 STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       dbisrv04                 STABLE
ora.RECO.dg
               ONLINE  ONLINE       dbisrv04                 STABLE
ora.asm
               ONLINE  ONLINE       dbisrv04                 Started,STABLE
ora.ons
               OFFLINE OFFLINE      dbisrv04                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
      1        ONLINE  ONLINE       dbisrv04                 STABLE
ora.db18c.db
      1        ONLINE  ONLINE       dbisrv04                 Open,HOME=/u90/app/o
                                                             racle/product/18.3.0
                                                             /dbhome_1,STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.driver.afd
      1        ONLINE  ONLINE       dbisrv04                 STABLE
ora.evmd
      1        ONLINE  ONLINE       dbisrv04                 STABLE
ora.orcl.db
      1        ONLINE  ONLINE       dbisrv04                 Open,HOME=/u90/app/o
                                                             racle/product/18.3.0
                                                             /dbhome_1,STABLE
--------------------------------------------------------------------------------
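As an additional quick check, the release version reported by the new home should match as well (it should also show 18.0.0.0.0 here):

[root@dbisrv04 ~]# /u90/app/grid/product/18.3.0/grid/bin/crsctl query has releaseversion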
 

The article How to migrate Grid Infrastructure from release 12c to release 18c appeared first on the dbi services Blog.

Getting started with Red Hat Satellite – Installation


This is the start of a series of posts I have wanted to write for a long time: Getting started with Red Hat Satellite. Just in case you don’t know what it is, this statement from the official Red Hat website summarizes it quite well: “As your Red Hat® environment continues to grow, so does the need to manage it to a high standard of quality. Red Hat Satellite is an infrastructure management product specifically designed to keep Red Hat Enterprise Linux® environments and other Red Hat infrastructure running efficiently, properly secured, and compliant with various standards.” This first post is all about the installation of Satellite, and that is surprisingly easy. Let’s go.

What you need as a starting point is a Red Hat Enterprise Linux minimal installation, either version 6 or 7. In my case it is the latest 7 release as of today:

[root@satellite ~]$ cat /etc/redhat-release 
Red Hat Enterprise Linux Server release 7.5 (Maipo)

Of course the system should be fully registered so that you can install updates/fixes and additional packages (this requires a Red Hat subscription):

[root@satellite ~]$ subscription-manager list
+-------------------------------------------+
    Installed Product Status
+-------------------------------------------+
Product Name:   Red Hat Enterprise Linux Server
Product ID:     69
Version:        7.5
Arch:           x86_64
Status:         Subscribed
Status Details: 
Starts:         11/20/2017
Ends:           09/17/2019

As time synchronization is critical, it should be up and running before proceeding. For Red Hat Enterprise Linux, chrony is the tool to go for:

[root@satellite ~]$ yum install -y chrony
[root@satellite ~]$ systemctl enable chronyd
[root@satellite ~]$ systemctl start chronyd
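To verify that time synchronization is actually working, you can query chrony (output omitted here):

[root@satellite ~]$ chronyc sources -v
[root@satellite ~]$ chronyc tracking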

Satellite requires a fully qualified hostname, so let’s add that to the hosts file (of course you would do that with DNS in a real environment):

[root@satellite mnt]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.22.11 satellite.it.dbi-services.com satellite

As a Satellite server only makes sense when clients can connect to it, a few ports need to be opened (I am not going into the details here, that will be the topic of another post):

[root@satellite ~]$ firewall-cmd --permanent \
                                 --add-port="53/udp" --add-port="53/tcp" \
                                 --add-port="67/udp" --add-port="69/udp" \
                                 --add-port="80/tcp"  --add-port="443/tcp" \
                                 --add-port="5000/tcp" --add-port="5647/tcp" \
                                 --add-port="8000/tcp" --add-port="8140/tcp" \
                                 --add-port="9090/tcp"

That’s basically all you need to do as preparation. There are several methods to install Satellite; I will use the downloaded ISO as the source (the so-called “Disconnected Installation”, which is what you will usually need in enterprise environments):

[root@satellite ~]$ ls -la /var/tmp/satellite-6.3.3-rhel-7-x86_64-dvd.iso 
-rw-r--r--. 1 root root 3041613824 Oct 11 18:16 /var/tmp/satellite-6.3.3-rhel-7-x86_64-dvd.iso

First of all the required packages need to be installed, so we need to mount the ISO:

[root@satellite ~]$ mount -o ro,loop /var/tmp/satellite-6.3.3-rhel-7-x86_64-dvd.iso /mnt
[root@satellite ~]$ cd /mnt/
[root@satellite mnt]# ls
addons  extra_files.json  install_packages  media.repo  Packages  repodata  RHSCL  TRANS.TBL

Installing the packages required for Satellite is just a matter of calling the “install_packages” script:

[root@satellite mnt]$ ./install_packages 
This script will install the satellite packages on the current machine.
   - Ensuring we are in an expected directory.
   - Copying installation files.
   - Creating a Repository File
   - Creating RHSCL Repository File
   - Checking to see if Satellite is already installed.
   - Importing the gpg key.
   - Installation repository will remain configured for future package installs.
   - Installation media can now be safely unmounted.

Install is complete. Please run satellite-installer --scenario satellite

The output already tells us what to do next: executing the “satellite-installer” script (I will go with the defaults here, but there are many options you could already specify at this point):

[root@satellite mnt]$ satellite-installer --scenario satellite
This system has less than 8 GB of total memory. Please have at least 8 GB of total ram free before running the installer.

Hm, I am running this locally in a VM, so let’s try to increase the memory, at least for the duration of the installation, and try again:

[root@satellite ~]$ satellite-installer --scenario satellite
Installing             Package[grub2-efi-x64]                             [0%] [                                         ]

… and here we go. A few minutes later the configuration/installation is complete:

[root@satellite ~]$ satellite-installer --scenario satellite
Installing             Done                                               [100%] [.......................................]
  Success!
  * Satellite is running at https://satellite.it.dbi-services.com
      Initial credentials are admin / L79AAUCMJWf6Y4HL

  * To install an additional Capsule on separate machine continue by running:

      capsule-certs-generate --foreman-proxy-fqdn "$CAPSULE" --certs-tar "/root/$CAPSULE-certs.tar"

  * To upgrade an existing 6.2 Capsule to 6.3:
      Please see official documentation for steps and parameters to use when upgrading a 6.2 Capsule to 6.3.

  The full log is at /var/log/foreman-installer/satellite.log

Ready: the Satellite web interface is now reachable at the URL reported by the installer.

Before we go into the details of how to initially configure the system in the next post, let’s check what we have running. A very good choice (at least if you ask me :) ) is to use PostgreSQL as the repository database:

[root@satellite ~]$ ps -ef | grep postgres
postgres  1264     1  0 08:56 ?        00:00:00 /usr/bin/postgres -D /var/lib/pgsql/data -p 5432
postgres  1381  1264  0 08:56 ?        00:00:00 postgres: logger process   
postgres  2111  1264  0 08:57 ?        00:00:00 postgres: checkpointer process   
postgres  2112  1264  0 08:57 ?        00:00:00 postgres: writer process   
postgres  2113  1264  0 08:57 ?        00:00:00 postgres: wal writer process   
postgres  2114  1264  0 08:57 ?        00:00:00 postgres: autovacuum launcher process   
postgres  2115  1264  0 08:57 ?        00:00:00 postgres: stats collector process   
postgres  2188  1264  0 08:58 ?        00:00:00 postgres: candlepin candlepin 127.0.0.1(36952) idle
postgres  2189  1264  0 08:58 ?        00:00:00 postgres: candlepin candlepin 127.0.0.1(36954) idle
postgres  2193  1264  0 08:58 ?        00:00:00 postgres: candlepin candlepin 127.0.0.1(36958) idle
postgres  2194  1264  0 08:58 ?        00:00:00 postgres: candlepin candlepin 127.0.0.1(36960) idle
postgres  2218  1264  0 08:58 ?        00:00:00 postgres: candlepin candlepin 127.0.0.1(36964) idle
postgres  2474  1264  0 08:58 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2541  1264  0 08:58 ?        00:00:00 postgres: candlepin candlepin 127.0.0.1(36994) idle
postgres  2542  1264  0 08:58 ?        00:00:00 postgres: candlepin candlepin 127.0.0.1(36996) idle
postgres  2543  1264  0 08:58 ?        00:00:00 postgres: candlepin candlepin 127.0.0.1(36998) idle
postgres  2609  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2618  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2627  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2630  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2631  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2632  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2634  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2660  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2667  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2668  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2672  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2677  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2684  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2685  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2689  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
root      2742  2303  0 08:59 pts/0    00:00:00 grep --color=auto postgres

Let’s quickly check if that is a supported version of PostgreSQL:

[root@satellite ~]$ cat /var/lib/pgsql/data/PG_VERSION 
9.2
[root@satellite ~]$ su - postgres
-bash-4.2$ psql
psql (9.2.24)
Type "help" for help.

postgres=# select version();
                                                    version                                                    
---------------------------------------------------------------------------------------------------------------
 PostgreSQL 9.2.24 on x86_64-redhat-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-28), 64-bit
(1 row)

Hm, 9.2 is already out of support. This is nothing we would recommend to our customers, but as long as Red Hat itself supports it, it is probably fine. Just do not expect to get any fixes for that release from the PostgreSQL community. Going a bit further into the details, the PostgreSQL instance contains two additional users:

postgres=# \du
                             List of roles
 Role name |                   Attributes                   | Member of 
-----------+------------------------------------------------+-----------
 candlepin |                                                | {}
 foreman   |                                                | {}
 postgres  | Superuser, Create role, Create DB, Replication | {}

That corresponds to the connections to the instance we can see in the process list:

-bash-4.2$ ps -ef | egrep "foreman|candlepin" | grep postgres
postgres  2541  1264  0 08:58 ?        00:00:00 postgres: candlepin candlepin 127.0.0.1(36994) idle
postgres  2542  1264  0 08:58 ?        00:00:00 postgres: candlepin candlepin 127.0.0.1(36996) idle
postgres  2543  1264  0 08:58 ?        00:00:00 postgres: candlepin candlepin 127.0.0.1(36998) idle
postgres  2609  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2618  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2627  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2630  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2631  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2632  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2634  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2677  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2684  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2685  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  2689  1264  0 08:59 ?        00:00:00 postgres: foreman foreman [local] idle
postgres  3143  1264  0 09:03 ?        00:00:00 postgres: candlepin candlepin 127.0.0.1(37114) idle
postgres  3144  1264  0 09:03 ?        00:00:00 postgres: candlepin candlepin 127.0.0.1(37116) idle
postgres  3145  1264  0 09:03 ?        00:00:00 postgres: candlepin candlepin 127.0.0.1(37118) idle
postgres  3146  1264  0 09:03 ?        00:00:00 postgres: candlepin candlepin 127.0.0.1(37120) idle
postgres  3147  1264  0 09:03 ?        00:00:00 postgres: candlepin candlepin 127.0.0.1(37122) idle

Foreman is responsible for the life cycle management and Candlepin is responsible for the subscription management. Both are fully open source and can also be used on their own. What else do we have?

[root@satellite ~]$ ps -ef | grep -i mongo
mongodb   1401     1  0 08:56 ?        00:00:08 /usr/bin/mongod --quiet -f /etc/mongod.conf run
root      3736  2303  0 09:11 pts/0    00:00:00 grep --color=auto -i mongo

In addition to the PostgreSQL instance there is also a MongoDB process running. What is it for? It is used by Katello, a Foreman plugin that brings “the full power of content management alongside the provisioning and configuration capabilities of Foreman”.
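If you are curious which MongoDB version ships with Satellite, you can ask the binary directly (the exact version will of course depend on your Satellite release):

[root@satellite ~]$ /usr/bin/mongod --version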

The next component is Pulp:

[root@satellite ~]# ps -ef | grep pulp
apache    1067     1  0 08:56 ?        00:00:03 /usr/bin/python /usr/bin/celery beat --app=pulp.server.async.celery_instance.celery --scheduler=pulp.server.async.scheduler.Scheduler
apache    1076     1  0 08:56 ?        00:00:02 /usr/bin/python /usr/bin/pulp_streamer --nodaemon --syslog --prefix=pulp_streamer --pidfile= --python /usr/share/pulp/wsgi/streamer.tac
apache    1085     1  0 08:56 ?        00:00:11 /usr/bin/python /usr/bin/celery worker -A pulp.server.async.app -n resource_manager@%h -Q resource_manager -c 1 --events --umask 18 --pidfile=/var/run/pulp/resource_manager.pid
apache    1259     1  0 08:56 ?        00:00:12 /usr/bin/python /usr/bin/celery worker -n reserved_resource_worker-0@%h -A pulp.server.async.app -c 1 --events --umask 18 --pidfile=/var/run/pulp/reserved_resource_worker-0.pid --maxtasksperchild=2
apache    1684  1042  0 08:56 ?        00:00:04 (wsgi:pulp)     -DFOREGROUND
apache    1685  1042  0 08:56 ?        00:00:04 (wsgi:pulp)     -DFOREGROUND
apache    1686  1042  0 08:56 ?        00:00:04 (wsgi:pulp)     -DFOREGROUND
apache    1687  1042  0 08:56 ?        00:00:00 (wsgi:pulp-cont -DFOREGROUND
apache    1688  1042  0 08:56 ?        00:00:00 (wsgi:pulp-cont -DFOREGROUND
apache    1689  1042  0 08:56 ?        00:00:00 (wsgi:pulp-cont -DFOREGROUND
apache    1690  1042  0 08:56 ?        00:00:01 (wsgi:pulp_forg -DFOREGROUND
apache    2002  1085  0 08:57 ?        00:00:00 /usr/bin/python /usr/bin/celery worker -A pulp.server.async.app -n resource_manager@%h -Q resource_manager -c 1 --events --umask 18 --pidfile=/var/run/pulp/resource_manager.pid
apache   17757  1259  0 09:27 ?        00:00:00 /usr/bin/python /usr/bin/celery worker -n reserved_resource_worker-0@%h -A pulp.server.async.app -c 1 --events --umask 18 --pidfile=/var/run/pulp/reserved_resource_worker-0.pid --maxtasksperchild=2
root     18147  2303  0 09:29 pts/0    00:00:00 grep --color=auto pulp

This one is responsible “for managing repositories of software packages and making them available to a large number of consumers”. So much for the main components. We will have a more in-depth look at these in one of the next posts.
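By the way, to get an overview of all Satellite related services at once, the katello-service wrapper that ships with this release can be handy (later Satellite releases replace it with foreman-maintain):

[root@satellite ~]$ katello-service status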

 

The article Getting started with Red Hat Satellite – Installation appeared first on the dbi services Blog.
