
Getting started with Red Hat Satellite – Initial configuration


In the last post it was all about the installation of Red Hat Satellite and the components that are used under the hood. This post is about the initial configuration, so that packages can be synchronized from the Red Hat content network and subscriptions are available to be consumed by clients. This requires some initial tasks in the Satellite itself but also in the Red Hat portal where you manage your subscriptions.

When you log on to the Satellite Console for the first time you’ll land on the monitoring page:
Selection_066
As we currently do not have any systems managed by this Satellite, the overview is quite boring at the moment. The first task we need to do before going further is to create an “Organization”. Organizations are logical groups you use to divide your infrastructure in whatever way makes sense in your case: it could be a division, it could be based on content, whatever fits. This can either be done using the Console or by using the command line utility hammer. For the moment there is just the “Default Organization”:
Selection_068

[root@satellite lib]$ hammer organization list
---|----------------------|----------------------|-------------|----------------------|------------
ID | TITLE                | NAME                 | DESCRIPTION | LABEL                | DESCRIPTION
---|----------------------|----------------------|-------------|----------------------|------------
1  | Default Organization | Default Organization |             | Default_Organization |            
---|----------------------|----------------------|-------------|----------------------|------------

Let’s create a new one we will use throughout this series of posts:

[root@satellite lib] hammer organization create --name "dbi services" --label "dbi-services" --description "dbi services headquarter"
Organization created
[root@satellite lib] hammer organization list
---|----------------------|----------------------|--------------------------|----------------------|-------------------------
ID | TITLE                | NAME                 | DESCRIPTION              | LABEL                | DESCRIPTION             
---|----------------------|----------------------|--------------------------|----------------------|-------------------------
3  | dbi services         | dbi services         | dbi services headquarter | dbi-services         | dbi services headquarter
1  | Default Organization | Default Organization |                          | Default_Organization |                         
---|----------------------|----------------------|--------------------------|----------------------|-------------------------

Once we have that we can switch to our new organization in the Console:
Selection_070

To be able to browse the content of our Organization we need to create a so called “Debug Certificate”. This can be done by switching to the “Administer->Organizations” screen:
Selection_072

Once you have selected the organization the certificate can be generated:
Selection_073

To import that certificate into Firefox it needs to be converted. Using the generated certificate file, create two files, one containing the private key section and another one containing the certificate section, like here:

dwe@dwe:~/Downloads$ cat key.pem 
-----BEGIN RSA PRIVATE KEY-----
MIIEpAIBAAKCAQEAroq3rZuJ.....Q5eCqquzW4/Ie7SI3MQZQ==
-----END RSA PRIVATE KEY-----

dwe@dwe:~/Downloads$ cat cert.pem 
-----BEGIN CERTIFICATE-----
MIIG8jCCBdqgAwIB....Ix55eToRfqUZLzcAlrOFTaF8UrbDOoFTJldF
wxDDBpzk
-----END CERTIFICATE-----
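
If you prefer not to create the two files by copy and paste, they could probably also be extracted with openssl. A minimal sketch, assuming the downloaded certificate file is called debug-cert.pem (the file name is an assumption):

dwe@dwe:~/Downloads$ openssl rsa -in debug-cert.pem -out key.pem     # writes the private key section
dwe@dwe:~/Downloads$ openssl x509 -in debug-cert.pem -out cert.pem   # writes the certificate section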

These files can now be used to create a certificate that can be imported into Firefox:

dwe@dwe:~/Downloads$ openssl pkcs12 -keypbe PBE-SHA1-3DES -certpbe PBE-SHA1-3DES -export -in cert.pem -inkey key.pem -out dbi-services.pfx -name dbiservices
Enter Export Password:
Verifying - Enter Export Password:
dwe@dwe:~/Downloads$ ls -la *pfx*
-rw------- 1 dwe dwe 3460 Oct 12 10:36 dbi-services.pfx

Selection_074

Once the certificate is imported into Firefox we are able to browse the organization’s repository. The URL has the following format: https://[SATELLITE-MACHINE]/pulp/repos/[ORGANIZATION-LABEL], so in my case: https://192.168.22.11/pulp/repos/dbi-services

Selection_075

To be able to attach systems to the Satellite we need to create a so called “Subscription Allocation” in the Red Hat portal:
Selection_075
Selection_076
Selection_077

Once we have the “Subscription Allocation” we need to add subscriptions to it (I will use just two for the scope of this post):
Selection_078
Selection_079
Selection_080
Selection_081

These definitions need to be exported as a “Manifest” that then can be imported into our Satellite:
Selection_082

Importing the Manifest into the Satellite is done in the “Content > Red Hat Subscriptions” section:
Selection_084
Selection_086
Selection_087

From now on we are ready to synchronize content from the Red Hat content network. Again this can either be done via the Console or on the command line. To list all the products available using the hammer command line utility:

[root@satellite ~]$ hammer product list --organization "dbi services"
----|----------------------------------------------------------------------------------|-------------|--------------|--------------|-----------
ID  | NAME                                                                             | DESCRIPTION | ORGANIZATION | REPOSITORIES | SYNC STATE
----|----------------------------------------------------------------------------------|-------------|--------------|--------------|-----------
12  | dotNET on RHEL Beta for RHEL Server                                              |             | dbi services | 0            |           
99  | dotNET on RHEL for RHEL Server                                                   |             | dbi services | 0            |           
54  | MRG Realtime                                                                     |             | dbi services | 0            |           
1   | Oracle Java for RHEL Client                                                      |             | dbi services | 0            |           
94  | Oracle Java for RHEL Compute Node                                                |             | dbi services | 0            |           
18  | Oracle Java for RHEL Compute Node - Extended Update Support                      |             | dbi services | 0            |           
38  | Oracle Java for RHEL Server                                                      |             | dbi services | 0            |           
...

Each of these “products” is also a repository. To list the repository set for Red Hat Enterprise Linux using hammer:

[root@satellite ~]$ hammer repository-set list --product "Red Hat Enterprise Linux Server" --organization "dbi services"
-----|-----------|---------------------------------------------------------------------------------
ID   | TYPE      | NAME                                                                            
-----|-----------|---------------------------------------------------------------------------------
2008 | yum       | Red Hat Enterprise Linux 4 AS Beta (RPMs)                                       
7750 | yum       | Red Hat Satellite Tools 6.4 (for RHEL 7 Server) (Debug RPMs)                    
2009 | yum       | Red Hat Enterprise Linux 4 AS Beta (Source RPMs)                                
2006 | yum       | Red Hat Enterprise Linux 4 AS Beta (Debug RPMs)                                 
2007 | file      | Red Hat Enterprise Linux 4 AS Beta (ISOs)                                       
7752 | yum       | Red Hat Satellite Tools 6.4 (for RHEL 7 Server) (Source RPMs)                   
7751 | yum       | Red Hat Satellite Tools 6.4 (for RHEL 7 Server) (RPMs)                          
...

For the scope of this blog post we are only interested in the Satellite tools and the latest Enterprise Linux products, so we enable only those two. To enable the first one using the command line:

[root@satellite ~]$ hammer repository-set enable --name "Red Hat Enterprise Linux 7 Server (RPMs)" \
                                                 --releasever "7Server" \
                                                 --basearch "x86_64" \
                                                 --product "Red Hat Enterprise Linux Server" \
                                                 --organization "dbi services"
Repository enabled

To enable the second one using the Console, go to “Content->Red Hat Subscriptions”, locate the “Red Hat Enterprise Linux” section, expand it and enable “Red Hat Satellite Tools 6.3 (for RHEL 7 Server) (RPMs)” (a hammer equivalent is sketched after the screenshots):

Selection_067
Selection_068
Selection_069
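
The same repository could of course also be enabled with hammer. A sketch, assuming the repository set name is exactly as shown in the Console (the options may differ slightly in your environment):

[root@satellite ~]$ hammer repository-set enable --name "Red Hat Satellite Tools 6.3 (for RHEL 7 Server) (RPMs)" \
                                                 --basearch "x86_64" \
                                                 --product "Red Hat Enterprise Linux Server" \
                                                 --organization "dbi services"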

To verify what we did:

[root@satellite ~]$ hammer repository list --product "Red Hat Enterprise Linux Server" --organization "dbi services"
---|-----------------------------------------------------------|---------------------------------|--------------|---------------------------------------------------------------------------------
ID | NAME                                                      | PRODUCT                         | CONTENT TYPE | URL                                                                             
---|-----------------------------------------------------------|---------------------------------|--------------|---------------------------------------------------------------------------------
1  | Red Hat Enterprise Linux 7 Server RPMs x86_64 7Server     | Red Hat Enterprise Linux Server | yum          | https://cdn.redhat.com/content/dist/rhel/server/7/7Server/x86_64/os             
2  | Red Hat Satellite Tools 6.3 for RHEL 7 Server RPMs x86_64 | Red Hat Enterprise Linux Server | yum          | https://cdn.redhat.com/content/dist/rhel/server/7/7Server/x86_64/sat-tools/6....
---|-----------------------------------------------------------|---------------------------------|--------------|---------------------------------------------------------------------------------

Finally we need to synchronize these repositories (this will take some time):

[root@satellite ~]$ hammer product synchronize --name "Red Hat Enterprise Linux Server" --organization "dbi services"

You can monitor the progress in the Console:
Selection_071
Selection_072
Selection_074

Btw: When you want to check the overall status of the components you can do it like this on the command line:

[root@satellite pulp]$ hammer ping
candlepin:      
    Status:          ok
    Server Response: Duration: 19ms
candlepin_auth: 
    Status:          ok
    Server Response: Duration: 66ms
pulp:           
    Status:          ok
    Server Response: Duration: 111ms
pulp_auth:      
    Status:          ok
    Server Response: Duration: 52ms
foreman_tasks:  
    Status:          ok
    Server Response: Duration: 1163ms

That’s it for the basic configuration. Satellite is up and running and we have content clients can consume. In the next post we’ll attach a new Red Hat Linux installation to the Satellite.

 



Where do Oracle CMP$ tables come from and how to delete them?


According to MOS Note “Is Table SCHEMA.CMP4$222224 Or Similar Related To Compression Advisor? (Doc ID 1606356.1)”,
we know that since Oracle 11.2.0.4 BP1 or higher, due to failures of the Compression Advisor, some tables with names
that include “CMP” (e.g. CMP4$23590), created temporarily by the Compression Advisor process for the time the process is running, are not removed from the database as they should be.
How are these tables created? How can we “cleanly” remove them?

1. Check that no CMP tables exist.

SQL> select count(*) from dba_tables where table_name like 'CMP%';

  COUNT(*)
----------
         0

2. Check that no compression is enabled for the table we will use to test the Compression Advisor.

SQL> select nvl(COMPRESSION,'NO') as COMPRESSION,nvl(COMPRESS_FOR,'NO') as COMPRESS_FOR from dba_tables where table_name = 'FOO';

COMPRESS COMPRESS_FOR
-------- ------------------------------
NO       NO

3. Execute the Compression Advisor procedure

The procedure DBMS_COMPRESSION.get_compression_ratio analyzes the compression ratio of a table and gives information about its compressibility.
For information, Oracle Database 12c includes a number of enhancements to the DBMS_COMPRESSION package, such as In-Memory Compression or Advanced Compression.

Let’s execute the DBMS_COMPRESSION.get_compression_ratio procedure:

SQL> 
alter session set tracefile_identifier = 'CompTest1110201815h51';
alter session set events '10046 trace name context forever, level 12';
set serveroutput on

DECLARE
  l_blkcnt_cmp    PLS_INTEGER;
  l_blkcnt_uncmp  PLS_INTEGER;
  l_row_cmp       PLS_INTEGER;
  l_row_uncmp     PLS_INTEGER;
  l_cmp_ratio     NUMBER;
  l_comptype_str  VARCHAR2(32767);
BEGIN
  DBMS_COMPRESSION.get_compression_ratio (
    scratchtbsname  => 'USERS',
    ownname         => 'TEST_LAF',
    objname         => 'FOO',
    subobjname      => NULL,
    comptype        => DBMS_COMPRESSION.comp_advanced,
    blkcnt_cmp      => l_blkcnt_cmp,
    blkcnt_uncmp    => l_blkcnt_uncmp,
    row_cmp         => l_row_cmp,
    row_uncmp       => l_row_uncmp,
    cmp_ratio       => l_cmp_ratio,
    comptype_str    => l_comptype_str,
    subset_numrows  => DBMS_COMPRESSION.comp_ratio_allrows,
    objtype         => DBMS_COMPRESSION.objtype_table
  );

  DBMS_OUTPUT.put_line('Number of blocks used (compressed)       : ' ||  l_blkcnt_cmp);
  DBMS_OUTPUT.put_line('Number of blocks used (uncompressed)     : ' ||  l_blkcnt_uncmp);
  DBMS_OUTPUT.put_line('Number of rows in a block (compressed)   : ' ||  l_row_cmp);
  DBMS_OUTPUT.put_line('Number of rows in a block (uncompressed) : ' ||  l_row_uncmp);
  DBMS_OUTPUT.put_line('Compression ratio                        : ' ||  l_cmp_ratio);
  DBMS_OUTPUT.put_line('Compression type                         : ' ||  l_comptype_str);
END;
/

Number of blocks used (compressed)       : 1325
Number of blocks used (uncompressed)     : 1753
Number of rows in a block (compressed)   : 74
Number of rows in a block (uncompressed) : 55
Compression ratio                        : 1.3
Compression type                         : "Compress Advanced"

PL/SQL procedure successfully completed.

4. Which “CMP” internal tables are created by DBMS_COMPRESSION.get_compression_ratio?

To handle the compression advisor process, Oracle creates 4 CMP* tables : CMP1$23590, CMP2$23590, CMP3$23590, CMP4$23590.

Strangely, the Oracle 10046 trace file contains only the DDL for the creation of the last two (we can also use LogMiner to find the DDL): CMP3$23590 and CMP4$23590.
The table CMP3$23590 is a copy of the source table.
The table CMP4$23590 is a compressed copy of the CMP3$23590 table.

grep  "CMP*" DBI_ora_20529_CompTest1110201823h19.trc

drop table "TEST_LAF".CMP1$23590 purge
drop table "TEST_LAF".CMP2$23590 purge
drop table "TEST_LAF".CMP3$23590 purge
drop table "TEST_LAF".CMP4$23590 purge
create table "TEST_LAF".CMP3$23590 tablespace "USERS" nologging  as select /*+ DYNAMIC_SAMPLING(0) FULL("TEST_LAF"."FOO") */ *  from "TEST_LAF"."FOO"  sample block( 99) mytab
create table "TEST_LAF".CMP4$23590 organization heap  tablespace "USERS"  compress for all operations nologging as select /*+ DYNAMIC_SAMPLING(0) */ * from "TEST_LAF".CMP3$23590 mytab
drop table "TEST_LAF".CMP1$23590 purge
drop table "TEST_LAF".CMP2$23590 purge
drop table "TEST_LAF".CMP3$23590 purge
drop table "TEST_LAF".CMP4$23590 purge

As we can see above, the “internal” tables (even the compressed one, CMP4$23590) are removed at the end of the process.

To be sure, we check in the database :

SQL> select count(*) from dba_tables where table_name like 'CMP%';

  COUNT(*)
----------
         0

So, everything is fine, no ‘CMP’ tables exist and the source table is not compressed :

SQL> select nvl(COMPRESSION,'NO') as COMPRESSION,nvl(COMPRESS_FOR,'NO') as COMPRESS_FOR from dba_tables where table_name = 'FOO';

COMPRESS COMPRESS_FOR
-------- ------------------------------
NO       NO

5. But what happens if DBMS_COMPRESSION.get_compression_ratio fails?

Let’s force the failure of the DBMS_COMPRESSION.get_compression_ratio procedure…

SQL> 
alter session set tracefile_identifier = 'CompTest1410201822h03';
alter session set events '10046 trace name context forever, level 12';
set serveroutput on

DECLARE
  l_blkcnt_cmp    PLS_INTEGER;
  l_blkcnt_uncmp  PLS_INTEGER;
  l_row_cmp       PLS_INTEGER;
  l_row_uncmp     PLS_INTEGER;
  l_cmp_ratio     NUMBER;
  l_comptype_str  VARCHAR2(32767);
BEGIN
  DBMS_COMPRESSION.get_compression_ratio (
    scratchtbsname  => 'USERS',
    ownname         => 'TEST_LAF',
    objname         => 'FOO',
    subobjname      => NULL,
    comptype        => DBMS_COMPRESSION.comp_advanced,
    blkcnt_cmp      => l_blkcnt_cmp,
    blkcnt_uncmp    => l_blkcnt_uncmp,
    row_cmp         => l_row_cmp,
    row_uncmp       => l_row_uncmp,
    cmp_ratio       => l_cmp_ratio,
    comptype_str    => l_comptype_str,
    subset_numrows  => DBMS_COMPRESSION.comp_ratio_allrows,
    objtype         => DBMS_COMPRESSION.objtype_table
  );
  DBMS_OUTPUT.put_line('Number of blocks used (compressed)       : ' ||  l_blkcnt_cmp);
  DBMS_OUTPUT.put_line('Number of blocks used (uncompressed)     : ' ||  l_blkcnt_uncmp);
  DBMS_OUTPUT.put_line('Number of rows in a block (compressed)   : ' ||  l_row_cmp);
  DBMS_OUTPUT.put_line('Number of rows in a block (uncompressed) : ' ||  l_row_uncmp);
  DBMS_OUTPUT.put_line('Compression ratio                        : ' ||  l_cmp_ratio);
  DBMS_OUTPUT.put_line('Compression type                         : ' ||  l_comptype_str);
END;
/
DECLARE
*
ERROR at line 1:
ORA-01013: user requested cancel of current operation

Which “CMP*” tables persist afterwards?

Two “CMP*” tables are still present:

SQL> select count(*) from dba_tables where table_name like 'CMP%';

  COUNT(*)
----------
         2

SQL> select owner,table_name from dba_tables where table_name like 'CMP%';

OWNER     TABLE_NAME
------- ----------
TEST_LAF  CMP3$23687
TEST_LAF  CMP4$23687


Since the “CMP3*” and “CMP4*” tables are copies (compressed for the second one) of the source table, disk space can increase dramatically if the Compression Advisor fails frequently, especially with huge tables, so it is important to remove these tables.

The source table FOO and the CMP3$23687 and CMP4$23687 internal tables contain the same set of data (slightly less for the last two since we use the sample block option)…

SQL> select count(*) from test_laf.CMP3$23687;

  COUNT(*)
----------
     22147

SQL> select count(*) from test_laf.CMP4$23687;

  COUNT(*)
----------
     22147

SQL> select count(*) from test_laf.foo;

  COUNT(*)
----------
     22387

The worst part is that we now have a compressed table in the database although we do not have the compression license option:

SQL> select nvl(COMPRESSION,'NO') as COMPRESSION,nvl(COMPRESS_FOR,'NO') as COMPRESS_FOR from dba_tables where table_name = 'CMP4$23687';

COMPRESS COMPRESS_FOR
-------- ------------------------------
ENABLED  ADVANCED

To remove the Oracle “CMP*” internal tables, let’s analyze the 10046 trace file to check how Oracle removes these tables when the DBMS_COMPRESSION.get_compression_ratio procedure runs successfully.

Find below all the steps that Oracle does to drop these tables:

drop table "TEST_LAF".CMP1$23687 purge

BEGIN
  BEGIN
    IF (sys.is_vpd_enabled(sys.dictionary_obj_owner, sys.dictionary_obj_name, xdb.DBMS_XDBZ.IS_ENABLED_CONTENTS)) THEN
      xdb.XDB_PITRIG_PKG.pitrig_truncate(sys.dictionary_obj_owner, sys.dictionary_obj_name);
    ELSIF (sys.is_vpd_enabled(sys.dictionary_obj_owner, sys.dictionary_obj_name, xdb.DBMS_XDBZ.IS_ENABLED_RESMETADATA)) THEN
      xdb.XDB_PITRIG_PKG.pitrig_dropmetadata(sys.dictionary_obj_owner, sys.dictionary_obj_name);
    END IF;
  EXCEPTION
    WHEN OTHERS THEN
     null;
  END;
END;

drop table "TEST_LAF".CMP2$23687 purge

PARSING IN CURSOR #140606951937256 len=515 dep=2 uid=0 oct=47 lid=0 tim=3421988631 hv=2219505151 ad='69fd11c8' sqlid='ct6c4h224pxgz'
BEGIN
  BEGIN
    IF (sys.is_vpd_enabled(sys.dictionary_obj_owner, sys.dictionary_obj_name, xdb.DBMS_XDBZ.IS_ENABLED_CONTENTS)) THEN
      xdb.XDB_PITRIG_PKG.pitrig_truncate(sys.dictionary_obj_owner, sys.dictionary_obj_name);
    ELSIF (sys.is_vpd_enabled(sys.dictionary_obj_owner, sys.dictionary_obj_name, xdb.DBMS_XDBZ.IS_ENABLED_RESMETADATA)) THEN
      xdb.XDB_PITRIG_PKG.pitrig_dropmetadata(sys.dictionary_obj_owner, sys.dictionary_obj_name);
    END IF;
  EXCEPTION
    WHEN OTHERS THEN
     null;
  END;
END;


drop table "TEST_LAF".CMP3$23687 purge

BEGIN
  BEGIN
    IF (sys.is_vpd_enabled(sys.dictionary_obj_owner, sys.dictionary_obj_name, xdb.DBMS_XDBZ.IS_ENABLED_CONTENTS)) THEN
      xdb.XDB_PITRIG_PKG.pitrig_truncate(sys.dictionary_obj_owner, sys.dictionary_obj_name);
    ELSIF (sys.is_vpd_enabled(sys.dictionary_obj_owner, sys.dictionary_obj_name, xdb.DBMS_XDBZ.IS_ENABLED_RESMETADATA)) THEN
      xdb.XDB_PITRIG_PKG.pitrig_dropmetadata(sys.dictionary_obj_owner, sys.dictionary_obj_name);
    END IF;
  EXCEPTION
    WHEN OTHERS THEN
     null;
  END;
END;


drop table "TEST_LAF".CMP4$23687 purge
BEGIN
  BEGIN
    IF (sys.is_vpd_enabled(sys.dictionary_obj_owner, sys.dictionary_obj_name, xdb.DBMS_XDBZ.IS_ENABLED_CONTENTS)) THEN
      xdb.XDB_PITRIG_PKG.pitrig_truncate(sys.dictionary_obj_owner, sys.dictionary_obj_name);
    ELSIF (sys.is_vpd_enabled(sys.dictionary_obj_owner, sys.dictionary_obj_name, xdb.DBMS_XDBZ.IS_ENABLED_RESMETADATA)) THEN
      xdb.XDB_PITRIG_PKG.pitrig_dropmetadata(sys.dictionary_obj_owner, sys.dictionary_obj_name);
    END IF;
  EXCEPTION
    WHEN OTHERS THEN
     null;
  END;
END;

To remove the “CMP*” tables, Oracle:
– drops the tables with “drop table *** purge”
– calls the internal procedure xdb.XDB_PITRIG_PKG.pitrig_truncate or xdb.XDB_PITRIG_PKG.pitrig_dropmetadata, depending on whether Oracle Virtual Private Database is used.

6. Last test: check that the source table is not compressed, as we do not want compression enabled since we are not licensed for it…

SQL> select nvl(COMPRESSION,'NO') as COMPRESSION,nvl(COMPRESS_FOR,'NO') as COMPRESS_FOR from dba_tables where table_name = 'FOO';

COMPRESS COMPRESS_FOR
-------- ------------------------------
NO       NO

7. Conclusion

To drop the “CMP*” tables used by the DBMS_COMPRESSION.get_compression_ratio procedure, just execute: drop table CMP* purge.
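
A small helper to generate the DROP statements for any leftover CMP tables might look like this (a sketch only; review the generated statements before running them, since the pattern could in theory match other tables):

SQL> select 'drop table "'||owner||'"."'||table_name||'" purge;' as stmt
     from dba_tables
     where table_name like 'CMP_$%';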

I have not tested in more detail the case where compression is used with Oracle VPD, so I do not know the impact of executing the system procedures xdb.XDB_PITRIG_PKG.pitrig_truncate or xdb.XDB_PITRIG_PKG.pitrig_dropmetadata when VPD is used.

 


Inheriting super user privileges over a role automatically in PostgreSQL


In a recent project at a customer, where we synchronize the users and groups out of Active Directory, we hit a little issue I was not aware of before. Suppose you have created a role in PostgreSQL, made that role a superuser and then granted that role to another role. What happens when you log in using the other role? Will you have the superuser privileges by default? Sounds confusing, I know, so let’s do a test.

To start with we create a simple role and make that role a super user:

postgres=# create role my_admin;
CREATE ROLE
postgres=# alter role my_admin superuser;
ALTER ROLE

Of course you could also do that in one step:

postgres=# create role my_admin superuser;
CREATE ROLE

As a second step let’s create a new user that is a member of the admin group and inherits the permissions of that role automatically:

postgres=# create user my_dba login password 'admin' in role my_admin inherit;
CREATE ROLE
postgres=# \du
                                    List of roles
 Role name |                         Attributes                         | Member of  
-----------+------------------------------------------------------------+------------
 my_admin  | Superuser, Cannot login                                    | {}
 my_dba    |                                                            | {my_admin}
 postgres  | Superuser, Create role, Create DB, Replication, Bypass RLS | {}

The question now is: when we log in using the my_dba user, are we superuser automatically?

postgres@pgbox:/home/postgres/ [PGDEV] psql -X -U my_dba postgres
psql (12devel)
Type "help" for help.

postgres=> \du
                                    List of roles
 Role name |                         Attributes                         | Member of  
-----------+------------------------------------------------------------+------------
 my_admin  | Superuser, Cannot login                                    | {}
 my_dba    |                                                            | {my_admin}
 postgres  | Superuser, Create role, Create DB, Replication, Bypass RLS | {}

postgres=> create database db1;
ERROR:  permission denied to create database
postgres=> 

… and we are not. What we can do is:

postgres=> set role my_admin;
SET
postgres=# create database db1;
CREATE DATABASE

The reason for that is that some privileges are not inherited automatically and these are: LOGIN, SUPERUSER, CREATEDB, and CREATEROLE.

What you can do is put something like that into “.psqlrc”:

set role my_admin

… or do it like that:

postgres=# alter user my_dba set role my_admin;
ALTER ROLE

This will explicitly set the role with each login and the superuser privileges will be there. When you have a more complicated scenario where roles are assigned based on patterns in the username, you could do something like this and add it to .psqlrc as well (or put that into a file and then execute that file in .psqlrc, as shown after the block below):

DO $$
DECLARE
  lv_username pg_roles.rolname%TYPE := current_user;
BEGIN
  if ( substr(lv_username,1,2) = 'xx'
       and
       position ('yy' in lv_username) > 0
     )
  then
    execute 'set role my_admin';
  end if;
  perform 1;
END $$;

… or whatever checks you need to identify the correct user names. Hope that helps …
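
For the variant where the block lives in its own file, the .psqlrc entry could simply include it. A minimal sketch (the file path is just an example):

\i /home/postgres/.pg_set_role.sql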

 


Monitoring Linux With Nmon


I was looking for tools to monitor Linux servers and I found an interesting one: nmon (short for Nigel’s Monitor). I did some tests. In this blog I describe how to install nmon and how we can use it.
I am using an Oracle Enterprise Linux system.

[root@condrong nmon]# cat /etc/issue
Oracle Linux Server release 6.8
Kernel \r on an \m

[root@condrong nmon]#

For the installation I used the repository epel

wget http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
rpm -ivh epel-release-6-8.noarch.rpm 
yum search nmon
yum install nmon.x86_64

Once installed, the tool is launched by just running the nmon command

[root@condrong nmon]# nmon

nmon1

If we type c we have CPU statistics
nmon2
If we type m we have memory statistics
nmon3
If we type t we can see Top Processes and so on
nmon4

nmon can also be scheduled. The data is collected in a file and this file can be analyzed later. For this we can use the following options:

OPTIONS
       nmon follows the usual GNU command line syntax, with long options starting
       with two dashes (‘-’): nmon [-h] [-s <seconds>] [-c <count>] [-f -d <disks>
       -t -r <name>] [-x]. A summary of options is included below.

       -h            FULL help information

                     Interactive-Mode: read the startup banner and type "h" once it is
                     running. For Data-Collect-Mode see (-f)

       -f            spreadsheet output format [note: default -s300 -c288]
                     optional

       -s <seconds>  seconds between refreshing the screen [default 2]

       -c <count>    number of refreshes [default millions]

       -d <disks>    to increase the number of disks [default 256]

       -t            spreadsheet includes top processes

       -x            capacity planning (15 min for 1 day = -fdt -s 900 -c 96)

In my example I just create a file my_nmon.sh and execute the script

[root@condrong nmon]# cat my_nmon.sh 
#! /bin/bash
nmon -f -s 60 -c 30

[root@condrong nmon]# chmod +x my_nmon.sh 
[root@condrong nmon]# ./my_nmon.sh

Once executed, the script will create a file in the current directory with an extension .nmon

[root@condrong nmon]# ls -l *.nmon
-rw-r--r--. 1 root root 55444 Oct 18 09:51 condrong_181018_0926.nmon
[root@condrong nmon]#
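
Instead of starting the script manually, nmon could also be scheduled with cron. A minimal sketch (directory, path and schedule are just an example): collect one file per day with 288 snapshots, one every 300 seconds.

[root@condrong nmon]# crontab -l
0 0 * * * cd /var/log/nmon && /usr/bin/nmon -f -s 300 -c 288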

To analyze this file, we have many options. In my case I downloaded the nmon_analyzer.
This tool works with Excel 2003 onwards and supports 32-bit and 64-bit Windows.
After copying my nmon output file to my Windows workstation, I just have to open the Excel file and then use the button “Analyze nmon data”:
nmon5
And below I show some graphs made by the nmon_analyzer
nmon6

nmon7

nmon8

Conclusion
As we can see, nmon is a very useful tool which can help monitor our servers. It also works on AIX systems.


Schema only account with Oracle 18.3


With Oracle 18.3, we have the possibility to create schemas without a password. Indeed, in a perfect world we should not be able to connect to application schemas directly. For security reasons it is a good thing that nobody can connect directly to the application schema.

A good way is to use proxy connections: in fact we connect as app_user but using the psi_user password, for example.

Let’s create a user named app_user:

SQL> connect sys@pdb as sysdba
Enter password: 
Connected.

SQL> create user app_user identified by app_user
  2  quota unlimited on users;

User created.

SQL> grant create session , create table to app_user;

Grant succeeded.

Let’s create a proxy user named psi_user:

SQL> create user psi_user identified by psi_user;

User created.

SQL> grant create session to psi_user;

Grant succeeded.
We allow the proxy connection to the app_user:

SQL> alter user app_user grant connect through psi_user;

User altered.

Now we can connect via the proxy user using the following syntax:

SQL> connect psi_user[app_user]/psi_user@pdb 
Connected.

We can see we are connected as user app_user but using the psi_user password:

SQL> select sys_context('USERENV','SESSION_USER') as session_user,
sys_context('USERENV','SESSION_SCHEMA') as session_schema,
sys_context('USERENV','PROXY_USER') as proxy,
user
from dual;

SESSION_USER	SESSION_SCHEMA	        PROXY		USER
APP_USER	APP_USER		PSI_USER	APP_USER

But there is a problem, if the app_user is locked the proxy connection does not work anymore:

SQL> connect sys@pdb as sysdba
Enter password: 
Connected.
SQL> alter user app_user account lock;

User altered.

SQL> connect psi_user[app_user]/psi_user@pdb
ERROR:
ORA-28000: The account is locked.

Warning: You are no longer connected to ORACLE.

The good solution is to use the new Oracle 18c schema only feature:

We drop the old accounts:

SQL> connect sys@pdb as sysdba
Enter password: 
Connected.
SQL> drop user psi_user cascade;

User dropped.

SQL> drop user app_user cascade;

User dropped.

And we recreate them in the following way, we first create the schema owner with no authentication:

SQL> create user app_user no authentication
  2  quota unlimited on users;

User created.

SQL> grant create session , create table to app_user;

Grant succeeded.

We create the proxy user as before:

SQL> create user psi_user identified by psi_user;

We allow the proxy user to connect to the app_user:

SQL> alter user app_user grant connect through psi_user;

User altered.

We now can connect via psi_user:

SQL> connect psi_user[app_user]/psi_user@pdb
Connected.

And as the app_user has been created with no authentication, you receive the classical ORA-01017 error when you try to connect directly with the app_user account:

SQL> connect app_user/app_user@pdb
ERROR:
ORA-01017: invalid username/password; logon denied

Warning: You are no longer connected to ORACLE.

Using no authentication is a good protection, but you cannot grant administrative privileges such as SYSDBA to such users:

SQL> grant sysdba to app_user;
grant sysdba to app_user
*
ERROR at line 1:
ORA-40366: Administrative privilege cannot be granted to this user.

We can try to alter the app_user with a password, grant it SYSDBA and then set it back to no authentication, but that does not work:

SQL> alter user app_user identified by password;

User altered.

SQL> grant sysdba to app_user;

Grant succeeded.

SQL> alter user app_user no authentication;
alter user app_user no authentication
*
ERROR at line 1:
ORA-40367: An Administrative user cannot be altered to have no authentication
type.

SQL> revoke sysdba from app_user;

Revoke succeeded.

SQL> alter user app_user no authentication;

User altered.

To understand the behavior correctly, I made the following test:

SQL> connect sys@pdb as sysdba
Enter password: 
Connected.
I remove the no authentication:

SQL> alter user app_user identified by app_user;

User altered.

Now I can connect to the app_user schema; I create a table and insert some values:

SQL> connect app_user/app_user@pdb
Connected.
SQL> create table employe (name varchar2(10));

Table created.

SQL> insert into employe values('Larry');

1 row created.

SQL> commit;

Commit complete.

I reset the app_user to no authentication:

SQL> connect sys@pdb as sysdba
Enter password: 
Connected.
SQL> alter user app_user no authentication;

User altered.

I connect with the proxy user and I can display the employe table content:

SQL> connect psi_user[app_user]/psi_user@pdb
Connected.
SQL> select * from employe;

NAME
----------
Larry

The table belongs to the app_user schema:

SQL> select object_name, object_type, owner from all_objects where object_name ='EMPLOYE';

OBJECT_NAME	OBJECT_TYPE	OWNER
EMPLOYE		TABLE		APP_USER
SQL> insert into employe values ('Bill');

1 row created.

SQL> commit; 

Commit complete.

SQL> select * from employe;

NAME
----------
Larry
Bill

What is the behavior in the audit trail ?

We create an audit policy to detect any table creation:

SQL> create audit policy psi_user_audit_policy
  2  privileges create table
  3  when 'SYS_CONTEXT(''USERENV'',''SESSION_USER'') = ''APP_USER'''
  4  evaluate per session
  5  container=current;

Audit policy created.

SQL> audit policy psi_user_audit_policy whenever successful;

Audit succeeded.
If we now have a look at the unified_audit_trail view:

SQL> select event_timestamp, dbusername, dbproxy_username from unified_audit_trail where object_name = 'SALARY' and action_name = 'CREATE TABLE'

EVENT_TIMESTAMP		DBUSERNAME	DBPROXY_USERNAME
16-OCT-18 03.40.49	APP_USER	PSI_USER

We can identify clearly the proxy user in the audit trail.
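
As a side note, the proxy grants themselves can be listed with the PROXY_USERS dictionary view, for example:

SQL> select proxy, client from proxy_users;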

Conclusion:

The schema only account is an interesting new feature. In summary, we can create a schema named app_user and set its authentication to NONE; the consequence is that you cannot log in to it directly. We can then create a proxy account named psi_user through which we connect to app_user, and we can create tables, views… in the app_user schema.









Oracle Open World 2018 J-1


I woke up this morning (20.10.2018) knowing that this day would be a very long day. Indeed, today my colleague Mouhamadou and I traveled to San Francisco in order to attend Oracle Open World (OOW). I travelled from Delémont to Zurich, which took about 1h30, and then I caught my flight from Zurich at 13:10 (GMT+2) to land in San Francisco about 12 hours later at 16h10 (GMT-7).

Of course when you arrive on American soil and you are not a U.S. citizen, you have to face the queue before customs… about 1h45 of queuing in my case. Then we caught the shuttle to the Handlery Union Square Hotel and finally arrived at 18h30 (GMT-7).

Queue before the customs

After dropping our bags at the hotel, we decided to go for a meal at the Mikkeller Bar on 34 Mason St., where many Oracle people and consultants were sitting over a beer.

Mikkeller Bar

During the next 4 days, Mouhamadou and I will share information regarding our Oracle Open World experience, product keynotes, sessions and so on, so do not hesitate to follow us either on Twitter or on the dbi services blog platform! You can also find some San Francisco pictures on my Instagram account.

For me now it’s time to sleep… see you tomorrow for a bike tour with Bryn Llewellyn, Mouhamadou Diaw, Franck Pachot and many others on the Golden Gate before the Oracle Open World registration.

Good night !


Oracle OpenWorld 2018: Day 0


Today was day 0 of Oracle OpenWorld 2018. The event has not officially started yet but there were many nice activities to discover and enjoy San Francisco.
The one Gregory Steulet (CFO of dbi services) and I participated in was the bike ride organized by Bryn Llewellyn, the PL/SQL Product Manager. There were also Mike Dietrich (Master Product Manager), Franck Pachot (Oracle Master 12c and ACE Director) and many other people.

After renting bikes, we started the tour on the Golden Gate Bridge. It was wonderful. The weather was not so bad.
Captureday01
After the Golden Gate Bridge, we continued around San Francisco Bay and made many stops to recharge the batteries.

Captureday02

After a long ride and a good lunch, we took the boat to go back.

Tomorrow is the big day, the official start of OpenWorld 2018 with multiple sessions.

We will surely come back to summarize the sessions we attend.


Oracle Open World 2018 D 0: Bike trip starting on Golden Gate


Today (22.10.2018) my colleague Mouhamadou and I had the opportunity to make a bike trip (#BikeB4OOW) organized by Bryn Llewellyn, product manager for Oracle PL/SQL and Edition Based Redefinition (EBR). We were well accompanied by several other famous people such as Franck Pachot, Mike Dietrich, Pieter Van Puymbroeck, Liesbeth Van Raemd and Ivica Arsov, just to name a few of them.

Oracle Biking Team

We started our trip beside the Golden Gate at the Welcome Center at 10:00 am, heading towards Sausalito, and we kept on along the coast until Tiburon, which we reached at about 12:00. There we split our group between the ones who would enjoy a meal and take the ferry and another group which preferred to ride back.

Golden Gate Bike trip

Mouhamadou and I, accompanied by Franck, Pieter, Liesbeth and Ivica, retained the first option and enjoyed a delicious meal at the Servino Ristaurante.

Lunch Time in Tiburon

We then went for a small digestive walk on Paradise Drive, taking some pictures of sea lions and herons but also some selfies.

Heron in San Francisco

Finally we took the ferry to reach North Beach and bring back our bikes. It was the opportunity to have a wonderful view of Alcatraz Island, San Francisco and the Golden Gate Bridge.

Alcatraz and San Francisco

Because a blog post about a bike trip starting on the Golden Gate without a single Golden Gate picture is unimaginable, I made a small detour before giving back my bike to catch a picture…

Golden Gate

On the way back to the hotel we picked up our Oracle Pass at Moscone Center in order to attend the sessions tomorrow.

The Oracle Pass

So tomorrow there is no sightseeing and no bike trip on the program, but I do hope for lots of interesting technical sessions and fun. For sure I will attend the Larry Ellison keynote “Cloud Generation 2″ and I’m pretty sure that I’m going to hear about autonomous tasks, AI, security and cloud ;-).

Greetings from San Francisco



Oracle Open World 2018 D1: Larry Ellison keynote Cloud Gen2


Today (23.10.2018) was the first time I attended a keynote by Larry Ellison and I wasn’t disappointed. Not because of technical information or true facts and figures, but simply because of his charisma and Oracle’s capability to transform a keynote into a real show. Whatever he is speaking about, and whether it is true or not, I have to admit that it was really entertaining and funny.

Oracle Cloud Generation 2 - intro

As expected during this keynote, Larry’s keywords were:

1. Cloud
2. Autonomous
3. Machine Learning
4. RoBots
5. Amazon

During the entire keynote, parallels were drawn between the cloud and an autonomous car: “Sit back, relax and let Oracle do the driving”. You can find most of this comparison on auto.oracle.com.

Oracle Cloud Generation 2 - Car
Cloud Computing Generation 2 – new architecture

Secure Infrastructure requires new hardware and new software

  1. Impenetrable Barrier: Dedicated Network of Cloud Control Computers
    • Barrier: Dedicated Network of Cloud Control Computers. Cloud Control Computers protect cloud perimeter and customer Zones.
    • Impenetrable: No Customer access to cloud control computers and Memory
  2. Autonomous RoBots: AI/ML RoBots Find and kill Threats
    • Database immediately Patches Itself while running – Stop Data Theft
    • No delay for human process or downtime
    • No longer Our People versus Their robots – Our Robots vs their Robots

One of the key differences between Cloud Generation 1 and Cloud Generation 2 is the Cloud Control Computer architecture, as presented below.

Cloud Generation 2 - Architecture

First Generation Clouds Built on Decade Old Technology

Compared to the second generation, Cloud Generation 1 was:

  • Designed for building “angry birds”
  • Not meant to run our mission critical business applications
  • Security was an afterthought
  • Pay significantly more for higher performance
  • Cloud way or no way – Not meant to move your datacenter to the cloud

In comparison Cloud Generation 2 is one unified architecture:

  • Foundation for autonomous database
  • Extensible platform for Saas Applications
  • Runs Enterprise Applications and Cloud Native Applications
  • Gen 2 public Cloud Available Now – Gen 2 Cloud@Customer 2019
  • Easy Free Push Button Upgrade From Gen 1 to Gen 2 Database@Customer

Oracle Generation 2 Other design goals are:

  • Support better functionality and performance than on premises
  • Improve automation (Eliminate mundane tasks, managing upgrade, security patches)
  • Easily connect between your datacenter and ours securely
  • Guarantee consistent enterprise performance (Industry first SLA that covers availability, performance and manageability)

Oracle Performance vs Amazon Performance

Such a keynote wouldn’t have been complete without a performance and price comparison between Oracle Cloud and Amazon AWS. As “expected”, Oracle Gen2 performs much better at a lower cost. Many slides and even “demos” were shown in order to prove how fast the Oracle database is compared to Amazon’s databases.

  • Performance Benchmarks 3 to 100 Times faster than Amazon’s Databases
  • Oracle Autonomous Data Warehouse vs Amazon Redshift – 9x Faster and 8x cheaper
  • Oracle Autonomous Transaction Processing vs Amazon Aurora – 11x faster and 8x cheaper
  • Oracle Autonomous Transaction Processing vs Amazon aurora mixed workloads – 100x faster and 80x cheaper
  • Autonomous Transaction Processing vs Oracle on RDS – 3x faster and 2x cheaper
  • Autonomous Transaction Processing vs Oracle on RDS while patching – Infinitely Faster and Infinitely Cheaper

No comment…

But for those of you who want to compare, Oracle graciously offers a 2 TB Autonomous Database for 3’300 hours on https://cloud.oracle.com/try-autonomous-database

Larry Ellison also spoke about the low latency and high bandwidth RDMA cluster networking.

Oracle Cloud Generation 2 Security

A special focus was put on security in order to explain that Cloud Gen 2 is fully secure and will protect you against any threats:

  • Compliance: Service, tools AI/ML to monitor your cloud infrastructure
  • Edge Security: DDos, DNS, WAF
  • Access Security: Identify, resource access management
  • Autonomous Database: Autonomous database: self-patching, self-repairing
  • Data Security: At-rest and in-transit encryption, key management
  • Network Security: Cloud Control Computers: private encrypted backbone
  • Isolation: Full physical isolation from other tenants and Oracle

Oracle Cloud Generation 2 Availability

Availability of Oracle Cloud Generation 2:

  • New Infrastructure and Database Customers get Gen 2 NOW
  • Database Cloud@Customer upgraded to Gen2: Summer 2019 (Database Cloud@Customer free upgrade autonomous database Cloud@Customer)
  • Full OCI Cloud@Customer: Calendar 2019 (Complete Gen 2 Cloud on your premise, under your control)

Oracle Autonomous Database

Everything is Automated: Nothing to learn, nothing to do

  • Automatic Provisioning
  • Automatic Scaling
  • Automatic Tuning
  • Automatic Security
  • Automatic Fault Tolerant Failover
  • Automatic Backup and Recovery
  • And More

Core message: easiest to use, lowest cost to operate. All of that makes Oracle 25x more reliable than Amazon.

Larry also presented real ADW (Autonomous Data Warehouse) use cases (“ADW in Action on Customer Workloads”) where he compared customer-tuned real warehouse workloads to ADW. Conclusion: ADW consistently exceeds hand-tuned performance and, in addition, ADW stays tuned as the workload changes. Of course the conclusions are the same for ATP (Autonomous Transaction Processing).

Cloud Generation 2 - Performance

Some sentences I caught:

About Oracle Cloud:

  • “Security built in from the center to the outside, from the perimeter to the inside”
  • “The most important component of the generation 2 cloud is the autonomous database and we did lots of progress since last year”.
  • “We have autonomous robots searching for threats. We search and destroy the threats”… “that’s our robots vs their robots”
  • “We eliminate human labor and human errors”

About Amazon:

  • “They should have a funny contract with their internet provider, price is depending on the way data move from or towards their cloud”
  • “Move out the AWS database cloud cost a lot.”…” Move data in the Oracle cloud that’s done.”
  • “AWS is a semi-autonomous database”… “semi-autonomous database, you drive it and you die.”… “Our database is fully autonomous”
  • “We guarantee that we cut the bill by half regarding amazon”
  • “Oracle database is 25x more available and reliable than the amazon database”
  • “Oracle is the best database that you can run on Amazon”

Cloud Generation 2 - Larry Ellison


Oracle Open World 2018 D1: Microservices Get Rid of Your DBA and Send the DB into Burnout


I had the pleasure this morning (23.10.2018) to attend the session by Franck Pachot about microservices. Between 70 and 100 people were present in the room to listen to the ACE Director, OakTable member and OCM speaking about microservices.

Franck Pachot - microservices

Franck introduced microservices based on the fact that customers could want to get rid of their databases.

Getting rid of your database because it is shared, because it contains persistent data and because you query it with SQL could look like good reasons. With smaller components you share less and you can dedicate each component to an owner. With that in mind comes the idea of microservices. Of course such reasoning has many limits, such as the fact that microservices shouldn’t have data in common.

Usually you query databases with SQL or PL/SQL. However SQL is a 4th generation language and SQL developers are rare. SQL is not only too complicated but also not portable. It’s even worse with PL/SQL and T-SQL.

Solution: microservices with easier technology and development offshored. This is precisely what Franck spoke about in his session.

Indeed he did a demo with two tables (an accounts and a customers table). He transferred a few dollars from one account to another, first using SQL, then with PL/SQL, then JavaScript on the client and finally JavaScript in the database using MLE (Multi Language Engine), and checked the CPU time for each of these methods. The results are the following:

  • SQL – 5 seconds of CPU
  • PL/SQL – 30 seconds of CPU
  • JavaScript on client – 2 minutes of CPU (45s on the client and 75 into the database)
  • JavaScript in DB (MLE) – 1 minute of CPU

SQL Statement
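
To give an idea of the kind of operation that was measured, a pure SQL money transfer could look roughly like the following (table and column names are assumptions here, this is not Franck’s actual demo code):

update accounts set amount = amount - 100 where account_id = 1;
update accounts set amount = amount + 100 where account_id = 2;
commit;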

What is particularly interesting here is that you may think you will offload the database by executing this statement with Java on the client. Such a wish could be motivated by decreasing the CPU power and therefore the Oracle licensing footprint. However it is exactly the opposite that Franck proved: you will need at least twice the CPU power to execute the same operation. Running through different engines, processes and machines does not scale and burns more CPU cycles in each tier.

The difference between SQL and PL/SQL, which run in the same process, is due to context switches.

The difference between SQL and JavaScript on the client is due to context switches on the server, context switches on the client, but also network latency.

Even if a context switch is really fast (have a look at its cost in CPU cycles), Franck (who is working at CERN) explained to us that during this time a proton can do a complete lap of the CERN Large Hadron Collider (27 km).

Anyway, it has been really interesting to see that it will be possible in the future to load JavaScript into an Oracle Database using MLE.


Oracle OpenWorld 2018: Day 1


The first session I attended today was Oracle Active Data Guard: Best Practices and New Features Deep Dive.
This session was given by Nitin Karkhanis, Director of Software Development managing the Data Guard Broker development team, and Mahesh Girkar, Senior Director of Software Development in Oracle’s Database Division, whose team is responsible for developing High Availability features for Data Guard and Active Data Guard.
It was really a very interesting session. It was divided into two parts: the new features in 18c and the new features they will implement for Oracle 19c.

Some Active Data Guard New Features for Oracle 18c

>Multi-Instance Redo Apply now supports block change tracking
>Data Guard and Database Nologging mode
>The database buffer cache is preserved on an Active Data Guard standby during a role change
>Creating private temporary tables is supported in Active Data Guard
>Better protection against failed logins

Some New Data Guard Broker Commands for Oracle 18c

dgmgrl > show all;
dgmgrl > set debug ON | OFF
dgmgrl > set echo ON | OFF
dgmgrl > set time ON | OFF
dgmgrl > validate  database boston spfile;
dgmgrl > validate  network configuration for boston;

Some New features for Data Guard in 19c

>Multi-Instance Redo Apply will work with the In-Memory Column Store
>Global Temporary Tables can now be created and dropped in an Active Data Guard standby
>Tunable Automatic Outage Resolution. The parameters that control the wait time used to determine a hung process will now be documented:
>DATA_GUARD_MAX_IO_TIME and DATA_GUARD_MAX_LONGIO_TIME. In former versions these parameters were hidden.
>In the future, if a flashback is done on the primary database, no action will be needed on the standby side. We just have to mount the standby
and Oracle will do the rest

Some Data Guard Broker Feature in 19c

TRACE_LEVEL replaces DEBUG

dgmgrl > set TRACE_LEVEL USER|SUPPORT 

New Commands to Set Database Parameters

dgmgrl>EDIT DATABASE SET PARAMETER parameter_name=value
dgmgrl>EDIT DATABASE RESET PARAMETER parameter_name=value

New command to export and import broker configuration file

dgmgrl> export configuration to ‘meta.xml’
dgmgrl> import configuration from ‘meta.xml’ 

Properties now pass through to initialization parameters (see the example after the list):
>DataGuardSyncLatency (DATA_GUARD_SYNC_LATENCY)
>StandbyFileManagement (STANDBY_FILE_MANAGEMENT)
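
As an illustration, setting such a pass-through property from the broker should still use the usual EDIT DATABASE syntax, something like the following (the database name is taken from the examples above):

dgmgrl> EDIT DATABASE boston SET PROPERTY StandbyFileManagement='AUTO';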

Another session I attended was Best Practices for Maintaining Oracle RAC/Single Instance Database Environments, presented by Bill Burton (Consulting Member of Technical Staff, Oracle), Scott Jesse (Senior Director, Customer Support, DB Scalability, Security, Networking, SSC, Oracle) and Bryan Vongray (Senior Principal Technical Support Engineer, Oracle).
According to their statistics, more than 50% of opened SRs concern well known issues. So in this session they presented some tools that can help with troubleshooting and monitoring RAC and Single Instance environments:
TFA
ORAchk and EXAchk
With many illustrations, the speakers explained how to use the different tools to diagnose our environments.

The last session I attended was Inside the Head of a Database Hacker by Mark Fallon, Chief Security Architect, Oracle.

The speaker tried, in simple words, to understand the motivations of hackers and the way they act.
This helps to protect any environment. He came to the conclusion that security needs to be a collaboration between:
>Database Security
>Application Security
>Network Security
>End-point Security
>Process
>Employee Education
>Physical Security
>Supply Chain Security

So we will come back tomorrow with another briefing of our day.


SQL Server availability groups, SQL Browser and Shared Memory considerations


A few weeks ago, my colleagues and I discussed availability groups and network considerations for one of our customers, including disabling the SQL Browser service and the shared memory protocol. The point was that disabling both features may lead to unexpected behaviors when creating availability groups.

blog 145 - 0 - AG network banner

Let’s start with the SQL Browser service. It is not uncommon to disable this service at customer shops and to use the SQL Server listen ports directly instead. But if you go through the availability group wizard you will find plenty of blockers for actions that require connecting to the secondary replica, such as adding a database, performing a failover and so on.

Disabling the SQL Browser service doesn’t mean you cannot reach your SQL Server instance by using the named instance format SERVER\INSTANCE. There are some scenarios that work perfectly, including connecting from the local server through shared memory or using SQL Server aliases. Let’s say my infrastructure includes 2 AG replicas, vmtest2012r204\SQL2014 and vmtest2012r205\SQL2014. SQL Browser is disabled and shared memory is enabled on each. There are no aliases either. If you try to connect from vmtest2012r204\SQL2014 by using the named instance format, it will work on the local replica (through shared memory) but it won’t work if you try to connect to the remote replica vmtest2012r205\SQL2014. In the latter case, you will have to use the SERVER,PORT format as shown below:

C:\Users\dab>sqlcmd -S vmtest2012r204\SQL2014 -Q"SELECT 'OK' AS connection"
connection
---------
OK

(1 rows affected)

C:\Users\dab>sqlcmd -S vmtest2012r205\SQL2014 -Q"SELECT 'OK' AS connection"
Sqlcmd: Error: Microsoft ODBC Driver 13 for SQL Server : SQL Server Network Inte
rfaces: Error Locating Server/Instance Specified [xFFFFFFFF]. .
Sqlcmd: Error: Microsoft ODBC Driver 13 for SQL Server : Login timeout expired.
Sqlcmd: Error: Microsoft ODBC Driver 13 for SQL Server : A network-related or in
stance-specific error has occurred while establishing a connection to SQL Server
. Server is not found or not accessible. Check if instance name is correct and i
f SQL Server is configured to allow remote connections. For more information see
 SQL Server Books Online..

C:\Users\dab>sqlcmd -S vmtest2012r205,1453 -Q"SELECT 'OK' AS connection"
connection
---------
OK

 

But I guess this is not a big surprise for you. This kind of configuration works well with availability groups but at the cost of some compromises. Indeed, creating an availability group remains pretty easy and you just have to keep using the SERVER,PORT format when the wizard asks for connection information.

blog 145 - 1 - AG wizard - replica

But the game is different for adding a database to the AG, or in fact for any operation that requires connecting to the replicas. In this case the wizard forces the connection to the secondary replica by using the SERVER\INSTANCE format, so you get stuck at this step.

blog 145 - 2 - AG wizard - add DB

The only way is to go through a T-SQL script (or a PowerShell command) and change the format to SERVER,PORT, as sketched below. Probably something that may be fixed by Microsoft in the future.
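
As a sketch, joining the database on the secondary replica by script means connecting with the SERVER,PORT format (for instance sqlcmd -S vmtest2012r205,1453) and running something like the following (the database name is just an example):

-- on the secondary replica, after the database has been restored WITH NORECOVERY
ALTER DATABASE [MyDatabase] SET HADR AVAILABILITY GROUP = [AG2014];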

Let’s now add disabling the shared memory protocol on each replica to the equation. I have met some customers who disable it to meet their internal best practices, because their applications are not intended to connect locally on the same server as the database engine. At first glance this is not a bad idea, but we may get in trouble with operations performed on availability group architectures. This is at least what we experienced every time we were in this specific context. For instance, if I try to create an availability group, I face the following timeout error message:

blog 145 - 3 - AG wizard - shared memory disabled

This is a pretty weird issue and to get more details, we have to take a look at the cluster log. Here is the interesting sample of messages we may find:

...2018/10/22-20:42:51.436 ERR   [RES] SQL Server Availability Group <AG2014>: [hadrag] ODBC Error: [08001] [Microsoft][SQL Server Native Client 11.0]SQL Server Network Interfaces: Error Locating Server/Instance Specified [xFFFFFFFF].  (268435455)
...2018/10/22-20:42:51.436 ERR   [RES] SQL Server Availability Group <AG2014>: [hadrag] ODBC Error: [HYT00] [Microsoft][SQL Server Native Client 11.0]Login timeout expired (0)
...2018/10/22-20:42:51.436 ERR   [RES] SQL Server Availability Group <AG2014>: [hadrag] ODBC Error: [08001] [Microsoft][SQL Server Native Client 11.0]A network-related or instance-specific error has occurred while establishing a connection to SQL Server. Server is not found or not accessible. Check if instance name is correct and if SQL Server is configured to allow remote connections. For more information see SQL Server Books Online. (268435455)
...2018/10/22-20:42:51.436 INFO  [RES] SQL Server Availability Group <AG2014>: [hadrag] Could not connect to SQL Server (rc -1)
...2018/10/22-20:42:51.436 INFO  [RES] SQL Server Availability Group <AG2014>: [hadrag] SQLDisconnect returns following information
...2018/10/22-20:42:51.451 ERR   [RES] SQL Server Availability Group <AG2014>: [hadrag] ODBC Error: [08003] [Microsoft][ODBC Driver Manager] Connection not open (0)
...2018/10/22-20:42:51.451 ERR   [RES] SQL Server Availability Group <AG2014>: [hadrag] Failed to connect to SQL Server
...2018/10/22-20:42:51.451 ERR   [RHS] Online for resource AG2014 failed.


It seems that RHS.exe, through the resource associated with my AG, is not able to connect to the SQL Server replica during the initialization phase. According to the above cluster log, the ODBC connection seems to be limited to the SERVER\INSTANCE format and, as far as I know, there is no interface to change it with the AG cluster resource DLL (thanks to the Microsoft guys for confirming this point). Therefore, disabling both SQL Browser and shared memory means the AG cannot be brought online, because a communication channel cannot be established between the primary and the cluster service. My friend MVP Christophe Laporte also tried some funny tests with custom DSN connections, without luck. So, the simplest way to fix it, if you want to keep the SQL Browser service disabled, is to enable shared memory on each replica. Another workaround may consist in using SQL aliases, but it leads to a static configuration that requires documenting your architecture well.

In a nutshell, disabling SQL Browser limits the AG operations that can be done through the GUI. Disabling shared memory on top of that has a bigger impact on the underlying WSFC infrastructure that you have to be aware of. According to my tests, this behavior is the same from SQL Server 2012 to SQL Server 2017 (on Windows), regardless of the WSFC version.

Hope this helps!

Cet article SQL Server availability groups, SQL Browser and Shared Memory considerations est apparu en premier sur Blog dbi services.

Oracle Open World 2018 D1: Top Five MySQL Query Tuning Tips


Yesterday (22.10.2018) I attended Janis Griffin’s session “Top Five Query Tuning Tips” at #OOW2018. Janis is Senior DBA / Performance Evangelist at SolarWinds and an ACE Director. She is specialized in performance tuning.

Janis Griffin - MySQL Tuning Tips

She introduced her session by speaking about the challenges of tuning: “Tuning takes time”, “You cannot give enough power if the SQL is inefficient”, “You therefore have to monitor wait time”. It sounds basic to say that it is not worth adding CPU or memory when your SQL statements have a bad execution plan or are simply inefficient, but that is a common reflex I have already observed at customers.

But tuning is hard: you do not always know where to start (which statement to tune first). It requires expertise in many areas, technical but also business. Of course tuning takes time and it is not always a priority for software vendors. Finally, where do you stop once you start tuning a statement?

Janis Griffin - Total Wait Time

Let’s start with the tips…

1. Monitor wait time and understand the total time a query spends in the database. MySQL helps by providing wait events and thread states. Starting with MySQL 5.6 the Performance Schema has been greatly improved, with 32 new tables in version 5.7. You can also use the SYS schema, which is now provided by default with about 100 views.

2. Review the execution plan by using “explain”, “explain extended“, “explain FORMAT=JSON“, the “Optimizer Trace” or “MySQL Workbench“. She also gave us some tips, such as avoiding table aliases since they don’t translate into the plan. The optimizer trace, available since version 5.6.3, can be enabled with:

SET optimizer_trace="enabled=on";
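
Once the trace is enabled, the optimizer decisions for the last analyzed statement can be read from INFORMATION_SCHEMA. Here is a minimal sketch; the analyzed query and its table are hypothetical:

SELECT * FROM orders WHERE customer_id = 42;            -- hypothetical statement to analyze
SELECT trace FROM INFORMATION_SCHEMA.OPTIMIZER_TRACE;   -- JSON trace of the optimizer decisions
SET optimizer_trace="enabled=off";                      -- don't forget to switch it off again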

Janis Griffin - Statement

3. Gather object information. Have a look at the table definition and find out whether it is really a table or a view. Get the size of the table by using:

mysqlshow --status database {table} {column}

Then examine the columns in the WHERE clause and review the selected columns, especially the usage of ‘*’ and scalar functions. Also have a look at the existing indexes (if multi-column, know the left leading column). Make sure the optimizer can use the index: functions on indexed columns can turn the index off, and look out for implicit conversions. Her tip is to check keys and constraints, because they help creating better execution plans.
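
To illustrate the point about functions on indexed columns and implicit conversions, here is a hedged sketch with a hypothetical customers table indexed on last_name and on phone (a VARCHAR column):

-- non-sargable: the function on the indexed column prevents the index from being used
SELECT * FROM customers WHERE UPPER(last_name) = 'GRIFFIN';
-- sargable rewrite: the index on last_name can be used
SELECT * FROM customers WHERE last_name = 'Griffin';

-- implicit conversion: comparing a VARCHAR column to a number also disables the index
SELECT * FROM customers WHERE phone = 4155550100;       -- bad
SELECT * FROM customers WHERE phone = '4155550100';     -- good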

4. Find the driving table. You need to know the size of the actual data set at each step:

  • In Joins (Right, Left, Outer)
  • What are the filtering predicates
  • When is each filtering predicate applied

Also compare the size of the final result set with the data examined. The goal is to reduce the rows examined.

You also have to check whether you are using the best indexes. Keep in mind that adding indexes is not always the right thing to do, since you have to consider insert, update and delete operations. Also consider the usage of covering and partial indexes, as sketched below.
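
A hedged sketch of a covering index, again with a hypothetical orders table:

-- the index contains every column the query needs, so the table itself is never read
CREATE INDEX idx_orders_cust_status ON orders (customer_id, status, amount);

EXPLAIN SELECT status, amount
FROM   orders
WHERE  customer_id = 42;   -- the Extra column should show "Using index"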

5. Engineer Out the Stupid. Look for performance inhibitors such as:

  • Cursor or row-by-row processing
  • Parallel query processing. Not always bad, but have a look at the blog post from Alex Rubin named “increasing slow query performance with parallel query execution”
  • Hard-coded hints
  • Nested views
  • Abuse of wildcards (*) or no WHERE clause
  • Code-based SQL generation (e.g. PHP generators, LINQ, nHibernate)
  • Implicit data conversions
  • Non-sargable / scalar functions (e.g. SELECT … WHERE upper(first_name) = ‘JANIS’)

Finally, you can have a look at Janis’ best practices for MySQL tuning here.

Cet article Oracle Open World 2018 D1: Top Five MySQL Query Tuning Tips est apparu en premier sur Blog dbi services.

Oracle Open World 2018 D2: Mark Hurd’s keynote – Accelerating Growth in the Cloud


During this second day at Oracle Open World 2018 (24.10.2018) I attended Mark Hurd’s keynote named “Accelerating Growth in the Cloud”. Several famous people participated in this keynote, such as:

Ian Bremmer, who is the president and founder of Eurasia Group, according to Oracle “the leading global political risk research and consulting firm”. Mr Bremmer is also the president and founder of GZERO Media.

Sherry Aaholm, who is the Vice President and Chief Information Officer of Cummins Inc. “Cummins Inc. is an American Fortune 500 corporation that designs, manufactures, and distributes engines, filtration, and power generation products” – Wikipedia.

Sherry Aaholm with Mark Hurd

Navindra Yadav, Founder of Tetration Analytics. “Cisco Tetration offers holistic workload protection for multicloud data centers by enabling a zero-trust model using segmentation” – Cisco

Navindra Yadav and Mark Hurd

Thaddeus Arroyo, Chief Executive Officer of AT&T Business. Mr Arroyo is responsible for the company’s integrated global business solutions organization, which serves more than 3 million business customers. “AT&T is the world’s largest telecommunications company, the second largest provider of mobile telephone services, and the largest provider of fixed telephone services in the United States through AT&T Communications.” – Wikipedia

Thaddeus Arroyo with Mark Hurd

Geopolitical analysis with Ian Bremmer

The session started with a videoconference between Mark Hurd and Ian Bremmer on geopolitical topics. China was mentioned as the biggest economy in the world and a technology superpower. The alignment between Chinese companies and the Chinese government was also underlined. Regarding the U.S., they spoke about investment in physical defense versus investment in virtual defense, where there is still a lot to do compared to some other countries.

Disruption as a constant

Mark Hurd then presented a few slides, starting with a short summary named “With disruption as a constant – technology becomes the differentiator”:

  • Data is key asset for business to own, analyze, use and secure
  • Virtual assets will win over physical resources
  • Cyber teams are the new future
  • Cloud and integrated technologies, like AI, help organizations lower costs while driving innovation & improving productivity

Past predictions

He then recapped the predictions he made in 2015/2016 for 2025:

  • 80% of production apps will be in the cloud
  • Two SaaS Suite providers will have 80% market share
  • The number of corporate-owned data centers will have decreased by 80%
  • 80% of IT budgets will be spent on cloud services
  • 80% of IT budgets will be spent on business innovation, and only 20% on system maintenance
  • All enterprise data will be stored in the cloud
  • 100% of application development and testing will be conducted in the cloud
  • Enterprise clouds will be the most secure place for IT processing

and the ones he made in 2017 for 2020:

  • More than 50% of all enterprise data will be managed autonomously and also be more secure
  • Even highly regulated industries will shift 50% of their production workloads to cloud
  • 90% of all enterprise applications will feature integrated AI capabilities
  • The top ERP vendor in the cloud will own more than half of the total ERP market

Then he presented a few predictions that were afterwards echoed by Forbes and Gartner Research, to show that the analysts and the press had followed the same predictions…

  • In 15 months, 80% of all IT budgets will be committed to cloud apps and solutions – Forbes, Louis Columbus, “State of Cloud Adoption and Security”, 2017
  • 80% of enterprises will have shut down their traditional data centers by 2025 – Gartner Research, Dave Cappuccio, “The Data Center is Dead” 2018
  • The Cloud Could Be Your Most Secure Place for Data, Niall Browne CISO, Domo, 2017
  • Oracle, Salesforce, and MSFT together have a 70% share of all SaaS revenue – Forrester Research, 10 Cloud Computing predictions for 2018
  • AI Technologies Will Be in Almost Every New Software Product by 2020 – Gartner Research, Jim Hare, AI development strategies, 2017

Mark Hurd then spoke about AI in a slide named “Business Applications with AI”, where he presented a few statistics in order to better understand how AI (chatbots, blockchain, and so on) can help businesses. Not to mention that all these technologies will be encapsulated in cloud services.

  • ERP Cloud – 30% of a financial analyst’s time (roughly 1 full day a week) is spent doing manual reports in Excel. Using AI, reports become error free and more insightful.
  • HCM Cloud – 35% of a job recruiter’s day is spent sourcing and screening candidates. This could be cut in half and result in improved employee talent.
  • SCM Cloud – 65% of managers’ time is spent manually tracking the shipment of goods. With blockchain, this could be automated for improved visibility and trust.
  • CX Cloud – 60% of phone-support time on customer issues could be avoided altogether. With integrated CX and AI, issues could be addressed in a single call or via a chatbot.

Mark Hurd’s predictions by 2025

Finally he spoke about his own predictions for 2025: by 2025, all cloud apps will include AI.

  • These Cloud apps will further distance themselves from legacy applications.
  • AI will be pervasive and woven into all business apps and platform services.
  • The same will be true for technologies like blockchain.

According to him, by 2025, 85% of interactions with customers will be automated: customer experience is fundamentally changing (and will dramatically improve) with these emerging technologies:

  • AI-based digital assistants increase productivity and humanize experiences
  • AI-driven Analytics helps businesses understand complexity of all customer needs
  • Internet of Things brings customers closer to companies that serve them

New I.T jobs by 2025

Regarding I.T. jobs, Mark Hurd predicted the following:

  • 60% of the I.T Jobs have not been invented yet (But will be by 2025)

and the new jobs in 2025 will be:

  • Data professionals (Analysts, Scientists, Engineers)
  • Robot Supervisor
  • Human-to-Machine UX Specialist
  • Smart City Technology Designer
  • AI-Assisted Healthcare Technician

As a summary, he concluded with a slide named “Better Business, Better I.T.”:

  • Cloud is irrefutable and foundational
  • Next in cloud is accelerated productivity and innovation
  • AI and other technologies will be integrated features
  • Autonomous database software will reduce cost and reduce risk

Mark Hurd during OOW2018

Cet article Oracle Open World 2018 D2: Mark Hurd’s keynote – Accelerating Growth in the Cloud est apparu en premier sur Blog dbi services.

Oracle Open World 2018 D2: Peter Zaitsev – MySQL 8 Field Report


As a former MySQL consultant, during this second day (24.10.2018) I couldn’t miss a session given by Peter Zaitsev, founder and CEO of Percona. This session, named “MySQL 8 Field Report”, is a kind of summary of all the new features shipped with MySQL 8.

Peter Zaitsev

During the first slides, Peter presented performance figures related to utf8mb4, since it is the default character set in version 8. These slides had an Oracle logo on the bottom, that’s why I prefer to make some tests before speaking about these results. However, according to these slides there is a strong performance increase on an OLTP database, in read-only as well as in read-write workloads, compared to MySQL 5.7.

Security

In terms of security Peter spoke about:

  • Roles
  • Breakdown of Super Privileges
  • Password history
  • Faster cached-SHA2 Authentication
  • skip-grants blocks remote connections
  • Multiple Addresses for bind address (8.0.13)
  • Require Password for Password Change (8.0.13)
  • Redo and Undo Logs are now encrypted if Table Encryption is enabled

Following the trend of autonomous databases, MySQL 8 is able to automatically tune the following parameters:

  • innodb_buffer_pool_size
  • innodb_log_file_size
  • innodb_flush_method

if you set innodb_dedicated_server. However, as explained in the documentation: “Only consider enabling this option if your MySQL instance runs on a dedicated server where the MySQL server is able to consume all available system resources. Enabling this option is not recommended if your MySQL instance shares system resources with other applications.”
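
A minimal sketch, assuming MySQL is the only service running on the host: enable the option in my.cnf (it is not dynamic), restart, and check the values the server derived:

-- in my.cnf (assumption: MySQL runs alone on this server):
--   [mysqld]
--   innodb_dedicated_server = ON
-- after a restart, check what the server derived from the available memory:
SHOW GLOBAL VARIABLES
WHERE Variable_name IN ('innodb_buffer_pool_size', 'innodb_log_file_size', 'innodb_flush_method');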

Partial In-Place Update for JSON and invisible Index

In MySQL 8 it is no longer required to do a full rewrite of a field: you can now update a field inside a JSON object in place. However, only the update and removal of elements is supported; full support has been added in maintenance releases.
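
A hedged sketch of such a partial update, with a hypothetical products table holding a JSON column attrs (the in-place optimization applies to JSON_SET/JSON_REPLACE/JSON_REMOVE when the new value is not larger than the old one):

UPDATE products
SET    attrs = JSON_SET(attrs, '$.price', 19.90)   -- only the changed path is rewritten
WHERE  id = 42;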

Thanks to invisible indexes you can test the impact of dropping an index before actually dropping it. You can use the use_invisible_indexes optimizer switch to keep using invisible indexes in a session.
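
A minimal sketch (index and table names are hypothetical):

ALTER TABLE orders ALTER INDEX idx_orders_cust INVISIBLE;   -- the optimizer ignores it from now on
-- if response times suffer, make it visible again instead of rebuilding it:
ALTER TABLE orders ALTER INDEX idx_orders_cust VISIBLE;
-- or keep using invisible indexes in the current session only, for comparison:
SET SESSION optimizer_switch = 'use_invisible_indexes=on';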

Improved Optimizer Cost Model

Peter gave us an interesting link regarding the MySQL 8.0 optimizer: the unofficial MySQL 8.0 Optimizer Guide. I really advise you to have a look at this very interesting website.

Performance Schema

About performance schema MySQL 8.0 provides the following:

Resource Groups

“MySQL supports creation and management of resource groups, and permits assigning threads running within the server to particular groups so that threads execute according to the resources available to the group.” – MySQL Documentation

According to Peter’s slides, MySQL 8.0 is about 100% faster (select and update) with resource groups.
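
A hedged sketch of how resource groups are used (the group name and the thread id are hypothetical):

-- create a group limited to two vCPUs with a low thread priority, e.g. for batch work
CREATE RESOURCE GROUP batch_rg TYPE = USER VCPU = 2-3 THREAD_PRIORITY = 10;

-- assign the current session's work to it...
SET RESOURCE GROUP batch_rg;

-- ...or assign an existing thread by its performance_schema thread id
SET RESOURCE GROUP batch_rg FOR 1234;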

Developer features

  • Instant Add Column (add column without rebuilding table)
Alter table t1 add column d int default 1000, algorithm=instant;
  • Better Handling of Hot Row Contention
  • Descending flag in index definition is no more ignored
    • Allows efficient handling of ORDER BY A ASC, B DESC queries
  • JSON to Table Conversion (Labs)
  • Much Better GIS
  • Functions in DEFAULT (8.0.13)
Create table t2 (a binary(16) default (uuid_to_bin(uuid())));
Create index idx1 ON t1 ((col1+col2));
  • MySQL Document Store
    • Full Text Indexing
    • GeoJSON Support

As a summary, Peter concluded by saying that MySQL 8 looks like a release to be excited about, with a lot of new features both for devs and ops.

Cet article Oracle Open World 2018 D2: Peter Zaitsev – MySQL 8 Field Report est apparu en premier sur Blog dbi services.


Oracle OpenWorld 2018: Day 2


Today is my second day at Oracle OpenWorld 2018. I can now get to the Moscone Center without GPS (cool), and I decided to follow a MySQL session (my boss will be happy). My first session was Using the MySQL Binary Log as a Change Stream by Luis Soares, Software Development Director, Oracle.
The speaker explained what binary logs are.
openday3_1
How to initialize the binary logs and how to manage them
openday3_2
How to inspect them
openday3_3
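
As a rough illustration of what inspecting and managing binary logs looks like, here is a minimal sketch (the file name and retention date are hypothetical):

SHOW BINARY LOGS;                                    -- list the binary log files
SHOW BINLOG EVENTS IN 'binlog.000042' LIMIT 10;      -- peek at the first events of one file
FLUSH BINARY LOGS;                                   -- close the current file and open a new one
PURGE BINARY LOGS BEFORE '2018-10-17 00:00:00';      -- remove files older than this date
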
What Changed on MySQL 8
openday3_4
He also explained how these binary logs can be combined with other tools in a replication setup.

With ProxySQL
openday3_5

With Gh-ost
openday3_6

Using the binary logs we can also undo some transactions (a feature developed not by Oracle but by the community).
openday3_7

And the conclusion
openday3_8
My second session was DBAs Versus Autonomous Databases. It was a very funny session on a very interesting topic. The speaker started with a reminder of the different versions of Oracle since the beginning.
openday3_9
Another funny picture
openday3_10
And the famous topic
openday3_11
And still the famous question
openday3_12
So everybody will understand that the session was very exciting, with many questions.

After this session I decided to follow a session about Oracle Sharding. The session was presented by
Mark Dilman, Senior Director, Software Development, Oracle
Srinagesh Battula, Sr. Principal Product Manager, Oracle
Gairik Chakraborty, Senior Director,Database Administration, Epsilon

They started by defining what sharding is, how to set it up, how queries are managed and so on. You can read this blog to understand what sharding is. Then they talked about the new features in Oracle 19c.

openday3_13

As you can see, the quality of the pictures could be better, but there were a lot of people and it was not easy to take pictures.
Afterwards I visited some stands and that was the end of my day.
See you tomorrow for my Day 3.

Cet article Oracle OpenWorld 2018: Day 2 est apparu en premier sur Blog dbi services.

pgconf.eu finally kicked off


So, finally it started: Magnus kicked off the 10th annual PostgreSQL Conference Europe this morning in Lisbon. With 450 attendees the conference is even bigger this year than it was last year in Warsaw and it will probably be even bigger next year. One can really feel the increasing interest in PostgreSQL in Europe (and probably around the world as well). Even Tom Lane is attending this year.

Conferences are not only about technical content, social events are important as well. You can meet people, have great discussions, enjoy local food and drinks. And that is exactly what we did yesterday evening when the Swiss PostgreSQL community came together for dinner:
sdr

Conferences are not only about fun, sometimes you have to work through your queue. Working at conferences, on the other hand, gives you the possibility to choose nice working places:
sdr

… and of course you have to work hard on preparing the booth:
sdr

But once you’ve done all that you are ready for the conference:
cof
sdr

… and then the mess starts: There is such an impressive line up of speakers, where do you go? Not an easy choice and you will obviously miss one or the other session. But hey, that’s the PostgreSQL community: Everybody is open for questions and discussions, just jump in.

One of the benefits of sponsoring is that you get a big thank you when the conference starts and that you can have your logo on the official t-shirt:
oznor
cof

And that brings us to the final thoughts of this post: Why are we doing that? The answer is quite simple: Without sponsoring, organizing such a big community event is impossible. As you know PostgreSQL is a pure community project so it depends on the community not only on the technical but also on the financial level. When you make money with community projects you should give something back and sponsoring is one way of doing that.

Finally, we are committed to open source technologies. You can see that e.g. in the events we are organizing, on our blog and events such as this one. Three days of great content, great discussion and fun ahead.

Cet article pgconf.eu finally kicked off est apparu en premier sur Blog dbi services.

My first day at the #pgconfeu 2018 in Lisbon


After the first Swiss PostgreSQL community dinner yesterday evening, the conference started this morning. dbi services, as Gold partner of the 10th European conference in Lisbon, got the opportunity to have a booth to present all our Open Infrastructure Services.

IMG_6085

For the occasion we decided to announce this morning our brand new video of our OpenDB Appliance, which is a real success: we already have more than one hundred views and many attendees of the conference came to our booth to get more information about it.

Today I followed many sessions, but one of them was especially interesting for me: “zheap: An answer to PostgreSQL bloat woes” from Amit Kapila.
This presentation introduced the new PostgreSQL storage engine “zheap”, which is currently under development; there is currently no availability plan for this storage engine, I think not before 2020. But I am excited to test this new feature of PostgreSQL.

First, what is this new zheap storage engine? Zheap allows the usage of separate undo storage to guarantee rollbacks, whereas currently PostgreSQL does this by keeping the old and the new rows in the table itself. The problem with keeping both values in the table is that the table will bloat.

The presentation is available here on slideshare : link to the presentation

As an experienced Oracle DBA I wanted to test it. Therefore I asked my colleague Daniel Westermann how I could test it, and he said “it’s easy”. I always hear from the Postgres side that it’s easy, so I said: let’s do it now.

At 17:20 I started to clone the git repository of the project: https://github.com/EnterpriseDB/zheap
30 minutes later, after building and installing it, I was ready for testing.

See below some output of the newly built zheap development database.

02:42:34 postgres@dbi-pg-tun:/u02/pgdata/zheap/pg_log/ [ZHEAP] grep -i undo postgresql-Wed.log 

2018-10-17 02:41:49.498 CEST - 10 - 6544 -  - @ LOG:  background worker "undo worker launcher" (PID 6553) exited with exit code 1

02:42:42 postgres@dbi-pg-tun:/u02/pgdata/zheap/pg_log/ [ZHEAP] grep -i "discard worker" postgresql-Wed.log 

2018-10-17 02:41:52.594 CEST - 1 - 6597 -  - @ LOG:  discard worker started

At startup we see the new “undo worker” and “discard worker” processes in the logfile, which Amit Kapila had just talked about.
So now I will try to create a new table with the storage_engine “zheap”:

02:50:39 postgres@dbi-pg-tun:/u02/pgdata/zheap/pg_log/ [ZHEAP] sqh
psql (12devel dbi services zheap build)
Type "help" for help.

PSQL>  create table t_zheap(c1 int, c2 varchar) with (storage_engine='zheap');
CREATE TABLE
Time: 12.433 ms
PSQL> 

That’s it :-) my first table using the zheap storage engine is created, and I can start testing.
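
As a first idea of the kind of bloat test one might run against it, here is a hedged sketch; the row count and update pattern are made up, and the real tests will follow in the next post:

INSERT INTO t_zheap SELECT i, md5(i::text) FROM generate_series(1, 100000) i;
UPDATE t_zheap SET c2 = md5(c2);                       -- with heap this would roughly double the table
SELECT pg_size_pretty(pg_relation_size('t_zheap'));    -- with zheap the old versions go to the undo instead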

Trust me, I came back from the session at 17:20 and less than 30 minutes later I had a running test system using the zheap storage engine. It is very impressive how fast it is to get access to a PostgreSQL development platform.

Tomorrow I will write a blog post with some tests using zheap, because now it’s time for the pgconf.eu party :-)

elephant

Cet article My first day at the #pgconfeu 2018 in Lisbon est apparu en premier sur Blog dbi services.

Oracle Open World 2018 D3: Replication what’s new in MySQL 8


For this last day (25.10.2018) at Oracle Open World, my first technical session was “Replication: what’s new in MySQL 8”. This session was given by Nuno Carvalho – Oracle MySQL Principal Software Engineer, Vitor Oliveira – Oracle MySQL Performance Architect, and Luis Soares – Oracle MySQL Software Development Director. You can find the list of new features linked with MySQL InnoDB Cluster here.

MySQL - high availabiltiy

They introduced the session with the challenges a database has to face today:

  • We share lots of data
  • All things distributed
  • We are not sharing a few KB anymore but MB
  • Going green requires dynamic and adaptive behavior
  • Moving, transforming and processing data quicker than anyone else means having an edge over competitors
  • We expect service always available even in case of migration/upgrade
  • etc…

Some years ago we solved availability concerns with replication, but replication alone is no longer able to solve today’s challenges. Replication was perfect for generating and maintaining multiple copies of data at one or more sites. The MySQL replication technology has evolved since version 3.23, where replication was asynchronous. Since version 5.5, thanks to the semi-synchronous replication plugin, we have semi-synchronous replication, and now, since versions 5.7.17 and 8.0.1, we have group replication.

MySQL Replication evolution

In order to answer today’s challenges, the solution must fit these requirements:

  • Replicate: The number of servers should grow or shrink dynamically with as little pain as possible
  • Automate: The primary/secondary role assignment has to be automatic. A new primary has to be elected automatically on primary failures. The read/write modes on primary and secondaries have to be set up automatically. A consistent view of which server is the primary has to be provided.
  • Integrate: MySQL has to fit with other technologies such as Hadoop, Kafka, Solr, Lucene, aso…
  • Scale: Replicate between clusters for disaster recovery. For read scale-out, asynchronous read replicas can be connected to the cluster
  • Enhance: Group replication for higher availability. Asynchronous Replication for Read Scale-out. One-stop shell to deploy and manage the cluster. Seamlessly and automatically route the workload to the proper database server in the cluster (in case of failure). Hide failures from the application.

MySQL role change

Enhancements in MySQL 8 (and 5.7)

The following has been enhanced in version 8 and 5.7:

  • Binary log enhancements. Thanks to new metadata it is easy to decode what is in the binary log. This further facilitates connecting MySQL to other systems using the binary log stream. Capturing data changes through the binary log is simplified. There are also more stats showing where the data is/was at a certain point in time.
  • Operations: Preventing updates on replicas that leave the cluster – automatic protection against involuntarily tainting offline replicas. Primary Election Weights – choose the next primary by assigning election weights to the candidates. Trigger Primary Election Online – the user tells the current primary to give up its role and assign it to another server (new in 8.0.13). Relaxed Member Eviction – the user controls the amount of time to wait until the others decide to evict a member from the group.
  • Performance: Highly efficient replication applier – write set parallelization. Fast group replication recovery – a replica is quickly online by using WRITESET. High cluster throughput – more transactions per second while sustaining zero lag on any replica. Efficient replication of JSON documents – replicate only the changed fields of documents (partial JSON updates).
  • Monitoring: Monitor lag with microsecond precision – from the immediate master and for each stage of the replication applier process. Global group stats available on every server – version, role and more (a small example follows below).
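
A minimal example of those global group stats, as exposed through the Performance Schema (the output is of course deployment-specific):

SELECT MEMBER_HOST, MEMBER_PORT, MEMBER_STATE, MEMBER_ROLE, MEMBER_VERSION
FROM   performance_schema.replication_group_members;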

MySQL 8 Group Replication

I finally recommend having a look at the blog from the engineers, where you will find news, technical information and much more: http://mysqlhighavailability.com


Cet article Oracle Open World 2018 D3: Replication what’s new in MySQL 8 est apparu en premier sur Blog dbi services.

Oracle OpenWorld 2018: Day 3


Today my first session was about GDPR: Data Security in the GDPR Era. It was presented by
Joao Nunes, IT Senior Manager, NOS
Tiago Rocha, Database Administrator, “Nos Comunicaões, Sa.”
Eric Lybeck, Director, PwC
The speakers started by presenting what GDPR is: a new law protecting the data of European citizens. Then they explained what this new law changes for companies.
They talked about the GDPR articles related to Oracle Database security.
openday3_0
And they concluded by underlining that the technical part of being compliant with GDPR is not the most important one: companies must above all have well documented processes.
openday3_11
My second session was Oracle Database Security Assessment Tool: Know Your Security Posture Before Hackers Do presented by
Pedro Lopes, DBSAT and EMEA Field Product Manager, Oracle
Marella Folgori, Oracle
Riccardo D’Agostini, Responsabile Progettazione Data Security, Intesa Sanpaolo
This session was about the new Oracle Database Security Assessment Tool (DBSAT), which can help to discover sensitive personal data, identify database users and their entitlements, and understand the configuration and operational security risks. This tool is free for Oracle customers.
I think this picture will help to better understand DBSAT
openday3_1
They also presented new features of the upcoming version. Note that currently only CIS rules are included.
openday3_2
My last session was Multitenant Security Features Clarify DBA Role in DevOps Cloud, presented by
Franck Pachot, Database Engineer, CERN
Pieter Van Puymbroeck, Database Administrator, Exitas NV
Franck Pachot needs no introduction, and as usual the session was exciting. It was about security in a multitenant environment.
The speakers explained how privileges can be restricted using lockdown profiles in a multitenant environment.
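
For readers who have not played with lockdown profiles yet, here is a minimal sketch of the idea; the profile name, the restricted features and the PDB name are hypothetical examples, not what the speakers showed:

-- in the CDB root:
CREATE LOCKDOWN PROFILE app_pdb_profile;
ALTER LOCKDOWN PROFILE app_pdb_profile DISABLE STATEMENT = ('ALTER SYSTEM');
ALTER LOCKDOWN PROFILE app_pdb_profile DISABLE FEATURE = ('NETWORK_ACCESS');
-- assign it to a PDB:
ALTER SESSION SET CONTAINER = APP_PDB;
ALTER SYSTEM SET PDB_LOCKDOWN = app_pdb_profile;
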
And to finish this beautiful picture
openday3_3

Cet article Oracle OpenWorld 2018: Day 3 est apparu en premier sur Blog dbi services.
