
SUMA Server Part 2


In my previous blog post I described how to install and configure SUSE Manager. In this post I will go deeper into topics like repository management and PXE boot configuration.

Configuring repositories

To synchronize products from the SUSE repositories, you first need to configure the Organization Credentials.

To get your username and password for the repositories, go to https://scc.suse.com/ and log in with your SUSE account.

 

Admin -> Setup Wizard -> Organization Credentials -> Add a new credential

After you have successfully added the Organization Credentials, you need to refresh them.

 

PXE Boot

The Preboot Execution Environment (PXE) can make system installation and deployment easier. You can prepare multiple profiles/systems and boot your client directly from the image to install it.
Before we start with profiles, we need to configure our DHCP server to enable PXE boot.

If your DHCP server runs on Linux:

Open /etc/dhcp/dhcpd.conf and add next-server and filename to your subnet.

[root@localhost dhcp]# vi dhcpd.conf

#
# DHCP Server Configuration file.
#   see /usr/share/doc/dhcp-server/dhcpd.conf.example
#   see dhcpd.conf(5) man page
#

default-lease-time 600;
max-lease-time 7200;
ddns-update-style none;
authoritative;
allow booting;
allow bootp;

subnet 192.168.50.0 netmask 255.255.255.0 {
 range 192.168.50.50 192.168.50.200;
 option routers 192.168.50.1;
 option subnet-mask 255.255.255.0;
 option domain-name-servers 8.8.8.8, 8.8.4.4;
 next-server 192.168.50.149;
 filename "pxelinux.0";
}

Save the file and restart your DHCP server:

systemctl restart dhcpd

If your DHCP server runs on Windows Server:

Open the DHCP configuration -> IPv4 -> Scope -> Server Options. You need to add the new options 066 and 067:

066: FQDN or IP of your SUMA server
067: pxelinux.0
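
If you prefer to script this instead of clicking through the console, the DhcpServer PowerShell module can set both options. This is only a sketch, assuming the scope and SUMA server IP from the Linux example above:

# Assumption: run on the Windows DHCP server, scope and addresses taken from the example above
# Option 066 (boot server) points to the SUMA server, option 067 to the PXE boot file
Set-DhcpServerv4OptionValue -ScopeId 192.168.50.0 -OptionId 66 -Value "192.168.50.149"
Set-DhcpServerv4OptionValue -ScopeId 192.168.50.0 -OptionId 67 -Value "pxelinux.0"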

Boot from PXE

After you have configured your DHCP server, you can boot your client from LAN. You should see the following screen:

As we have not configured any system images yet, the list is empty.

Copying ISO files to Server

To be able to install over the network, you need to download the ISO files and put them on the SUMA server. Go to https://www.suse.com/download/sles/ and download the ISO file.

Create needed folders:

mkdir -p /srv/www/htdocs/pub/isos
mkdir -p /srv/www/htdocs/pub/distros/SLES-15-SP2-DVD-x86_64

Copy the file to the SUMA server (/srv/www/htdocs/pub/isos) and mount it:

scp sles15-sp2-x86_64.iso root@192.168.50.3:/srv/www/htdocs/pub/isos
mount -o loop /srv/www/htdocs/pub/isos/sles15-sp2-x86_64.iso /srv/www/htdocs/pub/distros/SLES-15-SP2-DVD-x86_64
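
Note that a loop mount like this does not survive a reboot. If the distribution tree should stay available permanently, an /etc/fstab entry along these lines can be added (paths taken from the commands above; the mount options are a suggestion):

# /etc/fstab - keep the SLES ISO mounted after a reboot
/srv/www/htdocs/pub/isos/sles15-sp2-x86_64.iso /srv/www/htdocs/pub/distros/SLES-15-SP2-DVD-x86_64 iso9660 loop,ro 0 0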

After you have copied and mounted all your ISOs, you need to define the distribution tree path in SUSE Manager. Open your SUMA Server -> Systems -> Autoinstallation -> Distributions.

Make sure that you have synced down the products which you want to deploy!

If you configured it correctly, you will see a green checkmark on the right side.

Create your AutoYaST file

The AutoYaST profile in this section installs a SUSE Linux Enterprise Server system with all default installation options including a default network configuration using DHCP. After the installation is finished, a bootstrap script located on the SUSE Manager server is executed in order to register the freshly installed system with SUSE Manager. You need to adjust the IP address of the SUSE Manager server, the name of the bootstrap script, and the root password according to your environment:

All possible attributes can be found at: https://doc.opensuse.org/projects/autoyast/

Basic AutoYaST file:


<?xml version="1.0"?>
<!DOCTYPE profile>
<profile xmlns="http://www.suse.com/1.0/yast2ns"
         xmlns:config="http://www.suse.com/1.0/configns">
  <general>
    <mode>
      <confirm config:type="boolean">false</confirm>
    </mode>
  </general>
  <networking>
    <keep_install_network config:type="boolean">true</keep_install_network>
  </networking>
  <software>
    <install_recommended config:type="boolean">true</install_recommended>
    <patterns config:type="list">
      <pattern>base</pattern>
    </patterns>
  </software>
  <users config:type="list">
    <user>
      <encrypted config:type="boolean">false</encrypted>
      <fullname>root</fullname>
      <gid>0</gid>
      <home>/root</home>
      <password_settings>
        <expire></expire>
        <flag></flag>
        <inact></inact>
        <max></max>
        <min></min>
        <warn></warn>
      </password_settings>
      <shell>/bin/bash</shell>
      <uid>0</uid>
      <username>root</username>
      <user_password>linux</user_password>
    </user>
  </users>
  <scripts>
    <init-scripts config:type="list">
      <script>
        <interpreter>shell</interpreter>
        <location>http://192.168.1.1/pub/bootstrap/my_bootstrap.sh</location>
      </script>
    </init-scripts>
  </scripts>
</profile>

After you have prepared and modified your AutoYaST file, you need to upload it to the SUMA server: Systems -> Autoinstallation -> Profiles -> Upload Kickstart/AutoYaST File and paste the file contents.

Now let's boot the client again via PXE:

Conclusion

SUSE Manager can be a little bit complicated for beginners, but once you understand the main concepts your learning curve will grow fast.



Oracle Rolling Invalidate Window Exceeded(3)


By Franck Pachot

This extends a previous post (Rolling Invalidate Window Exceeded) where, in summary, the ideas were:

  • When you gather statistics, you want the new executions to take into account the new statistics, which means that the old execution plans (child cursors) should be invalidated
  • You don’t want all child cursors to be invalidated immediately, to avoid a hard parse storm, and this is why this invalidation is rolling: a 5 hour window is defined, starting at the next execution (after the statistics gathering), and a random timestamp is set within it at which a newer execution will hard parse rather than share an existing cursor
  • The “invalidation” term is misleading as it has nothing to do with V$SQL.INVALIDATIONS, which is at parent cursor level. Here the existing plans are still valid. The “rolling invalidation” just makes them non-shareable

In this blog post I’ll share my query to show the timestamps involved:

  • INVALIDATION_WINDOW which is the start of the invalidation (or rather the end of sharing of this cursor) for a future parse call
  • KSUGCTM (Kernel Service User Get Current TiMe) which is the time when non-sharing occurred and a new child cursor has been created (hard parse instead of soft parse)

As usual, here is a simple example:


alter system flush shared_pool;
create table DEMO as select * from dual;
insert into DEMO select * from dual;
commit;
alter system set "_optimizer_invalidation_period"=15 scope=memory;

I have created a demo table and set the invalidation to 15 seconds instead of the 5 hours default.


20:14:19 SQL> exec dbms_stats.gather_table_stats(user,'DEMO');

PL/SQL procedure successfully completed.

20:14:19 SQL> select * from dba_tab_stats_history where table_name='DEMO' order by stats_update_time;

OWNER   TABLE_NAME   PARTITION_NAME   SUBPARTITION_NAME   STATS_UPDATE_TIME
-----   ----------   --------------   -----------------   -----------------------------------
DEMO    DEMO                                              23-FEB-21 08.14.19.698111000 PM GMT

1 row selected.

I’ve gathered the statistics at 20:14:19 but there is no cursor yet to invalidate.


20:14:20 SQL> host sleep 30

20:14:50 SQL> select * from DEMO;

DUMMY
-----
X

1 row selected.

20:14:50 SQL> select child_number,reason from v$sql_shared_cursor where sql_id='0m8kbvzchkytt';

  CHILD_NUMBER REASON
-------------- --------------------------------------------------------------------------------
             0

1 row selected.

I have executed my statement, which created the parent and child cursor, and of course there is no invalidation yet.


20:14:50 SQL> exec dbms_stats.gather_table_stats(user,'DEMO');

PL/SQL procedure successfully completed.

20:14:50 SQL> select * from dba_tab_stats_history where table_name='DEMO' order by stats_update_time;

OWNER   TABLE_NAME   PARTITION_NAME   SUBPARTITION_NAME   STATS_UPDATE_TIME
-----   ----------   --------------   -----------------   -----------------------------------
DEMO    DEMO                                              23-FEB-21 08.14.19.698111000 PM GMT
DEMO    DEMO                                              23-FEB-21 08.14.50.270984000 PM GMT

2 rows selected.

20:14:50 SQL> host sleep 30

20:15:20 SQL> select * from DEMO;

DUMMY
-----
X


1 row selected.

20:15:20 SQL> select child_number,reason from v$sql_shared_cursor where sql_id='0m8kbvzchkytt';

  CHILD_NUMBER REASON
-------------- --------------------------------------------------------------------------------
             0

1 row selected.

I have gathered statistics and ran my statement again. There’s no invalidation yet because the invalidation window starts only at the next parse or execution that occurs after the statistics gathering. This next execution occurred after 20:15:20 and sets the start of the invalidation window. But for the moment, the same child is still shared.


20:15:20 SQL> exec dbms_stats.gather_table_stats(user,'DEMO');

PL/SQL procedure successfully completed.

20:15:20 SQL> select * from dba_tab_stats_history where table_name='DEMO' order by stats_update_time;

OWNER   TABLE_NAME   PARTITION_NAME   SUBPARTITION_NAME   STATS_UPDATE_TIME
-----   ----------   --------------   -----------------   -----------------------------------
DEMO    DEMO                                              23-FEB-21 08.14.19.698111000 PM GMT
DEMO    DEMO                                              23-FEB-21 08.14.50.270984000 PM GMT
DEMO    DEMO                                              23-FEB-21 08.15.20.476025000 PM GMT


3 rows selected.

20:15:20 SQL> host sleep 30

20:15:50 SQL> select * from DEMO;

DUMMY
-----
X

1 row selected.

20:15:50 SQL> select child_number,reason from v$sql_shared_cursor where sql_id='0m8kbvzchkytt';

  CHILD_NUMBER REASON
-------------- --------------------------------------------------------------------------------
             0 <ChildNode><ChildNumber>0</ChildNumber><ID>33</ID><reason>Rolling Invalidate Window Exceeded(2)</reason><size>0x0</size><details>already_processed</details></ChildNode><ChildNode><ChildNumber>0</ChildNumber><ID>33</ID><reason>Rolling Invalidate Window Exceeded(3)</reason><size>2x4</size><invalidation_window>1614111334</invalidation_window><ksugctm>1614111350</ksugctm></ChildNode>
             1

2 rows selected.

I’ve gathered the statistics again, but what matters here is that I’ve run my statement now that the invalidation window has been set (by the previous execution at 20:15:20) and has been reached (I waited 30 seconds, which is more than the 15 second window I’ve defined). This new execution marked the cursor as non-shareable, for the “Rolling Invalidate Window Exceeded(3)” reason, and created a new child cursor.

20:15:50 SQL> select child_number,invalidations,parse_calls,executions,cast(last_active_time as timestamp) last_active_time
    ,timestamp '1970-01-01 00:00:00'+numtodsinterval(to_number(regexp_replace(reason,'.*.invalidation_window>([0-9]*)./invalidation_window>.ksugctm>([0-9]*).*','\1')),'second') invalidation_window
    ,timestamp '1970-01-01 00:00:00'+numtodsinterval(to_number(regexp_replace(reason,'.*([0-9]*)./invalidation_window>.ksugctm>([0-9]*)./ksugctm>.*','\2')),'second') ksugctm
    from v$sql_shared_cursor left outer join v$sql using(sql_id,child_number,address,child_address)
    where reason like '%Rolling Invalidate Window Exceeded(3)%' and sql_id='0m8kbvzchkytt'
    order by sql_id,child_number,invalidation_window desc
    ;

  CHILD_NUMBER   INVALIDATIONS   PARSE_CALLS   EXECUTIONS LAST_ACTIVE_TIME                  INVALIDATION_WINDOW               KSUGCTM
--------------   -------------   -----------   ---------- -------------------------------   -------------------------------   ----------------------------

             0               0             3            2 23-FEB-21 08.15.50.000000000 PM   23-FEB-21 08.15.34.000000000 PM   23-FEB-21 08.15.50.000000000 PM

1 row selected.

So at 20:15:20 the invalidation has been set (but not exposed yet) at a random point within the next 15 seconds (because I changed the 5 hour default), and it is now visible as INVALIDATION_WINDOW: 20:15:34. The next execution after this timestamp created a new child at 20:15:50, which is visible in KSUGCTM but also in LAST_ACTIVE_TIME (even if this child cursor has not been executed, just updated).

The important thing is that those child cursors will not be used again but are still there, increasing the length of the list of child cursors that is read when parsing a new statement with the same SQL text. And this can go up to 8192 if you’ve left the default “_cursor_obsolete_threshold” (which it is recommended to lower – see Mike Dietrich’s blog post).
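
To check how long those lists already are on your own system, a quick look at V$SQL is enough. This is only a sketch and the threshold of 100 versions is an arbitrary value:

-- count the child cursors (versions) per parent cursor
select sql_id, count(*) as versions
from v$sql
group by sql_id
having count(*) > 100
order by versions desc;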

And this also means that you should not gather statistics too often, and this is why GATHER AUTO is the default option. You may lower the STALE_PERCENT for some tables (a very large table with few changes may otherwise not be gathered often enough), but gathering stats on a table every day, even a small one, has a bad effect on cursor versions.
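
For example, instead of scheduling a daily gather for a large table with few changes, you can lower its STALE_PERCENT preference and let the automatic job decide when to gather. A minimal sketch with the DEMO table used above (the 5% value is just an example):

-- the automatic statistics job will now consider DEMO stale after 5% of changed rows
exec dbms_stats.set_table_prefs(user,'DEMO','STALE_PERCENT','5');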


SQL> alter session set nls_timestamp_format='dd-mon hh24:mi:ss';
SQL>

select sql_id,child_number,ksugctm,invalidation_window
    ,(select cast(max(stats_update_time) as timestamp) from v$object_dependency 
      join dba_tab_stats_history on to_owner=owner and to_name=table_name and to_type=2
      where from_address=address and from_hash=hash_value and stats_update_time < ksugctm
     ) last_analyze
    from (
    select sql_id,child_number,address,hash_value
    ,timestamp '1970-01-01 00:00:00'+numtodsinterval(to_number(regexp_replace(reason,'.*.invalidation_window>([0-9]*)./invalidation_window>.ksugctm>([0-9]*).*','\1')),'second') invalidation_window
    ,timestamp '1970-01-01 00:00:00'+numtodsinterval(to_number(regexp_replace(reason,'.*([0-9]*)./invalidation_window>.ksugctm>([0-9]*)./ksugctm>.*','\2')),'second') ksugctm
    from v$sql_shared_cursor left outer join v$sql using(sql_id,child_number,address,child_address)
    where reason like '%Rolling Invalidate Window Exceeded(3)%' --and sql_id='0m8kbvzchkytt'
    ) order by sql_id,child_number,invalidation_window desc;

SQL_ID            CHILD_NUMBER KSUGCTM           INVALIDATION_WINDOW   LAST_ANALYZE
-------------     ------------ ---------------   -------------------   ----------------------------------
04kug40zbu4dm                2 23-feb 06:01:23   23-feb 06:01:04
0m8kbvzchkytt                0 23-feb 21:34:47   23-feb 21:34:25       23-FEB-21 09.34.18.582833000 PM GMT
0m8kbvzchkytt                1 23-feb 21:35:48   23-feb 21:35:23       23-FEB-21 09.35.18.995779000 PM GMT
0m8kbvzchkytt                2 23-feb 21:36:48   23-feb 21:36:22       23-FEB-21 09.36.19.305025000 PM GMT
0m8kbvzchkytt                3 23-feb 21:37:49   23-feb 21:37:32       23-FEB-21 09.37.19.681986000 PM GMT
0m8kbvzchkytt                4 23-feb 21:38:50   23-feb 21:38:26       23-FEB-21 09.38.20.035265000 PM GMT
0m8kbvzchkytt                5 23-feb 21:39:50   23-feb 21:39:32       23-FEB-21 09.39.20.319662000 PM GMT
0m8kbvzchkytt                6 23-feb 21:40:50   23-feb 21:40:29       23-FEB-21 09.40.20.617857000 PM GMT
0m8kbvzchkytt                7 23-feb 21:41:50   23-feb 21:41:28       23-FEB-21 09.41.20.924223000 PM GMT
0m8kbvzchkytt                8 23-feb 21:42:51   23-feb 21:42:22       23-FEB-21 09.42.21.356828000 PM GMT
0m8kbvzchkytt                9 23-feb 21:43:51   23-feb 21:43:25       23-FEB-21 09.43.21.690408000 PM GMT
0sbbcuruzd66f                2 23-feb 06:00:46   23-feb 06:00:45
0yn07bvqs30qj                0 23-feb 01:01:09   23-feb 00:18:02
121ffmrc95v7g                3 23-feb 06:00:35   23-feb 06:00:34

This query joins with the statistics history in order to get an idea of the root cause of the invalidation. I look at the cursor dependencies, and the table statistics. This may be customized with partitions, index names,…

The core message here is that gathering statistics on a table will make its cursors unshareable. If you have, say, 10 versions because of multiple NLS settings and bind variable lengths, and gather the statistics every day, the list of child cursors will increase until it reaches the obsolete threshold. And when the list is long, you will have more pressure on the library cache during attempts to soft parse. If you gather statistics without the automatic job, and do it without ‘GATHER AUTO’, even on small tables where gathering is fast, you increase the number of cursor versions without a reason. The best practice for statistics gathering is keeping the AUTO settings. The query above may help to see the correlation between statistics gathering and rolling invalidation.


[Data]nymizer – Data anonymizer for PostgreSQL


Often there is a requirement to populate a test or development database with data from production, but this comes with a risk: do you really want developers or testers to have access to sensitive data? In a lot of companies this might not be an issue, but for others, sensitive data must not be available to any database other than production. In Oracle there is Data Masking, but there is nothing in Community PostgreSQL which helps you with that. Of course you could develop something on your own, but there is another solution: [Data]nymizer. This tool will produce a native dump file, and sensitive data is masked based on flexible rules. Because the result is a dump file, the size of the dump file might become an issue if your source database is large and you want to dump the whole database. But usually you do not have sensitive data in all the tables, and you have the option to dump only specific tables. Let's have a look at how this can be installed, and how it works.

The installation itself is straightforward:

postgres@debian10pg:/home/postgres/ [pgdev] curl -sSfL https://git.io/pg_datanymizer | sh -s -- -b bin v0.1.0
pg_datanymizer installer: Version v0.1.0 will be installed
pg_datanymizer installer: Successfully installed pg_datanymizer 0.1.0 to bin/pg_datanymizer
postgres@debian10pg:/home/postgres/ [pgdev] ls -l bin/
total 14452
-rwxr-xr-x 1 postgres postgres 14796464 Feb 24 10:51 pg_datanymizer

To check if it works in general, we can print the help:

postgres@debian10pg:/home/postgres/ [pgdev] bin/pg_datanymizer --help
pg_datanymizer 0.1.0

USAGE:
    pg_datanymizer [OPTIONS] 

FLAGS:
        --help       Prints help information
    -V, --version    Prints version information

OPTIONS:
    -f, --file                    Path to dump file, example: /tmp/dump.sql
    -c, --config                  Path to config file. Default: ./config.yml
    -d, --dbname                  database to dump [default: postgres]
    -h, --host                    database server host or socket directory [default: localhost]
    -W, --password                force password prompt (should happen automatically)
        --pg_dump                 pg_dump file location [default: pg_dump]
    -p, --port                    database server port number
    -U, --username                connect as specified database user

ARGS:
     

There are not too many options and you’ll notice that the tool can be used over the network as well. This is quite important, as access to the production host usually is limited.
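
For example, dumping an anonymized copy of a remote production database only needs the connection options shown in the help output above; host, port, user and database names here are placeholders:

bin/pg_datanymizer -h prod-db.example.com -p 5432 -U app_reader -d appdb -c config.yaml -f /tmp/appdb_anonymized.sql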

Before we proceed with the tool we need some data, so let's use a standard pgbench database for this:

postgres@debian10pg:/home/postgres/ [pgdev] psql
psql (14devel)
Type "help" for help.

postgres=# create database test;
CREATE DATABASE
postgres=# \! pgbench -i -s 10 test
dropping old tables...
NOTICE:  table "pgbench_accounts" does not exist, skipping
NOTICE:  table "pgbench_branches" does not exist, skipping
NOTICE:  table "pgbench_history" does not exist, skipping
NOTICE:  table "pgbench_tellers" does not exist, skipping
creating tables...
generating data (client-side)...
1000000 of 1000000 tuples (100%) done (elapsed 1.73 s, remaining 0.00 s)
vacuuming...
creating primary keys...
done in 2.83 s (drop tables 0.00 s, create tables 0.03 s, client-side generate 1.77 s, vacuum 0.33 s, primary keys 0.70 s).
postgres=# \c test
You are now connected to database "test" as user "postgres".
test=# \d
              List of relations
 Schema |       Name       | Type  |  Owner   
--------+------------------+-------+----------
 public | pgbench_accounts | table | postgres
 public | pgbench_branches | table | postgres
 public | pgbench_history  | table | postgres
 public | pgbench_tellers  | table | postgres
(4 rows)
test=# 

The “filler” column of pgbench_accounts is not populated by default. Let's assume we have some sensitive data there, e.g. email addresses:

test=# update pgbench_accounts set filler = 'email@_'||md5(bid::text)||'.com';
UPDATE 1000000
test=# select * from pgbench_accounts limit 5;
  aid   | bid | abalance |                                        filler                                        
--------+-----+----------+--------------------------------------------------------------------------------------
 999947 |  10 |        0 | email@_d3d9446802a44259755d38e6d163e820.com                                         
 999948 |  10 |        0 | email@_d3d9446802a44259755d38e6d163e820.com                                         
 999949 |  10 |        0 | email@_d3d9446802a44259755d38e6d163e820.com                                         
 999950 |  10 |        0 | email@_d3d9446802a44259755d38e6d163e820.com                                         
 999951 |  10 |        0 | email@_d3d9446802a44259755d38e6d163e820.com                                         
(5 rows)

Using a simple configuration file like this:

postgres@debian10pg:/home/postgres/ [pgdev] cat config.yaml 
tables:
  - name: pgbench_accounts
    rules:
      filler:
        template:
          format: user-{{_1}}-{{_2}}
          rules:
            - random_num: {}
            - email:
                kind: Safe

… we can easily obfuscate this:

postgres@debian10pg:/home/postgres/ [pgdev] bin/pg_datanymizer -c config.yaml -U postgres -f output.sql test 
Prepare data scheme...
Fetch tables metadata...
[1 / 4] Prepare to dump table: public.pgbench_tellers
[Dumping: public.pgbench_tellers] [|##################################################|] 100 of 100 rows [100%] (0s)
[Dumping: public.pgbench_tellers] Finished in 0 seconds
[Dumping: public.pgbench_accounts] [|##################################################|] 1016397 of 1016397 rows [100%] (0s)
[Dumping: public.pgbench_accounts] Finished in 54 seconds
[Dumping: public.pgbench_history] [|##################################################|] 0 of 0 rows [100%] (0s)
[Dumping: public.pgbench_history] Finished in 0 seconds
[Dumping: public.pgbench_branches] [|##################################################|] 10 of 10 rows [100%] (0s)
[Dumping: public.pgbench_branches] Finished in 0 seconds
Finishing with indexes...

Looking at the result, the filler column contains email addresses, but not with the original values anymore:

postgres@debian10pg:/home/postgres/ [pgdev] grep "@" output.sql | head -10
999947  10      0       user-17320282186627338435-muriel@example.com
999948  10      0       user-14511192900116306114-rollin@example.org
999949  10      0       user-11496284339692580677-adelle@example.net
999950  10      0       user-1388753146590388317-kyra@example.net
999951  10      0       user-16622047998196191495-hardy@example.org
999952  10      0       user-1728042100917541840-leanna@example.org
999953  10      0       user-16390037134577324059-daniela@example.com
110689  2       0       user-797822007425813191-brendan@example.org
110690  2       0       user-9262399608909070020-hunter@example.org
110691  2       0       user-17500639029911909208-susan@example.net

“Email” is only one of the available rules. Have a look at the Readme for the other options.

Filtering tables, either to be included or excluded, is possible as well:

postgres@debian10pg:/home/postgres/ [pg14] cat config.yaml 
filter:
  only:
    - public.pgbench_accounts

tables:
  - name: pgbench_accounts
    rules:
      filler:
        template:
          format: user-{{_1}}-{{_2}}
          rules:
            - random_num: {}
            - email:
                kind: Safe

Using this configuration, only the pgbench_accounts table will be in the dump file:

postgres@debian10pg:/home/postgres/ [pgdev] bin/pg_datanymizer -c config.yaml -U postgres -f output.sql test 
Prepare data scheme...
Fetch tables metadata...
[1 / 4] Prepare to dump table: public.pgbench_tellers
[Dumping: public.pgbench_tellers] --- SKIP ---
[2 / 4] Prepare to dump table: public.pgbench_history
[Dumping: public.pgbench_history] --- SKIP ---
[3 / 4] Prepare to dump table: public.pgbench_branches
[Dumping: public.pgbench_branches] --- SKIP ---
[4 / 4] Prepare to dump table: public.pgbench_accounts
[Dumping: public.pgbench_accounts] [|##################################################|] 1016397 of 1016397 rows [100%] (0s)
[Dumping: public.pgbench_accounts] Finished in 35 seconds
Finishing with indexes...
Full Dump finished in 35 seconds
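
Because the result is a plain SQL dump, loading it into a test or development database can be done with psql. A minimal sketch, assuming a target database named test_dev:

createdb test_dev
psql -d test_dev -f output.sql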

Really a nice tool, and very flexible, if you have the requirement for data anonymization.


SQL Server: Control the size of your Transaction Log file with Resumable Index Rebuild


Introduction

In this blog post, I will demonstrate how the Resumable capability of Online index rebuild operation can help you to keep the transaction log file size under control.

An index rebuild operation is done in a single transaction that can require significant log space. When doing a rebuild on a large index the transaction log file can grow until you run out of disk space.
On failure, the transaction needs to roll back. You end up with a large transaction log file, no free space on your transaction log file volume, and an index that is not rebuilt.

Since SQL Server 2017 with Enterprise Edition, using the Resumable option of the online index rebuild operation, we can try to keep the transaction log file size under control.

Demo

For the demo, I’ll use the AdventureWorks database with Adam Machanic’s bigAdventures tables.

Index rebuild Log usage

My transaction log file size is 1 GB and it’s empty.

USE [AdventureWorks2019]
go
select total_log_size_in_bytes/1024/1024 AS TotalLogSizeMB
	, (total_log_size_in_bytes - used_log_space_in_bytes)/1024/1024 AS FreeSpaceMB
    , used_log_space_in_bytes/1024./1024  as UsedLogSpaceMB,
    used_log_space_in_percent
from sys.dm_db_log_space_usage;

I now rebuild the index on bigTransactionHistory.

ALTER INDEX IX_ProductId_TransactionDate ON bigTransactionHistory REBUILD
	WITH (ONLINE=ON);


I had a few autogrowth events bringing my file to 3583 MB. The log space required to rebuild this index is about 3500 MB.

Now, let’s say I want to limit my transaction log file to 2 GB.

Index rebuild script

First, I build a table that contains the list of indexes I have to rebuild during my maintenance window. For the demo purpose it’s a very simple one:

select *
from IndexToMaintain;

The idea is to go through all the indexes to rebuild and start a Rebuild with the option RESUMABLE=ON.
When a rebuild is done the value for the RebuildStatus column is updated to 1.

Here is the code:

WHILE (select Count(*) from IndexToMaintain where RebuildStatus = 0) > 0
BEGIN
	DECLARE @rebuild varchar(1000)
		, @DatabaseName varchar(1000)
		, @TableName varchar(1000)
		, @IndexName varchar(1000)
		, @id int

	select @DatabaseName = DatabaseName
		, @TableName = TableName
		, @IndexName = IndexName
		, @id = id
	from IndexToMaintain 
	where RebuildStatus = 0;

	SET @rebuild = CONCAT('ALTER INDEX ', @IndexName, ' ON ',@DatabaseName, '.dbo.', @TableName, ' REBUILD WITH (ONLINE=ON, RESUMABLE=ON);')
	
	exec(@rebuild)

	UPDATE IndexToMaintain SET RebuildStatus = 1 where id = @id;
END

The commands executed will look like this.

ALTER INDEX IX_ProductId_TransactionDate ON bigTransactionHistory REBUILD
	WITH (ONLINE=ON, RESUMABLE=ON);

The Job is scheduled to be run at a “high” frequency (depending on the file size) during the defined maintenance window. For example, it could be every 5 minutes between 1am and 3am.

We don’t need to use ALTER INDEX with RESUME to resume an index rebuild; we can just execute the original ALTER INDEX command again, as found in the DMV. This is very useful and simplifies this kind of script.
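
While a resumable rebuild is running or paused, its state and progress can be followed in the same DMV. A minimal monitoring sketch:

-- state_desc shows RUNNING or PAUSED, percent_complete the rebuild progress
SELECT name, sql_text, state_desc, percent_complete, start_time, last_pause_time
FROM AdventureWorks2019.sys.index_resumable_operations;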

Alert on Log Space usage

To contain the transaction log file size I create an Agent Alert that will be triggered when the file is 50% used. In response to this Alert, it will execute another Job with 2 steps.

The first one checks the DMV index_resumable_operations for any running resumable index operation and pauses it.

IF EXISTS (
	select *
	from AdventureWorks2019.sys.index_resumable_operations
	where state_desc = 'RUNNING'
)
BEGIN
	DECLARE @sqlcmd varchar(1000)	
	select @sqlcmd=CONCAT('ALTER INDEX ', iro.name, ' ON ', OBJECT_NAME(o.object_id), ' PAUSE;')
	from AdventureWorks2019.sys.index_resumable_operations AS iro
		join sys.objects AS o
			on iro.object_id = o.object_id
	where iro.state_desc = 'RUNNING';

	EXEC(@sqlcmd)
END

The second step will then perform a Log backup to free up the transaction log space inside the file.

DECLARE @backupFile varchar(1000) 
SET @backupFile = 'C:\Backup\AdventureWorks2019_'+replace(convert(varchar(20),GetDate(),120), ':', '_')+'.trn' 
BACKUP LOG AdventureWorks2019 TO DISK = @backupFile

The command to be executed by this Job:

ALTER INDEX IX_ProductId_TransactionDate ON bigTransactionHistory PAUSE;

Running the Rebuild

I set the RebuildStatus value for my index at 0 and enable the Job (scheduled to run every minute). It starts to run at 13:04.
As we can see in the Job history the index rebuild job ran twice (around 23s) with a failed status. This means that during rebuild it was stopped by the other job doing a PAUSE followed by a log backup.
The third time it ran it could finish rebuilding the index, set the RebuildStatus to 1, and quit successfully. The Job triggered by the alert has been run twice. Two transaction log backups have been performed. While doing the rebuild we managed to keep the transaction log file at a 2 GB size compared to the 3.5 GB it would use without the Resumable feature.

Conclusion

This demo was just an example of how the resumable option of index rebuild could be used to contain the transaction log file size during index maintenance.
Obviously, this solution is not usable as-is for production. You will find the code on my GitHub if you want to play with it.
I hope you found this blog interesting. Feel free to give me feedback in the comments below.

 


Oracle Database Appliance: what have you missed since X3/X4/X5?


Introduction

ODA started to become popular with the X3-2 and X4-2 in 2013/2014. These 2 ODAs were very similar. The X5-2 from 2015 was different, with 3.5 inch disks instead of 2.5 inch and additional SSDs for small databases (FLASH diskgroup). All these 3 ODAs were running 11gR2 and 12cR1 databases and were managed by the oakcli binary. If you're still using these old machines, you should know that there are a lot of differences compared to modern ODAs. Here is an overview of what has changed on these appliances.

Single-node ODAs

Starting from X6, ODAs are also available in “lite” versions, that is, single-node ODAs. The benefits are real: way cheaper than 2-node ODAs (now called High Availability ODAs), no need for RAC complexity, easy plug-in (power supply and network and that's it), cheaper Disaster Recovery, faster deployment, etc. Most of the ODAs sold today are single-node ODAs, as Real Application Clusters is becoming less and less popular. Today, the ODA family is composed of 2 lite versions, X8-2S and X8-2M, and one HA version, X8-2HA.

Support for Standard Edition

Up to X5, ODAs only supported Enterprise Edition, meaning that the base price was more likely a 6-digit figure in $/€/CHF if you packed the server with 1 EE PROC license. With Standard Edition, the base price is “only” one third of that (X8-2S with 1 SE2 PROC license).

Full SSD storage

I/Os have always been a bottleneck for databases. X6 and later ODAs are mainly full SSD servers. “Lite” ODAs only run on NVMe SSDs (the fastest storage solution for now), and HA ODAs are available in both configurations: SSD (High Performance) or a mix of SSD and HDD (High Capacity), the latter being quite rare. Even the smallest ODA X8-2S with only 2 NVMe SSDs will be faster than any other disk-based ODA.

Higher TeraByte density and flexible disk configuration

For sure, comparing a 5-year old ODA to X8 is not fair, but ODA X3 and X4 used to pack 18TB in 4U while ODA X8-2M has up to 75TB in 2U. Some customers didn't choose ODA 5 years ago because of the limited capacity; this is no longer an issue today.

Another point is that the storage configuration is more flexible. With ODA X8-2M you are able to add disks in pairs, and with ODA X8-2HA you can add 5-disk packs. There is no longer the need to double the capacity as we did on X3/X4/X5 (and you could only do it once).

Furthermore, you can now choose an accurate disk split between DATA and RECO (+/-1%) compared to the DATA/RECO options on X3-X4-X5: 40/60 or 80/20.

Web GUI

A real appliance needs a real GUI. X6 introduced the ODA Web GUI, a basic GUI for basic ODA functions (mainly creation and deletion of dbhomes and databases), and this GUI has become more and more capable over the past years. Even if some actions are still missing, the GUI is now quite powerful and also user-friendly. And you can still use the command line (odacli) if you prefer.

Smart management

ODA now has a repository and everything is ordered and referenced in that repository: each database, dbhome, network and job is identified with a unique id. And all tasks are background jobs with a verbose status.

Next-gen virtualization support

With the old HA models you had to choose between bare-metal mode and virtualized mode, the latter being for running additional virtual machines for purposes other than databases. But the databases were then also running in a single dedicated VM. Virtualized mode relied on OVM technology, soon deprecated and now replaced with OLVM. OLVM brings both the advantages of a virtualized ODA (running additional VMs) and of a bare-metal ODA (running databases on bare metal). And it relies on KVM instead of Xen, which is better because it's part of the Linux operating system.

Data Guard support

It’s quite a new feature, but it’s already a must-have. The command line interface (odacli) is now able to create and manage a Data Guard configuration, and even do the duplicate and the switchover/failover. It’s so convenient that it’s a key benefit of the ODA compared to other platforms. Please have a look at this blog post for a test case. If you’re used to configuring Data Guard, you will probably appreciate this feature a lot.

Performance

ODA has always been a great challenger compared to other platforms. On modern ODAs, NVMe SSDs associated with high-speed cores (as soon as you limit the number of cores in use in the ODA to match your license – please have a look at how to do this) make the ODA a strong performer, even compared to EXADATA. Don’t miss that point: your databases will probably run better on ODA than on anything else.

Conclusion

If you’re using Oracle databases, you should probably consider ODA again for your short list. It’s not the perfect solution, and some configurations cannot be addressed by ODA, but it brings many more advantages than drawbacks. And now there is a complete range of models for each need. If your next infrastructure is not in the Cloud, it’s probably with ODAs.


Automate CNOs and VCOs for SQL Server AAG


During the installation of a new SQL Server environment in a project, we wanted to automate the whole deployment and configuration process when installing a new SQL Server Always On Availability Group (AAG).
This installation requires prestaging cluster computer objects in Active Directory Domain Services, called Cluster Name Objects (CNOs) and Virtual Computer Objects (VCOs).
For more information on the prestage process, please read this Microsoft article.

In this blog, we will see how to automate the procedure through PowerShell scripts. The ActiveDirectory module is required.

CNO Creation

First, you need an account with the appropriate permissions to create objects in a specific OU of the domain.
With this account, you can create the CNO object as follows:

# To configure following your needs
$Ou1='CNO-VCO';
$Ou2='MSSQL';
$DC1='dbi';
$DC2='test';
$ClusterName='CLST-PRD1';
$ClusterNameFQDN="$($ClusterName).$($DC1).$($DC2)";

# Test if the CNO exists
If (-not (Test-path "AD:CN=$($ClusterName),OU=$($Ou1),OU=$($Ou2),DC=$($DC1),DC=$($DC2)")){
	# Create CNO for Windows Cluster
	New-ADComputer -Name "$ClusterName" `
          -SamAccountName "$ClusterName" `
            -Path "OU=$($Ou1),OU=$($Ou2),DC=$($DC1),DC=$($DC2)" `
              -Description "Failover cluster virtual network name account" `
                 -Enabled $false -DNSHostName $ClusterNameFQDN;

	# Wait for AD synchronization
	Start-Sleep -Seconds 20;
};

Once the CNO is created, we have to configure the correct permissions. We have to give the account we will use for the creation of the Windows Server Failover Cluster (WSFC) the correct Access Control List (ACL) entries, so that it is able to claim the object during the WSFC installation process.

# Group Account use for the installation
$GroupAccount='MSSQL-Admins';

# Retrieve existing ACL on the CNO
$acl = Get-Acl "AD:$((Get-ADComputer -Identity $ClusterName).DistinguishedName)";

# Create a new access rule which will give to the installation account Full Control on the object
$identity = ( Get-ADGroup -Identity $GroupAccount).SID;
$adRights = [System.DirectoryServices.ActiveDirectoryRights] "GenericAll";
$type = [System.Security.AccessControl.AccessControlType] "Allow";
$inheritanceType = [System.DirectoryServices.ActiveDirectorySecurityInheritance] "All";
$ACE = New-Object System.DirectoryServices.ActiveDirectoryAccessRule $identity,$adRights,$type,$inheritanceType;

# Add the new acess rule to the existing ACL, then set the ACL on the CNO to save the changes
$acl.AddAccessRule($ace); 
Set-acl -aclobject $acl "AD:$((Get-ADComputer -Identity $ClusterName).DistinguishedName)";

Here, our CNO is created disabled, with the correct permissions we require for the installation.
We now need to create its DNS entry, and give the CNO read/write permissions on it.

# Specify the IP address the Cluster will use
$IPAddress='192.168.0.2';

# Computer Name of the AD / DNS server name
$ADServer = 'DC01';

Add-DnsServerResourceRecordA -ComputerName $ADServer -Name $ClusterName -ZoneName "$($DC1).$($DC2)" -IPv4Address $IPAddress;

#Retrieve ACl for DNS Record
$acl = Get-Acl "AD:$((Get-DnsServerResourceRecord -ComputerName $ADServer -ZoneName "$($DC1).$($DC2)" -Name $ClusterName).DistinguishedName)";

#Retrieve SID Identity for CNO to update in ACL
$identity = (Get-ADComputer -Identity $ClusterName).SID;

# Construct ACE for Generic Read
$adRights = [System.DirectoryServices.ActiveDirectoryRights] "GenericRead";
$type = [System.Security.AccessControl.AccessControlType] "Allow";
$inheritanceType = [System.DirectoryServices.ActiveDirectorySecurityInheritance] "All";
$ACE = New-Object System.DirectoryServices.ActiveDirectoryAccessRule $identity,$adRights,$type,$inheritanceType;
$acl.AddAccessRule($ace);

# Construct ACE for Generic Write
$adRights = [System.DirectoryServices.ActiveDirectoryRights] "GenericWrite";
$type = [System.Security.AccessControl.AccessControlType] "Allow";
$inheritanceType = [System.DirectoryServices.ActiveDirectorySecurityInheritance] "All";
$ACE = New-Object System.DirectoryServices.ActiveDirectoryAccessRule $identity,$adRights,$type,$inheritanceType;
$acl.AddAccessRule($ace);

#Update ACL for DNS Record of the CNO
Set-acl -aclobject $acl "AD:$((Get-DnsServerResourceRecord  -ComputerName $ADServer  -ZoneName "$($DC1).$($DC2)" -Name $ClusterName).DistinguishedName)";

At this point, the installation process will be able to claim the CNO while creating the new Cluster.
The prestaging of the CNO object is completed.

VCO Creation

The creation of the VCO, used by the AAG for its Listener, is quite similar.
As there is no additional complexity compared to the creation of the CNO, here is the whole code:

# To configure following your needs
$Ou1='CNO-VCO';
$Ou2='MSSQL';
$DC1='dbi';
$DC2='test';
$ListenerName='LSTN-PRD1';
$ListenerNameFQDN="$($ListenerName).$($DC1).$($DC2)";
$IPAddress='192.168.0.3';
$ADServer = 'DC01';

If (-not (Test-path "AD:CN=$($ListenerName),OU=$($Ou1),OU=$($Ou2),DC=$($DC1),DC=$($DC2)")){
	# Create VCO for AAG
	New-ADComputer -Name "$ListenerName" -SamAccountName "$ListenerName" -Path "OU=$($Ou1),OU=$($Ou2),DC=$($DC1),DC=$($DC2)" -Description "AlwaysOn Availability Group Listener Account" -Enabled $false -DNSHostName $ListenerNameFQDN;

	# Wait for AD synchronization
	Start-Sleep -Seconds 20;
};

# Retrieve existing ACL on the VCO
$acl = Get-Acl "AD:$((Get-ADComputer -Identity $ListenerName).DistinguishedName)";

# Create a new access rule which will give CNO account Full Control on the object
$identity = (Get-ADComputer -Identity $ClusterName).SID;
$adRights = [System.DirectoryServices.ActiveDirectoryRights] "GenericAll";
$type = [System.Security.AccessControl.AccessControlType] "Allow";
$inheritanceType = [System.DirectoryServices.ActiveDirectorySecurityInheritance] "All";
$ACE = New-Object System.DirectoryServices.ActiveDirectoryAccessRule $identity,$adRights,$type,$inheritanceType;

# Add the ACE to the ACL, then set the ACL to save the changes
$acl.AddAccessRule($ace);
Set-acl -aclobject $acl "AD:$((Get-ADComputer -Identity $ListenerName).DistinguishedName)";

# Create a new DNS entry for the Listener
Add-DnsServerResourceRecordA -ComputerName $ADServer -Name $ListenerName -ZoneName "$($DC1).$($DC2)" -IPv4Address $IPAddress;

# We have to give the CNO the access to the DNS record
$acl = Get-Acl "AD:$((Get-DnsServerResourceRecord -ComputerName $ADServer -ZoneName "$($DC1).$($DC2)" -Name $ListenerName).DistinguishedName)";

#Retrieve SID Identity for CNO to update in ACL
$identity = (Get-ADComputer -Identity $ClusterName).SID;

# Construct ACE for Generic Read
$adRights = [System.DirectoryServices.ActiveDirectoryRights] "GenericRead";
$type = [System.Security.AccessControl.AccessControlType] "Allow";
$inheritanceType = [System.DirectoryServices.ActiveDirectorySecurityInheritance] "All";
$ACE = New-Object System.DirectoryServices.ActiveDirectoryAccessRule $identity,$adRights,$type,$inheritanceType;
$acl.AddAccessRule($ace);

# Construct ACE for Generic Write
$adRights = [System.DirectoryServices.ActiveDirectoryRights] "GenericWrite";
$type = [System.Security.AccessControl.AccessControlType] "Allow";
$inheritanceType = [System.DirectoryServices.ActiveDirectorySecurityInheritance] "All";
$ACE = New-Object System.DirectoryServices.ActiveDirectoryAccessRule $identity,$adRights,$type,$inheritanceType;
$acl.AddAccessRule($ace);

#Update ACL for DNS Record of the CNO
Set-acl -aclobject $acl "AD:$((Get-DnsServerResourceRecord  -ComputerName $ADServer -ZoneName "$($DC1).$($DC2)" -Name $ListenerName).DistinguishedName)";

In this blog, we saw how to automate the creation and configuration of CNOs and VCOs in AD/DNS.
This is useful when you have several Clusters to install and several Listeners to configure, and you want to make sure there are no mistakes while saving time.
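
To double-check the prestaged objects afterwards, a small verification step can be added at the end of the script. This is only a sketch, reusing the OU variables defined above:

# List the prestaged computer objects in the CNO/VCO OU (they stay disabled until claimed)
Get-ADComputer -Filter * -SearchBase "OU=$($Ou1),OU=$($Ou2),DC=$($DC1),DC=$($DC2)" |
    Select-Object Name, Enabled, DNSHostName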


Be careful with prepared transactions in PostgreSQL


PostgreSQL gives you the possibility of two-phase commit. You might need that if you want an atomic distributed commit. If you check the PostgreSQL documentation there is a clear warning about using this kind of transactions: “Unless you’re writing a transaction manager, you probably shouldn’t be using PREPARE TRANSACTION”. If you really need to use them, you need to be very careful that prepared transactions are committed or rolled back as soon as possible. In other words, you need a mechanism that monitors the prepared transactions in your database and takes appropriate action if they are kept open too long. If this happens you will run into various issues and it is not immediately obvious where your issues come from.
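
Keep in mind that prepared transactions are disabled by default: max_prepared_transactions is 0 and PREPARE TRANSACTION will fail until it is raised. The value below is just an example, and a restart of the instance is required for it to take effect:

-- requires a restart of the instance to take effect
alter system set max_prepared_transactions = 10;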

To start with, let's create a simple prepared transaction:

postgres=# begin;
BEGIN
postgres=*# create table t1 (a int);
CREATE TABLE
postgres=*# insert into t1 values (1);
INSERT 0 1
postgres=*# prepare transaction 'abc';
PREPARE TRANSACTION

From this point on, the transaction is no longer associated with the session. You can verify that easily if you try to commit or roll back the transaction:

postgres=# commit;
WARNING:  there is no transaction in progress
COMMIT

This also means that the “t1” table that was created before we prepared the transaction is not visible to us:

postgres=# select * from t1;
ERROR:  relation "t1" does not exist
LINE 1: select * from t1;
                      ^

Although we are not in any visible transaction anymore, there are locks in the background because of our prepared transaction:

postgres=# select * from pg_locks where database = (select oid from pg_database where datname = 'postgres') and mode like '%Exclusive%';
 locktype | database | relation | page | tuple | virtualxid | transactionid | classid | objid | objsubid | virtualtransaction | pid |        mode         | granted | fastpath | waitstart 
----------+----------+----------+------+-------+------------+---------------+---------+-------+----------+--------------------+-----+---------------------+---------+----------+-----------
 relation |    12969 |    24582 |      |       |            |               |         |       |          | -1/562             |     | RowExclusiveLock    | t       | f        | 
 relation |    12969 |    24582 |      |       |            |               |         |       |          | -1/562             |     | AccessExclusiveLock | t       | f        | 
(2 rows)

There is one AccessExclusiveLock lock, which is the lock on the “t1” table. The other lock, “RowExclusiveLock”, is the lock that protects the row we inserted above. How can we know that? Well, currently this is only a guess, as the “t1” table is not visible:

postgres=# select relname from pg_class where oid = 24582;
 relname 
---------
(0 rows)

Once we commit the prepared transaction, we can verify that it really was about “t1”:

postgres=# commit prepared 'abc';
COMMIT PREPARED
postgres=# select relname from pg_class where oid = 24582;
 relname 
---------
 t1
(1 row)

postgres=# select * from t1;
 a 
---
 1
(1 row)

We can also confirm that by taking a look at the locks again:

postgres=# select * from pg_locks where database = (select oid from pg_database where datname = 'postgres') and mode like '%Exclusive%';
 locktype | database | relation | page | tuple | virtualxid | transactionid | classid | objid | objsubid | virtualtransaction | pid | mode | granted | fastpath | waitstart 
----------+----------+----------+------+-------+------------+---------------+---------+-------+----------+--------------------+-----+------+---------+----------+-----------
(0 rows)

These locks are gone as well. So, not a big deal: as soon as the prepared transaction is committed all is fine. This is the good case, and if it goes like that you will probably not hit any issue.

Let's create another prepared transaction:

postgres=# begin;
BEGIN
postgres=*# insert into t1 values(2);
INSERT 0 1
postgres=*# prepare transaction 'abc';
PREPARE TRANSACTION

First point to remember: Once you create a prepared transaction it is fully stored on disk:

postgres=# \! ls -la $PGDATA/pg_twophase/*
-rw------- 1 postgres postgres 212 Feb 26 11:24 /u02/pgdata/DEV/pg_twophase/00000233

Once it is committed the file is gone:

postgres=# commit prepared 'abc';
COMMIT PREPARED
postgres=# \! ls -la $PGDATA/pg_twophase/
total 8
drwx------  2 postgres postgres 4096 Feb 26 11:26 .
drwx------ 20 postgres postgres 4096 Feb 26 10:49 ..

Why is that? The answer is that a prepared transaction can even be committed or rolled back if the server crashes. But this also means that prepared transactions are persistent across restarts of the instance:

postgres=# begin;
BEGIN
postgres=*# insert into t1 values(3);
INSERT 0 1
postgres=*# prepare transaction 'abc';
PREPARE TRANSACTION
postgres=# \! pg_ctl restart 
waiting for server to shut down.... done
server stopped
waiting for server to start....2021-02-26 11:28:51.226 CET - 1 - 10576 -  - @ LOG:  redirecting log output to logging collector process
2021-02-26 11:28:51.226 CET - 2 - 10576 -  - @ HINT:  Future log output will appear in directory "pg_log".
 done
server started
postgres=# \! ls -la  $PGDATA/pg_twophase/
total 12
drwx------  2 postgres postgres 4096 Feb 26 11:28 .
drwx------ 20 postgres postgres 4096 Feb 26 11:28 ..
-rw-------  1 postgres postgres  212 Feb 26 11:28 00000234

Is that an issue? Imagine someone prepared a transaction and forgot to commit or roll it back. A few days later someone wants to modify the application and tries to add a column to the “t1” table:

postgres=# alter table t1 add column b text;

This will be blocked for no obvious reason. Looking at the locks once more:

 locktype | database | relation | page | tuple | virtualxid | transactionid | classid | objid | objsubid | virtualtransaction |  pid  |        mode         | granted | fastpath |           waitstart           
----------+----------+----------+------+-------+------------+---------------+---------+-------+----------+--------------------+-------+---------------------+---------+----------+-------------------------------
 relation |    12969 |    24582 |      |       |            |               |         |       |          | 3/4                | 10591 | AccessExclusiveLock | f       | f        | 2021-02-26 11:30:30.303512+01
 relation |    12969 |    24582 |      |       |            |               |         |       |          | -1/564             |       | RowExclusiveLock    | t       | f        | 
(2 rows)

We can see that pid 10591 is trying to get the lock but cannot get it (granted=’f’). The other entry has no pid, and this is the prepared transaction. The pid will always be empty for prepared transactions, so if you already know this, it might point you to the correct solution. If you don’t, then you are almost stuck. There is no session you can terminate, as nothing is reported about it in pg_stat_activity:

postgres=# select datid,datname,pid,wait_event_type,wait_event,state,backend_type from pg_stat_activity ;
 datid | datname  |  pid  | wait_event_type |     wait_event      | state  |         backend_type         
-------+----------+-------+-----------------+---------------------+--------+------------------------------
       |          | 10582 | Activity        | AutoVacuumMain      |        | autovacuum launcher
       |          | 10584 | Activity        | LogicalLauncherMain |        | logical replication launcher
 12969 | postgres | 10591 | Lock            | relation            | active | client backend
 12969 | postgres | 10593 |                 |                     | active | client backend
       |          | 10580 | Activity        | BgWriterHibernate   |        | background writer
       |          | 10579 | Activity        | CheckpointerMain    |        | checkpointer
       |          | 10581 | Activity        | WalWriterMain       |        | walwriter
(7 rows)

You will not see any blocking sessions (blocked_by=0):

postgres=# select pid
postgres-#      , usename
postgres-#      , pg_blocking_pids(pid) as blocked_by
postgres-#      , query as blocked_query
postgres-#   from pg_stat_activity
postgres-#   where cardinality(pg_blocking_pids(pid)) > 0;
  pid  | usename  | blocked_by |           blocked_query           
-------+----------+------------+-----------------------------------
 10591 | postgres | {0}        | alter table t1 add column b text;

Even if you restart the instance the issue will persist. The only solution is to either commit or roll back the prepared transactions:

postgres=# select * from pg_prepared_xacts;
 transaction | gid |           prepared            |  owner   | database 
-------------+-----+-------------------------------+----------+----------
         564 | abc | 2021-02-26 11:28:37.362649+01 | postgres | postgres
(1 row)
postgres=# rollback prepared 'abc';
ROLLBACK PREPARED
postgres=# 

As soon as this is completed, the other session will be able to complete its work:

postgres=# alter table t1 add column b text;
ALTER TABLE

Remember: when things look really weird, it might be because you have ongoing prepared transactions.
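
A simple building block for the monitoring mentioned at the beginning is to regularly check pg_prepared_xacts for transactions that have been pending for too long. The 5 minute threshold below is an arbitrary example:

-- prepared transactions older than 5 minutes should be investigated
select gid, owner, database, prepared, now() - prepared as age
  from pg_prepared_xacts
 where prepared < now() - interval '5 minutes'
 order by prepared;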


Oracle Blockchain Tables: COMMIT-Time


Oracle Blockchain Tables are available now with Oracle 19.10 (see Connor’s blog on it). They are part of all editions and do not need any specific license, i.e. whenever we need to store data in a table which should never be updated anymore, and we have to ensure the data cannot be tampered with, then blockchain tables should be considered as an option. As Oracle writes in the documentation that blockchain tables could e.g. be used for “audit trails”, I thought I would test them by archiving unified audit trail data. Let me share my experience:

First of all I set up a 19c database so that it supports blockchain tables:

– Installed 19.10.0.0.210119 (patch 32218454)
– Set COMPATIBLE=19.10.0 and restarted the DB
– Installed patch 32431413

REMARK: All tests I’ve done with 19c have been done with Oracle 21c on the Oracle Cloud as well to verify that results are not caused by the backport of blockchain tables to 19c.

Creating the BLOCKCHAIN TABLE:

Blockchain tables do not support the CREATE TABLE AS SELECT syntax:


create blockchain table uat_copy_blockchain2
no drop until 0 days idle
no delete until 31 days after insert
hashing using "sha2_512" version v1
tablespace audit_data
as select * from unified_audit_trail;

ERROR at line 6:
ORA-05715: operation not allowed on the blockchain table

I.e. I have to pre-create the blockchain table and insert with “insert… select”:


CREATE blockchain TABLE uat_copy_blockchain 
   ("AUDIT_TYPE" VARCHAR2(64),
	"SESSIONID" NUMBER,
	"PROXY_SESSIONID" NUMBER,
	"OS_USERNAME" VARCHAR2(128),
...
	"DIRECT_PATH_NUM_COLUMNS_LOADED" NUMBER,
	"RLS_INFO" CLOB,
	"KSACL_USER_NAME" VARCHAR2(128),
	"KSACL_SERVICE_NAME" VARCHAR2(512),
	"KSACL_SOURCE_LOCATION" VARCHAR2(48),
	"PROTOCOL_SESSION_ID" NUMBER,
	"PROTOCOL_RETURN_CODE" NUMBER,
	"PROTOCOL_ACTION_NAME" VARCHAR2(32),
	"PROTOCOL_USERHOST" VARCHAR2(128),
	"PROTOCOL_MESSAGE" VARCHAR2(4000)
   )
no drop until 0 days idle
no delete until 31 days after insert
hashing using "sha2_512" version v1
tablespace audit_data;

Table created.
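
As a side note, the retention settings and the hash algorithm of the new table can be checked in the data dictionary. The column list below is an assumption based on the 19c/21c documentation, so verify it in your release:

-- show retention settings and hash algorithm of the blockchain tables of the current schema
select table_name, row_retention, table_inactivity_retention, hash_algorithm
from   user_blockchain_tables;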

Now load the data into the blockchain table:


SQL> insert into uat_copy_blockchain
  2  select * from unified_audit_trail;

26526 rows created.

Elapsed: 00:00:07.24
SQL> commit;

Commit complete.

Elapsed: 00:00:43.26

Over 43 seconds for the COMMIT!!!

The reason for the long COMMIT time is that the blockchain (or rather the row-chain of hashes for the 26526 rows) is actually built when committing. I.e. all blockchain related columns in the table are empty after the insert, before the commit:


SQL> insert into uat_copy_blockchain
  2  select * from unified_audit_trail;

26526 rows created.

SQL> select count(*) from uat_copy_blockchain
  2  where ORABCTAB_INST_ID$ is NULL
  3  and ORABCTAB_CHAIN_ID$ is NULL
  4  and ORABCTAB_SEQ_NUM$ is NULL
  5  and ORABCTAB_CREATION_TIME$ is NULL
  6  and ORABCTAB_USER_NUMBER$ is NULL
  7  and ORABCTAB_HASH$ is NULL
  8  ;

  COUNT(*)
----------
     26526

During the commit those hidden columns are updated:


SQL> commit;

Commit complete.

SQL> select count(*) from uat_copy_blockchain
  2  where ORABCTAB_INST_ID$ is NULL
  3  or ORABCTAB_CHAIN_ID$ is NULL
  4  or ORABCTAB_SEQ_NUM$ is NULL
  5  or ORABCTAB_CREATION_TIME$ is NULL
  6  or ORABCTAB_USER_NUMBER$ is NULL
  7  or ORABCTAB_HASH$ is NULL
  8  ;

  COUNT(*)
----------
         0

When doing a SQL-Trace I can see the following recursive statements during the COMMIT:


SQL ID: 6r4qu6xnvb3nt Plan Hash: 960301545

update "CBLEILE"."UAT_COPY_BLOCKCHAIN" set orabctab_inst_id$ = :1,
  orabctab_chain_id$ = :2, orabctab_seq_num$ = :3, orabctab_user_number$ = :4,
   ORABCTAB_CREATION_TIME$ = :5
where
 rowid = :lrid


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse    26526      0.56       0.55          0          0          0           0
Execute  26526     10.81      12.21       3824       3395      49546       26526
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    53052     11.38      12.76       3824       3395      49546       26526

********************************************************************************

SQL ID: 4hc26wpgb5tqr Plan Hash: 2019081831

update sys.blockchain_table_chain$ set                    hashval_position =
  :1, max_seq_number =:2
where
 obj#=:3 and inst_id = :4 and chain_id = :5


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute  26526      9.29      10.12        512      26533      27822       26526
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    26527      9.29      10.12        512      26533      27822       26526

********************************************************************************

SQL ID: 2t5ypzqub0g35 Plan Hash: 960301545

update "CBLEILE"."UAT_COPY_BLOCKCHAIN" set orabctab_hash$ = :1
where
 rowid = :lrid


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse    26526      0.58       0.57          0          0          0           0
Execute  26526      6.79       7.27       1832       2896      46857       26526
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    53052      7.37       7.85       1832       2896      46857       26526

********************************************************************************

SQL ID: bvggpqdp5u4uf Plan Hash: 1612174689

select max_seq_number, hashval_position
from
 sys.blockchain_table_chain$ where obj#=:1 and                     inst_id =
  :2 and chain_id = :3


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute  26527      5.34       5.51          0          0          0           0
Fetch    26527      0.75       0.72          0      53053          0       26526
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    53055      6.10       6.24          0      53053          0       26526

********************************************************************************

SQL ID: dktp4suj3mn0t Plan Hash: 4188997816

SELECT  "AUDIT_TYPE",  "SESSIONID",  "PROXY_SESSIONID",  "OS_USERNAME",
  "USERHOST",  "TERMINAL",  "INSTANCE_ID",  "DBID",  "AUTHENTICATION_TYPE",
  "DBUSERNAME",  "DBPROXY_USERNAME",  "EXTERNAL_USERID",  "GLOBAL_USERID",
  "CLIENT_PROGRAM_NAME",  "DBLINK_INFO",  "XS_USER_NAME",  "XS_SESSIONID",
  "ENTRY_ID",  "STATEMENT_ID",  "EVENT_TIMESTAMP",  "EVENT_TIMESTAMP_UTC",
  "ACTION_NAME",  "RETURN_CODE",  "OS_PROCESS",  "TRANSACTION_ID",  "SCN",
  "EXECUTION_ID",  "OBJECT_SCHEMA",  "OBJECT_NAME",  "SQL_TEXT",  "SQL_BINDS",
    "APPLICATION_CONTEXTS",  "CLIENT_IDENTIFIER",  "NEW_SCHEMA",  "NEW_NAME",
   "OBJECT_EDITION",  "SYSTEM_PRIVILEGE_USED",  "SYSTEM_PRIVILEGE",
  "AUDIT_OPTION",  "OBJECT_PRIVILEGES",  "ROLE",  "TARGET_USER",
  "EXCLUDED_USER",  "EXCLUDED_SCHEMA",  "EXCLUDED_OBJECT",  "CURRENT_USER",
  "ADDITIONAL_INFO",  "UNIFIED_AUDIT_POLICIES",  "FGA_POLICY_NAME",
  "XS_INACTIVITY_TIMEOUT",  "XS_ENTITY_TYPE",  "XS_TARGET_PRINCIPAL_NAME",
  "XS_PROXY_USER_NAME",  "XS_DATASEC_POLICY_NAME",  "XS_SCHEMA_NAME",
  "XS_CALLBACK_EVENT_TYPE",  "XS_PACKAGE_NAME",  "XS_PROCEDURE_NAME",
  "XS_ENABLED_ROLE",  "XS_COOKIE",  "XS_NS_NAME",  "XS_NS_ATTRIBUTE",
  "XS_NS_ATTRIBUTE_OLD_VAL",  "XS_NS_ATTRIBUTE_NEW_VAL",  "DV_ACTION_CODE",
  "DV_ACTION_NAME",  "DV_EXTENDED_ACTION_CODE",  "DV_GRANTEE",
  "DV_RETURN_CODE",  "DV_ACTION_OBJECT_NAME",  "DV_RULE_SET_NAME",
  "DV_COMMENT",  "DV_FACTOR_CONTEXT",  "DV_OBJECT_STATUS",  "OLS_POLICY_NAME",
    "OLS_GRANTEE",  "OLS_MAX_READ_LABEL",  "OLS_MAX_WRITE_LABEL",
  "OLS_MIN_WRITE_LABEL",  "OLS_PRIVILEGES_GRANTED",  "OLS_PROGRAM_UNIT_NAME",
   "OLS_PRIVILEGES_USED",  "OLS_STRING_LABEL",  "OLS_LABEL_COMPONENT_TYPE",
  "OLS_LABEL_COMPONENT_NAME",  "OLS_PARENT_GROUP_NAME",  "OLS_OLD_VALUE",
  "OLS_NEW_VALUE",  "RMAN_SESSION_RECID",  "RMAN_SESSION_STAMP",
  "RMAN_OPERATION",  "RMAN_OBJECT_TYPE",  "RMAN_DEVICE_TYPE",
  "DP_TEXT_PARAMETERS1",  "DP_BOOLEAN_PARAMETERS1",
  "DIRECT_PATH_NUM_COLUMNS_LOADED",  "RLS_INFO",  "KSACL_USER_NAME",
  "KSACL_SERVICE_NAME",  "KSACL_SOURCE_LOCATION",  "PROTOCOL_SESSION_ID",
  "PROTOCOL_RETURN_CODE",  "PROTOCOL_ACTION_NAME",  "PROTOCOL_USERHOST",
  "PROTOCOL_MESSAGE",  "ORABCTAB_INST_ID$",  "ORABCTAB_CHAIN_ID$",
  "ORABCTAB_SEQ_NUM$",  "ORABCTAB_CREATION_TIME$",  "ORABCTAB_USER_NUMBER$",
  "ORABCTAB_HASH$",  "ORABCTAB_SIGNATURE$",  "ORABCTAB_SIGNATURE_ALG$",
  "ORABCTAB_SIGNATURE_CERT$"
from
 "CBLEILE"."UAT_COPY_BLOCKCHAIN" where rowid = :lrid


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute  26526      3.85       3.84          0          0          0           0
Fetch    26526      1.31       1.31          0      28120          0       26526
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    53053      5.17       5.15          0      28120          0       26526

********************************************************************************

SQL ID: fcq6kngm4b3m5 Plan Hash: 4188997816

SELECT  "ORABCTAB_INST_ID$",  "ORABCTAB_CHAIN_ID$",  "ORABCTAB_SEQ_NUM$",
  "ORABCTAB_CREATION_TIME$",  "ORABCTAB_USER_NUMBER$",  "ORABCTAB_HASH$",
  "ORABCTAB_SIGNATURE$",  "ORABCTAB_SIGNATURE_ALG$",
  "ORABCTAB_SIGNATURE_CERT$",  "ORABCTAB_SPARE$"
from
 "CBLEILE"."UAT_COPY_BLOCKCHAIN" where rowid = :lrid


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute  26526      3.04       3.05          0          0          0           0
Fetch    26526      0.41       0.39          0      26526          0       26526
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    53053      3.45       3.45          0      26526          0       26526

********************************************************************************

SQL ID: fcq6kngm4b3m5 Plan Hash: 4188997816

SELECT  "ORABCTAB_INST_ID$",  "ORABCTAB_CHAIN_ID$",  "ORABCTAB_SEQ_NUM$",
  "ORABCTAB_CREATION_TIME$",  "ORABCTAB_USER_NUMBER$",  "ORABCTAB_HASH$",
  "ORABCTAB_SIGNATURE$",  "ORABCTAB_SIGNATURE_ALG$",
  "ORABCTAB_SIGNATURE_CERT$",  "ORABCTAB_SPARE$"
from
 "CBLEILE"."UAT_COPY_BLOCKCHAIN" where rowid = :lrid


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute  26526      3.04       3.05          0          0          0           0
Fetch    26526      0.41       0.39          0      26526          0       26526
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    53053      3.45       3.45          0      26526          0       26526

I.e. for every row inserted in the transaction, several recursive statements have to be executed to compute and update the inserted rows to link them together through the hash chain.

That raises the question of whether I should take care of PCTFREE when creating the blockchain table to avoid row migration (often wrongly called row chaining).

As with normal tables, blockchain tables have a default of 10% for PCTFREE:


SQL> select pct_free from tabs where table_name='UAT_COPY_BLOCKCHAIN';

  PCT_FREE
----------
        10

Do we actually have migrated rows after the commit?


SQL> @?/rdbms/admin/utlchain

Table created.

SQL> analyze table uat_copy_blockchain list chained rows;

Table analyzed.

SQL> select count(*) from chained_rows;

  COUNT(*)
----------
      7298

SQL> select count(distinct dbms_rowid.rowid_relative_fno(rowid)||'_'||dbms_rowid.rowid_block_number(rowid)) blocks_with_rows
  2  from uat_copy_blockchain;

BLOCKS_WITH_ROWS
----------------
	    1084

So it makes sense to adjust the PCTFREE. In my case the best value would be something like 25-30%, because the blockchain data makes up around 23% of the average row length:


SQL> select sum(avg_col_len) from user_tab_cols where table_name='UAT_COPY_BLOCKCHAIN';

SUM(AVG_COL_LEN)
----------------
	     401

SQL> select sum(avg_col_len) from user_tab_cols where table_name='UAT_COPY_BLOCKCHAIN'
  2  and column_name like 'ORABCTAB%';

SUM(AVG_COL_LEN)
----------------
	      92

SQL> select (92/401)*100 from dual;

(92/401)*100
------------
  22.9426434

I could reduce the commit-time by 5 secs by adjusting the PCTFREE to 30.
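
A minimal sketch of the adjusted DDL (same column list as above, with only the PCTFREE clause added; I assume the segment attribute can simply be placed together with the TABLESPACE clause, as for regular tables):


CREATE blockchain TABLE uat_copy_blockchain
   (-- same columns as in the CREATE statement above
    ...
   )
no drop until 0 days idle
no delete until 31 days after insert
hashing using "sha2_512" version v1
pctfree 30
tablespace audit_data;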

But coming back to the commit-time issue:

This can easily be tested by just checking how much the commit-time increases when more data is loaded per transaction. Here is the test done on 21c on the Oracle Cloud:


SQL> create blockchain table test_block_chain (a number, b varchar2(100), c varchar2(100))
  2  no drop until 0 days idle
  3  no delete until 31 days after insert
  4  hashing using "sha2_512" version v1;

Table created.

SQL> set timing on
SQL> insert into test_block_chain select object_id, object_type, object_name from all_objects where rownum < 1000;

999 rows created.

SQL> commit;

Commit complete.

Elapsed: 00:00:00.82
SQL> insert into test_block_chain select object_id, object_type, object_name from all_objects where rownum < 2000;

1999 rows created.

SQL> commit;

Commit complete.

Elapsed: 00:00:01.56
SQL> insert into test_block_chain select object_id, object_type, object_name from all_objects where rownum < 4000;

3999 rows created.

SQL> commit;

Commit complete.

Elapsed: 00:00:03.03
SQL> insert into test_block_chain select object_id, object_type, object_name from all_objects where rownum < 8000;

7999 rows created.

SQL> commit;

Commit complete.

Elapsed: 00:00:06.38
SQL> insert into test_block_chain select object_id, object_type, object_name from all_objects where rownum < 16000;

15999 rows created.

SQL> commit;

Commit complete.

Elapsed: 00:00:11.71

I.e. the more data inserted, the longer the commit-times. The times go up almost linearly with the amount of data inserted per transaction.
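
Summarizing the timings above:


Rows per transaction   COMMIT time
                 999   00:00:00.82
               1,999   00:00:01.56
               3,999   00:00:03.03
               7,999   00:00:06.38
              15,999   00:00:11.71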

Can we gain something here by doing things in parallel? A commit-statement cannot be parallelized, but you may of course split your e.g. 24000 rows insert into 2 x 12000 rows inserts and run them in parallel and commit them at the same time. I created 2 simple scripts for that:


oracle@cbl:/home/oracle/ [DB0111 (CDB$ROOT)] cat load_bct.bash 
#!/bin/bash

ROWS_TO_LOAD=$1

sqlplus -S cbleile/${MY_PASSWD}@pdb1 <<EOF
insert into test_block_chain select object_id, object_type, object_name from all_objects where rownum <= $ROWS_TO_LOAD ;
-- alter session set events '10046 trace name context forever, level 12';
set timing on
commit;
-- alter session set events '10046 trace name context off';
exit
EOF

exit 0

oracle@cbl:/home/oracle/ [DB0111 (CDB$ROOT)] cat load_bct_parallel.bash 
#!/bin/bash

PARALLELISM=$1
LOAD_ROWS=$2

for i in $(seq ${PARALLELISM})
do
  ./load_bct.bash $LOAD_ROWS &
done
wait

exit 0

Loading 4000 Rows in a single job:


oracle@cbl:/home/oracle/ [DB0111 (CDB$ROOT)] ./load_bct_parallel.bash 1 4000

4000 rows created.


Commit complete.

Elapsed: 00:00:03.56

Loading 4000 Rows in 2 jobs, which run in parallel and each loading 2000 rows:


oracle@cbl:/home/oracle/ [DB0111 (CDB$ROOT)] ./load_bct_parallel.bash 2 2000

2000 rows created.


2000 rows created.


Commit complete.

Elapsed: 00:00:17.87

Commit complete.

Elapsed: 00:00:18.10

That doesn’t scale at all. Enabling SQL-Trace for the 2 jobs in parallel showed this:


SQL ID: catcycjs3ddry Plan Hash: 3098282860

update sys.blockchain_table_chain$ set                    hashval_position =
  :1, max_seq_number =:2
where
 obj#=:3 and inst_id = :4 and chain_id = :5                  and epoch# = :6

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute   2000      8.41       8.58          0    1759772       2088        2000
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     2001      8.41       8.58          0    1759772       2088        2000

Misses in library cache during parse: 0
Optimizer mode: CHOOSE
Parsing user id: SYS   (recursive depth: 1)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         0          0          0  UPDATE  BLOCKCHAIN_TABLE_CHAIN$ (cr=2 pr=0 pw=0 time=103 us starts=1)
         1          1          1   TABLE ACCESS BY GLOBAL INDEX ROWID BATCHED BLOCKCHAIN_TABLE_CHAIN$ PARTITION: ROW LOCATION ROW LOCATION (cr=2 pr=0 pw=0 time=25 us starts=1 cost=1 size=1067 card=1)
         1          1          1    INDEX RANGE SCAN BLOCKCHAIN_TABLE_CHAIN$_IDX (cr=1 pr=0 pw=0 time=9 us starts=1 cost=1 size=0 card=1)(object id 11132)


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  buffer busy waits                             108        0.00          0.00
  latch: cache buffers chains                     1        0.00          0.00
********************************************************************************

SQL ID: fh1yz4801af27 Plan Hash: 1612174689

select max_seq_number, hashval_position
from
 sys.blockchain_table_chain$ where obj#=:1 and                     inst_id =
  :2 and chain_id = :3 and epoch# = :4


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute   2000      0.55       0.55          0          0          0           0
Fetch     2000      7.39       7.52          0    1758556          0        2000
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     4001      7.95       8.08          0    1758556          0        2000

Misses in library cache during parse: 0
Optimizer mode: CHOOSE
Parsing user id: SYS   (recursive depth: 1)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         1          1          1  TABLE ACCESS BY GLOBAL INDEX ROWID BATCHED BLOCKCHAIN_TABLE_CHAIN$ PARTITION: ROW LOCATION ROW LOCATION (cr=2 pr=0 pw=0 time=49 us starts=1 cost=1 size=1067 card=1)
         1          1          1   INDEX RANGE SCAN BLOCKCHAIN_TABLE_CHAIN$_IDX (cr=1 pr=0 pw=0 time=10 us starts=1 cost=1 size=0 card=1)(object id 11132)


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  buffer busy waits                              80        0.00          0.00
  latch: cache buffers chains                     1        0.00          0.00

The single-job trace for the above 2 statements contained the following:


SQL ID: catcycjs3ddry Plan Hash: 3098282860

update sys.blockchain_table_chain$ set                    hashval_position =
  :1, max_seq_number =:2
where
 obj#=:3 and inst_id = :4 and chain_id = :5                  and epoch# = :6

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute   4000      1.76       1.85          0       8001       4140        4000
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     4001      1.76       1.85          0       8001       4140        4000

Misses in library cache during parse: 0
Optimizer mode: CHOOSE
Parsing user id: SYS   (recursive depth: 1)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         0          0          0  UPDATE  BLOCKCHAIN_TABLE_CHAIN$ (cr=2 pr=0 pw=0 time=102 us starts=1)
         1          1          1   TABLE ACCESS BY GLOBAL INDEX ROWID BATCHED BLOCKCHAIN_TABLE_CHAIN$ PARTITION: ROW LOCATION ROW LOCATION (cr=2 pr=0 pw=0 time=26 us starts=1 cost=1 size=1067 card=1)
         1          1          1    INDEX RANGE SCAN BLOCKCHAIN_TABLE_CHAIN$_IDX (cr=1 pr=0 pw=0 time=12 us starts=1 cost=1 size=0 card=1)(object id 11132)

********************************************************************************

SQL ID: fh1yz4801af27 Plan Hash: 1612174689

select max_seq_number, hashval_position
from
 sys.blockchain_table_chain$ where obj#=:1 and                     inst_id =
  :2 and chain_id = :3 and epoch# = :4


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute   4000      1.09       1.09          0          0          0           0
Fetch     4000      0.06       0.06          0       8000          0        4000
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     8001      1.15       1.16          0       8000          0        4000

Misses in library cache during parse: 0
Optimizer mode: CHOOSE
Parsing user id: SYS   (recursive depth: 1)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         1          1          1  TABLE ACCESS BY GLOBAL INDEX ROWID BATCHED BLOCKCHAIN_TABLE_CHAIN$ PARTITION: ROW LOCATION ROW LOCATION (cr=2 pr=0 pw=0 time=49 us starts=1 cost=1 size=1067 card=1)
         1          1          1   INDEX RANGE SCAN BLOCKCHAIN_TABLE_CHAIN$_IDX (cr=1 pr=0 pw=0 time=10 us starts=1 cost=1 size=0 card=1)(object id 11132)

I.e. there’s a massive difference in logical IOs and I could see in the trace that the SQLs became slower with each execution.

Summary: Blockchain tables are a great technology, but as with any other technology you should know their limitations. There is an overhead when committing, and inserting into such tables from parallel sessions currently does not scale when committing. If you test blockchain tables, I recommend reviewing the PCTFREE setting of the blockchain table to avoid row migration.

Cet article Oracle Blockchain Tables: COMMIT-Time est apparu en premier sur Blog dbi services.


Delphix: a glossary to get started

By Franck Pachot

dbi services is a partner of Delphix – a data virtualization platform for easy cloning of databases. I’m sharing a little glossary to get started if you are not familiar with the terms you see in the documentation, console or logs.

Setup console

The setup console is the first interface you will access when installing the Delphix engine (“Dynamic Data Platform”). You import the .ova and start it. If you are on a network with DHCP you can connect to the GUI, for example at http://192.168.56.111/ServerSetup.html#/dashboard. If not, you will access the console (also available through ssh), which has simple help. The basic commands are `ls` to show what is available – objects and operations, `commit` to validate your changes, and `up` to… go up.


network
setup
update
set hostname="fmtdelphix01"
set dnsServers="8.8.8.8, 1.1.1.1"
set defaultRoute="192.168.56.1"
set dnsDomain="tetral.com"
set primaryAddress="192.168.56.111/24"
commit

And anyway, if you started with DHCP, you probably need to disable it; the short sequence below shows the commands.
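
As entered in the console:


network setup update
set dhcp=false
commit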

When in the console, the hardest thing for me is to find the QWERTY keys for ” and =, as the other characters are on the numeric pad (yes… the numeric pad is still useful!).

Storage test

Once you have an IP address, you can ssh to it for the command-line console with the correct keyboard layout and copy/paste. One thing that you can do only there, and only before engine initialization, is storage tests:

delphix01> storage
delphix01 storage> test
delphix01 storage test> create
delphix01 storage test create *> ls
Properties
type: StorageTestParameters
devices: (unset)
duration: 120
initializeDevices: true
initializeEntireDevice: false
testRegion: 512GB
tests: ALL

Before that, you can set a different duration and testRegion if you don’t want to wait. Then you type `commit` to start it (and check the ETA to know how many coffees you can drink) or `discard` to cancel the test.

Setup console

Then you will continue with the GUI, and the first initialization will run the wizard: choose “Virtualization engine”, set up the admin and sysadmin accounts (sysadmin is the one for this “Setup console” and admin the one for the “Management” console), NTP, network, storage, proxy, certificates, SMTP. Don’t worry, many things can be changed later: adding network interfaces, adding new disks (just click on rediscover and accept them as “Data” usage), adding certificates for HTTPS, getting the registration key, and adding users. The users here are for this Server Setup GUI or CLI console only.

GUI: Setup and Management consoles

The main reason for this blog post is to explain the names that can be misleading because they differ from one place to another. There are two graphical consoles for this engine once setup is done:

  • The Engine Setup console with #serverSetup in the URL and SETUP subtitle in the DELPHIX login screen. You use SYSADMIN here (or another user that you will create in this console). You manage the engine here (network, storage,…)
  • The Management console with #delphixAdmin in the URL and the “DYNAMIC DATA PLATFORM” subtitle. You use the ADMIN user here (or another user that you will create in this console). You manage your databases here.

Once you get this, everything is simple. I’ll mention the few other concepts that may have a misleading name in the console or the API. Actually, there’s a third console, the Self Service with /jetstream/#mgmt in the URL, which you access from the Management console with the Management user. And of course there are the APIs. I’ll cover only the Management console in the rest of this post.

Management console

Its subtitle in the login screen is “Dynamic Data Platform” and it is actually the “Virtualization” engine. There, you use the “admin” user, not the “sysadmin” one (or any newly added one). The Manage/Dashboard is the best place to start. The main goal of this post is to quickly explain the different concepts and their different names.

Environments

An Environment is the door to other systems. Think of “environments” as if they were called “hosts”. You will create an environment for source and target hosts. It needs only ssh access (the best is to add the Delphix ssh key to the target’s .ssh/authorized_keys). You can create a dedicated Linux user, or use the ‘oracle’ one for simplicity. It only needs a directory that it owns (I use “/u01/app/delphix”) where it will install the “Toolkit” (about 500MB used, but check the prerequisites). That’s sufficient for sources, but if you want to mount clones you need sudo privileges for that:

cat > /etc/sudoers.d/delphix_oracle <<'CAT'
Defaults:oracle !requiretty
oracle ALL=NOPASSWD: /bin/mount, /bin/umount, /bin/mkdir, /bin/rmdir, /bin/ps
CAT

And that’s all you need. There’s no agent running. All is run by the Delphix engine when needed, through ssh.

Well, I mention ssh only for operations, but the host must also be able to connect to the Delphix engine, to send backups of a dSource or to mount an NFS share.

Additionally, you will need to ensure that you have enough memory to start clones, as I’m sure you will quickly get addicted to the ease of provisioning new databases. I use this to check available memory in small pages (MemAvailable) and large pages (HugePages_Free):

awk '/Hugepagesize:/{p=$2} / 0 /{next} / kB$/{v[sprintf("%9d GB %-s",int($2/1024/1024),$0)]=$2;next} {h[$0]=$2} /HugePages_Total/{hpt=$2} /HugePages_Free/{hpf=$2} {h["HugePages Used (Total-Free)"]=hpt-hpf} END{for(k in v) print sprintf("%-60s %10d",k,v[k]/p); for (k in h) print sprintf("%9d GB %-s",p*h[k]/1024/1024,k)}' /proc/meminfo|sort -nr|grep --color=auto -iE "^|( HugePage)[^:]*" #awk #meminfo

You find it there: https://franckpachot.medium.com/proc-meminfo-formatted-for-humans-350c6bebc380

As in many places, you name your environment (I put the host name plus a little information like “prod” or “clones”) and have a Notes textbox that can be useful for you or your colleagues. Data virtualization is about agility, and a self-documented tool is the right place for this information: you see the latest info next to the current status.

In each environment you can auto-discover the databases, promote one as a dSource, and if the database is an Oracle CDB you can discover the PDBs inside it.
You can also add filesystem directories. And this is where the naming confusion starts: they are displayed here, in environments, as “Unstructured Files”, you add them with “Add Database” and you clone them to “vFiles”…

Datasets and Groups

All those dSources, VDBs and vFiles are “Datasets”. If you click on “dSources”, “VDBs” or “vFiles” you always go to “Datasets”. There, they are listed in “Groups”, and in each group you see the Dataset name with its type (like “VDB” or “dSource”) and status (like “Running” or “Stopped” for VDBs, or “Active” or “Detached” for dSources). The idea is that all Datasets have a Timeflow, Status and Configuration, because clones can also be sources for other clones. In the CLI console you see all Datasets as “source” objects, with a “virtual” flag that is true only for a VDB or an unlinked dSource.

Don’t forget the Notes in the Status panel. I put the purpose there (why the clone is created, who is the user,…) and state (if the application is configured to work on it for example).

As for the groups, you arrange them as you want. They also have Notes to describe them, and you can attach default policies to them. I usually group by host and by type of users (as they have different policies). And in the name of the group or the policy, I add a little detail to see which one is refreshed daily, for example, or which one is a long-term clone.

dSource

The first dataset you will have is the dSource. In a source environment, you have Dataset Homes (the ORACLE_HOME for Oracle) and from there a “data source” (a database) is discovered in an environment. It will run a backup sent to Delphix (as a device of type TAPE, for Oracle, handled by the Delphix libobk.so). This is stored in the Delphix engine storage and the configuration is kept to be able to refresh later with incremental backups (called SnapSync, or DB_SYNC, or Snapshot with the camera icon). Delphix will then apply the incrementals on its copy-on-write filesystem. There’s no need for an Oracle instance to apply them; it seems that Delphix handles the proprietary format of Oracle backupsets. Of course, the archive logs generated during the backups must be kept, but they need an Oracle instance to be applied, so they are just stored to be applied on a thin-provisioned clone or refresh. If there’s a large gap and the incremental takes long, then you may opt for a DoubleSync where only the second one, which is faster, needs to be covered by archived logs.

Timeflow

So you see the points of Sync as snapshots (camera icon) in the timeflow and you can provision a clone from them (the copy-paste Icon in the Timeflow). Automatic snapshots can be taken by the SnapSync policy and will be kept to cover the Retention policy (but you can mark one to keep longer as well). You take a snapshot manually with the camera icon.

In addition to the archivelog needed to cover the SnapSync, intermediate archive logs and even online logs can be retrieved with LogSync when you clone from an intermediate Point-In-Time. This, in the Timeflow, is seen with “Open LogSync” (an icon like a piece of paper) and from there you can select a specific time.

In a dSource, you select the snapshot, or point-in-time, to create a clone from it. It creates a child snapshot where all changes will be copy-on-write so that modifications on the parent are possible (the next SnapSync will write on the parent) and modifications on the child. And the first modification will be the apply of the redo log before opening the clone. The clone is simply an instance on an NFS mount to the Delphix engine.

VDB

Those clones become a virtual database (VDB) which is still a Dataset as it can be source for further clones.

They have additional options. They can be started and stopped as they are fully managed by Delphix (you don’t have to do anything on the server). And because they have a parent, you can refresh them (the round arrow icon). In the Timeflow, you see the snapshots as in all Datasets. But you also have the refreshes. And there is another operation related to this branch only: rewind.

Rewind

This is like a Flashback Database in Oracle: you mount the database from another point-in-time. This operation has many names. In the Timeflow, the icon with two arrows pointing left is called “Rewind”. In the jobs you find “Rollback”. And neither is really a good name, because you can move back and then forward again (relative to the current state, of course).

vFiles

Another data source is vFiles, where you can synchronize simple filesystems. In the environments, you find it in the Databases tab, under Unstructured Files instead of the Dataset Home (which is sometimes called Installation). And the directory paths are displayed as DATABASES. vFiles is really convenient when you store your file metadata in the database and the files themselves outside of it: you probably want to get them at the same point-in-time.

Detach or Unlink

When a dSource is imported into Delphix, it is a Dataset that can be the source for a new clone, or used to refresh an existing one. As it is linked to a source database, you can SnapSync and LogSync. But you can also unlink it from the source and keep it as a parent of clones. This is called the Detach or Unlink operation.

Managed Source Data

Managed Source Data is the important metric for licensing reasons. Basically, Delphix ingests databases from dSources and stores them in a copy-on-write filesystem on the storage attached to the VM where the Delphix engine runs. The Managed Source Data is the sum of all root parents before compression. This means that if you ingested two databases DB1 and DB2 and have plenty of clones (virtual databases), you count only the size of DB1 and DB2 for licensing. This is really good because this is where you save the most: storage, thanks to compression and thin provisioning. If you drop the source database, for example DB2, but still keep clones of it, the parent snapshot must be kept in the engine and it still counts for licensing. However, be careful: as soon as a dSource is unlinked (when you don’t want to refresh from it anymore, and maybe even delete the source), the engine cannot query it to know its size. So this will not be displayed on the Managed Source Data dashboard but should still count for licensing purposes.

Cet article Delphix: a glossary to get started est apparu en premier sur Blog dbi services.

JENKINS installing Artifactory plugin and fix dependencies issue

Hi everybody

As I plan to show you how to use Artifactory in the next blog post, we first need to install it through the plugin manager.
During this I faced some issues. Below we will check how to fix them.

What is an artifact?

  • An artifact is a file produced as a result of a Jenkins build.
  • The name comes from Maven naming conventions
  • A single Jenkins build can produce many artifacts

Why archive artifacts?

  • By default, they are stored where they are created, so they are deleted
    when the workspace is wiped, unless they are archived
  • Jobs can be configured to archive artifacts based on filename patterns (see the pipeline sketch after this list)
  • Archived artifacts are available for testing and debugging after the pipeline run finishes
  • Archived artifacts are kept forever unless a retention policy is applied to builds
    to delete them periodically
  • Artifacts can be shared between members of the team
  • Artifacts can take space, so it can be useful to store them in a file repository
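
As a minimal illustration of archiving by filename pattern, here is a hypothetical declarative pipeline sketch (the Maven build step and the target/*.jar pattern are assumptions; adjust them to your project):


pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // assumed build step that produces jar files under target/
                sh 'mvn -B clean package'
            }
        }
    }
    post {
        success {
            // archive every jar produced under target/ and record fingerprints for traceability
            archiveArtifacts artifacts: '**/target/*.jar', fingerprint: true
        }
    }
}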

What is Artifactory?

Artifactory is a binary repository manager product from JFrog.
It is used to store binaries like jar, dll, war, msi, exe, etc. It is a bit different from an SCM, which stores all the code of your application, not only binaries.

Install the Artifactory plugin

Go to your Jenkins master and search for the Artifactory plugin in the plugin manager.

Potential plugin installation issue

During installation you may run into issues (due to other plugin updates/dependencies).
We can clearly see that the Token Macro plugin must be upgraded:

Token Macro Plugin (2.12) to be updated to 2.13 or higher


When checking the plugin manager we get this.
Note: the “Run condition” plugin is also asking for action

After upgrading the Token Macro plugin and also the Run Condition plugin (which has dependencies), the Artifactory plugin installation is now OK.


Note: we can see every dependency between plugins by hovering the pointer over the box at the beginning of the line.

  • Artifactory plugin is now installed and ready to be used!

Check if the Artifactory plugin appears in the build options

  • Now go to the system configuration and check for the Artifactory configuration

  • Add your Artifactory server in the Jenkins configuration (add your credentials from the Artifactory server* and click on Test Connection)

  • When we check the build options, the Artifactory features now appear in the drop-down menu

Conclusion:

That’s it, you have installed the Artifactory plugin and fixed its dependencies (note that this principle can be applied to any plugin).
We also checked that the new features are available for our Jenkins builds.
In the next blog we will see how to use Artifactory with our Jenkins jobs, and also a topic to help you with JFrog’s Artifactory server installation*.
Feel free to check the dbi bloggers site and stay tuned for new topics on Jenkins.

Cet article JENKINS installing Artifactory plugin and fix dependencies issue est apparu en premier sur Blog dbi services.

How to configure additional listeners on ODA

Introduction

Oracle Database Appliance has quite a lot of nice features, but when looking into the documentation, at least one thing is missing: how to configure multiple listeners? odacli apparently doesn’t know what a listener is. Let’s find out how to add new ones.

odacli

Everything should be done using odacli on ODA, but unfortunately odacli has no commands for configuring listeners:

odacli -h | grep listener
nothing!

The wrong way to configure a listener

One could tell me that configuring a listener is easy: you just have to describe it in the listener.ora file, for example:

echo "DBI_LSN=(DESCRIPTION_LIST=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=oda-dbi-test)(PORT=1576))))" >> $ORACLE_HOME/network/admin/listener.ora

… and start it with:
lsnrctl start DBI_LSN

But even though this works fine, it’s not the best way to do it. Why? Simply because it will not survive a reboot.

A better way to configure a listener: through Grid Infrastructure

ODA makes use of Grid Infrastructure for its default listener on port 1521. The listener is an Oracle service running in Grid Infrastructure, so additional listeners should be declared in the Grid Infrastructure using srvctl. This is an example to configure a new listener on port 1576:

su - grid
which srvctl

/u01/app/19.0.0.0/oracle/bin/srvctl
srvctl add listener -listener DBI_LSN -endpoints 1576
srvctl config listener -listener DBI_LSN

Name: DBI_LSN
Type: Database Listener
Network: 1, Owner: grid
Home:
End points: TCP:1576
Listener is enabled.
Listener is individually enabled on nodes:
Listener is individually disabled on nodes:
srvctl start listener -listener DBI_LSN
ps -ef | grep tnslsn | grep DBI

oracle 71530 1 0 12:41 ? 00:00:00 /u01/app/19.0.0.0/oracle/bin/tnslsnr DBI_LSN -no_crs_notify -inherit

The new listener is running fine, and the listener.ora has been completed with this new item:

cat /u01/app/19.0.0.0/oracle/network/admin/listener.ora | grep DBI
DBI_LSN=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=DBI_LSN)))) # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_DBI_LSN=ON # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_DBI_LSN=SUBNET # line added by Agent

Of course, configuring a listener on a particular port is only possible if this port is not already in use.
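
A quick way to check whether a port is already in use on the node (1576 is just the port from this example):

ss -tlnp | grep :1576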

Removing a listener

If you want to remove a listener, you just need to remove the service from Grid Infrastructure:
su - grid
srvctl stop listener -listener DBI_LSN
srvctl remove listener -listener DBI_LSN
ps -ef | grep tnslsn | grep DBI

no more listener DBI_LSN running
cat /u01/app/19.0.0.0/oracle/network/admin/listener.ora | grep DBI
no more configuration in the listener.ora file

Obviously, if you plan to remove a listener, please make sure that no database is using it prior to removing it.

How to use this listener for my database

Since 12c, a new LREG process in the instance handles the registration of the database in a listener. Previously, this job was done by the PMON process. The default behavior is to register the instance in the standard listener on port 1521. If you want to configure your database with the new listener you just created with srvctl, configure the local_listener parameter:

su - oracle
. oraenv <<< DBTEST
sqlplus / as sysdba
alter system set local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=oda-dbi-test)(PORT=1576))' scope=both;
alter system register;
exit;

No need to reboot anything.
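
You can verify the registration afterwards with the grid user (the listener name is the one created above; your database service should show up in the Services Summary):

su - grid
lsnrctl status DBI_LSN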

What about standard 1521 listener?

You may think about removing the standard listener on port 1521. But I wouldn’t do that. I think it’s better to keep this default one, even if none of your databases are using it. It could later cause trouble when patching or configuring something else on your ODA.

Conclusion

Listener management with odacli may come one day, but for now (19.9 and before) you still have to configure listeners using Grid Infrastructure. It’s quite easy and pretty straightforward if you do it the proper way.

Cet article How to configure additional listeners on ODA est apparu en premier sur Blog dbi services.

JENKINS installing Artifactory server and add it to your Jenkins master

Hi everybody

As promised in my previous post on the Artifactory plugin, here is the installation guide to use Artifactory with your Jenkins master. First we need to check the prerequisites, and then, after installation, we will see how to link it to our Jenkins master.
We will use the trial version here, and later the open source version.

  • A quick presentation of Artifactory from JFrog’s site (a description is also available in my previous blog)

Prerequisites

  • As requested, launch the artifactory.bat file in the /app/bin/ folder

  • A cmd prompt will appear and start the installation

  • After the cmd prompt installation, the installation wizard will appear

  • Go to the http://localhost:8082/ URL and update the password as asked

  • As I have chosen the PRO version, I am asked to enter a license key (I requested it from JFrog’s site and picked it up from my mailbox; we will see further down how to use the open source version)

  • You can see the admin user default password that you must change after your first login

  • Enter your license to activate your account

  • You can configure the platform proxy here if needed

  • Once the installation wizard is done, connect to your URL http://localhost:8082/ui/login/

The default admin password must be updated later (the first connection is done with user: admin and password: password).

  • Now it’s time to create and manage your repository

  • You can select your package type from a huge choice of software (note that I chose the trial version, which offers more choices than the open source version)
  • You also have a quick configuration mode to add a generic repository and to select whether it will be local or remote

  • As I have subscribed for a free trial, I am able to see more package types than in the open source version*

  • Let’s create a Maven repository

  • We can also add a generic repository

  • You can now configure the repository as you want (you can filter the type of files you want to store and share)

Link this repository to a Jenkins build

  • When selecting the Artifactory configuration in your build, you can now see the repositories created on the Artifactory server displayed in a drop-down menu

  • When adding a new repository, you can refresh the configuration to display it in the drop-down menu (a small pipeline sketch follows this list)
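
As a rough preview of the next blog, the Artifactory plugin also provides pipeline steps to upload build output to such a repository. This is only a sketch: it assumes the plugin’s declarative rtUpload step, and the server ID, repository name and file pattern are placeholders.

pipeline {
    agent any
    stages {
        stage('Publish') {
            steps {
                // 'my-artifactory' is the server ID configured in the Jenkins system configuration (placeholder)
                rtUpload(
                    serverId: 'my-artifactory',
                    spec: '''{
                        "files": [
                            { "pattern": "target/*.jar", "target": "generic-local/myapp/" }
                        ]
                    }'''
                )
            }
        }
    }
}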

Artifactory open source installation

  • Let’s check how to install it on CentOS (you can select other OS types). Go to this URL:

https://jfrog.com/open-source/

  • Here you have many choices to perform your installation; in our case we will download the tar.gz file, unzip it and launch the .sh file
[root@localhost bin]# pwd
/home/Nour2020/Téléchargements/jfrog-artifactory-oss-7.15.3-linux/artifactory-oss-7.15.3/app/bin
[root@localhost bin]# ll
total 492
-rwxr-xr-x. 1 Nour2020 Nour2020  32364 14 févr. 15:37 artifactoryCommon.sh
-rwxr-xr-x. 1 Nour2020 Nour2020    382 14 févr. 15:37 artifactoryctl
-rwxr-xr-x. 1 Nour2020 Nour2020   1544 14 févr. 15:37 artifactory.default
-rwxr-xr-x. 1 Nour2020 Nour2020   9581 14 févr. 15:37 artifactoryManage.sh
-rwxr-xr-x. 1 Nour2020 Nour2020  17562 14 févr. 15:37 artifactory.sh
drwxr-xr-x. 2 Nour2020 Nour2020     53  3 mars  09:13 diagnostics
-rwxr-xr-x. 1 Nour2020 Nour2020 163909  4 févr. 17:44 installerCommon.sh
-rwxr-xr-x. 1 Nour2020 Nour2020  10606 14 févr. 15:37 installService.sh
-rwxr-xr-x. 1 Nour2020 Nour2020 175347  4 févr. 17:44 migrate.sh
-rwxr-xr-x. 1 Nour2020 Nour2020   4433 14 févr. 15:37 migrationComposeInfo.yaml
-rwxr-xr-x. 1 Nour2020 Nour2020   4223 14 févr. 15:37 migrationDockerInfo.yaml
-rwxr-xr-x. 1 Nour2020 Nour2020   6689 14 févr. 15:37 migrationRpmInfo.yaml
-rwxr-xr-x. 1 Nour2020 Nour2020   4537 14 févr. 15:37 migrationZipInfo.yaml
-rwxr-xr-x. 1 Nour2020 Nour2020  31431  4 févr. 17:44 systemYamlHelper.sh
-rwxr-xr-x. 1 Nour2020 Nour2020   6879 14 févr. 15:37 uninstallService.sh
[root@localhost bin]#
[root@localhost bin]# ./artifactory.sh
2021-03-03T14:17:00.523Z [jfrt ] [INFO ] [32e8d27585514cf1] [SchemaInitializationManager:51] [ocalhost-startStop-2] - Post-DB initialization manager initialized
2021-03-03T14:17:02.949Z [jfrt ] [INFO ] [a8fe707a163f3e3c] [ctoryContextConfigListener:325] [art-init ] -
_ _ __ _ ____ _____ _____
/\ | | (_)/ _| | | / __ \ / ____/ ____|
/ \ _ __| |_ _| |_ __ _ ___| |_ ___ _ __ _ _ | | | | (___| (___
/ /\ \ | '__| __| | _/ _` |/ __| __/ _ \| '__| | | | | | | |\___ \\___ \
/ ____ \| | | |_| | || (_| | (__| || (_) | | | |_| | | |__| |____) |___) |
/_/ \_\_| \__|_|_| \__,_|\___|\__\___/|_| \__, | \____/|_____/_____/
Version: 7.15.3 __/ |
Revision: 71503900 |___/
Artifactory Home: '/home/Nour2020/Téléchargements/jfrog-artifactory-oss-7.15.3-linux/artifactory-oss-7.15.3'
Node ID: 'localhost.localdomain'
2021-03-03T14:17:51.794Z [jfrt ] [INFO ] [ ] [d.DatabaseConverterRunnable:37] [pool-40-thread-1 ] - Starting Async converter thread.
2021-03-03T14:17:51.795Z [jfrt ] [INFO ] [ ] [ncDBSqlConditionalConverter:33] [pool-40-thread-1 ] - Starting attempt #1 of async conversion for v225_change_nodes_node_name_idx
2021-03-03T14:17:51.796Z [jfrt ] [INFO ] [ ] [ncDBSqlConditionalConverter:35] [pool-40-thread-1 ] - Conversion of v225_change_nodes_node_name_idx finished successfully.
2021-03-03T14:17:51.796Z [jfrt ] [INFO ] [ ] [ncDBSqlConditionalConverter:33] [pool-40-thread-1 ] - Starting attempt #1 of async conversion for v225_change_nodes_node_path_idx
2021-03-03T14:17:51.796Z [jfrt ] [INFO ] [ ] [ncDBSqlConditionalConverter:35] [pool-40-thread-1 ] - Conversion of v225_change_nodes_node_path_idx finished successfully.
2021-03-03T14:17:51.796Z [jfrt ] [INFO ] [ ] [ncDBSqlConditionalConverter:33] [pool-40-thread-1 ] - Starting attempt #1 of async conversion for v225_change_nodes_node_repo_path_idx
2021-03-03T14:17:51.796Z [jfrt ] [INFO ] [ ] [ncDBSqlConditionalConverter:35] [pool-40-thread-1 ] - Conversion of v225_change_nodes_node_repo_path_idx finished successfully.
2021-03-03T14:17:51.796Z [jfrt ] [INFO ] [ ] [ncDBSqlConditionalConverter:33] [pool-40-thread-1 ] - Starting attempt #1 of async conversion for v225_add_bundle_files_node_id_index
2021-03-03T14:17:51.856Z [jfrt ] [INFO ] [c0781707ef1ffc27] [adsFolderCleanupServiceImpl:52] [art-exec-4 ] - Starting docker temp folder cleanup
2021-03-03T14:17:51.857Z [jfrt ] [INFO ] [c0781707ef1ffc27] [adsFolderCleanupServiceImpl:54] [art-exec-4 ] - Docker temp folder cleanup finished, time took: 1 millis
2021-03-03T14:17:52.328Z [jfrt ] [INFO ] [ ] [ncDBSqlConditionalConverter:35] [pool-40-thread-1 ] - Conversion of v225_add_bundle_files_node_id_index finished successfully.
2021-03-03T14:18:05.069Z [jfrt ] [INFO ] [ ] [o.j.c.ConfigWrapperImpl:342 ] [pool-31-thread-1 ] - [Node ID: localhost.localdomain] detected local modify for config 'artifactory/config/security/access/access.admin.token'
2021-03-03T14:18:05.399Z [jfac ] [INFO ] [254b212df9e1c5bf] [a.c.RefreshableScheduledJob:53] [27.0.0.1-8040-exec-5] - Scheduling federationCleanupService task to run every 1209600 seconds
2021-03-03T14:18:05.422Z [jfac ] [INFO ] [cb57992813ed0fc7] [.f.FederationCleanupService:59] [jf-access-task1 ] - Running clean up of outdated Federation events
2021-03-03T14:18:05.464Z [jfac ] [INFO ] [254b212df9e1c5bf] [s.r.NodeRegistryServiceImpl:68] [27.0.0.1-8040-exec-5] - Cluster join: Successfully joined jffe@000 with node id localhost.localdomain
2021-03-03T14:18:10.614Z [jfac ] [INFO ] [ ] [CertificateFileHandlerBase:181] [c-default-executor-1] - Loading ca certificate from database.
2021-03-03T14:18:10.732Z [jfac ] [INFO ] [ ] [CertificateFileHandlerBase:328] [c-default-executor-1] - [ACCESS BOOTSTRAP] Saved new ca certificate at: /home/Nour2020/Téléchargements/jfrog-artifactory-oss-7.15.3-linux/artifactory-oss-7.15.3/var/etc/access/keys/ca.crt
2021-03-03T14:18:10.733Z [jfac ] [INFO ] [ ] [CertificateFileHandlerBase:190] [c-default-executor-1] - Finished loading ca certificate from database.
2021-03-03T14:18:10.733Z [jfac ] [INFO ] [ ] [CertificateFileHandlerBase:181] [c-default-executor-1] - Loading root certificate from database.
2021-03-03T14:18:10.794Z [jfac ] [INFO ] [ ] [CertificateFileHandlerBase:328] [c-default-executor-1] - [ACCESS BOOTSTRAP] Saved new root certificate at: /home/Nour2020/Téléchargements/jfrog-artifactory-oss-7.15.3-linux/artifactory-oss-7.15.3/var/etc/access/keys/root.crt
2021-03-03T14:18:10.795Z [jfac ] [INFO ] [ ] [CertificateFileHandlerBase:190] [c-default-executor-1] - Finished loading root certificate from database.
2021-03-03T14:18:10.797Z [jfac ] [INFO ] [ ] [alConfigurationServiceBase:182] [c-default-executor-1] - Loading configuration from db finished successfully
2021-03-03T14:18:20.847Z [jfrt ] [INFO ] [fa913282bc0a4c6f] [a.e.EventsLogCleanUpService:59] [art-exec-2 ] - Starting cleanup of old events from event log
2021-03-03T14:18:20.855Z [jfrt ] [INFO ] [fa913282bc0a4c6f] [a.e.EventsLogCleanUpService:81] [art-exec-2 ] - Cleanup of old events from event log finished
2021-03-03T14:21:04.897Z [jfrt ] [INFO ] [2c1dd772a21d089e] [o.a.s.SecurityServiceImpl:1518] [http-nio-8081-exec-2] - Password for user: 'admin' has been successfully changed
2021-03-03T14:21:40.478Z [jfrt ] [INFO ] [618bbbd4bf008641] [c.CentralConfigServiceImpl:697] [http-nio-8081-exec-1] - Reloading configuration... old revision 0, new revision 1
2021-03-03T14:21:40.662Z [jfrt ] [INFO ] [618bbbd4bf008641] [c.CentralConfigServiceImpl:401] [http-nio-8081-exec-1] - Force updating new descriptor (old descriptor doesn't support diff). Old descriptor revision 0
2021-03-03T14:21:40.750Z [jfrt ] [INFO ] [618bbbd4bf008641] [c.CentralConfigServiceImpl:424] [http-nio-8081-exec-1] - New configuration with revision 1 saved.
2021-03-03T14:21:40.767Z [jfrt ] [INFO ] [618bbbd4bf008641] [ifactoryApplicationContext:560] [http-nio-8081-exec-1] - Artifactory application context set to NOT READY by reload
2021-03-03T14:21:41.040Z [jfrt ] [INFO ] [618bbbd4bf008641] [ifactoryApplicationContext:560] [http-nio-8081-exec-1] - Artifactory application context set to READY by reload
  • You can now connect to the Artifactory server in your Linux server’s browser with your URL and start configuring your repositories (a quick check from the shell is shown below)
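
Before opening the UI, a quick sanity check from the shell can confirm the service is up. This uses the standard Artifactory REST ping endpoint; adjust the host and port if you changed the defaults:

curl -s http://localhost:8082/artifactory/api/system/ping
# should return OK when the service is healthy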

Conclusion

You are all set: you can now use your Jenkins builds to store and share your artifacts on your brand new Artifactory server 🙂
Next time we will check how to configure Artifactory with users and groups and manage access permissions for the repositories.

Cet article JENKINS installing Artifactory server and add it to your Jenkins master est apparu en premier sur Blog dbi services.

Rancher, up and running, on EC2 – 1 – One node

If you want to play with Rancher you have several options, as outlined in the documentation. There are quick starts for the major public cloud providers (using Terraform), you can install it on a Linux host by using the Rancher container, or you can do it on your own. We’ll be doing it step by step, as I believe that gives the most information on how things actually work. We’ll start with one node and then extend the Kubernetes cluster to three nodes, and you’ll notice that this is actually quite easy and convenient using Rancher.

I’ve created three Debian 10 EC2 instances:

We’ll start with the first one, and once it is ready, bring it to the latest release:

admin@ip-10-0-1-168:~$ sudo apt update && sudo apt dist-upgrade -y && sudo systemctl reboot

Once it is back, let’s give it a more meaningful hostname:

admin@ip-10-0-1-168:~$ sudo hostnamectl set-hostname rancher1
admin@ip-10-0-1-168:~$ sudo bash
sudo: unable to resolve host rancher1: Name or service not known
root@rancher1:/home/admin$ echo "10.0.1.168 rancher1 rancher1.it.dbi-services.com" >> /etc/hosts
root@rancher1:/home/admin$ exit
exit

As Rancher depends on Docker, we need to install a supported version of Docker. Rancher provides a script for this, which does all the work:

admin@ip-10-0-1-168:~$ sudo curl https://releases.rancher.com/install-docker/19.03.sh | sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 17251  100 17251    0     0   561k      0 --:--:-- --:--:-- --:--:--  561k
+ sudo -E sh -c apt-get update
Hit:1 http://security.debian.org/debian-security buster/updates InRelease
Hit:2 http://cdn-aws.deb.debian.org/debian buster InRelease
Hit:3 http://cdn-aws.deb.debian.org/debian buster-updates InRelease
Hit:4 http://cdn-aws.deb.debian.org/debian buster-backports InRelease
Reading package lists... Done
...
+ sudo -E sh -c docker version
Client: Docker Engine - Community
 Version:           19.03.15
 API version:       1.40
 Go version:        go1.13.15
 Git commit:        99e3ed8919
 Built:             Sat Jan 30 03:17:05 2021
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.15
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       99e3ed8919
  Built:            Sat Jan 30 03:15:34 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.3
  GitCommit:        269548fa27e0089a8b8278fc4fc781d7f65a939b
 runc:
  Version:          1.0.0-rc92
  GitCommit:        ff819c7e9184c13b7c2607fe6c30ae19403a7aff
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

If you would like to use Docker as a non-root user, you should now consider
adding your user to the "docker" group with something like:

  sudo usermod -aG docker admin

Remember that you will have to log out and back in for this to take effect!

WARNING: Adding a user to the "docker" group will grant the ability to run
         containers which can be used to obtain root privileges on the
         docker host.
         Refer to https://docs.docker.com/engine/security/security/#docker-daemon-attack-surface
         for more information.

We’ll be using the Rancher Kubernetes Engine (RKE), and to get that onto the system, Rancher provides a single binary. Before proceeding with that, we need to create a user, configure sudo (for convenience), and create the ssh keys:

admin@ip-10-0-1-168:~$ sudo groupadd rancher
admin@ip-10-0-1-168:~$ sudo useradd -g rancher -G docker -m -s /bin/bash rancher
admin@ip-10-0-1-168:~$ sudo passwd rancher
New password: 
Retype new password: 
passwd: password updated successfully
admin@ip-10-0-1-168:~$ sudo bash
root@rancher1:/home/admin$ echo "rancher ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
root@rancher1:/home/admin$ su - rancher
rancher@rancher1:~$ ssh-keygen 
Generating public/private rsa key pair.
Enter file in which to save the key (/home/rancher/.ssh/id_rsa): 
Created directory '/home/rancher/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/rancher/.ssh/id_rsa.
Your public key has been saved in /home/rancher/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:gHzFXkMttTw8dks64+1zEpt3Oef6TWs/pKoiYDDruIk rancher@rancher1
The key's randomart image is:
+---[RSA 2048]----+
|       ....o.    |
|   . . .. +o..   |
|    o o. . oB o  |
| o   . ..  . * . |
|  +     S   + .  |
| . o       . +.. |
|o . .       . ++o|
|oo   . .     o=*B|
|E.    . ..... +X@|
+----[SHA256]-----+
rancher@rancher1:~$ cat .ssh/id_rsa.pub >> .ssh/authorized_keys
rancher@rancher1:~$ ssh rancher@rancher1
Linux rancher1 4.19.0-14-cloud-amd64 #1 SMP Debian 4.19.171-2 (2021-01-30) x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.

That’s it for the requirements to get started. Download the RKE binary:

rancher@rancher1:~$ wget https://github.com/rancher/rke/releases/download/v1.1.15/rke_linux-amd64
rancher@rancher1:~$ mv rke_linux-amd64 rke
rancher@rancher1:~$ sudo mv rke /usr/local/bin/
rancher@rancher1:~$ sudo chown rancher:rancher /usr/local/bin/rke
rancher@rancher1:~$ sudo chmod 750 /usr/local/bin/rke
rancher@rancher1:~$ rke --version
rke version v1.1.15

All you need to do to get RKE set up on a single host is this:

rancher@rancher1:~$ rke config
[+] Cluster Level SSH Private Key Path [~/.ssh/id_rsa]: 
[+] Number of Hosts [1]: 
[+] SSH Address of host (1) [none]: 10.0.1.168
[+] SSH Port of host (1) [22]: 
[+] SSH Private Key Path of host (10.0.1.168) [none]: 
[-] You have entered empty SSH key path, trying fetch from SSH key parameter
[+] SSH Private Key of host (10.0.1.168) [none]: 
[-] You have entered empty SSH key, defaulting to cluster level SSH key: ~/.ssh/id_rsa
[+] SSH User of host (10.0.1.168) [ubuntu]: rancher
[+] Is host (10.0.1.168) a Control Plane host (y/n)? [y]: 
[+] Is host (10.0.1.168) a Worker host (y/n)? [n]: y
[+] Is host (10.0.1.168) an etcd host (y/n)? [n]: y
[+] Override Hostname of host (10.0.1.168) [none]: 
[+] Internal IP of host (10.0.1.168) [none]: 10.0.1.168
[+] Docker socket path on host (10.0.1.168) [/var/run/docker.sock]: 
[+] Network Plugin Type (flannel, calico, weave, canal) [canal]: 
[+] Authentication Strategy [x509]: 
[+] Authorization Mode (rbac, none) [rbac]: 
[+] Kubernetes Docker image [rancher/hyperkube:v1.18.16-rancher1]: 
[+] Cluster domain [cluster.local]: 
[+] Service Cluster IP Range [10.43.0.0/16]: 
[+] Enable PodSecurityPolicy [n]: 
[+] Cluster Network CIDR [10.42.0.0/16]: 
[+] Cluster DNS Service IP [10.43.0.10]: 
[+] Add addon manifest URLs or YAML files [no]: 

This creates the cluster configuration file:

rancher@rancher1:~$ ls -la cluster.yml 
-rw-r----- 1 rancher rancher 4619 Mar  6 14:40 cluster.yml
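
If you want to double-check what was generated before bringing anything up, a quick look at the relevant keys of the nodes section is enough (a minimal sketch; the output simply reflects the answers given above):

rancher@rancher1:~$ grep -E 'address|role|user|ssh_key_path' cluster.yml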

Bring it up:

rancher@rancher1:~$ rke up
INFO[0000] Running RKE version: v1.1.15                 
INFO[0000] Initiating Kubernetes cluster                
INFO[0000] [dialer] Setup tunnel for host [10.0.1.168]  
...
INFO[0157] [addons] Successfully saved ConfigMap for addon rke-ingress-controller to Kubernetes 
INFO[0157] [addons] Executing deploy job rke-ingress-controller 
INFO[0162] [ingress] ingress controller nginx deployed successfully 
INFO[0162] [addons] Setting up user addons              
INFO[0162] [addons] no user addons defined              
INFO[0162] Finished building Kubernetes cluster successfully 

That’s it. The one node Kubernetes cluster is ready (control plane, worker and etcd all on one host). This is of course nothing you’d do in a serious deployment, but to get started this is fine. To talk to the Kubernetes cluster you should install kubectl:

rancher@rancher1:~$ curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 38.3M  100 38.3M    0     0  73.9M      0 --:--:-- --:--:-- --:--:-- 73.8M

Same procedure as with the rke binary:

rancher@rancher1:~$ ls
cluster.rkestate  cluster.yml  kube_config_cluster.yml  kubectl
rancher@rancher1:~$ sudo mv kubectl /usr/local/bin/
rancher@rancher1:~$ sudo chown rancher:rancher /usr/local/bin/kubectl
rancher@rancher1:~$ sudo chmod 750 /usr/local/bin/kubectl
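
As a quick sanity check, verify that the binary is executable and reports its version (the exact version depends on what stable.txt pointed to at download time):

rancher@rancher1:~$ kubectl version --client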

Use it to talk to your cluster:

rancher@rancher1:~$ export KUBECONFIG=kube_config_cluster.yml 
rancher@rancher1:~$ kubectl get namespace
NAME              STATUS   AGE
default           Active   6m31s
ingress-nginx     Active   5m36s
kube-node-lease   Active   6m33s
kube-public       Active   6m33s
kube-system       Active   6m33s
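
Two more quick checks confirm that the single node carries all three roles and that the system pods came up (a minimal sketch; names, ages and counts will differ in your environment):

rancher@rancher1:~$ kubectl get nodes -o wide
rancher@rancher1:~$ kubectl get pods --all-namespaces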

Done. RKE is up and running on a single node. Be aware that we did not install Rancher yet, just RKE. But also notice how easy that was: we have a Kubernetes cluster running, and all we needed to do took around 10 minutes. In the next post we’ll extend the configuration to three nodes.

The post Rancher, up and running, on EC2 – 1 – One node appeared first on Blog dbi services.

SUSE RANCHER 2.5 – Monitoring Setup


In this short blog post, I will discuss SUSE Rancher 2.5, the latest version.
In this release, rancher-monitoring has been introduced.
With this cool feature, you can quickly deploy open-source monitoring tools such as Prometheus and Grafana for your Kubernetes clusters.

Introduction

SUSE rancher-monitoring can help you ensure performance, availability, reliability and scalability.
It allows you to:
– Monitor the state and processes of all your cluster nodes, Kubernetes components, and software deployments
– Create custom dashboards via Grafana
– Configure alert-based notifications via Email, Slack and PagerDuty.

Monitoring setup

As an administrator, you can configure Rancher to deploy Prometheus to monitor your Kubernetes cluster.
In the Rancher UI, go to the cluster where you want to install monitoring and click on “Cluster Explorer”.
Click on the Apps button.
On the scrolling menu, choose “Monitoring”.

You will then land on the monitoring dashboard, where you can configure metrics and alerts.
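
If you prefer the command line over the UI, you can also check with kubectl that the monitoring components came up once the app is deployed (a minimal sketch; I assume the chart is installed into the cattle-monitoring-system namespace, adjust if yours differs):

kubectl get pods -n cattle-monitoring-system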

Rancher Monitoring also deploys, by default, some exporters such as node_exporter and kube_state_metrics, as well as some default Prometheus alerts and Grafana dashboards onto your cluster.
You can check this by clicking directly on the Grafana link.
You will get the default home dashboard.

Below this chart, you can find plenty of pre-configured dashboards, for example for Nodes, etcd and Kubernetes.
You can optionally click Chart Options and configure alerting, Prometheus and Grafana.

Conclusion

As you can see in this short blog post, activating monitoring with Rancher 2.5 is an easy task. In another blog post, I will go deeper into alerting.

The post SUSE RANCHER 2.5 – Monitoring Setup appeared first on Blog dbi services.

AWS: PostgreSQL on Graviton2


By Franck Pachot

.
On the AWS free tier, you can run a t2.micro instance for 750 hours per month during the first 12 months after the sign-up date. And currently, until June 2021, you can also run a t4g.micro. But be careful: when the free trial ends, or if your usage exceeds the free trial restrictions, you’ll pay the standard pay-as-you-go rates. This is a good occasion to test the Graviton2 ARM processors, and you can do the same as I do in this blog post on those instances. However, as I want to compare the CPU performance during a long run, I’ll use larger (and non-burstable) instances: m5d.2xlarge for x86_64 and m6gd.2xlarge for aarch64, which both have 8 vCPUs and 32 GB of RAM.

I’ve installed “Amazon Linux 2 AMI (HVM)” on m5d.2xlarge (x86_64) and “Amazon ECS-Optimized Amazon Linux 2 AMI (ARM)” on m6gd.2xlarge ARM64 (aarch64).

PostgreSQL

I’ll install PostgreSQL here and measure LIOPS with PGIO (https://github.com/therealkevinc/pgio)

sudo yum install -y git gcc readline-devel zlib-devel bison bison-devel flex
git clone https://github.com/postgres/postgres.git
sudo yum install -y gcc readline-devel zlib-devel bison-devel
time ( cd postgres && ./configure && make all && sudo make install )
( cd postgres/contrib && sudo make install )

This compiles PostgreSQL from the community source (version 14devel). I can already get an idea about the CPU performance:

  • m5d.2xlarge x86_64 Xeon time: real 3m32.192s, user 3m14.176s, sys 0m18.400s
  • m6gd.2xlarge aarch64 ARM time: real 3m54.493s, user 3m39.324s, sys 0m15.373s

export PGDATA=~/pgdata
echo "$PATH" | grep /usr/local/pgsql/bin || export PATH="$PATH:/usr/local/pgsql/bin"
initdb
pg_ctl -l postgres.log start
top -bn1 -cp $(pgrep -xd, postgres)

Environment is set and instance started


sed -ie "/shared_buffers/s/^.*=.*/shared_buffers= 8500MB/" $PGDATA/postgresql.conf
sed -ie "/huge_pages/s/^.*=.*/huge_pages= true/" $PGDATA/postgresql.conf
awk '/Hugepagesize.*kB/{print MB / ( $2 / 1024 ) }' MB=9000 /proc/meminfo | sudo bash -c  "cat > /proc/sys/vm/nr_hugepages" 
pg_ctl -l postgres.log restart

I set 8GB of shared buffers to be sure to measure logical I/O from the database shared memory.
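
To be sure the settings are really picked up after the restart, a quick check does not hurt (a minimal sketch; the values shown depend on your configuration):

psql postgres -c "show shared_buffers;" -c "show huge_pages;"
grep Huge /proc/meminfo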

PGIO


git clone https://github.com/therealkevinc/pgio
tar -zxf pgio/pgio*tar.gz
cat > pgio/pgio.conf <<CAT
 UPDATE_PCT=0
 RUN_TIME=$(( 60 * 60 ))
 NUM_SCHEMAS=4
 NUM_THREADS=1
 WORK_UNIT=255
 UPDATE_WORK_UNIT=8
 SCALE=1024M
 DBNAME=pgio
 CONNECT_STRING="pgio"
 CREATE_BASE_TABLE=TRUE
CAT
cat pgio/pgio.conf
psql postgres <<<'create database pgio;'
time ( cd pgio && sh setup.sh )

I initialized PGIO for four 1GB schemas
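
Before starting the run, you can verify what setup.sh created; with 4 schemas of 1024M each, the pgio database should be a bit above 4GB (a minimal sketch):

psql pgio -c "select pg_size_pretty(pg_database_size('pgio'));"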


echo $(curl -s http://169.254.169.254/latest/meta-data/instance-type) $(uname -m)
time ( cd pgio && sh runit.sh )
uptime

This will run PGIO as configured (4 threads on 1GB for 1 hour)

4 threads on ARM

[ec2-user@ip-172-31-46-196 ~]$ echo $(curl -s http://169.254.169.254/latest/meta-data/instance-type) $(uname -m)

m6gd.2xlarge aarch64                    

[ec2-user@ip-172-31-46-196 ~]$ time ( cd pgio && sh runit.sh )

Date: Fri Mar  5 11:35:18 UTC 2021
Database connect string: "pgio".
Shared buffers: 8GB.
Testing 4 schemas with 1 thread(s) accessing 1024M (131072 blocks) of each schema.
Running iostat, vmstat and mpstat on current host--in background.
Launching sessions. 4 schema(s) will be accessed by 1 thread(s) each.
pg_stat_database stats:
          datname| blks_hit| blks_read|tup_returned|tup_fetched|tup_updated
BEFORE:  pgio    | 12676581 |   2797475 |      2663112 |       26277 |         142
AFTER:   pgio    | 11254053367 |   2797673 |  11063125066 | 11057981670 |         162
DBNAME:  pgio. 4 schemas, 1 threads(each). Run time: 3600 seconds. RIOPS >0< CACHE_HITS/s >3122604<

This is about 780651 LIOPS / thread.

4 threads on x86


[ec2-user@ip-172-31-29-57 ~]$ echo $(curl -s http://169.254.169.254/latest/meta-data/instance-type) $(uname -m)

m5d.2xlarge x86_64

[ec2-user@ip-172-31-29-57 ~]$ time ( cd pgio && sh runit.sh )
Date: Fri Mar  5 13:20:10 UTC 2021
Database connect string: "pgio".
Shared buffers: 8500MB.
Testing 4 schemas with 1 thread(s) accessing 1024M (131072 blocks) of each schema.
Running iostat, vmstat and mpstat on current host--in background.
Launching sessions. 4 schema(s) will be accessed by 1 thread(s) each.
pg_stat_database stats:
          datname| blks_hit| blks_read|tup_returned|tup_fetched|tup_updated
BEFORE:  pgio    | 109879603 | 2696860921 |   2759130206 |  2757898526 |          20
AFTER:   pgio    | 13016322621 | 2697387365 |  15459965084 | 15455884319 |          20
DBNAME:  pgio. 4 schemas, 1 threads(each). Run time: 3600 seconds. RIOPS >146< CACHE_HITS/s >3585123<

This is about 896280 LIOPS / thread.
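
For reference, these per-thread numbers are simply the blks_hit delta from pg_stat_database divided by the run time and the number of threads; a quick sketch of that arithmetic with the x86 figures above:

awk 'BEGIN { print int((13016322621-109879603)/3600/4), "LIOPS/thread" }'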

pgbench

For a pgbench test you may want to read https://www.percona.com/blog/2021/01/22/postgresql-on-arm-based-aws-ec2-instances-is-it-any-good/ where Jobin Augustine and Sergey Kuzmichev have run long tests, read-only and read-write, with and without checksums, and also sysbench-tpcc. I’m doing only a very simple test here to show that results depend on what you test.


time pgbench -i -s 100 postgres

pgbench simple protocol


time pgbench -T 600 -c 4 --protocol=simple postgres

Result:

  • m5d.2xlarge x86_64 Xeon: tps = 2035.794849 (excluding connections establishing)
  • m6gd.2xlarge aarch64 ARM: tps = 2109.661869 (excluding connections establishing)

pgbench prepared statements


time pgbench -T 600 -c 4 --protocol=prepared postgres

Result:

  • m5d.2xlarge x86_64 Xeon: tps = 2107.070647 (excluding connections establishing)
  • m6gd.2xlarge aarch64 ARM: tps = 2121.966940 (excluding connections establishing)

Price/Performance

Here is the price of the instances I used. The software is free, and the EC2 running hours for Graviton2 are 20% cheaper:

  • m5d.2xlarge x86_64 Xeon EC2 cost: $0.504/hr
  • m6gd.2xlarge aarch64 ARM EC2 cost: $0.403/hr

The compilation time for the PostgreSQL sources was about 11% slower on ARM (real time: 3 minutes 54 seconds vs. 3 minutes 32 seconds).
The PostgreSQL shared buffer cache hits were faster on x86: 896280 LIOPS / thread vs. 780651 LIOPS / thread, i.e. about 13% slower on ARM. But that is the most optimal database work: everything in shared buffers, with few calls, roundtrips and context switches, so it is all CPU and RAM access.
However, when running pgbench, ARM had nearly the same performance with the prepared statement protocol and was even a bit faster with the simple protocol. And that is closer to what most database applications are doing. So, in the end, the big difference is the price, as the Graviton2 m6gd.2xlarge is 20% cheaper than the m5d.2xlarge x86. Here I installed PostgreSQL on EC2, but Graviton2 is available on RDS as well (in preview) with db.r6g.
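
To put price and performance together, you can relate the pgbench throughput to the hourly instance cost; a small sketch of that arithmetic with the simple-protocol numbers measured above:

awk 'BEGIN { printf "x86: %.0f tps per USD/hour\n", 2035.794849/0.504 ;
             printf "ARM: %.0f tps per USD/hour\n", 2109.661869/0.403 }'

With roughly equal throughput, the 20% lower hourly price translates into about 30% more transactions per dollar on the Graviton2 instance in this test.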

The post AWS: PostgreSQL on Graviton2 appeared first on Blog dbi services.


Rancher, up and running, on EC2 – 2 – Three nodes


In the last post we brought up an RKE Kubernetes cluster on a single node. While that is fine for demonstration purposes or testing, it is nothing for a real-life setup. Running the control plane, the etcd nodes and the worker nodes all on one node is usually nothing you want to do, as you cannot guarantee fault tolerance with such a setup. To make the RKE cluster highly available, we’ll be adding two additional nodes to the configuration in this post. We’ll end up with three nodes, all running etcd, the control plane and workers.

Before you can add the additional nodes, they need to be prepared in very much the same way as the first node: bring the system to the latest release, install a supported version of Docker, create the group and the user, and use the same SSH configuration as on the first node.

$ sudo apt update && sudo apt dist-upgrade -y && sudo systemctl reboot
$ sudo hostnamectl set-hostname rancher2
$ # sudo hostnamectl set-hostname rancher3 # on the third node
$ echo "10.0.1.168 rancher2 rancher2.it.dbi-services.com" >> /etc/hosts
$ # echo "10.0.1.168 rancher3 rancher3.it.dbi-services.com" >> /etc/hosts # on the third node
$ exit
$ sudo curl https://releases.rancher.com/install-docker/19.03.sh | sh
$ sudo bash
$ echo "rancher ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
$ exit
$ sudo systemctl reboot
$ sudo groupadd rancher
$ sudo useradd -g rancher -G docker -m -s /bin/bash rancher
$ sudo passwd rancher
$ sudo su - rancher

Before proceeding, make sure that you use the same SSH key for the rancher user on the additional nodes, and that you can log in from the first node without being prompted for a password:

$ mkdir .ssh
$ chmod 700 .ssh/
$ echo "-----BEGIN OPENSSH PRIVATE KEY-----
> b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAABFwAAAAdzc2gtcn
> NhAAAAAwEAAQAAAQEAx+iJ2W/nGWytnVxyEeRuUDf8UyX3XOxEv7w+TeNGm3o6votXzsEY
> CclNxZ0KBt72OnPlpCjNgMOZhKC7XIDwEkhldLyMUVV8jdh/03qfJDyVBp4zqpQ2s1yf/b
> SU8cqOrj0gSYmozQdbGybZHmzgj+q9HS5iCAJZ7DUeM43E6kUvHpBJ6a1uP2fIr6+BRd25
> sejcT7kgu50Dv/cVxQ1s0hVydX29kAe0S9IFZUWIlsPCNzPxUGNJxigoC2tAcsXttyeguQ
> dtCzTYPgm3wBOoIOR9pAns8kHfiaajZK36vdF6/nEuaI2pw0IpkAct6aFqWq54utgdG9zv
> a8mqci/94QAAA8i6pMTbuqTE2wAAAAdzc2gtcnNhAAABAQDH6InZb+cZbK2dXHIR5G5QN/
> xTJfdc7ES/vD5N40abejq+i1fOwRgJyU3FnQoG3vY6c+WkKM2Aw5mEoLtcgPASSGV0vIxR
> VXyN2H/Tep8kPJUGnjOqlDazXJ/9tJTxyo6uPSBJiajNB1sbJtkebOCP6r0dLmIIAlnsNR
> 4zjcTqRS8ekEnprW4/Z8ivr4FF3bmx6NxPuSC7nQO/9xXFDWzSFXJ1fb2QB7RL0gVlRYiW
> w8I3M/FQY0nGKCgLa0Byxe23J6C5B20LNNg+CbfAE6gg5H2kCezyQd+JpqNkrfq90Xr+cS
> 5ojanDQimQBy3poWparni62B0b3O9ryapyL/3hAAAAAwEAAQAAAQAueLVK8cOUWnpFoY72
> 79ZhGZKztZi6ZkZZGCaXrqTkUdbEItpnuuWeqMhGjwocrMoqrnSM49tZ+p5+gWrsxyCH74
> J+T7KC2c+ZneGhRNkn8Flob3BtUAUjTv32WXtidgcTJCyUS8cM2o/oUPCaLQ9LBXOvC/BI
> ElvbGEIMFAHZv4+eVcZt1NJG3qlu8CXfxRAe6UPLAJOATRyFoNBycPyYu9Hhpr2vXvzksc
> QJUT177q2nu5U+UbCAatekQSGVqv18RWnECKJP4ntSbUMhg/PoPQALnWC09epD+397Yqwp
> uevR76u7S78q0SnycCvT9EMwpGRjl1e/FTZFejEs9rY9AAAAgQDlMVjYrJ4l5jIrT6GBPE
> 7cBBlMW7P0sr1qFxjQQ05JC4CpgCkvqQDqL4alErQ5KTwk9ZsgJY1N49tQk6Rtxv98BK8K
> x3d0dth/2q690iDG6LzExTFI26fjPK0a22FLouXSexoQtsHqnpefR9HuJWHPAIhBlgjX98
> Ce/A9McrIfOAAAAIEA/jhYGQaiqhZJIo7ggXVT3yj2ysXjPQ9TR+WRb+Ze3esi/bAUfKfK
> 2XtZTALNTFw6+KlorHK5ZgvMdpPLSeAg0htO5g6dLhmVv8VuAItVFQMm/R6AGFc/+EJw9k
> iWaGakJzmzCBRwfyZFh3MeMM9sxq60HyV1VHx/SzQvwKNVOJsAAACBAMlO2QU4r1H8kyzu
> jn5/NgX0lO6iHDhQWKQywrQ3NjYmtYRBhwpT62MpnpHpev6OpkR2xPOJ+9fDG2K1Q3raSP
> jfKaurZlMqmvVeziIhQEXrB3L3vnyq5Jx85oqHv7sh7PYCBD4J6zgL5o66fZOoqdc57GLC
> K+XnWjDZpULuQxUzAAAAD3JhbmNoZXJAcmFuZ2VyMQECAw==
> -----END OPENSSH PRIVATE KEY-----" > .ssh/id_rsa
$ chmod 600 .ssh/id_rsa 
$ echo "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDH6InZb+cZbK2dXHIR5G5QN/xTJfdc7ES/vD5N40abejq+i1fOwRgJyU3FnQoG3vY6c+WkKM2Aw5mEoLtcgPASSGV0vIxRVXyN2H/Tep8kPJUGnjOqlDazXJ/9tJTxyo6uPSBJiajNB1sbJtkebOCP6r0dLmIIAlnsNR4zjcTqRS8ekEnprW4/Z8ivr4FF3bmx6NxPuSC7nQO/9xXFDWzSFXJ1fb2QB7RL0gVlRYiWw8I3M/FQY0nGKCgLa0Byxe23J6C5B20LNNg+CbfAE6gg5H2kCezyQd+JpqNkrfq90Xr+cS5ojanDQimQBy3poWparni62B0b3O9ryapyL/3h rancher@rancher1" >> .ssh/authorized_keys
rancher@rancher1:~$ ssh 10.0.1.253
The authenticity of host '10.0.1.253 (10.0.1.253)' can't be established.
ECDSA key fingerprint is SHA256:/JzK5lFQv6qsM5zi4A+1JYwS5u0Iup3uUUV8927MF50.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.0.1.253' (ECDSA) to the list of known hosts.
Linux rancher2 4.19.0-14-cloud-amd64 #1 SMP Debian 4.19.171-2 (2021-01-30) x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
rancher@rancher2:~$ logout
Connection to 10.0.1.253 closed.
rancher@rancher1:~$ ssh 10.0.1.73
The authenticity of host '10.0.1.73 (10.0.1.73)' can't be established.
ECDSA key fingerprint is SHA256:oVfRCbqh5PIdTx16+wNmMS8CNnHTnQXsjlpybHmPVlY.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.0.1.73' (ECDSA) to the list of known hosts.
Linux rancher3 4.19.0-14-cloud-amd64 #1 SMP Debian 4.19.171-2 (2021-01-30) x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
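
If you want to verify this non-interactively for both new nodes in one go, a short loop from the first node does the job (a minimal sketch, using the internal IP addresses of the two new nodes):

rancher@rancher1:~$ for h in 10.0.1.253 10.0.1.73; do ssh -o BatchMode=yes rancher@${h} hostname; done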

Once that is confirmed, we need to adjust the RKE cluster configuration file to include the new nodes. Currently, the node section looks like this:

# please consult the documentation on how to configure custom RKE images.
nodes:
- address: 10.0.1.168
  port: "22"
  internal_address: ""
  role:
  - controlplane
  - worker
  - etcd
  hostname_override: ""
  user: rancher
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []

We need to add the two additional nodes. As this setup is on EC2, you need to specify both the public and the internal IP address of each node.
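
If you are unsure which address goes where, each node can report both itself via the EC2 instance metadata service (a quick sketch; run it on every node):

$ curl -s http://169.254.169.254/latest/meta-data/public-ipv4 ; echo
$ curl -s http://169.254.169.254/latest/meta-data/local-ipv4 ; echo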

The node section in the yaml file then looks like this (I am assuming that you are familiar with security groups and that traffic is allowed between the nodes):

nodes:
- address: 18.195.249.125
  port: "22"
  internal_address: "10.0.1.168"
  role:
  - controlplane
  - worker
  - etcd
  hostname_override: "rancher1"
  user: rancher
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
- address: 3.64.193.173
  port: "22"
  internal_address: "10.0.1.253"
  role:
  - controlplane
  - worker
  - etcd
  hostname_override: "rancher2"
  user: rancher
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
- address: 18.185.105.131
  port: "22"
  internal_address: "10.0.1.73"
  role:
  - controlplane
  - worker
  - etcd
  hostname_override: "rancher3"
  user: rancher
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []

That’s all you need to do. Use “rke up” to apply the changed configuration:

rancher@rancher1:~$ rke up
INFO[0000] Running RKE version: v1.1.15                 
INFO[0000] Initiating Kubernetes cluster                
INFO[0000] [dialer] Setup tunnel for host [3.64.193.173] 
INFO[0000] [dialer] Setup tunnel for host [18.185.105.131] 
INFO[0000] [dialer] Setup tunnel for host [18.195.249.125] 
INFO[0000] Checking if container [cluster-state-deployer] is running on host [3.64.193.173], try #1 
INFO[0000] Image [rancher/rke-tools:v0.1.72] exists on host [3.64.193.173] 
INFO[0000] Starting container [cluster-state-deployer] on host [3.64.193.173], try #1 
INFO[0000] [state] Successfully started [cluster-state-deployer] container on host [3.64.193.173] 
INFO[0000] Checking if container [cluster-state-deployer] is running on host [18.185.105.131], try #1 
INFO[0000] Image [rancher/rke-tools:v0.1.72] exists on host [18.185.105.131] 
INFO[0000] Starting container [cluster-state-deployer] on host [18.185.105.131], try #1 
INFO[0001] [state] Successfully started [cluster-state-deployer] container on host [18.185.105.131] 
INFO[0001] Checking if container [cluster-state-deployer] is running on host [18.195.249.125], try #1 
INFO[0001] [certificates] Generating CA kubernetes certificates 
INFO[0001] [certificates] Generating Kubernetes API server aggregation layer requestheader client CA certificates 
INFO[0002] [certificates] GenerateServingCertificate is disabled, checking if there are unused kubelet certificates 
INFO[0002] [certificates] Generating Kubernetes API server certificates 
INFO[0003] [certificates] Generating Service account token key 
INFO[0003] [certificates] Generating Kube Controller certificates 
INFO[0003] [certificates] Generating Kube Scheduler certificates 
INFO[0003] [certificates] Generating Kube Proxy certificates 
INFO[0003] [certificates] Generating Node certificate   
INFO[0003] [certificates] Generating admin certificates and kubeconfig 
INFO[0003] [certificates] Generating Kubernetes API server proxy client certificates 
INFO[0004] [certificates] Generating kube-etcd-10-0-1-168 certificate and key 
INFO[0004] [certificates] Generating kube-etcd-10-0-1-253 certificate and key 
INFO[0004] [certificates] Generating kube-etcd-10-0-1-73 certificate and key 
INFO[0005] Successfully Deployed state file at [./cluster.rkestate] 
INFO[0005] Building Kubernetes cluster                  
INFO[0005] [dialer] Setup tunnel for host [18.185.105.131] 
INFO[0005] [dialer] Setup tunnel for host [18.195.249.125] 
INFO[0005] [dialer] Setup tunnel for host [3.64.193.173] 
INFO[0005] [network] Deploying port listener containers 
INFO[0005] Image [rancher/rke-tools:v0.1.72] exists on host [3.64.193.173] 
INFO[0005] Image [rancher/rke-tools:v0.1.72] exists on host [18.185.105.131] 
INFO[0005] Image [rancher/rke-tools:v0.1.72] exists on host [18.195.249.125] 
INFO[0005] Starting container [rke-etcd-port-listener] on host [18.185.105.131], try #1 
INFO[0005] Starting container [rke-etcd-port-listener] on host [18.195.249.125], try #1 
INFO[0005] Starting container [rke-etcd-port-listener] on host [3.64.193.173], try #1 
INFO[0005] [network] Successfully started [rke-etcd-port-listener] container on host [18.185.105.131] 
INFO[0005] [network] Successfully started [rke-etcd-port-listener] container on host [18.195.249.125] 
INFO[0005] [network] Successfully started [rke-etcd-port-listener] container on host [3.64.193.173] 
INFO[0005] Image [rancher/rke-tools:v0.1.72] exists on host [3.64.193.173] 
INFO[0005] Image [rancher/rke-tools:v0.1.72] exists on host [18.195.249.125] 
INFO[0005] Image [rancher/rke-tools:v0.1.72] exists on host [18.185.105.131] 
INFO[0006] Starting container [rke-cp-port-listener] on host [18.195.249.125], try #1 
INFO[0006] Starting container [rke-cp-port-listener] on host [18.185.105.131], try #1 
INFO[0006] Starting container [rke-cp-port-listener] on host [3.64.193.173], try #1 
INFO[0006] [network] Successfully started [rke-cp-port-listener] container on host [3.64.193.173] 
INFO[0006] [network] Successfully started [rke-cp-port-listener] container on host [18.185.105.131] 
INFO[0006] [network] Successfully started [rke-cp-port-listener] container on host [18.195.249.125] 
INFO[0006] Image [rancher/rke-tools:v0.1.72] exists on host [18.195.249.125] 
INFO[0006] Image [rancher/rke-tools:v0.1.72] exists on host [3.64.193.173] 
INFO[0006] Image [rancher/rke-tools:v0.1.72] exists on host [18.185.105.131] 
INFO[0006] Starting container [rke-worker-port-listener] on host [18.185.105.131], try #1 
INFO[0006] Starting container [rke-worker-port-listener] on host [18.195.249.125], try #1 
INFO[0006] Starting container [rke-worker-port-listener] on host [3.64.193.173], try #1 
INFO[0006] [network] Successfully started [rke-worker-port-listener] container on host [3.64.193.173] 
INFO[0006] [network] Successfully started [rke-worker-port-listener] container on host [18.185.105.131] 
INFO[0006] [network] Successfully started [rke-worker-port-listener] container on host [18.195.249.125] 
INFO[0006] [network] Port listener containers deployed successfully 
INFO[0006] [network] Running etcd <-> etcd port checks 
INFO[0006] Image [rancher/rke-tools:v0.1.72] exists on host [3.64.193.173] 
INFO[0006] Image [rancher/rke-tools:v0.1.72] exists on host [18.195.249.125] 
INFO[0006] Image [rancher/rke-tools:v0.1.72] exists on host [18.185.105.131] 
INFO[0007] Starting container [rke-port-checker] on host [18.185.105.131], try #1 
INFO[0007] Starting container [rke-port-checker] on host [3.64.193.173], try #1 
INFO[0007] Starting container [rke-port-checker] on host [18.195.249.125], try #1 
INFO[0007] [network] Successfully started [rke-port-checker] container on host [3.64.193.173] 
INFO[0007] [network] Successfully started [rke-port-checker] container on host [18.185.105.131] 
INFO[0007] [network] Successfully started [rke-port-checker] container on host [18.195.249.125] 
INFO[0007] Removing container [rke-port-checker] on host [3.64.193.173], try #1 
INFO[0007] Removing container [rke-port-checker] on host [18.195.249.125], try #1 
INFO[0008] Removing container [rke-port-checker] on host [18.185.105.131], try #1 
INFO[0008] [network] Running control plane -> etcd port checks 
INFO[0008] Image [rancher/rke-tools:v0.1.72] exists on host [18.185.105.131] 
INFO[0008] Image [rancher/rke-tools:v0.1.72] exists on host [18.195.249.125] 
INFO[0008] Image [rancher/rke-tools:v0.1.72] exists on host [3.64.193.173] 
INFO[0008] Starting container [rke-port-checker] on host [18.195.249.125], try #1 
INFO[0008] Starting container [rke-port-checker] on host [3.64.193.173], try #1 
INFO[0008] Starting container [rke-port-checker] on host [18.185.105.131], try #1 
INFO[0008] [network] Successfully started [rke-port-checker] container on host [18.195.249.125] 
INFO[0008] [network] Successfully started [rke-port-checker] container on host [3.64.193.173] 
INFO[0008] [network] Successfully started [rke-port-checker] container on host [18.185.105.131] 
INFO[0008] Removing container [rke-port-checker] on host [18.195.249.125], try #1 
INFO[0008] Removing container [rke-port-checker] on host [3.64.193.173], try #1 
INFO[0008] Removing container [rke-port-checker] on host [18.185.105.131], try #1 
INFO[0008] [network] Running control plane -> worker port checks 
INFO[0008] Image [rancher/rke-tools:v0.1.72] exists on host [3.64.193.173] 
INFO[0008] Image [rancher/rke-tools:v0.1.72] exists on host [18.185.105.131] 
INFO[0008] Image [rancher/rke-tools:v0.1.72] exists on host [18.195.249.125] 
INFO[0009] Starting container [rke-port-checker] on host [18.185.105.131], try #1 
INFO[0009] Starting container [rke-port-checker] on host [18.195.249.125], try #1 
INFO[0009] Starting container [rke-port-checker] on host [3.64.193.173], try #1 
INFO[0009] [network] Successfully started [rke-port-checker] container on host [18.195.249.125] 
INFO[0009] [network] Successfully started [rke-port-checker] container on host [18.185.105.131] 
INFO[0009] [network] Successfully started [rke-port-checker] container on host [3.64.193.173] 
INFO[0009] Removing container [rke-port-checker] on host [18.195.249.125], try #1 
INFO[0009] Removing container [rke-port-checker] on host [18.185.105.131], try #1 
INFO[0009] Removing container [rke-port-checker] on host [3.64.193.173], try #1 
INFO[0009] [network] Running workers -> control plane port checks 
INFO[0009] Image [rancher/rke-tools:v0.1.72] exists on host [3.64.193.173] 
INFO[0009] Image [rancher/rke-tools:v0.1.72] exists on host [18.195.249.125] 
INFO[0009] Image [rancher/rke-tools:v0.1.72] exists on host [18.185.105.131] 
INFO[0009] Starting container [rke-port-checker] on host [3.64.193.173], try #1 
INFO[0009] Starting container [rke-port-checker] on host [18.185.105.131], try #1 
INFO[0009] Starting container [rke-port-checker] on host [18.195.249.125], try #1 
INFO[0009] [network] Successfully started [rke-port-checker] container on host [3.64.193.173] 
INFO[0009] [network] Successfully started [rke-port-checker] container on host [18.185.105.131] 
INFO[0009] [network] Successfully started [rke-port-checker] container on host [18.195.249.125] 
INFO[0009] Removing container [rke-port-checker] on host [3.64.193.173], try #1 
INFO[0009] Removing container [rke-port-checker] on host [18.185.105.131], try #1 
INFO[0009] Removing container [rke-port-checker] on host [18.195.249.125], try #1 
INFO[0009] [network] Checking KubeAPI port Control Plane hosts 
INFO[0009] [network] Removing port listener containers  
INFO[0009] Removing container [rke-etcd-port-listener] on host [18.195.249.125], try #1 
INFO[0009] Removing container [rke-etcd-port-listener] on host [18.185.105.131], try #1 
INFO[0009] Removing container [rke-etcd-port-listener] on host [3.64.193.173], try #1 
INFO[0010] [remove/rke-etcd-port-listener] Successfully removed container on host [3.64.193.173] 
INFO[0010] [remove/rke-etcd-port-listener] Successfully removed container on host [18.185.105.131] 
INFO[0010] [remove/rke-etcd-port-listener] Successfully removed container on host [18.195.249.125] 
INFO[0010] Removing container [rke-cp-port-listener] on host [18.195.249.125], try #1 
INFO[0010] Removing container [rke-cp-port-listener] on host [3.64.193.173], try #1 
INFO[0010] Removing container [rke-cp-port-listener] on host [18.185.105.131], try #1 
INFO[0010] [remove/rke-cp-port-listener] Successfully removed container on host [18.185.105.131] 
INFO[0010] [remove/rke-cp-port-listener] Successfully removed container on host [18.195.249.125] 
INFO[0010] [remove/rke-cp-port-listener] Successfully removed container on host [3.64.193.173] 
INFO[0010] Removing container [rke-worker-port-listener] on host [18.195.249.125], try #1 
INFO[0010] Removing container [rke-worker-port-listener] on host [18.185.105.131], try #1 
INFO[0010] Removing container [rke-worker-port-listener] on host [3.64.193.173], try #1 
INFO[0010] [remove/rke-worker-port-listener] Successfully removed container on host [3.64.193.173] 
INFO[0010] [remove/rke-worker-port-listener] Successfully removed container on host [18.185.105.131] 
INFO[0010] [remove/rke-worker-port-listener] Successfully removed container on host [18.195.249.125] 
INFO[0010] [network] Port listener containers removed successfully 
INFO[0010] [certificates] Deploying kubernetes certificates to Cluster nodes 
INFO[0010] Checking if container [cert-deployer] is running on host [18.195.249.125], try #1 
INFO[0010] Checking if container [cert-deployer] is running on host [3.64.193.173], try #1 
INFO[0010] Checking if container [cert-deployer] is running on host [18.185.105.131], try #1 
INFO[0010] Image [rancher/rke-tools:v0.1.72] exists on host [18.185.105.131] 
INFO[0010] Image [rancher/rke-tools:v0.1.72] exists on host [3.64.193.173] 
INFO[0010] Image [rancher/rke-tools:v0.1.72] exists on host [18.195.249.125] 
INFO[0011] Starting container [cert-deployer] on host [3.64.193.173], try #1 
INFO[0011] Starting container [cert-deployer] on host [18.185.105.131], try #1 
INFO[0011] Starting container [cert-deployer] on host [18.195.249.125], try #1 
INFO[0011] Checking if container [cert-deployer] is running on host [3.64.193.173], try #1 
INFO[0011] Checking if container [cert-deployer] is running on host [18.185.105.131], try #1 
INFO[0011] Checking if container [cert-deployer] is running on host [18.195.249.125], try #1 
INFO[0016] Checking if container [cert-deployer] is running on host [3.64.193.173], try #1 
INFO[0016] Removing container [cert-deployer] on host [3.64.193.173], try #1 
INFO[0016] Checking if container [cert-deployer] is running on host [18.185.105.131], try #1 
INFO[0016] Removing container [cert-deployer] on host [18.185.105.131], try #1 
INFO[0016] Checking if container [cert-deployer] is running on host [18.195.249.125], try #1 
INFO[0016] Removing container [cert-deployer] on host [18.195.249.125], try #1 
INFO[0016] [reconcile] Rebuilding and updating local kube config 
INFO[0016] Successfully Deployed local admin kubeconfig at [./kube_config_cluster.yml] 
INFO[0016] Successfully Deployed local admin kubeconfig at [./kube_config_cluster.yml] 
INFO[0016] Successfully Deployed local admin kubeconfig at [./kube_config_cluster.yml] 
INFO[0016] [certificates] Successfully deployed kubernetes certificates to Cluster nodes 
INFO[0016] [file-deploy] Deploying file [/etc/kubernetes/audit-policy.yaml] to node [18.195.249.125] 
INFO[0016] Image [rancher/rke-tools:v0.1.72] exists on host [18.195.249.125] 
INFO[0016] Starting container [file-deployer] on host [18.195.249.125], try #1 
INFO[0017] Successfully started [file-deployer] container on host [18.195.249.125] 
INFO[0017] Waiting for [file-deployer] container to exit on host [18.195.249.125] 
INFO[0017] Waiting for [file-deployer] container to exit on host [18.195.249.125] 
INFO[0017] Container [file-deployer] is still running on host [18.195.249.125]: stderr: [], stdout: [] 
INFO[0018] Waiting for [file-deployer] container to exit on host [18.195.249.125] 
INFO[0018] Removing container [file-deployer] on host [18.195.249.125], try #1 
INFO[0018] [remove/file-deployer] Successfully removed container on host [18.195.249.125] 
INFO[0018] [file-deploy] Deploying file [/etc/kubernetes/audit-policy.yaml] to node [3.64.193.173] 
INFO[0018] Image [rancher/rke-tools:v0.1.72] exists on host [3.64.193.173] 
INFO[0018] Starting container [file-deployer] on host [3.64.193.173], try #1 
INFO[0018] Successfully started [file-deployer] container on host [3.64.193.173] 
INFO[0018] Waiting for [file-deployer] container to exit on host [3.64.193.173] 
INFO[0018] Waiting for [file-deployer] container to exit on host [3.64.193.173] 
INFO[0018] Container [file-deployer] is still running on host [3.64.193.173]: stderr: [], stdout: [] 
INFO[0019] Waiting for [file-deployer] container to exit on host [3.64.193.173] 
INFO[0019] Removing container [file-deployer] on host [3.64.193.173], try #1 
INFO[0019] [remove/file-deployer] Successfully removed container on host [3.64.193.173] 
INFO[0019] [file-deploy] Deploying file [/etc/kubernetes/audit-policy.yaml] to node [18.185.105.131] 
INFO[0019] Image [rancher/rke-tools:v0.1.72] exists on host [18.185.105.131] 
INFO[0019] Starting container [file-deployer] on host [18.185.105.131], try #1 
INFO[0020] Successfully started [file-deployer] container on host [18.185.105.131] 
INFO[0020] Waiting for [file-deployer] container to exit on host [18.185.105.131] 
INFO[0020] Waiting for [file-deployer] container to exit on host [18.185.105.131] 
INFO[0020] Container [file-deployer] is still running on host [18.185.105.131]: stderr: [], stdout: [] 
INFO[0021] Waiting for [file-deployer] container to exit on host [18.185.105.131] 
INFO[0021] Removing container [file-deployer] on host [18.185.105.131], try #1 
INFO[0021] [remove/file-deployer] Successfully removed container on host [18.185.105.131] 
INFO[0021] [/etc/kubernetes/audit-policy.yaml] Successfully deployed audit policy file to Cluster control nodes 
INFO[0021] [reconcile] Reconciling cluster state        
INFO[0021] [reconcile] This is newly generated cluster  
INFO[0021] Pre-pulling kubernetes images                
INFO[0021] Pulling image [rancher/hyperkube:v1.18.16-rancher1] on host [18.185.105.131], try #1 
INFO[0021] Pulling image [rancher/hyperkube:v1.18.16-rancher1] on host [3.64.193.173], try #1 
INFO[0021] Image [rancher/hyperkube:v1.18.16-rancher1] exists on host [18.195.249.125] 
INFO[0047] Image [rancher/hyperkube:v1.18.16-rancher1] exists on host [3.64.193.173] 
INFO[0047] Image [rancher/hyperkube:v1.18.16-rancher1] exists on host [18.185.105.131] 
INFO[0047] Kubernetes images pulled successfully        
INFO[0047] [etcd] Building up etcd plane..              
INFO[0047] Image [rancher/rke-tools:v0.1.72] exists on host [18.195.249.125] 
INFO[0047] Starting container [etcd-fix-perm] on host [18.195.249.125], try #1 
INFO[0047] Successfully started [etcd-fix-perm] container on host [18.195.249.125] 
INFO[0047] Waiting for [etcd-fix-perm] container to exit on host [18.195.249.125] 
INFO[0047] Waiting for [etcd-fix-perm] container to exit on host [18.195.249.125] 
INFO[0047] Container [etcd-fix-perm] is still running on host [18.195.249.125]: stderr: [], stdout: [] 
INFO[0048] Waiting for [etcd-fix-perm] container to exit on host [18.195.249.125] 
INFO[0048] Removing container [etcd-fix-perm] on host [18.195.249.125], try #1 
INFO[0048] [remove/etcd-fix-perm] Successfully removed container on host [18.195.249.125] 
INFO[0048] Image [rancher/coreos-etcd:v3.4.3-rancher1] exists on host [18.195.249.125] 
INFO[0048] Starting container [etcd] on host [18.195.249.125], try #1 
INFO[0049] [etcd] Successfully started [etcd] container on host [18.195.249.125] 
INFO[0049] [etcd] Running rolling snapshot container [etcd-snapshot-once] on host [18.195.249.125] 
INFO[0049] Image [rancher/rke-tools:v0.1.72] exists on host [18.195.249.125] 
INFO[0049] Starting container [etcd-rolling-snapshots] on host [18.195.249.125], try #1 
INFO[0049] [etcd] Successfully started [etcd-rolling-snapshots] container on host [18.195.249.125] 
INFO[0054] Image [rancher/rke-tools:v0.1.72] exists on host [18.195.249.125] 
INFO[0054] Starting container [rke-bundle-cert] on host [18.195.249.125], try #1 
INFO[0054] [certificates] Successfully started [rke-bundle-cert] container on host [18.195.249.125] 
INFO[0054] Waiting for [rke-bundle-cert] container to exit on host [18.195.249.125] 
INFO[0054] Container [rke-bundle-cert] is still running on host [18.195.249.125]: stderr: [], stdout: [] 
INFO[0055] Waiting for [rke-bundle-cert] container to exit on host [18.195.249.125] 
INFO[0055] [certificates] successfully saved certificate bundle [/opt/rke/etcd-snapshots//pki.bundle.tar.gz] on host [18.195.249.125] 
INFO[0055] Removing container [rke-bundle-cert] on host [18.195.249.125], try #1 
INFO[0056] Image [rancher/rke-tools:v0.1.72] exists on host [18.195.249.125] 
INFO[0056] Starting container [rke-log-linker] on host [18.195.249.125], try #1 
INFO[0056] [etcd] Successfully started [rke-log-linker] container on host [18.195.249.125] 
INFO[0056] Removing container [rke-log-linker] on host [18.195.249.125], try #1 
INFO[0056] [remove/rke-log-linker] Successfully removed container on host [18.195.249.125] 
INFO[0056] Image [rancher/rke-tools:v0.1.72] exists on host [3.64.193.173] 
INFO[0062] Starting container [etcd-fix-perm] on host [3.64.193.173], try #1 
INFO[0062] Successfully started [etcd-fix-perm] container on host [3.64.193.173] 
INFO[0062] Waiting for [etcd-fix-perm] container to exit on host [3.64.193.173] 
INFO[0062] Waiting for [etcd-fix-perm] container to exit on host [3.64.193.173] 
INFO[0062] Container [etcd-fix-perm] is still running on host [3.64.193.173]: stderr: [], stdout: [] 
INFO[0063] Waiting for [etcd-fix-perm] container to exit on host [3.64.193.173] 
INFO[0063] Removing container [etcd-fix-perm] on host [3.64.193.173], try #1 
INFO[0063] [remove/etcd-fix-perm] Successfully removed container on host [3.64.193.173] 
INFO[0063] Pulling image [rancher/coreos-etcd:v3.4.3-rancher1] on host [3.64.193.173], try #1 
INFO[0067] Image [rancher/coreos-etcd:v3.4.3-rancher1] exists on host [3.64.193.173] 
INFO[0067] Starting container [etcd] on host [3.64.193.173], try #1 
INFO[0067] [etcd] Successfully started [etcd] container on host [3.64.193.173] 
INFO[0067] [etcd] Running rolling snapshot container [etcd-snapshot-once] on host [3.64.193.173] 
INFO[0067] Image [rancher/rke-tools:v0.1.72] exists on host [3.64.193.173] 
INFO[0067] Starting container [etcd-rolling-snapshots] on host [3.64.193.173], try #1 
INFO[0067] [etcd] Successfully started [etcd-rolling-snapshots] container on host [3.64.193.173] 
INFO[0072] Image [rancher/rke-tools:v0.1.72] exists on host [3.64.193.173] 
INFO[0073] Starting container [rke-bundle-cert] on host [3.64.193.173], try #1 
INFO[0073] [certificates] Successfully started [rke-bundle-cert] container on host [3.64.193.173] 
INFO[0073] Waiting for [rke-bundle-cert] container to exit on host [3.64.193.173] 
INFO[0073] Container [rke-bundle-cert] is still running on host [3.64.193.173]: stderr: [], stdout: [] 
INFO[0074] Waiting for [rke-bundle-cert] container to exit on host [3.64.193.173] 
INFO[0074] [certificates] successfully saved certificate bundle [/opt/rke/etcd-snapshots//pki.bundle.tar.gz] on host [3.64.193.173] 
INFO[0074] Removing container [rke-bundle-cert] on host [3.64.193.173], try #1 
INFO[0074] Image [rancher/rke-tools:v0.1.72] exists on host [3.64.193.173] 
INFO[0074] Starting container [rke-log-linker] on host [3.64.193.173], try #1 
INFO[0074] [etcd] Successfully started [rke-log-linker] container on host [3.64.193.173] 
INFO[0074] Removing container [rke-log-linker] on host [3.64.193.173], try #1 
INFO[0075] [remove/rke-log-linker] Successfully removed container on host [3.64.193.173] 
INFO[0075] Image [rancher/rke-tools:v0.1.72] exists on host [18.185.105.131] 
INFO[0078] Starting container [etcd-fix-perm] on host [18.185.105.131], try #1 
INFO[0079] Successfully started [etcd-fix-perm] container on host [18.185.105.131] 
INFO[0079] Waiting for [etcd-fix-perm] container to exit on host [18.185.105.131] 
INFO[0079] Waiting for [etcd-fix-perm] container to exit on host [18.185.105.131] 
INFO[0079] Container [etcd-fix-perm] is still running on host [18.185.105.131]: stderr: [], stdout: [] 
INFO[0080] Waiting for [etcd-fix-perm] container to exit on host [18.185.105.131] 
INFO[0080] Removing container [etcd-fix-perm] on host [18.185.105.131], try #1 
INFO[0080] [remove/etcd-fix-perm] Successfully removed container on host [18.185.105.131] 
INFO[0080] Pulling image [rancher/coreos-etcd:v3.4.3-rancher1] on host [18.185.105.131], try #1 
INFO[0084] Image [rancher/coreos-etcd:v3.4.3-rancher1] exists on host [18.185.105.131] 
INFO[0084] Starting container [etcd] on host [18.185.105.131], try #1 
INFO[0084] [etcd] Successfully started [etcd] container on host [18.185.105.131] 
INFO[0084] [etcd] Running rolling snapshot container [etcd-snapshot-once] on host [18.185.105.131] 
INFO[0084] Image [rancher/rke-tools:v0.1.72] exists on host [18.185.105.131] 
INFO[0084] Starting container [etcd-rolling-snapshots] on host [18.185.105.131], try #1 
INFO[0084] [etcd] Successfully started [etcd-rolling-snapshots] container on host [18.185.105.131] 
INFO[0089] Image [rancher/rke-tools:v0.1.72] exists on host [18.185.105.131] 
INFO[0089] Starting container [rke-bundle-cert] on host [18.185.105.131], try #1 
INFO[0090] [certificates] Successfully started [rke-bundle-cert] container on host [18.185.105.131] 
INFO[0090] Waiting for [rke-bundle-cert] container to exit on host [18.185.105.131] 
INFO[0090] Container [rke-bundle-cert] is still running on host [18.185.105.131]: stderr: [], stdout: [] 
INFO[0091] Waiting for [rke-bundle-cert] container to exit on host [18.185.105.131] 
INFO[0091] [certificates] successfully saved certificate bundle [/opt/rke/etcd-snapshots//pki.bundle.tar.gz] on host [18.185.105.131] 
INFO[0091] Removing container [rke-bundle-cert] on host [18.185.105.131], try #1 
INFO[0091] Image [rancher/rke-tools:v0.1.72] exists on host [18.185.105.131] 
INFO[0091] Starting container [rke-log-linker] on host [18.185.105.131], try #1 
INFO[0091] [etcd] Successfully started [rke-log-linker] container on host [18.185.105.131] 
INFO[0091] Removing container [rke-log-linker] on host [18.185.105.131], try #1 
INFO[0092] [remove/rke-log-linker] Successfully removed container on host [18.185.105.131] 
INFO[0092] [etcd] Successfully started etcd plane.. Checking etcd cluster health 
INFO[0092] [etcd] etcd host [18.195.249.125] reported healthy=true 
INFO[0092] [controlplane] Building up Controller Plane.. 
INFO[0092] Checking if container [service-sidekick] is running on host [18.195.249.125], try #1 
INFO[0092] Checking if container [service-sidekick] is running on host [18.185.105.131], try #1 
INFO[0092] Checking if container [service-sidekick] is running on host [3.64.193.173], try #1 
INFO[0092] Image [rancher/rke-tools:v0.1.72] exists on host [3.64.193.173] 
INFO[0092] Image [rancher/rke-tools:v0.1.72] exists on host [18.185.105.131] 
INFO[0092] Image [rancher/rke-tools:v0.1.72] exists on host [18.195.249.125] 
INFO[0092] Image [rancher/hyperkube:v1.18.16-rancher1] exists on host [18.185.105.131] 
INFO[0092] Image [rancher/hyperkube:v1.18.16-rancher1] exists on host [3.64.193.173] 
INFO[0092] Image [rancher/hyperkube:v1.18.16-rancher1] exists on host [18.195.249.125] 
INFO[0092] Starting container [kube-apiserver] on host [18.185.105.131], try #1 
INFO[0092] Starting container [kube-apiserver] on host [3.64.193.173], try #1 
INFO[0092] Starting container [kube-apiserver] on host [18.195.249.125], try #1 
INFO[0092] [controlplane] Successfully started [kube-apiserver] container on host [18.185.105.131] 
INFO[0092] [healthcheck] Start Healthcheck on service [kube-apiserver] on host [18.185.105.131] 
INFO[0092] [controlplane] Successfully started [kube-apiserver] container on host [18.195.249.125] 
INFO[0092] [healthcheck] Start Healthcheck on service [kube-apiserver] on host [18.195.249.125] 
INFO[0092] [controlplane] Successfully started [kube-apiserver] container on host [3.64.193.173] 
INFO[0092] [healthcheck] Start Healthcheck on service [kube-apiserver] on host [3.64.193.173] 
INFO[0102] [healthcheck] service [kube-apiserver] on host [18.185.105.131] is healthy 
INFO[0102] Image [rancher/rke-tools:v0.1.72] exists on host [18.185.105.131] 
INFO[0102] [healthcheck] service [kube-apiserver] on host [3.64.193.173] is healthy 
INFO[0102] [healthcheck] service [kube-apiserver] on host [18.195.249.125] is healthy 
INFO[0102] Image [rancher/rke-tools:v0.1.72] exists on host [3.64.193.173] 
INFO[0102] Image [rancher/rke-tools:v0.1.72] exists on host [18.195.249.125] 
INFO[0102] Starting container [rke-log-linker] on host [18.185.105.131], try #1 
INFO[0102] Starting container [rke-log-linker] on host [3.64.193.173], try #1 
INFO[0102] Starting container [rke-log-linker] on host [18.195.249.125], try #1 
INFO[0103] [controlplane] Successfully started [rke-log-linker] container on host [18.185.105.131] 
INFO[0103] Removing container [rke-log-linker] on host [18.185.105.131], try #1 
INFO[0103] [controlplane] Successfully started [rke-log-linker] container on host [3.64.193.173] 
INFO[0103] Removing container [rke-log-linker] on host [3.64.193.173], try #1 
INFO[0103] [controlplane] Successfully started [rke-log-linker] container on host [18.195.249.125] 
INFO[0103] Removing container [rke-log-linker] on host [18.195.249.125], try #1 
INFO[0103] [remove/rke-log-linker] Successfully removed container on host [18.185.105.131] 
INFO[0103] Image [rancher/hyperkube:v1.18.16-rancher1] exists on host [18.185.105.131] 
INFO[0103] [remove/rke-log-linker] Successfully removed container on host [3.64.193.173] 
INFO[0103] Image [rancher/hyperkube:v1.18.16-rancher1] exists on host [3.64.193.173] 
INFO[0103] [remove/rke-log-linker] Successfully removed container on host [18.195.249.125] 
INFO[0103] Image [rancher/hyperkube:v1.18.16-rancher1] exists on host [18.195.249.125] 
INFO[0103] Starting container [kube-controller-manager] on host [18.185.105.131], try #1 
INFO[0103] Starting container [kube-controller-manager] on host [3.64.193.173], try #1 
INFO[0103] Starting container [kube-controller-manager] on host [18.195.249.125], try #1 
INFO[0103] [controlplane] Successfully started [kube-controller-manager] container on host [18.185.105.131] 
INFO[0103] [healthcheck] Start Healthcheck on service [kube-controller-manager] on host [18.185.105.131] 
INFO[0103] [controlplane] Successfully started [kube-controller-manager] container on host [3.64.193.173] 
INFO[0103] [healthcheck] Start Healthcheck on service [kube-controller-manager] on host [3.64.193.173] 
INFO[0103] [controlplane] Successfully started [kube-controller-manager] container on host [18.195.249.125] 
INFO[0103] [healthcheck] Start Healthcheck on service [kube-controller-manager] on host [18.195.249.125] 
INFO[0108] [healthcheck] service [kube-controller-manager] on host [18.185.105.131] is healthy 
INFO[0108] Image [rancher/rke-tools:v0.1.72] exists on host [18.185.105.131] 
INFO[0108] [healthcheck] service [kube-controller-manager] on host [3.64.193.173] is healthy 
INFO[0108] Image [rancher/rke-tools:v0.1.72] exists on host [3.64.193.173] 
INFO[0108] [healthcheck] service [kube-controller-manager] on host [18.195.249.125] is healthy 
INFO[0108] Image [rancher/rke-tools:v0.1.72] exists on host [18.195.249.125] 
INFO[0109] Starting container [rke-log-linker] on host [18.185.105.131], try #1 
INFO[0109] Starting container [rke-log-linker] on host [3.64.193.173], try #1 
INFO[0109] Starting container [rke-log-linker] on host [18.195.249.125], try #1 
INFO[0109] [controlplane] Successfully started [rke-log-linker] container on host [18.185.105.131] 
INFO[0109] [controlplane] Successfully started [rke-log-linker] container on host [3.64.193.173] 
INFO[0109] Removing container [rke-log-linker] on host [3.64.193.173], try #1 
INFO[0109] Removing container [rke-log-linker] on host [18.185.105.131], try #1 
INFO[0109] [controlplane] Successfully started [rke-log-linker] container on host [18.195.249.125] 
INFO[0109] Removing container [rke-log-linker] on host [18.195.249.125], try #1 
INFO[0109] [remove/rke-log-linker] Successfully removed container on host [3.64.193.173] 
INFO[0109] Image [rancher/hyperkube:v1.18.16-rancher1] exists on host [3.64.193.173] 
INFO[0109] [remove/rke-log-linker] Successfully removed container on host [18.185.105.131] 
INFO[0109] Image [rancher/hyperkube:v1.18.16-rancher1] exists on host [18.185.105.131] 
INFO[0109] Starting container [kube-scheduler] on host [3.64.193.173], try #1 
INFO[0109] [remove/rke-log-linker] Successfully removed container on host [18.195.249.125] 
INFO[0109] Image [rancher/hyperkube:v1.18.16-rancher1] exists on host [18.195.249.125] 
INFO[0109] Starting container [kube-scheduler] on host [18.185.105.131], try #1 
INFO[0109] Starting container [kube-scheduler] on host [18.195.249.125], try #1 
INFO[0109] [controlplane] Successfully started [kube-scheduler] container on host [3.64.193.173] 
INFO[0109] [healthcheck] Start Healthcheck on service [kube-scheduler] on host [3.64.193.173] 
INFO[0109] [controlplane] Successfully started [kube-scheduler] container on host [18.185.105.131] 
INFO[0109] [healthcheck] Start Healthcheck on service [kube-scheduler] on host [18.185.105.131] 
INFO[0109] [controlplane] Successfully started [kube-scheduler] container on host [18.195.249.125] 
INFO[0109] [healthcheck] Start Healthcheck on service [kube-scheduler] on host [18.195.249.125] 
INFO[0115] [healthcheck] service [kube-scheduler] on host [3.64.193.173] is healthy 
INFO[0115] [healthcheck] service [kube-scheduler] on host [18.185.105.131] is healthy 
INFO[0115] Image [rancher/rke-tools:v0.1.72] exists on host [3.64.193.173] 
INFO[0115] Image [rancher/rke-tools:v0.1.72] exists on host [18.185.105.131] 
INFO[0115] [healthcheck] service [kube-scheduler] on host [18.195.249.125] is healthy 
INFO[0115] Image [rancher/rke-tools:v0.1.72] exists on host [18.195.249.125] 
INFO[0115] Starting container [rke-log-linker] on host [3.64.193.173], try #1 
INFO[0115] Starting container [rke-log-linker] on host [18.185.105.131], try #1 
INFO[0115] Starting container [rke-log-linker] on host [18.195.249.125], try #1 
INFO[0115] [controlplane] Successfully started [rke-log-linker] container on host [3.64.193.173] 
INFO[0115] Removing container [rke-log-linker] on host [3.64.193.173], try #1 
INFO[0115] [controlplane] Successfully started [rke-log-linker] container on host [18.185.105.131] 
INFO[0115] Removing container [rke-log-linker] on host [18.185.105.131], try #1 
INFO[0115] [controlplane] Successfully started [rke-log-linker] container on host [18.195.249.125] 
INFO[0115] Removing container [rke-log-linker] on host [18.195.249.125], try #1 
INFO[0115] [remove/rke-log-linker] Successfully removed container on host [3.64.193.173] 
INFO[0115] [remove/rke-log-linker] Successfully removed container on host [18.185.105.131] 
INFO[0116] [remove/rke-log-linker] Successfully removed container on host [18.195.249.125] 
INFO[0116] [controlplane] Successfully started Controller Plane.. 
INFO[0116] [authz] Creating rke-job-deployer ServiceAccount 
INFO[0116] [authz] rke-job-deployer ServiceAccount created successfully 
INFO[0116] [authz] Creating system:node ClusterRoleBinding 
INFO[0116] [authz] system:node ClusterRoleBinding created successfully 
INFO[0116] [authz] Creating kube-apiserver proxy ClusterRole and ClusterRoleBinding 
INFO[0116] [authz] kube-apiserver proxy ClusterRole and ClusterRoleBinding created successfully 
INFO[0116] Successfully Deployed state file at [./cluster.rkestate] 
INFO[0116] [state] Saving full cluster state to Kubernetes 
INFO[0116] [state] Successfully Saved full cluster state to Kubernetes ConfigMap: full-cluster-state 
INFO[0116] [worker] Building up Worker Plane..          
INFO[0116] Checking if container [service-sidekick] is running on host [18.185.105.131], try #1 
INFO[0116] Checking if container [service-sidekick] is running on host [18.195.249.125], try #1 
INFO[0116] Checking if container [service-sidekick] is running on host [3.64.193.173], try #1 
INFO[0116] [sidekick] Sidekick container already created on host [3.64.193.173] 
INFO[0116] [sidekick] Sidekick container already created on host [18.185.105.131] 
INFO[0116] Image [rancher/hyperkube:v1.18.16-rancher1] exists on host [3.64.193.173] 
INFO[0116] Image [rancher/hyperkube:v1.18.16-rancher1] exists on host [18.185.105.131] 
INFO[0116] [sidekick] Sidekick container already created on host [18.195.249.125] 
INFO[0116] Image [rancher/hyperkube:v1.18.16-rancher1] exists on host [18.195.249.125] 
INFO[0116] Starting container [kubelet] on host [3.64.193.173], try #1 
INFO[0116] Starting container [kubelet] on host [18.185.105.131], try #1 
INFO[0116] Starting container [kubelet] on host [18.195.249.125], try #1 
INFO[0116] [worker] Successfully started [kubelet] container on host [3.64.193.173] 
INFO[0116] [healthcheck] Start Healthcheck on service [kubelet] on host [3.64.193.173] 
INFO[0116] [worker] Successfully started [kubelet] container on host [18.185.105.131] 
INFO[0116] [healthcheck] Start Healthcheck on service [kubelet] on host [18.185.105.131] 
INFO[0116] [worker] Successfully started [kubelet] container on host [18.195.249.125] 
INFO[0116] [healthcheck] Start Healthcheck on service [kubelet] on host [18.195.249.125] 
INFO[0121] [healthcheck] service [kubelet] on host [3.64.193.173] is healthy 
INFO[0121] Image [rancher/rke-tools:v0.1.72] exists on host [3.64.193.173] 
INFO[0121] [healthcheck] service [kubelet] on host [18.185.105.131] is healthy 
INFO[0121] Image [rancher/rke-tools:v0.1.72] exists on host [18.185.105.131] 
INFO[0121] [healthcheck] service [kubelet] on host [18.195.249.125] is healthy 
INFO[0121] Image [rancher/rke-tools:v0.1.72] exists on host [18.195.249.125] 
INFO[0121] Starting container [rke-log-linker] on host [3.64.193.173], try #1 
INFO[0121] Starting container [rke-log-linker] on host [18.185.105.131], try #1 
INFO[0121] Starting container [rke-log-linker] on host [18.195.249.125], try #1 
INFO[0121] [worker] Successfully started [rke-log-linker] container on host [3.64.193.173] 
INFO[0121] Removing container [rke-log-linker] on host [3.64.193.173], try #1 
INFO[0121] [worker] Successfully started [rke-log-linker] container on host [18.185.105.131] 
INFO[0121] Removing container [rke-log-linker] on host [18.185.105.131], try #1 
INFO[0122] [worker] Successfully started [rke-log-linker] container on host [18.195.249.125] 
INFO[0122] Removing container [rke-log-linker] on host [18.195.249.125], try #1 
INFO[0122] [remove/rke-log-linker] Successfully removed container on host [3.64.193.173] 
INFO[0122] Image [rancher/hyperkube:v1.18.16-rancher1] exists on host [3.64.193.173] 
INFO[0122] [remove/rke-log-linker] Successfully removed container on host [18.185.105.131] 
INFO[0122] Image [rancher/hyperkube:v1.18.16-rancher1] exists on host [18.185.105.131] 
INFO[0122] Starting container [kube-proxy] on host [3.64.193.173], try #1 
INFO[0122] [remove/rke-log-linker] Successfully removed container on host [18.195.249.125] 
INFO[0122] Starting container [kube-proxy] on host [18.185.105.131], try #1 
INFO[0122] Image [rancher/hyperkube:v1.18.16-rancher1] exists on host [18.195.249.125] 
INFO[0122] [worker] Successfully started [kube-proxy] container on host [3.64.193.173] 
INFO[0122] [healthcheck] Start Healthcheck on service [kube-proxy] on host [3.64.193.173] 
INFO[0122] Starting container [kube-proxy] on host [18.195.249.125], try #1 
INFO[0122] [worker] Successfully started [kube-proxy] container on host [18.185.105.131] 
INFO[0122] [healthcheck] Start Healthcheck on service [kube-proxy] on host [18.185.105.131] 
INFO[0122] [worker] Successfully started [kube-proxy] container on host [18.195.249.125] 
INFO[0122] [healthcheck] Start Healthcheck on service [kube-proxy] on host [18.195.249.125] 
INFO[0127] [healthcheck] service [kube-proxy] on host [3.64.193.173] is healthy 
INFO[0127] Image [rancher/rke-tools:v0.1.72] exists on host [3.64.193.173] 
INFO[0127] [healthcheck] service [kube-proxy] on host [18.185.105.131] is healthy 
INFO[0127] Image [rancher/rke-tools:v0.1.72] exists on host [18.185.105.131] 
INFO[0127] Starting container [rke-log-linker] on host [3.64.193.173], try #1 
INFO[0127] [healthcheck] service [kube-proxy] on host [18.195.249.125] is healthy 
INFO[0127] Image [rancher/rke-tools:v0.1.72] exists on host [18.195.249.125] 
INFO[0127] Starting container [rke-log-linker] on host [18.185.105.131], try #1 
INFO[0127] Starting container [rke-log-linker] on host [18.195.249.125], try #1 
INFO[0128] [worker] Successfully started [rke-log-linker] container on host [3.64.193.173] 
INFO[0128] Removing container [rke-log-linker] on host [3.64.193.173], try #1 
INFO[0128] [worker] Successfully started [rke-log-linker] container on host [18.185.105.131] 
INFO[0128] Removing container [rke-log-linker] on host [18.185.105.131], try #1 
INFO[0128] [worker] Successfully started [rke-log-linker] container on host [18.195.249.125] 
INFO[0128] Removing container [rke-log-linker] on host [18.195.249.125], try #1 
INFO[0128] [remove/rke-log-linker] Successfully removed container on host [3.64.193.173] 
INFO[0128] [remove/rke-log-linker] Successfully removed container on host [18.185.105.131] 
INFO[0128] [remove/rke-log-linker] Successfully removed container on host [18.195.249.125] 
INFO[0128] [worker] Successfully started Worker Plane.. 
INFO[0128] Image [rancher/rke-tools:v0.1.72] exists on host [3.64.193.173] 
INFO[0128] Image [rancher/rke-tools:v0.1.72] exists on host [18.185.105.131] 
INFO[0128] Image [rancher/rke-tools:v0.1.72] exists on host [18.195.249.125] 
INFO[0128] Starting container [rke-log-cleaner] on host [18.185.105.131], try #1 
INFO[0128] Starting container [rke-log-cleaner] on host [18.195.249.125], try #1 
INFO[0128] Starting container [rke-log-cleaner] on host [3.64.193.173], try #1 
INFO[0129] [cleanup] Successfully started [rke-log-cleaner] container on host [18.185.105.131] 
INFO[0129] Removing container [rke-log-cleaner] on host [18.185.105.131], try #1 
INFO[0129] [cleanup] Successfully started [rke-log-cleaner] container on host [18.195.249.125] 
INFO[0129] Removing container [rke-log-cleaner] on host [18.195.249.125], try #1 
INFO[0129] [cleanup] Successfully started [rke-log-cleaner] container on host [3.64.193.173] 
INFO[0129] Removing container [rke-log-cleaner] on host [3.64.193.173], try #1 
INFO[0129] [remove/rke-log-cleaner] Successfully removed container on host [18.185.105.131] 
INFO[0129] [remove/rke-log-cleaner] Successfully removed container on host [18.195.249.125] 
INFO[0129] [remove/rke-log-cleaner] Successfully removed container on host [3.64.193.173] 
INFO[0129] [sync] Syncing nodes Labels and Taints       
INFO[0129] [sync] Successfully synced nodes Labels and Taints 
INFO[0129] [network] Setting up network plugin: canal   
INFO[0129] [addons] Saving ConfigMap for addon rke-network-plugin to Kubernetes 
INFO[0129] [addons] Successfully saved ConfigMap for addon rke-network-plugin to Kubernetes 
INFO[0129] [addons] Executing deploy job rke-network-plugin 
INFO[0134] [addons] Setting up coredns                  
INFO[0134] [addons] Saving ConfigMap for addon rke-coredns-addon to Kubernetes 
INFO[0134] [addons] Successfully saved ConfigMap for addon rke-coredns-addon to Kubernetes 
INFO[0134] [addons] Executing deploy job rke-coredns-addon 
INFO[0139] [addons] CoreDNS deployed successfully       
INFO[0139] [dns] DNS provider coredns deployed successfully 
INFO[0139] [addons] Setting up Metrics Server           
INFO[0139] [addons] Saving ConfigMap for addon rke-metrics-addon to Kubernetes 
INFO[0139] [addons] Successfully saved ConfigMap for addon rke-metrics-addon to Kubernetes 
INFO[0139] [addons] Executing deploy job rke-metrics-addon 
INFO[0144] [addons] Metrics Server deployed successfully 
INFO[0144] [ingress] Setting up nginx ingress controller 
INFO[0144] [addons] Saving ConfigMap for addon rke-ingress-controller to Kubernetes 
INFO[0144] [addons] Successfully saved ConfigMap for addon rke-ingress-controller to Kubernetes 
INFO[0144] [addons] Executing deploy job rke-ingress-controller 
INFO[0149] [ingress] ingress controller nginx deployed successfully 
INFO[0149] [addons] Setting up user addons              
INFO[0149] [addons] no user addons defined              
INFO[0149] Finished building Kubernetes cluster successfully

If all went fine, you should have a three-node cluster after a few minutes:

rancher@rancher1:~$ export KUBECONFIG=kube_config_cluster.yml 
rancher@rancher1:~$ kubectl get nodes -o wide
NAME       STATUS   ROLES                      AGE     VERSION    INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                       KERNEL-VERSION          CONTAINER-RUNTIME
rancher2    Ready    controlplane,etcd,worker   4m51s   v1.18.16   10.0.1.253            Debian GNU/Linux 10 (buster)   4.19.0-14-cloud-amd64   docker://19.3.15
rancher1    Ready    controlplane,etcd,worker   4m51s   v1.18.16   10.0.1.168            Debian GNU/Linux 10 (buster)   4.19.0-14-cloud-amd64   docker://19.3.15
rancher3    Ready    controlplane,etcd,worker   4m51s   v1.18.16   10.0.1.73             Debian GNU/Linux 10 (buster)   4.19.0-14-cloud-amd64   docker://19.3.15

Again, very easy to set up. We still do not have Rancher running, just RKE in a three-node configuration. The installation of Rancher itself will be the topic of the next post.

The article Rancher, up and running, on EC2 – 2 – Three nodes first appeared on the dbi services Blog.

Oracle Database Appliance: ODA patch 19.10 is out


Introduction

Four months after 19.9, here comes the 19.10 version of the Oracle Database Appliance patch. Let's have a look at what's new.

Will my ODA support this 19.10 patch?

The 19.10 release is the same for all ODAs, as usual. The oldest ODA compatible with this release is the X5-2, so don't expect to install this version on older models: the X4-2 is stuck at 18.8, the X3-2 at 18.5 and the V1 at 12.2. If you are still using these models, please consider an upgrade to an X8-2 and 19c to get back to supported hardware and software.

What are the new features?

As usual, 19.10 includes the latest patches for all database homes, including for those versions no longer covered by Premier Support (the provided patches are the latest ones, from January 19th, 2021).

The most important new feature concerns KVM virtualization. It's now possible to create a database VM, for example if you need to isolate the systems running your databases. Up to 19.9, virtualization was dedicated to purposes other than Oracle databases. Now with 19.10, a set of new odacli commands is available, for example: odacli create-dbsystem
It immediately makes me think of a DB system in OCI, the Oracle public cloud, and I'm quite sure the implementation is similar. Basically, it creates a new KVM virtual machine with a dbhome and a database inside. As there are quite a lot of parameters to provide when creating a dbsystem, you feed create-dbsystem with a JSON file, which is very similar to the one used for provisioning the appliance. In this file you give host and network information, user and group settings, database name, database version and database parameters, as if it were an ODA deployment file. Brilliant.

Of course, you can also get an overview of the existing dbsystems on your ODA with: odacli list-dbsystems
You can get more information about a dbsystem, and also start, stop and delete one, with:
odacli describe-dbsystem
odacli start-dbsystem
odacli stop-dbsystem
odacli delete-dbsystem

Quite easy to manage.
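
For illustration, a complete lifecycle could look like the sketch below. Treat it as a sketch only: the -p and -n flags and the file/VM names are assumptions following the usual odacli conventions, so check the built-in help (odacli create-dbsystem -h and so on) for the exact syntax on your appliance.

# create a new KVM dbsystem from a JSON request file (flag and file name are assumptions)
odacli create-dbsystem -p dbsystem01.json
# list, inspect and manage dbsystems (the -n name flag is an assumption)
odacli list-dbsystems
odacli describe-dbsystem -n dbsystem01
odacli stop-dbsystem -n dbsystem01
odacli start-dbsystem -n dbsystem01
odacli delete-dbsystem -n dbsystem01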

Also new is the internal database dedicated to the ODA repository: it has switched from JavaDB to MySQL. In practice this doesn't change anything for you, because you're not supposed to access this database.

The Database Security Assessment Tool is a new item available in the ODA GUI, dedicated to discovering security risks on your databases: a nice addition and definitely useful.

odacli restore-archivelog This is the only new odacli feature apart from dbsystems: it allows you to restore a range of archivelogs. It's probably helpful from time to time if you make use of the odacli backup features. If you're used to making backups with RMAN directly, you'll probably never use this feature.

And that's it regarding new features. That's not bad, because this is probably a mature patch, 19c having been available on ODA for a year now.

Still able to run older databases with 19.10?

19.10 lets you run all database versions starting from 11.2.0.4. Yes, 11gR2 is still there, and a few ODA customers in 2021 are still asking for this version. However, it's highly recommended to migrate to 19c, as it's the only version with Long Term Support available now. Deploying 19.10 and planning to migrate your databases in the next months is definitely a good idea. With ODA you can easily migrate your databases with: odacli upgrade-database
It was supposed to be replaced by: odacli move-database
But this has not happened yet in this version.
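
As a sketch only (the -i and -to flags below are assumptions, verify with odacli upgrade-database -h), moving a database to an already existing 19c dbhome could look like this:

# identify the database and the target dbhome, then upgrade (flags are assumptions)
odacli list-databases
odacli list-dbhomes
odacli upgrade-database -i <database_id> -to <target_19c_dbhome_id>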

Is it possible to upgrade to 19.10 from my current release?

You need to already be running the 19.6 release or later. If your ODA is running 18.8, you will have to patch to 19.6 before applying 19.10. If your ODA is running 18.7 or an older 18.x release, an upgrade to 18.8 is mandatory before patching to 19.6 and then to 19.10. If you are using older versions, I highly recommend a complete reimaging of your ODA: it will be easier than applying 3+ patches, and you'll benefit from a brand new and clean ODA. Patching is still a lot of work, and if you don't patch regularly, getting to the latest version can be challenging. Yes, reimaging is also a lot of work, but it always works.
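
For reference, a single hop (for example 19.6 to 19.10) roughly follows the usual odacli patching sequence sketched below. This is only a sketch: version strings and flags are assumptions, additional steps may apply, and the patch README of your exact release remains the authority.

# register the patch bundle, then update the stack component by component (sketch, flags assumed)
odacli update-repository -f /tmp/<patch_file>.zip
odacli update-dcsagent -v 19.10.0.0.0
odacli update-server -v 19.10.0.0.0
odacli update-dbhome -i <dbhome_id> -v 19.10.0.0.0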

Conclusion

19.10 seems to be a mature release for customers using ODAs, so if you're using a previous 19.x version, don't hesitate. I will be able to try it next week, and I won't fail to share my feedback.

The article Oracle Database Appliance: ODA patch 19.10 is out first appeared on the dbi services Blog.

Rancher, up and running, on EC2 – 3 – Rancher setup


This is the next post in this little Rancher series. After we installed a single node RKE cluster and extended this configuration to three nodes, we will finally install Rancher in this post.

As Rancher is installed with Helm, we need to install that first:

rancher@rancher1:~$ wget https://get.helm.sh/helm-v3.5.2-linux-amd64.tar.gz
rancher@rancher1:~$ tar axf helm-v3.5.2-linux-amd64.tar.gz 
rancher@rancher1:~$ sudo mv linux-amd64/helm /usr/local/bin/
rancher@rancher1:~$ sudo chown rancher:rancher /usr/local/bin/helm
rancher@rancher1:~$ sudo chmod 770 /usr/local/bin/helm
rancher@rancher1:~$ helm version
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: kube_config_cluster.yml
version.BuildInfo{Version:"v3.5.2", GitCommit:"167aac70832d3a384f65f9745335e9fb40169dc2", GitTreeState:"dirty", GoVersion:"go1.15.7"}
rancher@rancher1:~$ chmod 700 kube_config_cluster.yml 
rancher@rancher1:~$ helm version
version.BuildInfo{Version:"v3.5.2", GitCommit:"167aac70832d3a384f65f9745335e9fb40169dc2", GitTreeState:"dirty", GoVersion:"go1.15.7"}

The namespace for Rancher is “cattle-system”, so we need to create it:

rancher@rancher1:~$ kubectl create namespace cattle-system
namespace/cattle-system created
rancher@rancher1:~$ kubectl get namespace
NAME              STATUS   AGE
cattle-system     Active   23s
default           Active   5m25s
ingress-nginx     Active   4m38s
kube-node-lease   Active   5m27s
kube-public       Active   5m27s
kube-system       Active   5m27s

When it comes to certificates with Rancher, you have three options:

  • Rancher Generated Certificates (Default)
  • Let’s Encrypt
  • Certificates from Files

As this environment is just for demo purposes, we'll be using the default, which is a self-signed certificate. For this to work we need to install cert-manager:

rancher@rancher1:~$ kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v1.0.4/cert-manager.crds.yaml
customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io created
rancher@rancher1:~$ kubectl create namespace cert-manager
namespace/cert-manager created
rancher@rancher1:~$ helm repo add jetstack https://charts.jetstack.io
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /home/rancher/kube_config_cluster.yml
"jetstack" has been added to your repositories
rancher@rancher1:~$ helm repo update
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /home/rancher/kube_config_cluster.yml
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "jetstack" chart repository
...Successfully got an update from the "rancher-stable" chart repository
Update Complete. ⎈Happy Helming!⎈
rancher@rancher1:~$ helm install cert-manager jetstack/cert-manager --namespace cert-manager --version v1.0.4
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /home/rancher/kube_config_cluster.yml
NAME: cert-manager
LAST DEPLOYED: Tue Mar  9 10:05:10 2021
NAMESPACE: cert-manager
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
cert-manager has been deployed successfully!

In order to begin issuing certificates, you will need to set up a ClusterIssuer
or Issuer resource (for example, by creating a 'letsencrypt-staging' issuer).

More information on the different types of issuers and how to configure them
can be found in our documentation:

https://cert-manager.io/docs/configuration/

For information on how to configure cert-manager to automatically provision
Certificates for Ingress resources, take a look at the `ingress-shim`
documentation:

https://cert-manager.io/docs/usage/ingress/

This deployment can take some time, so please monitor the pods until they are ready:

rancher@rancher1:~$ kubectl get pods --namespace cert-manager
NAME                                       READY   STATUS    RESTARTS   AGE
cert-manager-75dbbd5d6-986cb               1/1     Running   0          49s
cert-manager-cainjector-85c559fd6c-td5nh   1/1     Running   0          49s
cert-manager-webhook-6c77dfbdb8-wqg9c      1/1     Running   0          49s
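
If you prefer not to poll manually, kubectl can also block until the cert-manager deployments report ready (just a convenience, not part of the original procedure):

kubectl -n cert-manager wait --for=condition=available deployment --all --timeout=300s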

For installing Rancher with Helm we need the Rancher repository:

rancher@rancher1:~$ helm repo add rancher-stable https://releases.rancher.com/server-charts/stable 
"rancher-stable" has been added to your repositories
rancher@rancher1:~$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "rancher-stable" chart repository
Update Complete. ⎈Happy Helming!⎈

Finally, install Rancher:

rancher@rancher1:~$ helm install rancher rancher-stable/rancher --version v2.5.6 --namespace cattle-system --set hostname=ranger.it.dbi-services.com
NAME: rancher
LAST DEPLOYED: Tue Mar  9 07:25:23 2021
NAMESPACE: cattle-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Rancher Server has been installed.

NOTE: Rancher may take several minutes to fully initialize. Please standby while Certificates are being issued and Ingress comes up.

Check out our docs at https://rancher.com/docs/rancher/v2.x/en/

Browse to https://ranger.it.dbi-services.com

Happy Containering!

Wait for the deployment to complete:

rancher@rancher1:~$ kubectl get deployments --namespace cattle-system
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
rancher   3/3     3            3           117s
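
Alternatively, kubectl rollout status blocks until the Rancher deployment has finished rolling out (again just a convenience):

kubectl -n cattle-system rollout status deploy/rancher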

Get the Rancher endpoints:

rancher@rancher1:~$ kubectl -n cattle-system  get ep rancher -o wide
NAME      ENDPOINTS                                             AGE
rancher   10.42.0.11:80,10.42.1.9:80,10.42.2.8:80 + 3 more...   51m

Pointing your browser to one of the endpoints should bring you to the Rancher GUI:
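
If the hostname passed to Helm does not resolve in your environment, a quick workaround is an /etc/hosts entry on your workstation pointing it at one of the node IPs (the public IP below is one of the nodes used earlier and only serves as an example):

echo "3.64.193.173 ranger.it.dbi-services.com" | sudo tee -a /etc/hosts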

Set your password and options:

Ignore the following warning, this is just a playground:

… and you’re in:

The article Rancher, up and running, on EC2 – 3 – Rancher setup first appeared on the dbi services Blog.

Rancher on SLES 15 as a demo environment


If you followed the last posts (Rancher, up and running, on EC2 – 1 – One node, Rancher, up and running, on EC2 – 2 – Three nodes and Rancher, up and running, on EC2 – 3 – Rancher setup) about Rancher, you know how to set up Rancher in a highly available RKE cluster. While this is the way to go for production clusters, you might want to play with Rancher on a local VM, and there is a solution for this as well. In the previous posts I used Debian as the operating system, but as Rancher Labs was recently acquired by SUSE, we'll be using SLES 15 for the scope of this post: bring up a Rancher playground on a single VM using SLES 15.

As this is intended to be a step-by-step guide, we'll start from the very beginning: download the SLES 15 ISO from here (you'll need to create a free SUSE account for this). I downloaded “SLE-15-SP2-Online-x86_64-GM-Media1.iso”, a minimal ISO which fetches all required packages from the SUSE repository; I prefer this over downloading the full-blown ISO. No matter which virtualization product you use, go through the standard setup and opt for a minimal installation:

You should have received a registration code after you created your SUSE account and downloaded the ISO. This one needs to go here:

Once the installation has completed, log in, set a host name and update the system (which is actually already up to date, as we fetched all sources from the SUSE repository):

localhost:~ $ hostnamectl set-hostname sles15ranger
localhost:~ $ zypper update
Refreshing service 'Basesystem_Module_15_SP2_x86_64'.
...
Reading installed packages...
Nothing to do.
localhost:~ $ 

Very much the same as in the previous posts, install a supported version of Docker:

sles15ranger:~ $ curl https://releases.rancher.com/install-docker/19.03.sh | sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 17251  100 17251    0     0  84151      0 --:--:-- --:--:-- --:--:-- 84151

Either your platform is not easily detectable or is not supported by this
installer script.
Please visit the following URL for more detailed installation instructions:

https://docs.docker.com/engine/installation/

Ok, the official script from Rancher to install Docker is not working on SLES 15. Let’s try to find Docker in the official SUSE repositories:

sles15ranger:~ $ zypper search docker
Refreshing service 'Basesystem_Module_15_SP2_x86_64'.
Refreshing service 'SUSE_Linux_Enterprise_Server_15_SP2_x86_64'.
Refreshing service 'Server_Applications_Module_15_SP2_x86_64'.
Loading repository data...
Reading installed packages...

S | Name       | Summary                        | Type
--+------------+--------------------------------+--------
  | ovn-docker | Docker network plugins for OVN | package


For an extended search including not yet activated remote resources you may run 'zypper
search-packages' at any time.
Do you want to run 'zypper search-packages' now? [yes/no/always/never] (no): yes

Following packages were found in following modules:

Package                       Module or Repository                                                     SUSEConnect Activation Command                                  
----------------------------  -----------------------------------------------------------------------  ----------------------------------------------------------------
cilium-docker                 SUSE Package Hub (PackageHub/15.2/x86_64)                                SUSEConnect --product PackageHub/15.2/x86_64                    
containment-rpm-docker        SUSE Package Hub (PackageHub/15.2/x86_64)                                SUSEConnect --product PackageHub/15.2/x86_64                    
docker                        Containers Module (sle-module-containers/15.2/x86_64)                    SUSEConnect --product sle-module-containers/15.2/x86_64         
docker-bash-completion        Containers Module (sle-module-containers/15.2/x86_64)                    SUSEConnect --product sle-module-containers/15.2/x86_64         
docker-bench-security         SUSE Package Hub (PackageHub/15.2/x86_64)                                SUSEConnect --product PackageHub/15.2/x86_64                    
docker-debuginfo              Containers Module (sle-module-containers/15.2/x86_64)                    SUSEConnect --product sle-module-containers/15.2/x86_64         
docker-distribution-registry  SUSE Package Hub (PackageHub/15.2/x86_64)                                SUSEConnect --product PackageHub/15.2/x86_64                    
docker-img-store-setup        Public Cloud Module (sle-module-public-cloud/15.2/x86_64)                SUSEConnect --product sle-module-public-cloud/15.2/x86_64       
docker-libnetwork             Containers Module (sle-module-containers/15.2/x86_64)                    SUSEConnect --product sle-module-containers/15.2/x86_64         
docker-libnetwork-debuginfo   Containers Module (sle-module-containers/15.2/x86_64)                    SUSEConnect --product sle-module-containers/15.2/x86_64         
docker-machine-driver-kvm2    SUSE Package Hub (PackageHub/15.2/x86_64)                                SUSEConnect --product PackageHub/15.2/x86_64                    
docker-runc                   Containers Module (sle-module-containers/15.2/x86_64)                    SUSEConnect --product sle-module-containers/15.2/x86_64         
docker-runc-debuginfo         Containers Module (sle-module-containers/15.2/x86_64)                    SUSEConnect --product sle-module-containers/15.2/x86_64         
kiwi-image-docker-requires    SUSE Package Hub (PackageHub/15.2/x86_64)                                SUSEConnect --product PackageHub/15.2/x86_64                    
ovn-docker                    Server Applications Module (sle-module-server-applications/15.2/x86_64)  SUSEConnect --product sle-module-server-applications/15.2/x86_64
python2-docker                SUSE Package Hub (PackageHub/15.2/x86_64)                                SUSEConnect --product PackageHub/15.2/x86_64                    
python2-docker-compose        SUSE Package Hub (PackageHub/15.2/x86_64)                                SUSEConnect --product PackageHub/15.2/x86_64                    
python2-dockerpty             SUSE Package Hub (PackageHub/15.2/x86_64)                                SUSEConnect --product PackageHub/15.2/x86_64                    
python2-docker-pycreds        SUSE Package Hub (PackageHub/15.2/x86_64)                                SUSEConnect --product PackageHub/15.2/x86_64                    
python3-docker                SUSE Package Hub (PackageHub/15.2/x86_64)                                SUSEConnect --product PackageHub/15.2/x86_64                    
python3-docker-compose        SUSE Package Hub (PackageHub/15.2/x86_64)                                SUSEConnect --product PackageHub/15.2/x86_64                    
python3-dockerpty             SUSE Package Hub (PackageHub/15.2/x86_64)                                SUSEConnect --product PackageHub/15.2/x86_64                    
python3-docker-pycreds        SUSE Package Hub (PackageHub/15.2/x86_64)                                SUSEConnect --product PackageHub/15.2/x86_64                    
rubygem-docker-api            SUSE Package Hub (PackageHub/15.2/x86_64)                                SUSEConnect --product PackageHub/15.2/x86_64                    
rubygem-docker-api-doc        SUSE Package Hub (PackageHub/15.2/x86_64)                                SUSEConnect --product PackageHub/15.2/x86_64                    
rubygem-docker-api-testsuite  SUSE Package Hub (PackageHub/15.2/x86_64)                                SUSEConnect --product PackageHub/15.2/x86_64                    
yast2-docker                  SUSE Package Hub (PackageHub/15.2/x86_64)                                SUSEConnect --product PackageHub/15.2/x86_64                    
zypper-docker                 Containers Module (sle-module-containers/15.2/x86_64)                    SUSEConnect --product sle-module-containers/15.2/x86_64         
zypper-docker-debuginfo       Containers Module (sle-module-containers/15.2/x86_64)                    SUSEConnect --product sle-module-containers/15.2/x86_64         
zypper-docker-debugsource     Containers Module (sle-module-containers/15.2/x86_64)                    SUSEConnect --product sle-module-containers/15.2/x86_64         

To activate the respective module or product, use SUSEConnect --product.
Use SUSEConnect --help for more details.

Register the correct module:

sles15ranger:~ $ SUSEConnect --product sle-module-containers/15.2/x86_64
Registering system to SUSE Customer Center

Updating system details on https://scc.suse.com ...

Activating sle-module-containers 15.2 x86_64 ...
-> Adding service to system ...
-> Installing release package ...

Successfully registered system

… and install Docker:

sles15ranger:~ $ zypper install docker
Refreshing service 'Basesystem_Module_15_SP2_x86_64'.
Refreshing service 'Containers_Module_15_SP2_x86_64'.
Refreshing service 'SUSE_Linux_Enterprise_Server_15_SP2_x86_64'.
...
    dracut:  root=UUID=a836ce9e-5187-4b7f-9b57-dab7061a9fc9 rootfstype=btrfs rootflags=rw,relatime,space_cache,subvolid=267,subvol=/@/.snapshots/1/snapshot,subvol=@/.snapshots/1/snapshot
    dracut: *** Creating image file '/boot/initrd-5.3.18-24.52-default' ***
    dracut: *** Creating initramfs image file '/boot/initrd-5.3.18-24.52-default' done ***

Executing %posttrans scripts ...........................................................................................................................................................................................................................................[done]
sles15ranger:~ $ rpm -qa |grep -i docker
docker-runc-1.0.0rc10+gitr3981_dc9208a3303f-6.45.3.x86_64
docker-19.03.15_ce-6.46.1.x86_64
docker-libnetwork-0.7.0.1+gitr2908_55e924b8a842-4.31.1.x86_64
docker-bash-completion-19.03.15_ce-6.46.1.noarch
sles15ranger:~ # docker --version
Docker version 19.03.15, build 99e3ed89195c

From now on it is basically the same as in the last posts: create the group and the user, and grant sudo permissions:

sles15ranger:~ $ groupadd rancher
sles15ranger:~ $ useradd -g rancher -G docker -m -s /bin/bash rancher
sles15ranger:~ $ zypper install -y sudo
sles15ranger:~ $ echo "rancher ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
sles15ranger:~ $ sudo su - rancher
rancher@sles15ranger:~> id -a
uid=1000(rancher) gid=1000(rancher) groups=1000(rancher),477(docker)
rancher@sles15ranger:~> sudo ls /
bin  boot  dev  etc  home  lib  lib64  mnt  opt  proc  root  run  sbin  selinux  srv  sys  tmp  usr  var

Enable the Docker service:

rancher@sles15ranger:~> systemctl list-unit-files | grep -i docker
docker.service                                                         disabled 
rancher@sles15ranger:~> sudo systemctl enable docker.service
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
rancher@sles15ranger:~> sudo systemctl start docker.service
rancher@sles15ranger:~> sudo systemctl status docker.service
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2021-03-10 17:22:38 CET; 9s ago
     Docs: http://docs.docker.com
 Main PID: 11647 (dockerd)
    Tasks: 18
   CGroup: /system.slice/docker.service
           ├─11647 /usr/bin/dockerd --add-runtime oci=/usr/sbin/docker-runc
           └─11670 docker-containerd --config /var/run/docker/containerd/containerd.toml --log-level warn

Mar 10 17:22:37 sles15ranger systemd[1]: Starting Docker Application Container Engine...
Mar 10 17:22:37 sles15ranger dockerd[11647]: time="2021-03-10T17:22:37+01:00" level=info msg="SUSE:secrets :: enabled"
Mar 10 17:22:37 sles15ranger dockerd[11647]: time="2021-03-10T17:22:37.235279401+01:00" level=warning msg="failed to load plugin io.containerd.snapshotter.v>
Mar 10 17:22:37 sles15ranger dockerd[11647]: time="2021-03-10T17:22:37.237574049+01:00" level=warning msg="could not use snapshotter devmapper in metadata p>
Mar 10 17:22:37 sles15ranger dockerd[11647]: time="2021-03-10T17:22:37.482542178+01:00" level=warning msg="Your kernel does not support swap memory limit"
Mar 10 17:22:37 sles15ranger dockerd[11647]: time="2021-03-10T17:22:37.483274269+01:00" level=warning msg="Your kernel does not support cgroup rt period"
Mar 10 17:22:37 sles15ranger dockerd[11647]: time="2021-03-10T17:22:37.483318836+01:00" level=warning msg="Your kernel does not support cgroup rt runtime"
Mar 10 17:22:37 sles15ranger dockerd[11647]: time="2021-03-10T17:22:37.483337545+01:00" level=warning msg="Your kernel does not support cgroup blkio weight"
Mar 10 17:22:37 sles15ranger dockerd[11647]: time="2021-03-10T17:22:37.483353569+01:00" level=warning msg="Your kernel does not support cgroup blkio weight_>
Mar 10 17:22:38 sles15ranger systemd[1]: Started Docker Application Container Engine.
rancher@sles15ranger:~> 

Bring up the Rancher container:

rancher@sles15ranger:~> sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 --privileged rancher/rancher
Unable to find image 'rancher/rancher:latest' locally
latest: Pulling from rancher/rancher
92dc2a97ff99: Pull complete 
be13a9d27eb8: Pull complete 
c8299583700a: Pull complete 
ae230727f130: Pull complete 
e2a418ceec64: Pull complete 
41c75955621d: Pull complete 
a25218d04df4: Pull complete 
64cf9593a3b1: Pull complete 
7f2a7535acb4: Pull complete 
2a47ce145a9a: Pull complete 
c70b3a16811c: Pull complete 
2e96fb0520ed: Pull complete 
1994015c7fb0: Pull complete 
51f27cd739d1: Pull complete 
71a5f7388eaf: Pull complete 
5b5f2e14777f: Pull complete 
01c27c5d80ce: Pull complete 
e345527b0efa: Pull complete 
6100bdb86846: Pull complete 
Digest: sha256:736b2357df459f53a97ec8e31d3d8400575671a72faa232e61f222a1e09969f2
Status: Downloaded newer image for rancher/rancher:latest
rancher@sles15ranger:~> docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS                                      NAMES
de177147eaa1        rancher/rancher     "entrypoint.sh"     50 seconds ago      Up 48 seconds       0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp   epic_engelbart
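
If the GUI does not show up right away, following the container logs usually tells you what Rancher is still busy with (epic_engelbart is the auto-generated container name from the docker ps output above):

docker logs -f epic_engelbart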

Point your browser to the IP address of your VM and confirm the warning:


Happy playing:

The article Rancher on SLES 15 as a demo environment first appeared on the dbi services Blog.

AWS: PostgreSQL on Graviton2 with newer GCC


By Franck Pachot

.
In the previous post I ran PostgreSQL on AWS m6gd.2xlarge (ARM Graviton2 processor).
I didn't specify the compilation options, and this post gives more details following this feedback:

First, the PostgreSQL ./configure correctly detected ARM and compiled with the following flag: -march=armv8-a+crc

for i in /usr/local/pgsql/bin/postgres ; do objdump -d "$i" | awk '/:$/{w=$2}/\t(ldxr|ldaxr|stxr|stlxr)\t/{printf "%-27s %-40s %-40s %-60s\n","(load and store exclusives)",$3,w,f}' f="$i" ; done | sort | uniq -c | sort -rn
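
Coming back to the -march=armv8-a+crc flag mentioned above: a quick cross-check, assuming the source tree was built under ./postgres, is to grep the generated Makefile.global for the ARMv8 CRC flags configure picked up:

grep -i armv8 postgres/src/Makefile.global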

Then I followed the information in https://github.com/aws/aws-graviton-getting-started/blob/master/c-c++.md
I can check from the compiled objects which ones use Armv8.2 low-cost atomic operations (Large-System Extensions).


for i in $(find postgres/src/backend -name "*.o") ; do objdump -d "$i" | awk '/:$/{w=$2}/aarch64_(cas|casp|swp|ldadd|stadd|ldclr|stclr|ldeor|steor|ldset|stset|ldsmax|stsmax|ldsmin|stsmin|ldumax|stumax|ldumin|stumin)/{printf "%-27s %-20s %-30s %-60s\n","(LSE instructions)",$NF,w,f}' f="$i" ; done | sort | uniq -c | sort -rnk1,4


      8 (LSE instructions)          <__aarch64_swp4_acq> <StartupXLOG>:                 postgres/src/backend/access/transam/xlog.o
      7 (LSE instructions)          <__aarch64_swp4_acq> <BitmapHeapNext>:              postgres/src/backend/executor/nodeBitmapHeapscan.o
      6 (LSE instructions)          <__aarch64_ldclr4_acq_rel> <LWLockDequeueSelf>:           postgres/src/backend/storage/lmgr/lwlock.o
      6 (LSE instructions)          <__aarch64_cas8_acq_rel> <shm_mq_send_bytes>:           postgres/src/backend/storage/ipc/shm_mq.o
      5 (LSE instructions)          <__aarch64_swp4_acq> <WalReceiverMain>:             postgres/src/backend/replication/walreceiver.o
      5 (LSE instructions)          <__aarch64_cas8_acq_rel> <shm_mq_receive_bytes.isra.0>: postgres/src/backend/storage/ipc/shm_mq.o
      4 (LSE instructions)          <__aarch64_swp4_acq> <ProcessRepliesIfAny>:         postgres/src/backend/replication/walsender.o
      4 (LSE instructions)          <__aarch64_swp4_acq> <hash_search_with_hash_value>: postgres/src/backend/utils/hash/dynahash.o
      4 (LSE instructions)          <__aarch64_swp4_acq> <copy_replication_slot>:       postgres/src/backend/replication/slotfuncs.o
      4 (LSE instructions)          <__aarch64_ldadd4_acq_rel> <parallel_vacuum_index>:       postgres/src/backend/access/heap/vacuumlazy.o
      4 (LSE instructions)          <__aarch64_cas4_acq_rel> <LWLockAcquire>:               postgres/src/backend/storage/lmgr/lwlock.o
      3 (LSE instructions)          <__aarch64_swp4_acq> <xlog_redo>:                   postgres/src/backend/access/transam/xlog.o
      3 (LSE instructions)          <__aarch64_swp4_acq> <XLogInsertRecord>:            postgres/src/backend/access/transam/xlog.o
      3 (LSE instructions)          <__aarch64_swp4_acq> <SaveSlotToPath>:              postgres/src/backend/replication/slot.o
      3 (LSE instructions)          <__aarch64_swp4_acq> <RequestCheckpoint>:           postgres/src/backend/postmaster/checkpointer.o
      3 (LSE instructions)          <__aarch64_swp4_acq> <LogicalRepSyncTableStart>:    postgres/src/backend/replication/logical/tablesync.o
      3 (LSE instructions)          <__aarch64_swp4_acq> <LogicalConfirmReceivedLocation>: postgres/src/backend/replication/logical/logical.o
      3 (LSE instructions)          <__aarch64_swp4_acq> <InvalidateObsoleteReplicationSlots>: postgres/src/backend/replication/slot.o
      3 (LSE instructions)          <__aarch64_swp4_acq> <CreateInitDecodingContext>:   postgres/src/backend/replication/logical/logical.o
      3 (LSE instructions)          <__aarch64_swp4_acq> <CreateCheckPoint>:            postgres/src/backend/access/transam/xlog.o
      3 (LSE instructions)          <__aarch64_swp4_acq> <CheckpointerMain>:            postgres/src/backend/postmaster/checkpointer.o
      3 (LSE instructions)          <__aarch64_ldclr4_acq_rel> <LWLockQueueSelf>:             postgres/src/backend/storage/lmgr/lwlock.o
      3 (LSE instructions)          <__aarch64_ldadd4_acq_rel> <tbm_prepare_shared_iterate>:  postgres/src/backend/nodes/tidbitmap.o
      3 (LSE instructions)          <__aarch64_ldadd4_acq_rel> <tbm_free_shared_area>:        postgres/src/backend/nodes/tidbitmap.o
      3 (LSE instructions)          <__aarch64_cas8_acq_rel> <ProcessProcSignalBarrier>:    postgres/src/backend/storage/ipc/procsignal.o
      3 (LSE instructions)          <__aarch64_cas8_acq_rel> <ExecParallelHashIncreaseNumBatches>: postgres/src/backend/executor/nodeHash.o
      2 (LSE instructions)          <__aarch64_swp4_acq> <XLogWrite>:                   postgres/src/backend/access/transam/xlog.o
      2 (LSE instructions)          <__aarch64_swp4_acq> <XLogSendPhysical>:            postgres/src/backend/replication/walsender.o
      2 (LSE instructions)          <__aarch64_swp4_acq> <XLogBackgroundFlush>:         postgres/src/backend/access/transam/xlog.o
      2 (LSE instructions)          <__aarch64_swp4_acq> <WalRcvStreaming>:             postgres/src/backend/replication/walreceiverfuncs.o
      2 (LSE instructions)          <__aarch64_swp4_acq> <WalRcvRunning>:               postgres/src/backend/replication/walreceiverfuncs.o
      2 (LSE instructions)          <__aarch64_swp4_acq> <WalRcvDie>:                   postgres/src/backend/replication/walreceiver.o
      2 (LSE instructions)          <__aarch64_swp4_acq> <TransactionIdLimitedForOldSnapshots>: postgres/src/backend/utils/time/snapmgr.o
      2 (LSE instructions)          <__aarch64_swp4_acq> <StrategyGetBuffer>:           postgres/src/backend/storage/buffer/freelist.o
      2 (LSE instructions)          <__aarch64_swp4_acq> <shm_mq_wait_internal>:        postgres/src/backend/storage/ipc/shm_mq.o
      2 (LSE instructions)          <__aarch64_swp4_acq> <ReplicationSlotReserveWal>:   postgres/src/backend/replication/slot.o
      2 (LSE instructions)          <__aarch64_swp4_acq> <ReplicationSlotRelease>:      postgres/src/backend/replication/slot.o
      2 (LSE instructions)          <__aarch64_swp4_acq> <ProcKill>:                    postgres/src/backend/storage/lmgr/proc.o
      2 (LSE instructions)          <__aarch64_swp4_acq> <process_syncing_tables>:      postgres/src/backend/replication/logical/tablesync.o
      2 (LSE instructions)          <__aarch64_swp4_acq> <pg_get_replication_slots>:    postgres/src/backend/replication/slotfuncs.o
      2 (LSE instructions)          <__aarch64_swp4_acq> <exec_replication_command>:    postgres/src/backend/replication/walsender.o
      2 (LSE instructions)          <__aarch64_swp4_acq> <CreateRestartPoint>:          postgres/src/backend/access/transam/xlog.o
      2 (LSE instructions)          <__aarch64_swp4_acq> <ConditionVariableBroadcast>:  postgres/src/backend/storage/lmgr/condition_variable.o
      2 (LSE instructions)          <__aarch64_swp4_acq> <BarrierArriveAndWait>:        postgres/src/backend/storage/ipc/barrier.o
      2 (LSE instructions)          <__aarch64_ldset4_acq_rel> <LWLockWaitListLock>:          postgres/src/backend/storage/lmgr/lwlock.o
      2 (LSE instructions)          <__aarch64_ldclr4_acq_rel> <LWLockWaitForVar>:            postgres/src/backend/storage/lmgr/lwlock.o
      2 (LSE instructions)          <__aarch64_ldclr4_acq_rel> <LWLockUpdateVar>:             postgres/src/backend/storage/lmgr/lwlock.o
      2 (LSE instructions)          <__aarch64_ldadd4_acq_rel> <vacuum_delay_point>:          postgres/src/backend/commands/vacuum.o
      2 (LSE instructions)          <__aarch64_ldadd4_acq_rel> <StrategyGetBuffer>:           postgres/src/backend/storage/buffer/freelist.o
      2 (LSE instructions)          <__aarch64_ldadd4_acq_rel> <LWLockRelease>:               postgres/src/backend/storage/lmgr/lwlock.o
      2 (LSE instructions)          <__aarch64_ldadd4_acq_rel> <lazy_parallel_vacuum_indexes>: postgres/src/backend/access/heap/vacuumlazy.o
      2 (LSE instructions)          <__aarch64_cas8_acq_rel> <WalReceiverMain>:             postgres/src/backend/replication/walreceiver.o
      2 (LSE instructions)          <__aarch64_cas8_acq_rel> <WaitForProcSignalBarrier>:    postgres/src/backend/storage/ipc/procsignal.o
      2 (LSE instructions)          <__aarch64_cas8_acq_rel> <shm_mq_receive>:              postgres/src/backend/storage/ipc/shm_mq.o
      2 (LSE instructions)          <__aarch64_cas8_acq_rel> <ResolveRecoveryConflictWithLock>: postgres/src/backend/storage/ipc/standby.o
      2 (LSE instructions)          <__aarch64_cas8_acq_rel> <ProcSignalInit>:              postgres/src/backend/storage/ipc/procsignal.o
      2 (LSE instructions)          <__aarch64_cas8_acq_rel> <ExecParallelHashTableInsert>: postgres/src/backend/executor/nodeHash.o
      2 (LSE instructions)          <__aarch64_cas8_acq_rel> <ExecParallelHashTableInsertCurrentBatch>: postgres/src/backend/executor/nodeHash.o
      2 (LSE instructions)          <__aarch64_cas8_acq_rel> <ExecParallelHashIncreaseNumBuckets>: postgres/src/backend/executor/nodeHash.o
      2 (LSE instructions)          <__aarch64_cas4_acq_rel> <TransactionIdSetTreeStatus>:  postgres/src/backend/access/transam/clog.o
      2 (LSE instructions)          <__aarch64_cas4_acq_rel> <ProcArrayEndTransaction>:     postgres/src/backend/storage/ipc/procarray.o
      2 (LSE instructions)          <__aarch64_cas4_acq_rel> <LWLockAcquireOrWait>:         postgres/src/backend/storage/lmgr/lwlock.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <XLogWalRcvFlush.part.4>:      postgres/src/backend/replication/walreceiver.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <XLogSetReplicationSlotMinimumLSN>: postgres/src/backend/access/transam/xlog.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <XLogSetAsyncXactLSN>:         postgres/src/backend/access/transam/xlog.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <XLogSendLogical>:             postgres/src/backend/replication/walsender.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <XLogPageRead>:                postgres/src/backend/access/transam/xlog.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <XLogNeedsFlush>:              postgres/src/backend/access/transam/xlog.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <XLogGetLastRemovedSegno>:     postgres/src/backend/access/transam/xlog.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <XLogFlush>:                   postgres/src/backend/access/transam/xlog.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <worker_freeze_result_tape>:   postgres/src/backend/utils/sort/tuplesort.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <WalSndWakeup>:                postgres/src/backend/replication/walsender.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <WalSndWaitStopping>:          postgres/src/backend/replication/walsender.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <WalSndSetState>:              postgres/src/backend/replication/walsender.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <WalSndRqstFileReload>:        postgres/src/backend/replication/walsender.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <WalSndKill>:                  postgres/src/backend/replication/walsender.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <WalSndInitStopping>:          postgres/src/backend/replication/walsender.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <WalRcvForceReply>:            postgres/src/backend/replication/walreceiver.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <WaitXLogInsertionsToFinish>:  postgres/src/backend/access/transam/xlog.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <UpdateMinRecoveryPoint.part.10>: postgres/src/backend/access/transam/xlog.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <tuplesort_performsort>:       postgres/src/backend/utils/sort/tuplesort.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <tuplesort_begin_common>:      postgres/src/backend/utils/sort/tuplesort.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <table_block_parallelscan_startblock_init>: postgres/src/backend/access/table/tableam.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <SyncRepInitConfig>:           postgres/src/backend/replication/syncrep.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <SyncRepGetCandidateStandbys>: postgres/src/backend/replication/syncrep.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <StrategySyncStart>:           postgres/src/backend/storage/buffer/freelist.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <StrategyNotifyBgWriter>:      postgres/src/backend/storage/buffer/freelist.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <StrategyFreeBuffer>:          postgres/src/backend/storage/buffer/freelist.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <SnapshotTooOldMagicForTest>:  postgres/src/backend/utils/time/snapmgr.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <s_lock>:                      postgres/src/backend/storage/lmgr/s_lock.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <SIInsertDataEntries>:         postgres/src/backend/storage/ipc/sinvaladt.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <SIGetDataEntries>:            postgres/src/backend/storage/ipc/sinvaladt.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <ShutdownWalRcv>:              postgres/src/backend/replication/walreceiverfuncs.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <shm_toc_insert>:              postgres/src/backend/storage/ipc/shm_toc.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <shm_toc_freespace>:           postgres/src/backend/storage/ipc/shm_toc.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <shm_toc_allocate>:            postgres/src/backend/storage/ipc/shm_toc.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <shm_mq_set_sender>:           postgres/src/backend/storage/ipc/shm_mq.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <shm_mq_set_receiver>:         postgres/src/backend/storage/ipc/shm_mq.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <shm_mq_sendv>:                postgres/src/backend/storage/ipc/shm_mq.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <shm_mq_get_sender>:           postgres/src/backend/storage/ipc/shm_mq.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <shm_mq_get_receiver>:         postgres/src/backend/storage/ipc/shm_mq.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <shm_mq_detach_internal>:      postgres/src/backend/storage/ipc/shm_mq.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <ShmemAllocRaw>:               postgres/src/backend/storage/ipc/shmem.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <SharedFileSetOnDetach>:       postgres/src/backend/storage/file/sharedfileset.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <SharedFileSetAttach>:         postgres/src/backend/storage/file/sharedfileset.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <SetWalWriterSleeping>:        postgres/src/backend/access/transam/xlog.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <SetRecoveryPause>:            postgres/src/backend/access/transam/xlog.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <SetPromoteIsTriggered>:       postgres/src/backend/access/transam/xlog.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <SetOldSnapshotThresholdTimestamp>: postgres/src/backend/utils/time/snapmgr.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <RequestXLogStreaming>:        postgres/src/backend/replication/walreceiverfuncs.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <ReplicationSlotsDropDBSlots>: postgres/src/backend/replication/slot.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <ReplicationSlotsCountDBSlots>: postgres/src/backend/replication/slot.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <ReplicationSlotsComputeRequiredXmin>: postgres/src/backend/replication/slot.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <ReplicationSlotsComputeRequiredLSN>: postgres/src/backend/replication/slot.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <ReplicationSlotsComputeLogicalRestartLSN>: postgres/src/backend/replication/slot.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <ReplicationSlotPersist>:      postgres/src/backend/replication/slot.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <ReplicationSlotMarkDirty>:    postgres/src/backend/replication/slot.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <ReplicationSlotDropPtr>:      postgres/src/backend/replication/slot.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <ReplicationSlotCreate>:       postgres/src/backend/replication/slot.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <ReplicationSlotCleanup>:      postgres/src/backend/replication/slot.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <ReplicationSlotAcquireInternal>: postgres/src/backend/replication/slot.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <RemoveOldXlogFiles>:          postgres/src/backend/access/transam/xlog.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <RemoveLocalLock>:             postgres/src/backend/storage/lmgr/lock.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <RecoveryRestartPoint>:        postgres/src/backend/access/transam/xlog.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <RecoveryIsPaused>:            postgres/src/backend/access/transam/xlog.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <ReadRecord>:                  postgres/src/backend/access/transam/xlog.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <PublishStartupProcessInformation>: postgres/src/backend/storage/lmgr/proc.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <PromoteIsTriggered>:          postgres/src/backend/access/transam/xlog.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <ProcSendSignal>:              postgres/src/backend/storage/lmgr/proc.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <ProcessWalSndrMessage>:       postgres/src/backend/replication/walreceiver.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <PhysicalReplicationSlotNewXmin>: postgres/src/backend/replication/walsender.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <pg_stat_get_wal_senders>:     postgres/src/backend/replication/walsender.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <pg_stat_get_wal_receiver>:    postgres/src/backend/replication/walreceiver.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <pg_replication_slot_advance>: postgres/src/backend/replication/slotfuncs.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <ParallelWorkerReportLastRecEnd>: postgres/src/backend/access/transam/parallel.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <MaintainOldSnapshotTimeMapping>: postgres/src/backend/utils/time/snapmgr.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <LWLockNewTrancheId>:          postgres/src/backend/storage/lmgr/lwlock.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <LogicalIncreaseXminForSlot>:  postgres/src/backend/replication/logical/logical.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <LogicalIncreaseRestartDecodingForSlot>: postgres/src/backend/replication/logical/logical.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <lock_twophase_recover>:       postgres/src/backend/storage/lmgr/lock.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <LockRefindAndRelease>:        postgres/src/backend/storage/lmgr/lock.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <LockAcquireExtended>:         postgres/src/backend/storage/lmgr/lock.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <KnownAssignedXidsSearch>:     postgres/src/backend/storage/ipc/procarray.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <KnownAssignedXidsGetAndSetXmin>: postgres/src/backend/storage/ipc/procarray.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <KnownAssignedXidsAdd>:        postgres/src/backend/storage/ipc/procarray.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <KeepLogSeg>:                  postgres/src/backend/access/transam/xlog.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <InitWalSender>:               postgres/src/backend/replication/walsender.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <InitProcess>:                 postgres/src/backend/storage/lmgr/proc.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <InitAuxiliaryProcess>:        postgres/src/backend/storage/lmgr/proc.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <HotStandbyActive>:            postgres/src/backend/access/transam/xlog.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <HaveNFreeProcs>:              postgres/src/backend/storage/lmgr/proc.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <GetXLogWriteRecPtr>:          postgres/src/backend/access/transam/xlog.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <GetXLogReplayRecPtr>:         postgres/src/backend/access/transam/xlog.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <GetXLogInsertRecPtr>:         postgres/src/backend/access/transam/xlog.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <GetWalRcvFlushRecPtr>:        postgres/src/backend/replication/walreceiverfuncs.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <GetSnapshotCurrentTimestamp>: postgres/src/backend/utils/time/snapmgr.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <GetReplicationTransferLatency>: postgres/src/backend/replication/walreceiverfuncs.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <GetReplicationApplyDelay>:    postgres/src/backend/replication/walreceiverfuncs.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <GetRedoRecPtr>:               postgres/src/backend/access/transam/xlog.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <GetRecoveryState>:            postgres/src/backend/access/transam/xlog.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <GetOldSnapshotThresholdTimestamp>: postgres/src/backend/utils/time/snapmgr.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <GetLatestXTime>:              postgres/src/backend/access/transam/xlog.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <GetInsertRecPtr>:             postgres/src/backend/access/transam/xlog.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <GetFlushRecPtr>:              postgres/src/backend/access/transam/xlog.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <GetFakeLSNForUnloggedRel>:    postgres/src/backend/access/transam/xlog.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <GetCurrentChunkReplayStartTime>: postgres/src/backend/access/transam/xlog.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <FirstCallSinceLastCheckpoint>: postgres/src/backend/postmaster/checkpointer.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <element_alloc>:               postgres/src/backend/utils/hash/dynahash.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <do_pg_stop_backup>:           postgres/src/backend/access/transam/xlog.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <do_pg_start_backup>:          postgres/src/backend/access/transam/xlog.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <DecodingContextFindStartpoint>: postgres/src/backend/replication/logical/logical.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <ConditionVariableTimedSleep>: postgres/src/backend/storage/lmgr/condition_variable.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <ConditionVariableSignal>:     postgres/src/backend/storage/lmgr/condition_variable.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <ConditionVariablePrepareToSleep>: postgres/src/backend/storage/lmgr/condition_variable.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <ConditionVariableCancelSleep>: postgres/src/backend/storage/lmgr/condition_variable.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <ComputeXidHorizons>:          postgres/src/backend/storage/ipc/procarray.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <CheckXLogRemoved>:            postgres/src/backend/access/transam/xlog.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <CheckRecoveryConsistency.part.11>: postgres/src/backend/access/transam/xlog.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <_bt_parallel_seize>:          postgres/src/backend/access/nbtree/nbtree.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <_bt_parallel_scan_and_sort>:  postgres/src/backend/access/nbtree/nbtsort.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <btparallelrescan>:            postgres/src/backend/access/nbtree/nbtree.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <_bt_parallel_release>:        postgres/src/backend/access/nbtree/nbtree.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <_bt_parallel_done>:           postgres/src/backend/access/nbtree/nbtree.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <_bt_parallel_advance_array_keys>: postgres/src/backend/access/nbtree/nbtree.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <btbuild>:                     postgres/src/backend/access/nbtree/nbtsort.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <BarrierParticipants>:         postgres/src/backend/storage/ipc/barrier.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <BarrierDetach>:               postgres/src/backend/storage/ipc/barrier.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <BarrierAttach>:               postgres/src/backend/storage/ipc/barrier.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <BarrierArriveAndDetach>:      postgres/src/backend/storage/ipc/barrier.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <BarrierArriveAndDetachExceptLast>: postgres/src/backend/storage/ipc/barrier.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <AuxiliaryProcKill>:           postgres/src/backend/storage/lmgr/proc.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <AdvanceXLInsertBuffer>:       postgres/src/backend/access/transam/xlog.o
      1 (LSE instructions)          <__aarch64_swp4_acq> <AbortStrongLockAcquire>:      postgres/src/backend/storage/lmgr/lock.o
      1 (LSE instructions)          <__aarch64_ldset4_acq_rel> <ProcessProcSignalBarrier>:    postgres/src/backend/storage/ipc/procsignal.o
      1 (LSE instructions)          <__aarch64_ldset4_acq_rel> <LWLockWaitForVar>:            postgres/src/backend/storage/lmgr/lwlock.o
      1 (LSE instructions)          <__aarch64_ldset4_acq_rel> <LWLockQueueSelf>:             postgres/src/backend/storage/lmgr/lwlock.o
      1 (LSE instructions)          <__aarch64_ldset4_acq_rel> <LWLockDequeueSelf>:           postgres/src/backend/storage/lmgr/lwlock.o
      1 (LSE instructions)          <__aarch64_ldset4_acq_rel> <LWLockAcquire>:               postgres/src/backend/storage/lmgr/lwlock.o
      1 (LSE instructions)          <__aarch64_ldset4_acq_rel> <LockBufHdr>:                  postgres/src/backend/storage/buffer/bufmgr.o
      1 (LSE instructions)          <__aarch64_ldset4_acq_rel> <EmitProcSignalBarrier>:       postgres/src/backend/storage/ipc/procsignal.o
      1 (LSE instructions)          <__aarch64_ldclr4_acq_rel> <LWLockReleaseClearVar>:       postgres/src/backend/storage/lmgr/lwlock.o
      1 (LSE instructions)          <__aarch64_ldadd8_acq_rel> <table_block_parallelscan_nextpage>: postgres/src/backend/access/table/tableam.o
      1 (LSE instructions)          <__aarch64_ldadd8_acq_rel> <EmitProcSignalBarrier>:       postgres/src/backend/storage/ipc/procsignal.o
      1 (LSE instructions)          <__aarch64_ldadd4_acq_rel> <find_or_make_matching_shared_tupledesc>: postgres/src/backend/utils/cache/typcache.o
      1 (LSE instructions)          <__aarch64_ldadd4_acq_rel> <ExecParallelHashJoin>:        postgres/src/backend/executor/nodeHashjoin.o
      1 (LSE instructions)          <__aarch64_cas8_acq_rel> <table_block_parallelscan_reinitialize>: postgres/src/backend/access/table/tableam.o
      1 (LSE instructions)          <__aarch64_cas8_acq_rel> <ProcWakeup>:                  postgres/src/backend/storage/lmgr/proc.o
      1 (LSE instructions)          <__aarch64_cas8_acq_rel> <ProcSleep>:                   postgres/src/backend/storage/lmgr/proc.o
      1 (LSE instructions)          <__aarch64_cas8_acq_rel> <pg_stat_get_wal_receiver>:    postgres/src/backend/replication/walreceiver.o
      1 (LSE instructions)          <__aarch64_cas8_acq_rel> <InitProcess>:                 postgres/src/backend/storage/lmgr/proc.o
      1 (LSE instructions)          <__aarch64_cas8_acq_rel> <InitAuxiliaryProcess>:        postgres/src/backend/storage/lmgr/proc.o
      1 (LSE instructions)          <__aarch64_cas8_acq_rel> <GetWalRcvWriteRecPtr>:        postgres/src/backend/replication/walreceiverfuncs.o
      1 (LSE instructions)          <__aarch64_cas8_acq_rel> <GetLockStatusData>:           postgres/src/backend/storage/lmgr/lock.o
      1 (LSE instructions)          <__aarch64_cas8_acq_rel> <ExecParallelScanHashBucket>:  postgres/src/backend/executor/nodeHash.o
      1 (LSE instructions)          <__aarch64_cas8_acq_rel> <CleanupProcSignalState>:      postgres/src/backend/storage/ipc/procsignal.o
      1 (LSE instructions)          <__aarch64_cas4_acq_rel> <UnpinBuffer.constprop.11>:    postgres/src/backend/storage/buffer/bufmgr.o
      1 (LSE instructions)          <__aarch64_cas4_acq_rel> <StrategySyncStart>:           postgres/src/backend/storage/buffer/freelist.o
      1 (LSE instructions)          <__aarch64_cas4_acq_rel> <StrategyGetBuffer>:           postgres/src/backend/storage/buffer/freelist.o
      1 (LSE instructions)          <__aarch64_cas4_acq_rel> <ProcessProcSignalBarrier>:    postgres/src/backend/storage/ipc/procsignal.o
      1 (LSE instructions)          <__aarch64_cas4_acq_rel> <PinBuffer>:                   postgres/src/backend/storage/buffer/bufmgr.o
      1 (LSE instructions)          <__aarch64_cas4_acq_rel> <MarkBufferDirty>:             postgres/src/backend/storage/buffer/bufmgr.o
      1 (LSE instructions)          <__aarch64_cas4_acq_rel> <LWLockRelease>:               postgres/src/backend/storage/lmgr/lwlock.o
      1 (LSE instructions)          <__aarch64_cas4_acq_rel> <LWLockConditionalAcquire>:    postgres/src/backend/storage/lmgr/lwlock.o

So, this confirms that it was compiled with -march=armv8.2-a

for i in /usr/local/pgsql/bin/postgres $(find postgres/src/backend -name "*.o") ; do objdump -d "$i" | awk '/:$/{w=$2}/aarch64_(cas|casp|swp|ldadd|stadd|ldclr|stclr|ldeor|steor|ldset|stset|ldsmax|stsmax|ldsmin|stsmin|ldumax|stumax|ldumin|stumin)/{printf "%-27s %-40s %-40s %-60s\n","(LSE instructions)",$NF,w,f}/\t(ldxr|ldaxr|stxr|stlxr)\t/{printf "%-27s %-40s %-40s %-60s\n","(load and store exclusives)",$3,w,f}' f="$i" ; done | sort | uniq -c | sort -rn

      1 (load and store exclusives) stxr                                     <__aarch64_swp4_acq>:                    /usr/local/pgsql/bin/postgres
      1 (load and store exclusives) stlxr                                    <__aarch64_ldset4_acq_rel>:              /usr/local/pgsql/bin/postgres
      1 (load and store exclusives) stlxr                                    <__aarch64_ldclr4_acq_rel>:              /usr/local/pgsql/bin/postgres
      1 (load and store exclusives) stlxr                                    <__aarch64_ldadd8_acq_rel>:              /usr/local/pgsql/bin/postgres
      1 (load and store exclusives) stlxr                                    <__aarch64_ldadd4_acq_rel>:              /usr/local/pgsql/bin/postgres
      1 (load and store exclusives) stlxr                                    <__aarch64_cas8_acq_rel>:                /usr/local/pgsql/bin/postgres
      1 (load and store exclusives) stlxr                                    <__aarch64_cas4_acq_rel>:                /usr/local/pgsql/bin/postgres
      1 (load and store exclusives) ldaxr                                    <__aarch64_swp4_acq>:                    /usr/local/pgsql/bin/postgres
      1 (load and store exclusives) ldaxr                                    <__aarch64_ldset4_acq_rel>:              /usr/local/pgsql/bin/postgres
      1 (load and store exclusives) ldaxr                                    <__aarch64_ldclr4_acq_rel>:              /usr/local/pgsql/bin/postgres
      1 (load and store exclusives) ldaxr                                    <__aarch64_ldadd8_acq_rel>:              /usr/local/pgsql/bin/postgres
      1 (load and store exclusives) ldaxr                                    <__aarch64_ldadd4_acq_rel>:              /usr/local/pgsql/bin/postgres
      1 (load and store exclusives) ldaxr                                    <__aarch64_cas8_acq_rel>:                /usr/local/pgsql/bin/postgres
      1 (load and store exclusives) ldaxr                                    <__aarch64_cas4_acq_rel>:                /usr/local/pgsql/bin/postgres

This confirms that the PostgreSQL binary also contains load and store exclusives: they are the fallback path inside the __aarch64_* outline-atomics helpers, used when LSE is not available at run time.
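Whether the LSE path or this fallback is taken is decided at run time from the CPU features. As a quick check that the Graviton2 CPU actually exposes LSE, here is a small sketch I add for completeness (not part of the original build steps): look for the "atomics" flag reported by the kernel.

# "atomics" in the aarch64 Features line means the CPU implements the ARMv8.1 LSE instructions
grep -m1 '^Features' /proc/cpuinfo | tr ' ' '\n' | grep -x atomics
# lscpu reports the same flag, if it is installed
lscpu | grep -ow atomics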


[ec2-user@ip-172-31-11-116 ~]$ nm /usr/local/pgsql/bin/postgres | grep -E "aarch64(_have_lse_atomics)?"

00000000008fb460 t __aarch64_cas4_acq_rel
00000000008fb490 t __aarch64_cas8_acq_rel
0000000000bbe640 b __aarch64_have_lse_atomics
00000000008fb4f0 t __aarch64_ldadd4_acq_rel
00000000008fb580 t __aarch64_ldadd8_acq_rel
00000000008fb520 t __aarch64_ldclr4_acq_rel
00000000008fb550 t __aarch64_ldset4_acq_rel
00000000008fb4c0 t __aarch64_swp4_acq

This confirms that PostgreSQL has been compiled with -moutline-atomics: the __aarch64_* helpers are linked in from libgcc, and the __aarch64_have_lse_atomics flag is what they check at run time to choose between LSE and load/store exclusives.
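To see what -moutline-atomics does in isolation, here is a minimal sketch (my own toy example with hypothetical file names, not taken from the PostgreSQL build) that compiles a single atomic fetch-and-add two ways and greps the disassembly. It assumes a GCC that knows the flag (GCC 10+, or a distribution build with the backport such as the Amazon Linux 2 gcc above):

cat > atomic_demo.c <<'EOF'
/* one atomic fetch-and-add is enough to see which atomics strategy GCC picked */
#include <stdatomic.h>
atomic_int counter;
int bump(void) { return atomic_fetch_add(&counter, 1); }
EOF

# outline atomics: the atomic is compiled as a call into a libgcc helper,
# so expect a relocation against one of the __aarch64_ldadd* functions
gcc -O2 -moutline-atomics -c atomic_demo.c -o outline.o
objdump -dr outline.o | grep aarch64_ldadd

# LSE guaranteed by -march: the instruction is emitted inline, no helper call
gcc -O2 -march=armv8.2-a -c atomic_demo.c -o lse.o
objdump -d lse.o | grep ldadd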


[ec2-user@ip-172-31-11-116 ~]$ gcc --version
gcc (GCC) 7.3.1 20180712 (Red Hat 7.3.1-12)
Copyright (C) 2017 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

This is the GCC 7 I used to compile PostgreSQL; newer versions may have better optimisations for ARM.

Install the latest version of GCC (version 11, experimental)

Here is how I compiled the latest GCC available:


gcc --version
# build prerequisites: toolchain plus the GMP/MPFR/MPC development packages GCC needs
sudo yum -y install bzip2 git gcc gcc-c++ gmp-devel mpfr-devel libmpc-devel make flex bison
git clone https://github.com/gcc-mirror/gcc.git
cd gcc
# distclean only matters when rebuilding an already-configured tree; on a fresh clone there is nothing to clean
make distclean
./configure --enable-languages=c,c++
make
sudo make install

This basically gets the latest GCC from source, compiles it, and installs it (please remember this is a lab; use stable versions elsewhere).

[ec2-user@ip-172-31-38-254 ~]$ gcc --version
gcc (GCC) 11.0.1 20210309 (experimental)
Copyright (C) 2021 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

Here we are: gcc 11.0.1 20210309 (experimental)
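Before recompiling PostgreSQL with it, a quick sanity check that this new gcc is the one picked up and that it enables outline atomics by default on AArch64 (a short sketch, not from the original run; the exact option names can vary between GCC versions):

which gcc                      # the self-built compiler installs under /usr/local/bin
# dump the effective target options; GCC 10+ lists -moutline-atomics as enabled on AArch64
gcc -Q --help=target | grep -E 'outline-atomics|march|mcpu'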

PGIO LIOPS

I’m running the same PGIO workload as in the previous post:


Date: Wed Mar 10 14:39:38 UTC 2021
Database connect string: "pgio".
Shared buffers: 8500MB.
Testing 4 schemas with 1 thread(s) accessing 1024M (131072 blocks) of each schema.
Running iostat, vmstat and mpstat on current host--in background.
Launching sessions. 4 schema(s) will be accessed by 1 thread(s) each.
pg_stat_database stats:
          datname| blks_hit| blks_read|tup_returned|tup_fetched|tup_updated
BEFORE:  pgio    | 38262338086 |    562443 |  37644815538 | 37635763756 |          24
AFTER:   pgio    | 49691750429 |    562449 |  48890461241 | 48878858651 |          49
DBNAME:  pgio. 4 schemas, 1 threads(each). Run time: 3600 seconds. RIOPS >03174836<

This is a little higher than before: 3174836 RIOPS across 4 sessions is 793709 LIOPS / CPU, compared with the 780651 I had with GCC 7, but that is still lower than the 896280 I had on x86.

Of course, there can be more optimisations, as mentioned in the Graviton getting-started guide: https://github.com/aws/aws-graviton-getting-started/blob/master/c-c++.md
I’ll recompile with the recommended flags:

(
cd postgres
# flags recommended by the AWS Graviton getting-started guide:
#  -march=armv8.2-a+fp16+rcpc+dotprod+crypto   enable the Graviton2 instruction set extensions
#  -mtune=neoverse-n1                          tune for the Neoverse N1 cores used by Graviton2
#  -fsigned-char                               match the x86_64 default signedness of "char"
CFLAGS="-march=armv8.2-a+fp16+rcpc+dotprod+crypto -mtune=neoverse-n1 -fsigned-char" ./configure
make clean
make
make install
)

It didn’t make any difference in the PGIO run. Of course, this may change with a read-write workload with checksums enabled.
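For a later test, such a read-write variant could look like the sketch below. This is only an outline, not something I ran here; it assumes pgio's usual pgio.conf / runit.sh layout and a cluster re-initialised from scratch:

# re-create the cluster with data checksums (the data directory must be empty first)
/usr/local/pgsql/bin/initdb --data-checksums -D /usr/local/pgsql/data
# give pgio a write component: UPDATE_PCT is the percentage of updates per work unit
sed -i 's/^UPDATE_PCT=.*/UPDATE_PCT=20/' pgio.conf
sh ./runit.sh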

The article AWS: PostgreSQL on Graviton2 with newer GCC first appeared on the dbi services blog.
