
OTN Appreciation Day – tnsping


Tim Hall had the idea that as many people as possible would write a small blog post about their favorite Oracle feature and we would all post them on the same day. I do have a lot of favorite Oracle tools, and the one I chose today is: tnsping

tnsping tells you whether your connect string can be resolved and whether the listener that the connect string points to is available. In the end, it displays an estimate of the round trip time (in milliseconds) it takes to reach the Oracle Net service.

All in all, tnsping is very easy to use and that's why I love it; it is not as overloaded as e.g. crsctl. In fact, tnsping knows only two parameters, <address> and optionally <count>, as shown in the following example.

Usage: tnsping <address> [<count>]

To get the option list of tnsping, a few lines are enough. I don't need to scroll down several pages like e.g. for emctl. emctl is another one, besides crsctl, where you can spend a lifetime only reading the manual. No, I picked tnsping this time because I like the short option list.

Here we go … now I run one tnsping without and one with count.

oracle@oel001:/home/oracle/ [OCM121] tnsping RMAN
TNS Ping Utility for Linux: Version 12.1.0.2.0 - Production on 11-OCT-2016 14:28:14
Copyright (c) 1997, 2014, Oracle.  All rights reserved.
Used parameter files:
/u00/app/oracle/network/admin/sqlnet.ora
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = oel001)(PORT = 1521))) 
(CONNECT_DATA = (SERVICE_NAME = OCM121)))
OK (0 msec)

oracle@oel001:/home/oracle/ [OCM121] tnsping RMAN 5
TNS Ping Utility for Linux: Version 12.1.0.2.0 - Production on 11-OCT-2016 14:28:20
Copyright (c) 1997, 2014, Oracle.  All rights reserved.
Used parameter files:
/u00/app/oracle/network/admin/sqlnet.ora
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = oel001)(PORT = 1521))) 
(CONNECT_DATA = (SERVICE_NAME = OCM121)))
OK (0 msec)
OK (10 msec)
OK (0 msec)
OK (0 msec)
OK (0 msec)

But … wait a second … my RMAN connect string points to host oel001, but it should point to oel002. Let’s take a look in $ORACLE_HOME/network/admin/tnsnames.ora

oracle@oel001:/u00/app/oracle/ [OCM121] cat /u00/app/oracle/product/12.1.0.2/network/admin/tnsnames.ora
RMAN =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = oel002)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = OCM121)
    )
  )

It looks correct. So what is going on here? There are several possible explanations for this issue.

1.) You might have set the TNS_ADMIN environment variable, which points to a totally different directory
2.) Or your sqlnet.ora might point to an LDAP server first, which resolves the name
3.) Or a totally different tnsnames.ora file is taken into account, but which one?
4.) Or something totally different, e.g. a corrupt nscd, symlinks …
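
A quick way to rule out the first two possibilities is to check the environment and the name resolution order configured in sqlnet.ora (a small sketch, assuming the paths from the example above):

$ echo $TNS_ADMIN                 # is it set and pointing to an unexpected directory?
$ grep -i names.directory_path /u00/app/oracle/network/admin/sqlnet.ora   # is LDAP listed before TNSNAMES?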

For quite a long time now, Oracle has not been looking first in $ORACLE_HOME/network/admin/tnsnames.ora
to resolve the name. The tnsnames.ora search order is the following:

1.) $HOME/.tnsnames.ora    # yes, it looks up a hidden file in your home directory first
2.) /etc/tnsnames.ora    # then, a global tnsnames.ora in the /etc directory
3.) $ORACLE_HOME/network/admin/tnsnames.ora    # and last but not least, it looks it up in the $ORACLE_HOME/network/admin

To prove it, simply run a strace on your tnsping command and take a look at the trace file.

$ strace -o /tmp/tnsping.trc -f tnsping RMAN
$ cat /tmp/tnsping.trc | grep tnsnames

21919 access("/home/oracle/.tnsnames.ora", F_OK) = 0
21919 access("/etc/tnsnames.ora", F_OK) = -1 ENOENT (No such file or directory)
21919 access("/u00/app/oracle/product/12.1.0.2/network/admin/tnsnames.ora", F_OK) = -1 ENOENT (No such file or directory)
21919 stat("/home/oracle/.tnsnames.ora", {st_mode=S_IFREG|0644, st_size=173, ...}) = 0
21919 open("/home/oracle/.tnsnames.ora", O_RDONLY) = 3

Here we go … in my case, the "/home/oracle/.tnsnames.ora" was taken into account. Let's take a look.
Indeed, I found an entry in there.

oracle@oel001:/home/oracle/ [OCM121] cat /home/oracle/.tnsnames.ora
RMAN =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = oel001)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = OCM121)
    )
  )
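
Moving the hidden file out of the way and re-running tnsping should bring back the expected resolution (a sketch; keep a backup instead of deleting the file):

$ mv ~/.tnsnames.ora ~/.tnsnames.ora.bak
$ tnsping RMAN     # should now resolve via $ORACLE_HOME/network/admin/tnsnames.ora and point to oel002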

Have fun with tnsping.

Cheers,
William

 

 

 

This article OTN Appreciation Day – tnsping appeared first on Blog dbi services.


OTN Appreciation Day : OSWatcher Black Box (OSWBB)


In this post, I will present a useful and easy-to-use Oracle tool: OSWatcher.

What is it ?

OSWatcher Black Box (OSWBB), to use its full name, is a free Oracle tool which helps you diagnose performance issues on the OS side.
Of course, it will not solve the issue for you, but it gives you a picture of the system health at a given moment.
OSWBB supports multiple platforms (AIX, Solaris, HP-UX, Linux and Windows) and is installed by default on the Oracle Database Appliance (ODA).

How does it work ?

OSWatcher invokes OS utilities like vmstat, netstat, iostat, etc. by creating a "Data Collector" for each of them available on the system. The "Data Collectors" work as background processes that periodically collect the data provided by these different OS utilities.
Once collected, all the statistics are stored inside a common destination (the archive directory).

[Image: OSWatcher data collection overview]

Below is the content of the archive directory. As you can see there is a dedicated folder for each type of OS statistics collected :
oracle@srvtestoel7:/u01/app/oracle/product/oswbb/archive/ [JOCDB1] ll
total 36
-rw-r--r-- 1 oracle oinstall 1835 28 sept. 16:55 heartbeat
drwxr-xr-x 2 oracle oinstall 4096 28 sept. 16:52 oswifconfig
drwxr-xr-x 2 oracle oinstall 4096 28 sept. 16:52 oswiostat
drwxr-xr-x 2 oracle oinstall 4096 28 sept. 16:52 oswmeminfo
drwxr-xr-x 2 oracle oinstall 4096 28 sept. 16:52 oswmpstat
drwxr-xr-x 2 oracle oinstall 4096 28 sept. 16:52 oswnetstat
drwxr-xr-x 2 oracle oinstall 6 23 sept. 09:18 oswprvtnet
drwxr-xr-x 2 oracle oinstall 4096 28 sept. 16:52 oswps
drwxr-xr-x 2 oracle oinstall 6 23 sept. 09:18 oswslabinfo
drwxr-xr-x 2 oracle oinstall 4096 28 sept. 16:52 oswtop
drwxr-xr-x 2 oracle oinstall 4096 28 sept. 16:52 oswvmstat

Downloading

You can download OSWatcher from My Oracle Support – Doc ID 301137.1 (.tar file – 6Mb)

Installing

To install OSWatcher, you simply have to untar the downloaded file :
$ tar -xvf oswbb733.tar
All necessary files are stored in the oswbb folder.

Uninstalling

To remove OSWatcher from your server, you only have to :
– Stop all OSWatcher running processes
– Delete the oswbb folder
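
For example, assuming OSWatcher was extracted into /u01/app/oracle/product/oswbb as in the listings of this post, the cleanup could look like this:

$ cd /u01/app/oracle/product/oswbb
$ ./stopOSWbb.sh             # stop all OSWatcher background processes
$ cd .. && rm -rf oswbb      # remove the OSWatcher home, including the archive directory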

Starting

$ nohup ./OSWatcher.sh P1 P2 P3 P4

Parameters

– P1 = snapshot interval in seconds (default : 30 seconds)
– P2 = number of hours of archive data to store (default : 48 hours)
– P3 = name of a compress utility to compress each file automatically (default : none)
– P4 = alternate location to store the archive directory (default : oswbb/archive)

You can also set the UNIX environment variable oswbb_ARCHIVE_DEST to specify a non-default location.
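
For example (a sketch; the destination path is only an illustration):

$ export oswbb_ARCHIVE_DEST=/u01/app/oracle/oswbb_archive
$ nohup ./OSWatcher.sh 60 48 gzip &     # 60s snapshots, 48h retention, gzip compression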

Startup steps

Starting OSWatcher involves 4 steps:

  1. Check parameters
    $ ./OSWatcher.sh 60 24 gzip /tmp/oswbb/archive
    Info...Zip option IS specified.
    Info...OSW will use gzip to compress files.
    ...
  2. Discover OS utilities
    Testing for discovery of OS Utilities...
    VMSTAT found on your system.
    IOSTAT found on your system.
    MPSTAT found on your system.
    ...
    ...
    Discovery completed.
  3. Discover CPU count
    Testing for discovery of OS CPU COUNT
    oswbb is looking for the CPU COUNT on your system
    CPU COUNT will be used by oswbba to automatically look for cpu problems
    CPU COUNT found on your system.
    CPU COUNT = 1
  4. Data collection
    Data is stored in directory: /tmp/oswbb/archive
    Starting Data Collection...
    oswbb heartbeat:mar. sept. 13 22:03:33 CEST 2016
    oswbb heartbeat:mar. sept. 13 22:04:33 CEST 2016
    oswbb heartbeat:mar. sept. 13 22:05:33 CEST 2016

Check if OSWBB is running

$ ps -ef | grep OSWatcher | grep -v grep
oracle    8130     1  0 13:47 pts/0    00:00:33 /bin/sh ./OSWatcher.sh 5 48
oracle    8188  8130  0 13:47 pts/0    00:00:00 /bin/sh ./OSWatcherFM.sh 48 /u01/app/oracle/product/oswbb/archive

The OSWatcherFM.sh process is the file manager that deletes collected statistics once they have reached their retention.

Stopping

Run the stopOSWbb.sh script to stop all OSWatcher processes:
$ ./stopOSWbb.sh

Configure automatic startup

Oracle provides an RPM package to configure auto-start of OSWatcher when the system starts.
You can download it here : My Oracle Support – Doc ID 580513.1
Once downloaded, install the package (as root) :
$ rpm -ihv oswbb-service-7.2.0-1.noarch.rpm
Preparing...                ######################################### [100%]
   1:oswbb-service          ######################################### [100%]

You can adapt the following values in /usr/libexec/oswbb-service/oswbb-helper to define the parameters with which OSWatcher will auto-start:
OSW_HOME='/u01/app/oracle/product/oswbb/'
OSW_INTERVAL='10'
OSW_RETENTION='48'
OSW_USER='oracle'
OSW_COMPRESSION='gzip'
OSW_ARCHIVE='archive'

Start the service :
$ service oswbb start
Starting oswbb (via systemctl): [ OK ]

Check the service :
$ service oswbb status
OSWatcher is running.

Stop the service :
$ service oswbb stop
Stopping oswbb (via systemctl):  Warning: Unit file of oswbb.service changed on disk, 'systemctl daemon-reload' recommended.
[  OK  ]

Enable the service at system startup:
$/sbin/chkconfig oswbb on

Systemd commands (Linux 7) :
$ systemctl stop oswbb.service
$ systemctl start oswbb.service
$ systemctl status oswbb.service
$ systemctl enable oswbb.service

Inside the archive directory, one dedicated folder is created per type of collected statistics:
oracle@srvtestoel7:/u01/app/oracle/product/oswbb/archive/ [JOCDB1] ll
total 0
drwxr-xr-x 2 oracle oinstall 136 23 sept. 10:00 oswifconfig
drwxr-xr-x 2 oracle oinstall 132 23 sept. 10:00 oswiostat
drwxr-xr-x 2 oracle oinstall 134 23 sept. 10:00 oswmeminfo
drwxr-xr-x 2 oracle oinstall 132 23 sept. 10:00 oswmpstat
drwxr-xr-x 2 oracle oinstall 134 23 sept. 10:00 oswnetstat
drwxr-xr-x 2 oracle oinstall 6 23 sept. 09:18 oswprvtnet
drwxr-xr-x 2 oracle oinstall 124 23 sept. 10:00 oswps
drwxr-xr-x 2 oracle oinstall 6 23 sept. 09:18 oswslabinfo
drwxr-xr-x 2 oracle oinstall 126 23 sept. 10:00 oswtop
drwxr-xr-x 2 oracle oinstall 132 23 sept. 10:00 oswvmstat

In a following blog post, I'll present OSWatcher Black Box Analyzer (oswbba), a tool used to graphically analyze the collected data.

 

This article OTN Appreciation Day : OSWatcher Black Box (OSWBB) appeared first on Blog dbi services.

OTN Appreciation Day : OSWatcher Black Box Analyzer (OSWBBA)


Following my last blog post about OSWatcher, in this one I will present OSWatcher Black Box Analyzer (OSWBBA), the tool you can use to graphically display the data collected by OSWBB.
This tool is a Java utility and has existed since OSWatcher version 4.0.0. It allows you to create graphs and complete HTML reports containing the collected OS statistics.

OSWBBA requires no installation; it is embedded in the OSWatcher home directory.
To start the Analyzer, run oswbba.jar:
$ java -jar oswbba.jar -i ./archive
Starting OSW Analyzer V7.3.3
OSWatcher Analyzer Written by Oracle Center of Expertise
Copyright (c)  2014 by Oracle Corporation
Parsing Data. Please Wait...
Scanning file headers for version and platform info...
Parsing file srvtestoel7.it.dbi-services.com_iostat_16.07.25.2000.dat
...

The "-i" parameter indicates the OSWatcher archive directory and is mandatory.
Once launched, the main menu is displayed :
Enter 1 to Display CPU Process Queue Graphs
Enter 2 to Display CPU Utilization Graphs
Enter 3 to Display CPU Other Graphs
Enter 4 to Display Memory Graphs
Enter 5 to Display Disk IO Graphs
Enter 6 to Generate All CPU Gif Files
Enter 7 to Generate All Memory Gif Files
Enter 8 to Generate All Disk Gif Files
Enter L to Specify Alternate Location of Gif Directory
Enter T to Alter Graph Time Scale Only (Does not change analysis dataset)
Enter D to Return to Default Graph Time Scale
Enter R to Remove Currently Displayed Graphs
Enter A to Analyze Data
Enter S to Analyze Subset of Data(Changes analysis dataset including graph time scale)
Enter P to Generate A Profile
Enter X to Export Parsed Data to File
Enter Q to Quit Program

You must have an X Windows environment enabled to display the graphs.
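
If you run oswbba on a remote server, one simple option (assuming X11 forwarding is allowed by the SSH server) is to forward the display over SSH:

$ ssh -X oracle@srvtestoel7          # -X forwards X11 to your local display
$ echo $DISPLAY                      # must not be empty
$ java -jar oswbba.jar -i ./archive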

If you don't want to go through this menu every time you want to display a graph or generate a report, you can pass all of the above options to OSWBBA from the command line, for example:
$ java -jar oswbba.jar -i ./archive -4 -P last_crash

  • -i  : Specify the archive directory
  • -4 : Create memory graphs
  • -P : Create a profile called “last_crash”

Other options :

  • -6..8 : Same options as in the menu
  • -L : User specified location to place gif files
  • -A : Create a report
  • -B : Specify the start time to analyze (format Mon DD HH:MM:SS YYYY)
  • -E : Specify the end time to analyze  (format Mon DD HH:MM:SS YYYY)
  • -F : Specify a filename of a text file containing a list of options
    (all other options are ignored if -F is used)

Example :
$ java -jar oswbba.jar -i ./archive -6 -B Sep 23 09:25:00 2016 -E Sep 23 09:30:00 2016
Will start OSWatcher Analyzer with the following parameters :

  • Archive directory : $OSWatcher_HOME/archive
  • Generate all CPU GIF files
  • Time slot : September 23, 2016, from 09:25:00 to 09:30:00

Generated file :
[Image: generated CPU graph]

It's also possible to specify in a text file all the options you want to use and then run OSWBBA with the "-F" parameter:
$ cat input.txt
-P today_crash -B Sep 23 09:00:00 2016 -E Sep 23 11:00:00 2016
$ java -jar oswbba.jar -i ./archive -F input.txt

This will generate a complete HTML report (called “today_crash”) with all available graphs based on the statistics stored in the archive directory.

 

This article OTN Appreciation Day : OSWatcher Black Box Analyzer (OSWBBA) appeared first on Blog dbi services.

Mirantis OpenStack 9.0 installation using VirtualBox – part 1


In this series of blogs, I am going to give a quick overview of OpenStack and show how to install it using Mirantis.

“Mirantis is the #1 pure play OpenStack company. More customers rely on Mirantis than any other company to scale out OpenStack without vendor lock-in”
(source: https://www.openstack.org/marketplace/distros/distribution/mirantis/mirantis-openstack)

OpenStack is an open source Infrastructure as a Service (IaaS) platform and was born in July 2010 as a collaboration between NASA and Rackspace.

OpenStack is not monolithic but is composed of several projects. I am not going to detail all of them now, but here are the components I am going to install in the lab:

  • Horizon : OpenStack Dashboard
  • Keystone : handles all the authentication processes
  • Neutron : creates virtual networks
  • Nova :  heart of the OpenStack project, provides virtualization capabilities
  • Cinder: provides persistent storage to the instances
  • Glance:  provides ready operating systems to the virtual instances

[Image: OpenStack components overview]

 

Of course, there are many ways to install OpenStack :

  • manually  –> not recommended because very difficult to maintain
  • using a deployment tool like : Ansible, Puppet, Chef, etc ..
  • using a distribution like :
    • Mirantis –> which uses Fuel as automation tool
    • Red Hat –> which is based on  TripleO
    • Rackspace –> which uses Ansible
    • Canonical –> which uses Juju and MaaS among other tools
    • […]

Distributions are here to handle:

  • OpenStack’s lifecycle
  • Patches & Upgrades
  • Documentation
  • Bug fixing and so on..

In this series of blogs I am going to focus on Mirantis, which is one of the best ways to get a stable OpenStack up and running very quickly.
As said before, Mirantis uses Fuel (based on Puppet) as a deployment tool for OpenStack.

This is what the architecture of Fuel looks like:

 

[Image: Fuel architecture]

  • Web UI : provides the Fuel User Interface based on Nginx
  • Keystone : for the authentication process
  • PostgreSQL Database: stores the Fuel Master's information about the deployment of the Fuel slave nodes
  • Nailgun : is the heart of the Fuel project which basically converts the choices of the user into commands for the Astute workers
  • AMQP : is the message queue which Nailgun uses to give orders to the Astute workers
  • Astute : passes the node configuration to Cobbler and reboots the Fuel slave nodes to let Cobbler do its job
  • Cobbler :  installs the base Operating System on the Fuel slave nodes
  • MCollective : Orchestration tool for deploying Puppet via MCollective agents
  • MCollective agents: run on all Fuel slave nodes

 

Software requirements:
– VirtualBox 4.2.12 – 5.0.x
– VirtualBox Extension Pack (to enable PXE boot)
can be downloaded at: https://www.virtualbox.org/wiki/Downloads
– Mirantis 9.0 ISO and Mirantis VirtualBox scripts
can be downloaded from https://www.mirantis.com/how-to-install-openstack/

Hardware requirements:
– 64 bit  host operating system
– 8GB RAM at least
– 300GB+ Disk

 

1. Download the openstack/fuel-virtualbox project:

$ git clone https://github.com/openstack/fuel-virtualbox.git
Cloning into 'fuel-virtualbox'...
remote: Counting objects: 741, done.
remote: Total 741 (delta 0), reused 0 (delta 0), pack-reused 741
Receiving objects: 100% (741/741), 338.50 KiB | 0 bytes/s, done.
Resolving deltas: 100% (492/492), done.
Checking connectivity... done.


2. Go to the fuel-virtualbox directory and put the Mirantis OpenStack .ISO in the iso/ directory

$ cd fuel-virtualbox/
$ ls -l
total 104
drwx------ 1 sbe sbe 4096 Oct 4 11:14 actions
-rw------- 1 sbe sbe 1091 Jun 15 15:04 clean.sh
-rw------- 1 sbe sbe 7277 Oct 10 10:14 config.sh
drwx------ 1 sbe sbe 0 Oct 3 14:02 contrib
drwx------ 1 sbe sbe 0 Oct 3 14:02 drivers
-rw------- 1 sbe sbe 61122 Jun 15 15:04 dumpkeys.cache
drwx------ 1 sbe sbe 4096 Oct 4 10:44 functions
drwx------ 1 sbe sbe 0 Oct 10 10:11 iso
-rw------- 1 sbe sbe 653 Oct 4 10:40 launch_16GB.sh
-rw------- 1 sbe sbe 652 Jun 15 15:04 launch_8GB.sh
-rw------- 1 sbe sbe 1308 Jun 15 15:04 launch.sh
-rw------- 1 sbe sbe 1462 Jun 15 15:04 MAINTAINERS
-rw------- 1 sbe sbe 1939 Jun 15 15:04 README.md
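
The ISO itself does not appear in the listing yet; copying it into the iso/ directory is all that is needed (the source path below is just an example):

$ cp ~/Downloads/MirantisOpenStack-9.0.iso iso/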

You can see that there are two launch_X.sh files: one for 16 GB and one for 8 GB. For testing purposes I will use the launch_8GB.sh script. One important file here is config.sh, because it is where you set up the hardware configuration (RAM, disk, CPU) for the Fuel Master node and the Fuel slave nodes; you can have a look at it for more details. If you run a machine with 16 GB of RAM, you can use the "launch_16GB.sh" script instead.
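
For instance, a quick way to see which hardware settings config.sh will apply (the exact variable names may differ between versions of the scripts):

$ grep -in "cpu\|memory\|disk" config.sh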

By default, for 8 GB, the script will create 4 machines:

– one Fuel Master node with 2 GB RAM and 60 GB disk
– 3 Fuel slave nodes with 1.5 GB RAM and 3 disks of 65 GB each

So the lab will look like this:

[Image: lab architecture with the five networks]

 

  • PXE network :  used by the Fuel Master node to administer the Fuel slave nodes and install OpenStack
  • Management network :  for OpenStack services communication within the cloud
  • External network : to access the Internet
  • Private network : the inter-instances communication network within the OpenStack cloud
  • Storage network : used by instances to access the storage

3. Then launch the script :

$  ./launch_8GB.sh 
Prepare the host system...
Checking for 'dumpkeys.cache'... OK
Checking for 'expect'... OK
Checking for 'xxd'... OK
Checking for 'VBoxManage'... OK
Checking for VirtualBox Extension Pack... OK
Checking for VirtualBox iPXE firmware...SKIP
VirtualBox iPXE firmware is not found. Used standard firmware from the VirtualBox Extension Pack.
Checking for Mirantis OpenStack ISO image... OK
Going to use Mirantis OpenStack ISO file: iso/MirantisOpenStack-9.0.iso
Checking if SSH client installed... OK
Checking if ipconfig or ifconfig installed... OK
Done.


[Image: Fuel Master node installation finished]

Now the Fuel Master node is going to download a special Linux image.

Once the bootstrap image is downloaded, the Fuel slave nodes boot up with this image:

[Image: Fuel slave node booting on the bootstrap image]

This image sends to the Fuel Master all the hardware information about the Fuel slave nodes; this information is called "facts".

This is an important step because via this image the Fuel Master node will discover the slave nodes.

[Image: Fuel slave nodes discovered by the Fuel Master]

Slave nodes have been created. They will boot over PXE and get discovered by the master node.
To access master node, please point your browser to:

http://10.20.0.2:8000/

The default username and password is admin:admin

This concludes the first part of the blog. In the next blog, I will show you  the interface of Fuel and how to install OpenStack on the Fuel slave nodes.

 

This article Mirantis OpenStack 9.0 installation using VirtualBox – part 1 appeared first on Blog dbi services.

Mirantis OpenStack 9.0 installation using VirtualBox – part 2


The last blog ended with a successful installation of the Fuel Master.

In this blog, I will deploy OpenStack on the three Fuel slave nodes.

The first node will be the controller node and all these components run on it:

  • a MySQL database
  • RabbitMQ : the message broker that OpenStack uses for asynchronous communication between the OpenStack components
  • Keystone
  • Glance
  • OpenStack API’s
  • Neutron agents

There are several other components, but they are not used in this lab.

 

The second node will be the hypervisor node, named compute node in Mirantis; it will create and host the virtual instances created within the OpenStack cloud.

The third one will be the storage node and will provide persistent storage (LVMs) to the virtual instances.

 

Now let's connect to the Fuel Master node and see what's going on. The default password is r00tme (with two zeros).

$ ssh root@10.20.0.2
The authenticity of host '10.20.0.2 (10.20.0.2)' can't be established.
ECDSA key fingerprint is 20:56:7b:99:c4:2e:c4:f9:79:a8:d2:ff:4d:06:57:4d.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.20.0.2' (ECDSA) to the list of known hosts.
root@10.20.0.2's password:
Last login: Mon Oct 10 14:46:20 2016 from 10.20.0.1
[root@fuel ~]#

 

Let's check if all services are ready and running on the Fuel Master:

[root@fuel ~]# fuel-utils check_all | grep 'ready'
nailgun is ready.
ostf is ready.
cobbler is ready.
rabbitmq is ready.
postgres is ready.
astute is ready.
mcollective is ready.
nginx is ready.
keystone is ready.
rsyslog is ready.
rsync is ready.
[root@fuel ~]#

So all the services are running, let’s continue..

Where are the Fuel slave nodes ?

[root@fuel ~]# fuel2 node list
+----+------------------+----------+-------------+-------+-----------+-------------------+---------+---------------+--------+
| id | name | status | os_platform | roles | ip | mac | cluster | platform_name | online |
+----+------------------+----------+-------------+-------+-----------+-------------------+---------+---------------+--------+
| 2 | Untitled (85:69) | discover | ubuntu | [] | 10.20.0.4 | 08:00:27:cc:85:69 | None | None | True |
| 3 | Untitled (b0:77) | discover | ubuntu | [] | 10.20.0.3 | 08:00:27:35:b0:77 | None | None | True |
| 1 | Untitled (04:e8) | discover | ubuntu | [] | 10.20.0.5 | 08:00:27:80:04:e8 | None | None | True |
+----+------------------+----------+-------------+-------+-----------+-------------------+---------+---------------+--------+

Here they are! All of the Fuel slave nodes will run on Ubuntu (14.04). The Fuel slave nodes received an IP address (assigned by the Fuel Master) from the 10.20.0.0/24 range, which is the PXE network (see the last blog).

Now I want to access the Fuel Web Interface, which listens on port 8443 (https) and port 8000 (http). Let's see if the Fuel Master is listening on these ports:

[root@fuel ~]# netstat -plan | grep '8000\|8443'
tcp        0      0 0.0.0.0:8443            0.0.0.0:*               LISTEN      868/nginx: master p
tcp        0      0 0.0.0.0:8000            0.0.0.0:*               LISTEN      868/nginx: master p
[root@fuel ~]#

Type https://10.20.0.2:8443 in your browser. The username and password are both admin.

[Image: Fuel web interface login page]

Then click on Start Using Fuel. You can have a look at this Web Interface but I am going to use the Command Line Interface

Let's change the names of the nodes so as not to confuse them:

[root@fuel ~]# fuel2 node update --name Controller 3
numa_nodes is not found in the supplied data.
[root@fuel ~]# fuel2 node list
+----+------------------+----------+-------------+-------+-----------+-------------------+---------+---------------+--------+
| id | name             | status   | os_platform | roles | ip        | mac               | cluster | platform_name | online |
+----+------------------+----------+-------------+-------+-----------+-------------------+---------+---------------+--------+
|  3 | Controller       | discover | ubuntu      | []    | 10.20.0.3 | 08:00:27:35:b0:77 | None    | None          | True   |
|  1 | Untitled (04:e8) | discover | ubuntu      | []    | 10.20.0.5 | 08:00:27:80:04:e8 | None    | None          | True   |
|  2 | Untitled (85:69) | discover | ubuntu      | []    | 10.20.0.4 | 08:00:27:cc:85:69 | None    | None          | True   |
+----+------------------+----------+-------------+-------+-----------+-------------------+---------+---------------+--------+
[root@fuel ~]# fuel2 node update --name Compute 2
numa_nodes is not found in the supplied data.
[root@fuel ~]# fuel2 node update --name Storage 1
numa_nodes is not found in the supplied data.
[root@fuel ~]# fuel2 node list
+----+------------+----------+-------------+-------+-----------+-------------------+---------+---------------+--------+
| id | name       | status   | os_platform | roles | ip        | mac               | cluster | platform_name | online |
+----+------------+----------+-------------+-------+-----------+-------------------+---------+---------------+--------+
|  2 | Compute    | discover | ubuntu      | []    | 10.20.0.4 | 08:00:27:cc:85:69 | None    | None          | True   |
|  3 | Controller | discover | ubuntu      | []    | 10.20.0.3 | 08:00:27:35:b0:77 | None    | None          | True   |
|  1 | Storage    | discover | ubuntu      | []    | 10.20.0.5 | 08:00:27:80:04:e8 | None    | None          | True   |
+----+------------+----------+-------------+-------+-----------+-------------------+---------+---------------+--------+

The slave nodes are not members of any environment, so let's create one:

[root@fuel ~]# fuel2 env create -r 2 -nst vlan Mirantis_Test_Lab
+---------------+-------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+---------------+-------------------------------------------------------------------------------------------------------------------------------------------+
| id | 1 |
| status | new |
| fuel_version | 9.0 |
| name | Mirantis_Test_Lab |
| release_id | 2 |
| is_customized | False |
| changes | [{u'node_id': None, u'name': u'attributes'}, {u'node_id': None, u'name': u'networks'}, {u'node_id': None, u'name': u'vmware_attributes'}] |
+---------------+-------------------------------------------------------------------------------------------------------------------------------------------+

You can check in the Fuel Web Interface that one environment was created

The next step is to assign one role to each node. I want:

  • 1 controller node
  • 1 compute node
  • 1 storage node

Let’s list all the available roles in this release :

[root@fuel ~]# fuel role --release 2
name               
-------------------
compute-vmware     
compute            
cinder-vmware      
virt               
base-os            
controller         
ceph-osd           
ironic             
cinder             
cinder-block-device
mongo


We set a role for each slave node:

[root@fuel ~]# fuel node
id | status   | name       | cluster | ip        | mac               | roles | pending_roles | online | group_id
---+----------+------------+---------+-----------+-------------------+-------+---------------+--------+---------
 1 | discover | Storage    |         | 10.20.0.5 | 08:00:27:80:04:e8 |       |               |      1 |         
 3 | discover | Controller |         | 10.20.0.3 | 08:00:27:35:b0:77 |       |               |      1 |         
 2 | discover | Compute    |         | 10.20.0.4 | 08:00:27:cc:85:69 |       |               |      1 |         
[root@fuel ~]#
[root@fuel ~]# fuel node set --node 1 --role cinder  --env 1
Nodes [1] with roles ['cinder'] were added to environment 1
[root@fuel ~]# fuel node set --node 2 --role compute  --env 1
Nodes [2] with roles ['compute'] were added to environment 1
[root@fuel ~]# fuel node set --node 3 --role controller  --env 1
Nodes [3] with roles ['controller'] were added to environment 1
[root@fuel ~]#
[root@fuel ~]# fuel node
id | status   | name       | cluster | ip        | mac               | roles | pending_roles | online | group_id
---+----------+------------+---------+-----------+-------------------+-------+---------------+--------+---------
 1 | discover | Storage    |       1 | 10.20.0.5 | 08:00:27:80:04:e8 |       | cinder        |      1 |        1
 2 | discover | Compute    |       1 | 10.20.0.4 | 08:00:27:cc:85:69 |       | compute       |      1 |        1
 3 | discover | Controller |       1 | 10.20.0.3 | 08:00:27:35:b0:77 |       | controller    |      1 |        1
[root@fuel ~]#



Mirantis (via Fuel) provides the ability to check the network configuration. Go to the Environment tab / Networks / Connectivity Check and click Verify Networks.

[Image: network verification succeeded]

All seems good, so the deployment can be started:

[root@fuel ~]# fuel2 env deploy 1
Deployment task with id 5 for the environment 1 has been started.

 

Now it is time to wait because the deployment is going to take some time. Indeed, the Fuel Master node will install Ubuntu on the Fuel slave nodes and install the right OpenStack packages on the right node via Puppet. The installation can be followed via the Logs tab.

[Image: Ubuntu being installed on the slave nodes]

You can follow the installation via the Fuel Dashboard or via CLI

[root@fuel ~]# fuel task list
id | status  | name                               | cluster | progress | uuid                                
---+---------+------------------------------------+---------+----------+-------------------------------------
1  | ready   | verify_networks                    | 1       | 100      | 4fcff1ad-6b1e-4b00-bfae-b7bf904d15e6
2  | ready   | check_dhcp                         | 1       | 100      | 2c580b79-62e8-4de1-a8be-a265a26aa2a9
3  | ready   | check_repo_availability            | 1       | 100      | b8414b26-5173-4f0c-b387-255491dc6bf9
4  | ready   | check_repo_availability_with_setup | 1       | 100      | fac884ee-9d56-410e-8198-8499561ccbad
9  | running | deployment                         | 1       | 0        | e12c0dec-b5a5-48b4-a70c-3fb105f41096
5  | running | deploy                             | 1       | 3        | 4631113c-05e9-411f-a337-9910c9388477
8  | running | provision                          | 1       | 12       | c96a2952-c764-4928-bac0-ccbe50f6c

Fuel is installing OpenStack on the nodes :

[Image: OpenStack deployment in progress]

PS : If the deployment fails (something that can happen), do not hesitate to redeploy it.
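
Redeploying is simply a matter of re-running the deploy command used earlier, assuming the cause of the failure (e.g. a timeout) has been addressed:

[root@fuel ~]# fuel2 env deploy 1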

[Image: deployment success message]

And welcome to OpenStack

[Image: Horizon welcome page]

This ends the second part of this blog. In the next one, I will show how to create an instance and go into more detail about OpenStack.

 

This article Mirantis OpenStack 9.0 installation using VirtualBox – part 2 appeared first on Blog dbi services.

OTN Appreciation Day : External tables


As part of the OTN Appreciation Day (see https://oracle-base.com/blog/2016/09/28/otn-appreciation-day/) I’m writing about one of my favorite Oracle features: External tables.

Traditionally, people loaded data into an Oracle database using SQL*Loader. With the introduction of external tables, SQL*Loader became obsolete (in my view ;-)), because external tables provide the same loading capabilities and so much more than SQL*Loader. Why? Because external tables can be accessed through SQL. You have all the possibilities SQL queries offer: parallelism, complex joins with internal or other external tables and of course all the complex operations SQL allows. ETL became much easier using external tables, because they allow data to be processed through SQL joins and filters before it is even loaded into the database.
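
As a small illustration of the idea (a sketch, not taken from the original post: it assumes a directory /tmp/ext containing a comma-separated file emp.csv and a user with the CREATE ANY DIRECTORY privilege):

$ sqlplus / as sysdba <<'EOF'
CREATE DIRECTORY ext_dir AS '/tmp/ext';

CREATE TABLE emp_ext (
  empno  NUMBER,
  ename  VARCHAR2(30)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY ext_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('emp.csv')
);

-- the flat file is now queryable like any other table, joins and filters included
SELECT ename FROM emp_ext WHERE empno > 100;
EOF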

For more info see
– http://docs.oracle.com/database/121/CNCPT/tablecls.htm#CNCPT88821
– or search on Ask Tom about external tables. You’ll be surprised :-)

 

This article OTN Appreciation Day : External tables appeared first on Blog dbi services.

Documentum story – dm_LogPurge and dfc.date_format


What is the relation between dfc.date_format and dm_LogPurge? This is the question we had to answer when we hit an issue with the dm_LogPurge job.
As usual, once a repository has been created, we configure several Documentum jobs for the housekeeping.
One of them is the dm_LogPurge. It is configured to run once a day with a cutoff_days of 90 days.
So all ran fine until we made another change.
At the request of an application team, we had to change the dfc.date_format to dfc.date_format=dd/MMM/yyyy HH:mm:ss to allow the D2 clients to display months as letters and not digits.
This change fulfilled the application requirement, but since that day the dm_LogPurge job started to remove too many log files (not to say ALL of them). :(

So let's explain how we proceeded to find out the reason for the issue and, more importantly, the solution to avoid it.
We were informed not by seeing that too many files had been removed but by checking the repository log file. BTW, this file is checked automatically using Nagios with our own dbi scripts. So in the repository log file we had errors like:

2016-04-11T20:30:41.453453      16395[16395]    01xxxxxx80028223        [DM_OBJ_MGR_E_FETCH_FAIL]error:   "attempt to fetch object with handle 06xxxxxx800213d2 failed "
2016-04-11T20:30:41.453504      16395[16395]    01xxxxxx80028223        [DM_SYSOBJECT_E_CANT_GET_CONTENT]error:   "Cannot get  format for 0 content of StateOfDocbase sysobject. "
2016-04-11T20:26:10.157989      14679[14679]    01xxxxxx80028220        [DM_OBJ_MGR_E_FETCH_FAIL]error:   "attempt to fetch object with handle 06xxxxxx800213c7 failed "
2016-04-11T20:26:10.158059      14679[14679]    01xxxxxx80028220        [DM_SYSOBJECT_E_CANT_GET_CONTENT]error:   "Cannot get  format for 0 content

 

Based on the timestamp, I saw that the issue could be related to dm_LogPurge. So I checked the job log file as well as the folders which are cleaned out. In the folder, all old log files had been removed:

[dmadmin@content_server_01 log]$ date
Wed Apr 13 06:28:35 UTC 2016
[dmadmin@content_server_01 log]$ pwd
$DOCUMENTUM/dba/log
[dmadmin@content_server_01 log]$ ls -ltr REPO1*
lrwxrwxrwx. 1 dmadmin dmadmin      34 Oct 22 09:14 REPO1 -> $DOCUMENTUM/dba/log/<hex docbaseID>/
-rw-rw-rw-. 1 dmadmin dmadmin 8540926 Apr 13 06:28 REPO1.log

 

To have more information, I set the trace level of the dm_LogPurge job to 10 and analyzed the trace file.
In the trace file we had:

[main] com.documentum.dmcl.impl.DmclApiNativeAdapter@9276326.get( "get,c,sessionconfig,r_date_format ") ==> "31/1212/1995 24:00:00 "
[main] com.documentum.dmcl.impl.DmclApiNativeAdapter@9276326.get( "get,c,08xxxxxx80000362,method_arguments[ 1] ") ==> "-cutoff_days 90 "

 

So why did we have 31/1212/1995 ?

Using the API, I confirmed an issue related to the date format:

API> get,c,sessionconfig,r_date_format
...
31/1212/1995 24:00:00

API> ?,c,select date(now) as dateNow from dm_server_config
datenow
-------------------------
14/Apr/2016 08:36:52

(1 row affected)

 

Date format? As all our changes are documented, I easily found that we had changed the dfc.date_format for the D2 application.
By cross-checking with another installation, used by another application where we did not change the dfc.date_format, I could confirm that the issue was related to this dfc parameter change.

Without dfc.date_format in dfc.properties:

API> get,c,sessionconfig,r_date_format
...
12/31/1995 24:00:00

API> ?,c,select date(now) as dateNow from dm_server_config
datenow
-------------------------
4/14/2016 08:56:13

(1 row affected)

 

Just to be sure that I did not miss something, I also checked that not all log files were removed after manually starting the job; they were still there.
Now, the solution would be to roll back the dfc.date_format change, but this would only help the platform and not the application team. As the initial dfc.date_format change was validated by EMC, we had to find a solution for both teams.

After investigating we found the final solution:
Add dfc.date_format=dd/MMM/yyyy HH:mm:ss in the dfc.properties file of the ServerApps (i.e. directly in the JMS!)

With this solution, the dm_LogPurge job does not remove too many files anymore and the application team can still use months written as letters in its D2 applications.
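
A minimal sketch of that change (the dfc.properties path below is an assumption for a JBoss-based JMS and depends on the Content Server version; adapt it to your installation and restart the JMS afterwards):

$ # path is an assumption, adapt it to your JMS layout
$ DFC_PROPS=$DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/deployments/ServerApps.ear/APP-INF/classes/dfc.properties
$ echo "dfc.date_format=dd/MMM/yyyy HH:mm:ss" >> $DFC_PROPS
$ # then restart the JMS so that the ServerApps pick up the new value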

 

 

This article Documentum story – dm_LogPurge and dfc.date_format appeared first on Blog dbi services.

Mirantis OpenStack 9.0 installation using VirtualBox – part 3


In the first two blogs, I installed the Fuel environment and deployed OpenStack on the Fuel slave nodes, all of that from the Fuel Master node.

In this blog, I will show you all the steps to follow in order to create an instance in OpenStack. Everything is going to be done via the Command Line Interface and not via Horizon, the OpenStack Dashboard (I will explain this in another blog).

 

Let’s begin!

First of all let’s connect to the Fuel Master node and list all the nodes :

[root@fuel ~]# fuel2 node list
+----+------------+--------+-------------+-----------------+-----------+-------------------+---------+---------------+--------+
| id | name | status | os_platform | roles | ip | mac | cluster | platform_name | online |
+----+------------+--------+-------------+-----------------+-----------+-------------------+---------+---------------+--------+
| 1 | Storage | ready | ubuntu | [u'cinder'] | 10.20.0.5 | 08:00:27:80:04:e8 | 1 | None | True |
| 2 | Compute | ready | ubuntu | [u'compute'] | 10.20.0.4 | 08:00:27:cc:85:69 | 1 | None | True |
| 3 | Controller | ready | ubuntu | [u'controller'] | 10.20.0.3 | 08:00:27:35:b0:77 | 1 | None | True |
+----+------------+--------+-------------+-----------------+-----------+-------------------+---------+---------------+--------+

 

Now, I connect to the controller node

[root@fuel ~]# ssh 10.20.0.3
Warning: Permanently added '10.20.0.3' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 14.04.5 LTS (GNU/Linux 3.13.0-98-generic x86_64)

* Documentation:  https://help.ubuntu.com/
Last login: Tue Oct 11 09:55:54 2016 from 10.20.0.2
root@node-3:~#

 

Let's add an alias for each of the Fuel slave nodes:

[root@fuel ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.20.0.2       fuel.domain.tld fuel
10.20.0.3       fuel.domain.tld controller
10.20.0.4       fuel.domain.tld compute
10.20.0.5       fuel.domain.tld storage

 

Now I can connect with the aliases:

[root@fuel ~]# ssh controller
The authenticity of host 'controller (10.20.0.3)' can't be established.
ECDSA key fingerprint is 01:b5:15:22:03:d0:f9:bb:86:3a:06:a7:8c:19:bd:22.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'controller,10.20.0.3' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 14.04.5 LTS (GNU/Linux 3.13.0-98-generic x86_64)

* Documentation: https://help.ubuntu.com/
Last login: Tue Oct 11 10:12:34 2016 from 10.20.0.2
root@node-3:~# exit
logout

 

Repeat this step for the two remaining nodes (compute & storage).

I connect to the controller node :

[root@fuel ~]# ssh controller
Welcome to Ubuntu 14.04.5 LTS (GNU/Linux 3.13.0-98-generic x86_64)

* Documentation:  https://help.ubuntu.com/
Last login: Tue Oct 11 11:24:16 2016 from 10.20.0.2
root@node-3:~#

 

I try to run an OpenStack command line (listing the instances, for example):

root@node-3:~# nova list
 ERROR (CommandError): You must provide a username or user ID via --os-username, --os-user-id, env[OS_USERNAME] or env[OS_USER_ID]
 root@node-3:~#

It is normal, because using the OpenStack command line clients first requires getting a token from Keystone.

In order to do that, you have to specify where the controller node can reach Keystone and get the required information from the OpenStack APIs. Because it is an authentication process, you will also need to specify a username and a password.

But doing this each time you use the OpenStack command line clients can quickly become inconvenient. Fortunately, in OpenStack there is a way to avoid this: create a file with all the environment variables that need to be exported to get this token from Keystone.

Mirantis creates this file for us:

root@node-3:~# ls
openrc

Let’s see what this file contains :

root@node-3:~# cat openrc
#!/bin/sh
export OS_NO_CACHE='true'
export OS_TENANT_NAME='admin'
export OS_PROJECT_NAME='admin'
export OS_USERNAME='admin'
export OS_PASSWORD='admin'
export OS_AUTH_URL='http://192.168.0.2:5000/'
export OS_DEFAULT_DOMAIN='Default'
export OS_AUTH_STRATEGY='keystone'
export OS_REGION_NAME='RegionOne'
export CINDER_ENDPOINT_TYPE='internalURL'
export GLANCE_ENDPOINT_TYPE='internalURL'
export KEYSTONE_ENDPOINT_TYPE='internalURL'
export NOVA_ENDPOINT_TYPE='internalURL'
export NEUTRON_ENDPOINT_TYPE='internalURL'
export OS_ENDPOINT_TYPE='internalURL'
export MURANO_REPO_URL='http://storage.apps.openstack.org/'
export MURANO_PACKAGES_SERVICE='glance'

 

I source the file to export the environment variables :

root@node-3:~# source openrc

 

I check if the variables were exported. It is via the OS_AUTH_URL that the controller node can reach Keystone

root@node-3:~# echo $OS_USERNAME
admin
root@node-3:~# echo $OS_PASSWORD
admin
root@node-3:~# echo $OS_AUTH_URL

http://192.168.0.2:5000/

 

Now I can check whether there are some instances running:

root@node-3:~# nova list
+----+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+----+------+--------+------------+-------------+----------+
+----+------+--------+------------+-------------+----------+

 

 

Let’s create the first instance

In order to create this instance, I need :

  • a flavor
  • an OS
  • a network
  • a keypair
  • a name

 

I list the available flavors

root@node-3:~# nova flavor-list
 +--------------------------------------+-----------+-----------+------+-----------+------+-------+-------------+-----------+
 | ID                                   | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
 +--------------------------------------+-----------+-----------+------+-----------+------+-------+-------------+-----------+
 | 1                                    | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
 | 2                                    | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
 | 3                                    | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
 | 4                                    | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
 | 5                                    | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
 | d942587a-c48b-48ca-9c96-cad3c358eb6e | m1.micro  | 64        | 0    | 0         |      | 1     | 1.0         | True      |
 +--------------------------------------+-----------+-----------+------+-----------+------+-------+-------------+-----------+

 

Then, I list the available operating systems, or images. There is only one image created by default by Fuel, a mini Linux called Cirros.

root@node-3:~# nova image-list
+--------------------------------------+--------+--------+--------+
| ID                                   | Name   | Status | Server |
+--------------------------------------+--------+--------+--------+
| a3708fe7-60f7-49c9-91ed-a6eee1ab8ba4 | TestVM | ACTIVE |        |
+--------------------------------------+--------+--------+--------+

 

I also list the networks (they were created by Fuel during the deployment of OpenStack):

root@node-3:~# neutron net-list
+--------------------------------------+--------------------+-------------------------------------------------------+
| id                                   | name               | subnets                                               |
+--------------------------------------+--------------------+-------------------------------------------------------+
| b22e82c9-df6b-4580-a77e-cde8e93f30d8 | admin_floating_net | 7acc6b15-1c00-4447-b4f7-0fcced7a594b 172.16.0.0/24    |
| 09b1e122-cb63-44d5-af0b-244d3aa06331 | admin_internal_net | aea6fc29-dfb6-4586-9b09-70ce5c992315 192.168.111.0/24 |
+--------------------------------------+--------------------+-------------------------------------------------------+

 

I create a keypair that will be injected into the future instance in order to allow authentication without a password (unless password authentication was set up). This step is not really needed in this case because the Cirros image provides a password by default, but it can be helpful if you use other cloud images (Ubuntu, CentOS, etc.).
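
If no key exists yet on the controller node, one can be generated first and registered under another name (a sketch; mykey2 is just an example name):

root@node-3:~# ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa      # generate a keypair without passphrase
root@node-3:~# nova keypair-add --pub-key ~/.ssh/id_rsa.pub mykey2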

root@node-3:~# nova keypair-add --pub-key ~/.ssh/authorized_keys mykey
root@node-3:~# nova keypair-list
+-------+------+-------------------------------------------------+
| Name  | Type | Fingerprint                                     |
+-------+------+-------------------------------------------------+
| mykey | ssh  | ee:56:e6:c0:7b:e2:d5:2b:61:23:d7:76:49:b3:d8:d5 |
+-------+------+-------------------------------------------------+

 

Now that I have all the information I need, let's create the instance, which I will name InstanceTest01:

root@node-3:~# nova boot \
> --flavor m1.micro \
> --image TestVM \
> --key-name mykey \
> --nic net-id=09b1e122-cb63-44d5-af0b-244d3aa06331 \
> InstanceTest01
+--------------------------------------+-------------------------------------------------+
| Property | Value |
+--------------------------------------+-------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | |
| OS-EXT-SRV-ATTR:host | - |
| OS-EXT-SRV-ATTR:hostname | instancetest01 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | - |
| OS-EXT-SRV-ATTR:instance_name | instance-00000001 |
| OS-EXT-SRV-ATTR:kernel_id | |
| OS-EXT-SRV-ATTR:launch_index | 0 |
| OS-EXT-SRV-ATTR:ramdisk_id | |
| OS-EXT-SRV-ATTR:reservation_id | r-dyo0086w |
| OS-EXT-SRV-ATTR:root_device_name | - |
| OS-EXT-SRV-ATTR:user_data | - |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | - |
| OS-SRV-USG:terminated_at | - |
| accessIPv4 | |
| accessIPv6 | |
| adminPass | bUQJDtwM3vjr |
| config_drive | |
| created | 2016-10-11T13:08:34Z |
| description | - |
| flavor | m1.micro (d942587a-c48b-48ca-9c96-cad3c358eb6e) |
| hostId | |
| host_status | |
| id | b84b49f1-1b01-4aa9-bd9c-c8691fca9298 |
| image | TestVM (a3708fe7-60f7-49c9-91ed-a6eee1ab8ba4) |
| key_name | mykey |
| locked | False |
| metadata | {} |
| name | InstanceTest01 |
| os-extended-volumes:volumes_attached | [] |
| progress | 0 |
| security_groups | default |
| status | BUILD |
| tenant_id | abfec6fc54c14da28f9971e04c344ec8 |
| updated | 2016-10-11T13:08:36Z |
| user_id | d0e1e11f84064f4c8aa02381f0d42ed2 |
+--------------------------------------+-------------------------------------------------+

Here is the instance running :

root@node-3:~# nova list
+--------------------------------------+----------------+--------+------------+-------------+----------------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+----------------+--------+------------+-------------+----------------------------------+
| b84b49f1-1b01-4aa9-bd9c-c8691fca9298 | InstanceTest01 | ACTIVE | - | Running | admin_internal_net=192.168.111.3 |
+--------------------------------------+----------------+--------+------------+-------------+----------------------------------+

So the instance is up and running

 

We connect to the compute node to check if the instance is really running on it:

Connection to controller closed.
[root@fuel ~]# ssh compute
Welcome to Ubuntu 14.04.5 LTS (GNU/Linux 3.13.0-98-generic x86_64)

* Documentation: https://help.ubuntu.com/
Last login: Tue Oct 11 18:07:23 2016 from 10.20.0.2
root@node-2:~# virsh list
Id Name State
----------------------------------------------------
2 instance-00000001 running

root@node-2:~#

Yes, the instance is running.

I need to add the SSH rule for accessing the instance, and I will also add the ICMP one. These rules are managed by security groups.

Let's try to ping the instance just created. From now on, all the operations are done on the controller node:

 root@node-3:~# ip netns exec qrouter-0c67b78a-93d4-417c-9ad3-3b29e1480934 ping 192.168.111.3
 PING 192.168.111.3 (192.168.111.3) 56(84) bytes of data.
 ^C
 --- 192.168.111.3 ping statistics ---
 3 packets transmitted, 0 received, 100% packet loss, time 2004ms

 

I cannot ping my instance. Let's see which rules are currently in the default security group created by OpenStack:

root@node-3:~# nova secgroup-list
 +--------------------------------------+---------+------------------------+
 | Id                                   | Name    | Description            |
 +--------------------------------------+---------+------------------------+
 | 954e4d85-ea38-4a4b-bbe6-e355946fdfb0 | default | Default security group |
 +--------------------------------------+---------+------------------------+
root@node-3:~# nova secgroup-list-rules 954e4d85-ea38-4a4b-bbe6-e355946fdfb0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range | Source Group |
+-------------+-----------+---------+-----------+--------------+
|             |           |         |           | default      |
|             |           |         |           | default      |
+-------------+-----------+---------+-----------+--------------+

There are no rules, so I add the ICMP rule to this default security group:

root@node-3:~# nova secgroup-add-rule 954e4d85-ea38-4a4b-bbe6-e355946fdfb0 icmp -1 -1 0.0.0.0/0
 +-------------+-----------+---------+-----------+--------------+
 | IP Protocol | From Port | To Port | IP Range | Source Group |
 +-------------+-----------+---------+-----------+--------------+
 | icmp        | -1        | -1      | 0.0.0.0/0 |              |
 +-------------+-----------+---------+-----------+--------------+

 

I check if the rule was added

root@node-3:~# nova secgroup-list-rules 954e4d85-ea38-4a4b-bbe6-e355946fdfb0
 +-------------+-----------+---------+-----------+--------------+
 | IP Protocol | From Port | To Port | IP Range  | Source Group |
 +-------------+-----------+---------+-----------+--------------+
 |             |           |         |           | default      |
 |             |           |         |           | default      |
 | icmp        | -1        | -1      | 0.0.0.0/0 |              |
 +-------------+-----------+---------+-----------+--------------+

 

Let's test if I can ping the instance. We need to use the qrouter network namespace, which is basically the router that the controller node will use to reach the instance on the compute node:

root@node-3:~# ip netns list
qdhcp-09b1e122-cb63-44d5-af0b-244d3aa06331
qrouter-0c67b78a-93d4-417c-9ad3-3b29e1480934
haproxy
vrouter

 

[Image: qrouter namespace ID]

 

root@node-3:~# ip netns exec qrouter-0c67b78a-93d4-417c-9ad3-3b29e1480934 ping 192.168.111.3
 PING 192.168.111.3 (192.168.111.3) 56(84) bytes of data.
 64 bytes from 192.168.111.3: icmp_seq=1 ttl=64 time=2.29 ms
 64 bytes from 192.168.111.3: icmp_seq=2 ttl=64 time=0.789 ms
 ^C
 --- 192.168.111.3 ping statistics ---
 2 packets transmitted, 2 received, 0% packet loss, time 1001ms

 

I do the same with the ssh rule

First, I test if I can access my instance

root@node-3:~# ip netns exec qrouter-0c67b78a-93d4-417c-9ad3-3b29e1480934 ssh cirros@192.168.111.3 -v
 OpenSSH_6.6.1, OpenSSL 1.0.1f 6 Jan 2014
 debug1: Reading configuration data /etc/ssh/ssh_config
 debug1: /etc/ssh/ssh_config line 19: Applying options for *
 debug1: Connecting to 192.168.111.3 [192.168.111.3] port 22.
 ^C

 

No, I cannot, so I add the rule:

 
root@node-3:~# nova secgroup-add-rule  954e4d85-ea38-4a4b-bbe6-e355946fdfb0 tcp 22 22 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp         | 22        | 22      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+

 

I check if the rule was added :

root@node-3:~# nova secgroup-list-rules 954e4d85-ea38-4a4b-bbe6-e355946fdfb0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range | Source Group |
+-------------+-----------+---------+-----------+--------------+
|             |           |         |           | default      |
| tcp         | 22        | 22      | 0.0.0.0/0 |              |
|             |           |         |           | default      |
| icmp        | -1        | -1      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+

I connect to my instance; the default username is cirros:

root@node-3:~# ip netns exec qrouter-0c67b78a-93d4-417c-9ad3-3b29e1480934 ssh cirros@192.168.111.3 -v
OpenSSH_6.6.1, OpenSSL 1.0.1f 6 Jan 2014
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug1: Connecting to 192.168.111.3 [192.168.111.3] port 22.
debug1: Connection established.
debug1: permanently_set_uid: 0/0
debug1: identity file /root/.ssh/id_rsa type -1
debug1: identity file /root/.ssh/id_rsa-cert type -1
debug1: identity file /root/.ssh/id_dsa type -1
debug1: identity file /root/.ssh/id_dsa-cert type -1
debug1: identity file /root/.ssh/id_ecdsa type -1
debug1: identity file /root/.ssh/id_ecdsa-cert type -1
debug1: identity file /root/.ssh/id_ed25519 type -1
debug1: identity file /root/.ssh/id_ed25519-cert type -1
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.8
debug1: Remote protocol version 2.0, remote software version dropbear_2015.67
debug1: no match: dropbear_2015.67
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: server->client aes128-ctr hmac-md5 none
debug1: kex: client->server aes128-ctr hmac-md5 none
debug1: sending SSH2_MSG_KEX_ECDH_INIT
debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
debug1: Server host key: ECDSA 1b:f1:1c:34:13:83:cd:b1:37:a9:e4:32:37:65:91:c4
The authenticity of host '192.168.111.3 (192.168.111.3)' can't be established.
ECDSA key fingerprint is 1b:f1:1c:34:13:83:cd:b1:37:a9:e4:32:37:65:91:c4.
Are you sure you want to continue connecting (yes/no)? yes

 

 

Type yes, and the password is cubswin:)

Warning: Permanently added '192.168.111.3' (ECDSA) to the list of known hosts.
debug1: ssh_ecdsa_verify: signature correct
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug1: SSH2_MSG_NEWKEYS received
debug1: SSH2_MSG_SERVICE_REQUEST sent
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug1: Authentications that can continue: publickey,password
debug1: Next authentication method: publickey
debug1: Trying private key: /root/.ssh/id_rsa
debug1: Trying private key: /root/.ssh/id_dsa
debug1: Trying private key: /root/.ssh/id_ecdsa
debug1: Trying private key: /root/.ssh/id_ed25519
debug1: Next authentication method: password
cirros@192.168.111.3's password: 
debug1: Authentication succeeded (password).
Authenticated to 192.168.111.3 ([192.168.111.3]:22).
debug1: channel 0: new [client-session]
debug1: Entering interactive session.
debug1: Sending environment.
debug1: Sending env LANG = en_US.UTF-8
$ 
$ # "I am connected to my instance"
$

So now, I am connected to my instance

 

Let’s check if I can ping google

$ ping www.google.com
PING www.google.com (172.217.18.100): 56 data bytes
64 bytes from 172.217.18.100: seq=0 ttl=59 time=29.485 ms
64 bytes from 172.217.18.100: seq=1 ttl=59 time=9.089 ms
64 bytes from 172.217.18.100: seq=2 ttl=59 time=27.027 ms
64 bytes from 172.217.18.100: seq=3 ttl=59 time=9.992 ms
64 bytes from 172.217.18.100: seq=4 ttl=59 time=26.734 ms

This ends this series. Mirantis made OpenStack very simple to install, but it requires good skills in Puppet and Astute if you want to customize the installation (for example, to create a dedicated network node role or to customize the MySQL installation on the controller node).

In other blogs, I will show you how to add nodes to the Fuel environment to make the cloud more powerful.

 

Cet article Mirantis OpenStack 9.0 installation using VirtualBox – part 3 est apparu en premier sur Blog dbi services.


Documentum story – ADTS not working anymore?


A few weeks ago, on one of our Documentum environments, we found out thanks to our monitoring that the renditions weren’t generated anymore by our CTS/ADTS Server… This happened in a Sandbox environment where a lot of dev/testing was done in parallel between EMC, the different Application Teams and the Platform/Architecture Team (us). A lot of changes at the same time means that it might not be easy to find out what caused this issue…

 

After a few checks on our monitoring scripts, just to ensure that the issue wasn’t the monitoring itself, it appeared that this part was working properly and that the renditions were indeed not generated anymore. We therefore checked the configuration of the docbase/rendition server but didn’t find anything suspicious on the configuration side, so we checked the logs of the Rendition Server. The CTS/ADTS Server often prints a lot of different errors that are all linked but appear to have different root causes. Therefore, to know which error is really relevant, I cleaned up the log file (stop CTS/ADTS, back up the log file and remove it) and then launched our monitoring script, which basically removes all existing renditions for a test document, if any, and then requests a new set of renditions to be generated by the CTS/ADTS Server.

 

After doing that, it was clear that the following error was the real one I needed to take a look at:

11:14:58,562  INFO [ Thread-61] CTSThreadPoolManagerImpl -       Added ICTSTask to the ICTSThreadPoolManager: dm_transcode_content
11:14:58,562  INFO [Thread-153] CTSThreadPoolManagerImpl -       Start. About to get Next ICTSTask from pool manager...
11:14:58,562  INFO [Thread-153] CTSThreadPoolManagerImpl -       ICTSThreadPoolManager: removing first item from the list for processing...
11:14:58,562  INFO [Thread-153] CTSThreadPoolManagerImpl -       Removing a task to execute it. Number in waiting list: 1
11:14:58,562  INFO [Thread-153] CTSThreadPoolManagerImpl -       Next CTSTask received...
11:14:58,562  INFO [Thread-153] CTSThreadPoolManagerImpl -       CTSThreadPoolManager has threadlimit -1
11:14:58,562  INFO [Thread-153] CTSThreadPoolManagerImpl -       Processing next CTSTask...
11:14:58,562  INFO [Thread-153] CTSThreadPoolManagerImpl -       About to get notifier from CTSTask...
11:14:58,562  INFO [Thread-153] CTSThreadPoolManagerImpl -       About to get session from CTSTask...
11:14:59,203  INFO [Thread-153] CTSThreadPoolManagerImpl -       About to get RUN CTSTask...
11:14:59,859  WARN [Thread-153] CTSOperationsUtils -       [BOCS/ACS] exportContentFiles error - javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: java.security.cert.CertPathBuilderException: Could not build a validated path.

https://content_server_01:9082/ACS/servlet/ACS?command=read&version=2.3&docbaseid=0f3f245a&basepath=%2Fdata%2Fdctm%2Frepo01%2Fcontent_storage_01%2F003f245a&filepath=80%2F06%2F3f%2F46.docx&objectid=093f245a800a303f&cacheid=dAAEAgA%3D%3DRj8GgA%3D%3D&format=msw12&pagenum=0&signature=wnJx0Z9%2Brzhk3ZMWNQj5hkRq1ZtAwqZGigeLdG%2FLUsc8WDs8WUHBIPf5FHbrYsmbU%2Bby7pTbxtcxtcwMsGIhwyLzREkrec%2BZzMYY3bLY88sad%2BLlzJfqzYveIEu4iebZnOwm4g%2FxyZzfR3C4Yu3W5FgBaxulngiopjVMe587B6k%3D&servername=content_server_01ACS1&mode=1&timestamp=1465809299&length=12586&mime_type=application%2Fvnd.openxmlformats-officedocument.wordprocessingml.document&parallel_streaming=true&expire_delta=360

11:14:59,859 ERROR [Thread-153] CTSThreadPoolManagerImpl -       Exception in CTSThreadPoolManagerImpl, notification :
com.documentum.cts.exceptions.internal.CTSServerTaskException: Error while exporting file(s) from repo01, objectName: check_ADTS_rendition_creation errorMessage: Executing Export operation failed
                Could not get the content file for check_ADTS_rendition_creation (093f245a800a303f, msw12, null)
Cause Exception was:
com.documentum.cts.exceptions.CTSException: Error while exporting file(s) from repo01, objectName: check_ADTS_rendition_creation errorMessage: Executing Export operation failed
                Could not get the content file for check_ADTS_rendition_creation (093f245a800a303f, msw12, null)
Cause Exception was:
DfException:: THREAD: Thread-153; MSG: Error while exporting file(s) from repo01, objectName: check_ADTS_rendition_creation errorMessage: Executing Export operation failed
                Could not get the content file for check_ADTS_rendition_creation (090f446e800a303f, msw12, null); ERRORCODE: ff; NEXT: null
                at com.documentum.cts.util.CTSOperationsUtils.getExportedFileEx5(CTSOperationsUtils.java:626)
                at com.documentum.cts.util.CTSOperationsUtils.getExportedFileEx4(CTSOperationsUtils.java:332)
                at com.documentum.cts.util.CTSOperationsUtils.getExportedFileEx3(CTSOperationsUtils.java:276)
                at com.documentum.cts.util.CTSOperationsUtils.getExportedFileEx2(CTSOperationsUtils.java:256)
                at com.documentum.cts.impl.services.task.CTSTask.exportInputContent(CTSTask.java:4716)
                at com.documentum.cts.impl.services.task.CTSTask.retrieveInputURL(CTSTask.java:4594)
                at com.documentum.cts.impl.services.task.CTSTask.initializeFromCommand(CTSTask.java:2523)
                at com.documentum.cts.impl.services.task.CTSTask.execute(CTSTask.java:922)
                at com.documentum.cts.impl.services.task.CTSTaskBase.doExecute(CTSTaskBase.java:514)
                at com.documentum.cts.impl.services.task.CTSTaskBase.run(CTSTaskBase.java:460)
                at com.documentum.cts.impl.services.thread.CTSTaskRunnable.run(CTSTaskRunnable.java:207)
                at java.lang.Thread.run(Thread.java:745)
11:14:59,859  INFO [Thread-153] CTSQueueProcessor -       _failureNotificationAdmin : false
11:14:59,859  INFO [Thread-153] CTSQueueProcessor -       _failureNotification : true

 

OK, so now it is clear that the real error is: “java.security.cert.CertPathBuilderException: Could not build a validated path”. This always means that a specific SSL certificate chain isn’t trusted. As you can see above, “BOCS/ACS” is mentioned on the same line and the line just below contains the URL of the ACS… Thinking about it, one of the changes planned for that day was indeed the installation and activation of the D2-BOCS on this environment. So what is the link between the ACS URL and the D2-BOCS installation? When installing the D2-BOCS, if you want to keep your environment secured, you need the ACS to be switched to HTTPS, because the D2-BOCS forces D2 to use the ACS URLs to download documents to the client’s workstation, whereas D2 doesn’t use the ACS at all when there is no D2-BOCS installed. So the D2-BOCS installation isn’t linked to the CTS/ADTS at all, but one of our prerequisites for it was to set up the ACS in HTTPS, and that does affect the CTS/ADTS Server because it uses the ACS to download the documents, as you can see in the error above.

 

Now that we knew what the error was, and just to confirm it, I switched the ACS URL back to HTTP (using DA: Administration > Distributed Content Configuration > ACS Servers > Right-click on ACS objects > Properties > ACS Server Connections) and re-initialized the Content Server (using DA: Administration > Basic Configuration > Content Servers > Right-click on CS objects > Properties > Check “Re-Initialize Server” and click OK).

 

Right after doing that, the monitoring switched back to green, meaning that renditions were created again and that this was indeed the one and only issue. So what can we do if we want to use the ACS in HTTPS together with the renditions? Well, we just have to tell the CTS/ADTS Server that it can trust the ACS SSL certificate, and this is done by updating the cacerts file of the Java used by the Rendition Server. This is done pretty easily using the following commands, for which I will suppose that the Rendition Server has been installed on a D: drive under “D:\CTS”.

 

So the first thing to do is to upload your certificate chain to the Rendition Server and put the certificates under “D:\certs” (I will suppose there are two SSL certificates in the chain: a Root and a Gold). Then simply start a command prompt as Administrator and execute the following commands to update the cacerts file of Java:

D:\> copy D:\CTS\java64\1.7.0_72\jre\lib\security\cacerts D:\CTS\java64\1.7.0_72\jre\lib\security\cacerts.bck_YYYYMMDD
        1 file(s) copied.

D:\> D:\CTS\java64\1.7.0_72\bin\keytool.exe -import -trustcacerts -alias root_ca -keystore D:\CTS\java64\1.7.0_72\jre\lib\security\cacerts -file D:\certs\Internal_Root_CA.cer
Enter keystore password:
…
Trust this certificate? [no]:  yes
Certificate was added to keystore
 
D:\> D:\CTS\java64\1.7.0_72\bin\keytool.exe -import -trustcacerts -alias gold_ca -keystore D:\CTS\java64\1.7.0_72\jre\lib\security\cacerts -file D:\certs\Internal_Gold_CA1.cer
Enter keystore password:
Certificate was added to keystore
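
To double-check that both CA certificates really ended up in the keystore, the same keytool can be used to list them by alias (same paths and aliases as above; keytool will again prompt for the keystore password, “changeit” by default):

D:\> D:\CTS\java64\1.7.0_72\bin\keytool.exe -list -v -alias root_ca -keystore D:\CTS\java64\1.7.0_72\jre\lib\security\cacerts

D:\> D:\CTS\java64\1.7.0_72\bin\keytool.exe -list -v -alias gold_ca -keystore D:\CTS\java64\1.7.0_72\jre\lib\security\cacerts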

 

Now just switch the ACS to use HTTPS again, restart the Rendition services using the Windows Services console or the command line, and the next time you request a rendition it will work without errors, even in HTTPS. It is actually a very common mistake: setting up SSL on the Content Server side is great, but you must not forget that some other components might use what you just switched to HTTPS, and these additional components need to trust your SSL certificates too!

 

Note 1: in our case, the WebLogic Server hosting D2 was already in HTTPS and therefore it was already trusting the Internal Root & Gold SSL Certificates, reason why we could use the ACS in HTTPS from D2 without issue.

Note 2: in case you didn’t know about it, I think it is now clear that the CTS/ADTS Server is using the ACS to download the files… Therefore if you want a secured environment even without D2-BOCS, you absolutely need to switch your ACS to HTTPS!

 

Cet article Documentum story – ADTS not working anymore? est apparu en premier sur Blog dbi services.

Partitioning – When data movement is not performed as expected


This blog is about an interesting partitioning story and a curious data movement during a merge operation. I was at one of my customers who uses partitioning intensively for various reasons, including archiving and manageability. A couple of days ago, we decided to test a freshly developed script, which will carry out the automatic archiving work against the concerned database in the quality environment.

Let’s describe the context a little bit.

We used a range-based data distribution model. Boundaries are number-based and increase monotonically with identity values.

We defined a partition function with a RANGE RIGHT strategy as follows:

CREATE PARTITION FUNCTION pfOrderID (INT) 
AS RANGE RIGHT FOR VALUES(@boundary_archive, @boundary_current)
go

 

The first implementation of the partition scheme consisted in storing all partitioned data inside the same filegroup.

CREATE PARTITION SCHEME psOrderID 
AS PARTITION pfOrderID 
ALL TO ([PRIMARY])
go

Well, the initial configuration was as follows. We applied page compression to the archive partition because it contained cold data which is not supposed to be updated very frequently.

[Image: partitioned table starting point]

[Image: partitioned table starting point config]

As you may expect, the next step will consist in merging ARCHIVE and CURRENT partitions by using the MERGE command as follows:

ALTER PARTITION FUNCTION pfOrderID()
MERGE RANGE (@boundary_archive);

 

Basically, we performed a merge operation which, according to the Microsoft documentation, behaves as follows:

The filegroup that originally held boundary_value is removed from the partition scheme unless it is used by a remaining partition, or is marked with the NEXT USED property.

 

[Image: partitioned table after merge]

At this point we might expect SQL Server to have moved the data from the middle partition to the left partition, and according to this other Microsoft pointer:

When two partitions are merged, the resultant partition inherits the data compression attribute of the destination partition.

But the result came as a little surprise, because I expected the resulting partition to keep the PAGE compression of my archive partition.

[Image: partitioned table after merge config]
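
As a side note, the compression state (and row count) of each partition can be checked with a quick query against sys.partitions; the table name dbo.Orders below is only an assumption for this example:

SELECT p.partition_number, p.rows, p.data_compression_desc
FROM sys.partitions AS p
WHERE p.object_id = OBJECT_ID(N'dbo.Orders')  -- partitioned table name assumed for illustration
  AND p.index_id IN (0, 1)                    -- heap or clustered index only
ORDER BY p.partition_number;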

At this point, my assumption was that either the compression value was not inherited correctly as described in the Microsoft documentation, or the data movement was not performed as expected. The latter can be checked very quickly by looking at the corresponding records inside the transaction log file.

SELECT 
	[AllocUnitName],
	Operation,
	COUNT(*) as nb_ops
FROM ::fn_dblog(NULL,NULL)
WHERE [Transaction ID] = (
     SELECT TOP 1 [Transaction ID]
     FROM ::fn_dblog(NULL,NULL)
     WHERE [Xact ID] = 164670)
GROUP BY [AllocUnitName], Operation
ORDER BY nb_ops DESC
GO

Only the relevant sample is shown here:

[Image: log file after merge config]

Referring to what I saw above, I noticed data movement was not performed as expected but rather as shown below:

[Image: partitioned table after merge config 2]

So, this strange behavior seems to explain why the compression state switched from PAGE to NONE in my case. Another strange thing: when we changed the partition scheme to include a dedicated archive filegroup, the behavior returned to normal.

CREATE PARTITION SCHEME psOrderID 
AS PARTITION pfOrderID 
TO ([ARCHIVE], [PRIMARY], [PRIMARY])
go

 

I double-checked the Microsoft documentation to see whether any section describes the behavior I experienced in this case, but without success. After further investigation I found an interesting blog post from Sunil Agarwal (Microsoft) about partition merging and some performance improvements shipped with SQL Server 2008 R2. In short, I was in the specific context described in that post (all partitions in the same filegroup), and the merge optimization came into action transparently because the number of rows in the archive partition was lower than the number of rows in the current partition at the moment of the MERGE operation: SQL Server moves the rows of the smaller partition into the bigger one, so the surviving partition (and its compression setting) is not necessarily the one you expect.

Let me add another relevant piece of information – the number of rows in each partition – to the picture:

[Image: partitioned table optimization stuff]
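
For the record, these row counts per partition can also be computed directly with the $PARTITION function against the partition function defined above (table and column names are assumptions for this example):

SELECT $PARTITION.pfOrderID(OrderID) AS partition_number,
       COUNT(*) AS row_count
FROM dbo.Orders            -- partitioned table name assumed
GROUP BY $PARTITION.pfOrderID(OrderID)
ORDER BY partition_number;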

Just to be clear, this is an interesting and smart mechanism provided by Microsoft, but it may be a little confusing when you’re not aware of this optimization. In my case, my customer and I finally decided to dedicate an archive filegroup to store cold data that will rarely be accessed, and to keep the possibility of moving this cold data to cheaper storage when the archive partition grows to a critical size. But in other cases, if storing all data in the same filegroup is still a relevant scenario for you, keep this optimization in mind so you don’t experience unexpected behavior.

Happy partitioning!

 

 

Cet article Partitioning – When data movement is not performed as expected est apparu en premier sur Blog dbi services.

MariaDB: audit plugin


Why should you Audit your MySQL Instances?

First, to provide a way to track users accessing sensitive data
Secondly, to investigate suspicious queries in all your critical databases
Thirdly, to comply with laws and industry standards

The MariaDB Audit Plugin can help you log all or part of the server activity, such as:
– who was connected and at what time
– which databases and tables were accessed
– which action/event (CONNECT, TABLE, …)
– which queries were run
All of this can be stored in a dedicated audit log file.

This plugin provides auditing not only for MariaDB, where it is included by default, but also for Percona Server and even for Oracle MySQL when using the Community Edition.

Installation on MariaDB

The MariaDB Audit Plugin library is shipped with the MariaDB server
Once the server is installed, you still need to locate your plugin directory and install/load the plugin:
mysqld7-[MariaDB]>SHOW GLOBAL VARIABLES LIKE 'plugin_dir';
+---------------+-----------------------------------------------------------------+
| Variable_name | Value                                                           |
+---------------+-----------------------------------------------------------------+
| plugin_dir    | /u00/app/mysql/product/mariadb-10.1.16-linux-x86_64/lib/plugin/ |
+---------------+-----------------------------------------------------------------+
mysqld7-[MariaDB]>INSTALL PLUGIN server_audit SONAME 'server_audit.so';

Then check whether it has been loaded:
mysqld7-[MariaDB]>SELECT * from information_schema.plugins where plugin_name='server_audit'\G
*************************** 1. row ***************************
PLUGIN_NAME: SERVER_AUDIT
PLUGIN_VERSION: 1.4
PLUGIN_STATUS: ACTIVE
PLUGIN_TYPE: AUDIT
PLUGIN_TYPE_VERSION: 3.2
PLUGIN_LIBRARY: server_audit.so
PLUGIN_LIBRARY_VERSION: 1.11
PLUGIN_AUTHOR: Alexey Botchkov (MariaDB Corporation)
PLUGIN_DESCRIPTION: Audit the server activity
PLUGIN_LICENSE: GPL
LOAD_OPTION: FORCE_PLUS_PERMANENT
PLUGIN_MATURITY: Stable
PLUGIN_AUTH_VERSION: 1.4.0

Configuration of the important audit system variables

You can either set them manually with SET GLOBAL or, as I recommend, define them in the option file (my.cnf):
#
## Audit Plugin MariaDB
#
server_audit_events = CONNECT,QUERY,TABLE # specifies the types of events to log
server_audit_logging = ON # Enable logging
server_audit = FORCE_PLUS_PERMANENT # Load the plugin at startup & prevent it from being removed
server_audit_file_path = /u00/app/mysql/admin/mysqld8/log/mysqld8-audit.log # Path & log name
server_audit_output_type = FILE # separate log file
server_audit_file_rotate_size = 100000 # Limit of the log size in Bytes before rotation
server_audit_file_rotations = 9 # Number of audit files before the first will be overwritten
server_audit_excl_users = root # User(s) not audited
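
For completeness, most of these settings are dynamic and could also be changed at runtime with SET GLOBAL (they would then be lost at the next restart unless written to my.cnf); for example:

mysqld7-[MariaDB]>SET GLOBAL server_audit_logging = ON;
mysqld7-[MariaDB]>SET GLOBAL server_audit_events = 'CONNECT,QUERY,TABLE';
mysqld7-[MariaDB]>SET GLOBAL server_audit_excl_users = 'root';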

Restart the server and check the audit system variables
mysqld7-[MariaDB]>show global variables like "server_audit%";
+-------------------------------+----------------------------------------------------+
| Variable_name | Value |
+-------------------------------+----------------------------------------------------+
| server_audit_events | CONNECT,QUERY,TABLE |
| server_audit_excl_users | root |
| server_audit_file_path | /u00/app/mysql/admin/mysqld7/log/mysqld7-audit.log |
| server_audit_file_rotate_now | OFF |
| server_audit_file_rotate_size | 10000 |
| server_audit_file_rotations | 9 |
| server_audit_incl_users | sme |
| server_audit_loc_info | |
| server_audit_logging | ON |
| server_audit_mode | 0 |
| server_audit_output_type | file |
| server_audit_query_log_limit | 1024 |
| server_audit_syslog_facility | LOG_USER |
| server_audit_syslog_ident | mysql-server_auditing |
| server_audit_syslog_info | |
| server_audit_syslog_priority | LOG_INFO |
+-------------------------------+----------------------------------------------------+
16 rows in set (0.00 sec)

Audit log file

In the audit log file directory, you will find one current audit file and up to 9 archived audit log files, as defined by the parameter server_audit_file_rotations:
mysql@MariaDB:/u00/app/mysql/admin/mysqld7/log/ [mysqld7] ll
total 344
-rw-rw----. 1 mysql mysql 242 Oct 13 10:35
-rw-rw----. 1 mysql mysql 556 Oct 13 10:35 mysqld7-audit.log
-rw-rw----. 1 mysql mysql 10009 Oct 13 10:35 mysqld7-audit.log.1
-rw-rw----. 1 mysql mysql 10001 Oct 12 14:22 mysqld7-audit.log.2
-rw-rw----. 1 mysql mysql 10033 Oct 12 13:47 mysqld7-audit.log.3
-rw-rw----. 1 mysql mysql 10033 Oct 12 13:41 mysqld7-audit.log.4
-rw-rw----. 1 mysql mysql 10033 Oct 12 13:34 mysqld7-audit.log.5
-rw-rw----. 1 mysql mysql 10033 Oct 12 13:28 mysqld7-audit.log.6
-rw-rw----. 1 mysql mysql 10033 Oct 12 13:22 mysqld7-audit.log.7
-rw-rw----. 1 mysql mysql 10033 Oct 12 13:15 mysqld7-audit.log.8
-rw-rw----. 1 mysql mysql 10033 Oct 12 13:09 mysqld7-audit.log.9

You can check the latest records in the current one
mysqld7-[MariaDB]>tail -f mysqld7-audit.log
20161013 14:22:57,MYSQL,sme,localhost,31,179,QUERY,employees,'create tables tst(name varchar(20))',1064
20161013 14:23:05,MYSQL,sme,localhost,31,180,CREATE,employees,tst,
20161013 14:23:05,MYSQL,sme,localhost,31,180,QUERY,employees,'create table tst(name varchar(20))',0
20161013 14:23:41,MYSQL,sme,localhost,31,181,WRITE,employees,tst,
20161013 14:23:41,MYSQL,sme,localhost,31,181,QUERY,employees,'insert into tst values(\'toto\')',0
20161013 14:24:26,MYSQL,sme,localhost,31,182,WRITE,employees,tst,
20161013 14:24:26,MYSQL,sme,localhost,31,182,QUERY,employees,'update tst set name=\'titi\'',0
20161013 14:24:53,MYSQL,sme,localhost,31,183,QUERY,employees,'delete from tst',1142
20161013 14:48:34,MYSQL,sme,localhost,33,207,READ,employees,tst,
20161013 14:48:34,MYSQL,sme,localhost,33,207,QUERY,employees,'select * from tst',0
20161013 15:35:16,MYSQL,sme,localhost,34,0,FAILED_CONNECT,,,1045
20161013 15:35:16,MYSQL,sme,localhost,34,0,DISCONNECT,,,0
20161013 15:35:56,MYSQL,sme,localhost,35,0,CONNECT,,,0
20161013 15:35:56,MYSQL,sme,localhost,35,210,QUERY,,'select @@version_comment limit 1',0
20161013 15:36:03,MYSQL,sme,localhost,35,0,DISCONNECT,,,0

The audit log file, which is in plain-text format, contains the following comma-separated fields:
timestamp : 20161012 10:33:58
serverhost : MYSQL
user : sme
host : localhost
connection id: 9
query id : 77
operation : QUERY
database : employees
object : ‘show databases’ # Executed query if QUERY or table name if TABLE operation.
return code : 0
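
Because the format is plain, comma-separated text, the audit trail can easily be mined with standard shell tools; as a small sketch, the failed connection attempts per user could be counted like this (file names taken from the configuration above):

mysql@MariaDB:/u00/app/mysql/admin/mysqld7/log/ [mysqld7] grep FAILED_CONNECT mysqld7-audit.log* | awk -F',' '{print $3}' | sort | uniq -c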

Installation on Percona or MySQL

It is quite the same as on MariaDB.
Once your server is installed, you have to locate your plugin directory:
mysqld8-[Percona]>SHOW GLOBAL VARIABLES LIKE 'plugin_dir';
+---------------+--------------------------------------------------------------------------------------+
| Variable_name | Value                                                                                |
+---------------+--------------------------------------------------------------------------------------+
| plugin_dir    | /u00/app/mysql/product/Percona-Server-5.7.14-7-Linux.x86_64.ssl101/lib/mysql/plugin/ |
+---------------+--------------------------------------------------------------------------------------+

mysqld10-[MySQL]>SHOW GLOBAL VARIABLES LIKE 'plugin_dir';
+---------------+-----------------------------------------------------------------------+
| Variable_name | Value                                                                 |
+---------------+-----------------------------------------------------------------------+
| plugin_dir    | /u00/app/mysql/product/mysql-5.6.14-linux-glibc2.5-x86_64/lib/plugin/ |
+---------------+-----------------------------------------------------------------------+

Then copy the MariaDB plugin server_audit.so into the Percona/MySQL plugin directory:
mysql@Percona:[mysqld8] cp /u00/app/mysql/product/mariadb-10.1.16-linux-x86_64/lib/plugin/server_audit.so
/u00/app/mysql/product/Percona-Server-5.7.14-7-Linux.x86_64.ssl101/lib/mysql/plugin/

mysql@MySQL:[mysqld10] cp /u00/app/mysql/product/mariadb-10.1.16-linux-x86_64/lib/plugin/server_audit.so
/u00/app/mysql/product/mysql-5.6.14-linux-glibc2.5-x86_64/lib/plugin/

Install/load the plugin & check
mysqld8-[(Percona)]>INSTALL PLUGIN server_audit SONAME 'server_audit.so';
mysqld8-[Percona]>SELECT * from information_schema.plugins where plugin_name='server_audit'\G
*************************** 1. row ***************************
PLUGIN_NAME: SERVER_AUDIT
PLUGIN_VERSION: 1.4
PLUGIN_STATUS: ACTIVE
PLUGIN_TYPE: AUDIT
PLUGIN_TYPE_VERSION: 4.1
PLUGIN_LIBRARY: server_audit.so
PLUGIN_LIBRARY_VERSION: 1.4
PLUGIN_AUTHOR: Alexey Botchkov (MariaDB Corporation)
PLUGIN_DESCRIPTION: Audit the server activity
PLUGIN_LICENSE: GPL
LOAD_OPTION: FORCE_PLUS_PERMANENT

mysqld10-[MySQL]>INSTALL PLUGIN server_audit SONAME 'server_audit.so';
mysqld10-[MySQL]>SELECT * from information_schema.plugins where plugin_name='server_audit'\G
**************************** 1. row ***************************
PLUGIN_NAME: SERVER_AUDIT
PLUGIN_VERSION: 1.4
PLUGIN_STATUS: ACTIVE
PLUGIN_TYPE: AUDIT
PLUGIN_TYPE_VERSION: 3.2
PLUGIN_LIBRARY: server_audit.so
PLUGIN_LIBRARY_VERSION: 1.4
PLUGIN_AUTHOR: Alexey Botchkov (MariaDB Corporation)
PLUGIN_DESCRIPTION: Audit the server activity
PLUGIN_LICENSE: GPL
LOAD_OPTION: FORCE_PLUS_PERMANENT

Configuration of the important audit system variables

You can use exactly the same audit system variables as defined for MariaDB.

Conclusion

The MariaDB Audit Plugin is really quick and easy to install
It is a good and cheap auditing solution and can be installed on different distributions
It lets you see exactly which SQL queries are being processed
Auditing information can really help you track suspicious queries, detect mistakes and, overall, troubleshoot abnormal activity

 

Cet article MariaDB: audit plugin est apparu en premier sur Blog dbi services.

How to destroy your performance: PL/SQL vs SQL


Disclaimer: This is in no way a recommendation to avoid PL/SQL. This post just describes a case I faced at a customer, with a specific implementation in PL/SQL that the customer (and I) believed was the most efficient way of doing it in PL/SQL. This was a very good example for myself, reminding me to check the documentation and to verify whether what I believed a feature does is really what the feature is actually doing. When I was doing PL/SQL full time in one of my previous jobs I used this feature heavily without really thinking about what happened in the background. Always keep learning …

Let’s start by building the test case. The issue was on 12.1.0.2 on Linux but I think this will be reproducible on any release (although, never be sure :) ).

SQL> create table t1 as select * from dba_objects;
SQL> insert into t1 select * from t1;
SQL> /
SQL> /
SQL> /
SQL> /
SQL> /
SQL> commit;
SQL> select count(*) from t1;

  COUNT(*)
----------
   5565632

SQL> create table t2 as select object_id from t1 where mod(object_id,33)=0;
SQL> select count(*) from t2;

  COUNT(*)
----------
    168896

These are my two tables used for the test: t1 contains around 5.5 million rows and t2 contains 168896 rows. Coming to the issue: there is a procedure which does this:

create or replace procedure test_update
is
  cursor c1 is select object_id from t2;
  type tab is table of t2.object_id%type index by pls_integer;
  ltab tab;
begin
  open c1;
  fetch c1 bulk collect into ltab;
  close c1;
  forall indx in 1..ltab.count
    update t1 set owner = 'AAA' where object_id = ltab(indx);
end test_update;
/

The procedure uses “bulk collect” and “forall” to fetch the keys from t2 in a first step and then uses these keys to update t1 in a second step. Seemed pretty well done: no loop over each single row to compare with the list and do the update when there is a match. I really couldn’t see an issue here. But when you execute this procedure you’ll wait for ages (at least if you are in a VM running on a notebook and not on super-fast hardware).

The situation at the customer was that I was told that the update, when executed as plain SQL in sqlplus, took less than a second. And really, when you execute this on the test case from above:

SQL> update t1 set owner = 'AAA' where object_id in ( select object_id from t2 );

168896 rows updated.

Elapsed: 00:00:05.30
SQL> rollback;

Rollback complete.

Elapsed: 00:00:02.44
SQL> update t1 set owner = 'AAA' where object_id in ( select object_id from t2 );

168896 rows updated.

Elapsed: 00:00:06.34
SQL> rollback;

Rollback complete.

Elapsed: 00:00:02.70
SQL>

It is quite fast (between 5 and 6 seconds in my environment). So why is the PL/SQL version so much slower? Aren’t “bulk collect” and “forall” the right methods to boost performance? Let’s take a look at the execution plan for the plain SQL version:

----------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation             | Name     | Starts | E-Rows |E-Bytes| Cost (%CPU)| E-Time   | A-Rows |    A-Time     | Buffers | Reads  |  OMem |  1Mem |  O/1/M|
----------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | UPDATE STATEMENT      |          |      1 |       |       | 24303 (100)|          |       0 |00:00:04.52    |     259K|   9325 |       |       |       |
|   1 |  UPDATE               | T1       |      1 |       |       |            |          |       0 |00:00:04.52    |     259K|   9325 |       |       |       |
|*  2 |   HASH JOIN           |          |      1 |    48 |  4416 | 24303   (1)| 00:00:01 |     168K|00:00:01.76    |   86719 |   9325 |  2293K|  2293K|  1/0/0|
|   3 |    VIEW               | VW_NSO_1 |      1 |   161K|  2044K|    72   (2)| 00:00:01 |    2639 |00:00:00.05    |     261 |     78 |       |       |       |
|   4 |     SORT UNIQUE       |          |      1 |     1 |  2044K|            |          |    2639 |00:00:00.04    |     261 |     78 |   142K|   142K|  1/0/0|
|   5 |      TABLE ACCESS FULL| T2       |      1 |   161K|  2044K|    72   (2)| 00:00:01 |     168K|00:00:00.01    |     261 |     78 |       |       |       |
|   6 |    TABLE ACCESS FULL  | T1       |      1 |  5700K|   429M| 23453   (1)| 00:00:01 |    5566K|00:00:05.88    |   86458 |   9247 |       |       |       |
----------------------------------------------------------------------------------------------------------------------------------------------------------------

It is doing a hash join as expected. What about the PL/SQL version? It is doing this:

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------
SQL_ID  4hh65t1u4basp, child number 0
-------------------------------------
UPDATE T1 SET OWNER = 'AAA' WHERE OBJECT_ID = :B1

Plan hash value: 2927627013

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | UPDATE STATEMENT   |      |       |       | 23459 (100)|          |
|   1 |  UPDATE            | T1   |       |       |            |          |
|*  2 |   TABLE ACCESS FULL| T1   |   951 | 75129 | 23459   (1)| 00:00:01 |
---------------------------------------------------------------------------

Uh! Why that? This is what I wasn’t aware of. I always thought that when you use “forall” to send PL/SQL’s SQL to the SQL engine, Oracle would rewrite the statement to expand the list in the where clause or do other optimizations. But this does not happen. The only optimization that takes place when you use “forall” is that the statements are sent in batches to the SQL engine rather than sending each statement one after another. What happens here is that you execute 168896 full table scans, because the same statement (with another bind variable value) is executed 168896 times. That can’t really be fast compared to the SQL version.

Of course you could rewrite the procedure to do the same as the SQL, but this is not the point here. The point is: when you think what you have implemented in PL/SQL is the same as the SQL you compare it to, better think twice and, even better, read the f* manuals, even when you think you are sure what a feature really does :)
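
Just for illustration, here is a minimal sketch of such a set-based rewrite, reusing the test tables from above (this is only an example, not the customer’s actual procedure):

create or replace procedure test_update_setbased
is
begin
  -- let the SQL engine do the join in a single statement
  -- instead of binding every single key from t2
  update t1
     set owner = 'AAA'
   where object_id in ( select object_id from t2 );
end test_update_setbased;
/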

 

Cet article How to destroy your performance: PL/SQL vs SQL est apparu en premier sur Blog dbi services.

Documentum story – IAPI login with a DM_TICKET for a specific user


During our last project, one of the Application Teams requested our help because they needed to execute some commands in IAPI with a specific user for which they didn’t know the password. They tried to use a DM_TICKET as I suggested, but they weren’t able to do so. Therefore I gave them a detailed explanation of how to do it, and I thought I would do the same in this blog because maybe a lot of people don’t know how to do that.

 

So let’s begin! The first thing to do is obviously to obtain a DM_TICKET… For that purpose, you can log in to the Content Server and use the local trust to log in to the docbase with the Installation Owner (I will use “dmadmin” below). Thanks to this local trust on the Content Server, you can put any password; the login will always work for the Installation Owner (if the docbroker and docbase are up, of course…):

[dmadmin@content_server_01 ~]$ iapi DOCBASE -Udmadmin -Pxxx
 
 
        EMC Documentum iapi - Interactive API interface
        (c) Copyright EMC Corp., 1992 - 2015
        All rights reserved.
        Client Library Release 7.2.0050.0084
 
 
Connecting to Server using docbase DOCBASE
[DM_SESSION_I_SESSION_START]info:  "Session 013f245a8000b7ff started for user dmadmin."
 
 
Connected to Documentum Server running Release 7.2.0050.0214  Linux64.Oracle
Session id is s0

 

Ok so now we have a session with the Installation Owner and we can therefore get a DM_TICKET for the specific user I was talking about before. In this blog, I will use “adtsuser” as the “specific user” (ADTS user used for renditions). Getting a DM_TICKET is really simple in IAPI:

API> getlogin,c,adtsuser
...
DM_TICKET=T0JKIE5VTEwgMAoxMwp2ZXJzaW9uIElOVCBTIDAKMwpmbGFncyBJTlQgUyAwCjEKc2VxdWVuY2VfbnVtIElOVCBTIDAKMTgwCmNyZWF0ZV90aW1lIElOVCBTIDAKMTQ1MDE2NzIzNwpleHBpcmVfdGltZSBJTlQgUyAwCjE0NTAxNjc1MzcKZG9tYWluIElOVCBTIDAKMAp1c2VyX25hbWUgU1RSSU5HIFMgMApBIDggYWR0c3VzZXIKcGFzc3dvcmQgSU5UIFMgMAowCmRvY2Jhc2VfbmFtZSBTVFJJTkcgUyAwCkEgMTEgU3ViV2F5X2RlbW8KaG9zdF9uYW1lIFNUUklORyBTIDAKQSAzMSBQSENIQlMtU0QyMjAwNDYuZXUubm92YXJ0aXMubmV0CnNlcnZlcl9uYW1lIFNUUklORyBTIDAKQSAxMSBTdWJXYXlfZGVtbwpzaWduYXR1cmVfbGVuIElOVCBTIDAKMTEyCnNpZ25hdHVyZSBTVFJJTkcgUyAwCkEgMTEyIEFBQUFFSWlRMHFST1lIZEFrZ2hab3hTUUEySityd2xPdnZVcVdKbFdVdTUrR2lDV3ZtY1dkRzVwZnRwVWRDeVVldE42QjVOMnVxajZwYnI3MEthaVNpdGU5aWdmRk43bDA0cjM0d0JtYlloaUpQWXgK
API> exit
Bye

 

Now we do have a DM_TICKET for the user “adtsuser”, so let’s try to use it. You could try to log in the “common” way as I did above, but that will just not work because what we got is a DM_TICKET and not a valid password. Therefore you will need to use something else:

[dmadmin@content_server_01 ~]$ iapi -Sapi
Running with non-standard init level: api
API> connect,DOCBASE,adtsuser,DM_TICKET=T0JKIE5VTEwgMAoxMwp2ZXJzaW9uIElOVCBTIDAKMwpmbGFncyBJTlQgUyAwCjEKc2VxdWVuY2VfbnVtIElOVCBTIDAKMTgwCmNyZWF0ZV90aW1lIElOVCBTIDAKMTQ1MDE2NzIzNwpleHBpcmVfdGltZSBJTlQgUyAwCjE0NTAxNjc1MzcKZG9tYWluIElOVCBTIDAKMAp1c2VyX25hbWUgU1RSSU5HIFMgMApBIDggYWR0c3VzZXIKcGFzc3dvcmQgSU5UIFMgMAowCmRvY2Jhc2VfbmFtZSBTVFJJTkcgUyAwCkEgMTEgU3ViV2F5X2RlbW8KaG9zdF9uYW1lIFNUUklORyBTIDAKQSAzMSBQSENIQlMtU0QyMjAwNDYuZXUubm92YXJ0aXMubmV0CnNlcnZlcl9uYW1lIFNUUklORyBTIDAKQSAxMSBTdWJXYXlfZGVtbwpzaWduYXR1cmVfbGVuIElOVCBTIDAKMTEyCnNpZ25hdHVyZSBTVFJJTkcgUyAwCkEgMTEyIEFBQUFFSWlRMHFST1lIZEFrZ2hab3hTUUEySityd2xPdnZVcVdKbFdVdTUrR2lDV3ZtY1dkRzVwZnRwVWRDeVVldE42QjVOMnVxajZwYnI3MEthaVNpdGU5aWdmRk43bDA0cjM0d0JtYlloaUpQWXgK
...
s0

 

Pretty simple, right? So let’s try to use our session like we always do:

API> retrieve,c,dm_server_config
...
3d3f245a80000102
API> dump,c,l
...
USER ATTRIBUTES

  object_name                                   : DOCBASE
  title                                         : 
  ...

 

And that’s it, you have a working session with a specific user without the need to know any password, you just have to obtain a DM_TICKET for this user using the local trust of the Installation Owner!

 

Cet article Documentum story – IAPI login with a DM_TICKET for a specific user est apparu en premier sur Blog dbi services.

Manage Azure in PowerShell (RM)


Azure offers two deployment models for cloud components: Resource Manager (RM) and the Classic deployment model. As it is newer and easier to manage, Microsoft recommends using the Resource Manager.
Even if these two models can coexist in Azure, they are different and managed differently: in PowerShell, the cmdlets are specific to RM.

In order to be able to communicate with Azure from On-Premises in PowerShell, you need to download and install the Azure PowerShell from WebPI. For more details, please refer to this Microsoft Azure post “How to install and configure Azure PowerShell“.
 
 
Azure PowerShell installs many modules located in C:\Program Files (x86)\Microsoft SDKs\Azure\PowerShell:
Get-module -ListAvailable -Name *AzureRm*
ModuleType Version Name ExportedCommands
---------- ------- ---- ----------------
Manifest 1.1.3 AzureRM.ApiManagement {Add-AzureRmApiManagementRegion, Get-AzureRmApiManagementSsoToken, New-AzureRmApiManagementHostnam...
Manifest 1.0.11 AzureRM.Automation {Get-AzureRMAutomationHybridWorkerGroup, Get-AzureRmAutomationJobOutputRecord, Import-AzureRmAutom...
Binary 0.9.8 AzureRM.AzureStackAdmin {Get-AzureRMManagedLocation, New-AzureRMManagedLocation, Remove-AzureRMManagedLocation, Set-AzureR...
Manifest 0.9.9 AzureRM.AzureStackStorage {Add-ACSFarm, Get-ACSEvent, Get-ACSEventQuery, Get-ACSFarm...}
Manifest 1.0.11 AzureRM.Backup {Backup-AzureRmBackupItem, Enable-AzureRmBackupContainerReregistration, Get-AzureRmBackupContainer...
Manifest 1.1.3 AzureRM.Batch {Remove-AzureRmBatchAccount, Get-AzureRmBatchAccount, Get-AzureRmBatchAccountKeys, New-AzureRmBatc...
Manifest 1.0.5 AzureRM.Cdn {Get-AzureRmCdnCustomDomain, New-AzureRmCdnCustomDomain, Remove-AzureRmCdnCustomDomain, Get-AzureR...
Manifest 0.1.2 AzureRM.CognitiveServices {Get-AzureRmCognitiveServicesAccount, Get-AzureRmCognitiveServicesAccountKey, Get-AzureRmCognitive...
Manifest 1.3.3 AzureRM.Compute {Remove-AzureRmAvailabilitySet, Get-AzureRmAvailabilitySet, New-AzureRmAvailabilitySet, Get-AzureR...
Manifest 1.0.11 AzureRM.DataFactories {Remove-AzureRmDataFactory, Get-AzureRmDataFactoryRun, Get-AzureRmDataFactorySlice, Save-AzureRmDa...
Manifest 1.1.3 AzureRM.DataLakeAnalytics {Get-AzureRmDataLakeAnalyticsDataSource, Remove-AzureRmDataLakeAnalyticsCatalogSecret, Set-AzureRm...
Manifest 1.0.11 AzureRM.DataLakeStore {Add-AzureRmDataLakeStoreItemContent, Export-AzureRmDataLakeStoreItem, Get-AzureRmDataLakeStoreChi...
Manifest 1.0.2 AzureRM.DevTestLabs {Get-AzureRmDtlAllowedVMSizesPolicy, Get-AzureRmDtlAutoShutdownPolicy, Get-AzureRmDtlAutoStartPoli...
Manifest 1.0.11 AzureRM.Dns {Get-AzureRmDnsRecordSet, New-AzureRmDnsRecordConfig, Remove-AzureRmDnsRecordSet, Set-AzureRmDnsRe...
Manifest 1.1.3 AzureRM.HDInsight {Get-AzureRmHDInsightJob, New-AzureRmHDInsightSqoopJobDefinition, Wait-AzureRmHDInsightJob, New-Az...
Manifest 1.0.11 AzureRM.Insights {Add-AzureRmMetricAlertRule, Add-AzureRmLogAlertRule, Add-AzureRmWebtestAlertRule, Get-AzureRmAler...
Manifest 1.1.10 AzureRM.KeyVault {Get-AzureRmKeyVault, New-AzureRmKeyVault, Remove-AzureRmKeyVault, Remove-AzureRmKeyVaultAccessPol...
Manifest 1.0.7 AzureRM.LogicApp {Get-AzureRmIntegrationAccountAgreement, Get-AzureRmIntegrationAccountCallbackUrl, Get-AzureRmInte...
Manifest 0.9.2 AzureRM.MachineLearning {Export-AzureRmMlWebService, Get-AzureRmMlWebServiceKeys, Import-AzureRmMlWebService, Remove-Azure...
Manifest 1.0.12 AzureRM.Network {Add-AzureRmApplicationGatewayBackendAddressPool, Get-AzureRmApplicationGatewayBackendAddressPool,...
Manifest 1.0.11 AzureRM.NotificationHubs {Get-AzureRmNotificationHubsNamespaceAuthorizationRules, Get-AzureRmNotificationHubsNamespaceListK...
Manifest 1.0.11 AzureRM.OperationalInsights {Get-AzureRmOperationalInsightsSavedSearch, Get-AzureRmOperationalInsightsSavedSearchResults, Get-...
Manifest 1.0.0 AzureRM.PowerBIEmbedded {Remove-AzureRmPowerBIWorkspaceCollection, Get-AzureRmPowerBIWorkspaceCollection, Get-AzureRmPower...
Manifest 1.0.11 AzureRM.Profile {Enable-AzureRmDataCollection, Disable-AzureRmDataCollection, Remove-AzureRmEnvironment, Get-Azure...
Manifest 1.1.3 AzureRM.RecoveryServices {Get-AzureRmRecoveryServicesBackupProperties, Get-AzureRmRecoveryServicesVault, Get-AzureRmRecover...
Manifest 1.0.3 AzureRM.RecoveryServices.Backup {Backup-AzureRmRecoveryServicesBackupItem, Get-AzureRmRecoveryServicesBackupManagementServer, Get-...
Manifest 1.1.9 AzureRM.RedisCache {Reset-AzureRmRedisCache, Export-AzureRmRedisCache, Import-AzureRmRedisCache, Remove-AzureRmRedisC...
Manifest 2.0.2 AzureRM.Resources {Get-AzureRmADApplication, Get-AzureRmADGroupMember, Get-AzureRmADGroup, Get-AzureRmADServicePrinc...
Manifest 1.0.2 AzureRM.ServerManagement {Install-AzureRmServerManagementGatewayProfile, Reset-AzureRmServerManagementGatewayProfile, Save-...
Manifest 1.1.10 AzureRM.SiteRecovery {Stop-AzureRmSiteRecoveryJob, Get-AzureRmSiteRecoveryNetwork, Get-AzureRmSiteRecoveryNetworkMappin...
Manifest 1.0.11 AzureRM.Sql {Get-AzureRmSqlDatabaseImportExportStatus, New-AzureRmSqlDatabaseExport, New-AzureRmSqlDatabaseImp...
Manifest 1.1.3 AzureRM.Storage {Get-AzureRmStorageAccount, Get-AzureRmStorageAccountKey, Get-AzureRmStorageAccountNameAvailabilit...
Manifest 1.0.11 AzureRM.StreamAnalytics {Get-AzureRmStreamAnalyticsFunction, Get-AzureRmStreamAnalyticsDefaultFunctionDefinition, New-Azur...
Manifest 1.0.11 AzureRM.Tags {Remove-AzureRmTag, Get-AzureRmTag, New-AzureRmTag}
Manifest 1.0.11 AzureRM.TrafficManager {Disable-AzureRmTrafficManagerEndpoint, Enable-AzureRmTrafficManagerEndpoint, Set-AzureRmTrafficMa...
Manifest 1.0.11 AzureRM.UsageAggregates Get-UsageAggregates
Manifest 1.1.3 AzureRM.Websites {Get-AzureRmAppServicePlanMetrics, New-AzureRmWebAppDatabaseBackupSetting, Restore-AzureRmWebAppBa...

 
The basic cmdlets to connect and navigate between your different Accounts or Subscriptions are located in “AzureRM.Profile” module:
PS C:\> Get-Command -Module AzureRM.Profile
CommandType Name Version Source
----------- ---- ------- ------
Alias Login-AzureRmAccount 1.0.11 AzureRM.Profile
Alias Select-AzureRmSubscription 1.0.11 AzureRM.Profile
Cmdlet Add-AzureRmAccount 1.0.11 AzureRM.Profile
Cmdlet Add-AzureRmEnvironment 1.0.11 AzureRM.Profile
Cmdlet Disable-AzureRmDataCollection 1.0.11 AzureRM.Profile
Cmdlet Enable-AzureRmDataCollection 1.0.11 AzureRM.Profile
Cmdlet Get-AzureRmContext 1.0.11 AzureRM.Profile
Cmdlet Get-AzureRmEnvironment 1.0.11 AzureRM.Profile
Cmdlet Get-AzureRmSubscription 1.0.11 AzureRM.Profile
Cmdlet Get-AzureRmTenant 1.0.11 AzureRM.Profile
Cmdlet Remove-AzureRmEnvironment 1.0.11 AzureRM.Profile
Cmdlet Save-AzureRmProfile 1.0.11 AzureRM.Profile
Cmdlet Select-AzureRmProfile 1.0.11 AzureRM.Profile
Cmdlet Set-AzureRmContext 1.0.11 AzureRM.Profile
Cmdlet Set-AzureRmEnvironment 1.0.11 AzureRM.Profile

According to the cmdlets present in the “AzureRM.Profile” module, you will be able to connect to your Azure Account (enter your credentials):
PS C:\> Login-AzureRmAccount
Environment : AzureCloud
Account : n.courtine@xxxxxx.com
TenantId : a123456b-789b-123c-4de5-67890fg123h4
SubscriptionId : z123456y-789x-123w-4vu5-67890ts123r4
SubscriptionName : ** Subscription Name **
CurrentStorageAccount :

 
You can list your associated Azure Subscriptions:
Get-AzureRmSubscription
SubscriptionName : ** Subscription Name **
SubscriptionId : z123456y-789x-123w-4vu5-67890ts123r4
TenantId : a123456b-789b-123c-4de5-67890fg123h4

 
To switch your Subscription, do as follows:
Select-AzureRmSubscription -SubscriptionId z123456y-789x-123w-4vu5-67890ts123r4
Environment : AzureCloud
Account : n.courtine@xxxxxx.com
TenantId : a123456b-789b-123c-4de5-67890fg123h4
SubscriptionId : z123456y-789x-123w-4vu5-67890ts123r4
SubscriptionName : ** Subscription Name **
CurrentStorageAccount :

 
Or you can take a “snapshot” of your current Azure context. It helps you easily return to the exact context you had at the moment you ran the command:
PS C:\> $context = Get-AzureRmContext
Environment : AzureCloud
Account : n.courtine@xxxxxx.com
TenantId : a123456b-789b-123c-4de5-67890fg123h4
SubscriptionId : z123456y-789x-123w-4vu5-67890ts123r4
SubscriptionName : ** Subscription Name **
CurrentStorageAccount :
...
PS C:\> Set-AzureRmContext -Context $context
Environment : AzureCloud
Account : n.courtine@xxxxxx.com
TenantId : a123456b-789b-123c-4de5-67890fg123h4
SubscriptionId : z123456y-789x-123w-4vu5-67890ts123r4
SubscriptionName : ** Subscription Name **
CurrentStorageAccount :

 
It is also possible to list all the available Storage Accounts associated with your current subscription:
PS C:\> Get-AzureRmStorageAccount | Select StorageAccountName, Location
StorageAccountName Location
------------------ --------
semicroustillants259 westeurope
semicroustillants4007 westeurope
semicroustillants8802 westeurope

 
To see the existing blob containers in each Storage Account:
PS C:\> Get-AzureRmStorageAccount | Select StorageAccountName, ResourceGroupName, Location
Blob End Point: https://dbimssql.blob.core.windows.net/
Name Uri LastModified
---- --- ------------
bootdiagnostics-t... https://dbimssql.blob.core.windows.net/bootdiagnostics-ta... 30.09.2016 12:36:12 +00:00
demo https://dbimssql.blob.core.windows.net/demo 05.10.2016 14:16:01 +00:00
vhds https://dbimssql.blob.core.windows.net/vhds 30.09.2016 12:36:12 +00:00
Blob End Point: https://semicroustillants259.blob.core.windows.net/
Name Uri LastModified
---- --- ------------
mastervhds https://semicroustillants259.blob.core.windows.net/master... 28.09.2016 13:41:19 +00:00
uploads https://semicroustillants259.blob.core.windows.net/uploads 28.09.2016 13:41:19 +00:00
vhds https://semicroustillants259.blob.core.windows.net/vhds 28.09.2016 13:55:57 +00:00
Blob End Point: https://semicroustillants4007.blob.core.windows.net/
Name Uri LastModified
---- --- ------------
artifacts https://semicroustillants4007.blob.core.windows.net/artif... 28.09.2016 13:59:47 +00:00
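
A per-account container listing like the one above can be produced, for example, by piping each storage account context into Get-AzureStorageContainer; this is only a sketch, assuming the Azure.Storage module that ships with Azure PowerShell:

PS C:\> Get-AzureRmStorageAccount | ForEach-Object {
    Write-Host "Blob End Point:" $_.PrimaryEndpoints.Blob
    Get-AzureStorageContainer -Context $_.Context
}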

Azure infrastructure can easily be managed from On-Premises in PowerShell. In a previous post, I explained how to deploy a Virtual Machine from an Image in Azure PowerShell.
If you have remarks or advice, do not hesitate to share ;-)

 

Cet article Manage Azure in PowerShell (RM) est apparu en premier sur Blog dbi services.

Using Apache JMeter to run load tests on a Web application protected by Microsoft Active Directory Federation Services


One of my last missions was to configure Apache JMeter for performance and load tests on a Web Application. Access to this Web Application requires authentication provided by a Microsoft Active Directory Federation Services (ADFS) Single Sign-On environment.
This Single Sign-On communication is based on SAML (Security Assertion Markup Language). SAML is an XML-based open standard data format for exchanging authentication and authorization data between parties, in particular between an identity provider and a service provider. The ADFS login steps rely on several parameters that need to be fetched and re-injected into the following steps, such as ‘SAMLRequest’, ‘RelayState’ and ‘SAMLResponse’.
This step-by-step tutorial shows the SAML part of the JMeter scenario that performs those ADFS login steps.

Record a first scenario

After installing the Apache JMeter tool, you are ready to record a first scenario. Have a look at the JMeter user manual to see how to configure JMeter for recording scenarios.

1. Adapt the HTTP(s) Test Script Recorder

For this task we need to record all HTTP(S) requests: those from the Application and those from the Single Sign-On Server. We then need to change the HTTP(S) Test Script Recorder parameters as below.

Open the “WorkBench” on the tree and click on the “HTTP(S) Test Script Recorder”.

[Image: JMeter ADFS]
The scenario recording requires some changes to the “HTTP(S) Test Script Recorder”.
Change the following:
Port: this is the port on the server running JMeter that will act as proxy. The default value is 8080.
URL Patterns to Include: add “.*” to include all requests (you may exclude some later, if you desire).

2.    Configure the Browser to use the Test Script recorder as proxy

Go to your favourite browser (Firefox, Internet Explorer, Chrome, etc.) and configure the proxy as follows:
The example below is for Internet Explorer 11 (it may differ from version to version):

  1. Go to Tools > Internet Options.
  2. Select the “Connection” tab.
  3. Click the “LAN settings” button.
  4. Check the  “Use a proxy server for your LAN” check-box. The address and port fields should be enabled now.
  5. In the Address type the server name or the IP address of the server running JMeter HTTP(S) Test Script Recorder and in the Port, enter the port entered in Step 1.

From now on, JMeter is proxying the requests.

3.    Record your first scenario

Connect to the Web Application using the browser you have configured in the previous step. Run a simple scenario including the authentication steps. Once done, stop the HTTP(S) Test Script Recorder in JMeter.

4.    Analyse the recorded entries

Analyse the recorded entries to find the entry that redirects to the login page. In this specific case, it was the first request, because the Web Application automatically displays the login page for all users who are not authenticated. From this request, we need to fetch two values, ‘SAMLRequest’ and ‘RelayState’, included in the page response data, and submit them to the ADFS login URL. After a successful login, ADFS will provide a SAMLResponse that needs to be submitted back to the callback URL. This can be done by using Regular Expression Extractors. Refer to the image below to see how to do this.

[Image: JMeter ADFS IMG2]

The three extractors, their associated variables and their regular expressions are:
SAMLRequest Extractor (variable SAMLRequest): name="SAMLRequest" value="([0-9A-Za-z;.: \/=+]*)"
RelayState Extractor (variable RelayState): name="RelayState" value="([&#;._a-zA-Z0-9]*)"
SAMLResponse Extractor (variable SAMLResponse): name="SAMLResponse" value="([&#;._+=a-zA-Z0-9]*)"

In the recorded scenario, look for the entries having SAMLRequest, RelayState and SAMLResponse as parameters and replace their values with the corresponding variables set by the Regular Expression Extractors created in the previous step.

[Image: JMeter ADFS IMG3]
[Image: JMeter ADFS IMG4]

Once this is done, the login test scenario can be executed.

This JMeter test plan can then be cleaned of the superfluous recorded URL requests and used as a base plan for recording more complex test plans.

 

Cet article Using Apache JMeter to run load tests on a Web application protected by Microsoft Active Directory Federation Services est apparu en premier sur Blog dbi services.


Documentum story – Monitoring of WebLogic Servers


As you already know if you are following our Documentum Story, we have been building, operating and managing, for some time now, a huge Documentum Platform with more than 115 servers so far (and still growing). To manage this platform properly, we need an efficient monitoring tool. In this blog, I will not talk about Documentum but rather a little bit about the monitoring solution we integrated with Nagios to be able to support all of our WebLogic Servers. For those of you who don’t know, Nagios is a very popular Open Source monitoring tool launched in 1999. By default Nagios doesn’t provide any interface to monitor WebLogic or Documentum and therefore we chose to build our own script package to properly monitor our Platform.

 

At the beginning of the project, when we were installing the first WebLogic Servers, we used the monitoring scripts coming from the old Platform (a Documentum 6.7 Platform not managed by us). The idea behind these monitoring scripts was the following:

  • The Nagios Server needs to perform a check of a service
  • The Nagios Server contacts the Nagios Agent which executes the check
  • The Check is starting its own WLST script to retrieve only the value needed for this check (each check calls a different WLST script)
  • The Nagios Agent returns the value to the Nagios Server which is then happy with it

 

This pretty simple approach was working fine at the beginning when we only had a few WebLogic Servers with not so much to monitor on them… The problem is that the Platform was growing very fast and we quickly started to see a few timeouts on the different checks, because Nagios was trying to execute a lot of checks at the same time on the same host. For example, on a specific environment, we had two WebLogic Domains running with 4 or 5 Managed Servers each, hosting Documentum Applications (DA, D2, D2-Config, …). We were monitoring the heap size, the number of threads, the server state, the number of sessions, the different URLs with and without Load Balancer, and so on, for each Managed Server and for the AdminServers too. Therefore we quickly reached a point where 5 or 10 WLST scripts were running at the same time just for the monitoring.

 

The problem with a WLST script is that it takes a lot of time to initialize itself and start (several seconds) and during that time, 1 or 2 CPUs are fully used only for that. Now correlate this figure with the fact that there are dozens of checks running every 5 minutes for each domain, all starting their own WLST script. In the end, you will get a heavily loaded WebLogic Server with huge CPU consumption only for the monitoring… That might be acceptable for a small installation but it is definitely not the right thing to do for a huge Platform.

 

Therefore we needed to do something else. To solve this particular problem, I developed a new set of scripts that I integrated with Nagios to replace the old ones. The idea behind these new scripts was that they should provide at least the same information as the old ones, but without starting so many WLST scripts, and they should be easily extensible. I worked on this small development and this is what I came up with:

  • The Nagios Server needs to perform a check of a service
  • The Nagios Server contacts the Nagios Agent which executes the check
  • The Check is reading a log file to find the value needed for this check
  • The Nagios Agent returns the value to the Nagios Server which is then happy with it

 

Pretty similar, isn’t it? Indeed… and yet so different! The main idea behind this new version is that instead of starting a WLST script for each check, which fully uses 1 or 2 CPUs and lasts for 2 to 10 seconds (depending on the type of check and on the load), this new version only reads a very short log file (1 log file per check) that contains one line: the result of the check. Reading a log file like that takes a few milliseconds and it doesn’t consume 2 CPUs to do so… Now the remaining question is: how do we handle the process that populates the log files? Because yes, reading a log file is fast, but how can we ensure that this log file contains the correct data?

 

To manage that, this is what I did:

  • Creation of a shell script (a stripped-down sketch follows this list) that will:
    • Be executed by the Nagios Agent for each check
    • Check if the WebLogic Domain is running and exit if not
    • Check if the WLST script is running and start it if not
    • Ensure the log file has been updated in the last 5 minutes (meaning the monitoring is running and the value that will be read is correct)
    • Read the log file
    • Analyze the information coming from the log file and return that to the Nagios Agent
  • Creation of a WLST script that will:
    • Be started once, do its job, sleep for 2 minutes and then do it again
    • Retrieve the monitoring values and store that in log files
    • Store error messages in the log files if there is any issue
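
As announced above, here is a stripped-down sketch of what such a shell wrapper could look like; the paths, file name and thresholds are purely illustrative and the domain/WLST-process handling is left out:

#!/bin/bash
# Nagios check: read the value pre-computed by the WLST script instead of starting a new WLST session
LOG_FILE=/app/nagios/etc/objects/scripts/heapfree_DOMAIN_msD2-01.out   # illustrative file name

# The value must have been refreshed within the last 5 minutes
if [[ -z $(find "${LOG_FILE}" -mmin -5 2>/dev/null) ]]; then
  echo "UNKNOWN - ${LOG_FILE} is missing or older than 5 minutes (is the WLST monitoring script running?)"
  exit 3
fi

# The WLST script writes one line per check, e.g.: heapfree_DOMAIN_msD2-01_OUT <free> <size> <percent>
read -r label heap_free heap_size heap_free_pct < "${LOG_FILE}"

if [[ ${heap_free_pct} -lt 10 ]]; then
  echo "CRITICAL - only ${heap_free_pct}% heap free (${heap_free}/${heap_size})"
  exit 2
elif [[ ${heap_free_pct} -lt 20 ]]; then
  echo "WARNING - ${heap_free_pct}% heap free (${heap_free}/${heap_size})"
  exit 1
fi
echo "OK - ${heap_free_pct}% heap free (${heap_free}/${heap_size})"
exit 0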

 

I will not describe the shell script in any more detail because it is mostly basic shell commands, but I will show you instead an example of a WLST script that can be used to monitor a few things (ThreadPool of all Servers, HeapFree of all Servers, Sessions of all Applications deployed on all Servers):

[nagios@weblogic_server_01 scripts]$ cat DOMAIN_check_weblogic.wls
from java.io import File
from java.io import FileOutputStream

directory='/app/nagios/etc/objects/scripts'
userConfig=directory + '/DOMAIN_configfile.secure'
userKey=directory + '/DOMAIN_keyfile.secure'
address='weblogic_server_01'
port='8443'

connect(userConfigFile=userConfig, userKeyFile=userKey, url='t3s://' + address + ':' + port)

def setOutputToFile(fileName):
  outputFile=File(fileName)
  fos=FileOutputStream(outputFile)
  theInterpreter.setOut(fos)

def setOutputToNull():
  outputFile=File('/dev/null')
  fos=FileOutputStream(outputFile)
  theInterpreter.setOut(fos)

while 1:
  domainRuntime()
  for server in domainRuntimeService.getServerRuntimes():
    setOutputToFile(directory + '/threadpool_' + domainName + '_' + server.getName() + '.out')
    cd('/ServerRuntimes/' + server.getName() + '/ThreadPoolRuntime/ThreadPoolRuntime')
    print 'threadpool_' + domainName + '_' + server.getName() + '_OUT',get('ExecuteThreadTotalCount'),get('HoggingThreadCount'),get('PendingUserRequestCount'),get('CompletedRequestCount'),get('Throughput'),get('HealthState')
    setOutputToNull()
    setOutputToFile(directory + '/heapfree_' + domainName + '_' + server.getName() + '.out')
    cd('/ServerRuntimes/' + server.getName() + '/JVMRuntime/' + server.getName())
    print 'heapfree_' + domainName + '_' + server.getName() + '_OUT',get('HeapFreeCurrent'),get('HeapSizeCurrent'),get('HeapFreePercent')
    setOutputToNull()

  try:
    setOutputToFile(directory + '/sessions_' + domainName + '_console.out')
    cd('/ServerRuntimes/AdminServer/ApplicationRuntimes/consoleapp/ComponentRuntimes/AdminServer_/console')
    print 'sessions_' + domainName + '_console_OUT',get('OpenSessionsCurrentCount'),get('SessionsOpenedTotalCount')
    setOutputToNull()
  except WLSTException,e:
    setOutputToFile(directory + '/sessions_' + domainName + '_console.out')
    print 'CRITICAL - The Server AdminServer or the Administrator Console is not started'
    setOutputToNull()

  domainConfig()
  for app in cmo.getAppDeployments():
    domainConfig()
    cd('/AppDeployments/' + app.getName())
    for appServer in cmo.getTargets():
      domainRuntime()
      try:
        setOutputToFile(directory + '/sessions_' + domainName + '_' + app.getName() + '.out')
        cd('/ServerRuntimes/' + appServer.getName() + '/ApplicationRuntimes/' + app.getName() + '/ComponentRuntimes/' + appServer.getName() + '_/' + app.getName())
        print 'sessions_' + domainName + '_' + app.getName() + '_OUT',get('OpenSessionsCurrentCount'),get('SessionsOpenedTotalCount')
        setOutputToNull()
      except WLSTException,e:
        setOutputToFile(directory + '/sessions_' + domainName + '_' + app.getName() + '.out')
        print 'CRITICAL - The Managed Server ' + appServer.getName() + ' or the Application ' + app.getName() + ' is not started'
        setOutputToNull()

  java.lang.Thread.sleep(120000)

[nagios@weblogic_server_01 scripts]$

 

A few notes related to the above WLST script:

  • userConfig and userKey are two files created previously in WLST that contain the username/password of the current user (at the time of creation of these files) in an encrypted way (see the sketch right after this list). This allows you to login to WLST without having to type your username and password and, more importantly, without having to put a clear text password in this file…
  • To ensure the security of this environment we are always using t3s to perform the monitoring checks and this requires the AdminServer to be configured for HTTPS.
  • In the script, I’m using the “setOutputToFile” and “setOutputToNull” functions. The first one redirects the output to the file given in parameter while the second one discards all output. That’s basically to ensure that the generated log files ONLY contain the needed lines and nothing else.
  • There is an infinite loop (while 1) that executes all checks, creates/updates all log files and then sleeps for 120 000 ms (so that’s 2 minutes) before repeating.
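
For reference, these two secure files can be generated once with the WLST storeUserConfig command. The session below is only a sketch: the wlst.sh path, the monitoring account and the prompts are assumptions to adapt to your installation (WLST may ask for a confirmation before writing the key file); the target file names are the ones used in the script above:

[nagios@weblogic_server_01 scripts]$ $MW_HOME/oracle_common/common/bin/wlst.sh
wls:/offline> connect('monitoring_user','monitoring_password','t3s://weblogic_server_01:8443')
wls:/DOMAIN/serverConfig> storeUserConfig('/app/nagios/etc/objects/scripts/DOMAIN_configfile.secure','/app/nagios/etc/objects/scripts/DOMAIN_keyfile.secure')
wls:/DOMAIN/serverConfig> disconnect()
wls:/offline> exit()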

 

As said above, this is easily extendable and therefore you can just add a new paragraph with the new values to retrieve. So have fun with that! :)

 

Now let's compare the two methods, using real figures coming from one of our WebLogic Servers:

  • Old:
    • 40 monitoring checks running every 5 minutes => 40 WLST scripts started
    • each one for a duration of 6 seconds (average)
    • each one using 200% CPU during that time (2 CPUs)
  • New:
    • Shell script:
      • 40 monitoring checks running every 5 minutes => 40 log files read
      • each one for a duration of 0.1s (average)
      • each one using 100% CPU during that time (1 CPU)
    • WLST script:
      • One loop every 2 minutes (so 2.5 loops in 5 minutes)
      • each one for a duration of 0.5s (average)
      • each one using 100% CPU during that time (1 CPU)

 

Period      CPU Time (Old)                   CPU Time (New)
5 minutes   40*6*2 <~> 480 s                 40*0.1*1 + 2.5*0.5*1 <~> 5.25 s
1 day       480*(1440/5) <~> 138 240 s       5.25*(1440/5) <~> 1 512 s
            <~> 2 304 min                    <~> 25.2 min
            <~> 38.4 h                       <~> 0.42 h

Based on these figures, we can see that our new monitoring solution is almost 100 times more efficient than the old one, so that's a success: instead of spending 38.4 hours of CPU time over a 24-hour period (so that's 1.6 CPUs busy the whole day), we are now using 1 CPU for only 25 minutes! Here I'm just talking about the CPU time but of course you can do the same kind of comparison for the memory, processes, aso…

 

Note: Starting with WebLogic 12c, Oracle introduced the RESTful Management Services which can now be used to monitor WebLogic too… They have been improved in 12.2 and can become a pretty good alternative to WLST scripting, but for now we are still using this WLST approach with one single execution every 2 minutes and then Nagios reading the log files when need be.
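
To give an idea of that alternative, a single HTTP call can return the state of all the servers of a domain. The example below is only a sketch assuming a WebLogic 12.2.1 domain with the RESTful Management interface enabled and the same monitoring account/port as above; the exact URL and fields may differ depending on the version:

[nagios@weblogic_server_01 scripts]$ curl -s -k -u monitoring_user:monitoring_password \
  -H "Accept: application/json" \
  "https://weblogic_server_01:8443/management/weblogic/latest/domainRuntime/serverRuntimes?links=none&fields=name,state"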

 

This article Documentum story – Monitoring of WebLogic Servers appeared first on Blog dbi services.

Documentum story – Documentum installers fail with various errors


Some months ago, when installing/removing/upgrading several Documentum components, we ended up facing a strange issue (yes I know, another one!). We were able to see these specific errors during the installation or removal of a Docbase, during the installation of a patch for the Content Server, the installation of the Thumbnail Server, aso… The errors we faced changed from one installer to another but in the end, all of them were linked to the same root cause. The only error that wasn't completely useless was the one faced during the installation of a new docbase: “Content is not allowed in trailing section”. I know this might not be really meaningful for everybody but this kind of error usually appears when an XML file isn't formatted properly: some content isn't allowed at this location in the file…

 

The strange thing is that these installers were working fine a few days before, so what changed in the meantime exactly? After some research and analysis, I finally found the culprit! One thing that had been added in these few days was D2, which had been installed a few hours before the first error. Now what can be the link between D2 and these errors when running some installers? The first thing to do when there is an issue with D2 on the Content Server is to check the Java Method Server. The first time I saw this error, it was during the installation of a new docbase. As said before, I checked the logs of the Java Method Server and I found the following WARNING which confirmed what I suspected:

2015-10-24 09:39:59,948 UTC WARNING [javax.enterprise.resource.webcontainer.jsf.config] (MSC service thread 1-3) JSF1078: Unable to process deployment descriptor for context ''{0}''.: org.xml.sax.SAXParseException; lineNumber: 40; columnNumber: 1; Content is not allowed in trailing section.
        at org.apache.xerces.util.ErrorHandlerWrapper.createSAXParseException(ErrorHandlerWrapper.java:196) [xercesImpl-2.9.1-jbossas-1.jar:]
        at org.apache.xerces.util.ErrorHandlerWrapper.fatalError(ErrorHandlerWrapper.java:175) [xercesImpl-2.9.1-jbossas-1.jar:]
        at org.apache.xerces.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:394) [xercesImpl-2.9.1-jbossas-1.jar:]
        at org.apache.xerces.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:322) [xercesImpl-2.9.1-jbossas-1.jar:]
        at org.apache.xerces.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:281) [xercesImpl-2.9.1-jbossas-1.jar:]
        at org.apache.xerces.impl.XMLScanner.reportFatalError(XMLScanner.java:1459) [xercesImpl-2.9.1-jbossas-1.jar:]
        at org.apache.xerces.impl.XMLDocumentScannerImpl$TrailingMiscDispatcher.dispatch(XMLDocumentScannerImpl.java:1302) [xercesImpl-2.9.1-jbossas-1.jar:]
        at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:324) [xercesImpl-2.9.1-jbossas-1.jar:]
        at org.apache.xerces.parsers.XML11Configuration.parse(XML11Configuration.java:845) [xercesImpl-2.9.1-jbossas-1.jar:]
        at org.apache.xerces.parsers.XML11Configuration.parse(XML11Configuration.java:768) [xercesImpl-2.9.1-jbossas-1.jar:]
        at org.apache.xerces.parsers.XMLParser.parse(XMLParser.java:108) [xercesImpl-2.9.1-jbossas-1.jar:]
        at org.apache.xerces.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1196) [xercesImpl-2.9.1-jbossas-1.jar:]
        at org.apache.xerces.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:555) [xercesImpl-2.9.1-jbossas-1.jar:]
        at org.apache.xerces.jaxp.SAXParserImpl.parse(SAXParserImpl.java:289) [xercesImpl-2.9.1-jbossas-1.jar:]
        at javax.xml.parsers.SAXParser.parse(SAXParser.java:195) [rt.jar:1.7.0_72]
        at com.sun.faces.config.ConfigureListener$WebXmlProcessor.scanForFacesServlet(ConfigureListener.java:815) [jsf-impl-2.1.7-jbossorg-2.jar:]
        at com.sun.faces.config.ConfigureListener$WebXmlProcessor.<init>(ConfigureListener.java:768) [jsf-impl-2.1.7-jbossorg-2.jar:]
        at com.sun.faces.config.ConfigureListener.contextInitialized(ConfigureListener.java:178) [jsf-impl-2.1.7-jbossorg-2.jar:]
        at org.apache.catalina.core.StandardContext.contextListenerStart(StandardContext.java:3392) [jbossweb-7.0.13.Final.jar:]
        at org.apache.catalina.core.StandardContext.start(StandardContext.java:3850) [jbossweb-7.0.13.Final.jar:]
        at org.jboss.as.web.deployment.WebDeploymentService.start(WebDeploymentService.java:90) [jboss-as-web-7.1.1.Final.jar:7.1.1.Final]
        at org.jboss.msc.service.ServiceControllerImpl$StartTask.startService(ServiceControllerImpl.java:1811)
        at org.jboss.msc.service.ServiceControllerImpl$StartTask.run(ServiceControllerImpl.java:1746)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [rt.jar:1.7.0_72]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [rt.jar:1.7.0_72]
        at java.lang.Thread.run(Thread.java:745) [rt.jar:1.7.0_72]

 

So the error “Content is not allowed in trailing section” comes from the JMS which isn't able to properly read the first character of line 40 of an XML file, a “deployment descriptor”. So which file is that? That's where the fun begins! There are several deployment descriptors in JBoss like web.xml, jboss-app.xml, jboss-deployment-structure.xml, jboss-web.xml, aso…

 

The D2 installer is updating some configuration files like the server.ini. This is a text file, pretty simple to update, and indeed the file was properly formatted so no issue on this side. Apart from this file, the D2 installer is mainly updating XML files like the following ones:
  • $DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/deployments/ServerApps.ear/META-INF/jboss-deployment-structure.xml
  • $DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/deployments/ServerApps.ear/DmMethods.war/WEB-INF/web.xml
  • $DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/deployments/bpm.ear/META-INF/jboss-deployment-structure.xml
  • $DOCUMENTUM_SHARED/jboss7.1.1/modules/emc/d2/lockbox/main/module.xml
  • aso…

 

At this point, it was pretty simple to figure out the issue: I just checked all these files until I found the wrongly updated/corrupted XML file. And the winner was… the web.xml file of DmMethods inside ServerApps. The D2 installer usually updates/reads this file but, in the process of doing so, it also corrupts it… It is not a big corruption but it is still enough to prevent some installers from working properly and to display the error shown above in the Java Method Server. Basically, whenever you have some parsing errors, I would suggest taking a look at the web.xml files across the JMS. In our case, the D2 installer added the word “ap” at the end of this file. As you know, an XML file has to be well-formed to be readable and “ap” isn't a correct XML ending tag:

[dmadmin@content_server_01 ~]$ cat $DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/deployments/ServerApps.ear/DmMethods.war/WEB-INF/web.xml
<?xml version="1.0" encoding="UTF-8"?>
<web-app>
    <display-name>Documentum Method Invocation Servlet</display-name>
    <description>This servlet is for Java method invocation using the DO_METHOD apply call.</description>
    <servlet>
        <servlet-name>DoMethod</servlet-name>
        <description>Documentum Method Invocation Servlet</description>
        <servlet-class>com.documentum.mthdservlet.DoMethod</servlet-class>
        <init-param>
            <param-name>trace</param-name>
            <param-value>f</param-value>
        </init-param>
        <init-param>
            <param-name>docbase_install_owner_name</param-name>
            <param-value>dmadmin</param-value>
        </init-param>
        <init-param>
            <param-name>methodlocation-1</param-name>
            <param-value>$DOCUMENTUM/dba/java_methods</param-value>
        </init-param>
        <init-param>
            <param-name>docbase-GLOBAL_REGISTRY</param-name>
            <param-value>GLOBAL_REGISTRY</param-value>
        </init-param>
        <init-param>
            <param-name>docbase-DOCBASE1</param-name>
            <param-value>DOCBASE1</param-value>
        </init-param>
        <load-on-startup>1</load-on-startup>
    </servlet>
    <servlet-mapping>
        <servlet-name>DoMethod</servlet-name>
        <url-pattern>/servlet/DoMethod</url-pattern>
    </servlet-mapping>
</web-app>
ap
[dmadmin@content_server_01 ~]$

 

So to correct this issue, you just have to remove the word “ap” from the end of this file, restart the JMS and finally re-run the installer: the issue should be gone. That's pretty simple but still annoying that installers provided by EMC can cause such trouble on their own products.
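
When you don't know which deployment descriptor got corrupted, a quick way to find it is to validate all the XML files of the JMS with xmllint (part of libxml2, which may need to be installed on your box); only the files that are not well-formed are reported:

[dmadmin@content_server_01 ~]$ find $DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/deployments -name "*.xml" | while read f; do xmllint --noout "$f" > /dev/null 2>&1 || echo "Not well-formed: $f"; done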

 

The errors mentioned above are related to these XML files being wrongly updated by the D2 installer, but that's actually not the only installer that wrongly updates XML files. As far as I remember, the BPM installer and the Thumbnail Server installer can also produce the exact same issue, and the reason behind that is probably that the XML files of the Java Method Server on Linux boxes have a wrong FileFormat… We faced this issue with all versions that we installed so far on our different environments: CS 7.2 P02, P05, P16… Each and every time we install a new Documentum Content Server, all XML files of the JMS are using the DOS FileFormat and this prevents the D2/Thumbnail/BPM installers from doing their job.

 

As a sub-note, I have also seen some issues with the file “jboss-deployment-structure.xml”. Just like the “web.xml” above, this one is also present for all applications deployed under the Java Method Server. Some installers will try to update this file (including D2, in order to configure the Lockbox in it) but again the same issue happens, mostly because of the wrong FileFormat: I have already seen the whole content of this file simply being removed by a Documentum installer… So before doing anything, I would suggest taking a backup of the JMS as soon as it is installed and running and before installing all additional components like D2, bpm, Thumbnail Server, aso… On Linux, it is pretty easy to see and change the FileFormat of a file. Just open it using “vi” for example and then type “:set ff?”. This will display the current FileFormat and you can then change it using “:set ff=unix”, if needed.
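
If you prefer to check and fix the FileFormat without opening each file in vi, the commands below do the same thing from the shell (dos2unix may not be installed everywhere, in which case the sed alternative works just as well); the web.xml used above serves as example:

[dmadmin@content_server_01 ~]$ cd $DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/deployments/ServerApps.ear/DmMethods.war/WEB-INF
[dmadmin@content_server_01 WEB-INF]$ file web.xml      # "with CRLF line terminators" means DOS FileFormat
[dmadmin@content_server_01 WEB-INF]$ dos2unix web.xml   # or:  sed -i 's/\r$//' web.xml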

 

I don't remember seeing this kind of behavior before CS 7.2, so maybe it is just linked to this specific version… If you have already seen such a thing with a previous version, don't hesitate to share!

 

This article Documentum story – Documentum installers fail with various errors appeared first on Blog dbi services.

CDB resource plan: shares and utilization_limit


I'm preparing some slides about PDB security (lockdown) and isolation (resources) for DOAG and as usual I have more info to share than what can fit in 45 minutes. In order to avoid the frustration of removing slides, I usually share them in blog posts. Here are the basic concepts of CDB resource plans in multitenant: shares and utilization_limit.

The CDB resource plan is mainly about CPU. It also governs the degree of parallelism for parallel query and the I/O when on Exadata, but the main resource is the CPU: sessions that are not allowed to use more CPU will wait on ‘resmgr: cpu quantum’. In a cloud environment where you provision a PDB, like in the new Exadata Express Cloud Service, you need to ensure that one PDB does not take all the CDB resources, but you also have to ensure that resources are fairly shared.

utilization_limit

Let's start with utilization_limit. It does not depend on the number of PDBs: it is defined as a percentage of the CDB resources. Here I have a CDB with two PDBs and I'll run a workload on one PDB only: 8 sessions, all CPU-bound, on PDB1.

I've defined a CDB resource plan that sets the utilization_limit to 50% for PDB1 (the UTIL column below):

CURRENT_TIMESTAMP PLAN PLUGGABLE_DATABASE DIRECTIVE_TYPE SHARES UTIL
------------------------------------ ------------ ------------------------- ------------------------------ ---------- ----------
14-OCT-16 08.46.53.077947 PM +00:00 MY_CDB_PLAN ORA$AUTOTASK AUTOTASK 90
14-OCT-16 08.46.53.077947 PM +00:00 MY_CDB_PLAN ORA$DEFAULT_PDB_DIRECTIVE DEFAULT_DIRECTIVE 1 100
14-OCT-16 08.46.53.077947 PM +00:00 MY_CDB_PLAN PDB1 PDB 1 50
14-OCT-16 08.46.53.077947 PM +00:00 MY_CDB_PLAN PDB2 PDB 1 100
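
For reference, a plan like the one listed above could be created with DBMS_RESOURCE_MANAGER from CDB$ROOT. The following is only a sketch reproducing the directives shown (default values are kept for everything else):

sqlplus / as sysdba <<'SQL'
-- run from the root container (CDB$ROOT)
exec dbms_resource_manager.clear_pending_area
exec dbms_resource_manager.create_pending_area
exec dbms_resource_manager.create_cdb_plan(plan=>'MY_CDB_PLAN', comment=>'shares + utilization_limit demo')
exec dbms_resource_manager.create_cdb_plan_directive(plan=>'MY_CDB_PLAN', pluggable_database=>'PDB1', shares=>1, utilization_limit=>50)
exec dbms_resource_manager.create_cdb_plan_directive(plan=>'MY_CDB_PLAN', pluggable_database=>'PDB2', shares=>1, utilization_limit=>100)
exec dbms_resource_manager.validate_pending_area
exec dbms_resource_manager.submit_pending_area
alter system set resource_manager_plan='MY_CDB_PLAN';
SQL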

This is an upper limit. I have 8 CPUs, so PDB1 will be allowed to have only 4 sessions on CPU at a time. Here is the result:

CDB_RESOURCE_PLAN_1_PDB_1_SHARE_50_LIMIT

What you see here is that when more than the allowed percentage has been used, the sessions are scheduled out of CPU and wait on ‘resmgr: cpu quantum’. And the interesting thing is that they all seem to be stopped at the same time:

CDB_RESOURCE_PLAN_1_PDB_1_SHARE_50_LIMIT-2

This makes sense because the suspended sessions may hold resources that are used by others. However, this pattern does not reproduce with every workload. More work and future blog posts are probably required on that topic.

Well, the goal here is to explain that utilization_limit defines a maximum resource usage. Even if there is no other activity, you will not be able to use all CDB resources if you have a utilization_limit lower than 100%.

Shares

Shares are there for the opposite reason: to guarantee a minimum of resources to a PDB.
However, the unit is not the same. It cannot be the same: you cannot guarantee a percentage of CDB resources to one PDB because you don't know how many other PDBs you have. Let's say you have 4 PDBs and you want them to be equal. You want to define a minimum of 25% for each. But then, what happens when a new PDB is created? You need to change all the 25% to 20%. To avoid that, the minimum resources are allocated by shares. You give shares to each PDB and they will get a percentage of resources calculated from their shares divided by the total number of shares.

The result is that when there are not enough resources in the CDB to run all the sessions, the PDBs that use more than their share will wait. Here is an example where PDB1 has 2 shares and PDB2 has 1 share, which means that PDB1 will get at least 66% of the resources and PDB2 at least 33%:

CURRENT_TIMESTAMP PLAN PLUGGABLE_DATABASE DIRECTIVE_TYPE SHARES UTIL
------------------------------------ ------------ ------------------------- ------------------------------ ---------- ----------
14-OCT-16 09.14.59.302771 PM +00:00 MY_CDB_PLAN ORA$AUTOTASK AUTOTASK 90
14-OCT-16 09.14.59.302771 PM +00:00 MY_CDB_PLAN ORA$DEFAULT_PDB_DIRECTIVE DEFAULT_DIRECTIVE 1 100
14-OCT-16 09.14.59.302771 PM +00:00 MY_CDB_PLAN PDB1 PDB 2 100
14-OCT-16 09.14.59.302771 PM +00:00 MY_CDB_PLAN PDB2 PDB 1 100

Here is the ASH on each PDB when I run 8 CPU-bound sessions on each. The system is saturated because I have only 8 CPUs.

CDB_RESOURCE_PLAN_2_PDB_2_SHARE_100_LIMIT-PDB1

CDB_RESOURCE_PLAN_2_PDB_1_SHARE_100_LIMIT-PDB2

Because of the shares difference (2 shares for PDB1 and 1 share for PDB2), PDB1 has been able to use more CPU than PDB2 when the system was saturated:
PDB1 was 72% on CPU and 22% waiting, PDB2 was 50% on CPU and 50% waiting.

CDB_RESOURCE_PLAN_2_PDB_1_SHARE_100_LIMIT-SUM
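
For those who want to reproduce these figures, a rough per-PDB breakdown of CPU versus Resource Manager waits can be obtained from ASH. This is just a sketch of the kind of query behind these charts (ASH requires the Diagnostics Pack license):

sqlplus / as sysdba <<'SQL'
select con_id, session_state, event, count(*) samples
from   v$active_session_history
where  sample_time > sysdate - interval '5' minute
group  by con_id, session_state, event
order  by con_id, samples desc;
SQL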

In order to illustrate what changes when the system is saturated, I’ve run 16 sessions on PDB1 and then, after 60 seconds, 4 sessions on PDB2.

Here is the activity of PDB1:

CDB_RESOURCE_PLAN_SHARE_PDB1

and PDB2:

CDB_RESOURCE_PLAN_SHARE_PDB2

At 22:14 PDB1 was able to use all available CPU because there is no utilization_limit and no other PDB has activity. The system is saturated, but from PDB1 only.
At 22:15 PDB2 also has activity, so the resource manager must limit PDB1 in order to give resources to PDB2 proportionally to its shares. PDB1 with 2 shares is guaranteed to be able to use 2/3 of the CPU. PDB2 with 1 share is guaranteed to be able to use 1/3 of it.
At 22:16 the PDB1 activity has completed, so PDB2 can use more resources. The 4 sessions are fewer than the available CPUs, so the system is not saturated and there is no wait.

What to remember?

Shares are there to guarantee a minimum of resource utilization when the system is saturated.
utilization_limit is there to set a maximum of resource utilization, whether the system is saturated or not.

 

This article CDB resource plan: shares and utilization_limit appeared first on Blog dbi services.

Documentum story – Change the location of the Xhive Database for the DSearch (xPlore)


When using xPlore with Documentum, you will need to set up a DSearch which will be used to perform the searches, and this DSearch uses an Xhive Database in the background. This is a native XML database that persists XML DOMs and provides access to them using XPath and XQuery. In this blog, I will share the steps needed to change the location of the Xhive Database used by the DSearch. You usually don't want to move this XML database every day but it might be useful as a one-time action. In this customer case, one of the DSearches in a Sandbox/Dev environment had been installed using a wrong path for the Xhive Database (not following our installation conventions) and therefore we had to correct that, just to keep the alignment between all environments and to avoid a complete uninstall/reinstall of the IndexAgents + DSearch.

 

In the steps below, I will suppose that xPlore has been installed under “/app/xPlore” and that the Xhive Database has been created under “/app/xPlore/data”. This is the default value and then when installing an IndexAgent, it will create, under the data folder, a sub-folder with a name equal to the DSearch Domain’s name (usually the name of the docbase/repository). In this blog I will show you how to move this Xhive Database to “/app/xPlore/test-data” without having to reinstall everything. This means that the Xhive Database will NOT be deleted/recreated from scratch (this is also possible) and therefore you will NOT have to perform a full reindex which would have taken a looong time.

 

So let’s start with stopping all components first:

[xplore@xplore_server_01 ~]$ sh -c "/app/xPlore/jboss7.1.1/server/stopIndexagent.sh"
[xplore@xplore_server_01 ~]$ sh -c "/app/xPlore/jboss7.1.1/server/stopPrimaryDsearch.sh"

 

Once this is done, we need to backup the data and config files, just in case…

[xplore@xplore_server_01 ~]$ current_date=$(date "+%Y%m%d")
[xplore@xplore_server_01 ~]$ cp -R /app/xPlore/data/ /app/xPlore/data_bck_$current_date
[xplore@xplore_server_01 ~]$ cp -R /app/xPlore/config/ /app/xPlore/config_bck_$current_date
[xplore@xplore_server_01 ~]$ mv /app/xPlore/data/ /app/xPlore/test-data/

 

Ok now everything in the background is prepared and we can start the actual steps to move the Xhive Database. The first step is to change the data location in the files stored in the config folder. There are actually two files that need to be updated: indexserverconfig.xml and XhiveDatabase.bootstrap. In the first file, you need to update the “storage-location” path that defines where the data are kept, and in the second file you need to update all paths pointing to the Database files. Here are some simple commands to replace the old path with the new path and check that it has been done properly:

[xplore@xplore_server_01 ~]$ sed -i "s,/app/xPlore/data,/app/xPlore/test-data," /app/xPlore/config/indexserverconfig.xml
[xplore@xplore_server_01 ~]$ sed -i "s,/app/xPlore/data,/app/xPlore/test-data," /app/xPlore/config/XhiveDatabase.bootstrap
[xplore@xplore_server_01 ~]$ 
[xplore@xplore_server_01 ~]$ grep -A2 "<storage-locations>" /app/xPlore/config/indexserverconfig.xml
    <storage-locations>
        <storage-location path="/app/xPlore/test-data" quota_in_MB="10" status="not_full" name="default"/>
    </storage-locations>
[xplore@xplore_server_01 ~]$ 
[xplore@xplore_server_01 ~]$ grep "/app/xPlore/test-data" /app/xPlore/config/XhiveDatabase.bootstrap | grep 'id="[0-4]"'
        <file path="/app/xPlore/test-data/xhivedb-default-0.XhiveDatabase.DB" id="0"/>
        <file path="/app/xPlore/test-data/SystemData/xhivedb-SystemData-0.XhiveDatabase.DB" id="2"/>
        <file path="/app/xPlore/test-data/SystemData/MetricsDB/xhivedb-SystemData#MetricsDB-0.XhiveDatabase.DB" id="3"/>
        <file path="/app/xPlore/test-data/SystemData/MetricsDB/PrimaryDsearch/xhivedb-SystemData#MetricsDB#PrimaryDsearch-0.XhiveDatabase.DB" id="4"/>
        <file path="/app/xPlore/test-data/xhivedb-temporary-0.XhiveDatabase.DB" id="1"/>

 

The next step is to announce the new location of the data folder to the DSearch so it can create future Xhive Databases at the right location and this is done inside the file indexserver-bootstrap.properties. After the update, this file should look like the following:

[xplore@xplore_server_01 ~]$ cat /app/xPlore/jboss7.1.1/server/DctmServer_PrimaryDsearch/deployments/dsearch.war/WEB-INF/classes/indexserver-bootstrap.properties
# (c) 1994-2009, EMC Corporation. All Rights Reserved.
#Wed May 20 10:40:49 PDT 2009
#Note: Do not change the values of the properties in this file except xhive-pagesize and force-restart-xdb.
node-name=PrimaryDsearch
configuration-service-class=com.emc.documentum.core.fulltext.indexserver.core.config.impl.xmlfile.IndexServerConfig
indexserver.config.file=/app/xPlore/config/indexserverconfig.xml
xhive-database-name=xhivedb
superuser-name=superuser
superuser-password=****************************************************
adminuser-name=Administrator
adminuser-password=****************************************************
xhive-bootstrapfile-name=/app/xPlore/config/XhiveDatabase.bootstrap
xhive-connection-string=xhive://xplore_server_01:9330
xhive-pagesize=4096
# xhive-cache-pages=40960
isPrimary = true
licensekey=**************************************************************
xhive-data-directory=/app/xPlore/test-data
xhive-log-directory=

 

In this file:

  • indexserver.config.file => defines the location of the indexserverconfig.xml file that must be used to recreate the DSearch Xhive Database.
  • xhive-bootstrapfile-name => defines the location and name of the Xhive bootstrap file that will be generated during bootstrap and will be used to create the empty DSearch Xhive Database.
  • xhive-data-directory => defines the path of the data folder that will be used by the Xhive bootstrap file. This will therefore be the future location of the DSearch Xhive Database.

 

As you probably understood, to change the data folder, you just have to adjust the value of the parameter “xhive-data-directory” to point to the new location: /app/xPlore/test-data.

 

When this is done, the third step is to change the Lucene temp path:

[xplore@xplore_server_01 ~]$ cat /app/xPlore/jboss7.1.1/server/DctmServer_PrimaryDsearch/deployments/dsearch.war/WEB-INF/classes/xdb.properties
xdb.lucene.temp.path=/app/xPlore/test-data/temp

 

In this file, xdb.lucene.temp.path defines the path used for temporary, uncommitted indexes. It will therefore only be used for temporary indexes, but it is still a good practice to change this location since it also relates to the DSearch data and it helps to keep everything consistent.
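
Before restarting anything, a quick recursive grep is a simple way to make sure that no configuration file still references the old location (the paths below are the ones used throughout this example; the command should return nothing):

[xplore@xplore_server_01 ~]$ grep -r "/app/xPlore/data" /app/xPlore/config /app/xPlore/jboss7.1.1/server/DctmServer_PrimaryDsearch/deployments/dsearch.war/WEB-INF/classes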

 

Then the next step is to clean the cache and restart the DSearch. You can use your custom start/stop script if you have one or use something like this:

[xplore@xplore_server_01 ~]$ rm -rf /app/xPlore/jboss7.1.1/server/DctmServer_*/tmp/work/*
[xplore@xplore_server_01 ~]$ sh -c "cd /app/xPlore/jboss7.1.1/server;nohup ./startPrimaryDsearch.sh & sleep 5;mv nohup.out nohup-PrimaryDsearch.out"

 

Once done, just verify in the log file generated by the start command (for me: /app/xPlore/jboss7.1.1/server/nohup-PrimaryDsearch.out) that the DSearch has been started successfully. If that’s true, then you can also start the IndexAgent:

[xplore@xplore_server_01 ~]$ sh -c "cd /app/xPlore/jboss7.1.1/server;nohup ./startIndexagent.sh & sleep 5;mv nohup.out nohup-Indexagent.out"

 

And here we are, the Xhive Database is now located under the “test-data” folder!

 

 

Additional note: As said at the beginning of this blog, it is also possible to recreate an empty Xhive Database and change its location at the same time. Recreating an empty DB will result in the same thing as the steps above BUT you will have to perform a full reindexing, which will take a lot of time if this isn't a new installation (the more documents are indexed, the more time it will take)… To perform this operation, the steps are mostly the same and are summarized below:

  1. Backup the data and config folders
  2. Remove all files inside the config folder except the indexserverconfig.xml
  3. Create a new (empty) data folder with a different name like “test-data” or “new-data” or…
  4. Update the file indexserver-bootstrap.properties with the reference to the new path
  5. Update the file xdb.properties with the reference to the new path
  6. Clean the cache and restart the DSearch+IndexAgents

Basically, the steps are exactly the same except that you don’t need to update the files indexserverconfig.xml and XhiveDatabase.bootstrap. The first one is normally updated by the DSearch automatically and the second file will be recreated from scratch using the right data path thanks to the update of the file indexserver-bootstrap.properties.

 

Have fun :)

 

This article Documentum story – Change the location of the Xhive Database for the DSearch (xPlore) appeared first on Blog dbi services.

Using JMeter to run load test on a ADF application protected by Oracle Access Manager Single Sign On


Introduction

In one of my missions, I was requested to run performance and load tests on an ADF application running in an Oracle Fusion Middleware environment protected by Oracle Access Manager. For this task we decided to use Apache JMeter because it provides the control needed on the tests and uses multiple threads to emulate multiple users. It can also be used for distributed testing, which uses multiple systems to run a stress test. Additionally, the GUI provides an easy way to manage the load test scenarios, which can be easily recorded using the HTTP(s) Test Script Recorder.

Prepare a JMeter test plan

A first start is to review the following Blog: My Shot on Using JMeter to Load Test Oracle ADF Applications

The blog above explains how to record and use a test plan in JMeter.
It provides a SimplifiedADFJMeterPlan.jmx  JMeter test plan that can be used as a base for the JMeter test plan creation.
But this ADF starter test plan has to be reviewed for the jsessionId and afrLoop extractors, as the regular expressions associated with them might need to be adapted: they can change depending on the version of the ADF software.

In this environment, Oracle Fusion Middleware ADF 11.1.2.4, WebLogic Server 10.3.6 and Oracle Access Manager 11.2.3 were used.
The regular expressions for afrLoop and jsessionid needed to be updated as shown below:

reference name   regular expression
afrLoop          _afrLoop\', \';([0-9]{13,16})
jsessionId       ;jsessionid=([-_0-9A-Za-z!]{62,63})

Coming to the Single Sign On layer, it appears that the Oracle Access Manager compatible login screen requires three parameters:

  • username
  • password
  • request_id

First, the username and password values will be provided by the recording of the test scenario. To run the same scenario with multiple users, a CSV file is used to store the test users and passwords. This will be detailed later in this blog.
The request_id is provided by the Oracle Access Manager Single Sign On layer and needs to be fetched and re-injected into the authentication URL.
To resolve this, a new variable needs to be created and the regular expression below is used.

reference name   regular expression
requestId        name=\'request_id\' value=\'([&#;0-9]{18,25})\';

Once the test plan scenario is recorded, look for the OAM standard “/oam/server/auth_cred_submit” URL and change the request_id parameter to use the defined requestId variable.

OAM Authentication URL
name: request_id   value: ${requestId}

After those changes, the new JMeter test plan can be run.

Steps to run the test plan with multiple users

In JMeter,
Right click on the “Thread Group” on the tree.
Select “Add” – “Config Element” – “CSV Data Set Config”.
Add CSV config in JMeter

Create a CSV file which contains USERNAME,PASSWORD and save it in a folder on your JMeter server. Make sure the users exist in OAM/OID:

ahunold,welcome1
jcooper,welcome1
monty,welcome1
king,welcome1
scott,welcome1

Adapt the path in the “CSV Data Set Config” and define the variable names (USERNAME and PASSWORD) in “Variable Names (comma-delimited)”.
JMeter ADF OAM IMG3
Look for the URL that submits the authentication – /oam/server/auth_cred_submit – and click on it. In the right frame, replace the username and password captured during the recording with ${USERNAME} and ${PASSWORD} respectively, as shown below:
JMeter ADF OAM IMG4
Finally, you can adapt the Thread Group of your test plan to the number of users (Number of Threads) and iterations (Loop Count) you want to run, and execute it. The Ramp-Up Period (in seconds) is the time JMeter takes to start all the threads.
JMeter test plan IMG5
The test plan can be executed now and results visualised in tree, graph or table views.
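
For the real load test runs, it is usually better to avoid the GUI and run JMeter in non-GUI mode. The command below is a sketch with placeholder file names; the -e/-o options that generate the HTML report require JMeter 3.0 or later:

[jmeter@jmeter_server_01 ~]$ ./bin/jmeter -n -t ADF_OAM_LoadTest.jmx -l results.jtl -e -o report/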

 

 

This article Using JMeter to run load test on a ADF application protected by Oracle Access Manager Single Sign On appeared first on Blog dbi services.
