Aug 4, 2022

Observation When Adding VMs to Your VM Cluster on ExaCC with Data Guard Databases

An observation when adding new VMs to your VM cluster on ExaCC when your databases are part of a Data Guard configuration (primary or standby).

When I added the new VMs to my VM cluster on ExaCC, I noticed that after the process finished, the new DB instances had not been created on the new nodes. This happens because that is how it works when the databases in the VM cluster being extended are part of a Data Guard configuration (primary or standby), as the following note explains: https://docs.oracle.com/en/engineered-systems/exadata-cloud-at-customer/ecccm/ecc-manage-vm-clusters.html#GUID-3BA71388-B5B3-4EBD-9450-1A5908AF83B8

"
  • Adding a VM to a cluster will not automatically extend any database which is part of a Data Guard configuration (either primary or standby) to the newly provisioned VM.
  • For databases not part of a Data Guard configuration, only databases that are running on all VMs in the existing cluster will be added to the newly provisioned VM. Any database running on a subset of VMs will not extend automatically to run on the newly added VM.
"

As you can see, that is the expected behaviour. So when I saw that the new instances had not been created on my new nodes (checking with srvctl config database -d xxxx), I had to create them myself with these commands:

srvctl add instance -db BDPRD -instance BDPRD03   -node exaccprd03

srvctl add instance -db BDPRD -instance BDPRD04   -node exaccprd04


Note: in my situation the new VMs added are called exaccprd03 and exaccprd04.
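
After adding the instances, a quick verification sketch (assuming BDPRD is the db_unique_name used above) to confirm that the database configuration and instance status now include the new nodes:

srvctl config database -d BDPRD

srvctl status database -d BDPRD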


PS: don't forget to add or reconfigure your existing services so they run on the new instances you added (note that the -r option takes the list of preferred instance names). Example:

srvctl modify service -d BDPRD -s BDPRDSRV -r BDPRD01,BDPRD02,BDPRD03,BDPRD04 -q TRUE -e SESSION -j LONG -B SERVICE_TIME -m BASIC -z 300 -w 1

srvctl stop service -db BDPRD -service BDPRDSRV

srvctl start service -db BDPRD -service BDPRDSRV
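
Afterwards, a quick check (again with the BDPRD/BDPRDSRV names from my example) to confirm where the service is configured and running:

srvctl config service -d BDPRD -s BDPRDSRV

srvctl status service -d BDPRD -s BDPRDSRV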


That is all. Thanks


Apr 26, 2022

Real Application Testing (RAT) - Capture & Replay (Part 01)


First of all, I want to apologize because this article was written very quickly, so there may be errors in wording, semantics, and vocabulary, and my English is poor. Again, my apologies; if you have any doubt or spot an error, please feel free to contact me by email (felipe@felipedonoso.cl, felipe.doonoso@oracle.com) or in the comments on this post.

These are the steps to implement a very basic RAT example. It was executed between two 12.2.0.1 environments (source and target). First you run the capture process on the source database to generate the transaction files (the RAT *.rec files), then you copy those files to the target and replay them against the target database using the replay process.

* Links to the documentation: 
* CAPTURE PROCESS: https://docs.oracle.com/database/121/ARPLS/d_workload_capture.htm#ARPLS69044
* REPLAY PROCESS: https://docs.oracle.com/database/121/ARPLS/d_workload_replay.htm#ARPLS208


* DocID for RAT on PDBs (multitenant): 
- How to Setup and Run a Database Testing Replay in an Oracle Multitenant Environment (Real Application Testing - RAT) (Doc ID 1937920.1)

Capture Process 

1.- Preparing the capture

We need to create a directory for capture files:

NOTE: It's not possible to run a RAT capture directly inside a PDB
(this applies up to and including 18c; 19c adds support for capture and replay in a PDB, see the Doc ID referenced below)


mkdir /u01/RAT_DIR_PDB1_20190821

alter session set container=cdb$root ;
CREATE OR REPLACE DIRECTORY RAT_DIR_PDB1_20190821 AS '/u01/RAT_DIR_PDB1_20190821';
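
An optional check that the directory object was created and points to the right path:

select directory_name, directory_path
from   dba_directories
where  directory_name = 'RAT_DIR_PDB1_20190821';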
    

2.- How can I calculate the space required for the database capture?

To estimate the space needed by the whole capture we can do the following:

(from note: Real Application Testing: Database Capture FAQ (Doc ID 1920275.1))

Space required for capture can be approximated using:

2 * Bytes received via SQL*Net from client (from AWR).

The AWR report tracks the total in-coming network traffic under “bytes received via SQL*Net from client". By using an AWR report covering the period of a similar workload that ran before, you can get a good estimate for the on-going workload capture. For example: suppose we are planning a workload capture from 9am to 11am this Friday, the AWR report for last Friday of the same duration might give us a good estimate.

For example, if the AWR report shows 367,991,283 bytes received via SQL*Net from client, the space required can be estimated as 2*367,991,283 = 735,982,566 bytes.
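
A minimal sketch of how to pull that statistic from AWR with SQL. It assumes a single-instance database with no restart in between, and that snapshots 100 and 101 bracket the reference workload window (adjust the snap ids to your own):

-- delta of the cumulative statistic between the two snapshots, doubled
select 2 * (max(value) - min(value)) as estimated_capture_bytes
from   dba_hist_sysstat
where  stat_name = 'bytes received via SQL*Net from client'
and    snap_id in (100, 101);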


3.- Define a filter for the capture

Add the following filter for a PDB-only capture, putting the PDB's name in the fvalue parameter.
Note: to delete a filter, use the DELETE_FILTER procedure.

For more information about the use of filters, review the following Support note:
* [ How to Create Filters for Either a Capture or Replay with Real Application Testing (RAT) (Doc ID 2285287.1) ]

* Prior to 19c you can only capture at the whole-CDB level:
19c New Feature Workload Capture and Replay in a PDB (Doc ID 2644357.1)
But if you need to capture only one PDB prior to 19c, you can use this filter:



alter session set container=cdb$root ;
BEGIN
  DBMS_WORKLOAD_CAPTURE.ADD_FILTER (
    fname      => 'RAT_FILTER_PDB1_20190821',
    fattribute => 'PDB_NAME',
    fvalue     => 'PDB1'
  );
END;
/
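
Optionally, a quick look at the data dictionary to confirm the filter was registered (the exact column layout of this view may vary by version):

select * from dba_workload_filters;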

This is the command to delete the filter in case you need it:
exec DBMS_WORKLOAD_CAPTURE.DELETE_FILTER('RAT_FILTER_PDB1_20190821');

This is important regarding filter deletion:
(from https://docs.oracle.com/database/121/ARPLS/d_workload_capture.htm#ARPLS69044)

The DELETE_FILTER Procedure only affects filters that have not been used by any previous capture.
Consequently, filters can be deleted only if they have been added using the ADD_FILTER Procedures
after any capture has been completed. Filters that have been added using ADD_FILTER  before a START_CAPTURE and FINISH_CAPTURE cannot be deleted anymore using this subprogram.


4.- Beginning the capture

We need to give the capture a name and pass the directory's name:


alter session set container=cdb$root ;
BEGIN
  DBMS_WORKLOAD_CAPTURE.start_capture (
    name           => 'RAT_CAPTURE_PDB1_20190821',
    dir            => 'RAT_DIR_PDB1_20190821',
    duration       => NULL,
    -- EXCLUDE means: capture nothing by default, except what the filters
    -- defined above include (i.e. only the PDB1 workload is captured)
    default_action => 'EXCLUDE'
  );
END;
/
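
A quick way to confirm that the capture is actually running (the STATUS column should show IN PROGRESS):

select id, name, status, to_char(start_time,'dd/mm/yy hh24:mi') start_time
from   dba_workload_captures
order  by id;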

5.- Do work or insert transactions on the source database

In this part we need to have our database under its normal operational load, or generate a lot of transactions ourselves, so that the capture process starts writing files (the *.rec files) into the directory created in the earlier steps. It is a very good idea to keep the capture running for at least 3-5 hours. In my example we left the capture process running for 8 hours.
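
If the source is just a lab without real users, here is a minimal, purely illustrative workload sketch. Everything in it is hypothetical: it assumes an application user connected through the PDB1 service (not SYS, so the activity is seen as normal client workload) and a scratch table called rat_demo_load:

-- connect as a normal application user through the PDB service, e.g.:
-- sqlplus appuser/xxxxx@//dbhost:1521/pdb1_service

create table rat_demo_load (id number, payload varchar2(100), created date);

BEGIN
  -- generate some simple DML so the capture has something to record
  FOR i IN 1 .. 100000 LOOP
    INSERT INTO rat_demo_load VALUES (i, dbms_random.string('a', 50), sysdate);
    IF MOD(i, 1000) = 0 THEN
      COMMIT;
    END IF;
  END LOOP;
  COMMIT;
END;
/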

6.- Stop capture

To finish the capture (BTW, the associated filter will not be available anymore afterwards) we need to execute:

alter session set container=cdb$root ;
exec DBMS_WORKLOAD_CAPTURE.FINISH_CAPTURE(timeout => 30, reason => 'STOP CAPTURE');


7.- Reviewing the results of our capture

With these queries we can review basic information about the captures (using the directory's name):

SELECT DBMS_WORKLOAD_CAPTURE.get_capture_info('RAT_DIR_PDB1_20190821')
FROM dual;

COLUMN name FORMAT a70
SELECT id capture_id, name,to_char(start_time,'dd/mm/yy hh24:mi') start_time FROM dba_workload_captures;



We can generate detailed information about the capture in HTML or TEXT format as follows:
set serveroutput on size unlimited
set pagesize 0 long 30000000 longchunksize 2000 linesize 600
--col output format a600
spool report_capture.html
-- capture_id is obtained from the query above on the dba_workload_captures view
-- (you need the id of the last capture; in my example capture_id = 101)
select dbms_workload_capture.report ( capture_id => 101,format => 'HTML') output from dual;
spool off
-- Now you can open report_capture.html with web browser

/*
* Capture report example (this one is in TEXT mode):

                                                               Avg Active
Event                               Event Class       % Event    Sessions
----------------------------------- --------------- ---------- ----------
CPU + Wait for CPU                  CPU                    4.17       0.01
db file sequential read             User I/O               4.17       0.01
          -------------------------------------------------------------

Top Service/Module Filtered Out  DB: CDB1  Snaps: 2-3
Service        Module                   % Activity Action             % Action
-------------- ------------------------ ---------- ------------------ ----------
pdb1           sqlplus@lab-db12-2-ol7 (       8.33 UNNAMED                  8.33
          -------------------------------------------------------------

Top SQL Filtered Out  DB: CDB1  Snaps: 2-3
SQL ID                  % Activity Event                          % Event
----------------------- ---------- ------------------------------ -------
ab8xx2hrk1rku                 8.33 CPU + Wait for CPU                 4.17
                                   ** SQL Text Not Available **
                                   db file sequential read            4.17
*/


NOTE: Also remember that the capture process generates an HTML report in the capture directory when the capture finishes (remember, in my example the directory name is RAT_DIR_PDB1_20190821):
/u01/RAT_DIR_PDB1_20190821/cap/wcr_cr.html


After finishing or stopping the capture we can see the generated files (*.rec). Example:
[oracle@oraclelab RAT_DIR_PDB1_20190821]$ ls -lptrR /u01/RAT_DIR_PDB1_20190821/capfiles/
/u01/RAT_DIR_PDB1_20190821/capfiles/:
....
/u01/RAT_DIR_PDB1_20190821/capfiles/inst1/aa:
total 4
-rw-r--r-- 1 oracle asmadmin 2423 Aug 6 13:59 wcr_0uxm9h000000u.rec


8.- Post-capture tasks

These steps are highly recommended on the source so that, later on, you can compare the behaviour on the target when you run the replay there. In summary, they create an AWR baseline and export the capture AWR data and a SQL Tuning Set (STS) for future use (these steps are not mandatory).


/*
* TASK POST CAPTURE
* Get AWR from Capture
*
*/

-- Define these values
-- replace every name with the current date, example: 'xxxx_20210818'
define sqlsettable = 'TABLE_STS_database_20210818'
define sqlset = 'STS_database_RAT_PROD_20210818'
define baseline = 'database_baseline_20210818'


-- Please create a new snapshot:
execute dbms_workload_repository.create_snapshot();


-- Execute this query to get the capture id
-- (the max id should correspond to the last finished capture)
column id new_value capture_id
select max(id) id --,name
from dba_workload_captures ;


-- Generating Workload Capture Reports Using APIs
DECLARE
  cap_id  NUMBER;
  cap_rpt CLOB;
BEGIN
  cap_id  := DBMS_WORKLOAD_CAPTURE.GET_CAPTURE_INFO(dir => 'RAT_DIR_PDB1_20190821');
  cap_rpt := DBMS_WORKLOAD_CAPTURE.REPORT(capture_id => cap_id,
                                          format     => DBMS_WORKLOAD_CAPTURE.TYPE_TEXT);
  -- cap_rpt now holds the TEXT report; print it with DBMS_OUTPUT or spool it if needed
END;
/

-- This will generate the AWR export dump (dmp) files into the capture DB directory
-- using the previous capture_id
BEGIN
DBMS_WORKLOAD_CAPTURE.EXPORT_AWR (capture_id => &capture_id);
END;
/


-- Create a baseline using the snapshot ids between the capture start and finish times.
column AWR_BEGIN_SNAP new_value begin_snap
column AWR_END_SNAP new_value end_snap
select AWR_BEGIN_SNAP,AWR_END_SNAP
from dba_workload_captures
where id = &capture_id;
EXECUTE DBMS_WORKLOAD_REPOSITORY.CREATE_BASELINE (start_snap_id => &begin_snap, end_snap_id => &end_snap ,baseline_name => '&baseline');
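
-- Optional check (using the &baseline variable defined above) that the
-- baseline was created with the expected snapshot range
select baseline_name, start_snap_id, end_snap_id
from   dba_hist_baseline
where  baseline_name = '&baseline';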


-- Create SQLSET TABLE
BEGIN
DBMS_SQLTUNE.create_stgtab_sqlset (table_name => '&sqlsettable', schema_name => 'SYSTEM', tablespace_name => 'SYSTEM' );
END;
/

-- Create SQLSET
BEGIN
DBMS_SQLTUNE.CREATE_SQLSET (
sqlset_name => '&sqlset',
description => '12c STS database');
END;
/


-- LOAD_SQLSET from the workload repository, using the begin and end snaps of the capture (the same range as the baseline)
declare
baseline_ref_cur DBMS_SQLTUNE.SQLSET_CURSOR;
begin
open baseline_ref_cur for select VALUE(p) from table(
DBMS_SQLTUNE.SELECT_WORKLOAD_REPOSITORY(&begin_snap,&end_snap, 'parsing_schema_name not in (''DBSNMP'',''SYS'')' ,NULL,NULL,NULL,NULL,NULL,NULL,'ALL')) p;
DBMS_SQLTUNE.LOAD_SQLSET('&sqlset', baseline_ref_cur);
end;
/
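
-- Optional check (using the &sqlset variable defined above) of how many
-- statements were loaded into the SQL set
select name, statement_count
from   dba_sqlset
where  name = '&sqlset';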



-- pack the sqlset into the sqlset_table
BEGIN
DBMS_SQLTUNE.pack_stgtab_sqlset(sqlset_name => '&sqlset',
sqlset_owner => 'SYS',
staging_table_name => '&sqlsettable',
staging_schema_owner => 'SYSTEM');
END;
/


-- Now generate a dump of the SQL set staging table and copy it to the Exadata target
-- (the table is the value of &sqlsettable; type the actual name here, since the OS shell does not expand SQL*Plus variables)
exp system/xxxxx tables='&sqlsettable' file=exp_sts_database_rat.dmp log=exp_sts_database_rat.log


-- After that, we need to copy this file to the target database environment.
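
-- For example, a simple copy over SSH (the target host name here is hypothetical)
scp exp_sts_database_rat.dmp oracle@target-db-host:/tmp/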



NEXT CHAPTER SOON.................