This time I’ll explain how to migrate AWR statistics between databases.
This is not a new feature (it has been available since 10g), but many DBAs and architects don’t know how to use it properly, or even that it exists.
The migration itself is very simple, and Oracle provides two scripts (awrextr.sql & awrload.sql) that make the migration as easy as possible.
The first question should be: when do I need this feature?
The short answer: to transfer execution statistics from production into a test or dev environment with minimal impact.
The second question would probably be: why should I migrate execution statistics into the test/dev environment?
The short answer: to get valuable input data from production that helps your DBAs, database designers, and developers better understand the behavior of the production system, and to develop new code that is scalable and fast.
Besides that, it’s very useful for troubleshooting and tuning optimizer issues on production, as well as in a dev environment where it is safe to try out your ideas for a fix.
It is also worth mentioning that this method has very low impact and is straightforward and simple, compared with the numerous alternative methods that require transferring all or part of the production database to dev/test environments.
Let’s first go through the migration process itself briefly. Descriptions of this very simple process can be found in many places on the web.
After that I’ll show some interesting points that are not described well elsewhere.
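Before running the extract, it is worth confirming the source DBID and the snapshot range that is actually available for export; both come from standard dictionary views:

```sql
-- Source database: confirm the DBID and the available snapshot window
select dbid, name from v$database;

select min(snap_id)             as first_snap,
       max(snap_id)             as last_snap,
       min(begin_interval_time) as oldest,
       max(end_interval_time)   as newest
  from dba_hist_snapshot
 where dbid = (select dbid from v$database);
```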
--source database (for example production)
cd $ORACLE_HOME/rdbms/admin
SYS@prod> @awrextr.sql
~~~~~~~~~~~~~
AWR EXTRACT
~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~ This script will extract the AWR data for a range of snapshots ~
~ into a dump file. The script will prompt users for the ~
~ following information: ~
~ (1) database id ~
~ (2) snapshot range to extract ~
~ (3) name of directory object ~
~ (4) name of dump file ~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Databases in this Workload Repository schema
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
DB Id DB Name Host
------------ ------------ ------------
* 1339390690 prod xxx-rms-db-prod
782550019 MOM2D xxx-rms-db-dev
The default database id is the local one: '1339390690'. To use this
database id, press <return> to continue, otherwise enter an alternative.
Using 1339390690 for Database ID
Specify the number of days of snapshots to choose from
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Entering the number of days (n) will result in the most recent
(n) days of snapshots being listed. Pressing <return> without
specifying a number lists all completed snapshots.
Enter value for num_days: 2
Listing the last 2 days of Completed Snapshots
DB Name Snap Id Snap Started
------------ --------- ------------------
prod 232850 05 Sep 2017 00:00
232851 05 Sep 2017 00:02
232852 05 Sep 2017 00:15
232853 05 Sep 2017 00:30
232854 05 Sep 2017 00:45
232855 05 Sep 2017 01:00
...
Specify the Begin and End Snapshot Ids
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Enter value for begin_snap: 232850
Begin Snapshot Id specified: 232850
Enter value for end_snap: 233035
Specify the Directory Name
~~~~~~~~~~~~~~~~~~~~~~~~~~
Directory Name                 Directory Path
------------------------------ -------------------------------------------------
DUMP_DIR                       /u01/oradata/dump_dir
Choose a Directory Name from the above list (case-sensitive).
Enter value for directory_name: EXP_DIR
Specify the Name of the Extract Dump File
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The prefix for the default dump file name is awrdat_232850_233035.
To use this name, press <return> to continue, otherwise enter
an alternative.
Enter value for file_name:
Using the dump file prefix: awrdat_232850_233035
|
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| The AWR extract dump file will be located
| in the following directory/file:
| /archive/export
| awrdat_232850_233035.dmp
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
| *** AWR Extract Started ...
|
| This operation will take a few moments. The
| progress of the AWR extract operation can be
| monitored in the following directory/file:
| /archive/export
| awrdat_232850_233035.log
--target database (for example test)
Copy the dump file to the target host (e.g. with scp), then create a directory object pointing to its location:
create or replace directory awr_migration as '/tmp/';
cd $ORACLE_HOME/rdbms/admin
SYS@test> @awrload.sql
~~~~~~~~~~
AWR LOAD
~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~ This script will load the AWR data from a dump file. The ~
~ script will prompt users for the following information: ~
~ (1) name of directory object ~
~ (2) name of dump file ~
~ (3) staging schema name to load AWR data into ~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Specify the Directory Name
~~~~~~~~~~~~~~~~~~~~~~~~~~
Directory Name Directory Path
------------------------------ -------------------------------------------------
AWR_MIGRATION /dbarchive/expdp/
Choose a Directory Name from the list above (case-sensitive).
Enter value for directory_name: AWR_MIGRATION
Using the dump directory: AWR_MIGRATION
Specify the Name of the Dump File to Load
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Please specify the prefix of the dump file (.dmp) to load:
Enter value for file_name: awrdat_232850_233035 <-- note: without the .dmp extension!
Loading from the file name: awrdat_232850_233035.dmp
Staging Schema to Load AWR Snapshot Data
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The next step is to create the staging schema
where the AWR snapshot data will be loaded.
After loading the data into the staging schema,
the data will be transferred into the AWR tables
in the SYS schema.
The default staging schema name is AWR_STAGE.
To use this name, press <return> to continue, otherwise enter
an alternative.
Using the staging schema name: AWR_STAGE
Choose the Default tablespace for the AWR_STAGE user
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Choose the AWR_STAGE user's default tablespace. This is the
tablespace in which the AWR data will be staged.
TABLESPACE_NAME                CONTENTS  DEFAULT TABLESPACE
------------------------------ --------- ------------------
SYSAUX                         PERMANENT *
Pressing <return> will result in the recommended default
tablespace (identified by *) being used.
Enter value for default_tablespace:
Using tablespace SYSAUX as the default tablespace for the AWR_STAGE
Choose the Temporary tablespace for the AWR_STAGE user
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Choose the AWR_STAGE user's temporary tablespace.
Pressing <return> will result in the database's default temporary
tablespace (identified by *) being used.
Enter value for temporary_tablespace:
Using tablespace TEMP_GROUP as the temporary tablespace for AWR_STAGE
... Creating AWR_STAGE user
|
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| Loading the AWR data from the following
| directory/file:
| /dbarchive/expdp/
| awrdat_232850_233035.dmp
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
| *** AWR Load Started ...
|
| This operation will take a few moments. The
| progress of the AWR load operation can be
| monitored in the following directory/file:
| /dbarchive/expdp/
| awrdat_232850_233035.log
|
... Dropping AWR_STAGE user
End of AWR Load
After you finish importing the AWR statistics, the DBID of the target database is unchanged:
select dbid from v$database;
4270653599
Nevertheless, there is a surprise waiting with the DBID.
Namely, if you run the following statement:
select distinct dbid from dba_hist_sqlstat;
1339390690
4270653599
you can spot that after the import there are two DBIDs: the former belongs to the source database, the latter to the target database.
This means that if you want to use the scripts for generating AWR reports from the OS (like awrrpt.sql), you need to modify those scripts, which is not hard to do but is inconvenient.
A much better approach is to use the DBMS_WORKLOAD_REPOSITORY interface, as in the following example, where I select snapshots from the source DB (e.g. production):
select * from dba_hist_snapshot where dbid = 1339390690 order by end_interval_time desc;
To generate AWR report:
select * from table(dbms_workload_repository.awr_report_html(l_dbid => 1339390690, l_inst_num => 1, l_bid => 233028, l_eid => 233035));
Now you can do many wonderful things, like comparing the same statistics from production and your own machine: SQLs, hotspots, everything.
You have a source of valuable production information at your disposal to play with in the safety of a dev or test environment, so it’s only up to your imagination to find the limits.
If you know how to use all that information, you will rarely need to refresh/re-sync your development or test environment from production.
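For such a side-by-side comparison, DBMS_WORKLOAD_REPOSITORY also offers a diff report that accepts two DBIDs, so a production period can be compared against a local one in a single report. A sketch follows; the target-side snapshot IDs (100/105) are hypothetical placeholders, so take real ones from dba_hist_snapshot:

```sql
-- Compare a production period (source DBID) against a local period
-- (target DBID). bid2/eid2 are placeholder snapshot IDs.
select output
  from table(dbms_workload_repository.awr_diff_report_html(
         dbid1     => 1339390690,  -- source (production) DBID
         inst_num1 => 1,
         bid1      => 233028,
         eid1      => 233035,
         dbid2     => 4270653599,  -- target (local) DBID
         inst_num2 => 1,
         bid2      => 100,         -- hypothetical local begin snapshot
         eid2      => 105));       -- hypothetical local end snapshot
```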
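When you no longer need the imported data, the foreign snapshots can be purged from the target repository by passing the source DBID to drop_snapshot_range (a sketch, assuming the snapshot range loaded above; the dbid parameter restricts the purge to the imported DBID and leaves the target's own snapshots untouched):

```sql
-- Remove the imported production snapshots from the test repository
begin
  dbms_workload_repository.drop_snapshot_range(
    low_snap_id  => 232850,
    high_snap_id => 233035,
    dbid         => 1339390690);  -- source (production) DBID
end;
/
```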