I must thank my fellow DBA Franky Weber Faust for the publication on his blog.

The main purpose of this type of monitoring, in my opinion, is to identify the most heavily modified tables and the amount of change in those tables since the last statistics collection. Why since the last statistics collection? Because every time statistics are gathered on a table, this information is reset to zero.
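As an illustration of that idea (this query is my own sketch, not part of the tests below), you can rank the most heavily modified tables directly from dba_tab_modifications, joining dba_tables to compare the change count with the last known row count:

SQL> select m.table_owner, m.table_name, m.inserts, m.updates, m.deletes, t.num_rows,
       round(100 * (m.inserts + m.updates + m.deletes) / nullif(t.num_rows, 0), 2) pct_changed
from dba_tab_modifications m
join dba_tables t on t.owner = m.table_owner and t.table_name = m.table_name
order by m.inserts + m.updates + m.deletes desc;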

Do you know the calculation Oracle does to decide when statistics are stale? You can do the same calculation yourself by consulting the dba_tab_modifications view. By default, when a table has undergone 10% of changes, Oracle considers its statistics stale, so if you use dbms_stats.gather_*_stats with OPTIONS => 'GATHER STALE', only objects that have reached this amount of change will have their statistics gathered. You can change the stale statistics percentage with the dbms_stats.set_*_prefs procedures and the STALE_PERCENT parameter.
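For example (a quick sketch using the SYS.T table created later in this article; adjust owner and table name for your own case), you can check the current threshold, lower it for a single table, and then gather only the stale statistics:

SQL> select dbms_stats.get_prefs('STALE_PERCENT', 'SYS', 'T') stale_percent from dual;

SQL> exec dbms_stats.set_table_prefs('SYS', 'T', 'STALE_PERCENT', '5');

SQL> exec dbms_stats.gather_schema_stats(ownname => 'SYS', options => 'GATHER STALE');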

My Oracle version is 12.1.0.2, but all of this works the same way on Oracle Database 11g and 10g (I believe it also works on 9i, although I think the flush_database_monitoring_info procedure does not exist there; I did not test it).

Let’s go to examples and tests …

I’ll create a test table first.

SQL> create table t as select * from hr.jobs;

Table created.

Let’s see how this table is created by default.

SQL> SET PAGES 200 LIN 200
SQL> COL OWNER FOR A20
SQL> COL TABLE_NAME FOR A30
SQL> select owner, table_name, monitoring, num_rows, blocks, empty_blocks, avg_space, to_char(last_analyzed, 'dd/mm/yyyy hh24:mi:ss') last_analyzed
from dba_tables
where owner='SYS' and table_name='T';

OWNER                TABLE_NAME                     MON   NUM_ROWS     BLOCKS EMPTY_BLOCKS  AVG_SPACE LAST_ANALYZED
-------------------- ------------------------------ --- ---------- ---------- ------------ ---------- -------------------
SYS                  T                              YES

Note that the table was created with monitoring already enabled. This is because the statistics_level parameter in my database is set to TYPICAL; the same thing happens if the parameter is set to ALL. Monitoring will not be enabled by default if the table is created while the parameter is set to BASIC.
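If you want to confirm how this parameter is set in your own database (a simple check, not part of the original test), you can query it directly:

SQL> show parameter statistics_level

SQL> select value from v$parameter where name = 'statistics_level';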

It is important to know how to disable and enable monitoring in tables.

To disable:

SQL> alter table T no monitoring;

Table altered.

Then we check if it has been disabled:

SQL> select owner, table_name, monitoring from dba_tables where owner='SYS' and table_name='T';

OWNER                TABLE_NAME                     MON
-------------------- ------------------------------ ---
SYS                  T                              NO

Now let’s re-enable:

 SQL> alter table T monitoring;

Table altered.

Now we see that the table is being monitored again:

 SQL> select owner, table_name, monitoring from dba_tables where owner='SYS' and table_name='T';

OWNER                TABLE_NAME                     MON
-------------------- ------------------------------ ---
SYS                  T                              YES

Let’s check if our table already has any changes:

SQL> select table_name,inserts,updates,deletes,truncated,timestamp
from dba_tab_modifications
where table_name='T';

no rows selected

We will insert some records and make some updates and deletes in this table to observe the result of the monitoring.

SQL> insert into t values ('OP_MGR', 'Operations Manager', 10000, 20000);

1 row created.

SQL> insert into t values ('BD_MEM', 'Board Member', 100000, 500000);

1 row created.

SQL> insert into t values ('IT_MGR', 'IT Manager', 10000, 30000);

1 row created.

SQL> update t set max_salary=35000 where job_id='IT_MGR';

1 row updated.

SQL> update t set max_salary=50000 where job_id='OP_MGR';

1 row updated.

SQL> update t set max_salary=900000 where job_id='BD_MEM';

1 row updated.

SQL> delete t where job_id='BD_MEM';

1 row deleted.

SQL> commit;

Commit complete.

Let’s check whether the changes were recorded by the monitoring of the table:

SQL> select table_name,inserts,updates,deletes,truncated,timestamp
from dba_tab_modifications
where table_name='T';

no rows selected

Where are the changes made to the T table?

This information does not always reach the dictionary views immediately; it is necessary to flush it from the memory of the Oracle instance down to the underlying tables using the dbms_stats.flush_database_monitoring_info procedure.

SQL> exec dbms_stats.flush_database_monitoring_info;

PL/SQL procedure successfully completed.

Now we can see the changes that table T suffered.

SQL> select table_name,inserts,updates,deletes,truncated,timestamp
from dba_tab_modifications
where table_name='T';

TABLE_NAME                        INSERTS    UPDATES    DELETES TRU TIMESTAMP
------------------------------ ---------- ---------- ---------- --- ---------
T                                       3          3          1 NO  10-SEP-16

Note that when statistics are gathered on the table, this monitoring information is cleared.

SQL> exec dbms_stats.gather_table_stats('SYS','T');

PL/SQL procedure successfully completed.

SQL> select table_name,inserts,updates,deletes,truncated,timestamp
from dba_tab_modifications
where table_name='T';

no rows selected

Flushing this information from memory to the tables happens at different intervals depending on the version. For Oracle 9i I read somewhere that the information is flushed every 15 minutes, and in 10g every 3 hours, but I think that in the current versions, 11g and 12c, this interval no longer applies; as far as I could verify, the flush happens when the buffer fills up. When I find the reference I will update the article, because I believe I read this in some book.

This is a fairly simple but interesting piece of Oracle Database behavior to understand.

I have already used the procedure presented in this article as a kind of trace for sessions coming from an application server running Tomcat, in order to find out which tables were being altered by a certain process of the system. In that scenario only one user was working on the system, but since it was not possible to isolate the session, this was the only way to identify what the system was doing, and it helped the developer correct the code.
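A rough sketch of that approach (illustrative only, not the exact script I used back then; mod_before is a helper table name I am inventing here): flush the monitoring information and take a snapshot before the process runs.

SQL> exec dbms_stats.flush_database_monitoring_info;

SQL> create table mod_before as
select table_owner, table_name, inserts, updates, deletes
from dba_tab_modifications;

After the application process has run, flush again and see which counters changed:

SQL> exec dbms_stats.flush_database_monitoring_info;

SQL> select table_owner, table_name, inserts, updates, deletes
from dba_tab_modifications
minus
select table_owner, table_name, inserts, updates, deletes
from mod_before;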
