Automatic and Manual Purging of Alert Log and Trace Files with the Automatic Diagnostic Repository Command Interpreter (ADRCI)


Automatic purging

Sometimes we face space-related issues due to the huge number of trace files being generated. Automatic purging can help in these situations if we set a short retention time for trace files.

  1. Type show homes to list the ADR homes known to ADRCI, then set the homepath.
    adrci> show homes
    ADR Homes: 
    diag/rdbms/db12c_stby/db12c_stby
    diag/rdbms/db12c_stby/db12c
    diag/rdbms/db12c_stby/DB12c
    diag/tnslsnr/Standby/listener
    diag/clients/user_oracle/host_2436394132_82
     
    adrci> set homepath diag/rdbms/db12c_stby/db12c_stby
    


  2. Automatic purging runs on a schedule defined by the retention policy. To check the current policy for a home, we can use one of two methods:
    adrci> show control
    
    ADR Home = /u01/app/oracle/diag/rdbms/db12c_stby/db12c_stby:
    *************************************************************************
    ADRID                SHORTP_POLICY        LONGP_POLICY         LAST_MOD_TIME                            LAST_AUTOPRG_TIME                        LAST_MANUPRG_TIME                        ADRDIR_VERSION       ADRSCHM_VERSION      ADRSCHMV_SUMMARY     ADRALERT_VERSION     CREATE_TIME                              
    -------------------- -------------------- -------------------- ---------------------------------------- ---------------------------------------- ---------------------------------------- -------------------- -------------------- -------------------- -------------------- ---------------------------------------- 
    3514698554           720                  8760                 2016-05-31 00:54:57.566484 +05:00        2016-07-07 23:27:37.789953 +05:00                                                 1                    2                    82                   1                    2016-05-31 00:54:57.566484 +05:00       
    1 rows fetched

    or

    adrci> select SHORTP_POLICY,LONGP_POLICY from ADR_CONTROL;
    
    ADR Home = /u01/app/oracle/diag/rdbms/db12c_stby/db12c_stby:
    *************************************************************************
    SHORTP_POLICY        LONGP_POLICY         
    -------------------- -------------------- 
    720                  8760                
    1 rows fetched

    By default, retention is 720 hours (30 days) for the short policy and 8,760 hours (one year) for the long policy.
    The short policy covers trace files, core dump files, and packaging information.
    The long policy covers incident information, incident dumps, and alert logs.

  3. To change the retention policy:
    adrci> set control (SHORTP_POLICY=120); -- 5 days
    
    adrci> set control (LONGP_POLICY=720); -- 30 days
    
    
    --check changes
    
    adrci> select SHORTP_POLICY,LONGP_POLICY,LAST_AUTOPRG_TIME,LAST_MANUPRG_TIME from ADR_CONTROL; 
    
    ADR Home = /u01/app/oracle/diag/rdbms/db12c_stby/db12c_stby:
    *************************************************************************
    SHORTP_POLICY LONGP_POLICY LAST_AUTOPRG_TIME LAST_MANUPRG_TIME 
    -------------------- -------------------- ---------------------------------------- ---------------------------------------- 
    120 720 2016-07-07 23:27:37.789953 +05:00 2016-07-16 22:37:52.700123 +05:00 
    1 rows fetched
    
    adrci> 
    
    

    Automatic purging first runs two days after instance startup, and then at the interval defined by the policy.

 


Manual purging

  1. Type show homes to list the ADR homes known to ADRCI, then set the homepath.
    [oracle@MAINDB ~]$ adrci
    
    ADRCI: Release 12.1.0.2.0 - Production on Tue Feb 23 19:18:42 2016
    
    Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.
    
    ADR base = "/u01/app/oracle"
    
    adrci> show homes
    ADR Homes: 
    diag/tnslsnr/MAINDB/listener
    diag/rdbms/cdb1/cdb1
    
    adrci> set homepath diag/rdbms/cdb1/cdb1
    
    

     

  2. The PURGE command removes diagnostic data older than the given number of minutes. For example, to purge alert data that is more than one day old (1,440 minutes):
    adrci> PURGE -age 1440 -type ALERT
    
    -- You can also purge TRACE files the same way:
    adrci> PURGE -age 1440 -type TRACE
    
    
  3. To remove all data older than one minute use:
    adrci> PURGE -age 1 -type ALERT
    adrci> SHOW ALERT
    
    ADR Home = /u01/app/oracle/diag/rdbms/cdb1/cdb1:
    *************************************************************************
    
    No alert log in selected home
    adrci> exit

Data Guard commands and SQL scripts


PHYSICAL STANDBY COMMANDS

To start redo apply in foreground:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE;

To stop redo apply process on the Standby database (to stop MRP):

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

To start real-time redo apply:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT FROM SESSION;

To start redo apply in background:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
or
SQL> RECOVER MANAGED STANDBY DATABASE DISCONNECT;

To check redo apply and media recovery process status:

SQL> SELECT PROCESS,STATUS, THREAD#,SEQUENCE#, BLOCK#, BLOCKS FROM V$MANAGED_STANDBY ;

If managed standby recovery is not running or not started with real-time apply, restart managed recovery with real-time apply enabled:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;

To gather Data Guard configuration information (standby):

SQL> SELECT DATABASE_ROLE, OPEN_MODE, PROTECTION_MODE FROM V$DATABASE;

DATABASE_ROLE OPEN_MODE PROTECTION_MODE
---------------- -------------------- --------------------
PHYSICAL STANDBY MOUNTED MAXIMUM PERFORMANCE

SQL> SELECT RECOVERY_MODE FROM V$ARCHIVE_DEST_STATUS WHERE RECOVERY_MODE!='IDLE';

RECOVERY_MODE
-----------------------
MANAGED REAL TIME APPLY

To calculate the average redo generation rate (MB per second) over a chosen window (here, the month of May 2016, roughly 31 days):

SQL> SELECT SUM (BLOCKS * BLOCK_SIZE) / 1024 / 1024 / (31 * 24 * 60 * 60) REDO_MB_PER_SEC
 FROM GV$ARCHIVED_LOG
 WHERE FIRST_TIME BETWEEN TO_DATE ('01.05.2016', 'DD.MM.YYYY')
 AND TO_DATE ('01.06.2016', 'DD.MM.YYYY');

To check the status of Data Guard synchronization (standby):

SQL> SELECT NAME, VALUE FROM V$DATAGUARD_STATS;

NAME VALUE
--------------------- -------------------------------
transport lag          +00 00:00:00
apply lag              +00 00:00:00
apply finish time      +00 00:00:00.000
estimated startup time 32

To verify there is no log file gap between the primary and the standby database:

SQL> SELECT STATUS, GAP_STATUS FROM V$ARCHIVE_DEST_STATUS WHERE DEST_ID = 3;

STATUS GAP_STATUS
--------- ------------------------
VALID NO GAP

 

To verify that the primary database can be switched to the standby role:

A value of TO STANDBY or SESSIONS ACTIVE indicates that the primary database can be switched to the standby role. If neither of these values is returned, a switchover is not possible because redo transport is either misconfigured or is not functioning properly.

SQL> SELECT SWITCHOVER_STATUS FROM V$DATABASE; 

SWITCHOVER_STATUS
--------------------
TO STANDBY

To convert the primary database into a physical standby:

Before the switchover, the current control file is backed up to the current SQL session trace file, which makes it possible to reconstruct a current control file if necessary.

SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;

Database altered.
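
After the conversion, the former primary is typically restarted and mounted so that it can start receiving redo as a standby; a minimal sketch of that follow-up (shutdown options may differ in your environment):

SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT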

To verify Managed Recovery is running on the standby :

SQL> SELECT PROCESS FROM V$MANAGED_STANDBY WHERE PROCESS LIKE 'MRP%'; 

PROCESS
---------
MRP0

To show information about the protection mode, the protection level, the role of the database, and switchover status:

SQL> SELECT DATABASE_ROLE, DB_UNIQUE_NAME INSTANCE, OPEN_MODE, PROTECTION_MODE, PROTECTION_LEVEL, SWITCHOVER_STATUS FROM V$DATABASE;

DATABASE_ROLE    INSTANCE   OPEN_MODE    PROTECTION_MODE      PROTECTION_LEVEL     SWITCHOVER_STATUS
---------------- ---------- ------------ -------------------- -------------------- --------------------
PRIMARY          TESTCDB    READ WRITE   MAXIMUM PERFORMANCE  MAXIMUM PERFORMANCE  TO STANDBY

On the standby database, query the V$ARCHIVED_LOG view to identify existing files in the archived redo log.

SQL> SELECT SEQUENCE#, FIRST_TIME, NEXT_TIME FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;
Or
SQL> SELECT THREAD#, MAX(SEQUENCE#) AS "LAST_APPLIED_LOG" FROM V$LOG_HISTORY GROUP BY THREAD#;

On the standby database, query the V$ARCHIVED_LOG view to verify the archived redo log files were applied.

SELECT SEQUENCE#,APPLIED FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;

 

 

To determine which log files were not received by the standby site.

SQL> SELECT LOCAL.THREAD#, LOCAL.SEQUENCE#
FROM (SELECT THREAD#, SEQUENCE#
FROM V$ARCHIVED_LOG
WHERE DEST_ID = 1) LOCAL
WHERE LOCAL.SEQUENCE# NOT IN (SELECT SEQUENCE#
FROM V$ARCHIVED_LOG
WHERE DEST_ID = 2 AND THREAD# = LOCAL.THREAD#);

 

Archivelog difference: run this on the primary database (not for real-time apply):

SQL> ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MON-YYYY HH24:MI:SS';
SQL> SELECT A.THREAD#,
 B.LAST_SEQ,
 A.APPLIED_SEQ,
 A.LAST_APP_TIMESTAMP,
 B.LAST_SEQ - A.APPLIED_SEQ ARC_DIFF
 FROM ( SELECT THREAD#,
 MAX (SEQUENCE#) APPLIED_SEQ,
 MAX (NEXT_TIME) LAST_APP_TIMESTAMP
 FROM GV$ARCHIVED_LOG
 WHERE APPLIED = 'YES'
 GROUP BY THREAD#) A,
 ( SELECT THREAD#, MAX (SEQUENCE#) LAST_SEQ
 FROM GV$ARCHIVED_LOG
 GROUP BY THREAD#) B
 WHERE A.THREAD# = B.THREAD#;

 THREAD#   LAST_SEQ    APPLIED_SEQ  LAST_APP_TIMESTAMP   ARC_DIFF
---------- ---------- -----------   --------------------- ----------
 1         21282      21281         09-IYUL-2016 12:06:5     1
 2         23747      23746         09-IYUL-2016 12:16:13    1

2 rows selected.

 

To check archive log apply status on the primary database:

SQL> SET LINESIZE 150
SET PAGESIZE 999
COL NAME FORMAT A60
COL DEST_TYPE FORMAT A10
COL ARCHIVED FORMAT A10
COL APPLIED FORMAT A10

SELECT SEQUENCE#,
NAME,
DEST_ID ,
CASE WHEN STANDBY_DEST = 'YES' THEN 'Standby' ELSE 'Local' END
AS DEST_TYPE ,
ARCHIVED ,
APPLIED
FROM V$ARCHIVED_LOG
WHERE SEQUENCE# > (SELECT MAX (SEQUENCE#)
FROM V$ARCHIVED_LOG
WHERE STANDBY_DEST = 'YES' AND APPLIED = 'YES')
ORDER BY SEQUENCE# , DEST_ID ;


 SEQUENCE#  NAME                                                          DEST_ID  DEST_TYPE  ARCHIVED APPLIED
---------- -------------------------------------------------------------- -------  ---------- -------- --------
 23748      +FRA/TESTCDB/ARCHIVELOG/2016_07_09/thread_2_seq_23748.10041.9   1      Local        YES       NO
 23748      +DATA/TESTCDB/ARCHIVELOG/2016_07_09/thread_2_seq_23748.10062.   2      Local        YES       NO
 23748      TESTSTB                                                         3      Standby      YES       NO

3 rows selected.


 

DG BROKER COMMANDS

 

How to configure Data Guard broker:

1. Start the DMON process on both the primary and standby databases:

SQL> ALTER SYSTEM SET DG_BROKER_START=TRUE SCOPE=BOTH;
System altered.
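
Optionally, you can check where the broker keeps its configuration files before going further; this is just a quick sanity check of the related parameters:

SQL> SHOW PARAMETER DG_BROKER_START
SQL> SHOW PARAMETER DG_BROKER_CONFIG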

2. Clear the LOG_ARCHIVE_DEST_2 setting on both the primary and standby databases, then create the broker configuration (the broker will set the LOG_ARCHIVE_DEST_n parameters automatically when you add a database to the configuration):

SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='';
System altered.

Connect with DGMGRL on the primary database and create the configuration:

[oracle@primary ~]$ dgmgrl
DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production
Copyright (c) 2000, 2013, Oracle. All rights reserved.
Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys/test
Connected as SYSDG.

DGMGRL> CREATE CONFIGURATION 'TEST' AS PRIMARY DATABASE IS 'DB12C' CONNECT IDENTIFIER IS DB12C;
Configuration "TEST" created with primary database "DB12C"

Next, add the standby database to the Data Guard broker configuration:

DGMGRL> ADD DATABASE 'db12c_stby' AS CONNECT IDENTIFIER IS 'db12c_stby';
Database "db12c_stby" added

Enable the Data Guard broker configuration:

DGMGRL> enable configuration;
Enabled.

DGMGRL> show configuration;

Configuration - TEST

 Protection Mode: MaxPerformance
 Members:
 db12c - Primary database
 DB12C_STBY - Physical standby database 

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS (status updated 40 seconds ago)

 

To remove DG broker configuration:

DGMGRL> remove configuration;
Removed configuration

To rename a database in the Data Guard broker configuration:

DGMGRL> edit database 'db12c_stby' rename to 'STBY';

To turn off redo transport to all remote destinations on the primary database:

 DGMGRL> edit database 'DB12C' SET STATE="LOG-TRANSPORT-OFF";

To stop and start redo transport services to specific standby databases:

DGMGRL> edit database 'db12c_stby' SET PROPERTY 'LogShipping'='OFF';
Property "LogShipping" updated
DGMGRL> SHOW DATABASE 'db12c_stby' 'LogShipping';
 LogShipping = 'OFF'
DGMGRL> edit database 'db12c_stby' SET PROPERTY 'LogShipping'='ON';
Property "LogShipping" updated
DGMGRL> SHOW DATABASE 'db12c_stby' 'LogShipping';
 LogShipping = 'ON'

To change the state of the standby database to read-only and back to APPLY-ON:

DGMGRL> EDIT DATABASE 'db12c' SET STATE='READ-ONLY';
Succeeded.
DGMGRL> show database db12c

Database - db12c

 Role: PHYSICAL STANDBY
 Intended State: READ-ONLY
<<OUTPUT TRIMMED>>
Database Status:
SUCCESS

To change back:

DGMGRL> shutdown 
DGMGRL> startup mount;
DGMGRL> show database db12c

Database - db12c
 Role: PHYSICAL STANDBY
 Intended State: OFFLINE
  <<OUTPUT TRIMMED>>

DGMGRL> EDIT DATABASE DB12C SET STATE = APPLY-ON;
Succeeded.
DGMGRL> show database db12c

Database - db12c

 Role: PHYSICAL STANDBY
 Intended State: APPLY-ON
 <<OUTPUT TRIMMED>>

 

 



LOGICAL STANDBY COMMANDS

To start (or restart) SQL Apply on a logical standby with real-time apply:

SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;

To stop SQL Apply on a logical standby:

SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;

If SQL Apply failed with an error and you are absolutely certain the transaction is safe to skip, run the following SQL against the logical standby to restart real-time SQL Apply and skip the failed transaction:

SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE SKIP FAILED TRANSACTION;

To list tables that are unsupported on a logical standby:

SQL> SELECT * FROM DBA_LOGSTDBY_UNSUPPORTED_TABLE ORDER BY OWNER, TABLE_NAME;
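
If you also need to know which columns and data types make a table unsupported, the DBA_LOGSTDBY_UNSUPPORTED view breaks this down per column:

SQL> SELECT DISTINCT OWNER, TABLE_NAME, COLUMN_NAME, DATA_TYPE
     FROM DBA_LOGSTDBY_UNSUPPORTED
     ORDER BY OWNER, TABLE_NAME;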

To see which archived log sequences are at which stage (restart, applied, mining, shipped) on a logical standby:

SQL> SELECT 'RESTART' "TYPE",
 P.RESTART_SCN "SCN",
 TO_CHAR (P.RESTART_TIME, 'yyyy/mm/dd hh24:mi:ss') "TIME",
 L.SEQUENCE# "SEQ#"
 FROM V$LOGSTDBY_PROGRESS P, DBA_LOGSTDBY_LOG L
 WHERE P.RESTART_SCN >= L.FIRST_CHANGE# AND P.RESTART_SCN < L.NEXT_CHANGE#
UNION
SELECT 'RESTART',
 P.RESTART_SCN,
 TO_CHAR (P.RESTART_TIME, 'yyyy/mm/dd hh24:mi:ss'),
 L.SEQUENCE#
 FROM V$LOGSTDBY_PROGRESS P, V$STANDBY_LOG L
 WHERE P.RESTART_SCN >= L.FIRST_CHANGE# AND P.LATEST_SCN <= L.LAST_CHANGE#
UNION
SELECT 'APPLIED',
 P.APPLIED_SCN,
 TO_CHAR (P.APPLIED_TIME, 'yyyy/mm/dd hh24:mi:ss'),
 L.SEQUENCE#
 FROM V$LOGSTDBY_PROGRESS P, DBA_LOGSTDBY_LOG L
 WHERE P.APPLIED_SCN >= L.FIRST_CHANGE# AND P.APPLIED_SCN < L.NEXT_CHANGE#
UNION
SELECT 'APPLIED',
 P.APPLIED_SCN,
 TO_CHAR (P.APPLIED_TIME, 'yyyy/mm/dd hh24:mi:ss'),
 L.SEQUENCE#
 FROM V$LOGSTDBY_PROGRESS P, V$STANDBY_LOG L
 WHERE P.APPLIED_SCN >= L.FIRST_CHANGE# AND P.LATEST_SCN <= L.LAST_CHANGE#
UNION
SELECT 'MINING',
 P.MINING_SCN,
 TO_CHAR (P.MINING_TIME, 'yyyy/mm/dd hh24:mi:ss'),
 L.SEQUENCE#
 FROM V$LOGSTDBY_PROGRESS P, DBA_LOGSTDBY_LOG L
 WHERE P.MINING_SCN >= L.FIRST_CHANGE# AND P.MINING_SCN < L.NEXT_CHANGE#
UNION
SELECT 'MINING',
 P.MINING_SCN,
 TO_CHAR (P.MINING_TIME, 'yyyy/mm/dd hh24:mi:ss'),
 L.SEQUENCE#
 FROM V$LOGSTDBY_PROGRESS P, V$STANDBY_LOG L
 WHERE P.MINING_SCN >= L.FIRST_CHANGE# AND P.LATEST_SCN <= L.LAST_CHANGE#
UNION
SELECT 'SHIPPED',
 P.LATEST_SCN,
 TO_CHAR (P.LATEST_TIME, 'yyyy/mm/dd hh24:mi:ss'),
 L.SEQUENCE#
 FROM V$LOGSTDBY_PROGRESS P, DBA_LOGSTDBY_LOG L
 WHERE P.LATEST_SCN >= L.FIRST_CHANGE# AND P.LATEST_SCN < L.NEXT_CHANGE#
UNION
SELECT 'SHIPPED',
 P.LATEST_SCN,
 TO_CHAR (P.LATEST_TIME, 'yyyy/mm/dd hh24:mi:ss'),
 L.SEQUENCE#
 FROM V$LOGSTDBY_PROGRESS P, V$STANDBY_LOG L
 WHERE P.LATEST_SCN >= L.FIRST_CHANGE# AND P.LATEST_SCN <= L.LAST_CHANGE#;

To check whether SQL Apply is up to date:

SQL> SELECT TO_CHAR(LATEST_TIME,'yyyy/mm/dd hh24:mi:ss') "LATEST_TIME", 
TO_CHAR(APPLIED_TIME,'yyyy/mm/dd hh24:mi:ss') "APPLIED_TIME", 
APPLIED_SCN, LATEST_SCN 
FROM V$LOGSTDBY_PROGRESS;

To check whether the logical standby is applying changes, run the following SQL against the logical standby database:

SQL> SELECT REALTIME_APPLY, STATE FROM V$LOGSTDBY_STATE;

If the value of STATE is NULL or 'SQL APPLY NOT ON', then SQL Apply is not running. The value of REALTIME_APPLY should be Y to allow real-time apply from the standby redo logs. To see which major SQL Apply events have occurred, run the following SQL against the logical standby database:

SQL> SELECT TO_CHAR (EVENT_TIME, 'YYYY/MM/DD HH24:MI:SS') "EVENT_TIME",
STATUS, EVENT
FROM DBA_LOGSTDBY_EVENTS
ORDER BY EVENT_TIME;

To see which major Data Guard events have occurred, run the following SQL against the logical standby database:

SQL> SELECT TO_CHAR (TIMESTAMP, 'yyyy/mm/dd hh24:mi:ss') "TIME",
ERROR_CODE "ERROR", DEST_ID "DEST", MESSAGE
FROM V$DATAGUARD_STATUS
WHERE timestamp > TRUNC (SYSDATE + 6 / 24)
ORDER BY timestamp DESC;

 

To see where the archived logs are going and whether there are any archiving issues, run the following SQL against either the logical standby or the primary database:

SQL> SELECT DEST_ID "DID",
STATUS, DESTINATION, ARCHIVER, VALID_NOW, VALID_TYPE, VALID_ROLE, ERROR
FROM V$ARCHIVE_DEST
WHERE STATUS <> 'INACTIVE';


 

How to automatically copy archive logs from ASM to local disk


The following script creates a folder on the local disk, copies yesterday's archived logs from ASM into it, and compresses the folder with zip:

#!/bin/bash
# Yesterday's date, in the YYYY_MM_DD format used by the ASM archivelog directories
DAY=$(date -d "1 day ago" "+%Y_%m_%d")

# Database owner environment
. /home/oracle/.bash_profile
cd /home/oracle/backup/rman/archivelogs/
mkdir "$DAY"

# Switch to the grid environment so that asmcmd is available
. /home/oracle/grid_env

# Copy yesterday's archived logs from ASM to the local folder
for i in $(asmcmd ls +FRA/TESTDB/ARCHIVELOG/"$DAY"); do
  asmcmd cp +FRA/TESTDB/ARCHIVELOG/"$DAY"/$i /home/oracle/backup/rman/archivelogs/"$DAY"
done

# Compress the folder and remove the uncompressed copy
zip -r "$DAY".zip "$DAY"
rm -r "$DAY"

Enable Cluster Time Synchronization Service on Oracle 11gR2 RAC


1) If time synchronization is handled by NTP, then CTSS must be running in observer mode. Let's check it:

[oracle@node1 ~]$grid_env
[oracle@node1 ~]$ crsctl check ctss
CRS-4700: The Cluster Time Synchronization Service is in Observer mode.

[oracle@node2 ~]$ grid_env
[oracle@node2 ~]$ crsctl check ctss
CRS-4700: The Cluster Time Synchronization Service is in Observer mode.

This means the RAC cluster is using NTP.

2) Now let's disable NTP:

[oracle@node1 ~]$ su -
Password:
[root@node1 ~]# grid_env
[root@node1 ~]# crsctl stop crs
<<<output trimmed>>>
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'node1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
[root@node2 ~]# crsctl stop crs
<<<output trimmed>>>
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'node2' has completed
CRS-4133: Oracle High Availability Services has been stopped.
[oracle@node1 ~]$ /sbin/service ntpd stop
[root@node1 ~]# chkconfig ntpd off
[oracle@node2 ~]$ /sbin/service ntpd stop
[root@node2 ~]# chkconfig ntpd off
[root@node1 ~]# mv /etc/ntp.conf /tmp/
[root@node2 ~]# mv /etc/ntp.conf /tmp/

3) Start the cluster on all nodes:

[root@node1 ~]# crsctl start crs
CRS-4123: Oracle High Availability Services has been started.
[root@node2 ~]# crsctl start crs
CRS-4123: Oracle High Availability Services has been started.

4) Now let's check CTSS again. We can see that CTSS has started in active mode:

[oracle@node1 ~]$ crsctl check ctss
CRS-4701: The Cluster Time Synchronization Service is in Active mode.
CRS-4702: Offset (in msec): 0
[root@node2 ~]# crsctl check ctss
CRS-4701: The Cluster Time Synchronization Service is in Active mode.
CRS-4702: Offset (in msec): 0

Oracle Database Vault (part 2)


Oracle Database Vault Access Control Components

  • REALM is basically a group of database schemas, objects, or roles that need to be secured in the database. For example, you may have a table called EMP containing employee salaries in a schema called HR, and you do not want all users to access this table, particularly the SYS user, which has administrative access. If you want to prevent the SYS user from accessing this particular table in the HR schema, you can protect it by configuring a realm, so that the table is shielded from administrative access.
  • COMMAND RULE can be created to control the execution of SQL statements. For example, you do not want tables to be created in the HR schema, but a user holds the CREATE ANY TABLE system privilege, which means the user can create tables in any schema of the database. To restrict this user from running CREATE TABLE, you configure a command rule that blocks execution of the CREATE TABLE statement. The SQL statements can be DDL or DML, and you can restrict the execution of all of them.
  • FACTOR: there are situations in which you have to prevent access based on the user's location, IP address, or a particular user. In that scenario you configure a variable called a factor; its named attributes identify components such as the user location, the database IP address, or the session user, and secure the areas of the database that you want to keep administrative users out of.
  • RULE SET is a collection of one or more rules. You can associate a rule set with realm authorizations, command rules, factor assignments, and also secure application roles.
  • SECURE APPLICATION ROLE can be enabled based on the evaluation of an Oracle Database Vault rule set; the rule set evaluates to true or false depending on the evaluation of the rules associated with it. (A query sketch for inspecting these components from SQL*Plus follows this list.)
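
As mentioned above, the configured components can also be inspected from SQL*Plus through the Database Vault data dictionary views; this is a minimal sketch, assuming the DVSYS.DBA_DV_REALM and DVSYS.DBA_DV_COMMAND_RULE views (available in 11gR2/12c) and a user with Database Vault reporting privileges, for example DV_SECANALYST:

-- List the realms and whether they are enabled
SQL> SELECT NAME, ENABLED FROM DVSYS.DBA_DV_REALM;

-- List the command rules currently in force
SQL> SELECT COMMAND, RULE_SET_NAME, ENABLED FROM DVSYS.DBA_DV_COMMAND_RULE;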


Changes after Database Vault installation:

Changes to initialization and password parameter settings (see the verification sketch after the list):

  • AUDIT_SYS_OPERATIONS changes from FALSE to TRUE - all operations performed by the SYS user will be audited.
  • OS_ROLES changes to FALSE - by default this parameter is not configured; after you install Database Vault it is set to FALSE, which prevents the operating system from granting and revoking roles and privileges to users.
  • RECYCLEBIN changes from ON to OFF - when this parameter is ON, dropped objects are moved to the recycle bin; it is ON by default, but enabling Database Vault turns it OFF.
  • REMOTE_LOGIN_PASSWORDFILE - this parameter is EXCLUSIVE by default, and after Database Vault installation it is set to EXCLUSIVE again.
  • SQL92_SECURITY changes from FALSE to TRUE - when enabled, a user who runs an UPDATE or DELETE that references a table's columns (for example in a WHERE clause) must also hold the SELECT privilege on that table.
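
As referenced above, one quick way to verify the new parameter values after enabling Database Vault is simply to display them (a sketch; the exact values may differ by version):

SQL> SHOW PARAMETER AUDIT_SYS_OPERATIONS
SQL> SHOW PARAMETER OS_ROLES
SQL> SHOW PARAMETER RECYCLEBIN
SQL> SHOW PARAMETER REMOTE_LOGIN_PASSWORDFILE
SQL> SHOW PARAMETER SQL92_SECURITY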

New database roles:

  • DV_OWNER - Oracle Database Vault owner
  • DV_ACCTMGR - Oracle Database Vault account manager

Changes to database auditing:

  • After installing Database Vault, the AUD$ audit table is moved from the SYS schema to the SYSTEM schema.
  • Modified audit settings: Database Vault also configures a number of audit settings in the database. This depends on the AUDIT_TRAIL initialization parameter: if it is set to NONE, the audit settings are not configured; if it is set to DB or OS, Database Vault configures its audit settings in the database.

Privileges prevented for existing users:

  • ALTER PROFILE
  • ALTER USER
  • CREATE PROFILE
  • CREATE USER
  • DROP PROFILE
  • DROP USER

Basically, these statements are blocked for SYS and other SYSDBA users. In a Database Vault environment they are reserved for the special user granted the DV_ACCTMGR role, who alone can perform CREATE USER, DROP USER, ALTER USER, and profile operations.

Privileges revoked from existing users and roles:

  • DBA
  • PUBLIC
  • IMP_FULL_DATABASE
  • EXECUTE_CATALOG_ROLE
  • SCHEDULER_ADMIN

Oracle Database Vault Schemas

When you install Database Vault, two schemas (DVSYS and DVF) are created during configuration. (A quick verification query follows this list.)

  • DVSYS
    The DVSYS schema contains the Oracle Database Vault database objects in which the Database Vault configuration information is stored.
  • DVF
    The DVF schema owns the DBMS_MACSEC_FUNCTION package, which contains the functions that retrieve factor identities.
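
As noted above, a quick way to confirm that both schemas exist after configuration:

SQL> SELECT USERNAME, ACCOUNT_STATUS FROM DBA_USERS WHERE USERNAME IN ('DVSYS', 'DVF');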

Oracle Database Vault Roles

  • DV_OWNER
    During the Database Vault installation and configuration with DBCA, we were given the option to specify a DV owner and a DV account manager; we named the DV owner account DVOWNER. This account holds the DV_OWNER role, which manages Oracle Database Vault roles and configuration.
  • DV_ADMIN
    This role controls access to the Database Vault PL/SQL packages.
  • DV_ACCTMGR
    This role is granted to the Oracle Database Vault account manager account and is used to create and manage database accounts and profiles. In a Database Vault environment the SYS user can no longer perform account-related activities such as creating users, altering users to change passwords, dropping users, or creating profiles; a user who holds the DV_ACCTMGR role performs all of these operations instead.
  • DV_SECANALYST
    A user with this role can run reports in the Database Vault administration console.
  • DV_PATCH_ADMIN
    This role is granted to the user who patches a Database Vault environment. In earlier versions Database Vault had to be disabled before applying patches; this role was introduced to avoid that step. It is granted temporarily to the user applying the patch and revoked afterwards. (The query after this list shows which accounts hold these roles.)
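
To see which database accounts actually hold these Database Vault roles, one option is to query the role grants:

SQL> SELECT GRANTEE, GRANTED_ROLE
     FROM DBA_ROLE_PRIVS
     WHERE GRANTED_ROLE LIKE 'DV\_%' ESCAPE '\'
     ORDER BY GRANTED_ROLE, GRANTEE;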

DBMS_MACADM Package
This package contains the procedures and functions used to create and configure the different Database Vault components, such as realms, command rules, factors, rule sets, and secure application roles. It can be executed by users granted the DV_OWNER or DV_ADMIN role. Realms, command rules, and factors can also be configured through the Database Vault administration console. Below I show how to create and configure realms, command rules, factors, and rule sets using the administration console; I also provide an example of how it can be done with the DBMS_MACADM package (see the sketch after the procedure signatures below):

Creating a Realm

DBMS_MACADM.CREATE_REALM(
realm_name IN VARCHAR2,
description IN VARCHAR2,
enabled IN VARCHAR2,
audit_options IN NUMBER);

Creating a command rule

DBMS_MACADM.CREATE_COMMAND_RULE(
command IN VARCHAR2,
rule_set_name IN VARCHAR2,
object_owner IN VARCHAR2,
object_name IN VARCHAR2,
enabled IN VARCHAR2);
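
For example, the HR realm that is built through the console later in this post could equally be created with DBMS_MACADM. This is a minimal sketch; the realm name, the HR schema, and the audit_options value of 1 (audit on failure) are assumptions chosen to match the console walkthrough below:

BEGIN
  -- Create the realm, enabled, auditing realm violations
  DBMS_MACADM.CREATE_REALM(
    realm_name    => 'HR_realm',
    description   => 'Protects the tables of the HR schema',
    enabled       => 'Y',
    audit_options => 1);
  -- Protect every table owned by HR inside the realm
  DBMS_MACADM.ADD_OBJECT_TO_REALM(
    realm_name   => 'HR_realm',
    object_owner => 'HR',
    object_name  => '%',
    object_type  => 'TABLE');
END;
/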

Let's test the Database Vault configuration:

SQL> conn dvacctmgr/<<your password >>
Connected.
SQL> create user demo identified by demo quota unlimited on users;
User created.
SQL> conn sys as sysdba
Enter password: 
Connected.
SQL> 
SQL> grant create session, select any table to demo;

Grant succeeded.
SQL> conn demo/demo
Connected.

SQL> select count(*) from hr.employees;

COUNT(*)
----------
107

As you can see, user DEMO can fetch records from the EMPLOYEES table in the HR schema. Let me configure a realm to restrict access to this particular table.
To access the Database Vault Administration (DVA) console, use the hostname or IP address of the database server, the port number, and /dva:

https://database_ip_address:port_number/dva

Type the URL in your browser and press Enter:

bp9

This is the login page of the Database Vault Administration console.
Here we specify the user name of the Database Vault owner (dvowner) and its password, the hostname or IP address of the database server, and Oracle's default port 1521. You can specify either the SID or the service name; in my example I specify the SID, which is DB11G, and click Login.

bp10

The console has several tabs: Administration, Database Vault Reports, General Security Reports, and Monitor. To configure a realm, click the Administration tab and select Realms:

bp11

Certain default realms are configured during Database Vault installation. Oracle Database Vault provides the following four default realms:

  • Database Vault Account Management: defines the realm for users who perform account management activities in the database
  • Oracle Data Dictionary: defines the realm for catalog users
  • Oracle Database Vault: defines the realm for Oracle Database Vault schemas such as DVF and DVSYS
  • Oracle Enterprise Manager: for Enterprise Manager users such as SYSMAN and DBSNMP who need to access Oracle database information

Now let's create a new realm to secure the EMPLOYEES table in the HR schema: click Create. On the Create Realm page enter the realm's name and description, its status, and the auditing option (Audit on Failure generates an audit record when there is a realm violation), then click OK to create the realm.

bp12

Now you can see HR_realm in this list:

bp12

To protect the objects of the HR schema, select HR_realm and click Edit.

bp14

In this realm, under Realm Secured Objects, you can see that no objects are protected yet. Click Create and specify the object owner, object type, and object name that need to be secured. As I am securing the objects of the HR schema, I specify the object owner as HR, the object type as Table, and the object name as % (this protects all tables in the HR schema), then click OK.

bp15

Now all tables under HR are protected. Let's go back to the database and test how securing the objects in HR_realm affects user DEMO's access to these tables. Earlier, before the realm was created, the DEMO user was able to access the EMPLOYEES table of the HR schema.

Connect as the DEMO user and run the SELECT statement again:

SQL> select * from HR.EMPLOYEES;

ERROR at line 1:
ORA-01031: insufficient privileges

You get the error "insufficient privileges", which means the HR.EMPLOYEES table is now protected by the realm. This is how realm protection works. Let's go back to the Database Vault administration console.

bp16

We saw that the objects are protected, but no users are authorized yet to access the objects of this realm. To authorize users, select HR_realm, click Edit, go to the Realm Authorizations section, and click Create. On the Create Realm Authorization page you can set the Grantee, the Authorization Type, and the Authorization Rule Set. I specify the grantee as the DEMO user and the authorization type as Participant, which means this user will be able to access the objects of this realm. If I specified the authorization type as Owner, it would have the same functionality as Participant plus the additional privilege to grant access to the objects protected by this realm. I am not specifying any Authorization Rule Set here:

bp17

Now let's see how a command rule functions in a Database Vault environment.

Connect as SYSDBA and grant the CREATE ANY TABLE privilege to the DEMO user.
Then create the table SCOTT.TBL_TEST as user DEMO.

[oracle@orcl ~]$ sqlplus / as sysdba
SQL> grant create any table to demo ;
Grant succeeded.
SQL> conn demo/demo
Connected.
SQL> create table SCOTT.TBL_TEST(NAME VARCHAR2(10),MARKS NUMBER);
Table created.

Now let's go to the Database Vault console and click Command Rules.

bp18

Just as there are default realms, Oracle Database Vault also provides certain default command rules, which restrict the execution of the listed commands in the database. To create a new command rule, click Create. On the Create Command Rule page, select the command you want to restrict. Here I want to prevent user DEMO from creating any table in the SCOTT schema:

bp19

Click OK to create the command rule. You can now see that the new command rule has been created.

bp20

Now let's go back to the database and test how the CREATE TABLE statement behaves:

SQL> conn demo/demo
Connected.
SQL> create table SCOTT.TEST as select * from ALL_OBJECTS;
create table SCOTT.TEST as select * from ALL_OBJECTS
                                         *
ERROR at line 1:
ORA-47400: Command Rule violation for CREATE TABLE on SCOTT.TEST

Here I try to create a table in the SCOTT schema, but I get a command rule violation for CREATE TABLE on SCOTT.TEST.

Let's look at a small example of how to restrict access from certain programs or modules.
To create a factor, click Factors and select Create:

bp21

On the Create Factor page, fill in the name, description, and factor type.
Here we create a factor that identifies the application from which the database connection is made; we are going to block access from SQL*Plus.

bp22

Now I will configure a rule set:
bp23

Click Create and specify a name and description for the rule set, set the status to Enabled and the Evaluation Options to All True (which means all of the conditions you specify must evaluate to true), and leave the auditing options disabled. Then click OK.

bp24

After creating the rule set, edit it and click Create to add Rules Associated To The Rule Set. Here we specify the rule expression DVF.F$MODULE='SQL*PLUS.EXE' AND DVF.F$SESSION_USER IN ('SCOTT'). DVF.F$MODULE refers to the factor called MODULE that we created; 'SQL*PLUS.EXE' matches connections coming from SQL*Plus; and DVF.F$SESSION_USER IN ('SCOTT') matches the session user SCOTT. Together this means that if the user is SCOTT and SCOTT connects to the database using SQL*Plus, the connection is allowed, while no other user is allowed to connect this way. Click OK to create the rule.

bp25

Now I will create a command rule to restrict connections to the database:

bp26

Now let's go back to the database and test how this command rule works:

SQL> conn scott
Enter password: <your password>
Connected.
SQL> conn demo
Enter password: <your password>
ERROR:
ORA-47400: Command Rule violation for CONNECT on LOGON
Warning: You are no longer connected to ORACLE.
SQL>

When connecting as the DEMO user we get a command rule violation error. This means that users other than SCOTT are not allowed to connect to the database using SQL*Plus.

ORACLE DATABASE VAULT REPORTS

There are two categories of Database Vault reports: Database Vault Reports and General Security Reports.

  • Database Vault Reports provide information about the different Database Vault components, such as realms and their authorization violations, command rule violations, factors, rule sets, and secure application roles.
  • General Security Reports cover general security information such as user authorizations, roles, object and system privileges, and also include special reports on security vulnerability issues.

To see these reports, log in to the Database Vault Administration console and click the Database Vault Reports tab:

bp27

Click Realm Audit Report and then Run Report; the report shows the different kinds of realm violations:

bp28

Now let's look at the Command Rule Audit Report, where we can see the commands and the returned error codes:

bp29

Let’s see General Security Reports:

bp30

Similarly you can select and run different reports.

Oracle Database Vault (part 1)



Database Vault restricts access to specific areas of the database from ordinary users as well as administrative users. It also helps protect the database against insider threats, that is, against access by users who hold the SYSDBA and SYSOPER privileges. Using Database Vault we can also enforce separation of duties, allocating different accounts for database account management and for other database activities.

Database vault installation

Installation of Database Vault includes two steps: enabling Database Vault at the binary level and registering Database Vault using DBCA.

Enabling Database Vault at the binary level

First of all, stop the database console and the listener and shut down the database. Database Vault depends on Oracle Label Security, so before installing Database Vault you must enable Label Security.

Execute the following commands:
To enable the Label Security component: chopt enable lbac
To enable the Database Vault component: chopt enable dv

SQL> SHUTDOWN IMMEDIATE
SQL> EXIT
[oracle@orcl ~]$ emctl stop dbconsole
[oracle@orcl ~]$ lsnrctl stop
[oracle@orcl ~]$ chopt enable lbac 
Writing to /u01/app/oracle/product/11.2/db_1/install/enable_lbac.log...
%s_unixOSDMakePath% -f /u01/app/oracle/product/11.2/db_1/rdbms/lib/ins_rdbms.mk lbac_on
%s_unixOSDMakePath% -f /u01/app/oracle/product/11.2/db_1/rdbms/lib/ins_rdbms.mk ioracle

[oracle@orcl ~]$ chopt enable dv
Writing to /u01/app/oracle/product/11.2/db_1/install/enable_dv.log...
%s_unixOSDMakePath% -f /u01/app/oracle/product/11.2/db_1/rdbms/lib/ins_rdbms.mk dv_on
%s_unixOSDMakePath% -f /u01/app/oracle/product/11.2/db_1/rdbms/lib/ins_rdbms.mk ioracle

Run the following commands. The make command enables both Oracle Database Vault (dv_on) and Oracle Label Security (lbac_on). You must enable Oracle Label Security before you can use Database Vault.

[oracle@orcl Datafiles]$ cd $ORACLE_HOME/rdbms/lib
[oracle@orcl lib]$ make -f ins_rdbms.mk dv_on lbac_on
/usr/bin/ar d /u01/app/oracle/product/11.2/db_1/rdbms/lib/libknlopt.a kzvndv.o
/usr/bin/ar cr /u01/app/oracle/product/11.2/db_1/rdbms/lib/libknlopt.a /u01/app/oracle/product/11.2/db_1/rdbms/lib/kzvidv.o 
/usr/bin/ar d /u01/app/oracle/product/11.2/db_1/rdbms/lib/libknlopt.a kzlnlbac.o
/usr/bin/ar cr /u01/app/oracle/product/11.2/db_1/rdbms/lib/libknlopt.a /u01/app/oracle/product/11.2/db_1/rdbms/lib/kzlilbac.o 
[oracle@orcl lib]$ cd $ORACLE_HOME/bin
[oracle@orcl bin]$ relink all
writing relink log to: /u01/app/oracle/product/11.2/db_1/install/relink.log

Start the database, the Database Control console, and the listener. Then run DBCA to register Database Vault:

[oracle@orcl ~]$ emctl start dbconsole
[oracle@orcl ~]$ lsnrctl start
SQL> startup;
ORACLE instance started.
Total System Global Area  534462464 bytes
Fixed Size		    2254952 bytes
Variable Size		  213911448 bytes
Database Buffers	  314572800 bytes
Redo Buffers		    3723264 bytes
Database mounted.
Database opened.

[oracle@orcl ~]$ dbca

Select the Configure Database Options option to configure Database Vault:

bp1

Select the database that will be configured for Database Vault:

bp2

Select the Label Security and Database Vault components, then click Next:

bp3

We can also select these options when installing the Oracle database:

bp4

On the next screen, specify the username and password for the Database Vault owner and the account manager.

bp5

Click Next, select dedicated server mode, click OK, and then finish the Database Vault configuration.

bp6

bp7

bp8

You can check if Oracle Database Vault is enabled or disabled by querying the V$OPTION data dictionary view. If Oracle Database Vault is enabled, the query returns TRUE. Otherwise, it returns FALSE.

Remember that the PARAMETER column value is case sensitive. For example:

SQL> SELECT * FROM V$OPTION WHERE PARAMETER = 'Oracle Database Vault';

PARAMETER                                                                   VALUE
---------------------------------------------------------------- ---------------
Oracle Database Vault                                                      TRUE

In the next blog post I will write about the Oracle Database Vault access control components, the changes that happen after Database Vault installation, and Database Vault management.

Using Password Files


Password files allow you to set passwords that are stored outside the database and that are used for authenticating administrators. These passwords are stored in an external file that is encrypted by Oracle, and they can be used even when the database is down.

To create a password file use the ORAPWD utility:

[oracle@orcl ~]$ orapwd file=./test_pwd entries=100 ignorecase=n
Enter password for SYS: 
[oracle@orcl ~]$ 
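
To see which users have been granted SYSDBA or SYSOPER through the password file, query V$PWFILE_USERS:

SQL> SELECT USERNAME, SYSDBA, SYSOPER FROM V$PWFILE_USERS;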

After creating the password file, you must set the REMOTE_LOGIN_PASSWORDFILE initialization parameter to an appropriate value (NONE, EXCLUSIVE, or SHARED).
NONE causes Oracle to behave as though a password file does not exist. EXCLUSIVE means the password file is used by only one database and can be modified from within that database. SHARED allows a single password file to be used by multiple databases, but none of them can update it; if you need to update the password file, switch the parameter to EXCLUSIVE in one of the databases, change the password file, and then switch it back to SHARED.

SQL> show parameter remote_login_passwordfile;

NAME				        TYPE	   VALUE
--------------------------  ----------- ---------------
remote_login_passwordfile	  string	   EXCLUSIVE
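
If you need to change this value, note that REMOTE_LOGIN_PASSWORDFILE is a static parameter: it can only be changed in the SPFILE and takes effect after a restart. A minimal sketch:

SQL> ALTER SYSTEM SET REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE SCOPE=SPFILE;
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP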

Now try to connect to Oracle as SYSDBA from another machine:

C:\Users\Valeh>sqlplus sys@db11g as sysdba
SQL*Plus: Release 11.2.0.1.0 Production on Sun Jul 5 00:17:21 2015
Copyright (c) 1982, 2010, Oracle.  All rights reserved.
Enter password:

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL>