ACFS mount and unmount

2.6 Unmounting ACFS File Systems

ACFS file systems can be used by Oracle RAC for hosting database software files. They can also be used as general purpose file systems for non-database files. ACFS file systems are managed and administered by Oracle Grid Infrastructure, so they are impacted when the GI stack is shut down for patching GI homes.

Shut down the processes using the software files on ACFS and then unmount the ACFS file system.

Note:

Make sure to stop the non-Oracle processes that use ACFS file systems.

If the ACFS file system is used by Oracle database software, then perform Steps 1 and 2.

  1. Execute the following command to find the names of the CRS-managed ACFS file system resources.
    # crsctl stat res -w "TYPE = ora.acfs.type" -p | grep VOLUME
    
  2. Execute the following command to stop the CRS-managed ACFS file system resource, using the resource name found in Step 1 (a worked example follows the command below).

    As root user execute:

    # srvctl stop filesystem -d <volume device path> -n <node to stop file system on>
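
    For example, assuming the volume device returned in Step 1 is /dev/asm/acfsvol1-123 and the node is racnode1 (both hypothetical values; substitute your own), the command would look like:

    # srvctl stop filesystem -d /dev/asm/acfsvol1-123 -n racnode1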
    

If the ACFS file system is not used for Oracle Database software and is registered in the ACFS registry, perform the following steps.

  1. Execute the following command to find all ACFS file system mount points.

    As the root user execute:

    # /sbin/acfsutil registry
    
  2. Unmount the ACFS file systems found in Step 1 (a scripted example combining these steps follows Step 3).

    As the root user execute:

    # /bin/umount <mount-point>
    

    Note:

    On the Solaris operating system, use /sbin/umount.

    On the AIX operating system, use /etc/umount.

  3. Verify that the ACFS file systems are unmounted by executing the following command.

    As the root user execute:

    # /sbin/acfsutil info fs
    

    The previous command should return the following message if there are no ACFS file systems mounted.

    "acfsutil info fs: ACFS-03036: no mounted ACFS file systems"
    

2.7 Mounting ACFS File Systems

If the ACFS file system is used by Oracle database software, then perform Steps 1 and 2.

  1. Execute the following command to find the names of the CRS-managed ACFS file system resources.

    As root user execute:

    # crsctl stat res -w "TYPE = ora.acfs.type" -p | grep VOLUME
    
  2. Execute the following command to start and mount the CRS-managed ACFS file system resource, using the resource name found in Step 1 (a worked example follows the command below).

    As root user execute:

    # srvctl start filesystem -d <volume device path> -n <node to start file system on>
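
    For example, using the same hypothetical volume device and node as in the unmount section:

    # srvctl start filesystem -d /dev/asm/acfsvol1-123 -n racnode1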
    

If the ACFS file system is not used for Oracle Database software and is registered in the ACFS registry, these file systems should be mounted automatically when the CRS stack comes up. Perform Steps 1 and 2 if they are not already mounted.

  1. Execute the following command to find all ACFS file system mount points.

    As the root user execute:

    # /sbin/acfsutil registry
    
  2. Mount the ACFS file systems found in Step 1 (an example follows the command below).

    As the root user execute:

    # /bin/mount <mount-point>
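
    For example, assuming the registry lists /acfsmounts/acfsvol1 as a mount point (a hypothetical path; use the mount points returned in Step 1):

    # /bin/mount /acfsmounts/acfsvol1

    If mounting by mount point alone is not set up on the host, the Linux form that names both the volume device and the mount point can be used instead, for example /bin/mount -t acfs /dev/asm/acfsvol1-123 /acfsmounts/acfsvol1.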
    

Converting Data Guard from SYNC to ASYNC

SYNC TO ASYNC

Log in to the PRIMARY database as a DBA:

alter system set log_archive_dest_state_2=defer scope=memory sid='*';
alter system set log_archive_dest_2='SERVICE=STANDBY_SERVICE ASYNC NOAFFIRM VALID_FOR=(ONLINE_LOGFILE,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBYDB' scope=both sid='*';
alter system set log_archive_dest_state_2=enable scope=memory sid='*';

Monitor the database until it re-syncs, and monitor it during the bulk loading process.
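
A simple way to watch the re-sync is to check the state of the standby destination. This is a sketch that assumes dest_id 2 is the standby destination, as in the commands above. On the primary:

select dest_id, status, error
from v$archive_dest_status
where dest_id = 2;

and on the standby, the current lag figures:

select name, value
from v$dataguard_stats
where name in ('transport lag', 'apply lag');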

ASYNC TO SYNC

alter system set log_archive_dest_state_2=defer scope=memory sid='*';
alter system set log_archive_dest_2='SERVICE=STANDBY_SERVICE SYNC NOAFFIRM VALID_FOR=(ONLINE_LOGFILE,PRIMARY_ROLE) DB_UNIQUE_NAME=STANDBYDB' scope=both sid='*';
alter system set log_archive_dest_state_2=enable scope=memory sid='*';
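
To confirm the destination has picked up the new settings, a quick check on the primary (dest_id 2 assumed as above); TRANSMIT_MODE and AFFIRM should reflect the SYNC change once it has taken effect:

select dest_id, transmit_mode, affirm, status
from v$archive_dest
where dest_id = 2;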

 

Notes on SYNC and ASYNC

All database changes (insert/update/delete) in PRIMARY must be ACK'd by the disk subsystem (redo) and by Data Guard in STANDBY as we are currently running in SYNC mode.

When in Snapshot mode for testing, STANDBY is not shipping any transactions anywhere else, and only needs the local disk-level (redo) ACK.

– SYNC means we will not lose any data.

– ASYNC means we may lose an amount of data. The amount cannot be quantified without testing.

Switching between SYNC and ASYNC requires the change of an Oracle initialisation parameter and can be done online whilst the system is running.

We cannot state definitively that this is the difference between PRIMARY and STANDBY at this point. Ideally we need to monitor the “slow” PRIMARY database whilst it is processing transactions so we can observe the wait events. The nature of the wait events will tell us where we are spending our time and what the bottleneck is. There is a reasonable probability that it is the SYNC mode, but we should confirm this before making infrastructure and/or data-loss decisions.
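
As a starting point for that monitoring, the recent wait events on the primary can be summarised from ASH (a sketch; adjust the one-hour window to suit). Redo-transport related waits dominating the top of the list would support the SYNC theory:

select event, count(*) samples
from v$active_session_history
where sample_time > sysdate - 1/24
and session_state = 'WAITING'
group by event
order by 2 desc;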

FAR SYNC

There is a half-way house with Data Guard called Far Sync. This puts an intermediary server containing only transactional information (redo only, not the whole DB) in PRIMARY's data centre on the LAN, which ACKs the local SYNC processing and then sends the redo on ASYNC to STANDBY. If this server is located on hardware which is not shared with the DB in any way (e.g. the SAN), then we protect against data loss for all scenarios except a total DC outage or a total network failure between sites.
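
As a sketch of how that would be configured without the broker (the service names, DB_UNIQUE_NAMEs and destination number here are hypothetical, and the far sync instance itself, its control file and its standby redo logs are assumed to already be in place), the primary ships SYNC to the far sync instance and the far sync instance forwards the redo ASYNC to the standby:

-- On the primary: ship SYNC to the far sync instance
alter system set log_archive_dest_2='SERVICE=FARSYNC_SERVICE SYNC AFFIRM VALID_FOR=(ONLINE_LOGFILE,PRIMARY_ROLE) DB_UNIQUE_NAME=FARSYNC' scope=both sid='*';

-- On the far sync instance: forward the received redo ASYNC to the standby
alter system set log_archive_dest_2='SERVICE=STANDBY_SERVICE ASYNC NOAFFIRM VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=STANDBYDB' scope=both sid='*';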

Identifying slow-running SQLs on a table

We had a scenario where a new index was introduced on a table in a recent release, and I had to find all the SQLs using this new index along with their elapsed times. A simple script works well for this.

select distinct vsq.sql_id,
       substr(vsq.sql_text, 1, 60) sql_text,
       round((vsq.elapsed_time / vsq.executions) / 1000000, 0) elap_sec
from v$sqlarea vsq, v$sql_plan dsq
where vsq.sql_id = dsq.sql_id
and vsq.executions > 0
and dsq.object_name = 'xxx'
order by 3;

Replace 'xxx' with whichever object name you want to find dependent SQLs for. To identify all the SQLs hitting a particular table along with their elapsed times, replace 'xxx' with the table name. This is really useful when you want to identify slow-running SQLs on a particular table.

Querying AWR tables for session details

To look back at the session history between two dates, I use the following query to identify the sessions and the objects they were accessing at a point in time. The instance number filter can be ignored if it is a single node.

SELECT TO_CHAR(ash.sample_time, 'YYYY-MM-DD HH24:MI:SS') sample_time,
       ash.session_id,
       ash.blocking_session,
       ash.event,
       o.object_name,
       ash.sql_id
FROM   dba_hist_active_sess_history ash,
       dba_objects o,
       dba_hist_snapshot dbs
WHERE  ash.current_obj# = o.object_id(+)
AND    dbs.snap_id = ash.snap_id
AND    dbs.instance_number = ash.instance_number
AND    dbs.begin_interval_time BETWEEN
         TO_TIMESTAMP('12.10.2012 04:30:00', 'dd.mm.yyyy hh24:mi:ss') AND
         TO_TIMESTAMP('12.10.2012 05:00:00', 'dd.mm.yyyy hh24:mi:ss')
AND    ash.session_state = 'WAITING'
AND    ash.instance_number = 1
ORDER BY sample_time ASC;