Thursday, October 7, 2010

Snapshot Standby Database: Oracle 11g Data Guard New Feature

Snapshot Standby Database


Oracle 11g introduces a new feature in Data Guard: the snapshot standby database, which can be opened in read-write mode. A snapshot standby database is a fully updatable standby database that is created by converting a physical standby database.

Because this database is in read-write mode, it can process transactions independently of the primary database (which makes it useful for Real Application Testing). It still maintains data protection by archiving the redo logs shipped from the primary database.

Characteristics of snapshot standby database:

1. A snapshot standby database receives redo from the primary database but does not apply it.

2. Redo data received from the primary database is applied automatically once the standby database is converted back into a physical standby database.

3. All updates that were made in the snapshot standby database will be discarded when it is converted back to a physical standby database.

4. A snapshot standby database cannot be a candidate for a switchover or failover. A snapshot standby database must first be converted back into a physical standby database before performing a role transition.

5. After a switchover or failover between the primary database and one of the physical or logical standby databases in a configuration, the snapshot standby database can receive redo data from the new primary database after the role transition.

6. A snapshot standby database cannot be the only standby database in a Maximum Protection Data Guard configuration.

Steps to Convert a Physical Standby Database to a Snapshot Standby Database

The conversion from physical standby to snapshot standby database and vice versa can be done in three ways:

1. Data Guard Broker.

2. Enterprise Manager.

3. Manual method, by issuing the following SQL command on the physical standby database.

SQL> ALTER DATABASE CONVERT TO SNAPSHOT STANDBY;

1. If the flash recovery area is not already configured, configure it as shown below:

First, set the size of the recovery area:

SQL> alter system set db_recovery_file_dest_size='xxxx';

2. Next, set the location:

SQL> alter system set db_recovery_file_dest='yyyy';
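For example, with illustrative values (the size and path below are assumptions, not recommendations):

SQL> alter system set db_recovery_file_dest_size=10G;

SQL> alter system set db_recovery_file_dest='/u01/app/oracle/flash_recovery_area';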

3. Bring the physical standby database to the mount stage:
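SQL> shutdown immediate

SQL> startup mount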

4. Stop managed recovery if it is active, and then convert the physical standby database to a snapshot standby database.
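If managed recovery is running, cancel it first:

SQL> alter database recover managed standby database cancel;

Then issue the conversion command: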

SQL> alter database convert to snapshot standby;

The database is dismounted during conversion and must be restarted after the conversion completes.

The database is now ready for transactions.
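You can confirm the new role once the database is open:

SQL> select open_mode, database_role from v$database;

The DATABASE_ROLE column should now show SNAPSHOT STANDBY.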



When a physical standby database is converted into a snapshot standby database, an implicit guaranteed restore point is created. This restore point is used to flash the snapshot standby database back to its original state when it is converted back into a physical standby database.
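The implicit restore point can be viewed with:

SQL> select name, guarantee_flashback_database from v$restore_point;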

Steps to Convert the Snapshot Standby Database Back to a Physical Standby Database

1. Shut down the snapshot standby database

2. Bring the database to the mount state

3. Issue the following command:

SQL> alter database convert to physical standby;

4. Shut down the database and mount it again, then verify the role:

SQL> select open_mode, database_role from v$database;

You are now ready to start the media recovery process.

Once a snapshot standby database has been converted back into a physical standby database and restarted, start redo apply. All of the redo received while the database was a snapshot standby will then be applied to the physical standby database to roll it forward.
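To start redo apply:

SQL> alter database recover managed standby database disconnect from session;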

Wednesday, August 25, 2010

Manual cleanup of failed CRS installation (10gR2)

Manual cleanup of CRS installation


At times we will have to perform a manual cleanup of a failed CRS installation, or of some old CRS installation.

Oracle provides two scripts to clean up CRS. Their names and locations are given below.

1) Run the rootdelete.sh script located in the '$ORA_CRS_HOME/install' directory (on all nodes)

2) Run the rootdeinstall.sh script located in the '$ORA_CRS_HOME/install' directory (on one node only)

If the CRS home was deleted before running the above-mentioned scripts, the following steps can be used to clean up CRS manually.

Note: The locations used in the following cleanup process are based on the Linux platform; they may differ slightly on other flavors of the OS.

Manual Cleanup of CRS on LINUX

1. As the root user, remove the below-mentioned files on ALL NODES that form the cluster

rm /etc/oracle/*

rm -f /etc/init.d/init.cssd

rm -f /etc/init.d/init.crs

rm -f /etc/init.d/init.crsd

rm -f /etc/init.d/init.evmd

rm -f /etc/rc2.d/K96init.crs

rm -f /etc/rc2.d/S96init.crs

rm -f /etc/rc3.d/K96init.crs

rm -f /etc/rc3.d/S96init.crs

rm -f /etc/rc5.d/K96init.crs

rm -f /etc/rc5.d/S96init.crs

rm -Rf /etc/oracle/scls_scr

rm -f /etc/inittab.crs

cp /etc/inittab.orig /etc/inittab

2. If the EVM, CRS, and CSS processes are not already down, kill them as the root user on ALL NODES

ps -ef | grep evm

kill -9 <evm_pid>

ps -ef | grep crs

kill -9 <crs_pid>

ps -ef | grep css

kill -9 <css_pid>
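A minimal sketch that finds and kills all three daemons in one pass (it assumes the process names contain the strings evm, crs, and css):

for d in evm crs css; do
  pid=$(ps -ef | grep "$d" | grep -v grep | awk '{print $2}')
  [ -n "$pid" ] && kill -9 $pid
done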

3. Remove the files in /var/tmp/.oracle or /tmp/.oracle on ALL NODES

rm -f /var/tmp/.oracle/*

or

rm -f /tmp/.oracle/*

4. Make sure that you remove the ocr.loc file at /etc/oracle on ALL NODES
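rm -f /etc/oracle/ocr.loc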

5. De-install the CRS home using the Oracle Universal Installer; this removes the CRS home from the inventory. The installer may not be able to remove all of the files because the CRS home has already been deleted, but this step helps clean up the inventory.

6. Remove the CRS install location on ALL NODES

7. Remove the OCR and voting disk from the shared location, from any one node
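If the OCR and voting disk reside on raw devices, they can be wiped with dd. A minimal sketch, assuming the (hypothetical) devices /dev/raw/raw1 and /dev/raw/raw2 hold the OCR and voting disk:

dd if=/dev/zero of=/dev/raw/raw1 bs=1M count=25
dd if=/dev/zero of=/dev/raw/raw2 bs=1M count=25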

Once these steps are performed, the hosts are ready for the next CRS installation.

Wednesday, August 18, 2010

Data Guard Broker

Data Guard Broker


Data Guard Broker is a management utility shipped with the Oracle software that can be used to create and manage Data Guard configurations. Data Guard Broker consists of server-side and client-side components.

The server-side components are the Data Guard monitor process (DMON) and the configuration files. The client-side components are Data Guard Manager and the command-line interface (CLI).

Data Guard Broker manages a Data Guard configuration using a broker management model: a hierarchical structure comprising configuration, site, and database resources. The broker can manage all three layers of this model.


Data Guard Broker can manage the primary database and up to nine (9) standby databases in one configuration (in Oracle 10g). These nine standby databases can be a mixture of physical and logical standby databases. Standby databases can be created using Data Guard Broker, and it can also be used to add an existing standby database to a configuration.

When a standby database is created using Data Guard Broker, the broker takes care of the entire set of supporting files such as the SPFILE, Oracle Net configuration files, and so on. If an existing standby database is to be added to the Data Guard configuration, the SPFILE and Oracle Net files must be explicitly configured before the standby database is added. If Data Guard Broker is to be used to manage the standby databases, an SPFILE must be used on all of the databases.



Some of the advantages of using Data Guard Broker are listed below:

1. It is a centralized management tool that can be used to manage the Data Guard configuration through a CLI or GUI interface (see the CLI example after this list).

2. It reduces the complexity of role management services (switchover and failover).

3. It provides an extensive health check mechanism for both the primary and standby databases.

4. It can be used to gather useful statistics for fine-tuning the log transport and log apply services.
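As a minimal sketch of the CLI, a DGMGRL session might look like the following (the password and database name here are hypothetical):

DGMGRL> connect sys/password

DGMGRL> show configuration;

DGMGRL> switchover to 'standby_db';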

In Oracle 9i, Data Guard Broker cannot be used with Oracle Real Application Clusters (RAC); RAC support is provided starting with Oracle 10g.

Tuesday, August 17, 2010

Obsolete Parameters for 9i

The following parameters are obsolete in Oracle 9i:


1 ) _average_dirties_half_life 2 ) _lm_statistics
3 ) allow_partial_sn_results 4 ) always_anti_join
5 ) always_semi_join 6 ) arch_io_slaves
7 ) b_tree_bitmap_plans 8 ) backup_disk_io_slaves
9 ) cache_size_threshold 10 ) cleanup_rollback_entries
11 ) close_cached_open_cursors 12 ) compatible_no_recovery
13 ) complex_view_merging 14 ) cpu_count
15 ) db_block_checkpoint_batch 16 ) db_block_lru_extended_statistics
17 ) db_block_lru_latches 18 ) db_block_lru_statistics
19 ) db_block_max_dirty_target 20 ) db_file_simultaneous_writes
21 ) delayed_logging_block_cleanout 22 ) discrete_transactions_enabled
23 ) distributed_lock_timeout 24 ) distributed_recovery_connection_hold_time
25 ) distributed_transactions 26 ) dblink_encrypt_login
27 ) fast_full_scan_enabled 28 ) freeze_DB_for_fast_instance_recovery
29 ) gc_defer_time 30 ) gc_latches
31 ) gc_lck_procs 32 ) gc_releasable_locks
33 ) gc_rollback_locks 34 ) hash_multiblock_io_count
35 ) instance_nodeset 36 ) job_queue_interval
37 ) job_queue_keep_connections 38 ) large_pool_min_alloc
39 ) lgwr_io_slaves 40 ) lm_locks
41 ) lm_procs 42 ) lm_ress
43 ) lock_sga_areas 44 ) log_block_checksum
45 ) log_files 46 ) log_simultaneous_copies
47 ) log_small_entry_max_size 48 ) log_archive_buffers
49 ) log_archive_buffer_size 50 ) ogms_home
51 ) ops_admin_group 52 ) ops_interconnects
53 ) optimizer_percent_parallel 54 ) optimizer_search_limit
55 ) parallel_default_max_instances 56 ) parallel_min_message_pool
57 ) parallel_server_idle_time 58 ) parallel_transaction_resource_timeout
59 ) push_join_predicate 60 ) row_cache_cursors
61 ) sequence_cache_entries 62 ) sequence_cache_hash_buckets
63 ) shared_pool_reserved_min_alloc 64 ) snapshot_refresh_interval
65 ) snapshot_refresh_keep_connections 66 ) snapshot_refresh_processes
67 ) sort_direct_writes 68 ) sort_multiblock_read_count
69 ) sort_read_fac 70 ) sort_spacemap_size
71 ) sort_write_buffer_size 72 ) sort_write_buffers
73 ) spin_count 74 ) temporary_table_locks
75 ) text_enable 76 ) use_ism
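The obsolete parameters can also be listed from the database itself by querying the v$obsolete_parameter view; for example:

SQL> select name, isspecified from v$obsolete_parameter;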

Wednesday, July 21, 2010

RAC User Equivalence - RSA and DSA

Why use both RSA and DSA when configuring SSH for user equivalence?


There are two versions of the SSH protocol: version 1 supports only RSA keys, while version 2 supports both RSA and DSA keys, so we create both types of keys to ensure that SSH can authenticate under either version. The ssh-keygen program generates public and private keys of either type, depending on the parameters passed to it.


Most SSH clients support both SSH protocols (SSH1 and SSH2). SSH2 is regarded as the more secure protocol, so many users configure their clients to attempt it first and fall back to SSH1 only if the remote server does not support SSH2. It is highly recommended that users configure their clients to try the SSH2 protocol on the first connection attempt.
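With OpenSSH, this preference can be expressed in the client configuration file; a minimal sketch (note that the Protocol option is historical and has been removed from recent OpenSSH releases along with SSH1 support):

# ~/.ssh/config
Protocol 2,1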

* The SSH1 protocol supports RSA keys.
* The SSH2 protocol supports RSA and DSA keys.

To make sure that the connection succeeds, we use both RSA and DSA keys when configuring SSH for user equivalence.

Usage for RSA:

ssh-keygen -t rsa

Usage for DSA:

ssh-keygen -t dsa
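Putting it together, a minimal sketch of generating both key pairs for user equivalence (run as the oracle user; the file names are the ssh-keygen defaults, and the public keys from every node must end up in authorized_keys on each node):

ssh-keygen -t rsa -f ~/.ssh/id_rsa
ssh-keygen -t dsa -f ~/.ssh/id_dsa
cat ~/.ssh/id_rsa.pub ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys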