Friday, September 11, 2009
Data Pump
Export and import operations in Data Pump can detach from a long-running job and reattach to it later without affecting the job. You can also remap data during export and import operations: the names of data files, schemas, and tablespaces from the source can be altered to different names on the target system. Data Pump also supports fine-grained object selection using the EXCLUDE, INCLUDE, and CONTENT parameters.
Data Pump Export (expdp) is the utility for unloading data and metadata from the source database into a set of operating system files (dump file sets). Data Pump Import (impdp) is used to load data and metadata stored in these export dump file sets into a target database.
The advantages of using Data Pump utilities are as follows.
You can detach from a long-running job and reattach to it later without affecting the job. The DBA can monitor jobs from multiple locations, stop the jobs, and restart them later from where they were left.
Data Pump supports fine-grained object selection using the EXCLUDE, INCLUDE, and CONTENT parameters. This will help in exporting and importing a subset of data from a large database to development databases or from data warehouses to datamarts, and so on.
You can control the number of threads working for the Data Pump job and thereby control its speed (Enterprise Edition only).
You can remap data during the export and import processes.
In Oracle Database 10g Release 2, a default DATA_PUMP_DIR directory object and additional DBMS_DATAPUMP API calls have been added, along with provisions for compression of metadata in dump files, and the capability to control the dump file size with the FILESIZE parameter.
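To make this concrete, here is a minimal sketch of a schema-level export and import; the schema name, password, dump file names, and the target schema HR_DEV are hypothetical, while DATA_PUMP_DIR is the default directory object mentioned above.
expdp hr/hr_pwd DIRECTORY=DATA_PUMP_DIR DUMPFILE=hr.dmp SCHEMAS=hr EXCLUDE=STATISTICS LOGFILE=hr_exp.log
impdp hr/hr_pwd DIRECTORY=DATA_PUMP_DIR DUMPFILE=hr.dmp REMAP_SCHEMA=hr:hr_dev LOGFILE=hr_imp.log
The EXCLUDE parameter on the export and REMAP_SCHEMA on the import illustrate the fine-grained selection and remapping capabilities described above.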
Tuesday, September 8, 2009
Oracle 10g Processes
An Oracle database instance can have many background processes, but not all of them are always needed. These background processes are created automatically when the database instance is started. The important background processes are given next, with brief explanations of each:
Database Writer process (DBWn). The database writer process (DBWn) is responsible for writing modified (dirty) buffers in the database buffer cache to disk. Although one process (DBW0) is sufficient for most systems, you can have additional processes, up to a maximum of 20 (DBW1 through DBW9 and DBWa through DBWj), to improve write performance on heavy online transaction processing (OLTP) systems. By moving the data in dirty buffers to disk, DBWn improves the performance of finding free buffers for new transactions while retaining the recently used buffers in memory.
Log Writer process (LGWR). The log writer process (LGWR) manages the redo log buffer by writing its contents to a redo log file on disk in a circular fashion. After LGWR moves the redo entries from the redo log buffer to a redo log file, server processes can write new entries into the freed space in the buffer. LGWR writes to disk fast enough to keep space available in the buffer for new entries.
Checkpoint process (CKPT). Checkpoint is the database event to synchronize the modified data blocks in memory with the data files on disk. It helps to establish data consistency and allows faster database recovery. When a checkpoint occurs, Oracle updates the headers of all data files using the CKPT process and records the details of the checkpoint. The dirtied blocks are written to disk by the DBWn process.
System Monitor process (SMON). SMON coalesces the contiguous free extents within dictionary managed tablespaces, cleans up the unused temporary segments, and does the database recovery at instance startup (as needed). During the instance recovery, SMON also recovers any skipped transactions. SMON checks periodically to see if the instance or other processes need its service.
Process Monitor process (PMON). When a user process fails, the process monitor (PMON) performs process recovery by cleaning up the database buffer cache and releasing the resources held by that user process. It also periodically checks the status of dispatcher and server processes, and restarts any that have stopped. PMON conveys the status of instance and dispatcher processes to the network listener and, like SMON, is activated whenever its service is needed.
Job Queue processes (Jnnn). Job queue processes run user jobs for batch processing, like a scheduler service. When a start date and a time interval are assigned, the job queue processes run the job at the next occurrence of the interval. These processes are dynamically managed, allowing the job queue clients to demand more job queue processes (J000 through J999) when needed. The job queue processes are spawned as required by the coordinator process (CJQ0 or CJQnn) for completion of scheduled jobs.
Archiver processes (ARCn). The archiver processes (ARCn) copy the redo log files to an assigned destination after the log switch. These processes are present only in databases running in ARCHIVELOG mode. An Oracle instance can have up to 10 ARCn processes (ARC0 through ARC9). Whenever the current number of ARCn processes becomes insufficient for the workload, the LGWR process invokes additional ARCn processes, which are recorded in the alert log. The number of archiver processes is set with the initialization parameter LOG_ARCHIVE_MAX_PROCESSES and can be changed with the ALTER SYSTEM command.
Queue Monitor processes (QMNn). The queue monitor process is an optional background process that monitors the message queues for Oracle Streams Advanced Queuing.
Memory Monitor process (MMON). MMON is the acronym for Memory Monitor, a new process introduced in Oracle Database 10g associated with the Automatic Workload Repository (AWR). AWR gets the necessary statistics for automatic problem detection and tuning with the help of MMON. MMON writes out the required statistics for AWR to disk on a regularly scheduled basis.
Memory Monitor Light process (MMNL). Memory Monitor Light (MMNL) is another new process in Oracle Database 10g; it assists the AWR with writing the full statistics buffers to disk on an as-needed basis.
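To see which background processes are actually running in your instance, you can query the v$bgprocess view; a quick sketch (the PADDR filter is a common idiom for listing only the started processes):
SQL> SELECT name, description FROM v$bgprocess WHERE paddr <> '00';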
Monday, September 7, 2009
Automatic Storage Management
Oracle Database 10g introduces Automatic Storage Management (ASM), a service that provides management of disk drives. ASM can be used in a variety of configurations, including Oracle Database 10g RAC installations. ASM is an alternative to the use of raw or cooked file systems and is part of Oracle's overall effort to make management of Oracle Database 10g easier.
ASM Features
ASM offers a number of features, including:
Simplified daily administration
The performance of raw disk I/O for all ASM files
Compatibility with any type of disk configuration, be it just a bunch of disks (JBOD) or complex Storage Area Network (SAN)
Use of a consistent file-naming convention, enforcing enterprise-wide standards for file names
Prevention of the accidental deletion of files, since there is no file system interface and ASM is solely responsible for file management
Load balancing of data across all ASM managed disk drives, which helps improve performance by removing disk hot spots
Dynamic load balancing of disks as usage patterns change and when additional disks are added or removed
Ability to mirror data on different disks to provide fault tolerance
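As a sketch of how a disk group might be created from an ASM instance (the disk group name and disk paths are hypothetical):
SQL> CREATE DISKGROUP data NORMAL REDUNDANCY DISK '/dev/raw/raw1', '/dev/raw/raw2';
A database instance can then place files in the group simply by referencing it, for example with DATAFILE '+DATA' in a CREATE TABLESPACE statement, leaving file naming and placement to ASM.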
Saturday, September 5, 2009
Flashback Feature
The term flashback was first introduced in Oracle 9i in the form of the new technology known as flashback query. When enabled, flashback query provided a new alternative for correcting user errors, recovering lost or corrupted data, and even performing historical analysis, without all the hassle of point-in-time recovery.
The new flashback technology was a wonderful addition to Oracle 9i, but it still had several dependencies and limitations. First, flashback forced you to use Automatic Undo Management (AUM) and to set the UNDO_RETENTION parameter. Second, you needed to make sure your flashback users had the proper privileges on the DBMS_FLASHBACK package. After these prerequisite steps were completed, users were able to use the flashback technology based on the SCN (system change number) or by time. However, it's important to note that regardless of whether users used the time or SCN method, Oracle would only resolve the flashback point to the nearest five-minute interval. Furthermore, when using the flashback time method in a database that had been running continuously, you could never flash back more than five days, irrespective of your UNDO_RETENTION setting. Also, flashback query did not support DDL operations that modified columns, or any DROP or TRUNCATE commands on tables.
Oracle 9i offers an unsupported method for extending the five-day limit on time-based flashback. Basically, the SCN-to-timestamp mapping is stored in SYS.SMON_SCN_TIME, which has only 1,440 rows by default (5 days × 24 hours × 12 five-minute intervals per hour = 1,440). At each five-minute interval, the current SCN is inserted into SMON_SCN_TIME while the oldest SCN is removed, keeping the record count at 1,440 at all times. To extend the five-day limit, all you need to do is insert new rows into the SMON_SCN_TIME table. Oracle 10g extends this limit to eight days by extending the default record count of SMON_SCN_TIME to 2,304 (8 days × 24 hours × 12 intervals = 2,304).
Oracle 10g has evolved the flashback technology to cover nearly every aspect of the database. Following is a short summary of the new flashback technology available in Oracle 10g:
Flashback Database. Now you can return your entire database to a previous point in time by undoing all modifications that occurred since that time, without the need to restore from a backup.
Flashback Table. This feature allows you to recover an entire table to a point in time without having to restore from a backup.
Flashback Drop. This feature allows you to restore an accidentally dropped table.
Flashback Versions Query. This feature allows you to retrieve a history of changes for a specified set of rows within a table.
Flashback Transaction Query. This feature allows users to review and/or recover modifications made to the database at the transaction level.
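A quick sketch of two of these features in action (the table name is hypothetical, and flashing a table back to a timestamp requires row movement to be enabled on it):
SQL> ALTER TABLE employees ENABLE ROW MOVEMENT;
SQL> FLASHBACK TABLE employees TO TIMESTAMP (SYSTIMESTAMP - INTERVAL '15' MINUTE);
SQL> FLASHBACK TABLE employees TO BEFORE DROP;
The first FLASHBACK TABLE rewinds the table's contents by fifteen minutes; the second, run after an accidental DROP TABLE, restores the table from the recycle bin.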
Tuesday, August 18, 2009
Oracle 10g New Features
The SYSAUX tablespace is an auxiliary tablespace that provides storage for all non-SYS-related tables and indexes that would otherwise have been placed in the SYSTEM tablespace. SYSAUX is required for all Oracle Database 10g installations and is created during new installs and upgrades. Many database components use SYSAUX as their default tablespace to store data. The SYSAUX tablespace also reduces the number of tablespaces created by default in the seed database and in user-defined databases. For a RAC implementation on raw devices, a raw device is created for every tablespace, which complicates raw-device management; by consolidating these tablespaces into SYSAUX, you can reduce the number of raw devices. The SYSAUX and SYSTEM tablespaces share the same security attributes.
When the CREATE DATABASE command is executed, the SYSAUX tablespace and its occupants are created automatically. When a database is upgraded to 10g, the SYSAUX tablespace must be created explicitly with the CREATE TABLESPACE command; CREATE TABLESPACE SYSAUX is valid only while the database is in migration (upgrade) mode.
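As a sketch, the statement run during an upgrade looks like the following; the file name and size are hypothetical, while the locally managed and automatic segment space management attributes are mandatory for SYSAUX.
SQL> CREATE TABLESPACE sysaux DATAFILE '/u01/oradata/orcl/sysaux01.dbf' SIZE 500M EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;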
Rename Tablespace Option
Oracle Database 10g introduces the ability to rename tablespaces. To accomplish this in older versions, you had to create a new tablespace, copy the contents from the old tablespace to the new one, and drop the old tablespace. The Rename Tablespace feature simplifies tablespace migrations within a database and eases the transport of a tablespace between two databases. The feature applies only to database versions 10.0.1 and higher. An offline tablespace, a tablespace with offline data files, and the SYSTEM and SYSAUX tablespaces cannot be renamed. As an example, the syntax to rename tablespace REPORTS to REPORTS_HISTORY is as follows.
SQL> ALTER TABLESPACE REPORTS RENAME TO REPORTS_HISTORY;
Automatic Storage Management
Automatic Storage Management (ASM) is a new database feature for efficient management of storage with round-the-clock availability. It spares the DBA from managing thousands of database files across multiple database instances by using disk groups, which are composed of disks and the files resident on them. ASM does not eliminate any existing database functionality with file systems, raw devices, or Oracle Managed Files (OMF). ASM also supports RAC configurations.
In Oracle Database 10g Release 2, the ASM command-line interface (ASMCMD) has been improved to access files and directories within ASM disk groups. Other enhancements include uploading and extracting of files into an ASM-managed storage pool, a WAIT option on disk rebalance operations, and the facility to perform batch operations on multiple disks.
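A short sketch of what an ASMCMD session might look like (the disk group name DATA is hypothetical):
$ asmcmd
ASMCMD> lsdg
ASMCMD> ls +DATA
Here lsdg lists the mounted disk groups with their space usage, and ls browses the directory tree inside a disk group.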
Now it's time to take a deeper look into the new storage structures in Oracle Database 10g: temporary tablespace groups and BigFile tablespaces. These will help you to better utilize the database space and to manage space more easily.
Temporary Tablespace Group
A temporary tablespace is used by the database for storing temporary data, which is not covered by recovery operations. A temporary tablespace group (TTG) is a group of temporary tablespaces. A TTG contains at least one temporary tablespace, and its name must differ from that of any tablespace. Multiple temporary tablespaces can be specified at the database level and used in different sessions at the same time. Similarly, if a TTG is used, a database user with multiple sessions can have a different temporary tablespace for each session. In effect, more than one default temporary tablespace can be assigned to a database. A single database operation can then use multiple temporary tablespaces in sorting operations, thereby speeding up the process and preventing large operations from running out of temporary space.
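A sketch of creating a group and making it the database default; the tablespace names, file name, and size are hypothetical, and temp2 is assumed to be an existing temporary tablespace.
SQL> CREATE TEMPORARY TABLESPACE temp1 TEMPFILE '/u01/oradata/orcl/temp1_01.dbf' SIZE 500M TABLESPACE GROUP temp_grp;
SQL> ALTER TABLESPACE temp2 TABLESPACE GROUP temp_grp;
SQL> ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp_grp;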
In Oracle Database 10g, each database user has a permanent tablespace for storing permanent data and a temporary tablespace for storing temporary data. In previous versions of Oracle, if a user was created without a default tablespace being specified, the SYSTEM tablespace became the default. In Oracle Database 10g, a default permanent tablespace can be defined for all new users that are created without a specific permanent tablespace. By creating a default permanent tablespace, non-system users' objects can be prevented from being created in the SYSTEM tablespace. Consider the benefit, in a large database environment, of adding users with a simple command and not having to worry about them placing their objects in the wrong tablespace!
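Setting a database-wide default permanent tablespace is a single statement (the tablespace name is hypothetical):
SQL> ALTER DATABASE DEFAULT TABLESPACE users;
Any user subsequently created without a DEFAULT TABLESPACE clause is assigned to this tablespace instead of SYSTEM.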
BigFile Tablespace
We have gotten past the days of tablespaces in the range of a few megabytes. These days, database tables hold a lot of data and are always hungry for storage. To address this craving, Oracle has come up with the BigFile tablespace concept. A BigFile tablespace (BFT) is a tablespace containing a single, very large data file. With the new addressing scheme in 10g, four billion blocks are permitted in a single data file, so file sizes can range from 8TB to 128TB, depending on the block size. To differentiate it from a BFT, a regular tablespace is called a smallfile tablespace. An Oracle Database 10g database can contain a mixture of smallfile and BigFile tablespaces.
BFTs are supported only for locally managed tablespaces with automatic segment space management, and for locally managed undo and temporary tablespaces. When BFTs are used with Oracle Managed Files, data files become completely transparent to the DBA, and no direct reference to them is needed. A BFT makes a tablespace logically equivalent to the data files of earlier releases, allowing operations that used to apply to data files to be performed on the tablespace itself. BFTs should be used only with a logical volume manager or ASM that supports dynamically extensible logical volumes, and with systems that support striping, to avoid negative consequences for RMAN backup parallelization and parallel query execution. BFTs should not be used when free disk space is limited.
Prior to Oracle Database 10g, K and M were used to specify data file sizes. Because the newer version introduces larger file sizes, up to 128TB with BFTs, sizes can now also be specified using G and T, for gigabytes and terabytes respectively. Almost all data warehouse implementations on older versions of the Oracle database use data files sized from 16GB to 32GB. With the advent of BigFile tablespaces, however, DBAs can build larger data warehouses without being intimidated by the sheer number of smaller data files.
Oracle block size    Maximum file size
2KB                  8TB
4KB                  16TB
8KB                  32TB
16KB                 64TB
32KB                 128TB
To use BFTs, the underlying operating system must support large files; in other words, the file system must have Large File Support (LFS).
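A sketch of creating and resizing a BigFile tablespace (the name, path, and sizes are hypothetical):
SQL> CREATE BIGFILE TABLESPACE user_data DATAFILE '/u01/oradata/orcl/user_data01.dbf' SIZE 10G;
SQL> ALTER TABLESPACE user_data RESIZE 20G;
Because a BFT has exactly one data file, the tablespace itself can be resized directly, a convenience not available with smallfile tablespaces.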
Cross-Platform Transportable Tablespaces
In the Oracle 8i database, the transportable tablespace feature enabled a tablespace to be moved across different Oracle databases running on the same operating system. Oracle Database 10g has significantly improved this functionality to permit the movement of data across different platforms. This allows transportable tablespaces to move data from one environment to another on selected heterogeneous platforms (operating systems). Using cross-platform transportable tablespaces, a database can be migrated from one platform to another by rebuilding the database catalog and transporting the user tablespaces. By default, the converted files are placed in the flash recovery area (also new to Oracle Database 10g).
A new data dictionary view, v$transportable_platform, lists all supported platforms, along with the platform ID and endian format information; the v$database view has two new columns (PLATFORM_ID, PLATFORM_NAME) to support it. The v$transportable_platform view has three fields: PLATFORM_ID (number), PLATFORM_NAME (varchar2(101)), and ENDIAN_FORMAT (varchar2(14)). Endianness is the byte-ordering pattern (big endian or little endian) of data files in native types: in big-endian format, the most significant byte comes first; in little-endian format, the least significant byte comes first.
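As a sketch, you can check the endian formats of the platforms involved and, if they differ, convert the tablespace's data files with RMAN; the tablespace name, platform name, and format string below are illustrative.
SQL> SELECT platform_name, endian_format FROM v$transportable_platform;
RMAN> CONVERT TABLESPACE users TO PLATFORM 'Linux IA (32-bit)' FORMAT '/tmp/%U';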
Performance Management Using AWR
Automatic Workload Repository (AWR) is the most important feature among the new Oracle Database 10g manageability infrastructure components. AWR provides the background services to collect, maintain, and utilize statistics for problem detection and self-tuning. AWR collects system-performance data at frequent intervals and stores it as historical system workload information for analysis. These metrics are kept in memory for performance reasons and are regularly written from memory to disk by a new background process called Memory Monitor (MMON). The data is later used to analyze performance problems that occurred in a given time period and to do trend analysis. Oracle does all this without any DBA intervention. Automatic Database Diagnostic Monitor (ADDM), which is discussed in the next section, analyzes the information collected by the AWR for database-performance problems.
By default, the data-collection interval is 60 minutes, and this data is stored for seven days, after which it is purged. This interval and data-retention period can be altered.
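A sketch of changing these defaults with the DBMS_WORKLOAD_REPOSITORY package; both values are given in minutes, and the numbers here are illustrative, setting a 30-minute interval and 14-day retention.
SQL> EXECUTE DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(interval => 30, retention => 20160);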
This captured data can be used for system-level and user-level analysis, and its collection is optimized to minimize database overhead. In a nutshell, AWR is the basis for all the self-management functionality of the database. It gives the database a historical perspective on its usage, enabling it to make accurate decisions quickly.
The AWR infrastructure has two major components:
In-memory statistics collection. The in-memory statistics collection facility is used by 10g components to collect statistics and store them in memory. These statistics can be read using the dynamic performance (V$) views. The in-memory statistics are written to disk regularly by a new background process called Memory Monitor, or Memory Manageability Monitor (MMON), which is also covered with the other background processes.
AWR snapshots. The AWR snapshots form the persistent portion of the Oracle Database 10g manageability infrastructure. They can be viewed through data dictionary views. Statistics retained in persistent storage survive instance crashes and provide historical data for baseline comparisons.
AWR collects many statistics, such as time model statistics (time spent by the various activities), object statistics (access and usage statistics of database segments), some session and system statistics in v$sesstat and v$sysstat, some optimizer statistics for self-learning and tuning, and Active Session History (ASH). We will review this in greater detail in later chapters.
Automatic Database Diagnostic Monitor (ADDM)
Automatic Database Diagnostic Monitor (ADDM) is the best resource for database tuning. Introduced in 10g, ADDM provides proactive and reactive monitoring in place of the tedious tuning process found in earlier Oracle versions. Proactive monitoring is done by ADDM and server-generated alerts; reactive monitoring is done by the DBA, who tunes manually through Oracle Enterprise Manager or SQL scripts.
Statistical information captured from the SGA (System Global Area) is stored inside the workload repository in the form of snapshots every 60 minutes. These detailed snapshots (similar to STATSPACK snapshots) are then written to disk. The MMON process, which runs automatically on every database instance, schedules ADDM to analyze the snapshots and proactively find problems.
Whenever a snapshot is taken, the ADDM triggers an analysis of the period corresponding to the last two snapshots. Thus it proactively monitors the instances and detects problems before they become severe. The analysis results stored in the workload repository are accessible through the Oracle Enterprise Manager. In Oracle Database 10g, the new wait and time model statistics help the ADDM to identify the top performance issues and concentrate its analysis on those problems. In Oracle Database 10g Release 2, ADDM spans more server components like Streams, RMAN, and RAC.
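On a 10g database, a quick way to review the latest ADDM findings is the addmrpt.sql script shipped with the database, which prompts for the snapshot range and a report name:
SQL> @?/rdbms/admin/addmrpt.sql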
DROP DATABASE Command
Oracle Database 10g introduces a means to drop the entire database with a single command: DROP DATABASE. The DROP DATABASE command deletes all data files, online redo log files, control files, and the server parameter file (spfile). The archived logs and backups, however, have to be deleted manually.
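As a sketch, the database must first be mounted in exclusive, restricted mode before it can be dropped:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP RESTRICT MOUNT;
SQL> DROP DATABASE;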