
Saturday, September 19, 2009

People you want to know

Founders of Google

People behind Orkut

Linus Torvalds, creator of the Linux operating system

Friday, September 18, 2009

Different Types Of Databases

Operational database
These databases store the detailed data needed to support the operations of an entire organization. They are also called subject-area databases (SADB), transaction databases, or production databases. Examples include:
Customer databases
Personal databases
Inventory databases
Analytical database
These databases store data and information extracted from selected operational and external databases. They consist of the summarized data and information most needed by an organization's managers and other end users. They may also be called multidimensional databases, management databases, or information databases.
Data warehouse
A data warehouse stores data from current and previous years that has been extracted from the various operational databases of an organization. It is the central source of data that has been screened, edited, standardized, and integrated so that it can be used by managers and other end-user professionals throughout an organization.
Distributed database
These are databases of local work groups and departments at regional offices, branch offices, manufacturing plants and other work sites. These databases can include segments of both common operational and common user databases, as well as data generated and used only at a user’s own site.
End-user database
These databases consist of a variety of data files developed by end users at their workstations. Examples include collections of documents such as spreadsheets and word processing files, and even downloaded files.
External database
These are privately owned online databases or data banks whose contents are made available to end users and organizations, usually for a fee, by commercial online services. A wealth of external information is also available, with or without charge, from many sources on the Internet.
Hypermedia databases on the web
These are sets of interconnected multimedia pages at a website. A hypermedia database consists of a home page and other hyperlinked pages of multimedia or mixed media, such as text, graphics, photographic images, video clips, and audio.
Navigational database
Navigational databases are characterized by the fact that objects in it are found primarily by following references from other objects. Traditionally navigational interfaces are procedural, though one could characterize some modern systems like XPath as being simultaneously navigational and declarative.
In-memory databases
In-memory databases are database management systems that primarily rely on main memory for data storage, in contrast with systems that employ disk-based storage. Main memory databases are faster than disk-optimized databases because the internal optimization algorithms are simpler and execute fewer CPU instructions, and accessing data in memory provides faster and more predictable performance than accessing disk. They are often used in applications where response time is critical, such as the telecommunications network equipment that operates 9-1-1 emergency systems.
Document-oriented databases
Document-oriented databases are computer programs designed for document-oriented applications. These systems may be implemented as a layer above a relational database or an object database. Unlike relational databases, document-oriented databases do not store data in tables with uniformly sized fields for each record. Instead, each record is stored as a document with its own characteristics: any number of fields of any length can be added to a document, and a field can contain multiple pieces of data.
Real-time databases
A real-time database is a processing system designed to handle workloads whose state is constantly changing. This differs from traditional databases, which contain persistent data that is mostly unaffected by time. For example, a stock market is dynamic and changes very rapidly. Real-time processing means that a transaction is processed fast enough for the result to come back and be acted on right away. Real-time databases are useful in accounting, banking, law, medical records, multimedia, process control, reservation systems, and scientific data analysis, and as computers grow more powerful they are being employed in ever more applications.

Physical Structure

Physical Database Structures
The Oracle database consists of several types of operating system files, which it uses to manage itself. Although you call these files physical files to distinguish them from the logical entities they contain, understand that from an operating system point of view, even these files are not really physical, but rather logical components of the physical disk storage supported by the operating system.
Oracle Data Files
Data files are the way Oracle allocates space to the tablespaces that make up the logical database. The data files are the way Oracle maps the logical tablespaces to the physical disks. Each tablespace consists of one or more data files, which in turn belong to an operating system file system. Therefore, the total space allocated to all the data files in an Oracle database will give you the total size of that database.
When you create your database, your system administrator assigns you a certain amount of disk space based on your initial database sizing estimates. All you have at this point are the mount points of the various disks you are assigned (e.g., /prod01, /prod02, /prod03, and so on). You then need to create your directory structure on the various mount points. After you install your software and create the Oracle administrative directories, you can use the remaining file system space for storing database objects such as tables and indexes. The Oracle data files make up the largest part of the physical storage of your database. However, other files are also part of the Oracle physical storage, as you'll see in the following sections.
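Since the total space allocated to the data files approximates the size of the database, you can check both from the data dictionary. A minimal sketch using the standard DBA_DATA_FILES view (the output depends entirely on your own database):

```sql
-- List each data file, its tablespace, and its size in megabytes.
SELECT tablespace_name, file_name, bytes/1024/1024 AS size_mb
  FROM dba_data_files
 ORDER BY tablespace_name;

-- The sum of all data file sizes gives the total database size.
SELECT SUM(bytes)/1024/1024/1024 AS total_size_gb
  FROM dba_data_files;
```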
The Control File
Control files are key files that the Oracle DBMS maintains about the state of the database. They are a bit like a database within a database, and they include, for example, the names and locations of the data files and the all-important System Change Number (SCN), which indicates the most recent version of committed changes in the database. Control files are key to the functioning of the database, and recovery is difficult without access to an up-to-date control file. Oracle creates the control files during the initial database creation process.
Due to its obvious importance, Oracle recommends that you keep multiple copies of the control file. If you lose all your control files, you won't be able to bring up your database without re-creating the control file using special commands. You specify the name, location, and the number of copies of the control file in the initialization file (init.ora). Oracle highly recommends that you provide for more than one control file in the initialization file. Here's a brief list of the key information captured by the Oracle control file:
Checkpoint progress records
Redo thread records
Log file records
Data file records
Tablespace records
Log file history records
Archived log records
Backup set and data file copy records
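Multiplexing the control file comes down to the single CONTROL_FILES initialization parameter. A sketch of an init.ora entry, assuming hypothetical mount points on separate disks:

```sql
# init.ora fragment (file paths are hypothetical)
control_files = ('/prod01/oradata/prod/control01.ctl',
                 '/prod02/oradata/prod/control02.ctl',
                 '/prod03/oradata/prod/control03.ctl')
```

You can verify the control files currently in use at any time with SELECT name FROM v$controlfile;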
Control files are vital both while the Oracle instance is operating and during database recovery. During the normal operation of the database, the control file is consulted periodically for necessary information on data files and other structures, and the Oracle server process continually updates the control file as the database operates. During recovery, it's the control file that holds the data file information necessary to bring the database back to a stable state.
The Redo Log Files
The Oracle redo log files are vital during the recovery of a database. The redo log files record all the changes made to the database. If you need to restore your database from a backup, you can recover the latest changes made to the database by applying the redo log files.
Redo log files consist of redo records, which are groups of change vectors, each referring to a distinct atomic change made to a data block in the Oracle database. Note that a single transaction may have more than one redo record. If your database comes down without warning, the redo log helps you determine if all transactions were committed before the crash or if some were still incomplete. Initially, the contents of the log will be in the redo log buffer (a memory area), but they will be transferred to disk very quickly.
Oracle redo log files contain the following information regarding database changes made by transactions:
Indicators to let you know when the transaction started
Name of the transaction
Name of the data object (e.g., an application table) that was being updated
The "before image" of the transaction (i.e., the data as it was before the changes were made)
The "after image" of the transaction (i.e., the data after the transaction changed the data)
Commit indicators that let you know if and when the transaction completed
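You can inspect the redo log configuration through the V$LOG and V$LOGFILE dynamic views. A minimal sketch:

```sql
-- Show each redo log group, its sequence, size, status, and member files.
SELECT l.group#, l.sequence#, l.bytes/1024/1024 AS size_mb,
       l.status, f.member
  FROM v$log l
  JOIN v$logfile f ON l.group# = f.group#
 ORDER BY l.group#;
```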

Logical Structure

The Logical Structures
Oracle9i uses a set of logical database storage structures to manage disk space, whether the database uses operating system files or "raw" device files. These logical structures, which primarily include tablespaces, segments, extents, and blocks, control the allocation and use of the "physical" space allocated to the Oracle database. Note that Oracle database objects such as tables, indexes, views, and sequences are also logical entities. These logical objects make up the relational design of the database, which I discuss in detail later on.
You can look at the logical composition of Oracle databases from either a top-down viewpoint or a bottom-up viewpoint. Let's use the bottom-up approach by first looking at the smallest logical components of an Oracle database and progressively move up to the largest entities. Before you begin learning about the logical components, remember that Oracle database objects such as tables, indexes, and packaged SQL code are actually logical entities. Taken together, a set of related database logical objects is called a schema. Dividing a database's objects among various schemas promotes ease of management and a higher level of security.
The Oracle data block is the foundation of the database storage hierarchy and the basis of all storage in an Oracle database. Two or more contiguous Oracle data blocks form an extent. A set of extents that you allocate to a table or an index (or some other object) is termed a segment. A tablespace is a set of one or more data files and usually consists of related segments. The following sections explore each of these logical database structures in detail.
Data Blocks
The smallest logical component of an Oracle database is the data block. Data blocks are defined in terms of operating system bytes, and they are the smallest unit of space allocation in an Oracle database. For example, you can size an Oracle data block in units of 2KB, 4KB, 8KB, 16KB, or 32KB (or even larger chunks). It is common to refer to data blocks as Oracle blocks.
The storage disks on which the Oracle blocks reside are themselves divided into disk blocks, which are areas of contiguous storage containing a certain number of bytes—for example, 4096 or 32768 bytes (4KB or 32KB, because each kilobyte has 1024 bytes). Note that if the Oracle block size is smaller than the operating system file system buffer size, you may be wasting the capacity of the operating system to read and write larger chunks of data for each I/O. Ideally, the Oracle block size should be a multiple of the operating system block size. On an HP-UX system, for example, if you set your Oracle block size to a multiple of the operating system block size, you gain 5 percent in performance.
Multiple Oracle Block Sizes
The initialization parameter DB_BLOCK_SIZE determines the standard size of the Oracle data block in the database. Unlike previous Oracle database versions, Oracle9i lets you specify up to four additional nonstandard block sizes in addition to the standard block size. Thus, you can have a total of five different block sizes in an Oracle9i database. For example, you can have 2KB, 4KB, 8KB, 16KB, and 32KB block sizes all within the same database. If you choose to configure multiple Oracle block sizes, you must also configure corresponding subcaches in the buffer cache of the System Global Area (SGA, Oracle's memory allocation), as you'll see shortly. Multiple data block sizes aren't always necessary, and you'll do just fine with one standard Oracle block size.
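Configuring a nonstandard block size is a two-step affair: first carve out the matching subcache in the SGA, then create a tablespace with that block size. A hedged sketch (the data file path and sizes are hypothetical):

```sql
-- Step 1: allocate a buffer subcache for 16KB blocks.
ALTER SYSTEM SET db_16k_cache_size = 64M;

-- Step 2: create a tablespace whose blocks use the nonstandard size.
CREATE TABLESPACE sales_16k
  DATAFILE '/prod03/oradata/prod/sales_16k01.dbf' SIZE 500M
  BLOCKSIZE 16K;
```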
Extents
When you combine several contiguous data blocks, you get an extent. Remember that you can allocate an extent only if you can find enough contiguous data blocks. Your choice of tablespace type will determine how the Oracle database allocates the extents. The traditional dictionary-managed tablespaces allow you to specify both the beginning allocation of space and further increments as needed.
All database objects are allocated an initial amount of space, called the initial extent, when they are created. When you create an object, you specify the size of the next and subsequent extents as well as the maximum number of extents for that object, in the object creation statement. Once allocated to a table or index, the extents remain allocated to the particular object, unless you drop the object from the database, at which time the space will revert to the pool of allocatable free space in the database.
The locally managed tablespaces (explained later in this chapter) use the simpler method of allocating a uniform extent size, which is chosen automatically by the database. Therefore, you don't have to worry about setting the actual sizes for future allocation of extents to any particular tablespace.
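The two allocation styles described above look quite different in the DDL. A sketch, with hypothetical object names and sizes:

```sql
-- Dictionary-managed style: you spell out the extent sizing yourself.
CREATE TABLE customer (customer_id NUMBER, name VARCHAR2(100))
  STORAGE (INITIAL 1M NEXT 1M PCTINCREASE 0 MAXEXTENTS 100);

-- Locally managed style: the tablespace hands out uniform extents,
-- so objects created in it need no storage clause at all.
CREATE TABLESPACE sales_data
  DATAFILE '/prod02/oradata/prod/sales01.dbf' SIZE 200M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;
```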
Segments
A set of extents forms the next higher unit of data storage, the segment. In an Oracle database you can store different kinds of objects, such as tables and indexes, as you'll see in the next few chapters. Oracle calls all the space allocated to any particular database object a segment. That is, if you have a table called customer, you simply refer to the space you allocate to it as the "customer segment."
Each object in the database has its own segment. For example, the customer table is associated with the customer segment. When you create a table called customer, Oracle will create a new segment named customer also and allocate a certain number of extents to it based on the table creation specifications for the customer table. When you create an index, it will have its own segment named after the index name.
You can have several types of segments, the most common being the data segment and the index segment. Any space you allocate to these two types of segments will remain intact even if you truncate (remove all rows from) a table, for example, as long as the object is part of the database. However, as you'll see later on, there's something called a temporary tablespace in Oracle, and Oracle deallocates the temporary segments as soon as the transactions or sessions that are using those segments are completed.
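You can see the segment behind any object, along with its extents, in the data dictionary. A minimal sketch for the hypothetical customer table used above:

```sql
-- The space allocated to the customer table is its "customer segment."
SELECT segment_name, segment_type, extents, bytes/1024 AS kb
  FROM dba_segments
 WHERE segment_name = 'CUSTOMER';
```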
Tablespaces
A tablespace is defined as the combination of a set of related segments. For example, all the data segments belonging to the out-of-town sales team can be grouped into a tablespace called out_of_town_sales. You can have other segments belonging to other types of data in the same tablespace, but usually you try to keep related information together in the same tablespace. Note that the tablespace is a purely logical construct, and it is the primary logical structure of an Oracle database. All tablespaces need not be of the same size within a database. For example, it is quite common to have tablespaces that are 100GB in size coexisting in the same database with tablespaces as small as 1GB. The size of the tablespace depends on the current and expected size of the objects included in the various segments of the tablespace.
As you have already seen, Oracle9i lets you have multiple block sizes, in addition to the default block size. Because tablespaces ultimately consist of data blocks, this means that you can have tablespaces with different block sizes in the same database. This is a great new feature, and it gives you the opportunity to pick the right block size for a tablespace based on the data structure of the tables within that tablespace. This customization of the block size for tablespaces provides several benefits:
· Optimal disk I/O: Remember that the Oracle server has to read the table data from mechanical disks into the buffer cache area for processing. One of your primary goals as a DBA is to optimize the expensive I/O involved in the reads and writes to disk. If you have tables that have very long rows, you're better off with a larger block size. Each read, for example, will fetch more data than with a smaller block size, and you'll need fewer read operations to get the same amount of data. Tables with large object (LOB) data will also benefit from a very high block size. Similarly, tables with small row lengths can have a small block size as the building block for the tablespace. If you have large indexes in your database, you need to have a large block size for their tablespace, so each read will fetch a larger number of index pointers for the data.
· Optimal caching of data: The Oracle9i feature of separate pools for the various block sizes leads to a better use of the buffer cache area.
· Easier to transport tablespaces: If you have tablespaces with multiple block sizes, it's easier to use the "transport tablespaces" feature.
Each Oracle tablespace consists of one or more operating system (or raw) files called data files, and a data file can only belong to one tablespace. An Oracle database can have any number of tablespaces in it. You could manage a database with ten or several hundred tablespaces, all sized differently. However, for every Oracle database, you'll need a minimum of two tablespaces: the System tablespace and the temporary tablespace. Later on, you can add and drop tablespaces as you wish, but you can't drop the System tablespace. At database creation time, the only tablespace you must have is the System tablespace, which contains Oracle's data dictionary. However, users need a temporary location to perform certain activities such as sorting, and if you don't provide a default temporary tablespace for them, they end up doing the temporary sorting in the System tablespace.
When you consider the importance of the System tablespace, which contains the data dictionary tables along with other important information, it quickly becomes obvious why you must have a temporary tablespace. Oracle9i allows you to create this temporary tablespace at database creation time, and all users will automatically have this tablespace as their default temporary tablespace. In addition, if you choose the Oracle-recommended Automatic Undo Management over the traditional manual rollback segment management mode, you'll also need to create the undo tablespace at database creation time. Thus, although only the System tablespace is absolutely required by Oracle, your database should have at least the System, temporary, and undo tablespaces when you initially create it.
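Oracle9i lets you set up all three of these tablespaces in one statement at database creation time. A heavily abbreviated sketch (the database name, tablespace names, and file paths are all hypothetical):

```sql
CREATE DATABASE prod
  DATAFILE '/prod01/oradata/prod/system01.dbf' SIZE 500M   -- System tablespace
  DEFAULT TEMPORARY TABLESPACE temp
    TEMPFILE '/prod02/oradata/prod/temp01.dbf' SIZE 100M   -- default temp for all users
  UNDO TABLESPACE undotbs
    DATAFILE '/prod03/oradata/prod/undotbs01.dbf' SIZE 200M;
```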

Friday, September 11, 2009

Data Pump

Data Pump is the new high-speed infrastructure for data and metadata movement in Oracle Database 10g. The Data Pump commands are similar to the traditional export and import commands, but they are separate products. Data Pump provides a remarkable performance improvement over the original export and import utilities. It also provides faster data load and unload capability to existing tables. Using Data Pump, platform-independent flat files can be moved between multiple servers. You can use the new network mode to transfer data using database links.
Export and import operations in Data Pump can detach from a long-running job and reattach to it later without affecting the job. You can also remap data during export and import processes: the names of data files, schemas, and tablespaces from the source can be altered to different names on the target system. Data Pump also supports fine-grained object selection using the EXCLUDE, INCLUDE, and CONTENT parameters.
Data Pump Export (expdp) is the utility for unloading data and metadata from the source database to a set of operating system files (dump file sets). Data Pump Import (impdp) is used to load the data and metadata stored in these export dump file sets into a target database.
The advantages of using Data Pump utilities are as follows.
You can detach from a long-running job and reattach to it later without affecting the job. The DBA can monitor jobs from multiple locations, stop the jobs, and restart them later from where they were left.
Data Pump supports fine-grained object selection using the EXCLUDE, INCLUDE, and CONTENT parameters. This will help in exporting and importing a subset of data from a large database to development databases or from data warehouses to datamarts, and so on.
You can control the number of threads working for the Data Pump job and control the speed (only in the Enterprise version of the database).
You can remap data during the export and import processes.
In Oracle Database 10g Release 2, a default DATA_PUMP_DIR directory object and additional DBMS_DATAPUMP API calls have been added, along with provisions for compression of metadata in dump files, and the capability to control the dump file size with the FILESIZE parameter.
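Besides the expdp and impdp command-line utilities, the same jobs can be driven from the DBMS_DATAPUMP API mentioned above. A hedged PL/SQL sketch of a schema export (the HR schema, dump file name, and DATA_PUMP_DIR directory object are illustrative assumptions):

```sql
-- Minimal DBMS_DATAPUMP sketch: export a hypothetical HR schema.
DECLARE
  h NUMBER;
BEGIN
  h := DBMS_DATAPUMP.open(operation => 'EXPORT', job_mode => 'SCHEMA');
  DBMS_DATAPUMP.add_file(h, 'hr_schema.dmp', 'DATA_PUMP_DIR');
  DBMS_DATAPUMP.metadata_filter(h, 'SCHEMA_EXPR', 'IN (''HR'')');
  DBMS_DATAPUMP.start_job(h);
  DBMS_DATAPUMP.detach(h);   -- the job keeps running; you can reattach later
END;
/
```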

Tuesday, September 8, 2009

Oracle 10g Processes

Background Processes Revisited

An Oracle database instance can have many background processes, but not all of them are always needed. These background processes are created automatically when the database instance is started. The most important background processes are described next, with a brief explanation of each:
Database Writer process (DBWn). The database writer process (DBWn) is responsible for writing modified (dirty) buffers in the database buffer cache to disk. Although one process (DBW0) is sufficient for most systems, you can have additional processes up to a maximum of 20 processes (DBW1 through DBW9 and DBWa through DBWj) to improve the write performance on heavy online transaction processing (OLTP) systems. By moving the data in dirty buffers to disk, DBWn improves the performance of finding free buffers for new transactions, while retaining the recently used buffers in the memory.
Log Writer process (LGWR). The log writer process (LGWR) manages the redo log buffer by writing its contents to a redo log file on disk in a circular fashion. After LGWR moves the redo entries from the redo log buffer to a redo log file, server processes can write new entries into the redo log buffer. LGWR writes to disk fast enough to keep space available in the buffer for new entries.
Checkpoint process (CKPT). Checkpoint is the database event to synchronize the modified data blocks in memory with the data files on disk. It helps to establish data consistency and allows faster database recovery. When a checkpoint occurs, Oracle updates the headers of all data files using the CKPT process and records the details of the checkpoint. The dirtied blocks are written to disk by the DBWn process.
System Monitor process (SMON). SMON coalesces the contiguous free extents within dictionary managed tablespaces, cleans up the unused temporary segments, and does the database recovery at instance startup (as needed). During the instance recovery, SMON also recovers any skipped transactions. SMON checks periodically to see if the instance or other processes need its service.
Process Monitor process (PMON). When a user process fails, the process monitor (PMON) does process recovery by cleaning up the database buffer cache and releasing the resources held by that user process. It also periodically checks the status of dispatcher and server processes, and restarts the stopped process. PMON conveys the status of instance and dispatcher processes to the network listener and is activated (like SMON) whenever its service is needed.
Job Queue processes (Jnnn). Job queue processes run user jobs for batch processing, like a scheduler service. When a start date and a time interval are assigned, the job queue processes run the job at the next occurrence of the interval. These processes are dynamically managed, allowing the job queue clients to demand more job queue processes (J000 through J999) when needed. The job queue processes are spawned as required by the coordinator process (CJQ0 or CJQnn) for completion of scheduled jobs.
Archiver processes (ARCn). The archiver processes (ARCn) copy the redo log files to an assigned destination after each log switch. These processes are present only in databases running in ARCHIVELOG mode. An Oracle instance can have up to 10 ARCn processes (ARC0 through ARC9). Whenever the current number of ARCn processes becomes insufficient for the workload, the LGWR process invokes additional ARCn processes, which are recorded in the alert file. The number of archiver processes is set with the initialization parameter LOG_ARCHIVE_MAX_PROCESSES and can be changed with the ALTER SYSTEM command.
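The archiver ceiling is just an initialization parameter and can be adjusted online. For example:

```sql
-- Raise the maximum number of archiver processes without a restart.
ALTER SYSTEM SET log_archive_max_processes = 4;
```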
Queue Monitor processes (QMNn). The queue monitor process is an optional background process that monitors the message queues for Oracle Advanced Queuing in Streams.
Memory Monitor process (MMON). MMON is the acronym for Memory Monitor, a new process introduced in Oracle Database 10g associated with the Automatic Workload Repository (AWR). AWR gets the necessary statistics for automatic problem detection and tuning with the help of MMON. MMON writes out the required statistics for AWR to disk on a regularly scheduled basis.
Memory Monitor Light process (MMNL). Memory Monitor Light (MMNL) is another new process in Oracle Database 10g; it assists the AWR with writing the full statistics buffers to disk on an as-needed basis.
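You can see which of these background processes are actually running in your own instance by querying V$BGPROCESS. A minimal sketch (PADDR is non-zero only for processes that are up):

```sql
-- List the background processes currently running in this instance.
SELECT name, description
  FROM v$bgprocess
 WHERE paddr != '00'
 ORDER BY name;
```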

Monday, September 7, 2009

Automatic Storage Management


Oracle Database 10g introduces Automatic Storage Management (ASM), a service that provides management of disk drives. ASM can be used in a variety of configurations, including Oracle Database 10g RAC installations. ASM is an alternative to the use of raw or cooked file systems and is part of Oracle's overall effort to make management of Oracle Database 10g easier.

ASM Features

ASM offers a number of features, including:

• Simplified daily administration

• The performance of raw disk I/O for all ASM files

• Compatibility with any type of disk configuration, be it just a bunch of disks (JBOD) or a complex storage area network (SAN)

• Use of a specific file-naming convention to name files, enforcing an enterprise-wide file-naming convention

• Prevention of the accidental deletion of files, since there is no file system interface and ASM is solely responsible for file management

• Load balancing of data across all ASM-managed disk drives, which helps improve performance by removing disk hot spots

• Dynamic rebalancing of disks as usage patterns change and when disks are added or removed

• The ability to mirror data on different disks to provide fault tolerance
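Mirroring and rebalancing are configured at the disk group level. A hedged sketch, assuming two hypothetical raw devices:

```sql
-- NORMAL REDUNDANCY mirrors each file extent across failure groups.
CREATE DISKGROUP data NORMAL REDUNDANCY
  FAILGROUP fg1 DISK '/dev/raw/raw1'
  FAILGROUP fg2 DISK '/dev/raw/raw2';

-- Adding a disk later triggers an automatic rebalance of existing data.
ALTER DISKGROUP data ADD DISK '/dev/raw/raw3';
```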

Saturday, September 5, 2009

Flashback Feature

The term flashback was first introduced with Oracle 9i in the form of a new technology known as flashback query. When enabled, flashback query provided a new alternative for correcting user errors, recovering lost or corrupted data, and even performing historical analysis, without all the hassle of point-in-time recovery.

The new flashback technology was a wonderful addition to Oracle 9i, but it still had several dependencies and limitations. First, flashback forced you to use Automatic Undo Management (AUM) and to set the UNDO_RETENTION parameter. Second, you needed to make sure your flashback users had the proper privileges on the DBMS_FLASHBACK package. After these prerequisite steps were completed, users were able to use the flashback technology based on the SCN (system change number) or by time. However, it's important to note that regardless of whether users used the time or SCN method to enable flashback, Oracle would only find the flashback copy to the nearest five-minute interval. Furthermore, when using the flashback time method in a database that had been running continuously, you could never flash back more than five days, irrespective of your UNDO_RETENTION setting. Also, flashback query did not support DDL operations that modified any columns, or any drop or truncate commands on tables.

Oracle 9i offers an unsupported method for extending the five-day limit of UNDO_RETENTION. Basically, the SCN and timestamp are stored in SYS.SMON_SCN_TIME, which has only 1,440 rows by default (5 days × 24 hours × 12 five-minute intervals per hour = 1,440). At each five-minute interval, the current SCN is inserted into SMON_SCN_TIME and the oldest SCN is removed, keeping the record count at 1,440 at all times. To extend the five-day limit, all you need to do is insert new rows into the SMON_SCN_TIME table. Oracle 10g extends this limit to eight days by extending the default record count of SMON_SCN_TIME to 2,304 (8 days × 24 hours × 12 = 2,304).


Oracle 10g has evolved the flashback technology to cover the full spectrum of database operations. Following is a short summary of the new flashback technology available in Oracle 10g:

  • Flashback Database. Now you can take your entire database back to a previous point in time by undoing all modifications that occurred since that time, without the need to restore from a backup.

  • Flashback Table. This feature allows you to recover an entire table to a point in time without having to restore from a backup.

  • Flashback Drop. This feature allows you to restore any accidentally dropped tables.

  • Flashback Versions Query. This feature allows you to retrieve a history of changes for a specified set of rows within a table.

  • Flashback Transaction Query. This feature allows users to review and/or recover modifications made to the database at the transaction level.
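Several of these features map directly onto short SQL statements. A hedged sketch against a hypothetical orders table:

```sql
-- Flashback query: read the table as it was an hour ago.
SELECT * FROM orders
  AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' HOUR);

-- Flashback Table: rewind the table itself (row movement must be enabled).
ALTER TABLE orders ENABLE ROW MOVEMENT;
FLASHBACK TABLE orders TO TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' HOUR);

-- Flashback Drop: recover an accidentally dropped table from the recycle bin.
FLASHBACK TABLE orders TO BEFORE DROP;
```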

Friday, August 28, 2009

Oracle Database Processes

Oracle Processes
Oracle server processes running on the operating system perform all the database operations, such as inserting and deleting data. These Oracle processes, together with the memory structures allocated to Oracle by the operating system, form the working Oracle instance. A set of mandatory Oracle processes must be up and running for the database to function at all. Other processes are necessary only if you are using certain specialized features of the Oracle database (e.g., replication). This section's focus is on the set of mandatory processes that every Oracle instance uses to perform database activities.
A process is essentially a connection or thread to the operating system that performs a task or job. The Oracle processes you'll encounter in this section are continuous, in the sense that they come up when the instance starts and stay up for the duration of the instance's life. Thus, they act as Oracle's "hooks" into the operating system's resources. Note that what I call a process on a UNIX system is analogous to a thread on a Windows system.
Oracle instances need two different types of processes to perform various types of database activities. This distinction is made for efficiency purposes and the need to keep client processes separate from the database server's tasks. The first set of processes is called user processes. These processes are responsible for running the application that connects the user to the database instance. The second and more important set of processes is called the Oracle processes. These processes perform the Oracle server's tasks, and you can divide the Oracle processes into two major categories: the server processes and the background processes. Together, these processes perform all the actual work of the database, from managing connections to writing to the logs and data files to monitoring the user processes.
Interaction Between the User and Oracle Processes
User processes execute program code of application programs and that of Oracle tools such as SQL*Plus. The user processes communicate with the server processes through the user interface. These processes request that the Oracle server processes perform work on their behalf. Oracle responds by using its server processes to service the user processes' requests.
When you connect to an Oracle database, you use software such as Oracle Net (formerly SQL*Net), Oracle's proprietary networking layer. Regardless of how you connect and establish a session with the database instance, Oracle has to manage these connections, and the Oracle server creates server processes to do so. It's the job of the server processes to monitor the connection, accept requests for data, and hand the results back to the user. All SELECT statements, for example, involve reading data from the database, and it's the server processes that return the output of the SELECT statement to the users.
You'll examine the two types of Oracle processes, the server processes and the background processes, in detail in the following sections.
The Server Process
The server process is the process that services the user process. The server process is responsible for all interaction between the user and the database. When the user submits requests to select data, for example, the server process checks the syntax of the code and executes the SQL code. It will then read the data from the data files into the memory blocks. If another user intends to read the same data, the new user's server process will read it not from disk again, but from Oracle's memory, where it usually remains for a while. Finally, the server process is also responsible for returning the requested data to the user.
The most common configuration for the server process is where you assign each user a dedicated server process. However, Oracle provides for a more sophisticated means of servicing several users through the same server process called the shared server architecture, which enables you to service a large number of users efficiently. A shared server process enables a large number of simultaneous connections using fewer server processes, thereby conserving critical system resources such as memory.
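As a sketch, shared server is enabled through initialization parameters such as SHARED_SERVERS and DISPATCHERS; the values below are purely illustrative and would need tuning for a real workload:

```sql
-- Start five shared server processes (illustrative value).
ALTER SYSTEM SET shared_servers = 5;

-- Configure three dispatcher processes listening on TCP (illustrative value).
ALTER SYSTEM SET dispatchers = '(PROTOCOL=TCP)(DISPATCHERS=3)';
```

With these set, incoming user connections are handed to a dispatcher, which queues requests for the pool of shared server processes instead of each session holding its own dedicated process.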
The Background Processes
The background processes are the real workhorses of the Oracle instance. These processes enable large numbers of users to use the information that is stored in the database files. By being continuously hooked into the operating system, these processes relieve Oracle software from having to constantly start numerous, separate processes for each task that needs to be done on the operating system server.
BACKGROUND PROCESS — FUNCTION
Database writer (DBWn) — Writes modified data from the buffer cache to disk (data files)
Log writer (LGWR) — Writes redo log buffer contents to the online redo log files
Checkpoint (CKPT) — Signals DBWn at checkpoints and updates the data file and control file headers
Process monitor (PMON) — Cleans up after finished and failed processes
System monitor (SMON) — Performs crash recovery and coalesces extents
Archiver (ARCn) — Archives filled online redo log files
Recoverer (RECO) — Used only in distributed databases
Dispatcher (Dnnn) — Used only in shared server configurations
Job queue coordinator (CJQ0) — Coordinates the job queue processes
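You can see which of these background processes are actually running in your own instance by querying the v$bgprocess view (rows with a non-null process address indicate running processes):

```sql
-- List the background processes currently running in this instance.
SELECT name, description
FROM   v$bgprocess
WHERE  paddr <> '00'
ORDER  BY name;
```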
I discuss the main Oracle background processes in detail in the following sections.
The Database Writer
The database writer (DBWn) process is responsible for writing data from the memory areas known as database buffers to the actual data files on disk. It is the database writer process's job to monitor the usage of the database buffer cache and help manage it. For example, if the free space in the database buffer area is getting low, the database writer process has to make room there by writing some of the data in the buffers to the disk files. The database writer process uses an algorithm called least recently used (LRU), which basically writes data to the data files on disk based on how long it has been since someone asked for it. If the data has been sitting in the buffers for a very long time, chances are it is "dated," and the database writer process clears that portion of the database buffers by writing the data to disk.

The Process Monitor
The process monitor (PMON) process cleans up after failed user processes and the server processes. Essentially, when processes die, the PMON process ensures that the database frees up the resources that the dead processes were using. For example, when a user process dies while holding certain table locks, the PMON process would release those locks, so other users could use them without any interference from the dead process. In addition, the PMON process restarts failed server processes. The PMON process sleeps most of the time, waking up to see if it is needed. Other processes will also wake up the PMON process if necessary.
In Oracle9i, the PMON process automatically performs dynamic service registration. When you create a new database instance, the PMON process registers the instance information with the listener. (The listener is the entity that manages requests for database connections.) This dynamic service registration eliminates the need to register the new service information in the listener.ora file, which is the configuration file for the listener.
The System Monitor
The system monitor (SMON) process, as its name indicates, performs system monitoring tasks for the Oracle instance. The SMON process performs crash recovery upon the restarting of an instance that crashed. The SMON process determines if the database is consistent following a restart after an unexpected shutdown. This process is also responsible for coalescing free extents if you happen to use dictionary managed tablespaces. Coalescing free extents will enable you to assign larger contiguous free areas on disk to your database objects. In addition, the SMON process cleans up unnecessary temporary segments. Like the PMON process, the SMON process sleeps most of the time, waking up to see if it is needed. Other processes will also wake up the SMON process if they detect a need for it.
The Archiver
The archiver (ARCn) process is used when the system is operating in ARCHIVELOG mode—that is, the changes logged to the redo log files are saved and not overwritten by new changes. The archiver process archives the redo log files to a specified location, assuming you chose the automatic archiving option in the init.ora file. If a huge number of changes are being made to your database, and consequently your logs are filling up at a very rapid pace, you can use multiple archiver processes, up to a maximum of ten. The LOG_ARCHIVE_MAX_PROCESSES parameter in the initialization file determines how many archiver processes Oracle will invoke. If the log writer process is writing logs faster than the default single archiver process can archive them, you'll need to enable more than one archiver process.
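The parameter can also be changed on a running instance; the value 4 below is illustrative:

```sql
-- Allow Oracle to start up to four ARCn processes (illustrative value).
ALTER SYSTEM SET log_archive_max_processes = 4;
```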
Note that besides the processes discussed here, there are other Oracle background processes that perform specialized tasks that may be running in your system. For example, if you use Oracle Real Application Clusters (RAC), you'll see a background process called the lock (LCKn) process, which is responsible for performing interinstance locking. If you use the Oracle Advanced Replication Option, you'll notice the background process called recoverer (RECO), which recovers terminated transactions in a distributed database environment.


Tuesday, August 18, 2009

Oracle 10g New Features

SYSAUX Tablespace
The SYSAUX tablespace is an auxiliary tablespace that provides storage for all non-SYS-related tables and indexes that would otherwise have been placed in the SYSTEM tablespace. SYSAUX is required for all Oracle Database 10g installations and should be created along with new installs or upgrades. Many database components use SYSAUX as their default tablespace to store data. The SYSAUX tablespace also reduces the number of tablespaces created by default in the seed database and user-defined databases. For a RAC implementation with raw devices, a raw device is created for every tablespace, which complicates raw-device management. By consolidating these tablespaces into the SYSAUX tablespace, you can reduce the number of raw devices. The SYSAUX and SYSTEM tablespaces share the same security attributes.
When the CREATE DATABASE command is executed, the SYSAUX tablespace and its occupants are created automatically. When an existing database is upgraded to 10g, however, the SYSAUX tablespace must be created explicitly with the CREATE TABLESPACE SYSAUX command, which is valid only while the database is running in migration mode.
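You can see which components store their data in SYSAUX, and how much space each consumes, by querying the v$sysaux_occupants view:

```sql
-- Show SYSAUX occupants, largest consumers first.
SELECT occupant_name, schema_name, space_usage_kbytes
FROM   v$sysaux_occupants
ORDER  BY space_usage_kbytes DESC;
```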
Rename Tablespace Option
Oracle Database 10g has implemented the provision to rename tablespaces. To accomplish this in older versions, you had to create a new tablespace, copy the contents from the old tablespace to the new tablespace, and drop the old tablespace. The Rename Tablespace feature enables simplified processing of tablespace migrations within a database and eases transporting a tablespace between two databases. The rename tablespace feature applies only to database versions 10.0.1 and higher. An offline tablespace, a tablespace with offline data files, and the SYSTEM and SYSAUX tablespaces cannot be renamed using this option. As an example, the syntax to rename a tablespace from REPORTS to REPORTS_HISTORY is as follows.
SQL> ALTER TABLESPACE REPORTS RENAME TO REPORTS_HISTORY;
Automatic Storage Management
Automatic Storage Management (ASM) is a new database feature for efficient management of storage with round-the-clock availability. It relieves the DBA from managing thousands of database files across multiple database instances by using disk groups, which consist of disks and the files resident on them. ASM does not eliminate any existing database functionality with file systems, raw devices, or Oracle Managed Files (OMFs). ASM also supports RAC configurations.
In Oracle Database 10g Release 2, the ASM command-line interface (ASMCMD) has been improved to access files and directories within ASM disk groups. Other enhancements include uploading and extracting of files into an ASM-managed storage pool, a WAIT option on disk rebalance operations, and the facility to perform batch operations on multiple disks.
Now it's time to take a deeper look into the new storage structures in Oracle Database 10g: temporary tablespace groups and BigFile tablespaces. These will help you to better utilize the database space and to manage space more easily.
Temporary Tablespace Group
A temporary tablespace is used by the database for storing temporary data, which is not accounted for in recovery operations. A temporary tablespace group (TTG) is a group of temporary tablespaces. A TTG contains at least one temporary tablespace, and the group's name cannot be the same as that of any tablespace. Multiple temporary tablespaces can be specified at the database level and used in different sessions at the same time. Similarly, if a TTG is used, a database user with multiple sessions can have different temporary tablespaces for those sessions. In effect, more than one default temporary tablespace can be assigned to a database, and a single database operation can use multiple temporary tablespaces in sorting operations, thereby speeding up the process and preventing large operations from running out of temporary space.
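As a sketch, a temporary tablespace group is created implicitly by naming it in the TABLESPACE GROUP clause when creating a temporary tablespace; the names, path, and size below are illustrative:

```sql
-- Creating the tablespace with a TABLESPACE GROUP clause
-- implicitly creates the group if it does not already exist.
CREATE TEMPORARY TABLESPACE temp01
  TEMPFILE '/u01/oradata/temp01.dbf' SIZE 500M
  TABLESPACE GROUP temp_grp;

-- A user can then be pointed at the group rather than a single tablespace.
ALTER USER scott TEMPORARY TABLESPACE temp_grp;
```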
In Oracle Database 10g, each database user will have a permanent tablespace for storing permanent data and a temporary tablespace for storing temporary data. In previous versions of Oracle, if a user was created without specifying a default tablespace, SYSTEM tablespace would have become the default tablespace. For Oracle Database 10g, a default permanent tablespace can be defined to be used for all new users without a specific permanent tablespace. By creating a default permanent tablespace, nonsystem user objects can be prevented from being created in the SYSTEM tablespace. Consider the many benefits of adding users in a large database environment by a simple command and not having to worry about them placing their objects in the wrong tablespaces!
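Defining the database-wide default permanent tablespace is a one-line command; the tablespace name USERS below is illustrative:

```sql
-- New users created without a DEFAULT TABLESPACE clause
-- will now use USERS instead of SYSTEM.
ALTER DATABASE DEFAULT TABLESPACE users;
```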
BigFile Tablespace
We have gotten past the days of tablespaces in the range of a few megabytes. These days, database tables hold a lot of data and are always hungry for storage. To address this craving, Oracle has come up with the Bigfile tablespace concept. A BigFile tablespace (BFT) is a tablespace containing a single, very large data file. With the new addressing scheme in 10g, four billion blocks are permitted in a single data file and file sizes can be from 8TB to 128TB, depending on the block size. To differentiate a regular tablespace from a BFT, a regular tablespace is called a small file tablespace. Oracle Database 10g can be a mixture of small file and BigFile tablespaces.
BFTs are supported only for locally managed tablespaces with automatic segment space management, and for locally managed undo and temporary tablespaces. When BFTs are used with Oracle Managed Files, data files become completely transparent to the DBA and no reference is needed for them. BFT makes a tablespace logically equivalent to the data files (allowing tablespace operations) of earlier releases. BFTs should be used only with a logical volume manager or ASM supporting dynamically extensible logical volumes, and with systems that support striping, to prevent negative consequences on RMAN backup parallelization and parallel query execution. BFTs should not be used when there is limited free disk space available.
Prior to Oracle Database 10g, K and M were used to specify data file sizes. Because the newer version introduces larger file sizes up to 128TB using BFTs, sizes can now be specified using G and T for gigabytes and terabytes, respectively. Almost all data warehouse implementations with older versions of the Oracle database use data files sized from 16GB to 32GB. With the advent of BigFile tablespaces, however, DBAs can build larger data warehouses without being intimidated by the sheer number of smaller data files.
Oracle block size — Maximum file size
2KB — 8TB
4KB — 16TB
8KB — 32TB
16KB — 64TB
32KB — 128TB
To use BFTs, the underlying operating system should support large files; in other words, the file system should have Large File Support (LFS).
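Creating a BigFile tablespace only requires the BIGFILE keyword; the name, path, and sizes below are illustrative:

```sql
-- A single-datafile tablespace; with an 8KB block size the
-- file can grow up to 32TB.
CREATE BIGFILE TABLESPACE big_data
  DATAFILE '/u01/oradata/big_data01.dbf' SIZE 10G
  AUTOEXTEND ON NEXT 1G MAXSIZE 32T;
```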
Cross-Platform Transportable Tablespaces
In Oracle 8i database, the transportable tablespace feature enabled a tablespace to be moved across different Oracle databases using the same operating system. Oracle Database 10g has significantly improved this functionality to permit the movement of data across different platforms. This will help transportable tablespaces to move data from one environment to another on selected heterogeneous platforms (operating systems). Using cross-platform transportable tablespaces, a database can be migrated from one platform to another by rebuilding the database catalog and transporting the user tablespaces. By default, the converted files are placed in the flash recovery area (also new to Oracle Database 10g), which is discussed later in this chapter. A list of fully supported platforms can be found in v$transportable_platform.
A new data dictionary view, v$transportable_platform, lists all supported platforms, along with the platform ID and endian format information. The v$database dictionary view has two new columns (PLATFORM_ID, PLATFORM_NAME) to support it.
The view v$transportable_platform has three columns: PLATFORM_ID (number), PLATFORM_NAME (varchar2(101)), and ENDIAN_FORMAT (varchar2(14)). Endianness is the pattern (big endian or little endian) for byte ordering in native data types. In big-endian format, the most significant byte comes first; in little-endian format, the least significant byte comes first.
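To check which platforms you can transport a tablespace to, and their endian formats, query the view directly:

```sql
-- List all platforms supported for cross-platform transport.
SELECT platform_id, platform_name, endian_format
FROM   v$transportable_platform
ORDER  BY platform_id;
```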

Performance Management Using AWR
Automatic Workload Repository (AWR) is the most important feature among the new Oracle Database 10g manageability infrastructure components. AWR provides the background services to collect, maintain, and utilize statistics for problem detection and self-tuning. The AWR collects system-performance data at frequent intervals and stores it as historical system workload information for analysis. These metrics are stored in memory for performance reasons and are regularly written from memory to disk by a new background process called Memory Monitor (MMON). This data is later used to analyze performance problems that occurred in a certain time period and to do trend analysis. Oracle does all this without any DBA intervention. Automatic Database Diagnostic Monitor (ADDM), which is discussed in the next section, analyzes the information collected by the AWR for database-performance problems.
By default, the data-collection interval is 60 minutes, and this data is stored for seven days, after which it is purged. This interval and data-retention period can be altered.
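Both settings can be changed with the DBMS_WORKLOAD_REPOSITORY package; the arguments are in minutes, and the values below (a 30-minute interval and 14-day retention) are illustrative:

```sql
BEGIN
  DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(
    interval  => 30,      -- snapshot every 30 minutes
    retention => 20160);  -- keep 14 days (14 * 24 * 60 minutes)
END;
/
```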
This captured data can be used for system-level and user-level analysis. This data is optimized to minimize any database overhead. In a nutshell, AWR is the basis for all self-management functionalities of the database. It helps the database with the historical perspective on its usage, enabling it to make accurate decisions quickly.
The AWR infrastructure has two major components:
In-memory statistics collection. The in-memory statistics collection facility is used by 10g components to collect statistics and store them in memory. These statistics can be read using the dynamic performance (V$) views. The memory version of the statistics is written to disk regularly by a new background process called Memory Monitor or Memory Manageability Monitor (MMON). We will review MMON in the background processes as well.
AWR snapshots. The AWR snapshots form the persistent portion of the Oracle Database 10g manageability infrastructure. They can be viewed through data dictionary views. These statistics retained in persistent storage will be safe even in database instance crashes and provide historical data for baseline comparisons.
AWR collects many statistics, such as time model statistics (time spent by the various activities), object statistics (access and usage statistics of database segments), some session and system statistics in v$sesstat and v$sysstat, some optimizer statistics for self-learning and tuning, and Active Session History (ASH). We will review this in greater detail in later chapters.
Automatic Database Diagnostic Monitor (ADDM)
Automatic Database Diagnostic Monitor (ADDM) is the best resource for database tuning. Introduced in 10g, ADDM provides proactive and reactive monitoring instead of the tedious tuning process found in earlier Oracle versions. Proactive monitoring is done by ADDM and server-generated alerts. Reactive monitoring is done by the DBA, who does manual tuning through Oracle Enterprise Manager or SQL scripts.
Statistical information captured from the SGA is stored inside the workload repository in the form of snapshots every 60 minutes. These detailed snapshots (similar to STATSPACK snapshots) are then written to disk. The MMON process runs the ADDM automatically on every database instance to proactively find problems.
Whenever a snapshot is taken, the ADDM triggers an analysis of the period corresponding to the last two snapshots. Thus it proactively monitors the instances and detects problems before they become severe. The analysis results stored in the workload repository are accessible through the Oracle Enterprise Manager. In Oracle Database 10g, the new wait and time model statistics help the ADDM to identify the top performance issues and concentrate its analysis on those problems. In Oracle Database 10g Release 2, ADDM spans more server components like Streams, RMAN, and RAC.
DROP DATABASE Command
Oracle Database 10g has introduced a means to drop the entire database with a single command: DROP DATABASE. The DROP DATABASE command deletes all database files, online log files, control files, and the server parameter file (spfile). The archive logs and backups, however, have to be deleted manually.
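As a sketch, the database must be mounted in exclusive, restricted mode before the command is accepted:

```sql
-- Connect as SYSDBA, then:
SHUTDOWN IMMEDIATE;
STARTUP MOUNT EXCLUSIVE RESTRICT;
DROP DATABASE;
```

Because the command is irreversible, take a final backup (or confirm none is needed) before issuing it.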