Step 01

Decide on the location of
Three control files (minimum) on different disks/drives/volumes
Three groups of redo log files, with each group having two members (minimum) on different disks/drives/volumes
Create a file structure for the data files

Step 02

Creating a database manually without automatic undo management:

1. a. Decide on a unique instance name. On NT/2000 machines, releases up to 8.0.x cannot have an instance name longer than 4 characters, and 8.1.x releases cannot have one longer than 8 characters.

b. Decide on the database character set. Modifying the character set at a later date leads to many issues.

2. Set the operating system variables
(UNIX)
ORACLE_HOME
ORACLE_SID
ORACLE_BASE
ORA_NLS33
PATH
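The UNIX variables above can be set in the shell before starting the installer or Server Manager. A minimal sketch; the base directory, version and SID below are illustrative assumptions, not actual locations:

```shell
# Illustrative UNIX environment setup; adjust paths and SID to your installation.
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/8.1.7
export ORACLE_SID=mydb
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export PATH=$ORACLE_HOME/bin:$PATH
echo "SID=$ORACLE_SID HOME=$ORACLE_HOME"
```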

(NT)
ORADIM80 -NEW -SID u16
-INTPWD password -STARTMODE auto
-PFILE ORACLE_HOME\DATABASE\initu16.ora

You must decide the SID and password, and
create the init<sid_name>.ora file

SET ORACLE_SID=<sid_name>

Make <sid_name> the current SID

3. Prepare the parameter file

use init.ora as a parameter file template

db_name = Eight characters or fewer that identify the database

control_files = the location of three control files

DB_BLOCK_SIZE = determines the database block size (cannot be
changed after the database has been created)
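Putting the three parameters above together, a minimal init.ora sketch; the name, paths and values are illustrative assumptions:

```
# init<sid>.ora - minimal sketch (illustrative values)
db_name = mydb
control_files = (E:\DATA\mydb\ctl1mydb.ora,
                 F:\DATA\mydb\ctl2mydb.ora,
                 G:\DATA\mydb\ctl3mydb.ora)
db_block_size = 8192        # cannot be changed after creation
```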

4. Create a password file

5. Start the instance

STARTUP NOMOUNT pfile=init.ora

6. Create the database

MANUALLY FROM Server Manager

CREATE DATABASE "<sid_name>"
MAXLOGFILES 6
MAXLOGMEMBERS 6
MAXDATAFILES 30
MAXLOGHISTORY 100
ARCHIVELOG

LOGFILE
GROUP 1 ('E:\DATA\<sid>\GROUP1\log1a.rdo',
         'E:\DATA\<sid>\GROUP1\log1b.rdo') SIZE 200K,
GROUP 2 ('E:\DATA\<sid>\GROUP2\log2a.rdo',
         'E:\DATA\<sid>\GROUP2\log2b.rdo') SIZE 200K,
GROUP 3 ('E:\DATA\<sid>\GROUP3\log3a.rdo',
         'E:\DATA\<sid>\GROUP3\log3b.rdo') SIZE 200K
DATAFILE
'E:\DATA\<sid>\DATAFILES\system01.dbf'
SIZE 30M

CHARACTER SET WE8ISO8859P1;

Database
select name,created,log_mode from v$database;

Thread
select status, instance from v$thread;

Datafiles
select name from v$datafile;

Logfiles
select member from v$logfile;

Controlfiles
select name from v$controlfile;

Verify System Creation

select file_name from dba_data_files
where tablespace_name='SYSTEM';

Look at the database users

select username, created from dba_users;


7. Run scripts to generate the data dictionary and
accomplish postcreation steps.

Create the data dictionary
CATALOG.SQL

postcreation objects
CATPROC.SQL


8. To determine the archive mode of the database:

svrmgrl> archive log list

or issue the following statement

select log_mode from v$database;

also issue the following statement to determine whether the archiver process is enabled:

select archiver from v$instance;

The output indicates whether the process is started or not.

Creating a Database using Instance Manager


Step 1. Start Instance Manager
Press the New Button

Input a SID (4 characters) ->abcd

Step 2. Enter the DBA Authorization password and authentication

Step 3. Press the Advanced Button

* The Database Name must be the same as the Database Name
in the initabcd.ora file
* Enter the max logfile value and the max group member value

* Design the location of your datafiles, logfiles,
archive files, and control files

* Change the logfile location and name to meet your design

* Set the location of the Parameter file


Defining Parameters in the Parameter File

db_name = {myDBName} #database name used
to identify the database

db_files = 30 #maximum allowable
number of database files

#control file list
(Created by the Instance Manager)

control_files = (D:\orant\DATABASE\ctl1{SID}orcl.ora,
D:\orant\DATABASE\ctl2{SID}orcl.ora,
D:\orant\DATABASE\ctl3{SID}orcl.ora)

#database will be compatible with
software of this version

compatible = 7.3.0.0.0

#db_file_multiblock_read_count= number of database
blocks to read with each I/O.

#db_file_multiblock_read_count = 8 # INITIAL
# db_file_multiblock_read_count = 8 # SMALL
db_file_multiblock_read_count = 16 # MEDIUM
# db_file_multiblock_read_count = 32 # LARGE

# db_block_buffers = number of database blocks
cached in memory

db_block_buffers tells the Oracle kernel the
size of the area that caches database blocks read from disk

#db_block_buffers = 200 # INITIAL
# db_block_buffers = 200 # SMALL
db_block_buffers = 550 # MEDIUM
# db_block_buffers = 3200 # LARGE

Size in bytes of the shared pool

#shared_pool_size = 6500000 # INITIAL
# shared_pool_size = 3500000 # SMALL
shared_pool_size = 6000000 # MEDIUM
# shared_pool_size = 9000000 # LARGE

Number of redo blocks for checkpoint
threshold

log_checkpoint_interval = 10000
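log_checkpoint_interval is expressed in redo (OS) blocks, typically 512 bytes each on most platforms. A quick sketch of how much redo the value above allows between checkpoint triggers, assuming 512-byte blocks:

```shell
# log_checkpoint_interval = 10000 OS blocks of (typically) 512 bytes each:
interval_blocks=10000
os_block_bytes=512
echo "$(( interval_blocks * os_block_bytes / 1024 )) KB of redo between checkpoints"
```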

Maximum number of user processes

#processes = 50 # INITIAL
# processes = 50 # SMALL
processes = 100 # MEDIUM
# processes = 200 # LARGE

DML locks – one for each table
modified in a transaction

#dml_locks = 100 # INITIAL
# dml_locks = 100 # SMALL
dml_locks = 200 # MEDIUM
# dml_locks = 500 # LARGE

Redo circular buffer size

#log_buffer = 8192 # INITIAL
# log_buffer = 8192 # SMALL
log_buffer = 32768 # MEDIUM
# log_buffer = 163840 # LARGE

Number of sequence cache entries

#sequence_cache_entries = 10 # INITIAL
# sequence_cache_entries = 10 # SMALL
sequence_cache_entries = 30 # MEDIUM
# sequence_cache_entries = 100 # LARGE

#sequence_cache_hash_buckets = 10 # INITIAL
# sequence_cache_hash_buckets = 10 # SMALL
sequence_cache_hash_buckets = 23 # MEDIUM
# sequence_cache_hash_buckets = 89 # LARGE

# audit_trail = true # if you want auditing
# timed_statistics = true # if you want timed statistics
max_dump_file_size = 10240 # limit trace file size to 5 Meg each
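The "5 Meg" comment holds because max_dump_file_size is given in OS blocks (typically 512 bytes each). A quick check of the arithmetic:

```shell
# 10240 OS blocks * 512 bytes per block = 5 MB, matching the comment above.
blocks=10240
block_bytes=512
echo "$(( blocks * block_bytes / 1024 / 1024 )) MB"
```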

Start the Archiver Process

log_archive_start = true # if you want automatic archiving

LOG_ARCHIVE_DEST = E:\{db_name}\ARCHIVE
#location of the archive directory


# define directories to store trace and alert files

background_dump_dest=%RDBMS73%\trace
user_dump_dest=%RDBMS73%\trace

Size of database block in bytes. db_block_size cannot
be changed after database creation.

db_block_size must be a multiple of 512 bytes

db_block_size = 8192
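Since db_block_size must be a multiple of 512 bytes and cannot be changed later, a candidate value is worth checking up front. A sketch; the helper function and sample values are illustrative:

```shell
# Sanity check: a candidate db_block_size must be a multiple of 512 bytes.
# (Typical supported values range from 2048 to 32768, depending on platform.)
check_block_size() {
  size=$1
  if [ $(( size % 512 )) -eq 0 ]; then
    echo "$size: ok"
  else
    echo "$size: invalid"
  fi
}
check_block_size 8192
check_block_size 5000
```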

Number of job queue processes to start

snapshot_refresh_processes = 1

Password file usage

remote_login_passwordfile = shared

Enable text searching

text_enable = true


Step 4. Create your database

Once the instance has been started, it is an idle instance.

Steps to starting the database

From a DOS Prompt

set ORACLE_SID=abcd
set LOCAL=2:abcd

svrmgrl (NT)

connect internal/password

startup pfile=initabcd.ora

Creating a Database with AUTO-UNDO management

Create database mydb
Controlfile reuse
Logfile group 1 ('d:\oradata\mydb\redo\mydb_redo_01a.log',
'e:\oradata\mydb\redo\mydb_redo_01b.log') size 10m,
group 2 ('e:\oradata\mydb\redo\mydb_redo_02a.log',
'f:\oradata\mydb\redo\mydb_redo_02b.log') size 10m,
group 3 ('f:\oradata\mydb\redo\mydb_redo_03a.log',
'd:\oradata\mydb\redo\mydb_redo_03b.log') size 10m
maxinstances 1
maxlogfiles 5
maxloghistory 100
maxdatafiles 100
archivelog
datafile 'd:\oradata\mydb\system\mydb_system_01.dbf' size 100M autoextend on next 20M maxsize unlimited
default temporary tablespace temp
tempfile 'f:\oradata\mydb\temp\mydb_temp_01.dbf' size 100M
undo tablespace undo_tbsp datafile
'e:\oradata\mydb\rbs\mydb_undo_01.dbf' size 100M autoextend off;

Creating a Database with OMF (Oracle Managed Files)

Step 01

The following parameters are to be included in the init.ora parameter file of the database.

# Defining the destination for the redo log files

db_create_online_log_dest_1 = d:\oradata\mydb\redo
db_create_online_log_dest_2 = e:\oradata\mydb\redo
db_create_online_log_dest_3 = f:\oradata\mydb\redo

# Defining the destination for the data files

db_create_file_dest = d:\oradata\mydb\data

# Defining automatic undo management
undo_management=auto
undo_tablespace=undo_tbsp

Step 02

Create database mydb
Datafile size 200M
Logfile group 1 size 20M, group 2 size 20M, group 3 size 20M
Default temporary tablespace temp tempfile size 100M
Undo tablespace undo_tbsp datafile size 100M
Maxlogfiles 5
Maxlogmembers 5
Maxdatafiles 200
Noarchivelog;


OMF is not advised for a production environment: multiple datafile destinations are not supported, and your ability to balance the I/O is crippled.
It is recommended for development environments, to enable developers to play with database objects easily.

PURPOSE


The following is what to check when the Oracle Universal Installer (OUI) fails to install on Windows NT/Win2000/9x/XP platforms. Potential problems are described below.


Checklist:

1. If you are unable to install correctly please see the following questions:

a. Are you logged on as a user with admin rights instead of the local “Administrator” account?
b. Are you trying to install Oracle 8i on a computer with a Pentium 4 (P4) processor?
c. Is your color palette set to 256 or 65535 Colors?
d. Are you installing remotely? (VNC, PCAnywhere or other remote software)
e. Are you installing on NT or a Terminal server?
f. Is your system clock speed more than 800 MHz?
g. Did you type in an ORACLE_HOME that is mapped to a directory with blank space?
h. Do you have the required amount of memory, 256MB?
i. Have you verified that you’re running a supported version?
j. Do you have at least 75MB of free disk space in C:\TEMP?
k. Does the SYSTEM account have proper rights granted to write to the drive?
l. Do you have another JDK installed?
m. Is the CD-ROM drive on a network?
n. Are you using a copy of the Oracle CD?
o. Do you have the latest OS service pack installed?
p. Do you have NetWare client software installed on this system?
q. Are you installing Oracle Developer 6.0 and Oracle 8i on the same system?
r. Do you have Microsoft’s OPTION PACK installed (VC++ 6)?
s. Do you have Hummingbird Exceed installed?
t. Are you installing on Windows XP?

What else do I need to do if this checklist does not help!!
How do I remove Oracle to start over?


Solutions:

1. This is the checklist to go through


a. Are you logged on as a user with admin rights instead of the local “Administrator” account?


Sufficient privilege is needed to access the registry and/or an NTFS drive. When installing Oracle, being a user with Administrator rights is NOT enough if you do NOT have the 'LOGON AS A SERVICE' or 'LOGON AS A BATCH JOB' privileges granted to you. To correct this, use the NT Server Manager or the NT User Manager: click Policies, then User Rights, check the box next to 'Show Advanced User Rights', click the drop-down list (Rights) and grant 'LOGON AS A BATCH JOB', 'LOGON LOCALLY' and 'LOGON AS A SERVICE'. For more info, please look at the following note [NOTE:2062532.102]




b. Are you trying to install 8i on a computer with a P4 processor?


Oracle installer 8.1.x hangs on machines with a Pentium 4 Processor. (Fixed in Oracle 9i). Obtain patch number 1507768 from Metalink or copy the entire CD to the hard drive and rename each copy of the symcjit.dll to symcjit.old. For more information, please refer to the following note [NOTE:131299.1] .



c. Is your color palette set to 256 or 65535 Colors?

The color depth for the Windows desktop needs to be set to >=256 and <=65535 colors, but NOT 'True Color' nor 'High Color'; these are the minimum requirements for Java applications. Set the color palette by right-clicking anywhere on the desktop, then going to Properties, Settings. After setting it, you will need to reboot.



d. Are you installing remotely using software like VNC or PCAnywhere?


Installing remotely is not supported. To install correctly on the Windows platform, you should log on locally at the server and attempt the installation.




e. Are you installing on NT or a Terminal server?


If you are installing using a Terminal Server version of Windows, you MUST be on the system console and not remote. 9.0.1.x and 8.1.7.x have limited support for Windows NT 4.0 and Win2000 Terminal Server Edition. Oracle 8.1.6.x and earlier versions are NOT supported at all. Citrix XP is also currently not supported. Please review the support matrices [NOTE:66397.1] and [NOTE:77627.1] for updates. Some versions of Oracle do not run on Terminal Server at all, while others may have special issues. Citrix Metaframe also has many of the same issues.



f. Is your system clock speed more than 800 MHz?


If your system is >= 800 MHz, then you may have a problem creating a database right after installing. Run the Database Configuration Assistant to create a database; at the end, save it as a script, then run the script.


g. Did you type in an ORACLE_HOME that is mapped to a directory with blank space?


In older versions of the installer (pre 8.1) there were issues using Long File Names for your home directory.


h. Do you have the required amount of memory, 256MB?


Oracle 8i and Oracle 9i require a minimum of 256MB of memory to install.


i. Have you verified that you’re running a supported version?


Some desupported versions (pre-8i) may have a Y2K issue in the installer, and you must download a new installer. Oracle versions 8.0.6, 8.0.5 and 7.3.4 all have this issue. For more information, please refer to the following note [NOTE:77588.1]



j. Do you have at least 75MB of free disk space in C:\TEMP?


Oracle needs space in the TEMP directory as it installs. The Oracle Universal Installer (OUI) requires up to 75MB of space in the TEMP directory, as defined by the NT environment variables TMP or TEMP. Please note that moving the TEMP/TMP settings to another drive (non C:) may not resolve this issue.




k. Does the SYSTEM account have the proper rights granted to write to the drive?


For more information on this issue, please refer to the following note. [NOTE:95592.1]



l. Do you have another JDK installed?


When the Oracle installer attempts to install its own JDK, the install may fail. You will need to remove the other JDK before installing Oracle.


m. Is the CD-ROM drive on a network?


Only local CD-ROM drives are supported. Network lag time may cause the install to fail without a proper error message.


n. Are you using a copy of the Oracle CD?

A copy of an Oracle CD is NOT supported.


o. Do you have the latest OS service pack installed?

The following OS service packs are required:
· NT 4.0 Service Pack 6a is required.
· Win9x (client): the latest version or the Windows Sockets 2 upgrade is required.
· Win2000 Service Pack 2 is required. For more information on this subject, please refer to the following note [NOTE:124417.1]


p. Do you have the NetWare client software installed on this system?

If so, you may be running into the following bug [BUG:1249672]. You will need to uninstall the Netware client software, install Oracle, then reinstall the Netware client.


q. Are you installing Oracle Developer 6.0 and Oracle 8i on the same system?

The order that you install DataServer and Developer will make a difference. You must install them in different homes. Install Developer first, then Oracle DataServer second, edit the TNSNAMES.ORA for both connections. For more information, please refer to the following note [NOTE:74131.1]


r. Do you have Microsoft’s OPTION PACK installed (VC++ 6)?

You may be running into the following bugs [BUG:1150202] and/or [BUG:1166719]


s. Do you have Hummingbird Exceed installed?

If so then the autoexec.bat file may have the following:
set ClassPath=%ClassPath%;C:\Program Files\Exceed.95\hcljrcsv.jar;C:\Program Files\Exceed.95
Comment out the complete line with a “REM” statement and reboot.
Note: Removing this statement will not prevent Exceed from running.


t. Are you installing on Windows XP?


Oracle 9i is the first version that is supported on Windows XP. XP Professional is the only version of XP currently supported. This does NOT include the Citrix Metaframe XP Terminal Server. Citrix Metaframe XP is not supported.


What else do I need to do when ANY of the above issues does not fix my install?


Support will need all of your install files if the install still fails. The install log files are located in "Program Files\Oracle\Inventory\logs". Please create a Service Request (SR) if you do not have one already, and upload the files. (You can zip them as one file.)

NOTE: Please do not paste the files into the SR. You can upload the files by clicking on the word 'upload' when editing the Service Request.

Also, Oracle support will need you to run the RDA tool and send in the files it creates. Please see the following note on how to install and run RDA [NOTE:153091.1] Remote Diagnostic Agent (RDA) for Windows



After every failed install, what do I need to do before I try again?

If the install failed for any of the above reasons, you need to correct the issue and clean your system with the following note, reboot then attempt the install.
CAUTION! THIS WILL REMOVE ALL VERSIONS OF ORACLE FROM THE SERVER! You may want to do a full export of your registry and a cold backup of your system first.
[NOTE:124353.1] WIN: Manually Removing all Oracle Components on Microsoft Windows Platforms
This note will walk you through removing the Oracle folders and cleaning out the registry.


RELATED DOCUMENTS
[NOTE:2062532.102] How to Grant “Logon as a Batch Job” Privilege on Windows NT 4.0
[NOTE:131299.1] ALERT: Oracle Installer 8.1.X Hangs on Machines with Pentium 4 Processors
[NOTE:66397.1] INTEL: Oracle Database Server support for Windows NT Terminal Server
[NOTE:77627.1] Oracle Database Server product support Matrix for Windows 2000
[NOTE:77588.1] WIN: Installing pre-8.1.5 or pre-8.0.6 products after Y2K
[NOTE:95592.1] Windows NT 4.0 – Administrative Security Structures
[NOTE:124417.1] Install process is stopped with the error OLE initialize or OCX load error while
[BUG:1249672] UNEXPECTED TERMINATION ON THE MACHINE WHERE NETWARE CLIENT IS INSTALLED
[NOTE:74131.1] Installing Developer 6.0 and Oracle 8i – Same Physical Machine / Separate O_H
[BUG:1150202] WRITING ERROR IN SOME MS REDISTRIBUTABLE FILES WHEN MS OPTION PACK IS INSTALLED
[BUG:1166719] AN ERROR OCCURS ON INSTALLATION OF OO4O
[NOTE:153091.1] Remote Diagnostic Agent (RDA) for Windows
[NOTE:124353.1] WIN: Manually Removing all Oracle Components on Microsoft Windows Platforms

For 8.1.7 and 9.2 databases on AIX boxes. Compiled with the help of Patrick Wu.

AIX version 4.3.3

Oracle 8.1.7.x / 9.2.x

 

How many OIAs can one configure on a machine/server that has more than one database?

There can be one Agent per machine/server. So only one Oracle Intelligent Agent can be configured per machine/server, even though more than one database is hosted by that machine/server. The databases need not be of the same version of Oracle; they can be different versions.

Overview of the files used by the Oracle Intelligent Agent (Excerpts from Metalink Doc Id 97067.1)

listener.ora :            File with definitions of incoming SQL*Net connections

1 file per $ORACLE_HOME

Located in either (using this order searching for it):

* $TNS_ADMIN

* /etc or /var/opt/oracle

* $ORACLE_HOME/network/admin

nmiconf.log  :           File with Intelligent Agent discovery warnings/errors

1 per Intelligent Agent

Located in $ORACLE_HOME/network/log

nmiconf.lst  :           List of third party additional discovery script to run

1 per Intelligent Agent

Located in $ORACLE_HOME/network/agent/config

nmiconf.tcl  :           Intelligent Agent discovery script

1 per Intelligent Agent

Located in $ORACLE_HOME/network/agent/config

oratab       :            File with all databases present on the machine

Only 1 per machine

Located in either /etc or /var/opt/oracle

services.ora :           File with all service definitions the agent found

1 per Intelligent Agent

Located in $ORACLE_HOME/network/agent

snmp_ro.ora  :         File with all read-only service information

1 per Intelligent Agent

Located in $TNS_ADMIN or $ORACLE_HOME/network/admin

snmp_rw.ora  :         File with all updateable service information

1 per Intelligent Agent

Located in $TNS_ADMIN or $ORACLE_HOME/network/admin

sqlnet.ora   :           File with SQL*Net specific parameters

1 file per $ORACLE_HOME

Located in either (using this order searching for it):

* $TNS_ADMIN

* /etc or /var/opt/oracle

* $ORACLE_HOME/network/admin

tnsnames.ora :         File with the TNS aliases to connect to databases

1 file per $ORACLE_HOME

Located in either (using this order searching for it):

* $TNS_ADMIN

* /etc or /var/opt/oracle

* $ORACLE_HOME/network/admin

How Does It Work and What Does It Do?

The Agent reads the oratab file for values of all the Oracle Homes and SIDs. Depending on the platform, the oratab file can be located in either of the following locations:

/etc

/var/opt/oracle

Based on the Oracle Home values found in oratab, the Agent searches the listener.ora files to determine which listeners service which databases.

The name of the discovered databases is based on the GLOBAL_DBNAME parameter defined in the listener.ora file for that database.

If GLOBAL_DBNAME parameters are not found in listener.ora, the Agent searches for a tnsnames.ora file using the same search methodology used to find the listener.ora file.

If the tnsnames.ora file is not found, the database alias <SID>_<hostname> is assigned to the database service. The service will be known to the Agent by this alias, and it will be visible as such at the Oracle Enterprise Manager Console.
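The discovery fallback described above can be sketched as follows; the sample oratab lines and the hostname are illustrative assumptions, not real entries:

```shell
# Derive the <SID>_<hostname> alias for each oratab entry
# (oratab format: SID:ORACLE_HOME:startup_flag).
hostname=myhost
oratab='PROD:/u01/app/oracle/product/8.1.7:Y
#TEST:/u01/app/oracle/product/8.1.7:N
DEV:/u01/app/oracle/product/9.2.0:N'

aliases=$(printf '%s\n' "$oratab" | while IFS=: read -r sid home flag; do
  case $sid in \#*|'') continue ;; esac    # skip comments and blank lines
  echo "${sid}_${hostname}"
done)
echo "$aliases"
```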

The Oracle Intelligent Agent's functions include:

(1).Providing local services or calling operating system dependent services to interact locally with the managed targets.

(2). Accepting jobs or events from the Oracle Management Server or other third-party applications.

(3). Running Oracle Enterprise Manager jobs, collecting their results and output, and/or queuing the results as required.

(4).Checking for events, and queueing the resulting event reports for Oracle Enterprise Manager.

(5).Canceling jobs or events as directed by the Console or other applications.

(6). Handling Simple Network Management Protocol (SNMP) requests, if SNMP is supported on the Agent’s platform.

Make sure that the snmpd process is running on the machine/server:

ps -ef | grep snmpd

If that process is running, then configure the Oracle Intelligent Agent as follows:

  1. Insert the following line in the "/etc/snmpd.conf" file (if it's not there already):

     smux 0.0 "" xxx.xxx.xxx.xxx

     NOTE: Replace xxx.xxx.xxx.xxx with your server's IP address.
     If you have more than one smux line, place this line above the other lines.

  2. Check file permissions:

     ls -l /etc/snmpd.*

     All snmpd files should be readable by others. If not, use the chmod command to make them readable:

     chmod o+r /etc/snmpd.*

     The files are:
     /etc/snmpd.conf
     /etc/snmpd.peer
     /etc/snmpd.pid

  3. So that the master agent will see the changes just made to the "snmpd.conf" file, you will need to refresh or stop and start snmpd.

     To refresh, use command:
     refresh -s snmpd

     To stop and start, use commands:

stopsrc -s snmpd
startsrc -s snmpd
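Step 1 above (adding the smux line only if it is missing, above any other smux lines) can be sketched as a script. This works on a sample copy, not the real /etc/snmpd.conf, and the IP address and sample file contents are illustrative assumptions:

```shell
# Idempotently insert the smux line into a sample snmpd.conf copy.
conf=./snmpd.conf.sample
ip=192.168.1.10
printf '%s\n' 'logging file=/usr/tmp/snmpd.log enabled' \
              'smux 1.3.6.1 gated_password' > "$conf"

if ! grep -q "smux 0.0 \"\" $ip" "$conf"; then
  # insert before the first existing smux line (or append if none)
  awk -v line="smux 0.0 \"\" $ip" '
    !done && /^smux/ { print line; done=1 }
    { print }
    END { if (!done) print line }
  ' "$conf" > "$conf.tmp" && mv "$conf.tmp" "$conf"
fi
cat "$conf"
```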

The IA uses these two ports: 1748 and/or 1754.

To confirm port is not being used:

netstat -a | grep 1748

To start/stop/check status of IA:

agentctl start/stop/status

Note: All of the above also applies to clustered/OPS environments, except that without $ORACLE_HOME/bin/gsdctl.sh the IA cannot be started properly.

Troubleshooting the Oracle Intelligent Agent (OIA) installation and operation:

NMS-004 When starting Agent

If snmpd is running on the unix box, set the following in the server’s /etc/snmpd.conf:

smux 0.0 "" <ipaddress of server>

If the nms-4 error remains, turn on logging in the snmpd.conf file with the following parameter:

logging   file=/usr/tmp/snmpd.log   enabled

The log file gives more information about what could be happening. For example, a space between the double quotes in the smux line can cause the application to misinterpret the space as a password. The double quotes by themselves mean no password.

NMS-0308: 'Failed to listen on address: another Agent may be running'.

There are two possible causes for this error:

If two Agents are installed on a machine, in two different ORACLE_HOMEs, then you see this message when you try to start the second Agent. This is because both Agents try to listen on the same default port, 1748. In the contextual case, OEMT is using 1551 and SLVR is using 1531.

Only have one Agent on a machine.

The port 1748 on which the Agent listens is being used by someone else, or is not being released by a dead process that was formerly using it (unfortunately a common problem on Sun).

To confirm port is being used by someone else

Use this command in UNIX

netstat -a | grep 1531    <-- this is the port #

If any result shown on the screen ends in "LISTENING", then the port is in use.
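The port check can be sketched against canned netstat output; the two sample lines below are illustrative, and in real use you would pipe `netstat -a` itself:

```shell
# Check whether a port appears as LISTENING in (sample) netstat output.
port=1531
sample='tcp4  0  0  *.1521  *.*  LISTENING
tcp4  0  0  *.1531  *.*  LISTENING'
if printf '%s\n' "$sample" | grep "\.$port " | grep -q LISTENING; then
  status="port $port is in use"
else
  status="port $port is free"
fi
echo "$status"
```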

If the following is true :

netstat -a | grep 1531 —> results in “LISTENING”

LSNRCTL> dbsnmp_status ( results in “The db subagent is not started.”)

Then do this.

ps -ef | grep dbsnmp

kill -9 ______ (fill in process numbers)

restart Agent with

LSNRCTL> dbsnmp_start

If it still fails to start the Agent, go through the steps again, but before restarting the Agent, do this:

cd $ORACLE_HOME/network/agent

rm -f *.q services.ora snmp_ro.ora snmp_rw.ora

restart Agent with

LSNRCTL> dbsnmp_start

This will re-start the Agent and remove all of the job and events queues it was using in the past.

If all else fails, re-booting the machine should definitely free up the port.

You may also have to relink the Agent to clear this problem. Please see the Oracle Enterprise Manager Configuration Guide and README for more information.

NMS-001 while starting the Agent

This message indicates that the SNMP Master Agent (the process on UNIX that controls the SNMP protocol) could not be contacted. By default the Agent listens and works over SQL*Net, but the Agent can also work over SNMP on UNIX systems.

This message can safely be ignored unless you are trying to communicate with a Master Agent.

NMS-205 while starting the Agent

The ‘dbsnmp’ user could not be located.

Run the catsnmp.sql script for that database with either the SYS or INTERNAL accounts.

NNL-018 while starting the Agent.

The Agent tries to contact the Names Server, but cannot get in contact with it. This can happen if a Names Server is indeed installed but not used.

Add a line in the file snmp_rw.ora

NMI.REGISTER_WITH_NAMES = FALSE

NMS-351 while starting the Agent

This happens if there are mismatches between the IDs in the '*.q' files in the $ORACLE_HOME/network/agent directory. Delete all the '*.q' files in the $ORACLE_HOME/network/agent directory, rebuild your repository, and restart the Agent.

Configuring the Oracle Intelligent Agent without installing or configuring a database on a UNIX server:

Refer to Doc ID:       45755.1

Title:                      How To Configure Intelligent Agent On Unix Server Without Installing

Database

Source:                  www.metalink.oracle.com

Step 01. 

Create the user CSMIG and the schema owned by it. This schema consists of the following tables and views:

CSM$COLUMNS                            TABLE

CSM$CONSTRAINTS                      TABLE

CSM$ERRORS                              TABLE

CSM$EXTABLES                           TABLE

CSM$INDEXES                             TABLE

CSM$PARAMETERS                       TABLE

CSM$TABLES                               TABLE

CSM$TRIGGERS                            TABLE

CSMV$COLUMNS                          VIEW

CSMV$CONSTRAINTS                    VIEW

CSMV$ERRORS                             VIEW

CSMV$INDEXES                           VIEW

CSMV$TABLES                             VIEW

CSMV$TRIGGERS                          VIEW

 

This user is not a default user, and the schema is not created by default.

To create the CSMIG user and its schema, run 'csminst.sql', located in the $ORACLE_HOME/rdbms/admin directory. If the default tablespace is to be changed, please edit this script before it is run.

To run the script, log in or connect as 'sysdba' from SQL*Plus, or from Server Manager (svrmgr) in case the Oracle version is 8.1 or lower.

Verify whether the objects were created.

Step 02

Run the 'csscan' utility against the database that is proposed for character conversion.

Help is available for the parameters to be passed to the 'csscan' utility:

Type 'csscan help=y' at the command prompt (no quotation marks).

Keyword       Default  Prompt  Description
------------  -------  ------  -----------------------------------------------
USERID                 yes     username/password
FULL          N        yes     scan entire database
USER                   yes     user name of the table to scan
TABLE                  yes     list of tables to scan
EXCLUDE                        list of tables to exclude from scan
TOCHAR                 yes     new database character set name
FROMCHAR                       current database character set name
TONCHAR                        new NCHAR character set name
FROMNCHAR                      current NCHAR character set name
ARRAY         10240    yes     size of array fetch buffer
PROCESS       1        yes     number of scan processes
MAXBLOCKS                      split table if larger than MAXBLOCKS
CAPTURE       N                capture convertible data
SUPPRESS                       suppress error log by N per table
FEEDBACK                       feedback progress every N rows
BOUNDARIES                     list of column size boundaries for summary report
LASTRPT       N                generate report of the last database scan
LOG           scan             base name of log files
PARFILE                        parameter file name
PRESERVE      N                preserve existing scan results
HELP          N                show help screen (this screen)

 

Here are example command lines running 'csscan' with parameters passed on:

csscan full=y fromchar=UTF8 tochar=US7ASCII fromnchar=AL16UTF16 tonchar=US7ASCII log=/home/oracle/char_check_US7_to_UTF8_FISAPPSP CAPTURE=Y

csscan full=y fromchar=UTF8 tochar=US7ASCII fromnchar=AL16UTF16 tonchar=US7ASCII log=/home/oracle/char_check_US7_to_UTF8_FISCOMP CAPTURE=Y

csscan full=y fromnchar=AL16UTF16 tonchar=US7ASCII log=/home/oracle/char_check_US7_to_UTF8_FISCOMP_NCHAR CAPTURE=Y

csscan full=y fromnchar=AL16UTF16 tonchar=US7ASCII log=/home/oracle/char_check_US7_to_UTF8_FISAPPSP_NCHAR CAPTURE=Y

Step 03

The output files, with '.txt', '.out' and '.err' extensions, can be seen in the destination directory. Look into the .err file for any entries under the Data Dictionary individual exceptions and also the Application data individual exceptions.

In case there ARE NO ENTRIES under these heads, and the database DOES NOT have CLOB or NCLOB columns in any of the schemas, then go to Step 04.a.

Step 04.a.

Log in to the database of which character set is to be changed as SYS.

Issue the following SQL

Select count(*) from sys.metastylesheet;

If the output is greater than 0 (zero), then

  1. truncate table sys.metastylesheet;
  2. Edit the init.ora file for the SID. Comment out the parameter PARALLEL_SERVER, or set it to FALSE if it is found TRUE
  3. Shutdown immediate
  4. Backup the database (cold)
  5. Startup mount
  6. Alter system enable restricted session;
  7. Alter system set job_queue_processes=0;
  8. Alter system set aq_tm_processes=0;
  9. Alter database open;
  10. Alter database character set us7ascii;
  11. Alter database national character set us7ascii;
  12. Shutdown immediate
  13. Uncomment or restore the old values in the init.ora file
  14. Startup
  15. Connect as sysdba and run 'catmet.sql' to repopulate the sys.metastylesheet table. The 'catmet.sql' script can be found in the $ORACLE_HOME/rdbms/admin directory.

Possible errors on Oracle 9i and the reasons therefor:

ORA-12718: when logged in to the database as a user other than SYS

ORA-12719: when the database is not in RESTRICTED mode

ORA-12720: when the database is not in EXCLUSIVE mode

ORA-12721: when the operation cannot execute because there are other active sessions

Possible errors on Oracle 9i when the NEW character set is not recognized:

ORA-24239: invalid character set identifier

ORA-12714: invalid national character set specified

ORA-12715: invalid character set specified

In case there ARE NO ENTRIES under these heads and the database DOES have CLOB, NCLOB columns in any of the schemas then go to Step 04.b.

Step 04.b 

Log in as SYS to the database whose character set is to be changed.

Issue the following SQL

Select count(*) from sys.metastylesheet;

If the output is greater than 0 (zero), then:

  1. truncate table sys.metastylesheet;
  2. Edit the init.ora file for the SID. Comment out the parameter ‘PARALLEL_SERVER’, or set it to FALSE if it is found TRUE
  3. Shutdown immediate
  4. Backup the database (cold)
  5. Startup
  6. Export all tables containing non-null CLOB columns, then truncate these tables and secure the .dmp file
  7. Shutdown immediate
  8. Startup mount;
  9. Alter system enable restricted session;
  10. Alter system set job_queue_processes=0;
  11. Alter system set aq_tm_processes=0;
  12. Alter database open;
  13. Alter database character set us7ascii;
  14. Alter database national character set us7ascii;
  15. Shutdown immediate
  16. Uncomment or restore the old values to the init.ora file
  17. Startup
  18. Connect as sysdba to run ‘catmet.sql’ to repopulate the sys.metastylesheet table. The ‘catmet.sql’ script can be found in the $ORACLE_HOME/rdbms/admin directory.
  19. Import the data into those tables that were exported, using the export dump file created in step 6

In case there ARE ENTRIES under these heads, understand the statuses and act accordingly.

Step 05.a

When objects are in CHANGELESS status, they need not be converted. Then

Log in as SYS to the database whose character set is to be changed.

ALERT: The conversion is allowed only when the new character set is a superset of the current character set.

Issue the following SQL

Select count(*) from sys.metastylesheet;

If the output is greater than 0 (zero), then:

  1. truncate table sys.metastylesheet;
  2. Edit the init.ora file for the SID. Comment out the parameter ‘PARALLEL_SERVER’, or set it to FALSE if it is found TRUE
  3. Shutdown immediate
  4. Backup the database (cold)
  5. Startup mount
  6. Alter system enable restricted session;
  7. Alter system set job_queue_processes=0;
  8. Alter system set aq_tm_processes=0;
  9. Alter database open;
  10. Alter database character set us7ascii;
  11. Alter database national character set us7ascii;
  12. Shutdown immediate
  13. Uncomment or restore the old values to the init.ora file
  14. Startup
  15. Connect as sysdba to run ‘catmet.sql’ to repopulate the sys.metastylesheet table. The ‘catmet.sql’ script can be found in the $ORACLE_HOME/rdbms/admin directory.

Step 05.b When objects are in CONVERTIBLE status, they need to be converted. Then

Log in as SYS to the database whose character set is to be changed.

  1. The only complete solution is to export the full database, create a new database, and then import.
  2. When the database is large and only a few tables are in CONVERTIBLE status whereas all the others are CHANGELESS, then EXPORT the selected tables + alter database character set us7ascii; (if possible) + import the selected convertible tables (import converts appropriately).
  3. Alternatively, handle the individual tables on which you have the exceptions by exporting those tables, then
  4. alter each table to increase the data size of the affected column by doubling its size, and then
  5. import the data with full=y ignore=y
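The column-widening step above can be sketched as follows; the table and column names are hypothetical:

```sql
-- Hypothetical example: double the width of the affected column
-- (say, from VARCHAR2(100)) before importing the exported data back.
ALTER TABLE app_owner.customers MODIFY (customer_name VARCHAR2(200));
```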

Step 05.c When objects are in EXCEPTIONAL status, they cannot be converted automatically and must be handled individually. Then log in as SYS to the database whose character set is to be changed.

 

Bibliography

Document                      Date          Latest Revision
Metalink Doc ID 158577.1      24-SEP-2001   20-JUN-2003
Metalink Doc ID 225938.1      14-JAN-2003   20-JUN-2003
Metalink Doc ID 123670.1      02-NOV-2000   18-JUN-2003
Metalink Doc ID 66320.1       23-OCT-1998   03-APR-2003
Metalink Doc ID 213015.1      02-OCT-2002   21-JAN-2003
Metalink Doc ID 225912.1      14-JAN-2003   06-JAN-2003
Metalink Doc ID 140014.1      20-APR-2001   27-NOV-2002
Metalink Doc ID 70150.1       24-MAY-1999   31-MAR-2000

The recent experience of migrating and integrating multiple schemas into two large databases has prompted some retrospection and introspection. This experience should help in better planning for such future projects. The most interesting aspect is that the existing schemas have divergent profiles and reside on different versions of Oracle, and they are migrated and integrated into the latest version of Oracle. This is not cloning a database from one machine to another, which requires a totally different approach. It is like cloning the schemas, but not exactly that.

System Global Area (SGA)

The defining parameters of the SGA are to be decided based on the requirements of those schemas as they existed as individual databases, followed by a clear discussion with the users on how they plan to execute their processes in the new environment: either by serializing the resource-intensive processes so that only one runs at a given point of time each day (except in some exceptional cases), or by running them concurrently just as they ran when they were individual databases. The concurrent option demands a large SGA, as many user objects are to be maintained in the shared pool. If these aspects are not considered, there will be swapping and fragmentation, and ultimately internal errors and sudden deaths of the processes. PDML processes might have worked well on the source databases, where they were the only real contenders for CPU and resource usage. In the destination database there may be more CPUs and resources, but they are shared by all the schema owners. Hence consider altering the PARALLEL clause associated with the DDL of highly active tables and indexes so that resources are properly allocated. Creation of partitioned objects is to be done with care in the respective tablespaces that host them.

At the source databases, some processes fetch data from other schemas in other databases using DB links, resulting in distributed resource usage.

After the migration and integration of those schemas, resource utilization is localized onto one database. This calls for increased resource availability at the destination database; otherwise, under pressure, the processes may not work properly and will throw various types of memory and internal errors, including ORA-04031.

Creation of Database:

Determine the size of the existing schemas and their growth rates, and create the database based on such realistic analysis.

Tablespace names are to be replicated in the new database to avoid a large manual job of editing the DDL extracted from the existing schemas, which is used to pre-create the objects in the new database before the data is imported. If object creation in the existing schemas was not well planned, manual editing of the DDL cannot be avoided, as that helps implement the Oracle OFA and avoid I/O contention.

This step shall preclude the errors while creating objects.

UNDO or RBS tablespace creation also plays a vital role here, as the requirement is greater. To avoid ORA-01555 and other related ORA errors, find out the UNDO usage rate of the existing databases and then size the new UNDO or RBS tablespace accordingly, so that Oracle has no reason to complain.

Users, roles, and profiles should be created keeping the existing databases in view. Quotas on the respective tablespaces should be granted proactively to avoid problems.

It is essential to alter the Oracle internal users after they are created, assigning default profiles or profiles tailor-made to meet the post-creation situation and requirements. Users associated with profiles that limit IDLE_TIME will definitely be adversely impacted while importing large objects and large amounts of data: the session gets timed out, the process dies, and the process has to be restarted. This is a painful restart, as it eats into the window available to accomplish the job and builds up pressure.

It is essential to verify the temporary tablespace for those users who are to perform the imports and other object creation. If the users are created before the tablespaces, SYSTEM becomes the temporary tablespace for all those users; this is taxing on the SYSTEM tablespace and carries hidden failures when the TEMP segments fail to extend.

Create the users, or alter them after all tablespaces are created, to set their temporary tablespace and allot quotas on the respective tablespaces as required, making doubly sure that these are the same tablespaces where the objects of each schema owner reside in the existing schemas.
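The temporary tablespace and quota assignments described above can be verified and corrected with statements like these; the user, tablespace, and quota names are illustrative:

```sql
-- Check which default and temporary tablespaces each user currently has.
SELECT username, default_tablespace, temporary_tablespace
  FROM dba_users;

-- Illustrative corrections for a hypothetical schema owner APP_OWNER.
ALTER USER app_owner TEMPORARY TABLESPACE temp;
ALTER USER app_owner QUOTA UNLIMITED ON app_data;
```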

Set the database to NOARCHIVELOG mode for the duration of the import; otherwise, management of the archive logs becomes a big problem, diverting attention and focus at a critical time.

Do not forget to compare the NLS character sets of the source databases. If the source databases have different character sets, find a character set that is a superset of all of them, work out the conversion compatibilities and issues associated with them, and discuss these aspects with the users in advance.
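The character sets of each source database can be compared with a query such as:

```sql
-- Shows the database and national character sets; run this on each
-- source database and on the destination, then compare the results.
SELECT parameter, value
  FROM nls_database_parameters
 WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');
```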

Create the Schema Owner accounts and grant them all the roles that are associated with system privileges as existed in the source databases with temporary passwords.

Modifications to Source Code

Even though it is not necessary for the DBA to know what the source code is like, it is necessary to know whether that code uses any deprecated or abandoned settings, such as hints built into it. In case the source code is written only for the RBO and the new database is CBO oriented, it is necessary to advise the users to start looking at the CBO mechanisms to make their code compatible with the new environment; otherwise it becomes a great headache for the DBA at a later juncture, amid continuous hues and cries.

Compatibility of the Source code when migrated to a new environment is to be part and parcel of the process of migration and integration.

Export and Import

When the database is ready, and the users that are to perform the export and import are ready with all the requirements (a profile that gives them unlimited IDLE_TIME, unlimited QUOTA on those tablespaces where the objects are to be created and populated, the database in NOARCHIVELOG mode, and tablespace names that do not differ from the source databases/schemas), then consider the following:

01. Check if the source databases have duplicate constraints; if not cared for, they get imported and have the potential to impair performance, which comes back to the DBA at a later point. If they exist in the source databases, bring them to the notice of the users and try to drop all such duplicates.
02. If the source and destination databases are within the same local network, the throughput for the export and import should be good; otherwise the process may suffer from slowness. If they are local to each other, find out whether the two processes can be separated by taking export dumps onto a shared file system. If no such space is available for the export dumps, consider the use of pipes, which eliminate the space usage and can also save time by clubbing the export and import processes together.
03. Pre-create the tables, but do not enable the primary key constraints and do not create the indexes. Enable the primary and foreign key constraints only after the import completes with no errors in the import log files. Create the indexes as a separate process. Do not pre-create sequences; if they already exist, drop them.
04. Do not pre-create triggers. If they are in existence disable them.
05. Do not import the grants.
06. There could be views and PL/SQL objects in the source databases that interact with other schemas (which existed as separate databases) with the help of database links. These become invalid, and it is for the users to recreate them by correcting the view definitions and PL/SQL object code.
07. After the objects are created, compare the number of objects for each schema at the source and at the destination, and compare the row counts to rule out data loss.
08. Check for invalid objects, find out the reasons, and validate those that can be attended to as a DBA rather than as a developer or owner of the code.
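Checks 07 and 08 can be run with dictionary queries such as these; the schema name is illustrative:

```sql
-- Object counts per type for one schema; run on source and destination
-- and compare. APP_OWNER is a hypothetical schema name.
SELECT object_type, COUNT(*) AS object_count
  FROM dba_objects
 WHERE owner = 'APP_OWNER'
 GROUP BY object_type;

-- Invalid objects to investigate after the import.
SELECT owner, object_name, object_type
  FROM dba_objects
 WHERE status = 'INVALID';
```

Many invalid PL/SQL objects can be recompiled in bulk with the $ORACLE_HOME/rdbms/admin/utlrp.sql script.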

Post Creation of Database Activity

Once the export/import jobs are done, there are certain jobs to be attended to.
They are:

1. Take cold backup of the database and then set the database to archive log mode and restart the database
2. Back up the control file to trace. It is essential to do this whenever database structures are modified or new structures are introduced.
3. Restore the security measures that are prevailing through out the enterprise by altering the Users and their Profiles.
4. Restore the passwords for the schema owners, as they existed in the source databases.
5. Restore the grants to the roles by making grants on the objects as existed in source databases
6. Restore the Database Users, as they existed in the Source Databases.
7. In tune with the existing systems, create the database structures that simplify database administration and backup
8. Document the entire process, including the hurdles that have been overcome in the process.
9. When the Users start using the database, study the resources, memory, disk space usage and object growth rate patterns, to help fine tune the database.
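Steps 1 and 2 above amount to the following SQL*Plus sequence, run as SYSDBA after the cold backup is taken:

```sql
-- Switch the database back to archive log mode after the cold backup.
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;

-- Write the CREATE CONTROLFILE commands to a trace file (step 2).
ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
```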

How to Configure Heterogeneous Connectivity between
Oracle and SQL Server Databases?

What is the purpose?

It enables you to perform:
01. DML activity
02. DDL activity
03. Running PL/SQL procedures against the database for data management and processing
04. Copying the database from SQL Server to Oracle


How do we know that the database is eligible for the configuration of Heterogeneous Connectivity?

The data dictionary tables, views, and packages may already be installed on your Oracle9i server. Check for the existence of Heterogeneous Services data dictionary views, for example, SYS.HS_FDS_CLASS.

Log into the Oracle Database as SYSTEM or SYS or as the ID that is eligible to access the data dictionary and then issue the following statement.

Select * from SYS.HS_FDS_CLASS;

This returns no rows if no HS configuration has been done.

How to configure the Heterogeneous connectivity??
Step 01
Creation of .udl file (udl stands for universal data link)
On Windows Platforms

a. Decide on a directory on the Oracle server (where the Oracle database resides) in which you shall keep the .udl file.
b. Right-click in the right pane, select New > Text Document, and create a file
c. Rename the file with the extension .udl (ex: MYSQL.udl)
d. Go to the SQL Server and create a global user with administrative rights to log into any database living on that server
e. Test the connectivity and go back to the Oracle server
f. Go back to the file renamed in step c to MYSQL.udl
g. Double-click on that file to open the Data Link Properties window
h. There you will see four tabs


i. Go to the Provider tab, select the OLE DB Provider for SQL Server, and then click Next

j. In the Connection tab, specify the connection details (the SQL Server name, the login created in step d, and the database)

k. Test the connection and then click OK

Step 02

Configuring SQL*Net/Oracle Net

Set up an Oracle Net service name for the agent that can be used by the Oracle9i server. The Oracle Net service name descriptor includes protocol-specific information needed to access the Oracle Net listener. The service name descriptor must include the (HS=OK) clause to ensure the connection uses Oracle9i Heterogeneous Services. The description of this service name is defined in tnsnames.ora, an Oracle Names server, or a third-party name server using an Oracle naming adapter.

The following is a sample entry for service name in the tnsnames.ora file:

MYSQLSERVER= # ALIAS GIVEN TO THE SQL SERVER DATABASE
(DESCRIPTION=
(ADDRESS=(PROTOCOL=TCP)
(HOST=HOST_SERVER_NAME_WHERE_ORACLE_DB_IS_SITTING)
(PORT=1524) #This port number is to match with the port # in listener.ora
)
(CONNECT_DATA = (SERVICE_NAME=SQL_SERVER_DB_NAME)
)
(HS = OK)
)


Set up the listener on the gateway to listen for incoming requests from the Oracle9i server and spawn Heterogeneous Services agents. Then start the listener on the gateway machine.

The following is a sample entry for the listener in listener.ora:

LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS =
(PROTOCOL = TCP)
(HOST = ORACLE_DB_SERVER)
(PORT = 1524) # The tnsnames.ora entry should match this port number
)
)
)
)


SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(SID_NAME = SQL_SERVER_DB_NAME) # TNS Names entry is to match with this
(ORACLE_HOME = ORACLE_HOME_DIR_PATH) # Oracle Home
(PROGRAM = ORACLE_HOME_DIR_PATH\bin\hsolesql.exe) # this is the HS OLE executable
(ENVS=LD_LIBRARY_PATH=non_oracle_system_lib_directory)
)
)


The value associated with PROGRAM keyword defines the name of the agent executable.

The full path of the directory, which contains the DLLs that are loaded by the Heterogeneous Services agent, is specified by LD_LIBRARY_PATH. This is required only when you are configuring the connectivity between an Oracle Database that is on UNIX system and SQL Server Database.

Presently only the Solaris and Windows platforms are known to work; I have not tested other platforms. Some docs related to Oracle 8iR3 indicated that this was then available only for Windows and Sun.

Typically, you use SID_NAME to define the initialization parameter file for the agent. Remember that this name is going to be used in future when we create an initialization parameter file for HS connectivity.

Step 03

Test the TNS Names and Listener Entries

Testing TNS Names Entries:

Go to DOS command prompt and type tnsping service_name

If the output looks like this, you have no issue to handle:

K:\>tnsping service_name

TNS Ping Utility for 32-bit Windows: Version 9.2.0.5.0 - Production on 28-SEP-2004 16:12:43

Copyright (c) 1997 Oracle Corporation. All rights reserved.

Used parameter files:
C:\oracle\ora92\network\admin\sqlnet.ora

Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION= (ADDRESS=(PROTOCOL=tcp) (HOST= server_which_hosts_oracle)(PORT=1521)) (CONNECT_DATA = (SERVICE_NAME=sql_server_db_name)) (HS = OK))
OK (80 msec)
K:\>

If Oracle is not set in the PATH, you may get a message saying that the command is not recognized as an internal or external command, operable program or batch file. Add the Oracle bin directory to the PATH.
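Once tnsping succeeds, the connectivity is typically consumed from Oracle through a database link that uses the HS service name. A sketch follows; the user, password, and table names are hypothetical, and quoted identifiers are often required because SQL Server object names are case sensitive through HS:

```sql
-- Create a link over the HS service name defined in tnsnames.ora.
-- "sql_user"/"sql_password" are hypothetical SQL Server credentials.
CREATE DATABASE LINK mysqlserver
  CONNECT TO "sql_user" IDENTIFIED BY "sql_password"
  USING 'MYSQLSERVER';

-- Query a SQL Server table through the link (hypothetical table name).
SELECT * FROM "dbo"."orders"@mysqlserver;
```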

Codd’s 12 Rules

Rule 1:

The Information Rule: All information in a relational database is represented explicitly at the logical level and in exactly one way: by values in tables.

Rule 2:

Guaranteed Access: Each and every datum (atomic value) in a relational database is guaranteed to be logically accessible by resorting to a combination of table name, primary key value, and column name.

Rule 3:

Systematic Treatment of Null Values: Null values (distinct from the empty character string, from a string of blank characters, and from zero or any other number) are supported in a fully relational DBMS for representing missing information and inapplicable information in a systematic way.

Rule 4:

Dynamic Online Catalog Based on the Relational Model: The database description is represented at the logical level in the same way as ordinary data, so that authorized users can apply the same relational language to its interrogation as they apply to the regular data.

Rule 5:

Comprehensive Data Sublanguage: A relational system may support several languages and various modes of terminal use (for example, the fill-in-the-blanks mode). However, there must be at least one language whose statements are expressible, per some well-defined syntax, as character strings, and that is comprehensive in supporting all of the following items:

Data Definition
View Definition
Data manipulation (interactive and by program)
Integrity Constraints
Authorization
Transaction boundaries (begin, commit, and rollback).

Rule 6:

View Updating: All views that are theoretically updateable are also updateable by the system.

Rule 7:

High-Level Insert, Update, and Delete: The capability of handling a base relation or a derived relation as a single operand applies not only to the retrieval of data but also to the insertion, update, and deletion of data.

Rule 8:

Physical Data Independence: Application programs and terminal activities remain logically unimpaired whenever any changes are made in either storage representations or access methods.

Rule 9:

Logical Data Independence: Application programs and terminal activities remain logically unimpaired when information-preserving changes of any kind that theoretically permit unimpairment are made to the base tables.

Rule 10:

Integrity Independence: Integrity constraints specific to a particular relational database must be definable in the relational data sublanguage and storable in the catalog, not in the application programs.

Rule 11:

Distribution Independence: A relational DBMS has distribution independence; application programs and terminal activities remain logically unimpaired when data is distributed or redistributed.

Rule 12:

Nonsubversion: If a relational system has a low-level (single-record-at-a-time) language, that low level cannot be used to subvert or bypass the integrity rules and constraints expressed in the higher-level relational language (multiple-records-at-a-time).

Intuitively, NULL approximately represents an unknown value.
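The three-valued logic behind Rule 3 can be seen directly in SQL: comparing anything to NULL yields UNKNOWN rather than TRUE, so IS NULL must be used instead of = NULL.

```sql
-- Returns no rows: NULL = NULL evaluates to UNKNOWN, not TRUE.
SELECT 'equal' FROM dual WHERE NULL = NULL;

-- Returns one row: IS NULL is the correct test for missing values.
SELECT 'is null' FROM dual WHERE NULL IS NULL;
```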

Database Standards

Database Standards and Environments:

01. Production Database Environment.
02. Pre-production Database Quality Test Environments
03. Development Database Environment.

General Database Standards are categorized into:

01. Pre-Database Creation Standards or Setting up of Environment.
02. Database Standards to be followed to create a database
03. Post-Database Creation Standards

Pre-Database Creation Standards or Setting up of Environments:

This includes:

a. Decisions based on the Hardware Architecture of the Hosting Servers per Enterprise decisions.
b. Decisions on the file-systems to be created basing on the underlying Storage Architectures.
c. Decisions derived from the Design and Architecture Team for the Databases, which also include the required initialization parameters.

a. If the database is home grown ER diagram generation
b. If the database is vendor supported application then securing the ER diagram
c. Capacity planning
d. Resource utilization
e. Process diagrams

d. Decisions based on the Enterprise Security Concerns at the OS Levels
e. Decision on the naming conventions of the file systems.
f. Decision on the naming convention of Oracle Databases

Production to be associated with P
Pre-production to be associated with S
Training to be associated with T
Quality to be associated with Q
Development to be associated with D


Database name cannot have more than 4 letters and the last letter should indicate the environment.

Example:

SIEBEL may be SIB
Production SIBP
Stage SIBS
Training SIBT
Quality SIBQ
Development SIBD

g. Decision on the ARCHIVELOG mode for the database in various environments and recoverability of the databases and data loss tolerance levels by Business
h. Decision on the backup strategy for the database, using RMAN as the standard Oracle-provided tool for backup and restore. The backup and restore policy is to decide on the retention of the backup catalogs within RMAN and the MML catalogs, exceptions if any, the log file destination, the details to be logged, and the database and archive log file backups.
i. The restoration policy is to include the SLA in case of data file, tablespace and database recovery and also on the log files to be applied.
j. Decision on the MML software where the backup destination is Tape.
k. Decision on the access levels to the Servers which host Oracle Databases and MML software to the Oracle DBA, System Administrators and Backup Administrators.
l. Altering the Kernel Parameters and rebooting the server before creation of Database, basing on the resources available on the server and the impact of the Hardware Architectural Design of the server.

Database Standards to be followed to create a database:

This includes:

a. Installation of Oracle Binaries based on the required components as decided by Database Standards to be followed to create a database. In case the application vendor drives the installation, the vendor has to provide a document with all the standards.
b. Naming Standards of the database.
c. Naming of the non-default schema object naming standards in case the application is home grown
d. Naming standards and Security Policies of LISTENER to the database
e. Naming standards and security policies for TNS Names file at the client level and centralized policy to update TNS Names file on the clients as and when a new database is created.
f. Decision of using SQLNet.ora file on the server and client.
g. Decisions of the purging or archiving alert log, listener log, RDBMS messages files, user dump and core dump files and audit files.
h. Decisions on automated or user-driven database startup and shutdown scripts, as well as the listener.

Post-Database Creation Standards:

This includes:

a. Password security policy
b. Policy on grant of default roles and system privileges directly
c. Policy on creation of profile on better utilization of resources.
d. Policy on monitoring the databases
a. Monitoring RDBMS
b. Monitoring Backups
c. Monitoring Performance PERFSTAT and other tools (home grown or oracle or vendor supported)
d. Generation of statistics for the User Schemas
e. Scheduling Jobs through crontab or through dbms_job or any other schedulers or monitoring those jobs


Pre-Database Creation Standards-file system naming standards

RAID   Type of RAID                    Control files   Database files    Redo log files   Archive log files
0      Striping                        Avoid*          OK*               Avoid*           Avoid*
1      Shadowing                       OK              OK                Recommended      Recommended
0+1    Striping + Shadowing            OK              Recommended (1)   Avoid            Avoid
3      Striping with Static Parity    OK              Avoid (2)         Avoid            Avoid
5      Striping with Rotating Parity  OK              Avoid (2)         Avoid            Avoid

* RAID 0 does not provide any protection against failures. It requires a strong backup strategy.
(1) RAID 0+1 is recommended for database files because this avoids hot spots and gives the best possible performance during a disk failure. The disadvantage of RAID 0+1 is that it is a costly configuration.
(2) Avoid when heavy write operations involve the data files

Installable Oracle Software Bundles (Base Version and Patch Sets) are to be kept on NFS mounted file systems accessible from all the database servers

Oracle Home is to be created on a separate file system. If the server is to host multiple databases and the applications supported by those databases are not based on the same Oracle version, create the file systems so that each home can be chosen independently, and set each database's profile to the required home.

Example:

/opt/local/oracle/product/9.2.0.4
/opt/local/oracle/product/9.2.0.5
/opt/local/oracle/product/9.2.0.6
/opt/local/oracle/product/9.2.0.7

Data files are to be located on the file systems created

The minimum size of a file system for the redo logs should be 1 GB. This gives unfettered ability to tune redo log issues.

System

/opt/local/oracle/<SID_NAME>/oradata/SYSTEM/sid_SYS_NNN.dbf

Data files

/opt/local/oracle/<SID_NAME>/oradata/dataNNN/sid_TBSP_NNN.dbf

Index files

/opt/local/oracle/<SID_NAME>/oradata/indxNNN/sid_TBSP_NNN.dbf

Redo log

/opt/local/oracle/<SID_NAME>/oradata/redoNNN/sid_REDO_G_N_M_NN.log
(No two members of the same group live on the same file system)
Control Files

/opt/local/oracle/<SID_NAME>/oradata/control_N/sid_control_N.ctl

Dump, Script, and Log Files

/opt/local/oracle/<SID_NAME>/admin/bdump/
/opt/local/oracle/<SID_NAME>/admin/udump/
/opt/local/oracle/<SID_NAME>/admin/cdump/
/opt/local/oracle/<SID_NAME>/admin/rman/
/opt/local/oracle/<SID_NAME>/admin/exports/
/opt/local/oracle/<SID_NAME>/admin/scripts/

Archive Log Destinations

/opt/local/oracle/<SID_NAME>/admin/archive/dest_01/
/opt/local/oracle/<SID_NAME>/admin/archive/dest_02/