2008-10-03

ORACLE MEMORY

setting memory:
SGA
the sga_target parameter can be changed dynamically up to the value of sga_max_size, while sga_max_size can only be changed statically, i.e. with a database restart
sga_target > 0 - automatic memory management
and the entries:
  • db_cache_size,
  • java_pool_size,
  • large_pool_size,
  • shared_pool_size
define their minimum values

sga_target = 0 - manual memory management; the parameters above must be set by hand

sga_max_size - the value up to which sga_target can be raised manually

SHARED_POOL_SIZE - memory in bytes for shared SQL and PL/SQL and for dictionary data (Library Cache, Data Dictionary Cache)
DB_CACHE_SIZE - size of the data buffer cache for the default block size
PGA
PGA - (per process) memory holding cursors, variables, sort areas, join areas etc.
setting memory in the spfile:
ALTER SYSTEM SET sga_max_size=[size]M scope=spfile;
ALTER SYSTEM SET sga_target=[size]M scope=spfile;
checking memory usage:
SELECT name,bytes/(1024*1024) as MB FROM v$sgainfo;
SELECT component, min(final_size) low, (min(final_size/1024/1024)) lowMB,
max(final_size) high, (max(final_size/1024/1024)) highMB
FROM v$sga_resize_ops GROUP BY component ORDER BY component;
checking memory settings:
SHOW SGA
SELECT * FROM v$sgastat;
SELECT * FROM  v$pgastat;
SELECT name, value FROM v$parameter WHERE name in ('sga_max_size', 'shared_pool_size','db_cache_size', 'large_pool_size','java_pool_size');
SELECT * FROM v$sgastat WHERE name LIKE 'free memory';

ORACLE OPTIMIZATION

checking the number of blocks:
SELECT blocks, extents FROM dba_segments WHERE segment_name='[table]' AND owner='[owner]';
checking the number of blocks occupied by a segment:
SELECT blocks, empty_blocks FROM dba_tables WHERE table_name = '[table]' AND owner='[owner]';
space optimization:

reducing the number of blocks formatted for previously inserted data that was later deleted: the freed blocks are moved automatically to the FREELIST, i.e. the HWM is lowered, but the number of blocks allocated to the segment stays unchanged; in addition, rows migrate into free space in place of the empty blocks, so row_id and block_id change.
ALTER TABLE [table] MOVE;
afterwards the Automatic Segment Space Management process should clean everything up; of course the tablespace must use Extent Management Local - if not, do it manually as below

reducing the number of blocks allocated to a segment:
ALTER TABLE [table] ENABLE ROW MOVEMENT;
ALTER TABLE [table] SHRINK SPACE;
calculating the size of the undo tablespace:
undo_blk/s - number of undo blocks per second,
86400 - number of seconds in a day
undo_blk/s = SELECT (SUM(undoblks))/ SUM ((end_time - begin_time) * 86400) FROM v$undostat;
ts_size = (show parameter undo_retention) * undo_blk/s * db_block_size + db_block_size
or the query:
SELECT (UR * (UPS * DBS)) + (DBS) AS "Bytes" FROM (SELECT value AS UR FROM v$parameter WHERE name = 'undo_retention'), (SELECT (SUM(undoblks)/SUM(((end_time - begin_time)*86400))) AS UPS FROM v$undostat), (select block_size as DBS from dba_tablespaces where tablespace_name= (select value from v$parameter where name = 'undo_tablespace'));
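The arithmetic behind the query above can be sketched in Python; the sample numbers (retention, undo rate, block size) are illustrative assumptions, not values from any real database:

```python
# Undo tablespace sizing formula from the notes above:
#   size = undo_retention * undo_blocks_per_sec * db_block_size + db_block_size

def undo_ts_size(undo_retention_sec, undo_blocks_per_sec, db_block_size):
    """Return the estimated undo tablespace size in bytes."""
    return undo_retention_sec * undo_blocks_per_sec * db_block_size + db_block_size

# e.g. 900 s retention, 50 undo blocks/s, 8 KiB blocks (assumed values)
size = undo_ts_size(900, 50, 8192)
print(size)                  # bytes
print(round(size / 1024 / 1024, 2))  # MB
```

Plug in the values returned by v$undostat and `show parameter undo_retention` for your instance.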
checking hard parses:
select pa.SID, pa.VALUE "Hard parses",ex.VALUE "Execute Count" from v$sesstat pa,v$sesstat ex where pa.sid = ex.sid and pa.statistic#=(select statistic# from v$statname where name ='parse count (hard)') and ex.statistic#=(select statistic# from v$statname where name = 'execute count') order by "Hard parses" desc;
select * from (select sql_text,parse_calls,executions from v$sqlstats order by parse_calls) where rownum < 10; #if parse_calls is close to executions, those statements are parsed too often

ORACLE UPGRADE

basic upgrade procedure for version 10:

1. read the README.html document
2. check the System Requirements
3. set the $ORACLE_HOME environment variable
4. stop all database services
5. install the patch on the database binaries (modifies/adds only binary files in $ORACLE_HOME)
6. run the database pre-upgrade check script:
STARTUP UPGRADE
SPOOL preupgrade_check.log
@$ORACLE_HOME/rdbms/admin/utlu102i.sql
SPOOL OFF
7. check the spool output and verify; after any corrections run the script again
8. perform the database upgrade (modifies objects in the SYSTEM tablespace)
STARTUP UPGRADE
SPOOL patch.log
@$ORACLE_HOME/rdbms/admin/catupgrd.sql
SPOOL OFF
9. shut down the database:
SHUTDOWN IMMEDIATE
10. start the database and compile the packages:
STARTUP
@$ORACLE_HOME/rdbms/admin/utlrp.sql
11. check components after the upgrade:
SELECT comp_name,version,status FROM sys.dba_registry;
GENERAL:
catupgrd.sql - runs the upgrade scripts, which in turn create and alter dictionary tables, views, types, packages etc.,
upgrade time per component:
SELECT * FROM registry$log;


basic upgrade procedure for version 9:
read the README.html document
check the System Requirements

ALTER SYSTEM SET java_pool_size=150M scope=spfile
ALTER SYSTEM SET shared_pool_size =150M scope=spfile
stop all database services
install the patch on the database binaries

upgrade the installed database
  • sqlplus /nolog
  • sql> STARTUP MIGRATE
  • sql> SPOOL PATCH.LOG
  • sql> @$ORACLE_HOME/rdbms/admin/catpatch.sql
  • sql> SPOOL OFF
restart the database
in case of problems, fix them and run the script again
run the package compilation script
  • sql> @$ORACLE_HOME/rdbms/admin/utlrp.sql
checking installed patches:
@$ORACLE_HOME/opatch/opatch lsinventory
@$ORACLE_HOME/opatch/opatch lsinventory -detail

ORACLE RMAN

for more, read the documentation

symbols:

%s  set nr
%p  piece of set nr
%c  copy nr for multi copy
%d  database name
%I  database id
%t  time index in sec
%D  day of month
%M  month
%Y  year
%T  year, month, day (YYYYMMDD)
%u  eight-digit name consisting of set nr and time
%U  unique file name (necessary for many channels), consists of database id, day nr, month, year, file nr
%F  for controlfile: c-[db_ID]-[YYYYMMDD]-[hex sequence from 00 to ff - max 256 different values]
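The %F naming rule can be illustrated in Python; the DBID, date, and sequence below are invented values, used only to show how the pieces combine:

```python
# %F controlfile autobackup name: c-<DBID>-<YYYYMMDD>-<two-digit hex 00..ff>
# The hex sequence allows at most 256 distinct autobackup names per day.

def controlfile_autobackup_name(dbid, yyyymmdd, seq):
    """Compose a %F-style autobackup file name (illustrative sketch)."""
    if not 0 <= seq <= 0xff:
        raise ValueError("sequence must be in 00..ff")
    return "c-%d-%s-%02x" % (dbid, yyyymmdd, seq)

print(controlfile_autobackup_name(1234567890, "20081003", 11))
# -> c-1234567890-20081003-0b
```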

connect to database without catalog using controlfile:

rman> connect target /
rman> connect target [sys_user]/[pass]@[sid_target]

connect to the catalog, then to the database

(one catalog may register many databases):
rman> connect catalog [rman_user]/[pass]@[sid_rman_base]
rman> connect target [sys_user]/[pass]@[sid_target]
or:
rman catalog [rman_user]/[pass]@[sid_rman_base] target [sys_user]/[pass]@[sid_target]

settings:

rman> show all;
rman> configure retention policy to redundancy [number_of_files];
rman> configure retention policy to recovery window of [number of days] days;
rman> configure device type disk backup type to compressed backupset;
rman> configure device type disk backup type to [copy|backupset];
rman> configure [parameter] clear; #reset the given parameter to its default

default settings:

CONFIGURE RETENTION POLICY TO REDUNDANCY 1; #default
CONFIGURE BACKUP OPTIMIZATION OFF; #default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; #default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/oracle/rmandata/%F.rbk';
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; #default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; #default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; #default
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT   '/oracle/rmandata/data-%I-%T-%s-%p-%U.rbk';
CONFIGURE MAXSETSIZE TO UNLIMITED; #default
CONFIGURE ENCRYPTION FOR DATABASE OFF; #default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; #default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; #default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/oracle/product/10.2.0/db_1/dbs/snapcf_sow01.f'; #default

backup script example:

run {
backup full tag "FULL"
filesperset 20
database;
sql 'alter system archive log current';
backup format '/oracle/rmandata/arc-%I-%T-%s-%p-%U.rbk'
archivelog all delete input;
}
! if you set CONFIGURE CONTROLFILE AUTOBACKUP ON, every "BACKUP" command automatically backs up the controlfile as well, so we get two entries per backup in the output of "list backup";
in the example above we get four entries
! setting tags is not necessary, because the default tag is easy to find: TAGYYYYMMDDTHH24MISS
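The default tag format mentioned above can be reproduced for a fixed timestamp; this is only an illustration of the naming pattern, not an RMAN API:

```python
# Default RMAN tag format: TAGYYYYMMDDTHH24MISS
from datetime import datetime

def default_rman_tag(ts):
    """Build a TAGYYYYMMDDTHH24MISS-style tag for a given timestamp."""
    return ts.strftime("TAG%Y%m%dT%H%M%S")

print(default_rman_tag(datetime(2008, 10, 3, 14, 30, 5)))
# -> TAG20081003T143005
```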


check backup progress:

SELECT status,operation,object_type,row_type,to_char(start_time,'YYYY-MM-DD HH24:MI:SS)') FROM v$rman_status ORDER BY start_time;

last 1 day:
SELECT rs.status,rs.operation,rs.object_type,rs.row_type,to_char(rs.start_time,'YYYY-MM-DD HH24:MI:SS'),ro.output FROM v$rman_status rs,v$rman_output ro WHERE rs.start_time >= trunc(sysdate-1) AND rs.stamp=ro.stamp ORDER BY rs.start_time,ro.recid;

last 10 minutes:
SELECT rs.status,rs.operation,rs.object_type,rs.row_type,to_char(rs.start_time,'YYYY-MM-DD HH24:MI:SS'),ro.output FROM v$rman_status rs,v$rman_output ro WHERE rs.start_time >= sysdate-10/(24*60) AND rs.stamp=ro.stamp ORDER BY rs.start_time,ro.recid;

last 1 day (short version):
SELECT rs.stamp,rs.status,rs.object_type,rs.operation,rs.row_type,trunc(rs.input_bytes/1073741824,2)||' GB ' as input ,trunc(rs.output_bytes/1073741824,2) || ' GB ' as output , to_char(rs.start_time,'MM-DD HH24:MI:SS') as starttime,to_char(rs.end_time,'MM-DD HH24:MI:SS') as endtime FROM v$rman_status rs WHERE rs.start_time >= trunc(sysdate-1) ORDER BY rs.start_time;

last 1 day (long version):
SELECT rs.stamp, rs.status, rs.object_type, rs.operation, rs.row_type, trunc(rs.input_bytes/1073741824,2) || ' GB ' as bytes_input, trunc(rs.output_bytes/1073741824,2) || ' GB ' as bytes_output, to_char(rs.start_time,'YYYY-MM-DD HH24:MI:SS') as starttime, to_char(rs.end_time,'YYYY-MM-DD HH24:MI:SS') as endtime, ro.output FROM v$rman_status rs, v$rman_output ro WHERE rs.start_time >= trunc(sysdate-1) AND rs.stamp=ro.stamp ORDER BY rs.start_time,ro.recid;

list:

rman> list all script names;
rman> list copy of database;
rman> list backup [summary];
rman> list backup [summary] [completed after "04-AUG-11"];
rman> list backup by file;

rman> list backup of database [summary];
rman> list backup of database [summary] [completed after "04-AUG-11"];
rman> list backup of database by backup tag [tag_name];
rman> list backup of tablespace;
rman> list backup of controlfile;
rman> list backup of spfile;

rman> list backup of database archivelog all;
rman> list backup of archivelog all [summary];

rman> list backup of datafile '[path]' | [nr] [summary];

rman> list backupset [nr],[nr],[nr];
rman> list archivelog all; #list current archivelog

crosscheck:

rman> crosscheck backup | archivelog | copy #checks whether the backup files still exist; they get status AVAILABLE or EXPIRED (you can test it by moving the backup files to another location and then bringing them back)
rman> list expired backupset; #backups that were not found during a crosscheck; the crosscheck must run first
rman> list expired backup of archivelog summary;
rman> list expired copy of archivelog all;
rman> list incarnation of database;

report:

rman> report schema;
rman> report need backup; #what needs to be backed up according to the current backup policy
rman> report need backup days 7;
rman> report need backup redundancy 7;
rman> report need recovery window of 7 days;
rman> report obsolete; #old files according to the current policy (if you change the policy, the report changes as well)
rman> report obsolete redundancy 7;
rman> report obsolete recovery window of 7 days; 

delete:

rman> delete obsolete; #remove obsolete backups according to the current backup policy
rman> delete expired archivelog all;

backup:

rman> backup as copy database; #copies the files in the same format as the datafiles
rman> backup as backupset database; #stores the files in backup sets
rman> backup database;

other:

sql> select obsolete,backup_type,file_type,fname from v$backup_files;
sql> select * from v$rman_status;
sql> select * from v$rman_output;
rman> run {execute script [script_name];}
rman> run {sql '[sql_command]'}
rman> run {host '[os_command]'}
rman> delete script [script_name];
rman> create script [script_name] {[script_commands];}

create catalog database:

sql> create tablespace [ts_rman] datafile '[file]' size [size]m autoextend on next [size]m;
sql> create user [rman_user] identified by [password] default tablespace [ts];
sql> grant resource,connect,recovery_catalog_owner to [rman_user];

create the catalog structure (in schema [rman_user]):

rman catalog [rman_user]/[pass]@[sid_rman_base]
or
rman> connect catalog [rman_user]/[pass]@[sid_rman_base]
rman> create catalog

register database in catalog:

(new entries in table [rman_user].db)
rman catalog [rman_user]/[pass] target [sys_user]/[pass]@[sid]
or
rman> connect catalog [rman_user]/[pass]@[sid_rman_base]
rman> connect target [sys_user]/[pass]@[sid]
you get output:
RMAN-06004: ORACLE error from recovery catalog database: RMAN-20001: target database not found in recovery catalog
rman> register database;

description of the tables in schema [rman_user]:

DB #info about registered databases
DBINC #info about databases; DBINC_KEY - database incarnation key
DF,DFATT #info about database files
SCR,SCRL #scripts
ORL #info about redo logs
TS,TSATT #info about tablespaces
TF,TFATT #info about temp tablespaces

views:

V$BACKUP_CORRUPTION #damaged blocks in backupsets
V$COPY_CORRUPTION #damaged blocks in copy files
[rman_user].RC_STORED_SCRIPT
[rman_user].RC_STORED_SCRIPT_LINE

backupsets:

FULL BACKUP #full backup of all blocks; cannot be used as a base for incremental backups
the default is an incremental backupset
levels can be 0,1,2,3,4
level 0 backupset #all blocks, like FULL BACKUP
level 1 backupset #contains the blocks changed since the parent backup (based on the SCN written inside each block: if a block's SCN is newer than the parent backup's SCN, the block is copied)
INCREMENTAL BACKUP SET #level-n backupset containing the blocks modified since the last backupset of level n or n-1
INCREMENTAL CUMULATIVE #level-n backupset containing the blocks modified since the last backupset of level n-1 or lower
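The SCN rule described above can be sketched as a small simulation. All SCN values here are invented, and the model is deliberately simplified (one SCN per block, one SCN per backup): a block is copied when its SCN is newer than that of the backup the incremental is based on; differential level 1 is based on the most recent level 1 or 0, cumulative level 1 on the most recent level 0.

```python
# Simplified sketch of RMAN's incremental block selection by SCN.

def blocks_to_copy(block_scns, base_scn):
    """Return ids of blocks whose SCN is newer than the base backup's SCN."""
    return {blk for blk, scn in block_scns.items() if scn > base_scn}

# invented per-block SCNs
block_scns = {1: 100, 2: 250, 3: 420, 4: 510}
level0_scn = 200        # SCN of the last level 0 backup
last_level1_scn = 400   # SCN of the last level 1 (differential) backup

differential = blocks_to_copy(block_scns, last_level1_scn)  # since last level 1
cumulative = blocks_to_copy(block_scns, level0_scn)         # since level 0

print(sorted(differential))  # [3, 4]
print(sorted(cumulative))    # [2, 3, 4]
```

The cumulative backup is larger but makes recovery simpler: only level 0 plus the last cumulative must be applied, instead of the whole chain of differentials.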

options:

BACKUP INCREMENTAL LEVEL=0 DATABASE
BACKUP INCREMENTAL LEVEL=1 CUMULATIVE DATABASE
BACKUP (TABLESPACE [ts_name1],[ts_name2])
BACKUP ARCHIVELOG FROM TIME '[time]' | FROM LOG SEQ=[nr_seq]
BACKUP ARCHIVELOG UNTIL TIME '[time]' | UNTIL LOG SEQ=[nr_seq]
BACKUP ARCHIVELOG ALL DELETE INPUT #backup with remove
COPIES [nr] FORMAT '[1st_copy]','[2nd_copy]' #number of copies (up to 4)

recovery all:

sql> STARTUP MOUNT;

rman> RESTORE DATABASE; #or
rman> RESTORE DATABASE UNTIL TIME "to_date('2009/08/18 16:00','YYYY/MM/DD HH24:MI')"; #or
rman> RESTORE DATABASE FROM TAG="[tag_name]"; #check TAG from command "list backup"
rman> RECOVER DATABASE; #applies the archived redo logs from the backup
rman> ALTER DATABASE OPEN;

! MAKE FULL BACKUP

recovery tablespace:

rman> SQL 'ALTER TABLESPACE [ts_name] OFFLINE';
rman> RESTORE TABLESPACE [ts_name];
rman> RECOVER TABLESPACE [ts_name];
rman> SQL 'ALTER TABLESPACE [ts_name] ONLINE';

recovery datafile:

rman> SQL 'ALTER DATABASE DATAFILE [file_nr] OFFLINE';
rman> RESTORE DATAFILE [file_nr];
rman> RECOVER DATAFILE [file_nr];
rman> SQL 'ALTER DATABASE DATAFILE [file_nr] ONLINE';

cold backup to disk (without archived logs):

rman> SHUTDOWN IMMEDIATE;
rman> STARTUP MOUNT;
rman> CONFIGURE DEFAULT DEVICE TYPE TO DISK;
rman> CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/[backup_datafile_path]/ora_df%t_s%s_s%p' MAXPIECESIZE 4000M;
rman> CONFIGURE DEVICE TYPE DISK PARALLELISM 2;
rman> CONFIGURE CONTROLFILE AUTOBACKUP ON;
rman> CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/[backup_controlfile_path]/%F';

rman> BACKUP DATABASE;
rman> ALTER DATABASE OPEN;

recovering a cold backup (without archived logs):

remove all datafiles etc.
rman> set DBID=[dbid]
rman> set controlfile autobackup format for device type disk to '[backup_controlfile_path]/%F';
rman> restore controlfile from autobackup;
rman> alter database mount;
rman> recover database; (! produces errors RMAN-03002, RMAN-06054)
rman> recover database noredo;
rman> alter database open resetlogs;

recover until timestamp

set variables:
export NLS_LANG=american_america.AL32UTF8
export NLS_DATE_FORMAT='yyyy-mm-dd hh24:mi:ss'

rman> run {
set until time "to_date('2009-09-25:18:18','yyyy-mm-dd:hh24:mi')";
restore database;
recover database;
alter database open resetlogs; }

! all the above commands must be placed in one run block, otherwise rman recovers to the current state instead of to the point in time

! if we have recovered the database to a timestamp and opened it with the RESETLOGS option, and then we notice the data are not correct and try to recover to another timestamp, we get the error:
RMAN-06004: ORACLE error from recovery catalog database: RMAN-20207: UNTIL TIME or RECOVERY WINDOW is before RESETLOGS time
so what we have to do is:
rman> list incarnation of database; #we get a list of all incarnations
when we open the database we get:
database opened
new incarnation of database registered in recovery catalog
starting full resync of recovery catalog
full resync complete

so, continuing:
start the database in mount mode:
rman> reset database to incarnation [Inc Key];
rman> run {
set until time "to_date('2009-09-26 23:32','yyyy-mm-dd hh24:mi')";
restore controlfile;
alter database mount;
restore database;
recover database;}
rman> alter database open resetlogs;


registering backup files from a new location:

rman> catalog start with '[backup_datafile_path]'

example scripts:

rman> create script [script_name] {
rman> allocate channel [channel_name] type disk;
rman> set limit channel [channel_name] kbytes=10000; --limit of a backup piece
rman> backup
rman> filesperset 20 --number of files in one backupset (default: 4 datafiles, 16 archivelogs),
-- ! one backupset may include many files, but one file cannot be in many backupsets
rman> format '[dir_path/file_%s_%p_%t]' -- %s-backupset nr, %p-backup piece nr, %t-4-byte timestamp
(archivelog all);
rman> release channel [channel_name];
}

rman> create script [script_name] {
rman> allocate channel [channel_name] type disk;
rman> copy
rman> datafile '[file_path.dbf]' to '[file_path.bck]',
rman> datafile '[file_path.dbf]' to '[file_path.bck]',
rman> current controlfile to '[controlfile_path.bck]'
rman> tag='[tag_name]';
rman> release channel [channel_name];
}

rman> create script [script_name] {
rman> allocate channel [channel_name] type disk;
rman> copy controlfile to '[path_control.bck]';
rman> release channel [channel_name];
}

rman> create script [script_name] {
rman> allocate channel [channel_name] type disk;
rman> backup
rman> filesperset 3
rman> tag='[tag_name]'
rman> format '[dir_path/fileincrement_%s_%p_%d]'
(database include current controlfile);
rman> release channel [channel_name];
}

rman> create script [script_name] {
rman> allocate channel [channel1_name] type disk;
rman> allocate channel [channel2_name] type disk;
rman> set limit channel [channel1_name] kbytes 10000;
rman> set limit channel [channel2_name] kbytes 10000;
rman> backup full
rman> filesperset 3
rman> tag='[tag_name]'
rman> format '[dir_path/fileincrement_%s_%p_%d]'
(datafile 1,2,3 channel [channel1_name])
(datafile 4,5,6 channel [channel2_name]);
rman> release channel [channel1_name];
rman> release channel [channel2_name];
}

rman> create script [script_name] {
rman> allocate channel [channel_name1] type disk;
rman> allocate channel [channel_name2] type disk;
rman> backup
rman> format '/[backup_dir]/controlfile_t%t_s%s_p%p'
rman> (current controlfile);
rman> backup
rman> incremental level 0
rman> tag db_intr_0
rman> filesperset 5
rman> format '/[backup_dir]/datafile_t%t_s%s_p%p'
rman> (database);
rman> sql 'alter system archive log current';
rman> backup
rman> filesperset 20
rman> format '/[backup_dir]/archivelog_t%t_s%s_p%p'
rman> (archivelog all delete input);
rman> release channel [channel_name1];
rman> release channel [channel_name2];
}

ORACLE DPUMP

1 Database directory

  • CREATE OR REPLACE DIRECTORY [oracle_dir] as '[path_to_directory_on_disk]';
  • DROP DIRECTORY [oracle_dir]
  • SELECT directory_name, directory_path FROM dba_directories;
  • GRANT READ, WRITE ON DIRECTORY [oracle_dir] TO [user]

2 Dpump Order

  1. type definitions
  2. table definitions
  3. table data
  4. table indexes
  5. integrity constraints, views, procedures, triggers
  6. bitmap, function-based, domain indexes

3 Views

  • SELECT job_name,owner_name,state FROM dba_datapump_jobs;

4 expdp:

  • EXPDP [user]/[pass] SCHEMAS=[schema] LOGFILE=[log_file] DIRECTORY=[directory] DUMPFILE=[dump_file] PARALLEL=[nr] #for parallel writes to many files use the %U option as a suffix in the file name
  • expdp [user]/[pass] directory=[dir] dumpfile=[file]_%U.expdp logfile=[logfile] filesize=4G \ job_name=[name] cluster=N parfile=[file].par

5 impdp:

  • IMPDP [user]/[pass] DUMPFILE=[dump_file] PARFILE=[parameter_file]

6 impdp parameter file

REMAP_SCHEMA=[old_user]:[new_user]
REMAP_TABLESPACE=[old_ts]:[new_ts]
DIRECTORY=[oracle directory]
LOGFILE=[logfile]
PARALLEL=[a number of processes]
STATUS=[time in sec]
JOB_NAME=[job name]
schemas=[schema]
include=TABLE:"LIKE '[table_prefix]%'"
include=VIEW:"LIKE '[view_prefix]%'"
include=SYNONYM:"LIKE '[synonym_prefix]%'"
include=PACKAGE:"LIKE '[package_prefix]%'"
include=INDEX:"LIKE '[index_prefix]%'"
include=TRIGGER:"LIKE '[trigger_prefix]%'"
include=SEQUENCE:"LIKE '[sequence_prefix]%'"

7 Parameters

7.1 include/exclude param

  • include #only objects matching include will be exported/imported
  • exclude #all objects except those listed by the exclude parameter will be exported/imported

7.2 parallel param

  • oracle white paper - Parallel Capabilities of Oracle Data Pump
  • only for Enterprise Edition
  • significantly reduces the elapsed time of large jobs
  • splits the job between multiple worker processes
  • MCP (Master Control Process) controls the pool of worker processes

7.2.1 remarks

  • set the degree of parallelism to two times the number of CPUs
  • for expdp: parallel <= number of dump files
  • for impdp: parallel <= number of files in the dump file set
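The rule of thumb above can be expressed as a one-liner; this is only the heuristic from these notes (2x CPUs, capped by the dump-file count), not an official Oracle formula, and should be tuned per environment:

```python
# Heuristic for choosing the Data Pump PARALLEL degree:
# start from 2 * CPUs, but never exceed the number of dump files,
# since each expdp worker needs exclusive access to a dump file.

def suggested_parallel(cpus, dump_files):
    return min(2 * cpus, dump_files)

print(suggested_parallel(4, 16))  # CPU-bound: 8
print(suggested_parallel(4, 4))   # file-bound: 4
```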

7.2.2 expdp

  • in a typical export there are both data and metadata; the first worker process unloads the metadata (tablespaces, schemas, grants, roles, tables, indexes etc.) while all the rest unload the data; if the metadata worker finishes and there are still data objects to unload, it starts unloading data too

7.3 dumpfile param

  • PX processes (Parallel Execution Processes)
  • parallel param <= dumpfile param
  • during expdp each worker or PX process requires exclusive access to its dump file, so having fewer dump files than the degree of parallelism means some workers or PX processes will be unable to write the information they are exporting
  • during impdp the workers and PX processes can all read from the same files, but the parallel parameter should not be significantly larger than the number of files in the dump file set

7.4 version param

  • expdp version=10.2

8 Checking current state of dpump process

  • SELECT job_name,state FROM v$datapump_job
  • SELECT * FROM v$datapump_session

9 Estimating dump size:

  • EXPDP [user]/[pass] FULL=y ESTIMATE_ONLY=y DIRECTORY=[oracle_dir] logfile=[logfile]
  • ! warning
    during estimation the dump size will be lower than the original db size, because for indexes only their definitions are written to the dump and the indexes are rebuilt at the end; note that an index is based on ROWIDs, which change during the import

10 Importing tables:

  • IMPDP [user]/[pass] TABLES=[table] REMAP_SCHEMA=[src_schema]:[dst_schema] DIRECTORY=[directory] PARALLEL=[nr] DUMPFILE=[dumpfile]

11 Dpump by NETWORK_LINK:

source db activity:

  • CREATE DATABASE LINK [link_name] CONNECT TO [src_user] IDENTIFIED BY [src_password] USING [src_tnsname]
  • choose IMPDP or EXPDP mode (the difference: IMPDP connects to the source db, fetches the data and writes it
    directly into the destination db, into the specified schema; EXPDP, on the other hand, writes the data only to a file)
  • IMPDP [dst_user]/[dst_password] NETWORK_LINK = [link_name] DIRECTORY = [dir_name] SCHEMAS = [schema_name] LOGFILE = [logfile_name]
  • EXPDP [dst_user]/[dst_password] NETWORK_LINK = [link_name] DIRECTORY = [dir_name] SCHEMAS = [schema_name] LOGFILE = [logfile_name] DUMPFILE = [file_name]

12 Console:

  • EXPDP/IMPDP [user]/[pass] ATTACH=[job_name]
  • commands:
    COMMAND               DESCRIPTION
    help
    C-c                   log mode => console mode
    exit_client           console mode => system
    continue_client       console mode => log mode
    add_file=file1,file2  add files
    kill_job              stop the job and exit
    parallel=[nr]         change the number of processes
    start_job             start a stopped job
    status
    stop_job[=IMMEDIATE]  stop the job temporarily

13 ERROR

13.1 ORA-39087

  • invalid directory

13.2 ORA-29283

  • missing grants on the directory

13.3 ORA-31633

[ symptom ]

  • unable to create master table "SYSTEM.EXPDP_MANUAL"

[ solution ]

  • drop table system.expdp_manual

ORACLE LISTENER

Protocol (B28317-02/4)

IPC
  PROTOCOL - specify ipc as the value. A protocol used by client applications that reside on the same node as the listener to communicate with the database. IPC can provide a faster local connection than TCP/IP.
  KEY - specify a unique name for the service. Oracle recommends using the service name or the Oracle System Identifier (SID) of the service.
  Example: (PROTOCOL=ipc)(KEY=sales)
TCP/IP
  HOST - specify the host name or IP address of the computer.
  PORT - specify the listening port number.
  Example: (PROTOCOL=tcp)(HOST=sales-server)(PORT=1521)

lsnrctl

show displaymode
set displaymode compat/normal/verbose/raw
services #information about the database services, instances, and service handlers (dispatchers and dedicated servers) to which the listener forwards client connection requests
lsnrctl [command] [listener_name]
lsnrctl help
lsnrctl help [command]

Misc

  • Always verify the service whose status is READY, then on the client side put the SERVICE_NAME value in tnsnames.ora

MsSQL (Database Gateway)

  1. http://docs.oracle.com/cd/B28359\_01/gateways.111/b31043/conf\_sql.htm#CIHGDGFC
  2. Install gateway from database package ../gateways/runInstaller
  3. edit $ORACLE_HOME/dg4msql/admin/initdg4msql.ora
    • It is possible to create many files ex.: initdg4msql.ora, initdg4msql1.ora, initdg4msql2.ora for many connections
    • HS params: http://docs.oracle.com/cd/B28359\_01/gateways.111/e10311/apd.htm
      HS_FDS_CONNECT_INFO=[MsSQL_hostname]:1433//[db_name]
      HS_FDS_RECOVERY_ACCOUNT=RECOVER #MsSQL account
      HS_FDS_RECOVERY_PWD=RECOVER #MsSQL account password
      HS_FDS_TRANSACTION_LOG=[table_name] #MsSQL table name for transaction logs
      HS_FDS_TRACE_LEVEL=OFF/ON/DEBUG
  4. MsSQL - create RECOVER login
  5. MsSQL - create RECOVER schema
  6. MsSQL - create RECOVER user
    • Owned Schemas: RECOVER
    • Database role membership: db_datareader, db_datawriter, db_ddladmin
  7. create the MsSQL table HS_TRANSACTION_LOG in the RECOVER schema with the columns below (if the table name is different, the HS_FDS_TRANSACTION_LOG parameter must be set in initdg4msql.ora):
    • GLOBAL_TRAN_ID char(64) not null
    • TRAN_COMMENT char(255)

    grant ALL on the above table to PUBLIC

    To automate the above steps, run the script $ORACLE_HOME/dg4msql/admin/dg4msql_tx.sql as the RECOVER user on MsSQL

    • isql -U[recover_account] -P[recover_account_pass] [-Sserver] -i[dg4msql_tx.sql]
    • osql -U[recover_account] -P[recover_account_pass] -d[dbname] -i[script]
    • MsSQL does not accept a semicolon at the end of a line; put "go" on the last line
  8. run the script for the superadmin translation views:
    • create view dbo.ALL_CONS_COLUMNS_vw and grant SELECT to PUBLIC
    • create view dbo.ALL_IND_COLUMNS_vw and grant SELECT to PUBLIC
    • create view dbo.USER_CONS_COLUMNS_vw and grant SELECT to PUBLIC
    • create view dbo.USER_IND_COLUMNS_vw and grant SELECT to PUBLIC

    $ORACLE_HOME/dg4msql/admin/dg4msql_cvw.sql
    isql -U[system_admin_user] -P[system_admin_pass] [-Sserver] [-ddatabase] -e -i dg4msql_cvw.sql

  9. edit listener.ora
    SID_LIST_MSSQL=
      (SID_LIST=(SID_DESC=(SID_NAME=[gateway_sid])(ORACLE_HOME=[oracle_home_path])(PROGRAM=dg4msql)))
    MSSQL=
      (ADDRESS_LIST=(ADDRESS=(PROTOCOL=tcp)(HOST=[local_hostname])(PORT=[port#])))
    
  10. edit tnsnames.ora
    • for CONNECT_DATA=(SID=[gateway_sid]) # gateway_sid must be the same as listener.ora SID
    • add parameter HS=OK (Oracle Heterogeneous Service)
    [ALIAS]=
     (DESCRIPTION=
       (ADDRESS= (PROTOCOL=TCP)(HOST=[local_hostname])(PORT=[port#]))
       (CONNECT_DATA=(SID=[gateway_sid]))
       (HS=OK))
    

ERROR

ORA-28545

[ symptom ]

ORA-28545: error diagnosed by Net8 when connecting to an agent
Unable to retrieve text of NETWORK/NCR message 65535
ORA-02063: preceding 2 lines from...

[ solution ]

  • listener is not active

ORA-28513

[ symptom ]

ORA-28545: error diagnosed by Net8 when connecting to an agent
Unable to retrieve text of NETWORK/NCR message 65535
ORA-02063: preceding 2 lines from...

[ solution ]

  • check file name $ORACLE_HOME/dg4msql/admin/initdg4…

ERROR

ORA-01150 #check listener.ora
ORA-12154 #check tnsnames.ora; the client does not even try to connect to the remote server
Linux Error: 110: Connection timed out (for lsnrctl status) #compare /etc/hosts with listener.ora
ORA-12560 #for Windows, check the service in services.msc
ORA-12514 TNS:listener does not currently know of service requested in connect #check lsnrctl status on the server side; pay attention to the service names
ORA-12545 (client can't connect to server) #for a cluster, check the client-side /etc/hosts and use the VIP addresses

ORACLE SQLPLUS

col [column] format a[nr]
set heading off
set feedback off
set linesize [nr] - value between 1-32767
set termout off
set pagesize [nr] - "0" means no limit
set serveroutput on size unlimited - no buffer limit for dbms_output output
set numwidth 50 - disables rounding of numbers
config $oracle_home/sqlplus/glogin.sql:
set linesize 1000
set pagesize 1000
set sqlprompt "_user _connect_identifier> "
connecting without tnsnames.ora:
- client 10g:
sqlplus user/password@//hostname:port_nr/db_sid
- client 9i:
sqlplus user/password@"(description=(address_list=(address=(protocol=tcp)(host=hostname)(port=port_nr)))(connect_data=(sid=db_sid)))"
sqlplus on linux with command history and line editing:
install rlwrap
rlwrap sqlplus