Vidya Bala's Blog


Monday, November 27, 2006

recover database until cancel and online redos

On a NOARCHIVELOG mode database, RECOVER DATABASE UNTIL CANCEL will still prompt for archived logs.
This had me puzzled initially, until I looked at the sequence number and realized that it corresponds to an online redo log (even though the recovery command prompted for an archived log). If the appropriate online redo log is entered at the prompt, the incomplete recovery will succeed.
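A sketch of what this looks like at the prompt (the SCN, sequence number, and file paths here are hypothetical); when the suggested archived log corresponds to a sequence that is still sitting in an online redo log, supply the online log file instead:

```
SQL> RECOVER DATABASE UNTIL CANCEL;
ORA-00279: change 1234567 generated at 11/27/2006 10:00:00 needed for thread 1
ORA-00289: suggestion : /u01/app/oracle/arch/arch_1_42.arc
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
/u01/oradata/ORCL/redo02.log
Log applied.
Media recovery complete.
SQL> ALTER DATABASE OPEN RESETLOGS;
```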

Wednesday, November 22, 2006

10g Release 3 Forms and Reports Services?

While helping a colleague of mine install Forms and Reports Services, I realized that although 10g Release 3 is available for download, the Forms and Reports Services component is not available for 10g Release 3 yet (unless I missed something). I was wondering if there is any talk of an ETA for Forms and Reports Services on 10g Release 3.

Informatica (PowerCenter) Key Concepts (Part I)

This series will review the fundamentals of the PowerCenter (7.1.2) suite.
The key components of Informatica are the PowerCenter Client Tools, the Repository Server, the PowerCenter Server, and the Repository. The Repository Server and the PowerCenter Server constitute the ETL layer, which performs the ETL process.

Below is what the client-to-server communication flow looks like:
The PowerCenter client sends a request to the Repository Server.
The Repository Server verifies the connectivity information of the target database.
The Repository Server starts a Repository Agent process.
The Repository Server uses the Repository Agent process to communicate with the Repository.
The PowerCenter Server process communicates with the Repository to run scheduled workflows. When a workflow is run manually, the PowerCenter Server uses the instructions configured in the mapping (stored in the Repository) to read, transform, and write data.
(What workflows and mappings are is discussed below.)

Below are the key PowerCenter Client Tools:

a) Repository Manager: to create and manage repositories, and to create and manage users.
b) Repository Server Administration Console: to manage the Repository Server.
c) Designer tools:
(i) Source Analyzer: to import source definitions.
(ii) Warehouse Designer: to import target definitions.
(iii) Transformation Developer: to build transformations. A transformation is a repository object that generates, modifies, or passes data, and transformations built here can be reused.
(iv) Mapping Designer: a mapping is a set of source and target definitions linked by transformation objects; it represents the data flow from source to target. The Mapping Designer is used to create mappings.
(v) Mapplet Designer: a mapplet is a set of objects that you use to build reusable transformation logic.
(vi) Workflow tools: a workflow is a set of instructions that tells the PowerCenter Server how to execute a task. The Workflow Manager is used to create and run workflows; the Workflow Monitor is used to monitor running workflows.
(If you are new to the ETL world the above concepts may be overwhelming in the beginning, but as we go through the other chapters you will be able to see how these concepts relate to the ETL process.)

Monday, November 20, 2006

Informatica for Newbies Series

With all the rush before the holidays to get things done, I really wanted to start this series before the holidays. As mentioned in my earlier post, even for somebody with ETL experience it is difficult to evaluate different tools, because not many are available out there to download and evaluate. On the same note, except for manuals there is not much information on the web about ETL tools in general. I am hoping this series will help a few newbies to Informatica (to jumpstart their work with Informatica) and to ETL concepts explained with Informatica. (Thanks to Mark Rittman for pointing out a possible site for an evaluation copy - Mark, finding the eval copy and the eval license codes was a treasure hunt on edelivery.)

The series will cover the following topics:
Informatica Fundamentals (Part I and Part II)
Informatica Eval Installation
Informatica Admin Tasks
Informatica Designer Tasks
Informatica: getting familiar with Workflow Manager and Workflow Monitor
Other things to remember about Informatica
Things to remember in the ETL world (basic concepts)

If you are following this specific series - please drop in comments and let me know if the series helps you get familiar with the fundamentals of Informatica.

Tuesday, November 14, 2006

RMAN full db backup vs incremental level 0 backup

All these years I have believed that an incremental level 0 backup backs up all blocks in a datafile except the ones Oracle has never written to, and is pretty much the same as a full database backup, except that the level 0 incremental is part of an incremental strategy.

Having said the above, I would expect that after a NOLOGGING bulk load into a table I could take a level 0 incremental backup, and when that level 0 backup set is restored I should be able to access the data that was inserted with the NOLOGGING option.

We ran into a situation today..........

- some staging tables were loaded with the NOLOGGING option
- a level 0 incremental backup was taken after that
- the level 0 incremental backup set was restored to a test environment
- a select * from the table that was loaded with the bulk load option
returns ORA-26040: Data block was loaded using the NOLOGGING option

If a level 0 incremental actually copies all the data blocks, why would it care about something that was loaded with the NOLOGGING option before the backup? (And will a full database backup, as opposed to a level 0 incremental, get us past this problem?)

If you have any inputs, it will be great to know; in the meantime I will have to knock on support's doors as well.......
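For reference, a minimal sketch of the two backup flavors being compared (RMAN syntax; the tags are hypothetical):

```
RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE TAG 'level0_base';
RMAN> BACKUP DATABASE TAG 'full_db';
```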

Monday, November 13, 2006

the fine print always gets you…………

We were almost ready with our 10g RAC install on Solaris 10 and a NetApp filer, until I read some small print in the install guide:

“For Standard Edition RAC Installations, ASM is the only supported storage option for database files”

So while an NFS filesystem (a supported NetApp configuration) is supported for the cluster software, it is not supported for database files.

Also, while the Solaris Containers model seems to be the way to go (it helps you better manage your resources for different applications, giving an isolated environment for your applications), “RAC is not supported on a capped container model”.

More to come on configuring NAS-based files for use in an Automatic Storage Management disk group.

Friday, November 10, 2006

what’s up with ETL Tools……………….

Over the last few years I have worked on quite a few ETL projects: starting with Sagent in 2000, then Informatica, and now Oracle Warehouse Builder.

With Warehouse Builder I can pretty much download it from technet, set up a quick demo for users to demonstrate how it can be used as an ETL tool, and even let them try it before we get the entire sales force involved for purchase, etc. (I would like to get them out of the habit of doing major migrations with nothing but PL/SQL code, where maintenance becomes a nightmare.)
I have not seen a single ETL tool apart from Warehouse Builder that I can download and set up a quick demo with; for every other vendor product I would have to contact the vendor, set up a sales call, a demo, etc. We have the skills in-house and wish we could get our hands on these products, download them, and evaluate them ourselves.

get familiar with OPMN……..

Especially in the 10g Application Server world, I see a lot of PROD and DEV DBAs relying only on the EM console to manage their applications.
I personally prefer the command line interface; I think it is more robust and can come in very handy.
Oracle Process Manager and Notification Server (OPMN) - below is a quick overview of what you can do with opmnctl.
opmnctl lets you start/stop/reload/check the status of all your application server components:
cd $ORACLE_HOME/opmn/bin
a) opmnctl start: start the opmn daemon without starting opmn-managed processes for the local iAS instance.
b) opmnctl startall: start both the daemon and the managed server processes.
c) opmnctl stopall: stop both the daemon and the managed server processes.
d) opmnctl startproc ias-component=WebCache process-type=WebCache
e) opmnctl stopproc ias-component=WebCache process-type=WebCache
(start and stop individual Application Server components)
f) opmnctl validate opmn.xml: validates the XML syntax of opmn.xml.
g) opmnctl help: print a brief usage description.
h) opmnctl shutdown: quickly shut down the opmn daemon and opmn-managed processes for the local iAS instance. This request is similar to a stopall command but waits less time before initiating a forceful termination of the managed processes. After stopping all managed processes, the opmn daemon shuts itself down.

opmn.xml holds the iAS component-specific configuration information and is located in ORACLE_HOME/opmn/conf/opmn.xml.
For example, if your iAS component is a Reports Server:
ias-component id="rep server name" status="enabled" id-matching="false": defines the ias component
process-type id="ReportsServer" module-id="ReportsServices": the type of ias component
process-set id="rep_ogmapp4" restart-on-death="true" numprocs="1": lets you define run-time arguments for this component

ORACLE_HOME/opmn/logs/ contains the iAS component-specific logs.

With this release, OC4J grouping (also defined in opmn.xml) is allowed as well; basically, a group of OC4J instances can be managed together.

If you are clustered, then opmn.xml will need to be configured for dynamic discovery.

How to Deploy a J2ee Application using OC4J in 10g Application Server:

The below can be done using the EM console, but I prefer the command line dcmctl utility (Distributed Configuration Management, DCM) to deploy the J2EE application.

a) Create an OC4J component:
dcmctl createComponent -ct oc4j -co myComp
The above command should create a new OC4J component:
ias-component process-type
OC4J myComp
b) Remove an OC4J component:
dcmctl removeComponent -co myComp
c) List all the components:
dcmctl listComponents
Then:
cd $ORACLE_HOME/j2ee/mycomp/config
You should find the application.xml and data-sources.xml config files there. Add your data-source connection to data-sources.xml.
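As a sketch, a minimal data-source entry in data-sources.xml could look like the following (the driver class is the standard OC4J non-pooled example; the JNDI locations, credentials, and connect string are hypothetical and must match your environment; htestDS is the datasource name used in this example):

```xml
<data-source
    class="com.evermind.sql.DriverManagerDataSource"
    name="htestDS"
    location="jdbc/htestDSCore"
    ejb-location="jdbc/htestDS"
    connection-driver="oracle.jdbc.driver.OracleDriver"
    username="scott"
    password="tiger"
    url="jdbc:oracle:thin:@dbhost:1521:orcl"
/>
```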


d) Start your OC4J component "mycomp".
You should be able to see your new datasource htestDS listed when you use the EM console to look at the datasources for mycomp.

e) Deploy your ear file:
dcmctl deployApplication -file /var/home/oracle/test/j2ee/build/htest.ear -a htest -co myComp
Once the deployment is complete, the application can be accessed.

f) To start/stop components and undeploy:
dcmctl start -co myComp -d -v
dcmctl stop -co myComp -d -v
dcmctl restart -co myComp -d -v
dcmctl undeployApplication -a htest -co myComp
dcmctl removeComponent -co myComp

If you have not used the command line interface to manage your App Server environment, the above should get you started. To start and stop your EM agent, use emctl ($ORACLE_HOME/bin).

Thursday, November 09, 2006

Dynamic Switching on a single 10g Reports Server instance (Part2)

If you have configured your Reports Server for dynamic switching with the ENVID parameter, the next step is to set the ENVID for the report object before running it.

This can be done as below:
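The snippet itself did not make it into this post, but as a sketch, setting ENVID on the report object from a Forms trigger could look like this (the report object name MY_REPORT and the ENVID value test are hypothetical):

```
DECLARE
  repid   REPORT_OBJECT;
  job_id  VARCHAR2(100);
BEGIN
  repid := FIND_REPORT_OBJECT('MY_REPORT');
  -- pass ENVID as an "other" runtime parameter before running the report
  SET_REPORT_OBJECT_PROPERTY(repid, REPORT_OTHER, 'ENVID=test');
  job_id := RUN_REPORT_OBJECT(repid);
END;
```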

However, the ENVID is not something that we want hardcoded in our forms; it must be read from the application-specific Forms env file.

For more details on the env file:
Edit $ORACLE_HOME/forms90/server/formsweb.cfg
and point envFile to the application-specific env file in the application-specific configuration section, e.g.:
# Remittance (RMS) Testing Section
envFile=test.env

cd $ORACLE_HOME/forms90/server/
cp default.env test.env
Edit test.env to include your environment variables.
Now these variables can be read from your form using the TOOL_ENV.GETVAR built-in.
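As a sketch, reading such a variable inside the form (the variable name MY_APP_ENVID is hypothetical):

```
DECLARE
  v_envid VARCHAR2(100);
BEGIN
  -- reads MY_APP_ENVID from the env file configured for this application
  TOOL_ENV.GETVAR('MY_APP_ENVID', v_envid);
END;
```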

The env file can also include application-specific authentication information if required.
At this point you have configured your 10g Reports and Forms servers for multiple applications.

Wednesday, November 08, 2006

cannot get more interesting.......

So I am on a cross-platform migration and upgrade (both at one time):

9i RAC on Suse to 10g RAC on Solaris. At this point we are in the planning phase, getting all the details together for the migration path.

The upgrade path most likely will be: unracify, upgrade, and then do a cross-platform migration. I understand we need to do an endian conversion, as we are moving from Linux to Solaris (little- to big-endian). The conversion procedure for this is documented and all over the place.
Just when I thought I had tied all the information for this migration path together, I realized that I will be going from a 32-bit SLES (Intel) architecture to a 64-bit architecture on Solaris (SPARC). The endian conversion takes care of byte reordering, but what about going from 32-bit to 64-bit? I am still in the process of figuring out if there is anything additional I will need to do for this.....
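As a quick illustration of what the endian conversion itself is about (plain Python here, nothing Oracle-specific - just the byte reordering between the two platforms):

```python
import struct

value = 0x12345678

little = struct.pack('<I', value)  # byte order on 32-bit Intel (SLES)
big = struct.pack('>I', value)     # byte order on SPARC (Solaris)

print(little.hex())  # 78563412 - least significant byte first
print(big.hex())     # 12345678 - most significant byte first

# same bytes, opposite order - this is what the conversion has to undo
assert little == big[::-1]
```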

any thoughts????

Monday, November 06, 2006

it is strange how things change.................

Sometime in 2003 I had to do a RAC install on Suse Linux. I remember spending lots of time on the web and Metalink looking for information on Suse Linux, or on Linux at all, when all the default documents were for Unix. Most of the time I never found answers to my questions; configuration with Suse, RAC, and NetApp was tricky - things like: is OCFS supported or not on NetApp? which oracm versions? etc.

In an attempt to find answers I started opening TARs with support; the tech support team was supportive, but they were not able to give me answers immediately and had to do a lot of research on their part to answer some of my questions. I will not say that the process of finding out all the details was frustrating, but it was very tedious.

Today the tables have turned: I have to do a 10g RAC install on Unix (Solaris 10), and when I get on technet and Metalink I find a whole lot of information on Linux :) (I am not sure if it is a result of OOW2K6) while I am searching to find information on Unix.

It is strange or probably not so strange that things change .........I guess the only thing that always remains constant is "change"

Friday, November 03, 2006

any thoughts???...9i(RAC) Suse Upgrade to 10GRAC(Solaris)

This is the next project I am on: 9i RAC (Suse) to 10g RAC (Solaris) - both a version upgrade and a cross-platform migration at one time.

I have not given it too much thought yet, but this is the upgrade path I was thinking of (I will need to run it by support in any case to verify that it is supported):

1) Unracify the existing 9i RAC instance on Suse
2) Do a single-instance upgrade to 10g on Suse
3) Move the 10g database over to Solaris (with the new 10g migration options)
4) Racify the 10g instance on Solaris.

any thoughts/inputs on the above will be greatly appreciated.

the power of clearfsimport............

At my current customer site I have had to wear both hats - db development support and db production support - for different projects. ClearCase is used as the source control management tool here.

A lot of folks use Rational ClearCase LT Explorer on Windows to check files in and out. A few who hate to transfer files around will install the ClearCase client on Unix and work with
"cleartool checkout"
"cleartool checkin"
You can take it to the next step: with a huge working directory where you've worked on several scripts, checking each one of them out and back in can be cumbersome.

clearfsimport is a powerful utility: you simply do a recursive filesystem compare between your VOB and your working directory, and all the rest is taken care of. clearfsimport will determine which files are new elements (and add them to source control) and which have changed, and it will update the changed files, all without the need to check files out and in.
Works great - especially if you work on a lot of code and don't want to spend an entire day figuring out what to check in and check out.

syntax for clearfsimport:
cd to your VOB dir (note: you should be in your VOB dir while running clearfsimport)
clearfsimport -preview -recurse -follow -nsetevent your_working_dir .
(the above command will give you a preview of what is going to happen)

clearfsimport -recurse -follow -nsetevent your_working_dir .
will get your VOB in sync with your working dir.

Thursday, November 02, 2006

RSS Feed for Conferences & Events??

I would call it my sheer laziness that I don't want to visit websites anymore when I have all my RSS feeds set up in my RSS reader.

OUG Conferences & Events - has anyone seen an RSS feed for this? I just thought it would be nice to know all that's happening, and if anything interesting pops up, I can make some time for it.

Wednesday, November 01, 2006

Dynamic Switching on a single 10g Reports Server instance - a small bug with a small workaround

In order to set up multiple environments on a single 10g Reports Server instance, the following needs to be done -

Step 1 - Define The environment Characteristics (Environment Element)

In the Reports Server's configuration file, found in $ORACLE_HOME/reports/bin/"rep_servername".conf, add the defaultEnvIDs.
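As a sketch, an environment element referenced by such an ID in the .conf file could look like this (the id and the path are hypothetical):

```xml
<environment id="test">
   <envVariable name="REPORTS_PATH" value="/apps/test/reports"/>
</environment>
```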

While invoking the report, pass the parameter ENVID. Should work, right??? It works on Windows, but not on Unix and Linux.

The small workaround is:
1. Comment out the REPORTS_PATH entry in the $ORACLE_HOME/bin/reports.sh file.

2. If the REPORTS_PATH set in reports.sh is required, then create a new environment ID tag for it in the Reports Server's .conf file.

3. Modify the rwEng tag to default it to the default environment.