
Posted: Tue Feb 01, 2005 11:20 am
by kcbland
scottr wrote:Hi Bland, what's this "database rollback segment issues, snapshot too old" error? One of my jobs is aborting with this message, and if I reset it and run it again, it runs fine without failing.

thanks
You're hijacking this thread. Start a new one so we can get more information about your OS, DS release, etc. As for your problem: your insert/update took too long, and Oracle ran out of rollback (undo) space holding the before-images of your rows. Running it again successfully means the job finished within the space available at that moment. Start a new thread if you want more information.
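[Editor's note] For readers hitting the same "snapshot too old" error: a minimal sketch of a check a DBA could run. UNDO_RETENTION (seconds) is a standard Oracle init parameter; the file name and sqlplus invocation below are illustrative assumptions about your environment. The script only writes the query to a file; running it requires a live database and credentials.

```shell
# Hypothetical sketch: write a query to inspect Oracle's undo retention window.
# If a long-running job outlives this window, "snapshot too old" can occur.
cat > check_undo.sql <<'SQL'
-- How long Oracle tries to keep undo (before-images) available, in seconds.
SHOW PARAMETER undo_retention
SQL

# Run it later with, e.g. (requires a live database and credentials):
#   sqlplus -s user/password@tnsname @check_undo.sql
```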

Posted: Tue Feb 01, 2005 11:38 am
by rrcreddy4
Hi,

I tried one thing: while loading into Oracle using OCI9, I also copied the data into a sequential file at the same time, to check whether there is an issue with the data. When I do that, I have no issue.

Does that suggest anything?

How do I turn off row buffering?

RC

Posted: Tue Feb 01, 2005 11:53 am
by kcbland
rrcreddy4 wrote:Hi,

I tried one thing: while loading into Oracle using OCI9, I also copied the data into a sequential file at the same time, to check whether there is an issue with the data. When I do that, I have no issue.

Does that suggest anything?

How do I turn off row buffering?

RC

Ahhh. Go to Job Properties and the Performance tab. Deselect "Use project defaults" and clear any option that is selected. Remove the text file, then recompile and run your job. I think your problem is related to row buffering; we shall see.

Posted: Tue Feb 01, 2005 12:20 pm
by rrcreddy4
Hi,

I tried deselecting row buffering and still got the error below.

Project:smqa (whseqa)
Job name:LdTouchpointFactDICatalogs
Event #:217
Timestamp:2/1/2005 1:18:36 PM
Event type:Info
User:qdssmqa
Message:
From previous run
DataStage Job 237 Phantom 24621
Abnormal termination of DataStage.
Fault type is 10. Layer type is BASIC run machine.
Fault occurred in BASIC program *DataStage*DSR_LOADSTRING at address 6ec.

RC

Posted: Tue Feb 01, 2005 12:30 pm
by kcbland
Make sure there are no runaway processes out there. Do a "ps -ef | grep LdTouchpoint" to see if any pieces and parts are interfering. If they are, kill them to get rid of them.
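[Editor's note] The check above can be scripted; a minimal sketch (the `extract_pids` helper and the sample usage are illustrative, not from the original post):

```shell
# extract_pids: read `ps -ef`-style lines on stdin and print the PID column
# (field 2) for lines matching a pattern, skipping the grep process itself.
extract_pids() {
  grep "$1" | grep -v grep | awk '{print $2}'
}

# Typical use against the live process table (job name from this thread):
#   ps -ef | extract_pids LdTouchpoint
# then, for each PID printed:
#   kill <pid>      # use kill -9 only if the process ignores SIGTERM
```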

Posted: Tue Feb 01, 2005 12:37 pm
by rrcreddy4
Hi,

I don't see any threads.

RC

Posted: Wed Feb 02, 2005 9:16 am
by rrcreddy4
Any suggestions for my issue?

RC

Posted: Wed Feb 02, 2005 9:30 am
by Sainath.Srinivasan
Check whether any other process in your system or organization reads files from that directory, works on them, or moves them around. A file can behave unpredictably if another program modifies or moves it while your job is still reading it.

Rather than loading into Oracle, try writing to a sequential file only, to confirm that there is no problem with the link itself.

Also check whether you have any code in your job control that may, by chance, be coming into effect.

Posted: Wed Feb 02, 2005 1:29 pm
by rrcreddy4
I checked the link; it is fine.

When I do a full insert into the Oracle table I am fine; all 1M rows go in. The only issue is when I choose the "Insert or Update" option for Oracle.

The only unusual thing about the Oracle table is that I have a function-based unique constraint on it, i.e. during each insert it checks the uniqueness of the data based on my function. I don't see how that differs from having a primary key on the table. You may ask why I can't just put a primary key on this table: I really don't have one. I only want uniqueness under a specific condition, which is why I went for a condition-based unique constraint.

Please advise.

RC
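[Editor's note] For readers unfamiliar with the construct described above, a hypothetical illustration of a "condition-based" unique constraint: a function-based unique index in Oracle enforces uniqueness only when a condition holds, because rows whose index expression evaluates to NULL are not indexed. All table and column names here are invented for this sketch. The script only writes the DDL to a file; applying it requires a live database.

```shell
# Hypothetical DDL: uniqueness of catalog_id is enforced only for rows with
# active_flag = 'Y'; other rows yield NULL and are exempt from the index.
cat > conditional_uk.sql <<'SQL'
CREATE UNIQUE INDEX touchpoint_uk ON touchpoint_fact (
  CASE WHEN active_flag = 'Y' THEN catalog_id END
);
SQL

# Apply later with, e.g. (requires a live database and credentials):
#   sqlplus -s user/password@tnsname @conditional_uk.sql
```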

Posted: Wed Feb 02, 2005 1:33 pm
by kcbland
Drop the index and see if you still have load problems. If you do, then you get to log an issue with Ascential technical support. We're running out of options here, so if you can show that a function-based index causes a problem DataStage cannot handle, feed that back to Ascential and you have your answer.

The process of elimination continues...have fun...

Posted: Wed Feb 02, 2005 2:31 pm
by Sainath.Srinivasan
Did you by any chance give any file related to the Oracle load (such as a bad/discard file) the same name as the data file you created for loading?

Posted: Wed Feb 02, 2005 2:40 pm
by rrcreddy4
I have verified that too.

Everything is pretty much OK.

I also removed the function based unique index on the table and created a composite primary key and started loading and got the same error.

I deleted all the files and started reloading; let's see.

RC

Posted: Wed Feb 02, 2005 4:03 pm
by rrcreddy4
Has anyone gotten a DSD.SEQClose error?

I am getting this error consistently.

Any help is appreciated.

RC

Posted: Wed Feb 02, 2005 4:10 pm
by kcbland
The close error is from the link process reading the sequential load file. An abnormal termination means something tragic occurred and the controlling DataStage process could not recover the remaining job processes. In other words, the sequential-file close error message is related to the job blowing up, not the reason the job is blowing up.

If your job is simply SEQ --> XFM --> OCI, then put a reject link on the XFM stage to capture rejected rows. Turn off all buffering. Set your array size to 1 and the commit count to 1, and have your DBAs watch the loading of the table. There's not much left for us to do other than suggest trying everything.

There's something, a routine, a function, a trigger, something that is causing this. You have to eliminate all variables and find it. Sorry.

Posted: Thu Feb 03, 2005 9:55 am
by rrcreddy4
Hi,

I have 3 warning-message fields that I am not using in the transformer while loading into OCI9.

When I remove those 3 columns entirely, the load works fine. That's a surprise: I have a similar job with 3 unused fields coming from the sequential file, and it still loads.

Is there anything wrong with the way sequential files behave?

RC