Search found 15603 matches

by ArndW
Wed Aug 02, 2006 4:43 am
Forum: IBM® Infosphere DataStage Server Edition
Topic: Timeout parameter
Replies: 6
Views: 6370

What Oracle connection method are you using? Is it ODBC or an Oracle stage? I think the timeout might occur when going through ODBC and not when accessing using the built-in stages. I know I've had extremely long query times before without any timeouts, so this isn't an insurmountable problem. Perhap...
by ArndW
Wed Aug 02, 2006 4:38 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Error While running multiple instances
Replies: 30
Views: 11242

Krish, you can find out the size of your /tmp directory by issuing a "df -g /tmp". This will show (in gigabytes) how much space you have in total and how much is available. Your DataStage scratch directories are defined in your APT_CONFIG file so I can't help you with an exact command, you'll have to check that...
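As a sketch of that check: the `df -g` form quoted above is AIX syntax, and both the Linux fallback and the `scratchdisk` grep are assumptions about the usual config layout, not something from the post.

```shell
# Free space in /tmp; `df -g` is the AIX form quoted in the post, the `-h`
# fallback is the common Linux/GNU equivalent (an assumption here)
df -g /tmp 2>/dev/null || df -h /tmp

# The scratch directories live in the APT configuration file; grepping for
# "scratchdisk" entries is an assumption about its usual layout
[ -n "$APT_CONFIG_FILE" ] && grep -i scratchdisk "$APT_CONFIG_FILE" || true
```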
by ArndW
Wed Aug 02, 2006 4:22 am
Forum: IBM® Infosphere DataStage Server Edition
Topic: Jobs execution is very slow
Replies: 1
Views: 460

Your client connection to the server doesn't impact job runtime performance. If your source and target data for the jobs are on machines connected through a slow SSL protocol then it might affect performance. You have just asked the car equivalent of the question "My car is running slower this week. Why?" ...
by ArndW
Wed Aug 02, 2006 4:07 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Error While running multiple instances
Replies: 30
Views: 11242

...file system full...

You resolve this by adding space or redesigning your job to use less of it.
How big is your /tmp and your scratch space?
by ArndW
Wed Aug 02, 2006 4:06 am
Forum: IBM® Infosphere DataStage Server Edition
Topic: ds_seqopen() - error in 'open()' on named pipe read links
Replies: 8
Views: 3714

The timeout settings in the sequential file {pipe} stage should be the default of 60; I would avoid using 0. I don't know if that change will directly affect your job but it should be done. The unhandled interrupt in ds_seqopen() might have something to do with the timing or having that pipe still a...
by ArndW
Wed Aug 02, 2006 3:57 am
Forum: IBM® Infosphere DataStage Server Edition
Topic: Lookup table not returning rows
Replies: 3
Views: 691

If you are inserting rows and reading them in the same job, make sure your commit size is 1 so that changes are immediately found. If you still aren't getting matches you will need to put in some debugging information (or use the designer interactive debugger directly) to see exactly which lookups a...
by ArndW
Wed Aug 02, 2006 1:55 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Constrain is not working in Transformer
Replies: 6
Views: 1520

Re: Constrain is not working in Transformer

...How to check for hidden characters? You need to tell us what you consider "hidden" characters to be. You can do a LEN(TRIM(In.Column)) on a CHAR(n) field to see how many non-padded characters are in the field, assuming you've left your padding to be spaces. But once you have that value you will need...
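The LEN(TRIM(...)) idea can be illustrated outside DataStage. This is a small Python sketch (not BASIC, and the sample value is invented) that compares padded and trimmed lengths and then prints each character's code point so "hidden" characters become visible:

```python
# Hypothetical sample value: a CHAR(6)-style field whose padding includes
# a space, a tab, and another space ("hidden" trailing characters)
field = "ABC \t "

trimmed = field.rstrip()          # rough analogue of TRIM on trailing padding
print(len(field), len(trimmed))   # 6 3

# Print each character's code point so non-printable characters show up
for ch in field:
    print(repr(ch), ord(ch))
```

In a real job the second step would be done with SEQ() or a similar BASIC function rather than Python's ord().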
by ArndW
Wed Aug 02, 2006 1:46 am
Forum: IBM® Infosphere DataStage Server Edition
Topic: ds_seqopen() - error in 'open()' on named pipe read links
Replies: 8
Views: 3714

I would leave out the step that removes the pipes (as a test) and see if the problems persist. Also, did you change the timeout and size settings for the pipes in the jobs (I recommend leaving them unless your row size is huge and only a couple of rows would fit into the buffer space).
by ArndW
Wed Aug 02, 2006 1:40 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Job aborting when running with large data sets
Replies: 7
Views: 2286

What stage is "TransUpsert"? If it is the Sybase OC stage then you should, in addition to what kumar has already suggested in checking your scratch space, check your DB logs for unexpected errors.
by ArndW
Wed Aug 02, 2006 1:37 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: DataStage Jobs failure due to Broken Pipe
Replies: 5
Views: 15316

All PX Jobs are Failing due to Broken pipe Error. Usually broken pipes are not the cause of problems but the most visible symptom. Something is causing your processes to fail; their side of the pipe is then closed down, and the process on the other side reports this as an error. There s...
by ArndW
Wed Aug 02, 2006 1:31 am
Forum: IBM® Infosphere DataStage Server Edition
Topic: Autoinstall DSjobs anf folder
Replies: 2
Views: 541

Hello chowmunyee, there is no such beast out there at the moment. Installing the client software goes through the IBM/Ascential program, and loading jobs into a project requires the use of the client software or a shell call on the server machine.
by ArndW
Wed Aug 02, 2006 1:27 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Hash File reading
Replies: 1
Views: 734

Yes, you can have {n} readers of the same hashed file at the same time. You can also have {n} writers as well - hashed files are just like databases in that there is a locking mechanism in place to control concurrency.
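The locking idea can be sketched in plain Python. This illustrates serialized concurrent writers in general, not the hashed-file API itself; the names are invented:

```python
import threading

# Shared structure standing in for one hashed-file record; the lock plays
# the role of the record-level lock described above
lock = threading.Lock()
counts = {}

def writer(key):
    with lock:                                 # writes are serialized
        counts[key] = counts.get(key, 0) + 1

# Ten concurrent writers all updating the same "record"
threads = [threading.Thread(target=writer, args=("row",)) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counts["row"])   # 10: every concurrent write landed safely
```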
by ArndW
Tue Aug 01, 2006 9:15 am
Forum: IBM® Infosphere DataStage Server Edition
Topic: can you explain with this ArchiveFiles routine
Replies: 5
Views: 1591

kunj201, are you asking what this program does?
by ArndW
Tue Aug 01, 2006 7:41 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: CFF Record
Replies: 6
Views: 1423

Since you have the data, why don't you just try to read & write it? You won't have any issues with record lengths of 160Kb!
by ArndW
Tue Aug 01, 2006 7:38 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Insert the records in to the tables thruogh Routines?
Replies: 6
Views: 1235

I still think it is a bad idea to do it the way you intend. How about doing it a bit differently: in your BASIC routines, write the logging information to a text file. Then, as an after-job routine or other call, start a simple DataStage job that reads this text file and then loads it ...
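The pattern suggested above can be sketched like this (Python rather than BASIC; the file name, tab-separated format, and helper name are all assumptions, and the follow-up DataStage job would then read and load this file):

```python
from datetime import datetime

def log_event(message, path="routine_log.txt"):
    """Append one timestamped line to a plain text log file.

    A separate, simple job later reads this file and loads it into the
    database; keeping the routine itself to a file append avoids doing
    database work inside the routine.
    """
    with open(path, "a") as fh:
        fh.write(f"{datetime.now().isoformat()}\t{message}\n")

log_event("job started")
```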