Search found 15603 matches
- Fri Jul 01, 2005 11:11 am
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Unable to RESET JOB.
- Replies: 8
- Views: 1891
- Fri Jul 01, 2005 10:51 am
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Find Last Record in File
- Replies: 13
- Views: 3366
Naveen, since a sequential file has no forward pointer, you can never know whether your next READ will reach the end-of-file. Depending upon what you want to do at the EOF, you have several approaches. As John mentioned, only a fixed-length record file will let you know the number of lines - and ...
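The distinction above can be sketched outside DataStage as well. The snippet below (Python used purely for illustration; the file names are made up) shows why a fixed-length-record file lets you compute the record count up front, while a variable-length file only reveals its last record once a read has already hit EOF.

```python
import os

def record_count_fixed(path, record_len):
    """For a fixed-length-record file the record count is simply
    file size / record length -- no reading required."""
    size = os.path.getsize(path)
    if size % record_len != 0:
        raise ValueError("file size is not a multiple of the record length")
    return size // record_len

def read_until_eof(path):
    """For variable-length (e.g. newline-delimited) records, EOF is only
    discovered by reading: the last record is known only in hindsight."""
    last = None
    with open(path) as f:
        for line in f:
            last = line.rstrip("\n")  # remember the most recent record
    return last                       # after the loop this was the final record
```

In the variable-length case the usual trick is exactly this "remember the previous record" pattern: process record N only once record N+1 (or EOF) has been seen.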
- Fri Jul 01, 2005 9:08 am
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: How to invoke or run a job from Webpage..
- Replies: 8
- Views: 2076
- Fri Jul 01, 2005 6:13 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Error in transformer - Input port 0 already connected
- Replies: 7
- Views: 4185
- Fri Jul 01, 2005 6:10 am
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: How to invoke or run a job from Webpage..
- Replies: 8
- Views: 2076
saadmirza, as DataStage has a command-line tool to invoke, control and report on jobs, all you need to do is invoke it from your web server's exit functionality. This differs depending upon your platform and package (Apache, MS, etc.) as well as configuration, but it is easily implemented once yo...
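As a rough sketch of the command-line route: the `dsjob` tool can start a job, and a web handler only needs to shell out to it. This Python fragment is an assumption-laden illustration, not a definitive recipe - the project and job names are hypothetical, and the `dsjob` binary's location and exact options should be checked against your installation's documentation.

```python
import subprocess

def build_dsjob_command(project, job, dsjob="dsjob"):
    """Assemble a dsjob invocation. With -run -jobstatus, dsjob waits for
    the job to finish and its exit code reflects the job's final status."""
    return [dsjob, "-run", "-jobstatus", project, job]

def run_datastage_job(project, job, dsjob="dsjob"):
    """Invoke dsjob and return (exit code, captured output). This is what a
    CGI script or web-framework view handler would call."""
    result = subprocess.run(
        build_dsjob_command(project, job, dsjob),
        capture_output=True, text=True,
    )
    return result.returncode, result.stdout
```

The same pattern works from any server-side technology that can spawn a process; the main design point is to check the exit code rather than parsing output.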
- Thu Jun 30, 2005 10:50 am
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Problems using multiple DSAttachJob commands - job hangs
- Replies: 3
- Views: 1666
HSBC, I went through debugging hell trying to solve the same issue a couple of months ago. The end conclusion is that DSAttachJob() will effectively hang for 30 minutes if you attempt to attach to your own job. It is a bug (even though one isn't allowed to attach to oneself per the handbook) th...
- Thu Jun 30, 2005 6:02 am
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Two concurrent lookups in Hash
- Replies: 4
- Views: 862
- Thu Jun 30, 2005 5:48 am
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Aggregator stage error
- Replies: 6
- Views: 1332
Snassimr, you mean you actually believed your DBA? If you tell your aggregator that the data is sorted, then it has to be sorted. The stage assumes that when a group change occurs in your aggregation column it can compute that group's result and doesn't have to store anything further. Once a row is out of ...
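The failure mode described above can be shown with a minimal streaming aggregator in Python (an illustrative sketch of the general sorted-input technique, not the Aggregator stage's actual code). On sorted input it is correct and needs only one group in memory; on unsorted input it silently emits fragmented, duplicated groups:

```python
def streaming_sum(rows):
    """Aggregate (key, value) pairs assuming rows arrive sorted by key.
    On a key change the current group is emitted and forgotten -- which is
    exactly why unsorted input produces duplicate, fragmented groups."""
    current_key, total = None, 0
    for key, value in rows:
        if key != current_key:
            if current_key is not None:
                yield current_key, total  # group change: flush and forget
            current_key, total = key, 0
        total += value
    if current_key is not None:
        yield current_key, total          # flush the final group
```

Fed `[("a", 1), ("b", 2), ("a", 3)]`, this yields "a" twice - no error, just wrong totals, which matches the symptom of lying to the stage about sortedness.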
- Thu Jun 30, 2005 5:46 am
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Two concurrent lookups in Hash
- Replies: 4
- Views: 862
- Thu Jun 30, 2005 4:02 am
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Unable to connect to DS server
- Replies: 14
- Views: 3973
- Thu Jun 30, 2005 12:23 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Runtime column propagation
- Replies: 8
- Views: 2343
- Thu Jun 30, 2005 12:17 am
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Unable to connect to DS server
- Replies: 14
- Views: 3973
Ray/Others, do you know how this problem comes about - something to do with the user groups? I have seen the problem before (years ago) and understand how to fix it, but I would like to know what causes it. Doesn't the developer role normally get granted SQL privileges? I looked at our file and the use...
- Thu Jun 30, 2005 12:12 am
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Loading only values of a particular attribute value
- Replies: 4
- Views: 847
- Wed Jun 29, 2005 11:36 am
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: error when reading sequential file
- Replies: 2
- Views: 630
- Wed Jun 29, 2005 11:35 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Px DataSet information
- Replies: 6
- Views: 2052
Leo_t_nice, I put together some BASIC code to gather all the ls -l information from a search of the whole system into a hash file, then searched the TMP folders and removed the valid dataset data files from the list. Found over 50 GB of junked storage today! But my method is kludgy and ungainly and I'm still hoping for a ...
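The core of that kludge - scan the scratch/resource directories, then subtract the files that a dataset descriptor still references - can be sketched in a few lines. Python is used here for illustration only (the original was DataStage BASIC), and building the `referenced` set from actual dataset descriptors is left out; treat this as a sketch under those assumptions:

```python
import os

def find_orphans(scratch_dirs, referenced):
    """Walk the given scratch/resource directories and report every file
    that no dataset descriptor references -- candidates for reclaiming
    disk space. `referenced` is a set of absolute file paths."""
    orphans = []
    for root_dir in scratch_dirs:
        for dirpath, _dirs, files in os.walk(root_dir):
            for name in files:
                path = os.path.join(dirpath, name)
                if path not in referenced:
                    orphans.append(path)
    return orphans
```

Reporting candidates rather than deleting them directly is the safer design: a human (or a second pass) can confirm before anything is removed.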