Search found 4992 matches
- Thu Jan 06, 2005 12:15 pm
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: moving image files (.Tiff, .jpg, .gif etc) with DataStage
- Replies: 3
- Views: 3115
Consider using a token, like a fully qualified filename, to "move" the data around. By not incurring the overhead of juggling the file, you can manipulate information easily, leaving the large objects (BLOBs, CLOBs, etc.) sitting in their native format. During the final load into the target, use the native lo...
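The token approach above can be sketched as follows: the staging step carries only a path string through the job, and the large object is touched once, at final load. This is an illustrative sketch, not DataStage code; the field names (`image_id`, `path`) and helper names are hypothetical.

```python
import shutil
from pathlib import Path

def stage_image_metadata(rows):
    """Pass a fully qualified filename as a token instead of the binary blob.

    Each row carries only lightweight metadata plus the path; the image
    itself stays on disk in its native format until the final load.
    (Sketch only -- field names are hypothetical.)
    """
    return [
        {"image_id": r["image_id"], "path": str(Path(r["path"]).resolve())}
        for r in rows
    ]

def final_load(row, target_dir):
    """Only at load time is the large object actually copied to the target."""
    dest = Path(target_dir) / Path(row["path"]).name
    shutil.copy2(row["path"], dest)
    return dest
```

Everything between staging and load manipulates a short string, so throughput is unaffected by the size of the images themselves.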
- Thu Jan 06, 2005 11:55 am
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: moving image files (.Tiff, .jpg, .gif etc) with DataStage
- Replies: 3
- Views: 3115
DataStage is not for moving binary data. Think about it: a single row of data could contain many KB or even MB. The data movement rate would be horrendous even if you could simply stream from ODBC connection to ODBC connection. There are tools for doing this, and they're not ETL tools. Ascential use...
- Thu Jan 06, 2005 10:42 am
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Date Comparison in a Constraint
- Replies: 3
- Views: 936
Your statement is completely valid. It will return @TRUE when WAYBILL_DATE is greater than the current system date minus 21 days. The only place for error is in the content of WAYBILL_DATE. Could you have leading spaces that are fouling the substring function? Maybe it's only getting a partial date...
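The failure mode described above (a valid constraint tripped up by dirty field content) can be sketched like this: trim the field before parsing, since leading spaces are a common reason a substring-based date parse returns garbage. This is a hedged sketch, not DataStage BASIC; it assumes the field arrives as `YYYY-MM-DD` text.

```python
from datetime import date, timedelta

def within_last_21_days(waybill_date: str) -> bool:
    """Mimic the constraint: WAYBILL_DATE > today minus 21 days.

    Strips leading/trailing spaces first -- untrimmed content is the
    likely reason the real constraint misbehaves.
    (Sketch only; assumes an ISO 'YYYY-MM-DD' text field.)
    """
    cleaned = waybill_date.strip()
    parsed = date.fromisoformat(cleaned)
    return parsed > date.today() - timedelta(days=21)
```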
- Thu Jan 06, 2005 9:37 am
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Loading data with no incremental jobs -Urgent Info Required
- Replies: 16
- Views: 5846
- Wed Jan 05, 2005 11:31 pm
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Project Directory Growing Big
- Replies: 10
- Views: 5133
Are you sure log files shrink? Clearing a job log actually deletes rows one at a time; it does not use a truncate-style statement. Therefore, if a log file is a DYNAMIC hash file, the only way to shrink it completely is either to issue a "CLEAR.FILE" command to recover it back to the minimum modulus and em...
- Wed Jan 05, 2005 11:05 pm
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: is there any certifications in Data Satge
- Replies: 10
- Views: 2183
Not to argue, but Ascential's motivation for certification can also be a means to squeeze independents like myself. I can tell you this: no Ascential review board is going to accredit me, nor will I prostrate myself before them and humbly beg for a blessing. I've run in those circles, I know where I s...
- Wed Jan 05, 2005 3:41 pm
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: First occurence out of a group of records.
- Replies: 11
- Views: 2962
Just in case it matters, Carter's solution is simple and elegant, but you have to deal with the fact that the data in the hash file is now effectively randomized. If you need to preserve the original selection order, but only take the first of each group, then you MUST re-sort the data coming out of t...
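The "first of each group while preserving the original selection order" requirement mentioned above can be sketched outside DataStage like this: a single ordered pass that keeps the first row seen per key, which is exactly the property a hash-file write destroys. The function name and row shape are hypothetical.

```python
def first_per_group(rows, key):
    """Keep only the first row of each group, preserving the original
    input order -- the property a hash-file lookup randomizes away.
    """
    seen = set()
    out = []
    for row in rows:
        k = key(row)
        if k not in seen:      # first time this group key appears
            seen.add(k)
            out.append(row)    # rows stay in their original order
    return out
```

Because the pass is order-preserving, the input must already be in the desired selection order; if it is not, sort it first.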
- Wed Jan 05, 2005 11:20 am
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: First occurence out of a group of records.
- Replies: 11
- Views: 2962
Yes, you can place data into a table in an ordered manner. But retrieving data without explicit ordering does not guarantee it is returned to you in the order it was entered. I misspoke: you CAN accelerate loading performance by having data already ordered and partitioned in order to direct the data mo...
- Wed Jan 05, 2005 10:54 am
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: wrong hash lookup..
- Replies: 11
- Views: 2442
- Wed Jan 05, 2005 10:28 am
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: error writing into hash file..
- Replies: 7
- Views: 1375
You are attempting to fit 11 pounds of apples into a 10 pound apple box. Either remove unnecessary columns from the hash file to reduce the amount of data going into the file, or switch to creating it as 64BIT. There is a hard ceiling on 32BIT files; there's no way to squeeze more into that contain...
- Wed Jan 05, 2005 10:25 am
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: First occurence out of a group of records.
- Replies: 11
- Views: 2962
First of all, tables do not store data ordered. 5000 rows means you can do it any way you want. But you will need the data ordered EXPLICITLY using SQL. There is no debate on this matter. Per your requirement NOT to use SQL to give you the first row in each ordered group, your pure DS solution NOT u...
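The point above (only an explicit ORDER BY guarantees ordering) can be sketched with an in-memory SQLite table: the insertion order of the groups is irrelevant, and the first row per group is defined purely by the explicit ordering in the query. Table and column names are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (grp TEXT, val INTEGER)")
# Insert in a deliberately scrambled order.
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [("a", 30), ("b", 20), ("a", 10), ("b", 5)])

# Explicit ordering is the only guarantee: take the first row of each
# group under an explicit ordering of val, regardless of insert order.
rows = conn.execute("""
    SELECT grp, MIN(val) AS first_val
    FROM t
    GROUP BY grp
    ORDER BY grp
""").fetchall()
```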
- Wed Jan 05, 2005 10:19 am
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: error writing into hash file..
- Replies: 7
- Views: 1375
First of all, are you attempting to put 500 million rows into a hash file? If so, you had better contemplate a better solution. Just figure out how much raw character storage is required: 500 million rows of 20 characters per row puts you at roughly 10 GB before any hash file overhead. Hash files can hold that, but is it the solution I think...
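The back-of-the-envelope sizing above is simple multiplication; a quick sketch (helper name is hypothetical, and this counts raw single-byte characters only, ignoring hash file overhead):

```python
def raw_storage_bytes(rows: int, chars_per_row: int) -> int:
    """Back-of-the-envelope raw character storage for a hash file,
    assuming one byte per character and no structural overhead."""
    return rows * chars_per_row

# 500 million rows at 20 characters each:
total = raw_storage_bytes(500_000_000, 20)   # 10_000_000_000 bytes, ~10 GB
```

Real hash files carry modulus, group, and key overhead on top of this, so the actual file would be larger still.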
- Wed Jan 05, 2005 9:54 am
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: First occurence out of a group of records.
- Replies: 11
- Views: 2962
Lots of ways. You have to give more information, like: are you processing thousands or millions of rows? Is your data sorted already, or are you sorting it? Is it coming to you in a file, or is it in a table? You can see that we can go different routes on solutions: sorting, aggregating, SQL, unix c...
- Wed Jan 05, 2005 9:50 am
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: error writing into hash file..
- Replies: 7
- Views: 1375
- Wed Jan 05, 2005 9:48 am
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: wrong hash lookup..
- Replies: 11
- Views: 2442
From a private message: "I already fixed the weekly load. Here is the correct scene: the fact table has 'claim_key' as the key column, and a 'part_key' column with wrong values; the wrong hash file has 'part_key' and 'part_no' as fields, and the correct hash file has 'part_key' and 'part_no' as fields." Af...