Search found 39 matches
- Wed Apr 16, 2008 6:01 am
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: difference between two timestamp fields
- Replies: 1
- Views: 1710
difference between two timestamp fields
Hello again. I have to compare a timestamp value (let's call it T1) available in a hashed file with the current timestamp (T2). If (T2 - T1) > 4 hours, then write the record to the output file. Is there any function available that will convert timestamps to internal storage format? If I am to use Iconv, I wi...
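One way to answer the question above, as a minimal DataStage BASIC sketch: convert each timestamp to seconds via Iconv, then compare. The "YYYY-MM-DD HH:MM:SS" layout (and hence the conversion codes) is an assumption, not stated in the post; only the 4-hour threshold comes from it.
Code: Select all
```
* Sketch only: assumes T1 is a "YYYY-MM-DD HH:MM:SS" string read from
* the hashed file; the conversion codes depend on that layout.
T1Secs = Iconv(Field(T1, " ", 1), "D-YMD[4,2,2]") * 86400 + Iconv(Field(T1, " ", 2), "MTS")

* Current timestamp (T2) in the same internal units:
* Date() is days in internal format, Time() is seconds since midnight.
NowSecs = Date() * 86400 + Time()

If NowSecs - T1Secs > 4 * 3600 Then
   Ans = 1   ;* older than 4 hours: write the record to the output link
End Else
   Ans = 0
End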
- Fri Apr 11, 2008 5:08 am
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Error writing to pipe: Interrupted system call
- Replies: 5
- Views: 3981
- Fri Apr 11, 2008 4:43 am
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Error writing to pipe: Interrupted system call
- Replies: 5
- Views: 3981
- Fri Apr 11, 2008 1:21 am
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Error writing to pipe: Interrupted system call
- Replies: 5
- Views: 3981
- Thu Apr 10, 2008 9:01 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: insert to sqlserver
- Replies: 5
- Views: 2231
Ratna, can you give more details, please? Is this happening for all columns or only for one particular column? I am assuming you have already checked and confirmed that the datatype of the source (Oracle) and the target (SQL Server) is the same for the column(s) showing this behavio...
- Thu Apr 10, 2008 12:53 am
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Error writing to pipe: Interrupted system call
- Replies: 5
- Views: 3981
Error writing to pipe: Interrupted system call
Hello, I have a server job performing a Teradata MultiLoad using the TDMLoad stage. Source data is in a flat file. The job has been running fine for months, scheduled at a frequency of every 10 minutes. Recently, however, every once in a while it aborts with the message below: Could not write row to data...
- Wed Apr 02, 2008 1:34 am
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Delete specific records from hashfile / hashed-file
- Replies: 18
- Views: 10711
If the freed space is going to be reclaimed by the hashed file, that still serves my purpose, since my hashed file gets appended to every hour or so. I think I should stick with this approach of deleting records 'physically' instead of logical deletes, where there would be no reclaimable space?
- Tue Apr 01, 2008 8:50 am
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Delete specific records from hashfile / hashed-file
- Replies: 18
- Views: 10711
- Tue Apr 01, 2008 8:06 am
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Delete specific records from hashfile / hashed-file
- Replies: 18
- Views: 10711
Thanks for the hint :)
For the record, the query needed to be:
Code: Select all
DELETE FROM "images_metadata_hashed" WHERE ("doc_key" = ? AND "event_key" = ?);
Can someone throw more light on whether or not I will be able to reclaim space this way?
- Tue Apr 01, 2008 7:13 am
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Delete specific records from hashfile / hashed-file
- Replies: 18
- Views: 10711
I had to rule that out because the size of my hashed file would keep increasing with logical deletes. My process might encounter about a million records a day. Now, when I try deleting, I am specifying the query below (user-defined), but it is not working. What is the correct way of specifying the ...
- Tue Apr 01, 2008 6:44 am
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Delete specific records from hashfile / hashed-file
- Replies: 18
- Views: 10711
- Tue Apr 01, 2008 6:24 am
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Delete specific records from hashfile / hashed-file
- Replies: 18
- Views: 10711
- Tue Apr 01, 2008 6:06 am
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Delete specific records from hashfile / hashed-file
- Replies: 18
- Views: 10711
Did you mean the UniVerse stage? I tried using the stage but am not sure what to fill in for the Data Source Name, etc. I tried filling in the name of the hashed file (images_metadata_hashed), but it didn't work. I did a View Data after putting in a SELECT SQL and got the following error: JB_SVTT_Del...
- Tue Apr 01, 2008 3:22 am
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Delete specific records from hashfile / hashed-file
- Replies: 18
- Views: 10711
Delete specific records from hashfile / hashed-file
Hi there,
I have a flat-file A and a hashed file B. Records in A are a subset of records in B. I need to delete those records in the hashfile B that are present in flat-file A. Can anyone help me on how to go about it in a server job/routine?
Thanks,
Kaus
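One way to approach the question above from a server routine, as a hedged sketch (not a tested job): read each key from flat file A with sequential-file statements and delete the matching record from hashed file B. The file path and the one-key-per-line layout are assumptions; only the file names A and B come from the post.
Code: Select all
```
* Sketch: delete from hashed file B every key listed in flat file A.
* The path "/data/flatfile_A" and single-column key are assumptions.
OpenSeq "/data/flatfile_A" To F.A Else Call DSLogFatal("Cannot open A", "DeleteRoutine")
Open "B" To F.B Else Call DSLogFatal("Cannot open hashed file B", "DeleteRoutine")

Loop
   ReadSeq Key From F.A Else Exit   ;* end of the flat file
   Delete F.B, Key                  ;* removes the record; no-op if absent
Repeat

CloseSeq F.A
Close F.B
```
If the hashed file lives in a directory rather than the project account, OpenPath with the directory path can be used in place of Open.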
- Wed Mar 05, 2008 4:43 am
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: job status 99
- Replies: 3
- Views: 2089