Is that your final output, or will you be using this file for further processing? If it's for further processing, add the extra column to the second file and use a Funnel stage to combine the two files. If it's your final output, just concatenate them at the Unix level with cat file1 file2 >...
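If it is the final output, the Unix-level concatenation is a one-liner. A minimal sketch; the file names and contents below are placeholders for your actual job outputs:

```shell
# Stand-ins for the two job output files (placeholder contents)
printf 'row1\nrow2\n' > file1
printf 'row3\n' > file2

# Concatenate them at the Unix level into a single final file
cat file1 file2 > combined_output
```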
Maybe that particular column's derivation is equivalent to what you have specified in red. [shrug] How are you parsing the third timestamp? Is it via a routine? Make sure you are calling that routine in the third timestamp's derivation.
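If the parsing lives in a routine, the third timestamp's derivation should invoke that same routine rather than repeating the logic inline. A sketch of what the Transformer derivation would look like; the routine name ParseTimestamp and link/column names are hypothetical:

```
ParseTimestamp(in_link.third_ts_string)
```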
If you check that option in a restartable job sequence, that job activity will execute every time the sequence is restarted after a failure, regardless of whether it completed before. For example, take a job that builds hashed files: you need to rerun that job on every restart because new lookup values keep pouring in, so you check the option of...
The size of the hashed file does matter. By default hashed files have a limit of 2.2 GB; you can break that barrier by making the file 64-bit. Search this forum for how to do it. Is the SQL query so long that it doesn't work in the user-defined SQL space? Try running the SQL from DataStage and see the outcome. Get yo...
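For reference, the usual recipe (the forum threads cover the details) is to resize the hashed file to 64-bit addressing from the uv shell in the project directory. A sketch only; MyHashedFile is a placeholder name and the exact syntax may vary by release:

```
RESIZE MyHashedFile * * * 64BIT
```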
Do you have MKS Toolkit installed, or a third-party tool like Samba that makes Unix visible to Windows and vice versa? That might have something to do with it.
As Arnd pointed out, the rows/sec figure is meaningless and platform-dependent; in other words, it all depends on what's under the hood. If this concerns you and both your source and your lookup are in the same database, just pass a SQL command to do the lookup. That will be much faster, provided such a move is ...
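Pushing the lookup into the database usually means turning it into a join in the source query, so only the enriched rows travel through the job. A minimal sketch; all table and column names here are invented for illustration:

```sql
-- Join the source to the lookup table inside the database,
-- replacing a DataStage lookup stage. Names are placeholders.
SELECT s.key_col,
       s.measure,
       l.lookup_value
FROM   source_table s
LEFT JOIN lookup_table l
       ON l.key_col = s.key_col;
```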
Load the output to a file, then in a second job do a straight load from the file into the bulk loader. See if the error persists.
Is that the only error message you are getting? What messages are in the message file? Check whether there are any additional messages there.
I don't know whether the Enterprise SQL Server stage has that, then. I can only confirm it tomorrow after looking into that stage; perhaps someone else who has access to it can shed more light. In the meantime, have a simple job before your job that just truncates the table. You can pass that in user-defined SQL.
DELETE is a logged activity. That means if you have x rows, the database writes x entries to its log, which takes its own time and in turn adds time to your process. Instead, simply do a TRUNCATE TABLE in the before-SQL tab and insert without clearing. That will help.
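A minimal sketch of the before-SQL, with a placeholder table name:

```sql
-- Runs once in the stage's before-SQL tab; TRUNCATE is minimally
-- logged, unlike a row-by-row DELETE. target_table is a placeholder.
TRUNCATE TABLE target_table
```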