ray.wurlod wrote: If you initialize a stage variable it doesn't actually need a derivation. If there is no derivation, the initial value remains accessible.
Never noticed, never tried. Makes things easier, though.
Decide whether you want to do a select max on the table or store the last number after each run. I recommend always doing a select max prior to running your job, as that will reliably work and you don't have to worry about synchronizing the stored max with the actual value in case of data corrections or load issu...
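A minimal sketch of the "select max before each run" approach, using sqlite3 purely for illustration; the table and column names here are made up, not from the original post:

```python
import sqlite3

def next_key(conn, table, key_col):
    # Query the current high-water mark from the target table itself,
    # rather than trusting a separately stored counter that can drift
    # out of sync after data corrections or failed loads.
    cur = conn.execute(f"SELECT MAX({key_col}) FROM {table}")
    current_max = cur.fetchone()[0]
    return (current_max or 0) + 1  # MAX() is NULL on an empty table

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE target (id INTEGER, val TEXT)")
conn.executemany("INSERT INTO target VALUES (?, ?)", [(1, "a"), (2, "b")])
print(next_key(conn, "target", "id"))  # 3
```

Because the max is read fresh at job start, deletes or reloads in the target are picked up automatically.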
Tie-breaker comment: I read it the same way as Ultramundane. Hopefully a user wouldn't be looking for a slower solution using DataStage; therefore, DataStage must be the faster solution. The proof is in the pudding, but I would hazard a guess that PX, with its parallel capabilities, should tackle the pivo...
One issue at a time, please. You need to fully understand the requirements of using PX, especially as it relates to the databases. If you're just learning PX, you need to spend some time looking at the stage documentation in the manuals, as a sequential file stage compared to Server is vastly different that...
I think you totally missed my point. Then again, how bad is your performance? Have you measured your process and absolutely identified this as the source of the bottleneck? If you're transforming and loading the database simultaneously in the same job, you're most likely waiting on database overhead...
I don't know why the job is hanging on this view-data query. Have you asked your DBAs to check whether the query is actually running in the database? As for the SQL generated, you can put whatever columns you want into the ODBC/OCI stages; the auto-generated SQL adjusts for the columns present. It's better to select ...
Either use substring notation and re-arrange, or use the ICONV and OCONV functions to convert the date between formats. As for the time, you'll have to just append "00:00:00", because there's no time element in the data. If it's coming in from the source with time information, then again, substring no...
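For anyone who wants to see the substring idea spelled out, here's a hedged sketch (in Python rather than DataStage BASIC, and assuming an MM/DD/YYYY source format, which may differ from yours): slice the pieces, re-arrange them, and append a zero time since the source carries no time component.

```python
def to_timestamp(mmddyyyy):
    # Re-arrange MM/DD/YYYY into YYYY-MM-DD via simple substring work,
    # then append "00:00:00" because the source has no time element.
    mm, dd, yyyy = mmddyyyy.split("/")
    return f"{yyyy}-{mm}-{dd} 00:00:00"

print(to_timestamp("12/31/2024"))  # 2024-12-31 00:00:00
```

In a real job the same re-arrangement is done with substring notation in a derivation, or with ICONV to internal date format followed by OCONV back out in the target format.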
Are you running the job from Director or from job control? If you think changing it in Director affects how it runs under job control, you are mistaken. If you are starting the job using Director, set the abort limit there. If you are running it from a Sequencer job, the dsjob command line, or a Batch job, the...
Somewhere you are passing in a column value for an update that does not match the datatype. I suggest you check your reject link output file (you did have one, right? Best practice tip), see which column on the rejected row didn't match the datatype, and put the appropriate derivation/constraint ch...
I couldn't exactly follow all of your issues, but here goes. If you reference a row from a hash file and write back to that same row, you must write back all columns. This is because hash files store a row of data as a single contiguous text string. To not change a column value while changing others...
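To make the "write back all columns" point concrete, here's an illustrative sketch (in Python, with an in-memory dict standing in for the hash file; the field-mark character and data are made up): the row is one delimited string, so changing one column really means reading the whole string, replacing one field, and writing the entire string back.

```python
FM = "\xfe"  # field mark separating columns within a row (illustrative)

# A "hash file": each key maps to one contiguous delimited string.
store = {"KEY1": FM.join(["alpha", "100", "active"])}

def update_column(store, key, col_index, new_value):
    fields = store[key].split(FM)   # read the full row
    fields[col_index] = new_value   # change only one field in memory
    store[key] = FM.join(fields)    # write back ALL columns as one string

update_column(store, "KEY1", 1, "250")
print(store["KEY1"].split(FM))  # ['alpha', '250', 'active']
```

If you wrote back only the one changed column, the single stored string would be replaced and the other column values would be lost, which is why the read-modify-write of the complete row is required.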
If you don't need custom buildops you can skate by. The minimum requirements to get it to run were listed, but to do all the things you would want to do requires more. It's not really the right platform for significant processing. It's like trying to run Windoze XP on a Pentium III. Yeah, it runs...