Job compile issue

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

TonyInFrance
Premium Member
Posts: 288
Joined: Tue May 27, 2008 3:42 am
Location: Luxembourg

Job compile issue

Post by TonyInFrance »

I have a job with quite a few join stages and 4 or 5 transformer stages.
This is necessary since columns generated in one transformer are used to derive columns in the subsequent transformer stage.

I've been experiencing a peculiar problem when I go to compile this job - it never compiles the first time. I get an error saying:

##F IIS-DSEE-TFCM-00005 11:52:54(011) <main_program> Fatal Error: Added field has duplicate identifier(): APT_TOMainloop (JxXXX.Tr_CI_001)

This is clearly not true, since when I recompile immediately everything seems fine. The only irritant is that I cannot compile this particular job along with the others in the project, so I cannot make full use of the 'multiple job compile' function. What's bizarre is that it always compiles on the second attempt.

Anyone faced a similar error?

Tony
Tony
BI Consultant - Datastage
Mike
Premium Member
Posts: 1021
Joined: Sun Mar 03, 2002 6:01 pm
Location: Tampa, FL

Post by Mike »

With something like this I would suspect some potential metadata corruption causing the issue.

Are you doing a regular compile on your recompile or are you doing a force compile? See if the error repeats each time with a force compile.
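If you want to script that comparison, the Windows client ships a command-line compiler, dscc.exe. The sketch below only builds and echoes the command as a dry run; the host/credential values and the exact flag spellings (/H /U /P /J /F) are assumptions from memory - confirm them with "dscc /?" on your client version.

```shell
# Sketch only: dscc.exe is the DataStage Windows client's command-line compiler.
# Hostname, credentials, and flag spellings here are assumptions, not verified.
HOST="dshost"; USER="dsadm"; PASS="secret"
PROJECT="MyProject"; JOB="JxXXX"

# Build a force-compile command (the assumed /F flag) and echo it as a dry run,
# so its behaviour can be compared against a regular compile of the same job.
CMD="dscc.exe /H $HOST /U $USER /P $PASS $PROJECT /J $JOB /F"
echo "$CMD"
```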

If you haven't already, export the job, delete it, and then reimport it.
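The export/reimport round trip can also be done from the command line with istool. This is a sketch only - the domain port, the Jobs/&lt;name&gt;.pjb path convention, and the parameter names are assumptions; verify them with "istool export -help" before relying on them. The commands are echoed as a dry run rather than executed.

```shell
# Sketch only: istool ships under the InfoSphere client's istools/cli directory.
# Port, path convention, and parameter names below are assumptions, not verified.
DOMAIN="services-host:9080"; USER="dsadm"; PASS="secret"
SERVER="engine-host"; PROJECT="MyProject"; JOB="JxXXX"

# 1. Export the suspect job to an archive file.
EXPORT_CMD="istool export -domain $DOMAIN -username $USER -password $PASS \
 -archive /tmp/$JOB.isx -datastage $SERVER/$PROJECT/Jobs/$JOB.pjb"

# 2. After deleting the job in Designer (and waiting a moment), re-import it.
IMPORT_CMD="istool import -domain $DOMAIN -username $USER -password $PASS \
 -archive /tmp/$JOB.isx -datastage $SERVER/$PROJECT"

# Echoed as a dry run rather than executed.
echo "$EXPORT_CMD"
echo "$IMPORT_CMD"
```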

If that doesn't help, then start narrowing the issue down by removing a stage at a time.

Check your join stages to make sure that every non-key column has a unique name.

Mike
TonyInFrance
Premium Member
Posts: 288
Joined: Tue May 27, 2008 3:42 am
Location: Luxembourg

Post by TonyInFrance »

Mike wrote:With something like this I would suspect some potential metadata corruption causing the issue.

Are you doing a regular compile on your recompile or are you doing a force compile? See if the error repeats each time with a force compile.
Will try this asap.
Mike wrote:If you haven't already, export the job, delete it, and then reimport it.
This won't solve the problem, since after each modification I export the job from our development environment to our test environment, and it never compiles on the first try.
Mike wrote:If that doesn't help, then start narrowing the issue down by removing a stage at a time.

Check your join stages to make sure that every non-key column has a unique name.
I have many other jobs with multiple joins where the columns are probably not unique. In my experience, though, DataStage only issues a warning in that case - compilation does not actually fail, and even the warning appears just once.
Tony
BI Consultant - Datastage
TonyInFrance
Premium Member
Posts: 288
Joined: Tue May 27, 2008 3:42 am
Location: Luxembourg

Post by TonyInFrance »

Does anyone have any tips for this?

I have a new problem in the same job, now that I've managed to compile it.

I've noticed that the partitioning that I've set upstream in the transformer stage is not preserved.

The next stage is a Pivot Enterprise stage which needs incoming data sorted and hash partitioned. Thus to prevent warnings I have to clear the partition on the previous transformer stage.

However, on saving the job, compiling it, closing it and reopening it, I see that the upstream partitioning reverts to Default (Propagate).

This is troublesome, since the job then finishes with a warning, and the sequence that calls this job aborts right away.
Tony
BI Consultant - Datastage
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

I vaguely recall seeing this behaviour. Try opening the job, making the change, and saving the job. Then, without compiling, open the job again, make the change again if necessary, and save. Don't make any other change to the job.

You can try opening the job a third time to check that the change has stuck, or just compile it.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
JRodriguez
Premium Member
Posts: 425
Joined: Sat Nov 19, 2005 9:26 am
Location: New York City

Post by JRodriguez »

TonyInFrance,
One of the developers on our team faced a variant of this issue. Making a copy of the job, deleting the original, renaming the copy to a completely different name, compiling, and then renaming it back to the original name fixed the issue.

Give it a minute or two after you delete the original job. When the XMETA database is busy - in our case it lives in a very busy shared database - it takes some time to reflect the changes.
Julio Rodriguez
ETL Developer by choice

"Sure we have lots of reasons for being rude - But no excuses"
TonyInFrance
Premium Member
Posts: 288
Joined: Tue May 27, 2008 3:42 am
Location: Luxembourg

Post by TonyInFrance »

ray.wurlod wrote:I vaguely recall seeing this behaviour. Try opening the job, making the change, and saving the job. Then, without compiling, open the job again, make the change again if necessary, and save. Don't make any other change to the job.

You can try opening the job a third time to check that the change has stuck, or just compile it.
This worked for one of the jobs - I had two quasi identical jobs (i.e. same logic) with different filters.

So while saving a couple of times without compiling and then compiling once and for all worked for the first copy, it didn't for the second.
JRodriguez wrote:One of the developers in our team faced a variant of this issue. By making a copy of the job, deleting the original, rename the copy to a complete different name and compile, then rename it to the original name...fixed the issue

Give a minute or two after you delete the original job... When the XMETA database is busy, in our case we have it in a very busy shared database, it takes some time to reflects the changes...
This didn't work either: I created a copy of the job and deleted the original, but the new job showed the same symptoms.

The workaround I used was to insert a Copy stage between the Transformer and the Pivot Enterprise stage. The Transformer's output partitioning stays at Default (Propagate), since it refuses to remain at Clear, and I set the partitioning to Clear on the Copy stage instead, so that the data reaches the Pivot Enterprise stage with its partitioning cleared.
Tony
BI Consultant - Datastage
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

That smells like a bug to me... have you reported it to IBM?
-craig

"You can never have too many knives" -- Logan Nine Fingers
TonyInFrance
Premium Member
Posts: 288
Joined: Tue May 27, 2008 3:42 am
Location: Luxembourg

Post by TonyInFrance »

Not yet Craig. I've been firefighting since we're on a tight deadline... but I guess reporting it would ultimately be the way to go.
Tony
BI Consultant - Datastage