I have to load 350+ tables from one database into another database.
I am trying to see if it is possible to use just one generic job to do this:
- My source stage uses a parameter for the table name, and the SQL is generated automatically at runtime.
- My target stage uses a parameter for the table name, and the SQL is generated automatically at runtime.
- RCP is on
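For context, the relevant connector properties in the generic job look roughly like this (the parameter name `#pTABLE_NAME#` is my own; the exact property labels depend on the connector type):

```
Source connector:
    Table name:              #pTABLE_NAME#
    Generate SQL at runtime: Yes
Target connector:
    Table name:              #pTABLE_NAME#
    Generate SQL at runtime: Yes
    Write mode:              Insert            <- works
                             Update then Insert <- fails (see below)
```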
If Write mode for the target Table is Insert, then everything works fine.
However, if I set it to Update then Insert, I get the following message:

"The connector could not automatically generate the WHERE clause for the UPDATE statement. Specify at least one key column in the input schema"

The question then is: how can I tell DataStage which columns are the update keys?
Let's assume there's only one column to define as the update key...
I thought of using a Modify stage to pass the key column name as a parameter called #pKEY1#.
What I did is:

Source Table -> Transformer Stage (create a new column named KEYTEMP and define it as a key) -> Modify Stage 1 (KEYTEMP = #pKEY1#) -> Modify Stage 2 (#pKEY1# = KEYTEMP) -> Target Table (Upsert mode)

I thought that at the "#pKEY1# = KEYTEMP" step, the column #pKEY1# would have picked up the "key property definition" from KEYTEMP...
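Spelled out, the two Modify stage specifications were roughly the following (#pKEY1# resolves to the actual key column name at runtime; the intent, as I understood the `newname = oldname` rename syntax, was to carry the key property along):

```
Modify Stage 1 specification:
    KEYTEMP = #pKEY1#      (rename the real key column to KEYTEMP)
Modify Stage 2 specification:
    #pKEY1# = KEYTEMP      (rename it back, hoping the key flag set in
                            the Transformer travels with it)
```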
But no, the job fails miserably with the same error:
"The connector could not automatically generate the WHERE clause for the UPDATE statement. Specify at least one key column in the input schema"
I would be curious to know if there is a way around this problem... even though I have a feeling I will end up writing 350+ jobs.
Thanks for your help !