Picking metadata at run time
Posted: Mon Feb 15, 2010 11:11 am
by mfshah
I have to load data from Oracle (source) to Teradata (target). The jobs should be dynamic/flexible and pick up the metadata information at run time, i.e. if a change is made to the data model it shouldn't affect the DataStage job design. Say Col A was previously varchar 5 and now the length has increased to varchar 10, or an additional column has been added to the Oracle table and we would like to pass it along to the target table. Is there any way to pick up the metadata at runtime and make the DataStage code flexible?
We are using Datastage 7.5.2 PX.
Posted: Mon Feb 15, 2010 1:47 pm
by chulett
As long as there are zero transformations involved in the process, yes - it's called RCP or Runtime Column Propagation. Search here or in your pdf manuals for all of the gory details.
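RCP itself is a DataStage engine feature (a job-level and stage-level option, not something you code by hand), but the idea behind it can be illustrated outside DataStage. As a rough analogy only, here is a minimal Python sketch of a schema-agnostic pass-through: the "job" never names columns, so a widened type or an extra column upstream flows through with no code change. The column names and data here are hypothetical.

```python
import csv
import io

def propagate(source_rows):
    """Pass every column through unchanged, without a hardcoded schema.

    Rough analogy to Runtime Column Propagation: the column set is
    discovered at run time from the incoming data, so adding a column
    or widening a type upstream needs no change here.
    """
    rows = list(source_rows)
    if not rows:
        return ""
    # Discover the column set at run time from the first row.
    fieldnames = list(rows[0].keys())
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(rows)
    return out.getvalue()

# Hypothetical source: today it has two columns...
print(propagate([{"A": "abc", "B": "1"}]))
# ...tomorrow a column C appears, and A is wider; no code change needed.
print(propagate([{"A": "abcdefghij", "B": "1", "C": "x"}]))
```

The caveat in the post above is the important part: the moment a stage explicitly references a column (a Transformer derivation, a key in a Join, etc.), that column is pinned to the design and RCP can no longer carry schema changes through it transparently.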
Posted: Mon Feb 15, 2010 1:47 pm
by ray.wurlod
Welcome aboard.
mfshah wrote:The jobs should be dynamic/flexible and pick the metadata information at run time
Why?
Posted: Tue Feb 16, 2010 8:57 am
by mfshah
ray.wurlod wrote:Welcome aboard.
mfshah wrote:The jobs should be dynamic/flexible and pick the metadata information at run time
Why?
If a DDL change is made to the database, it should not affect the job, i.e. we shouldn't have to change the DataStage job.
Posted: Tue Feb 16, 2010 9:02 am
by chulett
Good luck with that.

Posted: Tue Feb 16, 2010 1:14 pm
by ray.wurlod
Seriously, how often are table definitions changed in a real production environment?
The usual (professional) approach is that such changes are planned, and advised to everyone involved, including ETL developers so that they can assess the impact. A proper software development lifecycle should apply to ETL programming just as much as to any other form of programming.
What happens (for example) in the Business Intelligence tool that touches this amoeba table? Is that expected to be just as dynamic?