Dynamism in DataStage
Moderators: chulett, rschirm, roy
-
- Premium Member
- Posts: 497
- Joined: Sun Dec 17, 2006 11:52 pm
- Location: Kolkata
- Contact:
Hi All,
How can we implement dynamism in DataStage? My requirement: there is a metadata file which contains the column name, length and column order, and another input file which contains the data corresponding to that metadata. The metadata file is dynamic in nature, i.e. the number of columns can vary. Please suggest a way to implement this.
Thanx in advance.
-
- Premium Member
- Posts: 1255
- Joined: Wed Feb 02, 2005 11:54 am
- Location: United States of America
I_Server_Whale wrote: What is the format of your metadata file? Can you give an example?

The metadata file is a CSV file. It contains three columns:
ColumnName ColumnWidth ColumnOrder
Name 10 1
EmpId 10 3
Salary 10 2
The metadata file is more or less like this, but the rows may increase or decrease, and that is the dynamism involved. The rows need to be sorted according to the column order, mapped against the input data file, and finally loaded into another file.
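One way to attack the dynamism, sketched very roughly in Python (the function name and structure here are mine, and the emitted osh schema syntax — the record properties and `string[max=n]` types — is an assumption to verify against the Orchestrate schema-format documentation): generate a schema file from the metadata CSV before each run, then let a PX job with runtime column propagation read the data file through it.

```python
import csv
import io

def metadata_to_schema(metadata_csv: str) -> str:
    """Turn a metadata CSV (ColumnName, ColumnWidth, ColumnOrder) into
    the body of an osh-style schema file, ordered by ColumnOrder."""
    rows = list(csv.DictReader(io.StringIO(metadata_csv)))
    # The metadata rows may arrive in any order; sort by the declared order.
    rows.sort(key=lambda r: int(r["ColumnOrder"]))
    fields = "\n".join(
        "  {0}: string[max={1}];".format(r["ColumnName"], r["ColumnWidth"])
        for r in rows
    )
    return "record\n{final_delim=end, delim=','}\n(\n" + fields + "\n)\n"

metadata = """ColumnName,ColumnWidth,ColumnOrder
Name,10,1
EmpId,10,3
Salary,10,2
"""
print(metadata_to_schema(metadata))
```

A before-job subroutine or sequence step could run something like this to rewrite the schema file, so the job itself never needs redesigning when columns change.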
-
- Participant
- Posts: 54607
- Joined: Wed Oct 23, 2002 10:52 pm
- Location: Sydney, Australia
- Contact:
I am unaware of any way to achieve this in server jobs.
Doubtless some kind of wizard could be written to read the file and create the table definition in the DataStage Repository - indeed, I've done that, but what becomes of the metadata after that? There's no extra information in the file about what to do with it. And, for text files, whence is the information about the format to be obtained?
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
ray.wurlod wrote: I am unaware of any way to achieve this in server jobs. Doubtless some kind of wizard could be written to read the file and create the table definition in the DataStage Repository - indeed, I've done that, but what becomes of the metadata after that? There's no extra information in the file about what to do with it. And, for text files, whence is the information about the format to be obtained?

Can you go ahead with the wizard part? Maybe that can be of some help.
Invoking a Wizard is generally considered risky... best not to anger them with unreasonable demands.
Even if you do get this 'some help' I don't see how it's really going to help. The functionality you seek isn't available if you are expecting to be able to handle this kind of information 'on the fly' inside one job.
Something could be built to automagically build jobs from a template if you have the skillz, one for each unique combination of file values. By the time you do that however, you could have created a true 'template' job and spun off the jobs yourself by hand I would think.
-craig
"You can never have too many knives" -- Logan Nine Fingers
chulett wrote: Invoking a Wizard is generally considered risky... best not to anger them with unreasonable demands. Even if you do get this 'some help' I don't see how it's really going to help. The functionality you seek isn't available if you are expecting to be able to handle this kind of information 'on the fly' inside one job. Something could be built to automagically build jobs from a template if you have the skillz, one for each unique combination of file values. By the time you do that however, you could have created a true 'template' job and spun off the jobs yourself by hand I would think.

What about the RCP property?
RCP is a good option that PX jobs have, but that's just for propagating the records. For any explicit transformations, the columns need to be defined.
And yes, your requirement can be met in PX jobs as long as you understand the limitations of RCP.
Creativity is allowing yourself to make mistakes. Art is knowing which ones to keep.
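For the sample metadata above, the schema file fed to a Sequential File stage with RCP enabled might look roughly like this (the field types and property names are assumptions; check them against the schema-format documentation for your DataStage version):

```
record
{final_delim=end, delim=','}
(
  Name: string[max=10];
  Salary: string[max=10];
  EmpId: string[max=10];
)
```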
DSguru2B wrote: RCP is a good option that PX jobs have, but that's just for propagating the records. For any explicit transformations, the columns need to be defined. And yes, your requirement can be met in PX jobs as long as you understand the limitations of RCP.

I have tried the RCP option and created the schema file that contains the metadata information, but the job is getting aborted with the following error:
Delimiter for field "Store" not found; input: <empty>, at offset: 8
The file that I am currently reading uses a comma as the field delimiter, and the same has been defined in the job, but it's not working. Please provide a solution.
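One thing worth checking (an assumption, not a confirmed diagnosis): that error pattern usually means the parser expected a delimiter after the named field but hit the end of the record instead. If the last field on each line is not followed by a trailing comma, the record-level properties in the schema should say so, roughly:

```
record
{delim=',', final_delim=end}
(
  ...
)
```

Without `final_delim=end`, the schema implies a comma after every field, including the last one, and the import fails when that final delimiter is absent.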