Parsing a string into multiple columns

Post questions here related to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

chida25
Participant
Posts: 4
Joined: Tue Dec 19, 2006 11:10 pm
Location: Pune


Post by chida25 »

Hi,
The input is a delimited file, as below:

KEY_COL|ATTRIBUTE_NAME_2|ATTRIBUTE_VALUE_4
sonj2|USER PRIVILEGES DESC|useradmin+route+suppress+reportadmin

Expected output: multiple rows for each KEY_USER, one per attribute value associated with it.

KEY_COL|ATTRIBUTE_NAME_2|ATTRIBUTE_VALUE_2
sonj2|USER PRIVILEGES DESC|useradmin
sonj2|USER PRIVILEGES DESC|route
sonj2|USER PRIVILEGES DESC|suppress
sonj2|USER PRIVILEGES DESC|reportadmin

I used the Column Import and Pivot stages to achieve this output. However, other KEY_USER records can have more or fewer attribute values, and this dynamic number of values is becoming a limitation. Is there any way to handle it?

I also tried the Field and Index functions, but how do I increment the counter for the delimiter position?

I tried the Convert function with CHAR(10), but the result comes out in one row.
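[Editor's note: a minimal sketch of the counter idea, in Python rather than DataStage BASIC. DataStage's Field(string, delimiter, instance) returns the instance-th delimited field, so incrementing the instance once per output row walks through the values until an empty field is returned. The field() helper below is a hypothetical Python analogue, not a DataStage API.]

```python
def field(s, delim, instance):
    """Rough Python analogue of DataStage's Field(string, delim, instance).
    Instance is 1-based; out-of-range instances return an empty string."""
    parts = s.split(delim)
    return parts[instance - 1] if 1 <= instance <= len(parts) else ""

row = "sonj2|USER PRIVILEGES DESC|useradmin+route+suppress+reportadmin"
key_col, attr_name, attr_value = row.split("|")

# Increment the instance counter until Field() returns an empty string.
i = 1
out_rows = []
while field(attr_value, "+", i) != "":
    out_rows.append((key_col, attr_name, field(attr_value, "+", i)))
    i += 1
# out_rows now holds one row per "+"-delimited value
```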
Chidanand R Patil.
ray.wurlod
Participant
Posts: 54595
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

Assuming you're on version 8.5 or later, use a loop in the Transformer stage to output a row for each "+"-delimited field in the ATTRIBUTE_VALUE_4 column.
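[Editor's note: a sketch of that loop logic, in Python for illustration. In the actual Transformer stage (8.5+) the loop condition would be along the lines of `@ITERATION <= DCount(InLink.ATTRIBUTE_VALUE_4, "+")`, with the output derivation `Field(InLink.ATTRIBUTE_VALUE_4, "+", @ITERATION)`; the function names below are Python stand-ins, not DataStage calls.]

```python
def dcount(s, delim):
    """Python analogue of DataStage's DCount(): count of delimited fields."""
    return len(s.split(delim)) if s else 0

def transformer_loop(key_col, attr_name, attr_value):
    # Loop condition analogue: @ITERATION <= DCount(attr_value, "+")
    for iteration in range(1, dcount(attr_value, "+") + 1):
        # Output derivation analogue: Field(attr_value, "+", @ITERATION)
        yield (key_col, attr_name, attr_value.split("+")[iteration - 1])

rows = list(transformer_loop(
    "sonj2", "USER PRIVILEGES DESC",
    "useradmin+route+suppress+reportadmin"))
```

Each loop iteration emits one output row, so a single input row fans out into as many rows as there are "+"-delimited values, with no fixed upper bound.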
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
chida25
Participant
Posts: 4
Joined: Tue Dec 19, 2006 11:10 pm
Location: Pune

Post by chida25 »

Thanks for the response; however, we are still on 8.1.
Please suggest an alternative.
Chidanand R Patil.
ray.wurlod
Participant
Posts: 54595
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

1. Process the maximum possible number of fields and output a row only if the result is non-empty.

2. Use a server job and take advantage of the dynamic normalization characteristic of a hashed file or UniVerse table.

3. Upgrade to version 8.5 or later.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.