I am trying to find an easy way to parse a .txt file and load it into a table. The file looks something like this:
1033015067
2004040120
3NY6175703845
4001
1040200004
16600000010402000
219200000042200000
411101cfrtgd234fbnn
50000000000000000
Every row in the file is 254 characters long, even though my example appears shorter. The first character of each row determines how the columns are defined for the rest of that row. I need to put all the data for rows 1 through x (x can be any number of rows up to a max of 10) into column format. When another row starting with 1 is hit, a new row is inserted into the target table, so in this example there would be 3 rows in the target table. Some target rows would not have all columns populated, since not every group has rows starting with 2, 3, etc. The type-1 row holds the account number, and it must be stored so that it relates to the rows 2, 3, 4, etc. that follow it.
This is how the data should look in the target for the first 2 records:
COL1 COL2 COL3 COL4 COL5 COL6 COL7 COL8 COL9  COL10
033  0150 67   004  040  120  NY6  1757 03845 001
040  2000 04
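The grouping described above (a row starting with '1' begins a new target record, and the rows that follow fill in later columns) can be sketched in Python. The column offsets below are guessed from the sample rows only, not from the real 254-byte layout, so they would need to be replaced with the actual field positions:

```python
# Per record type: list of (start, end) slices applied to the row
# AFTER the leading type digit, plus the index of the first target
# column that type fills. Offsets are guesses from the sample data.
LAYOUTS = {
    "1": ([(0, 3), (3, 7), (7, 9)], 0),   # fills COL1-COL3
    "2": ([(0, 3), (3, 6), (6, 9)], 3),   # fills COL4-COL6
    "3": ([(0, 3), (3, 7), (7, 12)], 6),  # fills COL7-COL9
    "4": ([(0, 3)], 9),                   # fills COL10
}

def parse(lines):
    """Group flat rows into 10-column target records."""
    records = []
    current = None
    for line in lines:
        rtype, body = line[0], line[1:]
        if rtype == "1":
            # A type-1 row starts a new target record.
            current = [""] * 10
            records.append(current)
        if current is None or rtype not in LAYOUTS:
            continue  # ignore rows before the first '1' or unknown types
        slices, first_col = LAYOUTS[rtype]
        for i, (a, b) in enumerate(slices):
            current[first_col + i] = body[a:b]
    return records
```

Running this over the nine sample rows yields three records, with the second record having only COL1 through COL3 populated, matching the expected output shown above.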
This is hard to explain, so please ask any questions you may have. I am not a DataStage expert, so I may need some clarification.
Thanks,
Renee
Sequential File Parsing
Thanks a lot. I used the Row Splitter and it worked!
Renee
chulett wrote: For starters, please read the documentation on the Row Splitter stage. You should be able to find a 'rowsplit.pdf' file in the Docs directory under your client installation. It sounds like it does exactly what you are looking for.
