Problem in reading decimal field after conv EBCDIC to ASCII

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

Welcome aboard. How did you "convert to ASCII"? Did you try "read as EBCDIC" in the Sequential File stage?
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
Dip_DS
Participant
Posts: 4
Joined: Thu Oct 30, 2014 1:46 am

Post by Dip_DS »

Hi Ray,

I tried 'read as EBCDIC' in the Sequential File stage without any luck...

There is an existing production job which converts the file from EBCDIC to ASCII. Due to some constraint, the job uses a Column Import stage instead of CFF to read the EBCDIC file (the input has a multiple-REDEFINES layout) and splits the file into xx files based on the layout. The splitting logic is coded in a Transformer. The output sequential files are used for further processing and for loading the data into a Teradata table.

Input:
Sequential file
Character set = EBCDIC and data format = Binary

Output
Sequential file

Today, when I viewed the file in Unix, I noticed that records break wherever a decimal field occurs.
It looks like:

Code:

 xx,xx,xx,000     , ,
,    ,000000111111,0000000222222,
It should look like:

Code:

 xx,xx,xx,000     , , ,    ,000000111111,0000000222222,
Dump of the record from Unix:

Code:

0000100                                ,   0   0   0   0   0   0   0   0
0000120    0   0   0   0   0   0   0   2   0   1   6   9   2   5   4   ,
0000140    0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   2
0000160    0   1   6   9   2   5   4   ,       ,   0   1   2   2   7   0
0000200    7   ,  \0  \0  \0  \0  \0 223 220   \   ,  \0  \0  \0  \0  \0
0000220   \0  \0  \f   ,  \0  \0  \0  \0  \0  \0  \0  \f   ,  \0  \0  \0
0000240   \0  \0  \0  \0  \f   ,  \0  \0  \0  \0  \0  \0  \0  \f   ,  \0
0000260   \0  \0  \0  \0  \0  \0  \f   , 002 001   @ 201 234   , 002 001
0000300    @ 201 234   , 002   5 222  \f   ,  \0  \f   ,  \0  \0  \f   ,
0000320    2   6   4   5   Z   0   3   4   1   2   9   0
0000340                                    ,               ,
0000360    ,   0   ,                                       ,  \0 004 022
0000400  234   ,           ,           ,   0   0   0   0   0   ,   0   0
0000420    0   0   ,   0   5   ,       ,       ,       ,       ,       ,
0000440        ,       ,       ,
0000460                                                        ,  \0  \0
0000500   \0  \f   ,                                   ,  \0  \0  \f   ,
0000520                    ,                           ,
0000540                                                    ,   1   ,   0
0000560    1   ,   0   ,   0   ,   3   1   2   ,   2   7   0   7   0   1
0000600    3   1   3   ,   3   ,   0   0   ,   0   0   0   0   0   0   0
0000620    0   9   3   ,   9   0   5   0   0   0   0   0   0
0000640                                        0   2   0   1   6   9   2
0000660    5   4           ,
0000700                                ,       0   0   0   0   0   2   9
0000720    9   8   0   0   0   0   0   0   ,   0   0   0   0   0   0   1
0000740    2   1   2   4   2   0   1   4   0   ,   8   1   9   0   4   7
0000760    2   6  \n
So I guess this has something to do with the line-break character \f (terminator), which appears in most of the decimal fields. I read a couple of topics here about 'contain terminator' and the Convert function (Server), but didn't find anything suitable for Parallel.
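Those \f bytes are not really terminators at all: they are the sign nibble of packed-decimal (COMP-3) fields. A positive value ends in the nibble 0xC, so a packed zero is stored as 0x00...0x0C, and 0x0C happens to be the ASCII form-feed character \f. The sketch below (plain Python, not DataStage code; the function name is my own) shows how those bytes arise:

```python
# Illustration (not DataStage code): a packed-decimal (COMP-3) zero ends in
# the byte 0x0C, which is the ASCII form-feed character \f seen in the dump.
def pack_comp3(value: int, num_bytes: int) -> bytes:
    """Pack an integer as COMP-3: two digits per byte, sign in the last nibble."""
    sign = 0xD if value < 0 else 0xC                 # C = positive, D = negative
    digits = str(abs(value)).rjust(num_bytes * 2 - 1, "0")
    nibbles = [int(d) for d in digits] + [sign]
    return bytes((nibbles[i] << 4) | nibbles[i + 1] for i in range(0, len(nibbles), 2))

print(pack_comp3(0, 4))   # b'\x00\x00\x00\x0c'  -- the \0 \0 \0 \f in the dump above
```

Any downstream step that treats such control bytes as terminators will split the record in the middle of a packed field.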
roy
Participant
Posts: 2598
Joined: Wed Jul 30, 2003 2:05 am
Location: Israel

Post by roy »

Hi,
In my experience, most of the time the EBCDIC file is transferred from the source server to the DataStage server. When that is the case, you usually include the EBCDIC-to-ASCII conversion in the file transfer utility (in both directions), which saves you from having to handle such things yourself.
IHTH,
Roy R.
Time is money but when you don't have money time is all you can afford.

Search before posting:)

Join the DataStagers team effort at:
http://www.worldcommunitygrid.org
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Keep in mind that any "ASCII translation" of packed fields will destroy them. Years ago we used a utility that let you define the character positions that needed translation during the transfer, so you could skip over anything packed. Nowadays it seems that people either build out the record with unpacked fields, or transfer without translation and use something like DataStage's CFF stage to do the appropriate translations during the read.
-craig

"You can never have too many knives" -- Logan Nine Fingers
FranklinE
Premium Member
Premium Member
Posts: 739
Joined: Tue Nov 25, 2008 2:19 pm
Location: Malvern, PA

Post by FranklinE »

Based on the information you've provided, you are seeing a standard error in handling the packed decimal field. See the Using Mainframe Source Data FAQ for more details on this format and how to handle it (and how to avoid errors on the ASCII side).
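For reference, decoding a COMP-3 field correctly means working on the raw, untranslated bytes: two digits per byte, with the sign in the final nibble. A minimal decoder sketch in plain Python (the function name and scale parameter are my own; the FAQ covers the format itself):

```python
# Illustrative COMP-3 decoder: operate on the raw EBCDIC bytes, never on
# a character-translated copy of the field.
def unpack_comp3(data: bytes, scale: int = 0):
    """Decode a packed-decimal field: two digits per byte, sign in the last nibble."""
    nibbles = []
    for b in data:
        nibbles.extend((b >> 4, b & 0x0F))
    sign = nibbles.pop()                      # 0xD = negative; 0xC/0xF = positive
    value = int("".join(map(str, nibbles)))
    if sign == 0xD:
        value = -value
    return value / 10 ** scale if scale else value

print(unpack_comp3(b"\x00\x00\x00\x0c"))          # 0   -- the \0 \0 \0 \f zeros in the dump
print(unpack_comp3(b"\x12\x34\x5d", scale=2))     # -123.45
```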
Franklin Evans
"Shared pain is lessened, shared joy increased. Thus do we refute entropy." -- Spider Robinson

Using mainframe data FAQ: viewtopic.php?t=143596 Using CFF FAQ: viewtopic.php?t=157872