Reading decimal values in schema files

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

TonyInFrance
Premium Member
Posts: 288
Joined: Tue May 27, 2008 3:42 am
Location: Luxembourg

Reading decimal values in schema files

Post by TonyInFrance »

Morning / Afternoon / Evening all

I'm going to bore you guys once again with the infamous schema file. Just when I thought I was getting the hang of it, voilà, DataStage has thrown me a curveball.

I basically have decimal values of 20 significant digits, including 9 decimal places, which I'm reading with a decimal[20,9] specification. However, my actual data contains a thousands separator, which is a space. So 1299.789987789 appears in my flat file as 1 299.789987789.
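
For reference, the relevant entry in my schema file looks roughly like this (the field name and delimiter here are just placeholders, not my real layout):

record
{final_delim=end, delim=';', quote=none}
(
  Amount: decimal[20,9];
)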

Thus when I read this value I get 1.

Is there any way I can tell DataStage about the thousands separator?

Thanks guys

Tony
Tony
BI Consultant - Datastage
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

You're in France, using a French locale, which specifies space as the thousands delimiter and comma as the decimal placeholder (which is correct for France). You can change this easily by changing the NLS locale used by that job.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
TonyInFrance
Premium Member
Posts: 288
Joined: Tue May 27, 2008 3:42 am
Location: Luxembourg

Post by TonyInFrance »

Actually in this case the decimal separator is a dot. Don't ask why. Sometimes the French like to behave differently from the norm.

I guess I need to define an NLS locale just to signify a thousands' separator.
Tony
BI Consultant - Datastage
TonyInFrance
Premium Member
Posts: 288
Joined: Tue May 27, 2008 3:42 am
Location: Luxembourg

Post by TonyInFrance »

No one?
Craig / Eric ? :p
Tony
BI Consultant - Datastage
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

You didn't ask another question. I assumed that
TonyInFrance wrote: "I guess I need to define an NLS locale just to signify a thousands' separator."
was a statement of intent.

Bonne chance!
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Same here.
-craig

"You can never have too many knives" -- Logan Nine Fingers
qt_ky
Premium Member
Posts: 2895
Joined: Wed Aug 03, 2011 6:16 am
Location: USA

Post by qt_ky »

Sorry, I don't know the answer. My passport is expired. :?
Choose a job you love, and you will never have to work a day in your life. - Confucius
TonyInFrance
Premium Member
Posts: 288
Joined: Tue May 27, 2008 3:42 am
Location: Luxembourg

Post by TonyInFrance »

If you guys mean changing the Default collation locale for stages on the NLS tab of the job's properties, I just tried that.

I changed it from Project (OFF) to fr_FR

Nothing changed, though. The value is still only read up to the first space.
Tony
BI Consultant - Datastage
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

Take a look at what's specified in the fr_FR locale. You may need to create a custom locale definition.

Otherwise, read the field as VarChar and use the Convert() function to change the delimiter characters.
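
Something along these lines in a Transformer derivation should do it (the link and column names are only placeholders):

StringToDecimal(Convert(" ", "", InLink.Amount))

Convert() with an empty to-list simply deletes the spaces, and StringToDecimal() then turns the cleaned string back into your decimal[20,9].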
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
TonyInFrance
Premium Member
Posts: 288
Joined: Tue May 27, 2008 3:42 am
Location: Luxembourg

Post by TonyInFrance »

I'll definitely try the first solution.

Reading the field as VarChar and using Convert() on one field in one file would mean creating a dedicated job just for that file, whereas I'm currently using a single generic job to validate 30 files, each with its own schema file.
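
To make that concrete, the schema entry for that one file would have to become something like this (field name and length are only illustrative):

  Amount: string[max=30];

and the conversion back to decimal[20,9] would then sit in a Transformer derivation tied to that specific column, which is exactly what breaks the generic pattern.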
Tony
BI Consultant - Datastage