Error : unable to map file

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

Post Reply
Jean-Michel
Participant
Posts: 2
Joined: Tue Apr 20, 2004 7:28 am

Error : unable to map file

Post by Jean-Michel »

Hello,

We're trying to read a dataset of size 4.5 Go and we've got the following error: Unable to map file 'nameofthedataset'. Invalid argument.
The error occurred on Orchestrate node 'nodename'.

The job is running on 2 nodes, and there's enough disk and scratch space.

Does anyone have a clue?

Thank you
richdhan
Premium Member
Posts: 364
Joined: Thu Feb 12, 2004 12:24 am

Post by richdhan »

Hi Michel,

First of all, what do you mean by 4.5? Is it KB or MB? What is the size of the dataset?

Is the job running on a single node?

Are you able to view the data in the dataset using the View Data option?

Try opening the dataset using the Dataset Management tool available in DataStage Manager and report if there are any problems.

It is very difficult to identify the problem on the first shot, so try the various options and let us know the feedback.

Regards
--Rich

Pride comes before a fall
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

"Go" in French maps to "GB" in English. :wink:

Unfortunately I can't help with the actual problem.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
Jean-Michel
Participant
Posts: 2
Joined: Tue Apr 20, 2004 7:28 am

Found the solution

Post by Jean-Michel »

Hi,

Thank you for your answers. And sorry for mixing French and English :oops: . Before giving the solution, I'll restate the problem.

Any attempt to read a dataset larger than 2 GB resulted in an "unable to map file" error. We run PX 7.01. The problem occurred in jobs running on 1, 2, or 4 nodes. We could view the data using the View Data option, or using the Dataset Management tool in DS Manager; no problems were reported.

Our DataStage administrator solved the problem with the help of DataStage support. He added a new user-defined environment variable, $APT_IO_NOMAP, of type String with value 1.

And now, we can read and write datasets, whatever their size.

:lol:
richdhan
Premium Member
Posts: 364
Joined: Thu Feb 12, 2004 12:24 am

Post by richdhan »

Hi Michel,

It is good to know that your problem has been solved. But do you know the significance of the environment variable $APT_IO_NOMAP? If you get more information on this front, please share it in the forum. Meanwhile, I will also try to find some information.

What OS is DataStage running on? Is it AIX, HP-UX, or Solaris?

Regards
--Rich
leo_t_nice
Participant
Posts: 25
Joined: Thu Oct 02, 2003 8:57 am

Post by leo_t_nice »

Hi

This is what I found in a document on environment variables:

APT_IO_MAP and APT_IO_NOMAP: Control whether iomgr uses mapped I/O for disk files. If neither environment variable is set, mapping is used by default except on AIX and HP-UX, which use read/write I/O by default.
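The default-selection rule described above could be sketched like this; note this is only a hypothetical illustration in Python of the documented behaviour, not actual iomgr/Orchestrate code:

```python
import os
import platform

def use_mapped_io(env=os.environ):
    """Hypothetical sketch of the documented iomgr default, not real code."""
    if env.get("APT_IO_NOMAP") == "1":
        return False  # mapped I/O explicitly disabled
    if env.get("APT_IO_MAP") == "1":
        return True   # mapped I/O explicitly enabled
    # Neither variable set: AIX and HP-UX default to read/write I/O,
    # every other platform defaults to memory mapping.
    return platform.system() not in ("AIX", "HP-UX")

print(use_mapped_io({"APT_IO_NOMAP": "1"}))  # -> False
```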

Hope this helps
richdhan
Premium Member
Posts: 364
Joined: Thu Feb 12, 2004 12:24 am

Post by richdhan »

Hi,

This is what I found in one of the Ascential docs. Hope this is useful.

Memory mapped IO is, in many cases, a big performance win; however, in certain situations, such as a remote disk mounted via NFS, it may cause significant performance problems. APT_IO_NOMAP=1 and APT_BUFFERIO_NOMAP=1 turn off this feature and sometimes affect performance. AIX and HP-UX default to NOMAP. APT_IO_MAP=1 and APT_BUFFERIO_MAP=1 can be used to turn on memory mapped IO on for these platforms.
But I did not get what they mean by "memory-mapped I/O". If someone can explain that, it would be good.
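In short, memory-mapped I/O means the OS maps the file's contents into the process's virtual address space, so reads become ordinary memory accesses served by the pager rather than explicit read() calls. A mapping needs a contiguous stretch of address space as large as the mapped region, which is presumably why a 32-bit process can fail to map a multi-gigabyte dataset with "Invalid argument". A minimal Python illustration of the two styles, using a throwaway temp file (hypothetical example, not DataStage code):

```python
import mmap
import os
import tempfile

# Create a small demo file.
path = os.path.join(tempfile.mkdtemp(), "demo.dat")
with open(path, "wb") as f:
    f.write(b"hello dataset")

# Regular read/write I/O: an explicit read() copies data into a buffer.
with open(path, "rb") as f:
    data_read = f.read()

# Memory-mapped I/O: the file is mapped into the address space, and
# bytes are fetched by the OS pager when the memory is touched.
with open(path, "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    data_mapped = mm[:]  # slicing the map reads pages on demand
    mm.close()

assert data_read == data_mapped
print(data_mapped.decode())  # -> hello dataset
```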

Thanks
--Rich

Pride comes before a fall
Post Reply