add_to_heap() - Unable to allocate memory

Archive of postings to DataStageUsers@Oliver.com. This forum is intended only as a reference and cannot be posted to.

Moderators: chulett, rschirm

admin
Posts: 8720
Joined: Sun Jan 12, 2003 11:26 pm

add_to_heap() - Unable to allocate memory

Post by admin »

Guys,

I'm running DS 5.1 on AIX and I've been getting the above error when a hashed file is being populated. I've checked the forum archives for this type of error, and from what I could gather it can be resolved by ensuring the availability of space at the DS_MMAPPATH and setting DS_MMAPSIZE to a sufficient size. Both of these I've done: the DS_MMAPPATH has 5 gig spare and DS_MMAPSIZE is set to 512. I'm not sure if this is relevant, but I have also increased the read/write cache Tunables in Administrator to 512, yet the problem still persists.
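For reference, the relevant settings look something like this (the path here is just illustrative, and as I understand it DS_MMAPSIZE is in megabytes) - on our install they live in the engine's uvconfig, though this may vary by release:

    DS_MMAPPATH /data/dstage/mmap
    DS_MMAPSIZE 512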

The hashed file is a dynamic hashed file of about 3.5 million rows with an average row size of 15 bytes.

Any help would be appreciated.

Regards,
Regu.
admin
Posts: 8720
Joined: Sun Jan 12, 2003 11:26 pm

Post by admin »

First, you must recycle services in order for the new environment to be used. In addition, the path in which your hash file exists must be on a file system with enough capacity. If you didn't specify a fixed path, the file exists in the DataStage project where the job is executing (I highly recommend that you use this functionality, along with a job parameter for the directory). Use external pathing for hash files so that you can monitor their sizes and file systems.
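As a sketch (the parameter name and default directory are just examples), in the Hashed File stage you would point the directory at a job parameter instead of leaving it blank:

    Directory path:  #HashFileDir#
    Job parameter:   HashFileDir   (default, e.g., /data/dstage/hashfiles)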

Good luck!
-Ken


admin
Posts: 8720
Joined: Sun Jan 12, 2003 11:26 pm

Post by admin »

Thanks Kenneth - I've restarted it and checked the space availability where the hashed file is being built, and am still getting the error. I have 5 gig free in the project directory where all our jobs build their hashed files. You're right about the fixed path for the hashed files; it's something I have scheduled on my list of things to do, but I'm not sure it would help with this problem, would it?

Regu.

admin
Posts: 8720
Joined: Sun Jan 12, 2003 11:26 pm

Post by admin »

Spot on.

Thanks
Regu
-----Original Message-----
From: Simon Fryett [mailto:sfryett@hotmail.com]
Sent: 13 August 2002 14:07
To: datastage-users@oliver.com
Subject: RE: add_to_heap() - Unable to allocate memory


This does sound like a file creation problem; however, you should check the max data segment size for the user that runs DataStage services. In the past I have had problems with this being set too low and DataStage running out of memory. This setting can be checked with the ulimit -a command.
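A quick check, and a possible fix on AIX, might look something like this (the user name is just an example):

    $ ulimit -a                   # run as the user that owns the DataStage services
    ...
    data(kbytes)       131072    # if this is small, large allocations can fail

    # As root, raise the data segment limit for that user, then restart services:
    # chuser data=-1 dsadm       # -1 means unlimited on AIX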

Simon Fryett
Data Warehouse Consultant
simon.fryett@smallnetconsulting.co.uk

