Internal Error 39202

Post questions here relative to DataStage Server Edition for such areas as Server job design, DS Basic, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

pasbouch
Charter Member
Posts: 2
Joined: Wed Aug 04, 2004 9:24 am
Location: Florida Department of Education

Internal Error 39202

Post by pasbouch »

Hi, I contacted Ascential support to get the steps to change the hashed file addressing mode from 32-bit to 64-bit. Here is what they sent me:

Please follow these steps to change the uvconfig DataStage parameters:
1. Log in as the dsadm user.
2. Make sure there are no DataStage client connections and no jobs running. Change into the DataStage engine directory and enter ". ./dsenv" to source the dsenv file, then enter "bin/uv -admin -stop" to shut down DataStage.
3. Modify the uvconfig file to set the 64BIT_FILES parameter to 1 to enable 64-bit hashed files.
4. Run "bin/uv -admin -regen". This copies uvconfig to the .uvconfig file that the DataStage engine actually uses.
5. Run "bin/uv -admin -start" to start DataStage back up.
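The edit in step 3 above can be sketched in isolation like this, demonstrated on a throwaway stand-in for the real uvconfig file (which lives in the DSEngine directory, and the engine must be stopped first per steps 2 and 4-5). Note that a later reply in this thread strongly advises never making this global change at all.

```shell
# Step 3 in miniature: flip 64BIT_FILES in a copy of uvconfig with sed.
workdir=$(mktemp -d)
printf '64BIT_FILES 0\n' > "$workdir/uvconfig"   # stand-in for the real file
cp "$workdir/uvconfig" "$workdir/uvconfig.bak"   # always keep a backup first
sed -i 's/^64BIT_FILES[[:space:]]*0/64BIT_FILES 1/' "$workdir/uvconfig"
grep '^64BIT_FILES' "$workdir/uvconfig"          # prints: 64BIT_FILES 1
```

The real file contains many other tunables, so a targeted sed (rather than regenerating the file) keeps everything else untouched; the backup copy gives you a way back if the regen goes wrong.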

When trying to log back into DataStage using the client after I did all the steps, I received this message:

Failed to connect to host: "host name", project: "project name"
(Internal Error (39202))

I looked for other messages on the same subject and tried to apply the suggested fixes, but nothing worked. The regen must have changed something in our parameters or settings. Can someone help me?

Thanks
Pascal Bouchard
Systems analyst
Florida Department of Education
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Ugh... I hope you have a backup. :?

You're probably going to need someone like Ray (someone who understands the internals of DataStage) to answer your question. My understanding is that this probably was not a Good Thing to do. From what I remember it will convert all hash files in the engine to 64bit, which I doubt is what you were trying to accomplish... and it sounds like something got horked up in the process.

If you are in need of the ability to create 64bit hash files for use in your jobs, that's just a matter of adding a specific command line parameter ( 64BIT ) to the creation statement. This does mean you need to issue the creation command yourself, which you can get from either reading the logs of jobs that create hashes or from the 'unsupported' Hash File Calculator utility.
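For illustration only, a creation statement along these lines is what's meant; the file name here is hypothetical, and the exact command and keyword placement should be taken from your own job's log (or the Hash File Calculator) rather than from this sketch, which assumes UniVerse CREATE.FILE syntax for a dynamic hashed file:

```
CREATE.FILE MyHashedFile DYNAMIC 64BIT
```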
-craig

"You can never have too many knives" -- Logan Nine Fingers
pasbouch
Charter Member
Posts: 2
Joined: Wed Aug 04, 2004 9:24 am
Location: Florida Department of Education

The problem is resolved

Post by pasbouch »

OK, one of my coworkers found the problem: the UVTEMP directory was accessible by the root account only. We changed the permissions to 777 on the directory and everything came back to the way it was before.
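For anyone hitting the same wall, the fix amounts to opening up the permissions on whatever directory your uvconfig's UVTEMP parameter points at. A minimal simulation on a throwaway directory (the real path varies per install):

```shell
# Simulate the bad state and the fix on a scratch directory.
d=$(mktemp -d)
chmod 700 "$d"          # owner-only access: the state that broke logins here
stat -c '%a' "$d"       # prints: 700
chmod 777 "$d"          # the fix applied in this thread
stat -c '%a' "$d"       # prints: 777
```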

Thanks anyway
Pascal
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Well, glad it was a simple fix. Still hoping Ray or Kim or someone will come along and provide you with some Words of Wisdom regarding your situation...
-craig

"You can never have too many knives" -- Logan Nine Fingers
ray.wurlod
Participant
Posts: 54595
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

NO!

Never set 64BIT_FILES on. This means that EVERY hashed file on your system has 64-bit pointers. You neither need nor want this.
I summon Phil, the Prince of Insufficient Light, to condemn the provider of that advice to Heck.

Step 1: do you really need all those columns and rows in the hashed file? On a reference input link in a Transformer stage, any column that does not have a line coming out of it is a column that is not needed in that job, and probably therefore not needed in the hashed file. Don't load it in there in the first place. Similarly, is every row needed in the hashed file? If you're doing a lookup against a Type 2 slowly changing dimension, all you need in the hashed file are the current rows from the target table, not all of them. Only load the current rows. By these means you will likely find that you rarely -- if ever -- need to worry about the 2GB size limit for (32-bit) hashed files.

Step 2: If you really do need to go beyond 2GB (considering everything in Step 1), then only set 64-bit addressing for that particular hashed file. You do this at creation time or by using the RESIZE command.

Code:

RESIZE HashedFileName * * * 64BIT USING /filesystem
The three asterisks are necessary in the syntax. The filesystem is anywhere that has sufficient scratch space to build a copy of the hashed file (say at least 2GB?). You ordinarily require exclusive access to the hashed file when resizing it; while it is possible to resize one while there are concurrent users, it's not the preferred approach.

(Scott Adams may have rights associated with the character of Phil, Prince of Insufficient Light.)
Last edited by ray.wurlod on Wed Aug 04, 2004 10:39 pm, edited 3 times in total.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
ray.wurlod
Participant
Posts: 54595
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

All of that having been said (and it needed saying), the problem is not anything to do with hashed files.
Error code 39202 relates to a failure to connect. It decodes (per IBM's InterCall Developer's Guide) as "slave failed to give server the Go Ahead message", which is an internal coordination-between-processes error. Rebooting the server may help. Otherwise contact your support provider and/or talk to your system administrator.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.