ds_uvput() - Write failed for record id '4130

Post questions here relative to DataStage Server Edition for such areas as Server job design, DS Basic, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

sivatallapaneni
Participant
Posts: 53
Joined: Wed Nov 05, 2003 8:36 am

ds_uvput() - Write failed for record id '4130

Post by sivatallapaneni »

Hi everyone,
I have a problem with one of my jobs. It has three hash files: it references one hash file and writes to two hash files.
When I run this job it gives the following

Code:

ds_uvput() - Write failed for record id '4135
warning. When I try to access the hash file from UV, it gives me the following message:

Code:

Read operation failure.  Message[000079]Internal file corruption detected.  File must be repaired.
I tried to analyze the file, and it gave me a similar message to the one above, but with message[000010].
The same job with the same set of data ran fine in the QA and DEV environments; the problem is only in the production environment, and nothing has changed on the server.
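
For reference, the analysis step can be reproduced from the UniVerse (UV) shell in the project directory. A minimal sketch, assuming the hashed file is named MyHashedFile and was created in the project's account directory (both the name and the path are hypothetical):

Code:

cd /path/to/project          (the DataStage project/account directory; path is an assumption)
$DSHOME/bin/uv               (start the UV shell from that directory)
>ANALYZE.FILE MyHashedFile   (reports file type, modulus and group statistics)

On a healthy file this prints the file's statistics; on a corrupt one it fails with errors like the message[000079]/message[000010] ones above.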

Is there anything I can do to make this hash file problem go away?

Appreciate any help that you guys can offer.

Thank you,
Siva.
narasimha
Charter Member
Posts: 1236
Joined: Fri Oct 22, 2004 8:59 am
Location: Staten Island, NY

Post by narasimha »

One reason could be that you are exceeding the 2 GB limit when writing to the hash file.
Another could be that you are not limiting your warning messages, so the job log may be exceeding its storage limit.
You can search the forum on this topic; I have seen similar postings.
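
A quick way to test the 2 GB theory is from the operating system, since a dynamic (type 30) hashed file is a directory containing DATA.30 and OVER.30 components. A sketch, again assuming a hashed file named MyHashedFile in the project directory (name and path are hypothetical):

Code:

cd /path/to/project
ls -l MyHashedFile/DATA.30 MyHashedFile/OVER.30    (if either component is near 2147483647 bytes, you have hit the 32-bit limit)

As for the log theory: each job's log is itself a hashed file (RT_LOGnn, where nn is the job's internal number), so an unpurged log can hit the same limit. Clearing the log or setting auto-purge in Director rules this out.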
Narasimha Kade

Finding answers is simple, all you need to do is come up with the correct questions.
sivatallapaneni
Participant
Posts: 53
Joined: Wed Nov 05, 2003 8:36 am

Post by sivatallapaneni »

I did have the warning limit set on the job before I ran it. I cleared the log file and tried everything.

About this 2 GB limit: I only have 30,000 records. If I'm hitting the 2 GB limit then I have to resize the hash file, right? RESIZE <HASHFILENAME> is the command, is that correct?
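
For reference, the RESIZE verb runs from the UV shell and takes the file name followed by type, modulus and separation, where * means "keep the current value". A sketch with a hypothetical file name; on platforms that support it, the 64BIT keyword converts the file to 64-bit addressing, which lifts the 2 GB limit:

Code:

>RESIZE MyHashedFile * * *          (rebuilds the file in place with its current parameters)
>RESIZE MyHashedFile * * * 64BIT    (also converts it to 64-bit addressing, where supported)

Note that RESIZE reorganizes a healthy file; it is not a repair tool, so it may not help with a corrupt one.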
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

Use ls -l to see how big the hashed file is. With 30,000 records it's unlikely to be anywhere near 2 GB, unless you've got huge records. The message suggests that there is internal corruption. It's probably easiest to delete the hashed file and re-create it. You could attempt repair with fixtool.
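
A minimal sketch of the delete-and-re-create route from the UV shell (the file name and the DYNAMIC type are assumptions; alternatively, delete the file at the OS level and let the Hashed File stage re-create it with its create-file option):

Code:

>DELETE.FILE MyHashedFile           (removes the data and dictionary portions of the corrupt file)
>CREATE.FILE MyHashedFile DYNAMIC   (re-creates it as an empty dynamic, type 30, file)

The next run of the job then repopulates the file from scratch.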
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.