Got the below error while trying to read an input dataset. It seems the dataset somehow got corrupted.
Internal Error: (blockSizeActual >= v4BlockHeader::size ()): datamgr/partition.C: 474
We also get this error message when trying to view the dataset from Data Set Management:
Unknown error reading data
Everything goes back to normal once the dataset is regenerated.
However, we have been trying to identify the root cause so that this can be permanently fixed. Has anybody experienced this issue before?
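For anyone hitting the same error: before regenerating, it can be worth sanity-checking the files behind the dataset. Below is a minimal sketch; the resource disk paths and the name fragment are placeholders, not details from this thread -- substitute the resource disk locations from your own APT config file and enough of the dataset name to match its segment files.

# Hypothetical sanity check: scan the resource disk directories for the
# data segment files behind a dataset and flag anything zero-length or
# unreadable. Paths and name fragment below are assumptions.
import os, glob

resource_disks = ["/data/ds/resource1", "/data/ds/resource2"]  # assumption
name_fragment = "my_dataset"  # assumption: part of the segment file names

for disk in resource_disks:
    for path in glob.glob(os.path.join(disk, "*" + name_fragment + "*")):
        try:
            size = os.path.getsize(path)
        except OSError as exc:
            print("UNREADABLE:", path, exc)
            continue
        if size == 0:
            print("ZERO-LENGTH:", path)
        else:
            print("ok (%d bytes): %s" % (size, path))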
So it was corrupted somehow, it would seem. Perhaps a support case is in order, in case this is something they are aware of on your platform/version. Otherwise it's hard to say... Did the job creating it finish without error? Did you run out of disk space wherever it is being created? Being on Windows, are you perhaps running antivirus software on the server? It's been known to wreak a bit o' havoc on DataStage jobs.
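If disk space is a suspect, a quick check of free space on the locations DataStage writes to looks something like the sketch below; the paths are placeholders for your own dataset and scratch directories.

# Free-space check on the directories DataStage writes to.
# The paths are assumptions -- point them at your own resource
# disk and scratch locations.
import shutil

paths = ["/data/ds/resource1", "/data/ds/scratch"]  # assumptions

for p in paths:
    usage = shutil.disk_usage(p)
    pct_free = 100.0 * usage.free / usage.total
    print("%s: %.1f GB free (%.0f%%)" % (p, usage.free / 1e9, pct_free))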
-craig
"You can never have too many knives" -- Logan Nine Fingers
"You can never have too many knives" -- Logan Nine Fingers
-
- Premium Member
- Posts: 22
- Joined: Wed Sep 17, 2003 12:21 pm
- Location: Sydney
Chulett,
Thanks for your response.
This dataset was created successfully last week, and we continued using it as the source data without any issues.
However, when we tried to read the file again on Monday, we got this error.
We have already logged this with support and are waiting for them to investigate.
Antivirus is something we haven't looked into yet, as that is usually handled by the IT support team. We will keep an eye on it to see whether any antivirus application is running on the server.
Cheers,
Was curious if the dataset was ever valid (and then was corrupted later) or if it started off life that way. Let us know what you find out about AV on the server. If it is there, it might be as simple as excluding the directories where DataStage components live from being scanned.
Another thought - we've seen situations where an unrelated process uses the same name and corrupts existing datasets. It can be hard to track down, especially if the corruptor doesn't run on a daily basis.
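One way to catch that kind of intermittent corruptor is to snapshot the dataset files on a schedule and diff the snapshots, so you can see exactly when something rewrites them. A minimal sketch, run from cron; the file list is a placeholder for your descriptor plus segment files:

# Snapshot size, mtime, and a checksum of each dataset file; diff
# successive runs to pinpoint when an unrelated process touches them.
import hashlib, os, time

files = ["/data/ds/my_dataset.ds"]  # assumption: descriptor (+ segments)

def md5(path, chunk=1 << 20):
    h = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

for path in files:
    st = os.stat(path)
    print("%s\t%d\t%s\t%s" % (path, st.st_size,
                              time.ctime(st.st_mtime), md5(path)))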
-craig
"You can never have too many knives" -- Logan Nine Fingers
"You can never have too many knives" -- Logan Nine Fingers
You might also want to generate some dummy data and test your theories outside of your regular prod flow (a quick mount-check sketch follows at the end of this post):
Job #1: RowGen up a million rows of data into a dataset.
Job #2: read that dataset back.
Make sure you are using the APT configuration files that were associated with your regular prod runs, and check whether Job #1 is using the same APT file as Job #2.
If you are in a cluster/grid environment, make sure that all resource disk mounts are accessible by all hosts in the cluster/grid.
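For the mount check, a rough sketch like the one below will pull the resource disk and scratchdisk paths out of an APT config file and report whether each is reachable from the host it runs on (so run it on every host in the grid). The config path is a placeholder, and the regex only handles the simple quoted-path form of the config grammar:

# Extract "resource disk" / "resource scratchdisk" paths from an APT
# config file and verify each exists and is writable on this host.
import os, re, sys

config = sys.argv[1] if len(sys.argv) > 1 else "/opt/IBM/config/default.apt"  # assumption

text = open(config).read()
paths = re.findall(r'resource\s+(?:scratch)?disk\s+"([^"]+)"', text)

for p in sorted(set(paths)):
    status = "missing"
    if os.path.isdir(p):
        status = "writable" if os.access(p, os.W_OK) else "read-only"
    print("%-50s %s" % (p, status))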