Abnormal termination of stage J2S2BsmXfm..Aggr detected

Post questions here relative to DataStage Server Edition for such areas as Server job design, DS Basic, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

Giridharan
Participant
Posts: 3
Joined: Sat Sep 30, 2006 5:57 am

Abnormal termination of stage J2S2BsmXfm..Aggr detected

Post by Giridharan »

Hi

Can anyone help with this error?

Abnormal termination of stage J2S2BsmXfm..Aggr detected

Thank you
Giri
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany
Contact:

Post by ArndW »

Welcome to DSXchange. You will find that you get better responses if you post in the correct category. The error looks like a Server job error message from the Aggregator stage, is that correct?
- Does this error occur right away or after some period?
- If you do a "reset" of the job you will get a log entry "from previous run"; please post that if it contains any additional information.
- Any additional information on the job where the error occurs will be helpful; in car terms your message would read "A red light on my dashboard is on. What is wrong with my car?"
chulett
Charter Member
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Not knowing anything about your job - a guess. What kind of volume are you aggregating? There is a limit to the amount of data that you can aggregate at any given time without pre-sorting the data and when you hit that limit, the stage just kind of falls over dead.

Help us help you when you ask for help - include enough details so we can. :wink:
-craig

"You can never have too many knives" -- Logan Nine Fingers
Giridharan
Participant
Posts: 3
Joined: Sat Sep 30, 2006 5:57 am

Post by Giridharan »

ArndW wrote:Welcome to DSXchange. You will find that you get better responses if you post in the correct category. The error looks like a Server job error message from the Aggregator stage, is that correct?
- Does this error occur right away or after some period?
- If you do a "reset" of the job you will get a log entry "from previous run"; please post that if it contains any additional information.
- Any additional information on the job where the error occurs will be helpful; in car terms your message would read "A red light on my dashboard is on. What is wrong with my car?"


Hi

The source is a sequential file and the target is also a sequential file.

Giri
chulett
Charter Member
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

That doesn't really help. :?

How about some details? What goes on between those two stages? Volume being processed? Anything you can think of that may help...
-craig

"You can never have too many knives" -- Logan Nine Fingers
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany
Contact:

Post by ArndW »

Giri - I think you've managed to break the record for giving the least information about a problem in two posts :shock:

- What are you doing in the Aggregator?
- When is the job aborting: immediately at the start, or after processing some or many records?
- If you use a dummy input file with just 10 records, does the error happen?

This is just a stab at some of the questions whose answers might help solve your problem. The most common error in an Aggregator is running out of space (memory or disk), but we currently have no idea whether that is the case here.
Giridharan
Participant
Posts: 3
Joined: Sat Sep 30, 2006 5:57 am

Post by Giridharan »

chulett wrote:Not knowing anything about your job - a guess. What kind of volume are you aggregating? There is a limit to the amount of data that you can aggregate at any given time without pre-sorting the data and when you hit that limit, the stage just kind of falls over dead.

Help us help you when you ask for help - include enough details so we can. :wink:


I am aggregating an amount column, grouping by account id and server number.

The source file is 4 GB.

The job aborted after 2 hours.

Giri
chulett
Charter Member
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Ok, then you've obviously blown the aggregator out of the water with that volume.

Presort the data before the Aggregator stage in a manner that supports the grouping being done and then assert that in the Aggregator stage. You can handle pretty much any amount that way but the sorting could be an issue.

The Sort stage could help but would be very slow. Do you have access to any kind of 'high speed' sort package? Or can the source file be delivered sorted? Another option would be to bulk load that into your DB of choice and use the database to either do the sorting or even the aggregation for you.

Bottom line is the only way you'll aggregate that much data in a job is to sort it first. And then make sure the stage knows you've done that by setting the 'Sort' fields appropriately.
-craig

"You can never have too many knives" -- Logan Nine Fingers
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany
Contact:

Post by ArndW »

Craig - a good example of the old "one-two" teamwork approach to DSXchange :D
chulett
Charter Member
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

:wink:
-craig

"You can never have too many knives" -- Logan Nine Fingers
ray.wurlod
Participant
Posts: 54595
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

Sort the source file on the grouping columns, perhaps using a filter command, and then specify on the Aggregator input link that the data are so sorted.
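The filter-command approach above can be sketched with a plain Unix sort. The delimiter, field positions (account id as field 1, server number as field 2) and file names below are assumptions for illustration only; adjust them to the actual file layout.

```shell
# Tiny sample file: pipe-delimited rows of account_id|server_number|amount
printf 'acct2|srv1|10\nacct1|srv2|5\nacct1|srv1|7\n' > source_file.txt

# Presort on the grouping columns (account id = field 1, server number = field 2)
# so the Aggregator can stream one group at a time instead of caching them all.
sort -t '|' -k1,1 -k2,2 source_file.txt > source_sorted.txt

cat source_sorted.txt
```

On the Sequential File stage the same command without file names (`sort -t '|' -k1,1 -k2,2`) can be entered as the filter, reading the file on stdin; the Aggregator input link must then declare that the data are sorted on those columns.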
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.