Abnormal termination of stage J2S2BsmXfm..Aggr detected
Posted: Sat Sep 30, 2006 6:03 am
by Giridharan
Hi
Can anyone help with this error?
Abnormal termination of stage J2S2BsmXfm..Aggr detected
Thank you
Giri
Posted: Sat Sep 30, 2006 6:50 am
by ArndW
Welcome to DSXchange. You will find that you get better responses if you post in the correct category. The error looks like a server job error message from the Aggregator stage, is that correct?
- Does this error occur right away or after some period?
- If you do a "reset" of the job, you will get a log entry "from previous run"; please post that if it contains any additional information
- Any additional information on the job where the error occurs will be helpful; in car terms your message would read "A red light on my dashboard is on. What is wrong with my car?"
Posted: Sat Sep 30, 2006 8:28 am
by chulett
Not knowing anything about your job - a guess. What kind of volume are you aggregating? There is a limit to the amount of data that you can aggregate at any given time without pre-sorting the data, and when you hit that limit the stage just kind of falls over dead.
Help us help you when you ask for help - include enough details so we can.
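To picture why pre-sorting matters: an aggregator fed unsorted data must hold one accumulator per distinct group in memory at once, while sorted input lets each group be totalled and flushed as soon as its last row passes. A minimal Python sketch of the streaming idea (the column names and sample rows are made up for illustration; this is not DataStage code):

```python
import csv
import io
from itertools import groupby
from operator import itemgetter

# Hypothetical sample: account_id, server_number, amount --
# already sorted on the two grouping columns.
data = """A1,S1,10.0
A1,S1,5.0
A1,S2,2.5
A2,S1,7.0
"""

rows = csv.reader(io.StringIO(data))
key = itemgetter(0, 1)  # group on (account_id, server_number)

# Because the input is sorted, groupby sees each group exactly once,
# so only one running total lives in memory at a time. Unsorted input
# would force every distinct group's accumulator to be held at once.
totals = [(acct, srv, sum(float(r[2]) for r in grp))
          for (acct, srv), grp in groupby(rows, key=key)]
print(totals)  # [('A1', 'S1', 15.0), ('A1', 'S2', 2.5), ('A2', 'S1', 7.0)]
```

Note that `itertools.groupby`, like the Aggregator stage told its input is sorted, only groups *consecutive* equal keys - which is exactly why the data must be sorted first.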

Posted: Sat Sep 30, 2006 8:28 am
by Giridharan
ArndW wrote:Welcome to DSXchange. You will find that you get better responses if you post in the correct category. The error looks like a server job error message from the Aggregator stage, is that correct?
- Does this error occur right away or after some period?
- If you do a "reset" of the job, you will get a log entry "from previous run"; please post that if it contains any additional information
- Any additional information on the job where the error occurs will be helpful; in car terms your message would read "A red light on my dashboard is on. What is wrong with my car?"
Hi
Source is a sequential file and target is also a sequential file.
Giri
Posted: Sat Sep 30, 2006 8:30 am
by chulett
That doesn't really help.
How about some details? What goes on between those two stages? Volume being processed? Anything you can think of that may help...
Posted: Sat Sep 30, 2006 9:25 am
by ArndW
Giri - I think you've managed to break the record for giving the least amount of information about a problem in two posts.
- What are you doing in the aggregator?
- When is the job aborting: immediately at the start, or after processing some or many records?
- If you use a dummy input file with just 10 records, does the error happen?
This is just a stab at some of the questions whose answers might help solve your problem. The most common error in an aggregator is running out of space (memory or disk), but we currently have no idea whether that is the case here.
Posted: Sat Sep 30, 2006 9:40 am
by Giridharan
chulett wrote:Not knowing anything about your job - a guess. What kind of volume are you aggregating? There is a limit to the amount of data that you can aggregate at any given time without pre-sorting the data, and when you hit that limit the stage just kind of falls over dead.
Help us help you when you ask for help - include enough details so we can.

I am aggregating amount, grouping by account id and server number.
The source file is 4 GB.
The job aborted after 2 hours.
Giri
Posted: Sat Sep 30, 2006 9:57 am
by chulett
Ok, then you've obviously blown the aggregator out of the water with that volume.
Presort the data before the Aggregator stage in a manner that supports the grouping being done and then assert that in the Aggregator stage. You can handle pretty much any amount that way but the sorting could be an issue.
The Sort stage could help but would be very slow. Do you have access to any kind of 'high speed' sort package? Or can the source file be delivered sorted? Another option would be to bulk load that into your DB of choice and use the database to either do the sorting or even the aggregation for you.
Bottom line is the only way you'll aggregate that much data in a job is to sort it first. And then make sure the stage knows you've done that by setting the 'Sort' fields appropriately.
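When the file is too big to sort in memory, the "sort it first" step - whether done by a high-speed sort package or the Unix `sort` utility - amounts to a classic external merge sort: write sorted runs to disk, then stream-merge them. A hedged Python sketch of that idea, purely illustrative (the run size is tiny so the mechanism is visible; a real run would be hundreds of thousands of lines):

```python
import heapq
import os
import tempfile

def _write_run(sorted_lines):
    # Persist one sorted run to a temporary file and return its path.
    f = tempfile.NamedTemporaryFile("w", delete=False, suffix=".run")
    f.writelines(sorted_lines)
    f.close()
    return f.name

def external_sort(lines, run_size=2):
    """Sort an iterable of newline-terminated text lines that may not
    fit in memory: sort fixed-size runs, spill them to temp files,
    then k-way merge the runs."""
    run_files, run = [], []
    for line in lines:
        run.append(line)
        if len(run) >= run_size:
            run_files.append(_write_run(sorted(run)))
            run = []
    if run:
        run_files.append(_write_run(sorted(run)))
    # heapq.merge streams the runs, holding only one line per run
    # in memory at any moment.
    files = [open(p) for p in run_files]
    try:
        yield from heapq.merge(*files)
    finally:
        for f in files:
            f.close()
            os.unlink(f.name)

demo = ["b,2\n", "a,1\n", "d,4\n", "c,3\n", "a,9\n"]
print(list(external_sort(demo)))
```

Once the data comes out of a step like this in grouping-column order, the Aggregator (with its Sort fields set) only ever needs one group's running totals in memory.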
Posted: Sat Sep 30, 2006 11:00 am
by ArndW
Craig - a good example of the old "one-two" teamwork approach to DSXchange :D
Posted: Sat Sep 30, 2006 11:41 am
by chulett
Posted: Sat Sep 30, 2006 3:51 pm
by ray.wurlod
Sort the source file on the grouping columns, perhaps using a filter command, and then specify on the Aggregator input link that the data are so sorted.