MQ Connector as a target stage/re-processing on job failure

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

dj
Participant
Posts: 78
Joined: Thu Aug 24, 2006 5:03 am
Location: india

MQ Connector as a target stage/re-processing on job failure

Post by dj »

I am new to the WebSphere MQ connector stage :( . I am trying to understand how the MQ connector stage can avoid sending the same messages to the queue when the job fails after processing a few messages and I then re-run it. Is there a setting I can use in the connector stage, or how can I design my job so that the same messages are not sent again after a job failure/re-run?

Thanks in advance for your advice.
prasson_ibm
Premium Member
Posts: 536
Joined: Thu Oct 11, 2007 1:48 am
Location: Bangalore

Post by prasson_ibm »

Hi,
How are you currently running your job? I mean, how are you passing multiple messages to MQ?

Are you using the message segmentation feature?
dj
Participant
Posts: 78
Joined: Thu Aug 24, 2006 5:03 am
Location: india

Post by dj »

My job design is as shown below:

db2_connector --> XFM --> XML_Output --> MQ_Connector

I am picking the required records from a table in DB2, creating the required XML (sequentially) for each record, and sending them on to the cluster queue. I am running the job with the default configuration file and with default properties on the MQ stage.
prasson_ibm
Premium Member
Posts: 536
Joined: Thu Oct 11, 2007 1:48 am
Location: Bangalore

Post by prasson_ibm »

If you can add an indicator column to the source table (e.g. Indicator='P'), then your work will be very easy (a rough SQL sketch follows the steps below):

1. Initially flag all rows with Indicator='P', then add this condition to the WHERE clause of your DB2 SELECT statement.

2. Select the ROWID from the source table and, in your job, store it in a dataset.

3. Design a post-update job that updates all the ROWIDs stored in the dataset, setting the flag to 'Y'.

4. If your job fails, run the post-update job. When you rerun the main job, you will pick up only the records with Indicator='P'.
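
For illustration, a minimal SQL sketch of those steps. The table name SRC_TABLE and the data columns are hypothetical, and it assumes a DB2 variant where a ROWID can be selected, as in the steps above:

-- Step 1: add the indicator column and flag every row as pending.
ALTER TABLE SRC_TABLE ADD COLUMN INDICATOR CHAR(1) DEFAULT 'P';
UPDATE SRC_TABLE SET INDICATOR = 'P';

-- Main job's DB2 Connector SELECT: pick up only unprocessed rows,
-- carrying the ROWID so it can be landed in a dataset (step 2).
SELECT ROWID, COL1, COL2 FROM SRC_TABLE WHERE INDICATOR = 'P';

-- Steps 3-4: the post-update job marks each ROWID captured in the
-- dataset as sent, so a rerun after a failure skips messages that
-- were already written to the queue.
UPDATE SRC_TABLE SET INDICATOR = 'Y' WHERE ROWID = ?;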
Mike
Premium Member
Posts: 1021
Joined: Sun Mar 03, 2002 6:01 pm
Location: Tampa, FL

Post by Mike »

First thing to ask yourself:
Is there any harm to the downstream consumers if they reprocess a message that they've already processed?

If there is no harm, then don't worry about it.

If there is a potential for harm, then Prasson's solution is practical as long as it's not a matter of life and death. There is some risk in updating a database table and writing to a queue in separate transactions, which is what you'll have when using a post-update job.

If it is a matter of life and death, then you need to update the processed database row and write to the MQ queue in a single transaction. You'll need to use the Distributed Transaction stage (DTS) to make that happen.

Mike
dj
Participant
Posts: 78
Joined: Thu Aug 24, 2006 5:03 am
Location: india

Post by dj »

Thank you Prasson and Mike for your inputs.

I discussed re-sending the messages with the downstream consumers, and we concluded that they would implement the logic on their side (similar to prasson_ibm's suggestion) so that previously sent messages are not re-processed. So there was no change to the DataStage job on my side, and the requirement is addressed.
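
For reference, a minimal sketch of one way the consumer side could implement that dedup in SQL, assuming each message carries a unique identifier; the table and column names here are hypothetical:

-- Track every message ID that has already been handled.
CREATE TABLE PROCESSED_MSGS (
    MSG_ID VARCHAR(48) NOT NULL PRIMARY KEY
);

-- On receipt, record the ID before doing any work; a duplicate-key
-- failure means the message was already processed and can be skipped.
INSERT INTO PROCESSED_MSGS (MSG_ID) VALUES (?);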

Thank you.