MQ Connector Usage in an always-on Job
I have a requirement to read messages from an MQ queue, store them in DataStage, and use those messages for batch processing in the evening.
1) I have used the MQ connector with the message count and time set to -1 so that the job is always running, and I am trying to write the records into a Dataset.
When I set the message count and time to something other than -1 (say, 20), the DataStage job is able to read all the messages and write them into the file.
But when I use message count -1 and time -1, no data is written into the file. If I stop the job, the data is still not written and the job aborts. I am planning to try a database table with auto commit or a commit frequency of one record. Is there a better solution for writing the data into a file even though the message count and time are set to -1?
2) If we set the message count and time to -1, the DataStage job runs in always-on mode, waiting for messages. Now if we try to stop the job, it stops but ends up in aborted state. Is there a better way to start and stop this always-on job that uses the MQ connector?
commit = 1 with an RDBMS target is a good way to see your messages. It's also likely that they are being written to the file or dataset, but you can't get to them until the file is formally closed. Look in the documentation for the "message type" support. You can specify a long integer message type such as '999899', and the connector will "look for" messages of that type before shutting down cleanly. You basically send it a special "shut down" message whenever you desire.
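The connector implements this internally, but the idea is the classic "poison pill" shutdown pattern: the consumer loops until it sees a message of the designated type, then exits cleanly instead of aborting. A minimal in-memory sketch (the 999899 type code follows the example above; the queue contents and function names are illustrative):

```python
import queue

SHUTDOWN_MSG_TYPE = 999899  # illustrative long-integer "shut down" message type

def consume(q, sink):
    """Drain messages until a shutdown-typed message arrives."""
    while True:
        msg_type, body = q.get()
        if msg_type == SHUTDOWN_MSG_TYPE:
            break          # clean shutdown, no abort
        sink.append(body)  # stand-in for writing to the target

q = queue.Queue()
for text in ("order-1", "order-2"):
    q.put((0, text))                # 0 = ordinary message
q.put((SHUTDOWN_MSG_TYPE, ""))      # the special "shut down" message

rows = []
consume(q, rows)
print(rows)  # ['order-1', 'order-2']
```

Sending the shutdown message to the real queue whenever you want the job to end gives you a controlled stop instead of an abort.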
Ernie
Ernie Ostic
blogit!
<a href="https://dsrealtime.wordpress.com/2015/0 ... ere/">Open IGC is Here!</a>
I tried to run the same job using the dsjob command. Since the MQ connector has the message count and time set to -1, the job behaves like an always-on job.
I have created a script to invoke this job using the dsjob command. When I execute the script, it waits at the dsjob command for the return code.
Is there a way to start this job from the command line instead of logging into the DataStage Designer/Director for a manual run?
...been a long time since I've worked with dsjob. I'll wait for someone else to chime in here, but isn't it possible to do the dsjob invocation with a "no wait" option?
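As far as I recall, `dsjob -run` returns as soon as the job is started and only blocks when the `-wait` (or `-jobstatus`) option is supplied, so dropping that option from the script should be enough. The fire-and-forget behaviour a wrapper script needs can be illustrated with Python's subprocess module; `sleep 5` is a stand-in for the real dsjob invocation:

```python
import subprocess
import time

# Stand-in for: dsjob -run <project> <job>
# (the real command blocks only if -wait / -jobstatus is passed)
cmd = ["sleep", "5"]

start = time.monotonic()
proc = subprocess.Popen(cmd)   # non-blocking: returns once the process is spawned
elapsed = time.monotonic() - start

print(f"launched pid {proc.pid} after {elapsed:.2f}s")  # far less than the 5s runtime
proc.terminate()               # only to tidy up this demo; a real job keeps running
proc.wait()
```

The script regains control immediately and can log the launch, while the always-on job keeps running in the background.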
Ernie
Ernie Ostic
blogit!
<a href="https://dsrealtime.wordpress.com/2015/0 ... ere/">Open IGC is Here!</a>
I have developed a job for this functionality.
Here is what I am trying to achieve in the job:
A front-end system will be writing request messages to an MQ queue in real time. DataStage is supposed to read these messages in real time, write the data to a database table, and send a response message for each request message. The response will be of two types: a success message is sent when the data is loaded successfully into the database, otherwise an error message needs to be sent.
I have done basic configuration testing, where I am able to read the messages from the queue and write the records through the DB2 connector.
Now I am trying to test the real scenario, where I have set the MQ connector "Wait Time" property to a higher value (300) and the message count to a higher value (300). (In the real scenario these will be set to -1 to achieve unlimited time and messages.)
The problem is that the job does not write the data into the table until it closes the queue. It first opens the queue, waits until it hits either the wait-time or message-count limit, then closes the queue, and only writes the messages after that.
The job design I have used is as follows:
MQ Connector ==> XML Input Stage ==> Lookup Stage ==> Transformer ==> DB2 Connector
Note: I want to test this job for writing the data into the table first, and implement the reply-message functionality after that.
These are the properties I have used in the MQ and DB2 connectors to achieve real-time logging:
MQ Connector:
Wait Time = 300
Message Count = 300
Transaction Record Count = 1
Transaction Time Interval = 1
End of Wave = After
Process End of Wave Data = No
DB2 Connector:
Transaction Record Count = 1
Session -> Array Size = 1
Auto Commit Mode = Off/On (tried both)
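For reference, the intent of Transaction Record Count = 1 with Array Size = 1 is a commit after every row, so each message becomes visible in the table as soon as it lands rather than when the queue closes. A minimal sketch of that commit-per-record pattern, with sqlite3 standing in for DB2 (table name illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mq_messages (body TEXT)")

def store(body):
    """Insert one message and commit immediately (Transaction Record Count = 1)."""
    conn.execute("INSERT INTO mq_messages (body) VALUES (?)", (body,))
    conn.commit()  # the row is durable and visible right away

for body in ("msg-1", "msg-2", "msg-3"):
    store(body)

count = conn.execute("SELECT COUNT(*) FROM mq_messages").fetchone()[0]
print(count)  # 3
```

Committing per record trades throughput for latency, which is usually the right trade for real-time logging of individual messages.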
That's odd... end of wave should take care of it. For testing, you might want to try another target stage type, like DB2 API, to see what happens with commit = 1 and also end of wave.
Ernie
Ernie Ostic
blogit!
<a href="https://dsrealtime.wordpress.com/2015/0 ... ere/">Open IGC is Here!</a>
Glad you got it working.
Ernie Ostic
blogit!
<a href="https://dsrealtime.wordpress.com/2015/0 ... ere/">Open IGC is Here!</a>
I am working on sending a reply message for the incoming request.
My exact requirement is as follows:
Read the incoming requests in an always-on job and store each request in a database. If the message is stored successfully in the database, send a success message; if the message is not stored, send a failure message. Do not abort the job in any case. I tried using a reject link on the database stage, but the job is still getting aborted.
Do we have any design patterns in the MQ pack for sending reply messages for this kind of request?
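Whatever the stage-level mechanics turn out to be, the generic shape of "reply with failure instead of aborting" is a per-message try/except around the insert. A minimal sketch, with sqlite3 standing in for the DB2 target and a plain list standing in for the MQ reply queue (table, ids, and status strings are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE requests (id INTEGER PRIMARY KEY, body TEXT NOT NULL)")

def handle(msg_id, body, replies):
    """Store one request; reply success or failure, but never raise."""
    try:
        conn.execute("INSERT INTO requests (id, body) VALUES (?, ?)", (msg_id, body))
        conn.commit()
        replies.append((msg_id, "SUCCESS"))
    except sqlite3.Error:
        conn.rollback()
        replies.append((msg_id, "FAILURE"))  # reject path: reply, don't abort

replies = []
handle(1, "first request", replies)
handle(1, "duplicate id -> constraint error", replies)  # fails, loop keeps running
handle(2, None, replies)                                # NOT NULL violation, same
print(replies)  # [(1, 'SUCCESS'), (1, 'FAILURE'), (2, 'FAILURE')]
```

The key property is that a bad message produces a failure reply and the loop continues, which is exactly what the reject link is meant to give you at the stage level.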
Even if you could control it, I wouldn't want to rely on a process like this. What if the entire DataStage machine were to fail? Then what does it mean that the database did not get written? You can ensure that the original message will not be lost by using the DTS, but doing subtle message processing when the target RDBMS fails to load has too many variables.
On the other hand, I've seen lots of sites use techniques where they capture the "positive", i.e. send a reply or new message after they know the RDBMS has been written. Until that point, the message is still considered "in transit" (which could mean lots of things, from machines and networks failing to RDBMSs failing).
So it may be doable, but I would be sure that you consider all the different variables in such a pattern.
Ernie
Ernie Ostic
blogit!
<a href="https://dsrealtime.wordpress.com/2015/0 ... ere/">Open IGC is Here!</a>
Hi Ernie,
In our scenario, DataStage reads the message, stores it, deletes it from the queue once it is read, and sends a response message (success/failure).
As per my understanding, failure can happen in three cases:
1) The DataStage environment is down
2) The database is down
3) Some error occurs in DataStage
For scenario one (DataStage down), the message will not be lost, as it is not deleted by DataStage unless we set an expiry at the message-queue level.
For scenario two (database down), the message will be lost, the DataStage job also aborts, and we cannot consume the messages that arrive after the abort.
For scenario three (an error), we will still not be able to send a reply message and the message is lost.
Considering these three cases, I am planning to design the job using DTS so that the message is not lost.
If you are not yet using DTS and you can't afford to lose messages, then you probably should avoid a destructive read. Browse the messages at the source and only remove them afterwards, via an audit check on your target, then clean them out by message id.
Another technique you could try is to delete each message only once you know the row was written to the target. It is a special technique that requires Server, but it is documented here:
http://dsrealtime.wordpress.com/2008/05 ... to-target/
Evaluate that technique carefully; it "might" meet your requirements, or come closer to what you are looking for.
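The browse-then-audit-then-delete ordering described above can be sketched in miniature. This illustrates only the sequencing, not any MQ API: a dict stands in for the queue (keyed by message id) and a list for the audited target table:

```python
# queue: message id -> body; a browse leaves the message in place
mq = {"MSG001": "payload-a", "MSG002": "payload-b"}
target = []

# 1. browse (non-destructive read) and load the target
for msg_id, body in list(mq.items()):
    target.append((msg_id, body))

# 2. audit the target, then delete only the ids that verifiably landed
landed = {msg_id for msg_id, _ in target}
for msg_id in list(mq):
    if msg_id in landed:
        del mq[msg_id]  # destructive step happens only after the audit

print(mq)      # {} -> nothing deleted before it was safely stored
print(target)  # [('MSG001', 'payload-a'), ('MSG002', 'payload-b')]
```

If the process dies between steps 1 and 2, the messages are still on the queue and the worst case is a duplicate load, which the audit by message id can detect.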
Ernie
Ernie Ostic
blogit!
<a href="https://dsrealtime.wordpress.com/2015/0 ... ere/">Open IGC is Here!</a>