EXECUTE TWO DML IN ONE TRANSACTION/UNIQUE COMMIT - ORACLE
Hello everyone,
I need to implement a job that takes two datasets as input (one containing records to be deleted, the other records to be inserted) and guarantees that the delete and the insert occur in a single transaction with a single commit.
Is this possible?
Would two input links to a Connector solve the problem?
Must the Connector run in sequential mode?
Is there another way to implement this?
Thank you,
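For reference, the behaviour being asked for, all deletes and all inserts succeed or nothing is committed, can be sketched outside DataStage with any database client that disables auto-commit. A minimal Python illustration using sqlite3 (table and column names are invented for the example; Oracle semantics are analogous):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE target (id INTEGER PRIMARY KEY, val TEXT)")
conn.executemany("INSERT INTO target VALUES (?, ?)", [(1, "old"), (2, "old")])
conn.commit()

to_delete = [(1,), (2,)]              # dataset of records to delete
to_insert = [(3, "new"), (4, "new")]  # dataset of records to insert

try:
    # Both DML statements run inside one transaction; nothing is
    # visible to other sessions until the single commit below.
    conn.executemany("DELETE FROM target WHERE id = ?", to_delete)
    conn.executemany("INSERT INTO target VALUES (?, ?)", to_insert)
    conn.commit()      # the unique commit
except sqlite3.Error:
    conn.rollback()    # any failure undoes both statements together

rows = conn.execute("SELECT id FROM target ORDER BY id").fetchall()
print(rows)  # [(3,), (4,)]
```

The question in the thread is whether the Connector stage can be made to behave like this single try/commit/rollback block.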
Hello
Clarify something: do you want ALL of the deletes and ALL of the inserts to succeed before you commit anything? Or are they individual transactions, matching pairs (one delete then one insert for the same key) such that each pair can be committed once it completes?
-craig
"You can never have too many knives" -- Logan Nine Fingers
Yes, this can be done. Please see the section titled "Local transactions with database connectors" in Guaranteed delivery with InfoSphere DataStage.
Hmmm... rkashyap, have you used it in that manner? The DTS or Distributed Transaction Stage is all about a "two-phase commit" and is generally used for the latter scenario I outlined. That along with the discussion of the End Of Wave Generator or a bookmark line (etc.) to help control restarts from where it left off. For the former, it could be as simple as having both links going to a single target stage (to form the 'local transaction' mentioned) and then ensuring that no intermediate commits are issued. Aborts will rollback everything.
Maybe some magic can be worked so the DTS does the all-or-nothing behaviour desired here; I'd be curious to hear if that's so.
ps. You'll have to be careful with the deletes here: if there is any overlap between the delete keys and the insert keys, you stand a good chance of deleting newly inserted records. Make sure all the deletes complete before the inserts start. Not a problem if you are deleting unique 'old' records and only inserting unique 'new' records.
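That ordering hazard is easy to demonstrate: if a key appears in both datasets and the delete runs after the insert, the freshly inserted row is lost too. A hypothetical sketch (sqlite3 standing in for Oracle, names invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")
conn.execute("INSERT INTO t VALUES (1, 'old')")
conn.commit()

# Key 1 appears in BOTH datasets: delete the old row, insert a new one.
# Wrong order: insert first, then delete -- the new row is wiped out too.
conn.execute("INSERT OR REPLACE INTO t VALUES (1, 'new')")
conn.execute("DELETE FROM t WHERE id = 1")
conn.commit()
wrong = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]  # 0 rows left

# Right order: all deletes complete before any insert starts.
conn.execute("DELETE FROM t WHERE id = 1")  # no-op here, row already gone
conn.execute("INSERT INTO t VALUES (1, 'new')")
conn.commit()
right = conn.execute("SELECT val FROM t WHERE id = 1").fetchone()[0]  # 'new'
print(wrong, right)
```

With deletes guaranteed to finish first, the overlap is harmless; interleaved, it silently destroys new data.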
-craig
"You can never have too many knives" -- Logan Nine Fingers
Hello,
So, in order to apply an
"ALL DELETE + ALL INSERT + SINGLE COMMIT"
policy, does anything else need to be set up in addition to the two input links to the Connector?
Any parameter to configure?
Can the Connector remain in parallel mode?
PS: the two data flows (insert and delete) are produced by a Change Capture stage, so they should not overlap.
Thanks
Normally I would say by setting the 'Transaction record count' property to zero but according to this post that option may not work, or at least doesn't seem to be working for them. It also mentions the $APT variables that can control commits for Oracle.
You may need to experiment and/or worst case involve support.
-craig
"You can never have too many knives" -- Logan Nine Fingers
Craig
We had started our design from the document referenced in the previous post, but as the target table is on a local database we ended up using an Oracle Connector instead of the DTS, so we did what you are suggesting.
This design is widely used for near-real-time (mini-batch) jobs. As the volume of data processed in a mini-batch cycle is small, the Wave Generator stage has been dropped from the job design, but the prototype with it worked fine.
With a Commit count of 0, all transactions roll back correctly on Oracle 11g Enterprise Edition Release 11.2.0.3.0 (64-bit) target tables from DataStage 9.1.2 (Solaris).
What happens if I set the two parameters
$APT_ORAUPSERT_COMMIT_TIME_INTERVAL
$APT_ORAUPSERT_COMMIT_ROW_INTERVAL
to zero?
Does a zero value have the same meaning as the record count in the Connector?
I'm running some tests.
As soon as they are complete, I'll report the conclusions.
Sandro
Hello,
my colleagues and I are carrying out some tests on dummy tables.
We tried to force the job into error, focusing on exceptions generated by Oracle.
The first test was a job that writes to two Oracle tables: one Connector with two input links. One link carries rows to delete from TABLE_1; the second carries rows to insert into TABLE_2, deliberately generating duplicate-key errors.
The test involved about 70,000 rows, with the array size set to 1000.
We did not use the above-mentioned $APT_ORAUPSERT_*INTERVAL variables.
The evidence looks positive: an error causes a rollback on both tables.
In the coming days we will perform additional tests, this time focusing on generating DataStage errors.
One observation, though.
When the Connector is configured with two input links, the transaction property governing the periodicity of commits disappears; only the array size remains.
It seems natural to think that this configuration was developed precisely to handle the need to perform a single commit over datasets that flow to different tables, or through different but correlated DML operations.
Certainly, the relationship between the two $APT environment variables and the Connector property "Record count" is still not clear to me.
Finally, I do not understand why this feature is not clearly documented in the IBM manuals or Redbooks.
As soon as we complete more tests, I will let you know.
Hello
Sandro
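The test described above can be mirrored outside DataStage to check the expected semantics: two tables written inside one transaction, with a deliberate duplicate-key error on the second; on failure, the delete on the first table must roll back too. A sketch with sqlite3 (TABLE_1/TABLE_2 as in the post, data invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table_1 (id INTEGER PRIMARY KEY)")
conn.execute("CREATE TABLE table_2 (id INTEGER PRIMARY KEY)")
conn.execute("INSERT INTO table_1 VALUES (1)")
conn.execute("INSERT INTO table_2 VALUES (10)")
conn.commit()

try:
    conn.execute("DELETE FROM table_1 WHERE id = 1")  # link 1: delete
    conn.execute("INSERT INTO table_2 VALUES (10)")   # link 2: duplicate key
    conn.commit()
except sqlite3.IntegrityError:
    conn.rollback()  # the error must undo the delete on table_1 as well

survivors = conn.execute("SELECT COUNT(*) FROM table_1").fetchone()[0]
print(survivors)  # 1 -- the deleted row came back with the rollback
```

If the two links really share one local transaction, the Connector test should show the same result: the row deleted from TABLE_1 reappears after the duplicate-key failure on TABLE_2.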
Thanks for the update.
Questions for your official support provider, it would seem.
sangi1981 wrote: Certainly, the relationship between the two $APT environment variables and the Connector property "number of rows per commit" is still not clear to me. Finally, I do not understand why this feature is not clearly documented in the IBM manuals or Redbooks.
-craig
"You can never have too many knives" -- Logan Nine Fingers
When you have multiple input links to a Connector stage, it behoves you to check the stage properties as well as the link properties.
In the graphical navigation panel of the Connector stage, click on the stage (rather than on any of the links) to view the stage properties.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.