In the same member merging process described in my last post, I was running a bit of code with a TRY/CATCH that looked for a duplicate primary key condition and, if found, deleted the duplicate row from the source account:

<cfquery name="select" datasource="#variables.datasource.getName()#">
SELECT *
FROM roles
WHERE id = '#id#'
</cfquery>

<cfloop query="select">

<cftry>
<cfquery name="update" datasource="#variables.datasource.getName()#">
UPDATE roles
SET id = '#target#'
WHERE id = '#id#'
AND role = '#role#'
</cfquery>

<cfcatch type="database">
<cfquery name="delete" datasource="#variables.datasource.getName()#">
DELETE FROM roles
WHERE id = '#id#'
AND role = '#role#'
</cfquery>
</cfcatch>
</cftry>

</cfloop>

The problem is that when the UPDATE failed due to a duplicate key, PostgreSQL marked the whole transaction (the one opened by CFTRANSACTION) as aborted, and the rest of the routine failed with it. I received this error in the console:

Error Executing Database Query.
ERROR: current transaction is aborted, commands ignored until end of transaction block
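
To see what was happening outside of ColdFusion, here is roughly the same sequence in plain SQL (the table is the same, but the values and the duplicate-key message are just illustrative). Once any statement inside a transaction block fails, PostgreSQL refuses every later statement until the transaction ends:

BEGIN;

UPDATE roles
SET id = 'target-id'
WHERE id = 'source-id'
AND role = 'admin';
-- ERROR: duplicate key value violates unique constraint ...

SELECT 1;
-- ERROR: current transaction is aborted, commands ignored until end of transaction block

ROLLBACK;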

SAVEPOINT Solution

Via Google, I learned about PostgreSQL SAVEPOINTs, which let you roll your transaction back to an intermediate point and continue as if the failed statement had never happened. They were easy to implement:

<cfquery name="select" datasource="#variables.datasource.getName()#">
SELECT *
FROM roles
WHERE id = '#id#'
</cfquery>

<cfloop query="select">

<cftry>
<cfquery name="update" datasource="#variables.datasource.getName()#">
-- declare my savepoint
SAVEPOINT troles;

UPDATE roles
SET id = '#target#'
WHERE id = '#id#'
AND role = '#role#';
</cfquery>

<cfcatch type="database">
<cfquery name="delete" datasource="#variables.datasource.getName()#">
-- a collision occurred, skip back to the savepoint and then delete
ROLLBACK TO SAVEPOINT troles;

DELETE FROM roles
WHERE id = '#id#'
AND role = '#role#';
</cfquery>
</cfcatch>
</cftry>

</cfloop>

Note that running both statements in a single CFQUERY requires a semicolon between them. If the CFCATCH fires, the ROLLBACK TO SAVEPOINT skips back to where we were just before the UPDATE hit the index error (duplicate keys), and the DELETE then eliminates the duplicate data (which, in my business process here, is OK). Now collisions can be handled without breaking the top-level transaction.
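
Stripped of the CFML tags, the SQL that reaches PostgreSQL in the collision case is roughly this (the id and role values are placeholders):

BEGIN;                          -- opened by the surrounding CFTRANSACTION

SAVEPOINT troles;
UPDATE roles
SET id = 'target-id'
WHERE id = 'source-id'
AND role = 'admin';             -- fails with a duplicate-key error

ROLLBACK TO SAVEPOINT troles;   -- back up to just before the failed UPDATE
DELETE FROM roles
WHERE id = 'source-id'
AND role = 'admin';             -- remove the duplicate instead

COMMIT;                         -- the outer transaction is still intact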

There are other ways to do this, like running a SELECT first to see whether a collision is going to happen. In this process, however, collisions are very rare but datasets can be large, so for performance I decided it was “easier to ask for forgiveness than permission”. :)
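
For completeness, the check-first version would look something like this inside the same loop (just a sketch, reusing the #id#, #target# and #role# variables from above); it costs an extra SELECT for every row, which is what I wanted to avoid:

<cfquery name="collision" datasource="#variables.datasource.getName()#">
    SELECT 1
    FROM roles
    WHERE id = '#target#'
    AND role = '#role#'
</cfquery>

<cfif collision.recordCount GT 0>
    <!--- the target account already has this role, so drop the source copy --->
    <cfquery name="delete" datasource="#variables.datasource.getName()#">
        DELETE FROM roles
        WHERE id = '#id#'
        AND role = '#role#'
    </cfquery>
<cfelse>
    <!--- no collision, move the row over to the target account --->
    <cfquery name="update" datasource="#variables.datasource.getName()#">
        UPDATE roles
        SET id = '#target#'
        WHERE id = '#id#'
        AND role = '#role#'
    </cfquery>
</cfif>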

I don’t know if this is required in databases other than PostgreSQL, but I believe savepoints are available from most vendors. My member merge process runs about 70 queries to join together two accounts and all of their data before it declares the operation complete. This is a neat feature that helps control that lengthy process.
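
For reference, the syntax is close to the SQL standard in the engines I’m aware of, with SQL Server using its own keywords:

-- PostgreSQL, MySQL (InnoDB) and Oracle
SAVEPOINT troles;
ROLLBACK TO SAVEPOINT troles;

-- SQL Server
SAVE TRANSACTION troles;
ROLLBACK TRANSACTION troles;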
