<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Orange is my favorite color &#187; PostgreSQL</title>
	<atom:link href="http://www.ghidinelli.com/c/webinternet/postgresql/feed" rel="self" type="application/rss+xml" />
	<link>http://www.ghidinelli.com</link>
	<description></description>
	<lastBuildDate>Fri, 27 Jan 2017 17:45:50 +0000</lastBuildDate>
	<generator>http://wordpress.org/?v=2.9.2</generator>
	<language>en</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
			<item>
		<title>Time in SQL</title>
		<link>http://www.ghidinelli.com/2012/03/18/time-in-sql</link>
		<comments>http://www.ghidinelli.com/2012/03/18/time-in-sql#comments</comments>
		<pubDate>Sun, 18 Mar 2012 14:43:26 +0000</pubDate>
		<dc:creator>brian</dc:creator>
				<category><![CDATA[PostgreSQL]]></category>
		<category><![CDATA[Web/Internet]]></category>

		<guid isPermaLink="false">http://www.ghidinelli.com/?p=1449</guid>
		<description><![CDATA[Date and time fields can cause chaos in the database but there are modern ways to model and control temporal data]]></description>
			<content:encoded><![CDATA[<p>If you&#8217;ve built an application, it probably included some time/date fields.  If so, let this get you thinking (&#038;&#038; means &#8220;overlaps&#8221;):</p>
<pre><code>CREATE TABLE b (p PERIOD);
ALTER TABLE b
   ADD EXCLUDE USING gist (p WITH &amp;&amp;);
INSERT INTO b
   VALUES('[2009-01-05, 2009-01-10)');
INSERT INTO b
   VALUES('[2009-01-07, 2009-01-12)'); -- ERROR</code></pre>
<p>PostgreSQL 9.0 added exclusion constraints, which generalize UNIQUE constraints beyond simple equality.  Combined with the temporal &#8220;period&#8221; datatype for Postgres (an external extension at the time), you can create a constraint that prevents any two periods from overlapping.  There&#8217;s simply no way to enforce that with an ordinary constraint, so most of the time we fall back on application logic.</p>
<p>Let&#8217;s take, for example, an application that schedules professors to classrooms.  No two professors can teach in the same classroom at the same time: there must not be any overlap.  Here are a few lines of SQL that would prevent invalid data:</p>
<pre><code>CREATE TABLE reservation(room TEXT
        , professor TEXT
        , during PERIOD);

-- enforce the constraint that the
-- room is not double-booked
ALTER TABLE reservation
    ADD EXCLUDE USING gist
    (room WITH =, during WITH &amp;&amp;);

-- enforce the constraint that the
-- professor is not double-booked
ALTER TABLE reservation
    ADD EXCLUDE USING gist
   (professor WITH =, during WITH &amp;&amp;);</code></pre>
<p>The reservation table has two exclusion constraints on it which prevent overlap for any single room and any single professor.  The net result is that no room and no professor can ever be double-booked, no matter how long a given classroom is reserved (some classes might be 60 minutes and others might go all day).</p>
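<p>Worth noting: the <tt>period</tt> type comes from an external temporal extension, while PostgreSQL 9.2 and later ship built-in range types such as <tt>tsrange</tt> that fill the same role.  Here&#8217;s a minimal sketch of the same constraint on the built-in type (table name and sample values are illustrative; the <tt>btree_gist</tt> extension lets the index mix <tt>=</tt> and <tt>&amp;&amp;</tt>):</p>
<pre><code>CREATE EXTENSION btree_gist;

CREATE TABLE reservation2 (room TEXT
        , professor TEXT
        , during TSRANGE
        , EXCLUDE USING gist (room WITH =, during WITH &amp;&amp;));

INSERT INTO reservation2
    VALUES ('101', 'smith', '[2012-03-18 10:00, 2012-03-18 11:00)');
INSERT INTO reservation2
    VALUES ('101', 'jones', '[2012-03-18 10:30, 2012-03-18 11:30)'); -- ERROR</code></pre>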
<p>Combined with Richard Snodgrass&#8217;s book <a href="http://www.cs.arizona.edu/people/rts/tdbbook.pdf">Developing Time-Oriented Database Applications in SQL</a> (a free PDF), this makes for some interesting brainstorming.  Also read Jeff Davis&#8217;s post <a href="http://thoughts.j-davis.com/2010/09/25/exclusion-constraints-are-generalized-sql-unique/">Exclusion Constraints are generalized SQL UNIQUE</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.ghidinelli.com/2012/03/18/time-in-sql/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Announcing the Fred Jehle Spanish Verb Database</title>
		<link>http://www.ghidinelli.com/2011/12/06/announcing-fred-jehle-spanish-verb-database</link>
		<comments>http://www.ghidinelli.com/2011/12/06/announcing-fred-jehle-spanish-verb-database#comments</comments>
		<pubDate>Wed, 07 Dec 2011 00:29:24 +0000</pubDate>
		<dc:creator>brian</dc:creator>
				<category><![CDATA[PostgreSQL]]></category>
		<category><![CDATA[Research/HOWTO]]></category>
		<category><![CDATA[Web/Internet]]></category>
		<category><![CDATA[database]]></category>
		<category><![CDATA[espanol]]></category>
		<category><![CDATA[spanish]]></category>

		<guid isPermaLink="false">http://www.ghidinelli.com/?p=1397</guid>
		<description><![CDATA[Download a free database of 600 conjugated Spanish verbs under a Creative Commons license courtesy of Professor Fred Jehle]]></description>
			<content:encoded><![CDATA[<p>On a recent trip to Mexico I had the chance to use my Spanish (oh I&#8217;m <em>quite</em> the Renaissance man) and found that what I lose first is my ability to quickly say &#8220;he did X&#8221; or &#8220;they do Y&#8221;.  Worse is my need to reverse-engineer what I’m hearing back to an infinitive.  If someone says &#8220;Ellos me hablaban&#8221;, I decipher it like: &#8220;hablaban, ok, that is the third person plural for hablaba, which is hablar, which means to speak, so it’s they were speaking.&#8221;  That probably explains the glassy look in my eyes as I listen to native speakers.</p>
<p>Once home I searched for a database of conjugated verbs to make flash cards that, rather than working with infinitives, would read simple actions like &#8220;They walk&#8221;, &#8220;He used to sing&#8221; or &#8220;We would have spoken&#8221; and the reverse would have the proper Spanish conjugation.  Despite my uber Google skills, I was unable to find any non-commercial products.  However, I did come across one great resource that had the data I needed.  </p>
<p>Fred Jehle, formerly a professor at Indiana University-Purdue University Fort Wayne, <a href="http://users.ipfw.edu/jehle/VERBLIST.HTM">published approximately 600 verbs</a>, fully conjugated in all moods and tenses, on his website in 1998.  The resource, along with a variety of notes on other aspects of the language, helped students improve their verb use.  I contacted Mr. Jehle to inquire whether a database of his verbs was behind the scenes, but unfortunately only the static web pages exist.</p>
<p>Out of curiosity, I opened up a couple of pages to see what the source HTML looked like and, luckily, it was pretty uniform.  I broke out my editor and wrote a script to read in each page, parse out the various conjugations and dump them into a (PostgreSQL 9.x) database.  The roughly 600 verbs converted to 11,467 combinations of moods and tenses.</p>
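<p>For a feel of the shape of the data, here&#8217;s a rough sketch of the kind of table the script fills (column names here are illustrative, not the actual schema):</p>
<pre><code>CREATE TABLE verbs (infinitive TEXT NOT NULL  -- 'hablar'
        , mood TEXT NOT NULL                  -- 'Indicativo'
        , tense TEXT NOT NULL                 -- 'Imperfecto'
        , verb_english TEXT                   -- 'to speak'
        , form_3p TEXT);                      -- 'hablaban'</code></pre>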
<p>In coordination with copyright holder Professor Jehle, this data is available free of charge via a <a href="http://creativecommons.org/licenses/by-nc-sa/3.0/">Creative Commons license</a> for anyone to use for non-commercial purposes so long as you provide attribution.  If you alter, transform or build upon this work then you <em>may</em> distribute the resulting work only under the same license.  </p>
<div style="border: 1px dashed black; padding: 5px; background-color: #eee; text-align: center;"><a href="https://www.ghidinelli.com/free-spanish-conjugated-verb-database">Click here to download the database</a></div>
<p>My thanks go to Mr. Jehle for quickly answering my questions and allowing me to publish the data for other would-be Spanish students.  I recommend that you also check out his website for additional Spanish content at <a href="http://users.ipfw.edu/jehle/VERBLIST.HTM">http://users.ipfw.edu/jehle/VERBLIST.HTM</a>.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.ghidinelli.com/2011/12/06/announcing-fred-jehle-spanish-verb-database/feed</wfw:commentRss>
		<slash:comments>3</slash:comments>
		</item>
		<item>
		<title>Push the last X backups to Amazon S3</title>
		<link>http://www.ghidinelli.com/2011/02/15/push-last-x-backups-amazon-s3</link>
		<comments>http://www.ghidinelli.com/2011/02/15/push-last-x-backups-amazon-s3#comments</comments>
		<pubDate>Wed, 16 Feb 2011 00:54:19 +0000</pubDate>
		<dc:creator>brian</dc:creator>
				<category><![CDATA[Linux]]></category>
		<category><![CDATA[PostgreSQL]]></category>
		<category><![CDATA[Web/Internet]]></category>

		<guid isPermaLink="false">http://www.ghidinelli.com/?p=1247</guid>
		<description><![CDATA[Store only a subset of files on Amazon S3 using s3cmd and a little date logic.  Lets us put the last 5 database backups (or something similar) in a safe place without saving every dump forever and costing us $$$.]]></description>
			<content:encoded><![CDATA[<p>Not going to get fancy with this one: it&#8217;s a bash shell script that will take the last X backups and push them to Amazon S3 for disaster recovery.  I wrote a couple of versions until I settled on this simple one.  It requires you to have <a href="http://s3tools.org/s3tools">s3cmd</a>, which, if you&#8217;re using Linux, is pretty much the only way to interface with S3 and stay sane.  Here are docs on how to <a href="">install it with yum</a> if you are on Red Hat/CentOS.</p>
<p>Now, to the script:</p>
<pre><code>#!/bin/bash

# init vars
BACKUP_ROOT=/dblogs/backups
S3CMD=/usr/bin/s3cmd
S3DIR=s3://your-bucket-name/your-folder-name

# take the last X backups and make sure they are on S3 (but not older ones)
BACKUP_COUNT=5
BACKUP_FILELIST=/tmp/.pgbackup_s3cmd_filelist

# first list out the files we want to sync up
ls -1 $BACKUP_ROOT | tail -$BACKUP_COUNT &gt; $BACKUP_FILELIST

# now sync them; requires trailing slash to sync directories
$S3CMD sync --progress --delete-removed --acl-private --exclude '*' --include-from $BACKUP_FILELIST $BACKUP_ROOT $S3DIR/

# clean up
rm -f $BACKUP_FILELIST
</code></pre>
<p>Set up s3cmd after installing it by running <tt>s3cmd --configure</tt> with your Amazon credentials handy.  I also use S3Fox, a Firefox add-on, as another way of quickly accessing S3 with a GUI.</p>
<p>Run nightly via cron, this script maintains only the last 5 backups on S3, deleting what would be the 6th backup each night.  You can adjust the number to keep with the BACKUP_COUNT variable.  I&#8217;m personally using this to back up a <a href="http://www.postgresql.org">PostgreSQL server</a> but you can adjust it to back up any type of directory with files in it.</p>
<p><strong>Update 2/17</strong> &#8211; realized the default behavior is not to delete files on the remote host that are not included locally.  I updated the above to include the <tt>--delete-removed</tt> argument and now it deletes the oldest file as expected.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.ghidinelli.com/2011/02/15/push-last-x-backups-amazon-s3/feed</wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
		<item>
		<title>New PostgreSQL Performance Book</title>
		<link>http://www.ghidinelli.com/2010/10/22/new-postgresql-performance-book</link>
		<comments>http://www.ghidinelli.com/2010/10/22/new-postgresql-performance-book#comments</comments>
		<pubDate>Fri, 22 Oct 2010 16:53:18 +0000</pubDate>
		<dc:creator>brian</dc:creator>
				<category><![CDATA[PostgreSQL]]></category>
		<category><![CDATA[Web/Internet]]></category>
		<category><![CDATA[database]]></category>
		<category><![CDATA[postgres]]></category>

		<guid isPermaLink="false">http://www.ghidinelli.com/?p=1149</guid>
		<description><![CDATA[New PostgreSQL performance book by Gregory Smith released - covers versions 8.1 to the brand new 9.0.]]></description>
			<content:encoded><![CDATA[<p><img src="https://www.packtpub.com/sites/default/files/imagecache/productview/0301OS_MockupCover.jpg" class="alignright" />Finally!  A book on PostgreSQL performance for the common man AKA psuedo-DBA!  Gregory Smith just released <a href="https://www.packtpub.com/toc/postgresql-90-high-performance-table-contents">PostgreSQL 9.0 High Performance</a> which covers all the way back to 8.1.  We&#8217;re on 8.3.x and preparing to upgrade to 9.0 so this comes at a good time.  The table of contents <a href="https://www.packtpub.com/toc/postgresql-90-high-performance-table-contents">looks really complete</a>.  I just got the eBook since I couldn&#8217;t wait although I probably won&#8217;t be able to read much until my house move is done but I&#8217;m looking forward to finding some speed in our database server.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.ghidinelli.com/2010/10/22/new-postgresql-performance-book/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>PostgreSQL 9.0 Release</title>
		<link>http://www.ghidinelli.com/2010/09/15/postgresql-9-0-release</link>
		<comments>http://www.ghidinelli.com/2010/09/15/postgresql-9-0-release#comments</comments>
		<pubDate>Wed, 15 Sep 2010 16:01:34 +0000</pubDate>
		<dc:creator>brian</dc:creator>
				<category><![CDATA[PostgreSQL]]></category>
		<category><![CDATA[Web/Internet]]></category>
		<category><![CDATA[database]]></category>
		<category><![CDATA[open source]]></category>
		<category><![CDATA[postgres]]></category>

		<guid isPermaLink="false">http://www.ghidinelli.com/?p=1123</guid>
		<description><![CDATA[Postgres 9.0 is here (next week) and the elephant has brought some new toys.]]></description>
			<content:encoded><![CDATA[<p><img src="https://www.ghidinelli.com/wp-content/uploads/2010/09/logo_postgres-150x150.gif" alt="PostgreSQL Logo" title="PostgreSQL Logo" width="150" height="150" class="alignright" />It&#8217;s not <em>quite</em> here yet, but PostgreSQL 9.0 is about to be released Thursday, September 23rd.  Check out the <a href="http://wiki.postgresql.org/wiki/Illustrated_9_0">new features and updates</a>.  If you&#8217;re in the San Francisco area, there is even <a href="http://postgresparty.eventbrite.com/">a party</a>.</p>
<p>Big changes include truly integrated hot standby and streaming replication (without using third-party tools) and more than 200 total improvements.  We&#8217;ve been kind of lagging behind on 8.3 but it looks like it&#8217;s time to schedule an upgrade window&#8230; </p>
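<p>Once a standby is up, checking on replication is just a query away.  Here&#8217;s a quick sketch using the monitoring functions available in 9.0 (run the first on the primary and the rest on the standby):</p>
<pre><code>-- on the primary: current WAL write position
SELECT pg_current_xlog_location();

-- on the standby: confirm recovery mode and watch WAL progress
SELECT pg_is_in_recovery();
SELECT pg_last_xlog_receive_location(), pg_last_xlog_replay_location();</code></pre>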
]]></content:encoded>
			<wfw:commentRss>http://www.ghidinelli.com/2010/09/15/postgresql-9-0-release/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Verifying HP RAID array status</title>
		<link>http://www.ghidinelli.com/2010/02/22/check-hp-raid-array-status</link>
		<comments>http://www.ghidinelli.com/2010/02/22/check-hp-raid-array-status#comments</comments>
		<pubDate>Mon, 22 Feb 2010 18:45:50 +0000</pubDate>
		<dc:creator>brian</dc:creator>
				<category><![CDATA[Linux]]></category>
		<category><![CDATA[PostgreSQL]]></category>
		<category><![CDATA[Web/Internet]]></category>

		<guid isPermaLink="false">http://www.ghidinelli.com/?p=1027</guid>
		<description><![CDATA[Make sure you get an appropriate alert when a drive goes dead in your HP/Compaq RAID array]]></description>
			<content:encoded><![CDATA[<p>Just a quicky &#8211; turns out our logwatch was not giving us enough of an alert when a drive failed in our RAID array.  Obviously you want to replace a dead drive as quickly as possible to reduce the likelihood of a second or third drive failing and potentially taking your data with it.</p>
<p>For Linux, HP has a tool available called <a href="http://h20000.www2.hp.com/bizsupport/TechSupport/SoftwareDescription.jsp?lang=en&#038;cc=us&#038;swItem=MTX-66b08e49c28f4bd49f4641ed80&#038;jumpid=reg_R1002_USEN">hpacucli</a> (HP Array Configuration Utility for Linux) for interrogating HP/Compaq array controllers (SmartArray 5i, 6i, whatever) from the command line.  Before you can install the RPM (on CentOS/Redhat), you will need to first install a compatibility library: </p>
<pre><code>yum install compat-libstdc++-296
rpm -Uvh hpacucli-8.0-14.noarch.rpm </code></pre>
<p>Then I put this snippet into a new file <tt>/etc/cron.hourly/raidstatus</tt>:</p>
<pre><code>#!/bin/sh
/opt/compaq/hpacucli/bld/hpacucli ctrl all show config | egrep -i "(fail|error|offline|rebuild|ignoring|degraded|skipping|nok)"</code></pre>
<p>The command <tt>/opt/compaq/hpacucli/bld/hpacucli ctrl all show config</tt> normally generates something like this (from our development database server):</p>
<pre><code>Smart Array XXXXXXX in Slot 0      ()

   array A (Parallel SCSI, Unused Space: 0 MB)

      logicaldrive 1 (33.9 GB, RAID 1+0, OK)

      physicaldrive 2:0   (port 2:id 0 , Parallel SCSI, 36.4 GB, OK)
      physicaldrive 2:1   (port 2:id 1 , Parallel SCSI, 36.4 GB, OK)

   array B (Parallel SCSI, Unused Space: 0 MB)

      logicaldrive 2 (67.8 GB, RAID 1+0, OK)

      physicaldrive 2:2   (port 2:id 2 , Parallel SCSI, 36.4 GB, OK)
      physicaldrive 2:3   (port 2:id 3 , Parallel SCSI, 36.4 GB, OK)
      physicaldrive 2:4   (port 2:id 4 , Parallel SCSI, 36.4 GB, OK)
      physicaldrive 2:5   (port 2:id 5 , Parallel SCSI, 36.4 GB, OK)</code></pre>
<p>I believe you can reduce the grep to just &#8220;(fail|nok)&#8221; but I&#8217;m taking the conservative approach here.  Change the permissions to 0700 and, if you have SELinux running, make sure the context is set properly.</p>
<p>If your array and controller are in fine shape, then this command will output nothing.  If you have a dead drive, it will generate content which will cause cron to mail the root user about it.  Bingo &#8211; time to go to the colo!</p>
<p>I have seen other people use &#8220;ctrl all show status&#8221; which generates:</p>
<pre><code>Smart Array XXXXXXX in Slot 0
   Controller Status: OK
   Cache Status: OK
   Battery Status: OK</code></pre>
<p>I prefer to query the config, which looks at individual physical drives in addition to the status of the array.  I have seen cases (just last week) where one dead drive in the array still lists the array status as OK (because, technically, it <em>is</em> OK; it&#8217;s just not optimal and may be one failure away from major disaster!).</p>
<p><strong>Update 5/25/2011</strong> Had an error today, needed this code and corrected a few things.  I fixed a typo for a missing quote in the raidstatus script above and added a link to the actual utility.  For reference, the output from an error looks like this:</p>
<pre><code>physicaldrive 2:2   (port 2:id 2 , Parallel SCSI, ??? GB, Failed)
physicaldrive 2:5   (port 2:id 5 , Parallel SCSI, 72.8 GB, Rebuilding, active spare)</code></pre>
]]></content:encoded>
			<wfw:commentRss>http://www.ghidinelli.com/2010/02/22/check-hp-raid-array-status/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Run WordPress on PostgreSQL</title>
		<link>http://www.ghidinelli.com/2009/10/05/running-wordpress-on-postgresql</link>
		<comments>http://www.ghidinelli.com/2009/10/05/running-wordpress-on-postgresql#comments</comments>
		<pubDate>Mon, 05 Oct 2009 15:12:04 +0000</pubDate>
		<dc:creator>brian</dc:creator>
				<category><![CDATA[PostgreSQL]]></category>
		<category><![CDATA[Web/Internet]]></category>
		<category><![CDATA[postgres]]></category>
		<category><![CDATA[wordpress]]></category>

		<guid isPermaLink="false">http://www.ghidinelli.com/?p=972</guid>
		<description><![CDATA[WordPress inches closer to PostgreSQL support with a third-party plugin that maps MySQL calls to Postgres.]]></description>
			<content:encoded><![CDATA[<p>Very cool &#8211; since WordPress themselves <a href="http://codex.wordpress.org/Using_Alternative_Databases">won&#8217;t take the plunge</a>, an enterprising user has written a plugin, <a href="http://wordpress.org/extend/plugins/postgresql-for-wordpress/">PG4WP</a>, which intercepts WordPress&#8217;s MySQL calls and rewrites them for Postgres instead.  I wonder how well this will work for third-party plugins?  </p>
]]></content:encoded>
			<wfw:commentRss>http://www.ghidinelli.com/2009/10/05/running-wordpress-on-postgresql/feed</wfw:commentRss>
		<slash:comments>2</slash:comments>
		</item>
		<item>
		<title>Performance tip for batch SQL inserts</title>
		<link>http://www.ghidinelli.com/2009/07/24/batch-sql-insert-performance</link>
		<comments>http://www.ghidinelli.com/2009/07/24/batch-sql-insert-performance#comments</comments>
		<pubDate>Fri, 24 Jul 2009 23:24:10 +0000</pubDate>
		<dc:creator>brian</dc:creator>
				<category><![CDATA[ColdFusion]]></category>
		<category><![CDATA[PostgreSQL]]></category>
		<category><![CDATA[Web/Internet]]></category>
		<category><![CDATA[performance]]></category>
		<category><![CDATA[postgres]]></category>
		<category><![CDATA[sql]]></category>

		<guid isPermaLink="false">http://www.ghidinelli.com/?p=873</guid>
		<description><![CDATA[A SQL tip for inserting large numbers of database rows at a time.  Improve performance by grouping inserts into fewer commits.]]></description>
			<content:encoded><![CDATA[<p>Just a quicky today.  I&#8217;m working on a batch CSV->INSERT script to process monthly membership dumps that come from another organization.  We&#8217;re not talking billions of records but I wanted to make it performant.  Based on this post about <a href="http://groups.google.com/group/pgsql.interfaces.jdbc/browse_thread/thread/aed7951d70c60549">group batch inserts</a>, I came up with a bit of clever code for grouping multiple inserts using the syntax:</p>
<pre><code>INSERT INTO foo (column1, column2)
VALUES (val1, val2), (val3, val4), (val5, val6);</code></pre>
<p>This wraps more than one row into a single commit and speeds things up.  I&#8217;ve seen claims of about 3x performance on Postgres and as much as <a href="http://www.jroller.com/mmatthews/entry/speeding_up_batch_inserts_for">10x on MySQL</a>.  The good news is that wrapping inserts into this format is simple.  Here&#8217;s some code to get you started for looping over an array of arrays (converted by Ben Nadel&#8217;s <a href="http://www.bennadel.com/blog/991-CSVToArray-ColdFusion-UDF-For-Parsing-CSV-Data-Files.htm">very quick CSVToArray</a>):</p>
<pre><code>&lt;!--- arrData is the array of arrays from CSVToArray ---&gt;
&lt;cfset len = arrayLen(arrData) /&gt;

&lt;!--- how many at a time to commit ---&gt;
&lt;cfset incr = 1000 /&gt;

&lt;cftransaction&gt;
&lt;cftry&gt;

&lt;!--- ii = 1, 1001, 2001, 3001, 4001, ... ---&gt;
&lt;cfloop from="1" to="#len#" step="#incr#" index="ii"&gt;

  &lt;cfquery name="insert" datasource="#dsn#"&gt;
    INSERT INTO someTable (field1
			,field2
			,field3)
    VALUES
	&lt;!--- 1, 2, 3, ... 1000;  1001, 1002, 1003, ... 2000; ... ---&gt;
	&lt;cfloop from="#ii#" to="#min(len, ii+incr-1)#" index="jj"&gt;

	  &lt;!--- prevent trailing comma error ---&gt;
	  &lt;cfif ii NEQ jj&gt;, &lt;/cfif&gt;
	  (&lt;cfqueryparam value="#arrData[jj][1]#" cfsqltype="cf_sql_char" /&gt;
	  ,&lt;cfqueryparam value="#arrData[jj][2]#" cfsqltype="cf_sql_varchar" /&gt;
	  ,&lt;cfqueryparam value="#arrData[jj][3]#" cfsqltype="cf_sql_date" /&gt;)
	&lt;/cfloop&gt;
  &lt;/cfquery&gt;

&lt;/cfloop&gt;

&lt;cfcatch type="any"&gt;
	&lt;cftransaction action="rollback" /&gt;
	&lt;cfrethrow /&gt;
&lt;/cfcatch&gt;

&lt;/cftry&gt;

&lt;/cftransaction&gt;</code></pre>
<p>This groups the INSERTs into batches of 1,000.  In my code, I wanted the records to either all commit or all fail, so I wrapped it in a transaction; you could surely remove that and the corresponding try/catch for your own needs.</p>
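<p>If you&#8217;re not on ColdFusion, the plain SQL equivalent is the same idea: multi-row VALUES lists wrapped in one transaction.  A trivial sketch (table and values made up):</p>
<pre><code>BEGIN;
INSERT INTO foo (column1, column2)
  VALUES (1, 'a'), (2, 'b'), (3, 'c');    -- rows 1-1000 in practice
INSERT INTO foo (column1, column2)
  VALUES (4, 'd'), (5, 'e'), (6, 'f');    -- rows 1001-2000
COMMIT;</code></pre>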
<p>Grouping the inserts was taking about 90ms per 100 records over a total of 4,720 test records.  The one-insert-per-query approach was taking around 270ms per 100 records.  I suspect that, run directly against the database as opposed to through ColdFusion and JDBC, there would be less absolute gain, but the relative result is about a 3x improvement in this small test case on my development laptop.  The database is PostgreSQL 8.3 but this technique also works with MySQL.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.ghidinelli.com/2009/07/24/batch-sql-insert-performance/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Stupid easy database replication for Postgres and MySQL</title>
		<link>http://www.ghidinelli.com/2009/06/16/stupid-easy-database-replication-postgres-mysql</link>
		<comments>http://www.ghidinelli.com/2009/06/16/stupid-easy-database-replication-postgres-mysql#comments</comments>
		<pubDate>Tue, 16 Jun 2009 18:42:08 +0000</pubDate>
		<dc:creator>brian</dc:creator>
				<category><![CDATA[PostgreSQL]]></category>
		<category><![CDATA[Web/Internet]]></category>
		<category><![CDATA[database]]></category>
		<category><![CDATA[postgres]]></category>
		<category><![CDATA[replication]]></category>

		<guid isPermaLink="false">http://www.ghidinelli.com/?p=765</guid>
		<description><![CDATA[Working replication between two Postgres or MySQL database servers in under 5 minutes with few limitations - it's true!]]></description>
			<content:encoded><![CDATA[<p>I shy away from database replication because it tends to require a lot of monkeying around with complex systems to get it working right.  Postgres has a simple log shipping feature but you can&#8217;t query the warm standby, while other replication tools give you all kinds of options for synchronous vs. asynchronous, multi-master, master-slave, etc.  </p>
<p>Enter <a href="http://www.rubyrep.org/">RubyRep</a>, a Ruby-based replicator that already supports both Postgres and MySQL and has plans to also support Microsoft SQL Server, Oracle and IBM.  The author has published a screencast where he enables and demonstrates <a href="http://www.rubyrep.org/screencast.html">working replication in less than 5 minutes</a>.</p>
<p>It handles composite primary keys, single primary keys, sequences/uniqueids and even tables without primary keys with a touch of configuration.  It works using triggers which it installs and maintains automatically.  There are some limitations with MySQL since it only supports one trigger per table which could collide with an existing application trigger.  We&#8217;ll have to see what performance is like but it&#8217;s pretty compelling so far.  </p>
<p><a href="http://www.postgresql.org/about/news.1097">Postgres 8.4 RC1 is out too</a> &#8211; might be a good opportunity to roll out a test environment and start playing.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.ghidinelli.com/2009/06/16/stupid-easy-database-replication-postgres-mysql/feed</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>cf.Objective() thoughts and presentations for download</title>
		<link>http://www.ghidinelli.com/2009/05/21/cfobjective-thoughts-and-presentations-for-download</link>
		<comments>http://www.ghidinelli.com/2009/05/21/cfobjective-thoughts-and-presentations-for-download#comments</comments>
		<pubDate>Thu, 21 May 2009 18:16:34 +0000</pubDate>
		<dc:creator>brian</dc:creator>
				<category><![CDATA[ColdFusion]]></category>
		<category><![CDATA[PostgreSQL]]></category>
		<category><![CDATA[Web/Internet]]></category>
		<category><![CDATA[cfobjective]]></category>
		<category><![CDATA[framework]]></category>
		<category><![CDATA[postgres]]></category>
		<category><![CDATA[speaking]]></category>

		<guid isPermaLink="false">http://www.ghidinelli.com/?p=722</guid>
		<description><![CDATA[What I took home from cf.objective() 2009 and the slides from my two presentations on PostgreSQL and migrating to a framework.]]></description>
			<content:encoded><![CDATA[<p>This was my first <a href="http://www.cfobjective.com">cf.Objective()</a> and it was a great experience.  Normally I go to conferences more for the people and the networking, but bringing together so many of the thought leaders in the ColdFusion community marks this as a conference where an intermediate or advanced developer can take actionable ideas back to their organization.  I kept a running list in my notebook of specific ways to apply things I was learning to my <a href="http://www.motorsportreg.com">motorsport event registration service</a>.  Here are a few things I&#8217;m planning on attacking:</p>
<ul>
<li>Look at replacing my validate() methods with Bob Silverberg&#8217;s <a href="http://www.validatethis.org">ValidateThis</a> framework which has centralized validation rules.  One thing I&#8217;ve struggled with in centralized validation is how to handle the situation where a user and an admin have differing validation requirements depending on who is doing the editing.  Bob solves this with configurable &#8220;contexts&#8221; which look promising.</li>
<li>There are a few places in my system where I am working with collections of ~100-300 objects and using Transfer to manipulate these can be slow.  Peter Bell&#8217;s <a href="http://ibo.riaforge.org/">Iterating Business Object</a>, which is simply a fancy wrapper for a query, is a performant way of working with collections of this size.  The fact that we, in CF land, have to worry about this makes object-oriented design a compromise, but that&#8217;s the way it goes.  It&#8217;s time to evaluate this as an option.</li>
<li>Refactor to Abstract Classes.  Bob Silverberg gave a presentation on <a href="http://www.silverwareconsulting.com/index.cfm/2009/5/19/Building-An-Object-Oriented-Model--Presentation-Materials-Available">building an object oriented model</a> which included discussion of Abstract Classes.  I have used one as the basis for my Transfer decorators for some time, defining a generic populate() routine (influenced by Bob in the past), but it&#8217;s time to apply the pattern to the remainder of my objects in an effort to reduce my total code footprint and centralize similar code.  Because I refactored from a query-based framework, my gateways and services leveraged a lot of that code, and I wound up with gateways that have their own queries and services that repeatedly proxy calls to those gateways along with calls to Transfer for persisting my objects.  One thing I mentioned in my framework migration talk is that starting with my original code may have been a limitation, and that kind of blind cut-and-paste, time-efficient but not particularly elegant, is one of those places.  Now it&#8217;s time to go back and optimize.</li>
<li>ColdFusion 9 CFC performance is supposed to be substantially better &#8211; I have the beta, so I need to do some head-to-head tests of Transfer on CF8 vs. CF9 as well as Transfer on CF9 vs. the built-in Hibernate support.  Gains here may make an IBO unnecessary.</li>
<li>Get my <a href="http://trac.edgewall.org">Trac</a> install updated.  I&#8217;m still running some ridiculous 0.9.5 version while the latest is up to 0.11.4.  Getting 0.9.x Trac to install and run was such a dependency-based nightmare that I swore off touching it but that was 3+ years ago and I&#8217;ve got a new server being built to replace the hardware anyways.  This older version is not only compromising the feature set but it&#8217;s also preventing me from updating my Eclipse setup to take advantage of <a href="http://www.henke.ws/post.cfm/Mylyn-Rocks,-Mylyn-Rocks">Mylyn</a> so I can work more from Eclipse <a href="http://trac.cfeclipse.org/cfeclipse/wiki/MylynTracTaskIntegration">against my outstanding tickets</a>.</li>
<li>I sketched out the definition for a reporting overhaul we&#8217;ve been talking about for a year.  I already have a pretty sophisticated CF-based reporting system that emulates a lot of what you would get out of a Crystal Report or CFREPORT but does it without requiring the end user to install a report builder tool or have any knowledge beyond the data they&#8217;re working with.  I&#8217;m interested in exploring <a href="http://eclipse.org/birt">BIRT</a> further but the user interface for generating reports must be <em>user-friendly</em>.  Actuate&#8217;s version of BIRT has a very nice UI but costs some dollars.</li>
<li>The Writing Testable Code: Real-World TDD session by Marc Esher was a great kick in the butt to improve my unit testing.  The way he took some very reasonable code and demonstrated how to refactor it to be more testable (and thus, more reliable) was eye-opening despite the fact that it was simple.  Sometimes the best ninja tricks are.  And he finally explained concisely <a href="http://www.adobe.com/devnet/coldfusion/articles/testdriven_coldfusion_pt2_04.html">how to use a mock object</a> in testing in a way that I grokked, so it&#8217;s time to implement some of these ideas.</li>
</ul>
<h2>Presentation Slide decks</h2>
<p>As promised, I&#8217;m publishing the slides from my presentations for any attendees who want to grab the details.  Thanks for coming, and be sure to fill out the <a href="http://www.cfobjective.com/surveys.cfm">CFO survey</a>.  The slides, particularly the Postgres preso, were bullet-point heavy, which is not my favorite style of presentation (and, compared to say, a <a href="http://blog.mxunit.org/2009/01/are-you-presenting-at-cfobjective.html">Marc Esher presentation</a>, is a bit embarrassing), but when you are walking people through the features of a package, I consider the feature-list-with-commentary to be the equivalent of showing-code-with-commentary.  You don&#8217;t use pictures to represent your code, now do you? <img src='http://www.ghidinelli.com/wp-includes/images/smilies/icon_smile.gif' alt=':)' class='wp-smiley' />  The framework migration slides include my recommendations for surviving a migration of a code base but spare the reader my personal pain and suffering from my launch last year.  You only get the actionable stuff and none of the emotional scarring!  Don&#8217;t forget: build a functional equivalent at all costs!</p>
<p><strong>Move over MySQL, Make Room for PostgreSQL</strong><br />
ColdFusion and MySQL have been best friends since MX and many developers use it as their day-to-day database. Brian made the leap to PostgreSQL in 2004 and never looked back: it has everything MySQL has plus most of the enterprise features found in Oracle, SQL Server and DB2 wrapped in an open source package with a great community and support. Come learn how these packages are more similar than different and why your next project should be backed by Postgres.</p>
<p><a href='https://www.ghidinelli.com/wp-content/uploads/2009/05/cfo_move_over_mysql_make_room_for_postgresql.pdf'>Download Slides</a> (PDF)</p>
<p><strong>Planning your migration to a Framework</strong><br />
Still waiting to make the jump to an object-oriented framework like Model-Glue, Mach-II or Coldbox? As the sole developer and survivor of a seven-month migration from a home-grown framework to Model-Glue, Coldspring and Transfer, Brian will share four key areas where unexpected complexity resulted in catastrophe and how you can avoid them in your own applications. This is your survival guide for moving code to a new framework.</p>
<p><a href='https://www.ghidinelli.com/wp-content/uploads/2009/05/cfo_migrating_to_a_framework.pdf'>Download Slides</a> (PDF)</p>
<p>If you have any questions or feedback about my presentations, feel free to hit me up here.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.ghidinelli.com/2009/05/21/cfobjective-thoughts-and-presentations-for-download/feed</wfw:commentRss>
		<slash:comments>4</slash:comments>
		</item>
	</channel>
</rss>
