<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
		>
<channel>
	<title>Comments on: Techniques for finding speed in your database</title>
	<atom:link href="http://www.ghidinelli.com/2008/10/08/techniques-for-finding-speed-in-your-database/feed" rel="self" type="application/rss+xml" />
	<link>http://www.ghidinelli.com/2008/10/08/techniques-for-finding-speed-in-your-database</link>
	<description></description>
	<lastBuildDate>Thu, 01 Jun 2017 18:51:00 +0000</lastBuildDate>
	<generator>http://wordpress.org/?v=2.9.2</generator>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
		<item>
		<title>By: Brian</title>
		<link>http://www.ghidinelli.com/2008/10/08/techniques-for-finding-speed-in-your-database/comment-page-1#comment-51333</link>
		<dc:creator>Brian</dc:creator>
		<pubDate>Fri, 10 Oct 2008 16:06:54 +0000</pubDate>
		<guid isPermaLink="false">http://www.ghidinelli.com/?p=221#comment-51333</guid>
		<description>@Zack - memory tuning for sure.  I think what many people don&#039;t realize is that your DB can fit the entire data set into RAM if it&#039;s not too large; RAM is orders of magnitude faster than disk.  Guess what happens then? :)

We&#039;ve looked at optimizing record lengths and stripe sizes, etc., but in benchmarking they generally yield percentage improvements - solid, but not the orders of magnitude you can often find by just revisiting work you did a while ago.  At least, that&#039;s what I find in my own coding!

Good tip on avoiding BLOBs - store your files in the filesystem instead.  Better all the way around.</description>
		<content:encoded><![CDATA[<p>@Zack &#8211; memory tuning for sure.  I think what many people don&#8217;t realize is that your DB can fit the entire data set into RAM if it&#8217;s not too large; RAM is orders of magnitude faster than disk.  Guess what happens then? :)</p>
<p>We&#8217;ve looked at optimizing record lengths and stripe sizes, etc., but in benchmarking they generally yield percentage improvements &#8211; solid, but not the orders of magnitude you can often find by just revisiting work you did a while ago.  At least, that&#8217;s what I find in my own coding!</p>
<p>Good tip on avoiding BLOBs &#8211; store your files in the filesystem instead.  Better all the way around.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Zack Steinkamp</title>
		<link>http://www.ghidinelli.com/2008/10/08/techniques-for-finding-speed-in-your-database/comment-page-1#comment-51313</link>
		<dc:creator>Zack Steinkamp</dc:creator>
		<pubDate>Fri, 10 Oct 2008 01:53:20 +0000</pubDate>
		<guid isPermaLink="false">http://www.ghidinelli.com/?p=221#comment-51313</guid>
		<description>Good stuff Brian!  Definitely the &quot;slow query log&quot; (as we MySQL hacks call it) is usually the place to start.

Not sure about Postgres, but there are many, many memory tuning options in MySQL that can yield huge performance gains.  It can be daunting, though, and it may be money well spent to hire an expert for a day of tuning / training.

For hardcore datasets, optimizing record lengths with regard to disk cluster size will buy additional performance too.  Also, avoiding &quot;blob&quot; types will help reduce disk use (again, on MySQL -- other vendors may vary).

I&#039;m sure the list could grow to 100s of items ;-)</description>
		<content:encoded><![CDATA[<p>Good stuff Brian!  Definitely the &#8220;slow query log&#8221; (as we MySQL hacks call it) is usually the place to start.</p>
<p>Not sure about Postgres, but there are many, many memory tuning options in MySQL that can yield huge performance gains.  It can be daunting, though, and it may be money well spent to hire an expert for a day of tuning / training.</p>
<p>For hardcore datasets, optimizing record lengths with regard to disk cluster size will buy additional performance too.  Also, avoiding &#8220;blob&#8221; types will help reduce disk use (again, on MySQL &#8212; other vendors may vary).</p>
<p>I&#8217;m sure the list could grow to 100s of items ;-)</p>
]]></content:encoded>
	</item>
</channel>
</rss>
