[rt-devel] Postgres performance.

Aidan Mountford aidan at oz.to
Wed Apr 9 08:08:15 EDT 2003


If it helps, I also run vacuum etc nightly..

When I was testing, ANALYZE and VACUUM were run before EACH test.

I have only about 500 tickets at this stage...

That said, I am having some success with rebuilding things to perform 
structured joins..  

The performance of these joins is clearly way ahead of the existing 
method.

Unless anyone can see a reason why I should stop this, I will carry on
down this path..

A

-----Original Message-----
From: rt-devel-admin at lists.fsck.com
[mailto:rt-devel-admin at lists.fsck.com]On Behalf Of David Wheeler
Sent: Wednesday, 9 April 2003 3:34 AM
To: Greg Stark
Cc: rt-devel at lists.fsck.com; Aidan Mountford
Subject: Re: [rt-devel] Postgres performance.


On Monday, April 7, 2003, at 02:40  PM, Greg Stark wrote:

> So first of all, do you run vacuum and analyze on your database 
> frequently? If
> not you should run "vacuum full" and then start running both 
> frequently. How
> frequently depends on the amount of updates your database gets, but as 
> often
> as once an hour is not unusual.

I don't have that many tickets in my database (< 50), and what's in 
there was mainly loaded up from a database dump of an RC2 database. I 
run vacuum and analyze nightly, but haven't run vacuum full.
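
For anyone following along, the maintenance commands under discussion
look roughly like this (a sketch; schedule and table names are
illustrative, adjust for your own RT database):

```sql
-- Routine maintenance: reclaim dead rows and refresh planner statistics.
VACUUM ANALYZE;

-- The "big gun": also compacts tables on disk, but takes exclusive
-- locks, so it is better run off-hours.
VACUUM FULL ANALYZE;

-- Statistics can also be refreshed for a single table, e.g.:
ANALYZE CachedGroupMembers;
```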

> The big gun to be sure everything is up-to-date for optimizing like 
> this is:
>  vacuum full analyze
> which cleans up everything and gets accurate statistics.

See attached, where I run this and a few other things before re-running 
the explains.

> The only bad guess postgres is making appears to be on a table called
> CachedGroupMembers. Is it possible this table has received lots of 
> updates
> since the last time the database was analyzed?

Nope. As I said, this database doesn't have much in the way of records 
in it. The only difference I can think of is that, because there aren't 
that many records, the statistics won't yet be accurate.

> That said, I don't think that's the main difference between the 
> original query
> and your optimized one. I think a big problem is that postgres isn't 
> using an
> index on groups. It seems to be having trouble discerning that the
> groups_2.Id = Principals_4.id condition is on both sides of the OR and
> can thus be pulled out to do an index lookup.

Well, with so few records the optimizer may decide that an index isn't 
worth using and do a table scan instead. But then I don't know why it 
would behave differently for Aidan's version of the query.
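
The rewrite being discussed -- factoring a join condition that appears on
both sides of an OR out in front, so the planner can use it for an index
lookup -- looks schematically like this (table and column names are
illustrative, not RT's actual generated query):

```sql
-- Form the planner struggles with: the join condition is buried
-- inside each arm of the OR.
SELECT g.id
  FROM Groups g, Principals p
 WHERE (g.id = p.id AND g.Domain = 'SystemInternal')
    OR (g.id = p.id AND g.Domain = 'UserDefined');

-- Logically equivalent form with the common condition pulled out,
-- which lets the planner drive the join with an index on g.id.
SELECT g.id
  FROM Groups g, Principals p
 WHERE g.id = p.id
   AND (g.Domain = 'SystemInternal' OR g.Domain = 'UserDefined');
```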

> I'm confused by why this isn't happening because some testing shows 
> postgres
> is indeed capable of doing this type of rewriting.
>
> Out of curiosity, what happens if you run

I've re-run the queries, including this one. Summary:

Original query:          20704.24 msec
Aidan's Optimized query:   203.89 msec
Your new query:          18799.20 msec

Something's rotten in Denmark.

Regards,

David

-- 
David Wheeler                                     AIM: dwTheory
david at kineticode.com                              ICQ: 15726394
                                                Yahoo!: dew7e
                                                Jabber: Theory at jabber.org
Kineticode. Setting knowledge in motion.[sm]
