pastvne
Bruce Decker
Hi All:
I've been dealing with performance issues. I have a largish database of about 6GB, consisting of about 600K records. Query times used to be acceptable, but as the database has grown, users are now waiting about 30 seconds for a prefiltered list to populate.
I tried to throw hardware at the issue. My first attempt was to move to a scalable cloud server so that I could easily throw more cores and memory at it. That server also had solid-state disks. But in the end, I didn't see a significant performance improvement over my regular old cheapo shared hosting plan. The cloud server was also plagued by many quirks that caused issues with Fabrik, so I had to abandon it.
Next, I set up an inexpensive account at A2 Hosting and moved the app over there. I was amazed at how fast it was compared to my former shared hosting provider, and much faster than my so-called 'scalable cloud server' even when I scaled it up to many processors and lots of memory.
So now I'm relatively happy, but ideally I want to see response times in the 3-6 second range as opposed to the 15 seconds I'm seeing now. The underlying table has 600K rows, and with prefiltering that goes down to about 8,700 rows.
The MySQL advisor is telling me that there are joins based on unindexed columns, but I've looked that over and I can't see any unindexed columns.
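For what it's worth, this is roughly how I've been trying to check the advisor's claim. The table and column names below are placeholders, not my real schema:

```sql
-- Placeholder names -- substitute the real list table and joined tables.
SHOW INDEX FROM control;          -- what indexes actually exist right now?

-- Ask MySQL how it plans to run the slow list query (paste the real
-- query from Fabrik/Joomla debug output after EXPLAIN):
EXPLAIN
SELECT c.*
  FROM control AS c
  LEFT JOIN jos_users AS u ON u.id = c.user_id;

-- A join column with no usable index shows type=ALL with a huge "rows"
-- estimate. An index can also go unused if the joined columns have
-- different types or collations (e.g. INT vs VARCHAR), which is easy to
-- miss when just eyeballing the schema. If an index is missing:
ALTER TABLE control ADD INDEX idx_user_id (user_id);
```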
I messed around with prefilter queries to see if I could make the prefiltering smarter, but my attempts at being clever only proved how much I misunderstand the internals of MySQL. Everything I tried seemed to have the opposite effect. The best performance came from leaving Fabrik to do what it intended.
I've struck upon an idea: create two copies of my main table (the one with 600K+ rows). We'll call this list 'control' and its copy 'control_fast'.
Each night, I'd run a script that would:
1) Drop table control_fast
2) Copy control to control_fast
3) Purge control_fast to remove all but the most recent records (say, the last 90 days). This would result in a table of fewer than 180K rows.
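As a sketch, the nightly rebuild could be a single SQL script run from cron. The date column name here is a placeholder for whatever my real 'record created' element is; using CREATE TABLE ... LIKE keeps the big table's indexes, and copying only the recent rows merges steps 2 and 3 into one pass:

```sql
-- Nightly rebuild sketch; `created_date` is a placeholder column name.
DROP TABLE IF EXISTS control_fast;

-- Recreate with the same structure AND the same indexes as the big table:
CREATE TABLE control_fast LIKE control;

-- Copy only the last 90 days instead of copying everything then purging:
INSERT INTO control_fast
SELECT *
  FROM control
 WHERE created_date >= DATE_SUB(CURDATE(), INTERVAL 90 DAY);
```

Something like `mysql mydb < rebuild_control_fast.sql` in a nightly cron job would cover it.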
Then I'd create new menu options so that the user could navigate to 'Archive (slower)' or 'Current (fast)'. The archive menus would go to the large table, and users would be advised that queries might take longer as a result. But most users would use the purged 'control_fast' table which, according to my tests, would run in the 2-3 second range (snappy).
Now, I'd prefer to have one database and do whatever is necessary to get snappy performance through clever use of indexes or other techniques. But I'm afraid I'm going to need some help.
Is this the type of design/troubleshooting help I can request...
Thanks,
Bruce Decker
Parker, CO 80134