

Reducing apache load?




Posted by Dan Grossman, 05-24-2007, 01:17 PM
My load average seems a bit high considering how few processes are running: http://www.dangrossman.info/photos/s...ts/hiload2.jpg

Is there anything I can do to reduce the load generated by Apache? You can see the Apache2 server-status info here: http://www.w3counter.com/server-status

It's serving 20-25 requests per second, mostly tiny requests to tracker.php, which issues a database query and returns an image. The relevant httpd.conf settings: This is a dual Opteron 2212 server (4 cores total) with 4GB RAM.

Last edited by Dan Grossman; 05-24-2007 at 01:29 PM.

Posted by Scott.Mc, 05-24-2007, 01:21 PM
What are the prefork settings? Those will likely be the ones you'll have to adjust.

Posted by Dan Grossman, 05-24-2007, 01:28 PM
Sorry, you're right.

Posted by Scott.Mc, 05-24-2007, 01:33 PM
Increase the start servers, drop the number of requests per child, and you can also have KeepAlive on (remember to lower the KeepAlive timeout to around 4 seconds).
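A prefork block along these lines would implement the suggestion; the values below are purely illustrative, not taken from Dan's actual config:

```apacheconf
# httpd.conf -- illustrative mpm_prefork settings, not the real values
<IfModule mpm_prefork_module>
    StartServers         20
    MinSpareServers      10
    MaxSpareServers      30
    MaxClients          150
    MaxRequestsPerChild 1000
</IfModule>

KeepAlive On
KeepAliveTimeout 4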

Posted by Dan Grossman, 05-24-2007, 01:48 PM
Do I really want KeepAlive on when it's mostly handling these one-off requests 20 times a second? The same person isn't going to make another request to the server until they move to another webpage being tracked by W3Counter, and most people spend more than 4 seconds on a page. With KeepAlive on and a timeout of 4, I have far more httpd processes running and load is starting to climb. Turning it back off.

Posted by Scott.Mc, 05-24-2007, 02:01 PM
If that's specifically all the connections are doing then correct, it would be best without KeepAlive; it's fully dependent on what you're running. Once you increase the start servers and drop the number of requests per child, what does it look like? It might also be worthwhile looking into lighttpd if it's just one script serving an image. Looking at the screenshot, the I/O is not ideal. Does the script take a long time to execute? If you look at your requests, they are all fairly intensive, which I assume comes down to them having to query the database often. Have you thought about some form of queueing system, such as storing the necessary updates in shared memory and then running all the queries at once, say every 10-15 minutes?

Posted by Dan Grossman, 05-24-2007, 02:14 PM
Halved requests per child, added 10 to start servers, and didn't see any change.

tracker.php does some simple string processing, then makes a DB stored procedure call to log the visit; the procedure does a few queries to do that and returns a result set which is used to generate the image with GD (a counter with the current visitor count displayed on it). It all finishes pretty fast, but it's not as simple as serving a static image.

I agree. Wish I could do something about that. The I/O work is mostly the database churning -- it's got to handle an INSERT, an UPDATE, and either 1 or 2 SELECTs for each request to tracker.php, as a result of the stored procedure call. That ends up between 25 and 75 queries per second depending on time of day. There are also more intensive queries that need temporary tables to resolve, which probably hit disk more often, but they're rarer (every few seconds rather than multiple times per second). I don't think there's much I can do to optimize that further, which is why I'm focusing on Apache today.

I think trying to issue 15 minutes' worth -- about 900 stored procedure calls that need to hit the same tables -- all at once would bring any server to its knees.

I had better load on a single Opteron 280 server with half this RAM, running Apache 1.3 instead of 2, which is why this seems high.

The shame is that this app could be pretty easily partitioned by user: stick some on one database, some on another, and have a lookup table tell the web server which database to record the visit to and which to report from. It just doesn't have enough paying users to cover another server without wiping out the little it makes over the cost of this monster.

Last edited by Dan Grossman; 05-24-2007 at 02:21 PM.
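The per-user partitioning idea amounts to a shard lookup sitting in front of the database layer. A minimal sketch of that routing, with all names and DSNs hypothetical (W3Counter's actual schema never appears in the thread):

```python
# Minimal shard-lookup sketch: route each tracked site to one of
# several database DSNs via a lookup table. All names hypothetical.

SHARD_DSNS = {
    0: "mysql://db1.example.com/w3counter",
    1: "mysql://db2.example.com/w3counter",
}

# In practice this mapping would live in a small lookup table that
# the web tier caches; a dict stands in for it here.
SITE_TO_SHARD = {101: 0, 102: 1, 103: 0}

def dsn_for_site(site_id):
    """Return the DSN of the database holding this site's data."""
    return SHARD_DSNS[SITE_TO_SHARD[site_id]]
```

Writes for a visit and reads for its reports then both go through `dsn_for_site`, so sites can be moved between databases by updating one row in the lookup table.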

Posted by Scott.Mc, 05-24-2007, 02:19 PM
It's more specific to the query. A simple way to explain it for the counter: let's say on every hit you are running

update `counter` set `hits`=hits+1 WHERE `id`='x';

Now suppose instead you had a file like /dev/shm/counterid and incremented the number in that (reading and writing a file in shared memory is much quicker than going through the database). Then, every minute, rather than running say 100 separate queries to update a counter by 100 hits, you would just run

update `counter` set `hits`=hits+100 WHERE `id`='x';

Then you clear the shared-memory file and the process starts over. That's what we do on large sites which do similar tasks to this, and it's very effective.
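A bare-bones sketch of that batching pattern, assuming a tmpfs path such as /dev/shm and leaving the actual UPDATE to whatever database layer is in use (the `run_update` callback is hypothetical):

```python
# Batch counter increments in a shared-memory file (/dev/shm is tmpfs
# on Linux), then flush the accumulated total in a single UPDATE.
# File locking keeps concurrent Apache children from losing hits.
import fcntl
import os

def bump_counter(path):
    """Add one hit to the on-disk (tmpfs) counter for this site."""
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o644)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX)
        raw = os.read(fd, 64)
        hits = int(raw) if raw else 0
        os.lseek(fd, 0, os.SEEK_SET)
        os.ftruncate(fd, 0)
        os.write(fd, str(hits + 1).encode())
    finally:
        os.close(fd)  # closing the fd releases the flock

def flush_counter(path, run_update):
    """Read-and-reset the counter; run_update(n) is expected to issue
    the one `update counter set hits=hits+n ...` query."""
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o644)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX)
        raw = os.read(fd, 64)
        hits = int(raw) if raw else 0
        os.lseek(fd, 0, os.SEEK_SET)
        os.ftruncate(fd, 0)
        if hits:
            run_update(hits)
    finally:
        os.close(fd)
```

tracker.php would call the equivalent of `bump_counter` per hit, and a cron job or periodic worker would call `flush_counter` every minute or so.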

Posted by Dan Grossman, 05-24-2007, 02:28 PM
That's a great idea, but I don't think it would save me all that much. This is what the stored procedure needs to do:

- If the visitor wasn't identified by a session cookie, issue a SELECT against the website's log table for page views in the past 24 minutes by someone with the same IP and partial user agent, to determine whether it's a unique visit or the continuation of a browsing session by someone with cookies disabled.
- Based on that and some other input, update a summary table like your example, incrementing unique visit, returning visit, and page view counts as appropriate for the current date.
- Insert the full visit data into the website's log table (time, IP, browser, platform, screen res, depth, referrer, page URL, page name, etc.).
- Select the current unique visit count for the site, along with the counter style options for the site, and return that row to the caller so it can render the counter.

The update step is the fastest part of that procedure; calling it less often might help, but I'd guess not very much. In exchange for a possible minor performance gain, I lose the ability to call the whole thing "live web stats", since it wouldn't be entirely real-time anymore. I'll try playing with it this weekend all the same, as I wouldn't want to pass up something that might help more than I think.

Last edited by Dan Grossman; 05-24-2007 at 02:34 PM.
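The branching in that first step can be sketched as follows; the function and parameter names are hypothetical stand-ins, since the real logic lives inside a MySQL stored procedure:

```python
# Sketch of the unique-vs-continuation decision described above.
# recent_hit_exists stands in for the SELECT against the log table;
# all names are hypothetical.
from datetime import timedelta

SESSION_WINDOW = timedelta(minutes=24)

def classify_visit(has_session_cookie, recent_hit_exists):
    """Return 'continuation' if this hit belongs to an ongoing
    browsing session, else 'unique'.

    has_session_cookie: the tracker identified the visitor by cookie.
    recent_hit_exists(window): callback running the SELECT for page
    views within `window` from the same IP + partial user agent.
    """
    if has_session_cookie:
        return "continuation"
    # Cookies disabled or stripped: fall back to IP + user agent.
    if recent_hit_exists(SESSION_WINDOW):
        return "continuation"
    return "unique"
```

The expensive case is the cookieless one, since it forces the SELECT against the log table on every hit.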

Posted by Dan Grossman, 05-24-2007, 02:43 PM
Thought this might shed a little insight into the MySQL situation... the entire process resolves so fast, even 20 times per second, that mytop usually shows only one thread running. The qps figure is inflated because the proc has to use prepared statements to issue two of the queries (you can't have variable table names any other way), so there are really only about 60 real queries per second at this snapshot in time. Despite that, load is around 4, and httpd processes seem to dominate the CPU usage. I have APC installed as a PHP opcode cache, and removed all the modules I don't use from Apache's conf. Last edited by Dan Grossman; 05-24-2007 at 02:52 PM.


