code quality vs. economics part II
The last time I wrote about this, the problem I had encountered was an SQL query that did a SELECT * FROM table WHERE conditions, and then later in the Perl code stripped out more rows, filtering that could have been done with a LIKE in the SQL.
Everybody decided that it would take just as long, and probably longer, for the database server to retrieve the larger dataset, send it across the network, and have it processed on the webserver than if the database had just done the LIKE itself.
I will not argue this point. It most likely is faster for a single query to do it the 'right' way.
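To make the trade-off concrete, here is a rough sketch of both versions in Perl with DBI. The clients table and its status/company_name columns are made up for illustration, not the actual production schema.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect('DBI:mysql:database=app;host=dbhost',
                           'user', 'pass', { RaiseError => 1 });

    # Version 1: the pattern from the code in question -- pull every
    # matching row across the network, then strip most of them out in
    # Perl on the web server.
    my $rows = $dbh->selectall_arrayref(
        'SELECT * FROM clients WHERE status = ?',
        { Slice => {} },
        'active',
    );
    my @widget_clients = grep { $_->{company_name} =~ /widget/i } @$rows;

    # Version 2: the 'right' way for a single query -- let MySQL do the
    # filtering with LIKE, so only the rows we actually want ever leave
    # the database server.
    my $filtered = $dbh->selectall_arrayref(
        'SELECT * FROM clients WHERE status = ? AND company_name LIKE ?',
        { Slice => {} },
        'active', '%widget%',
    );

Version 2 is the one everybody called the right way; version 1 is what makes sense once you start thinking about where the CPU cycles are being spent.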
The problem here is scale. We have 5000+ corporate clients, all banging away on one MySQL database. The thing to consider is that the database server has many components, and some are easy to add or make faster (RAM, RAIDed hard drives), while others are not: CPUs.
The reason it is more cost-effective is that any time you free up the CPU to do other tasks, it's a win. With the transfer of data mostly relegated to the hard drives and the network, requiring minimal work from the CPU, the CPU is freed to handle another queued request.
Basically we are moving CPU cycles from one machine to another, with all the network and drive overhead that entails, because it is not cheap to add another mirrored database server; load-balanced web servers, however, are much cheaper and easier to come by.
I hope that this clears up the confusion. Is it faster to do it this way? Almost certainly not.
However, there is an economic benefit. That benefit is difficult to measure, and honestly it is countered by maintenance costs, but that is beyond the scope of this entry for now.