Hello, I'm planning a big project that is going to put a heavy load on server resources and bandwidth. It will work with a huge MySQL database of different web pages (maybe about 20 million records; I hope MySQL can handle that amount). I think I will need to spread the database requests across several servers, because a single server won't be able to handle such a load. I wonder how big projects like Google or Archive.org do this?

I thought of a primitive method, which goes like this: the MySQL database lives on an independent standalone server. The table contains a field, say "server", holding the name of the server each record is allocated to for processing. Each processing server connects to the MySQL server and works ONLY with the records allocated to it (by looking at the "server" field). A sketch of what I mean is below. I feel like this is a noob method, so I would like to know your opinion, and maybe the ways you see how this could be implemented in a more optimized and clever form. Will be happy for any tip. Thanks, Dennis.
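Just to make the idea concrete, here is a rough sketch of what each processing server would do in my scheme. The table layout, column names, host names and credentials are all made up for illustration (Python with mysql-connector):

```python
# Sketch of the "server" field scheme (hypothetical table and credentials).
# Each worker machine only touches the rows allocated to it.
import mysql.connector

MY_NAME = "worker-03"  # this machine's name, as stored in the `server` column

conn = mysql.connector.connect(
    host="db.example.com",   # the standalone MySQL server
    user="crawler",
    password="secret",
    database="pages",
)

cur = conn.cursor()
# Only fetch records allocated to this worker.
cur.execute(
    "SELECT id, url FROM webpages WHERE server = %s AND processed = 0 LIMIT 100",
    (MY_NAME,),
)
for page_id, url in cur.fetchall():
    # ... download / process the page here ...
    cur.execute("UPDATE webpages SET processed = 1 WHERE id = %s", (page_id,))

conn.commit()
cur.close()
conn.close()
```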
None of it will matter if your DB is not normalized properly. Try to make every record and every query as efficient as possible, and watch the logs. I don't think you need two different servers. But if you want to speed up your application, you could set up a replication server and have SELECT queries pull from the slave while UPDATE/INSERT queries go to the master. The slave updates itself whenever the master is updated. If the slave goes down, direct SELECT queries at the master temporarily (see the sketch below). The Drupal CMS handles this quite effectively, in my opinion.
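Here is a minimal sketch of that read/write split done at the application level, assuming one master and one slave. The host names, credentials and fallback logic are invented for illustration; the same idea applies with whatever connector your app uses:

```python
# Minimal read/write split, assuming one master and one replicated slave.
# Host names, credentials and the fallback logic are illustrative only.
import mysql.connector
from mysql.connector import Error

MASTER = {"host": "db-master.example.com", "user": "app",
          "password": "secret", "database": "pages"}
SLAVE = {"host": "db-slave.example.com", "user": "app",
         "password": "secret", "database": "pages"}

def get_read_connection():
    """SELECTs go to the slave; fall back to the master if the slave is down."""
    try:
        return mysql.connector.connect(connection_timeout=3, **SLAVE)
    except Error:
        return mysql.connector.connect(**MASTER)

def get_write_connection():
    """All INSERT/UPDATE/DELETE traffic goes to the master."""
    return mysql.connector.connect(**MASTER)

# Usage: reads hit the slave (or the master as fallback), writes hit the master.
read_conn = get_read_connection()
cur = read_conn.cursor()
cur.execute("SELECT COUNT(*) FROM webpages")
print(cur.fetchone()[0])

write_conn = get_write_connection()
wcur = write_conn.cursor()
wcur.execute("INSERT INTO webpages (url, server) VALUES (%s, %s)",
             ("http://example.com/", "worker-03"))
write_conn.commit()
```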
A master-slave config will help. But for the size you are talking about, plan the tables well. If the sites are independent of each other, or if it is possible to isolate them, then one way to do this would be to use separate, smaller slaves for each site and one main master that takes in the inserts and updates and periodically pushes them out to the slaves (see the routing sketch below).
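For what it's worth, a rough sketch of how the application side could route each site's reads to its own smaller slave, with all writes going through the single master. The site-to-slave mapping, host names and credentials are made up; on the MySQL side you would pair this with per-slave replication filters such as replicate-do-db so each slave only carries its own site's data:

```python
# Sketch: each site's reads go to its own small slave; all writes go to one master.
# The mapping, host names and credentials below are purely illustrative.
import mysql.connector

MASTER = {"host": "db-master.example.com", "user": "app", "password": "secret"}

# One dedicated slave per site; each slave replicates only its own site's database.
SLAVES_BY_SITE = {
    "siteA": {"host": "db-slave-a.example.com", "user": "app",
              "password": "secret", "database": "siteA"},
    "siteB": {"host": "db-slave-b.example.com", "user": "app",
              "password": "secret", "database": "siteB"},
}

def read_connection(site):
    """Reads for a given site hit that site's dedicated slave."""
    return mysql.connector.connect(**SLAVES_BY_SITE[site])

def write_connection(site):
    """All writes go through the single master, into the site's database."""
    return mysql.connector.connect(database=site, **MASTER)

# Usage: count siteA's pages from its slave, insert a new page through the master.
cur = read_connection("siteA").cursor()
cur.execute("SELECT COUNT(*) FROM webpages")
print(cur.fetchone()[0])

wconn = write_connection("siteA")
wcur = wconn.cursor()
wcur.execute("INSERT INTO webpages (url) VALUES (%s)", ("http://sitea.example/",))
wconn.commit()
```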