I have about 50 tables, each with about 10 fields whose values will likely be somewhere between 100,000,000,000 and 100,000,000,000,000. No negative values. I know that having that many big fields will eat my server's memory. Since BIGINT can hold up to 18,446,744,073,709,551,615, it seems like such a waste to use it. Should I use DECIMAL instead for better performance? I'm trying to optimize my script because on my old server MySQL was using 100% of the server's memory at least a few times a day. That server had 4GB of RAM and a pretty powerful processor, and no more than 100 users were browsing my site. It's a game site, by the way, which is why I need so many big fields.
If you have indexed the database properly, the column size won't be a big problem. Your problem with BIGINT is performance, not space, right? Can you give an example of a query? Maybe we can figure out the proper indexes.
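For example, if messages are looked up by the receiving player, an index along these lines usually makes a big difference (the table and column names here are only guesses, adjust them to your schema):

    -- hypothetical messages table: index the column you filter on
    CREATE INDEX idx_messages_to_user ON messages (to_user_id);

    -- then check that MySQL actually uses the index
    EXPLAIN SELECT * FROM messages WHERE to_user_id = 123;

Without an index, every one of those lookups scans the whole table, and that hurts far more than the choice between BIGINT and DECIMAL ever will.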
DECIMAL would not perform any better. An unsigned BIGINT(20) would be the best way to go. Out of curiosity, what are you doing that requires numbers this large?
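Just as a sketch of what that looks like (the table and column names are made up), BIGINT UNSIGNED is a fixed 8 bytes per value and comfortably covers 100,000,000,000,000:

    -- hypothetical game table; BIGINT UNSIGNED tops out at 18,446,744,073,709,551,615
    CREATE TABLE player_resources (
        player_id INT UNSIGNED NOT NULL,
        gold      BIGINT UNSIGNED NOT NULL DEFAULT 0,
        PRIMARY KEY (player_id)
    );

Arithmetic on BIGINT runs as native 64-bit integer math, while DECIMAL goes through MySQL's slower fixed-point routines, so switching would cost you speed, not save it.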
Also, are all of those digits significant? That is, do you ever actually use the ones, tens, hundreds, etc. places, or are they always zeros? If they're always zeros, you can store the number scaled down to its least significant real digit and just multiply it back out when you retrieve it. So if the last 3 digits are always zeros, you would save 987,456,000 as 987,456 in the database and always multiply it by 1,000 when retrieving it.
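To make that concrete, with a scale factor of 1,000 it would look roughly like this (the table and column names are placeholders):

    -- store 987,456,000 scaled down by 1,000
    INSERT INTO scores (player_id, score) VALUES (1, 987456);

    -- multiply it back out when you read it
    SELECT score * 1000 AS actual_score FROM scores WHERE player_id = 1;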
They are almost never zeros.... To jedi.knight: to be honest, I have never heard of indexing. The most common queries are while loops that pull data from the DB, like messages. I don't know if that's what you wanted to hear.
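If it helps, the message query is basically something like this (I'm writing it from memory, so the names aren't exact), and the script then loops over the rows one at a time:

    SELECT * FROM messages WHERE to_user = 123 ORDER BY sent_at DESC;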
If you are doing any kind of arithmetic with these fields, and the data is always whole numbers, keep them as integers rather than decimals; that avoids rounding errors.
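One place that shows up in MySQL: the plain / operator returns a decimal result even for whole numbers, while DIV keeps everything in integers, so if you use the scaling trick above, read the values back with integer math:

    SELECT 987456000 / 1000;    -- 987456.0000 (decimal result)
    SELECT 987456000 DIV 1000;  -- 987456 (stays an integer)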