How do I store the data so as to maximize performance of the following functions:

1. Search for a word and fetch its meaning from among the 80,000.
2. Given any single word, present a list of similar words from among the 80,000.

I thought about splitting the words by first letter and then into chunks of around 200-300 words per .txt file, e.g. A4.txt, T27.txt, with each word and its explanation on one line. Is this preferable given the needs above? I would prefer a database, but still, with a table of 80,000 records, it seems like it would be a nightmare to tune. What should I do to ensure performance? Rep will be given to those who think it through, thanks a lot! =)
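For context, the lookup under that flat-file layout would look roughly like this (the directory name, the tab-separated line format, and the `find_meaning` helper are all hypothetical, just to illustrate the idea):

```python
# Hypothetical sketch of the flat-file approach: words split by first letter
# into chunk files such as A4.txt, one "word<TAB>meaning" per line.
import os

DATA_DIR = "words"  # assumed location of the chunk files

def find_meaning(word: str):
    """Scan every chunk file for the word's first letter until the word is found."""
    letter = word[0].upper()
    for name in sorted(os.listdir(DATA_DIR)):
        if not name.startswith(letter):
            continue
        with open(os.path.join(DATA_DIR, name), encoding="utf-8") as f:
            for line in f:
                entry, _, meaning = line.rstrip("\n").partition("\t")
                if entry == word:
                    return meaning
    return None  # not found
```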
I think the database is the best way to go. What is your current or proposed table structure? Getting the structure and indexes right is going to be the key to making this fast, and 80,000 records is still a fairly reasonable number to deal with. Do you have a way of determining similar words, or will this need to be some sort of pattern matching?
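As a rough illustration of what I mean by structure and indexes, here is a minimal sketch using SQLite (the table and column names are my own assumptions, and the prefix-based similarity query is just a placeholder for whatever matching rule you settle on):

```python
# Minimal sketch, assuming an SQLite database and a single hypothetical
# `words` table; any RDBMS would work the same way.
import sqlite3

conn = sqlite3.connect("dictionary.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS words (
        id      INTEGER PRIMARY KEY,
        word    TEXT NOT NULL UNIQUE,   -- UNIQUE adds an index used for exact lookups
        meaning TEXT NOT NULL
    )
""")

def get_meaning(word: str):
    """Exact lookup: the unique index on `word` makes this an index seek, not a table scan."""
    row = conn.execute("SELECT meaning FROM words WHERE word = ?", (word,)).fetchone()
    return row[0] if row else None

def similar_words(word: str, limit: int = 20):
    """Crude 'similar words' via a shared prefix; a real solution might use
    edit distance, trigrams, or soundex instead."""
    prefix = word[:3]
    rows = conn.execute(
        "SELECT word FROM words WHERE word LIKE ? AND word != ? ORDER BY word LIMIT ?",
        (prefix + "%", word, limit),
    ).fetchall()
    return [r[0] for r in rows]
```

With an index on the word column, an exact lookup over 80,000 rows is essentially instant; the similarity query is the only part that needs real design work, which is why I ask how you plan to define "similar".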