I am trying to convert my site to PDO and keep getting these errors:

Warning: PDOStatement::execute(): MySQL server has gone away
Warning: PDO::query(): Error reading result set's header
Fatal error: Call to a member function fetchAll() on a non-object

Any ideas? I've tried increasing max_allowed_packet. The strange thing is it doesn't give the errors every time, only occasionally. Not sure what the deal is.
Can you share some of the code where you think this problem is occurring? Your fatal error just means that $pdo is not an object at that point, which is probably caused by a typo or something.
You should read up here: http://dev.mysql.com/doc/refman/5.0/en/gone-away.html Usually the "MySQL server has gone away" error comes from timeouts or problems on the MySQL server itself, not from PDO or PHP. It might help if you tell us how you're connecting, and whether the SQL server is on the same network as the webserver (or the same machine), etc.
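If you want to rule out the server-side timeout and packet-size limits that page talks about, you can ask MySQL directly. A rough sketch, assuming $pdo is whatever PDO connection you already have:

<?php
// Print the server-side settings most often behind "server has gone away".
foreach ($pdo->query("SHOW VARIABLES LIKE 'wait_timeout'") as $row) {
    echo $row['Variable_name'], ' = ', $row['Value'], "\n";
}
foreach ($pdo->query("SHOW VARIABLES LIKE 'max_allowed_packet'") as $row) {
    echo $row['Variable_name'], ' = ', $row['Value'], "\n";
}

If wait_timeout is very low, an idle connection will be dropped by the server before your script gets back to it.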
OK, the solution was increasing mysql.connect_timeout to 300 and default_socket_timeout to 300 in php.ini. Source: http://stackoverflow.com/questions/1644432/mysql-server-has-gone-away-in-exactly-60-seconds
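For anyone who wants to try the same thing, these are the two php.ini directives that post is talking about (300 seconds is simply the value used above):

; php.ini
mysql.connect_timeout = 300
default_socket_timeout = 300

Restart the webserver (or PHP-FPM) afterwards so the new values are picked up.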
The first error is happening because your script is taking too long to finish its "select" or "insert" statements. Increasing mysql.connect_timeout and default_socket_timeout to 300 in php.ini is not a good way of handling this error: you are simply putting more load on the server instead of writing a better, faster query that finishes in the given time slot. The other two errors are happening because there is no data in your results variable, or it doesn't even exist, and you are still trying to process it without checking whether it exists. Your first query failed, so the result set was never generated. Your second query, which uses the result set from the previous query, is running blindly, without checking anything, on the assumption that everything is all right...
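In PDO the easiest way to stop a script from "running blindly" like that is to make failed queries throw, instead of returning false and then blowing up later on fetchAll(). A minimal sketch of the idea; the DSN, credentials and table name here are made up for illustration:

<?php
$pdo = new PDO('mysql:host=localhost;dbname=mysite', 'user', 'pass');
// Make failed queries throw an exception instead of returning false.
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

try {
    $stmt = $pdo->query('SELECT id, title FROM articles LIMIT 20'); // hypothetical table
    $rows = $stmt->fetchAll(PDO::FETCH_ASSOC);
} catch (PDOException $e) {
    // Log it and stop, rather than feeding a bad result into the next query.
    error_log($e->getMessage());
    $rows = array();
}

That way the "Call to a member function fetchAll() on a non-object" fatal error can't happen, because a failed query never silently hands you false.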
Which makes me think painfully convoluted queries, a lack of indexes on commonly accessed fields, stringing together too many "join"s until the system chokes on RAM, etc, etc. That last one is some of the WORST advice out there: people combine queries to avoid extra database connections, but in the process build larger result sets that can choke out the server. Really, in any decent database, if you're banging heads with the default timeout there's something horrifically wrong with the server, the database, or the queries being run. Upping the timeout is NOT the answer. That's like drilling holes in a man's head rather than repairing an artery. As others have said though, without knowing what the queries are and what the database structure is, it's all wild guesses.
I agree with @deathshadow 100%. By increasing the timeout you are simply taking a painkiller when the real problem is a fracture (which you are ignoring completely). Check your query properly: either the selected result set is very large, or indexes are not being used. To avoid the first cause, use "limit" in your queries. In your case I think the second one is the issue, i.e. the query is taking too long to execute. That means the table has too many rows and the query is not following proper indexes.

Does your table have one or more indexes? If it does not, you can still create one now, like this:

Create index indexName on tableName ( columnName )

or like this, in the case of multiple columns:

Create index indexName on tableName ( columnName, columnName2 )

You can do this in phpMyAdmin. Your query can then be like this:

select columnName from tableName where columnName='abc'
select columnName from tableName where columnName='abc' or columnName2='abc'

but not like this:

select columnName from tableName where columnName2='abc' or columnName='abc'

That last one may not use the index.

By the way, you have increased your timeout five times over, to 300 (5 minutes instead of the default 1 minute)... Who is going to wait that long for your website to open? Not to mention your server will give up on a high-traffic day, or your webhost will suspend your account.
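If you want to see whether a query is actually using an index, EXPLAIN will tell you. A quick sketch, again with placeholder table/column names, reusing whatever PDO connection you already have:

<?php
// EXPLAIN shows the execution plan; the "key" column names the index used,
// and NULL there means MySQL is scanning the whole table.
$sql = "EXPLAIN SELECT columnName FROM tableName WHERE columnName = 'abc'";
foreach ($pdo->query($sql) as $row) {
    echo $row['table'], ': key = ', ($row['key'] === null ? 'NULL (no index)' : $row['key']), "\n";
}

You can run the same EXPLAIN statement directly in phpMyAdmin if that's easier.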
A very common occurrence. Personally, I wish the "popular threads" thingie would be done away with so that these old threads don't get inadvertently bumped up.
I know it's an old thread, but I wonder if the OP was creating a new instance of PDO for every query instead of creating one instance and sharing it around all the scripts that need a database connection. I saw some code once where one page would hit the database about 80 times, each time creating a new connection! It was taking forever for the page to load (I think it was on a shared server). When it was changed to share a single database connection between all the scripts that needed one, page load times plummeted.
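For anyone who lands on this later, the "one shared connection" idea can be as simple as a single include file. A rough sketch; the DSN and credentials are placeholders:

<?php
// db.php - create the PDO connection once and hand the same instance to every caller.
function db()
{
    static $pdo = null;
    if ($pdo === null) {
        $pdo = new PDO('mysql:host=localhost;dbname=mysite', 'user', 'pass', array(
            PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
        ));
    }
    return $pdo;
}

Then any script just does require_once 'db.php'; and calls db() whenever it needs the database, instead of opening a fresh connection each time.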
@SpacePhoenix That is valid for mysqli and the old MySQL extension too. Opening one connection and keeping it open for as long as it's needed is better than connecting again and again. Huge difference in page load time.