I don't build any sites that use sessions with PHP, but I'd assume you have to use a regular expression instead of just matching 'GoogleBot', because the bots have different version numbers (and there's the "new" one), right? Something like !~ /googlebot/i. Just an idea.
I think preg_match is what you would use in PHP (I'm a Perl coder, not a PHP one).

edit: try this?

if (preg_match("/googlebot/i", $_SERVER['HTTP_USER_AGENT']) != 1) {
    session_start();
}
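To tie this back to the version-number point above, here's a minimal sketch of that check wrapped in a helper (the function name is_googlebot and the sample UA strings are just my assumptions for illustration):

```php
<?php
// Sketch: gate session_start() on the user agent.
// is_googlebot() is a hypothetical helper name, not anything built in.
function is_googlebot(string $ua): bool {
    // A pattern match is used rather than an exact string comparison,
    // since Googlebot's UA carries a version number and extra text.
    return preg_match('/googlebot/i', $ua) === 1;
}

var_dump(is_googlebot('Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)')); // true
var_dump(is_googlebot('Mozilla/5.0 (Windows NT 10.0) Firefox/115.0')); // false

// Only start a session for non-bot visitors.
if (!is_googlebot($_SERVER['HTTP_USER_AGENT'] ?? '')) {
    session_start();
}
```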
strpos or strstr would be a lot faster than preg_match:

if (strpos($_SERVER['HTTP_USER_AGENT'], "Googlebot") !== false) {
    // we've found a googlebot
} else {
    // session initialization code
}
I'm just curious: besides the fact that I used case insensitivity in my regex example, how much faster would strpos be? I'm not sure how efficient the regex engine in PHP is, but it's pretty darn fast in Perl; it only does what it needs to. Having said that, I'm not trying to claim it's not faster, I'd just like to hear why.
> I used a case insensitivity in my reg exp
You can also use stripos or stristr for a case-insensitive search. But why do you need a case-insensitive match? I don't see why G would change the UA from Googlebot to googlebot. Why is it faster? Probably because it doesn't have to run the string through a regular-expression parser. How much faster is it? I don't know. Here is a quote from php.net (http://www.php.net/manual/en/function.preg-match.php)
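If you really wanted a number for your own setup, a rough micro-benchmark along these lines would answer it (illustrative sketch only; the sample UA string and iteration count are arbitrary, and the absolute timings depend entirely on your PHP version and hardware):

```php
<?php
// Compare stripos vs preg_match on a typical bot UA string.
$ua = 'Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)';
$n  = 100000;

// Time the plain case-insensitive substring search.
$t = microtime(true);
for ($i = 0; $i < $n; $i++) {
    stripos($ua, 'googlebot');
}
$striposTime = microtime(true) - $t;

// Time the equivalent case-insensitive regex match.
$t = microtime(true);
for ($i = 0; $i < $n; $i++) {
    preg_match('/googlebot/i', $ua);
}
$pregTime = microtime(true) - $t;

printf("stripos: %.4fs  preg_match: %.4fs\n", $striposTime, $pregTime);
```

Note that PCRE caches compiled patterns, so the regex penalty per call is the match itself, not recompilation; the gap is usually modest either way.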
Yeah, the quote is enough to answer my question, thanks. What I was referring to with the case option was that, having already posted code that was case-insensitive, it would be slower than a case-sensitive equivalent. That's all. Why did I do it in the first place? Habit, I guess.