Hello, I have files containing records, one per line, and some of them are 1 GB or more. Most of the records at the top are of no use now and can be deleted, but because the files are so huge I cannot read a whole file into a variable (file_get_contents), explode it, and write back only the records I want to keep. I was thinking of reading each file line by line, storing the records I want in a temp file, and finally overwriting the original with the contents of the temp file. However, this could take a long time given the size of the files and the number of files to process. Is there a better way of doing this? All I want is to remove a few lines from the top of each file.

File format: unix_time_stamp,string,string,string

Could I instead load a file into a database (MySQL), delete the unwanted records, and dump the rest back into the file, or would that be even more resource intensive?

Thanks
$lines_array = file("file.txt"); $output = ""; for ($i=0; $i<20; $i++){ // edit the number 20 to your desired amount $output .= $lines_array[$i]; } file_put_contents("file.txt", $output); PHP: Should/would remove the first 20 lines On a side note, keep an original copy and file.txt needs to have read/write permissions, CHMOD to 777 if necessary
No, this won't work. Reading the full file into an array is not an option: the files are about 1 GB in size and memory_limit is set to 20M. You did give me an idea, though. I can keep only the lines I want, skipping the lines above.

Code (markup):
<?php
$fp = fopen('file.txt', 'r');
$count = 0;
$line = array();
while (($row = fgets($fp)) !== false) {
    // Keep everything after the first 20 lines
    if ($count >= 20) {
        $line[] = $row;
    }
    ++$count;
}
fclose($fp);
file_put_contents('file.txt', $line);
?>

Thanks
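Worth noting: even with the fix above, $line still collects almost the entire file in memory before it is written back, which a 20M memory_limit will not allow for a 1 GB file. Below is a minimal sketch of the temp-file approach described in the opening post (the .tmp filename is just an example), which keeps memory usage flat by writing each kept line straight out:

PHP:
<?php
$skip  = 20; // number of leading lines to drop
$in    = fopen('file.txt', 'r');
$out   = fopen('file.txt.tmp', 'w');
$count = 0;
while (($row = fgets($in)) !== false) {
    if ($count >= $skip) {
        fwrite($out, $row); // only the current line is ever held in memory
    }
    ++$count;
}
fclose($in);
fclose($out);
rename('file.txt.tmp', 'file.txt'); // replace the original with the trimmed copy
?>

This still rewrites the whole file, as the next reply points out, but it never holds more than one line at a time.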
You can't get around rewriting the whole file, but you can, as you mention, do it a line at a time. Perl is particularly handy for that:

perl -ni.bak -e 'print if ($i++ > 19)' thefilename

will rewrite 'thefilename' so that the first 20 lines are deleted, and save a copy of the original as thefilename.bak.
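Since the opening post mentions a large number of files to process, the same one-liner could also be driven from PHP. A sketch, where the data/*.txt glob pattern is only a placeholder for the real file set:

PHP:
<?php
// Run the Perl one-liner over every matching file (pattern is an example only).
$files = glob('data/*.txt');
if ($files === false) {
    $files = array(); // glob() error; nothing to process
}
foreach ($files as $f) {
    // escapeshellarg() guards against spaces or shell metacharacters in filenames;
    // \$i is escaped so PHP does not interpolate it inside the double-quoted string.
    exec("perl -ni.bak -e 'print if (\$i++ > 19)' " . escapeshellarg($f));
}
?>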