Hello. I've been trying to get this to work for the past couple of days now, but no such luck. I've got a PHP script that uploads large files; they can be as small as 20 MB or as large as about 700 MB. I can upload small 20 MB files no problem, and I've also managed larger files of about 100 MB. The issue starts with anything larger than that. When testing with a file of about 220 MB it uploads fine to a point of about 60%, then resets back to 0%, and Chrome later shows a connection timeout error. I have changed the required settings as well:

memory_limit = 3G
max_execution_time = 36000
max_input_time = 36000
post_max_size = 2G
post_max_size = 1G
max_file_uploads = 1024

I'm all out of ideas. Any help? Thank you.
Yup, I can help with this. OK, the thing you need to do is edit some nginx configuration; editing the PHP settings alone doesn't do much here. Can I ask which OS you are running, so I can give you the default paths for it?
Ah brilliant, a reply, thanks. I'm running 64-bit CentOS 6, MySQL 5.5 and PHP 5.4 on the MediaTemple network.
What's your setting for upload_max_filesize? (Or was that the last setting you listed above?) If it is, it's missing a unit: 1024 needs to be, for instance, 1G or similar.
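For example, in php.ini that part would look something like this (just a sketch; pick whatever ceiling you actually need, and keep post_max_size at least as large as upload_max_filesize):

upload_max_filesize = 1G
post_max_size = 1G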
You have root access, right? OK, so let's start with Apache first. Connect to your server through SSH and WinSCP (SSH for restarting Apache, nginx and such things; WinSCP is for editing files. I prefer WinSCP, but you can edit over SSH as well). Will continue on. Also, upload_max_filesize should be 1G, not 1GB.
And also, through SSH, run:

tail /var/log/nginx/error.log

Can you copy and paste the text that shows up?
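Something like this, run while you retry the upload so the relevant lines appear as they happen (assuming the default log path):

tail -n 50 /var/log/nginx/error.log
tail -f /var/log/nginx/error.log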
There is one limitation: the size of the file cannot be larger than the RAM allocated to the server, because the file is loaded into RAM first and from there to the server. And the RAM is on average around 128 MB if you do not have a dedicated server.
Firstly, go to /etc/httpd/conf/httpd.conf, search for "timeout" in it and set it to 900.

Then go to /etc/nginx/nginx.conf and add the line below inside the http { ... } block. 20M means 20 MB, so change it to however much you need:

client_max_body_size 20M;

Then reload the service by running this over SSH:

service nginx reload

Next, go to /etc/httpd/conf.d/php.conf and add this line to the last section:

LimitRequestBody 0

Then finally add this line to fcgid.conf (it's in bytes, so edit it to whatever you need):

FcgidMaxRequestLen 1073741824

Create a .htaccess file in your upload script's directory and add these lines to the top:

LimitRequestBody 0
php_value upload_max_filesize 0
php_value post_max_size 4939212390

With these settings I have transferred files over 2 GB through HTTPS, without Flash, on a server with 1 GB of RAM.

And do run these three commands over SSH; this is important:

service nginx reload
service httpd reload
service httpd restart
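Once everything is reloaded, it's also worth double-checking which values PHP actually picked up. A phpinfo() page is the definitive check for the web side, but as a quick look from the shell (assuming the CLI reads the same php.ini):

php -i | grep -iE 'upload_max_filesize|post_max_size|max_execution_time|memory_limit'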
OK, I've fallen at the first hurdle... I've searched for 'timeout' and it can't find it. What it can find, however, is...
OK, so I've just added this into the /etc/nginx/nginx.conf file:

client_max_body_size 900M;

When entering 'service nginx reload' via SSH I get 'nginx: configuration file /etc/nginx/nginx.conf test failed'. Any ideas?
And run:

cd /etc/nginx/plesk.conf.d/vhosts/
vi yourdomain.com.conf

and add the same lines, but this time inside the server { ... } block. Here is what the /etc/nginx/nginx.conf looks like:

#user nginx;
worker_processes 1;

#error_log /var/log/nginx/error.log;
#error_log /var/log/nginx/error.log notice;
#error_log /var/log/nginx/error.log info;

#pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    client_max_body_size 2000M;

    #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
    #                '$status $body_bytes_sent "$http_referer" '
    #                '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    #keepalive_timeout 0;
    keepalive_timeout 65;
    #tcp_nodelay on;

    #gzip on;
    #gzip_disable "MSIE [1-6]\.(?!.*SV1)";

    server_tokens off;

    include /etc/nginx/conf.d/*.conf;
}
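One more thing: the reload runs a config test first, and you can run that test directly to see exactly which file and line it's unhappy with (assuming nginx is on your path, which it normally is):

nginx -t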
You've lost me now. When I run 'service nginx reload' it says:

nginx: [emerg] open() "/var/www/vhosts/domain.com/conf/13659636930.47576300_nginx.conf" failed (2: No such file or directory) in /usr/local/psa/admin/conf/nginx_vhosts_bootstrap.conf:6
nginx: configuration file /etc/nginx/nginx.conf test failed

Should I skip the nginx stage?
Skip it for now and carry on with the other steps. I will get you the config, and then you can just copy and paste the whole thing into nginx later, so it should work.
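(That /usr/local/psa path in the error means the box is running Plesk, so the per-domain nginx includes are generated files. If you want to come back to the nginx part later, regenerating the web server configuration from Plesk usually recreates the missing vhost file. I believe the command is along these lines, but I don't have a Plesk box in front of me, so treat the exact path as an assumption:)

/usr/local/psa/admin/sbin/httpdmng --reconfigure-all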
server {
    listen xxx.xxx.xx:80;
    server_name domain.co;
    server_name www.domain.com;
    server_name ipv4.domain.com;
    client_max_body_size 128m;
    root "/var/www/vhost.....

Was the code inserted like this in the domain's vhosts .conf? For the nginx.conf file, remove everything and paste this:

#user nginx;
worker_processes 1;

#error_log /var/log/nginx/error.log;
#error_log /var/log/nginx/error.log notice;
#error_log /var/log/nginx/error.log info;

#pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    client_max_body_size 2000M;

    #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
    #                '$status $body_bytes_sent "$http_referer" '
    #                '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    #keepalive_timeout 0;
    keepalive_timeout 65;
    #tcp_nodelay on;

    #gzip on;
    #gzip_disable "MSIE [1-6]\.(?!.*SV1)";

    server_tokens off;

    include /etc/nginx/conf.d/*.conf;
}
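After pasting that in, test and reload in one go so you know straight away whether it parses (a sketch; the second command only runs if the test passes):

nginx -t && service nginx reload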
Right, it doesn't look like the nginx service is running, so I'll just skip that for now. I'm on the 'LimitRequestBody 0' stage. I've opened up php.conf and see the following... What section do I put the 'LimitRequestBody 0' in?
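From a quick look at the Apache docs it seems LimitRequestBody is allowed at the main server config level, so would just tacking it onto the end of the file work? Something like this (guessing here, since I'm not sure which section you meant):

# appended at the end of /etc/httpd/conf.d/php.conf
LimitRequestBody 0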