I need some help: I want to grep all links from a website with wget: wget --spider --force-html -r -l1 somewebsite. I need to save only the URLs (no .js files, images, etc.) to a text file.
Hi there,

I think that something like the following will work. Note that in --spider mode wget does not actually save files, so grepping for "saved" finds nothing; instead, filter on the request lines, which begin with "--" and carry the URL as the third whitespace-separated field:

wget -r -l 1 --spider --force-html yoursite 2>&1 | grep '^--' | awk '{print $3}' | grep -iv '\.js' | grep -iv '\.jpg' | grep -iv '\.gif'

Best Regards,
Marcel Preda
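To show the filtering step end to end without hitting the network, here is a hedged sketch that runs the same pipeline over a small sample of what wget's spider log typically looks like (the sample lines and example.com URLs are made up for illustration; in practice you would produce spider.log with something like "wget -r -l 1 --spider --force-html yoursite 2> spider.log"). It also collapses the chain of grep -iv calls into one case-insensitive extended regex:

```shell
#!/bin/sh
# Hypothetical sample of wget --spider output; request lines start
# with "--<timestamp>--" and the URL is the 3rd whitespace field.
cat <<'EOF' > spider.log
--2024-01-01 12:00:00--  http://example.com/index.html
Spider mode enabled. Check if remote file exists.
--2024-01-01 12:00:01--  http://example.com/js/app.js
--2024-01-01 12:00:02--  http://example.com/img/logo.gif
--2024-01-01 12:00:03--  http://example.com/about.html
EOF

# Keep only request lines, pull out the URL, drop js/image/css
# extensions in one regex, and de-duplicate into a text file.
grep '^--' spider.log \
  | awk '{print $3}' \
  | grep -ivE '\.(js|jpg|jpeg|png|gif|css)$' \
  | sort -u > urls.txt

cat urls.txt
```

With the sample log above, urls.txt ends up containing only the two .html URLs; swap the here-doc for real wget output to use it for your site.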