Got hacked and found out the attacker uploaded a shell to your server, but you can't find the shell? Well, let me tell you how you can sort this problem out.

The command below searches every .php file for the r57 shell:

find /var/www/ -name "*.php" -type f -print0 | xargs -0 grep r57 | uniq -c | sort -u | cut -d":" -f1 | awk '{print "rm -rf " $2}' | uniq

The command below searches every .txt file for the r57 shell:

find /var/www/ -name "*.txt" -type f -print0 | xargs -0 grep r57 | uniq -c | sort -u | cut -d":" -f1 | awk '{print "rm -rf " $2}' | uniq

If you're searching for a c99shell, replace r57 with c99shell in the commands above. That's it.
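If you want to see what the pipeline does before pointing it at a live server, here is a sketch of a safe dry run on a throwaway directory (the /tmp paths and file contents are made up for the demo). Note that the pipeline only prints rm commands for you to review; it deletes nothing by itself.

```shell
# Set up two fake files: one "infected" with the r57 marker, one clean.
mkdir -p /tmp/r57_demo
printf '<?php // r57 backdoor\n' > /tmp/r57_demo/infected.php
printf '<?php echo "ok";\n'      > /tmp/r57_demo/clean.php

# Run the same pipeline as above against the demo directory.
find /tmp/r57_demo -name "*.php" -type f -print0 \
  | xargs -0 grep r57 | uniq -c | sort -u | cut -d":" -f1 \
  | awk '{print "rm -rf " $2}' | uniq
# prints: rm -rf /tmp/r57_demo/infected.php
```

Both files are still on disk afterwards; you decide whether to paste the printed rm command back into the shell.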
Some of us may not be very skilled with bash. Please explain each of the bash commands/arguments you used, in a brief line each, for clarification.
I'll break the command up into very small pieces.

find /var/www/

The find command is used to find files and folders on your disk. The first argument (/var/www/ in this case) tells the command where to start looking. It goes down the tree from there, but it won't go up. If your site is in /home/username/public_html/ then you will need to put that instead of /var/www/.

-name "*.php"

This tells find to look for files with a certain name. You describe the name you want with a pattern syntax very similar to bash globbing. This means that "*.php" will find anything ("*") that ends in ".php".

-type f

This specifies that you are only interested in normal, regular files and not directories or symlinks or any other kind of file. find can't tell the difference between two hardlinks to the same inode, so if you have any of those then you might end up finding a file twice. On the other hand, if you have any of those then you don't need me to tell you how to use find.

-print0

This tells find to print out everything it finds separated by null characters. Normally find prints one name per line, and xargs splits its input on whitespace, which can cause problems if a file has a space in its name. Null characters are not valid in file names and hence can safely be used as a filename delimiter.

| xargs -0 grep r57

The pipe symbol ( | ) tells Unix to pass the output of the first command in to the second command. xargs is a fancy little program that turns the output of one program into command-line arguments for another program, rather than piping it straight in. In this case, it causes grep to look inside the files we just found for "r57", rather than looking for "r57" in the names of the files we just found. The -0 option tells xargs that we are passing it something delimited by null characters.
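To see why the -print0/-0 pair matters, here is a small sketch using a throwaway directory (the /tmp path and file names are invented for the demo). Without null delimiters, the space in the file name would be split into two bogus arguments:

```shell
# One file with a space in its name containing the marker, one clean file.
mkdir -p /tmp/shellscan_demo
printf '<?php // r57 marker\n' > "/tmp/shellscan_demo/evil shell.php"
printf '<?php echo "clean";\n' > /tmp/shellscan_demo/clean.php

# With -print0 and -0, the name with a space survives intact.
# (-l makes grep print only the names of matching files.)
find /tmp/shellscan_demo -name "*.php" -type f -print0 | xargs -0 grep -l r57
# prints: /tmp/shellscan_demo/evil shell.php
```

If you drop -print0 and -0, xargs would hand grep the two halves "evil" and "shell.php", neither of which exists.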
| uniq -c

uniq is a general-purpose command that combines consecutive identical lines into a single line. The -c option tells uniq to print, in front of each line, the number of lines it combined into one. Normally uniq is used after sort, because the input to uniq needs to be sorted; in this case, grep's output is already grouped file by file, so identical lines from the same file are adjacent.

| sort -u

sort, surprisingly enough, accepts unordered input and outputs it in sorted order. The -u option outputs only the first of a run of equal lines, hence duplicating the effort of the previous uniq command but without the count prepended.

| cut -d":" -f1

cut enables you to cut a line of output into discrete chunks and only output some of them. The -d option specifies a delimiter and the -f option specifies which field to output. Of the commands that have already been run, the only one that outputs a colon ( : ) is grep, and then only because it was passed multiple files as input. It prints the file name and path first, followed by the colon and then the line that matched. This cut command discards the line that matched, keeping only the file name and the count output by our earlier uniq.

| awk '{print "rm -rf " $2}'

awk is a powerful program which combines many of the features of grep, cut, and sed (which hasn't been used here) and contains its own complete language; you can actually write programs in awk. In this case, it prints "rm -rf " and then the second field, as split on its default delimiter, whitespace. The first field at this stage is the count and the second is the file name and path. rm is a command that removes a file. The -rf options tell rm to delete recursively and to force continuation past errors without prompting.
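A small reproduction of these middle stages, fed with canned grep-style output so you can watch what each one does (the file names and matched lines are made up):

```shell
# Fake grep output: filename:matched-line pairs, two hits in a.php, one in b.php.
printf '%s\n' \
  '/var/www/a.php:$r57 = 1;' \
  '/var/www/a.php:eval($r57);' \
  '/var/www/b.php:// r57 here' |
uniq -c |                      # collapse identical consecutive lines, prepend a count
sort -u |                      # sort; drop exact duplicate lines
cut -d':' -f1 |                # keep only the "count filename" part before the colon
awk '{print "rm -rf " $2}' |   # $1 is the count, $2 the filename
uniq                           # a.php appeared twice; collapse to one rm command
# prints:
#   rm -rf /var/www/a.php
#   rm -rf /var/www/b.php
```

Note how the two a.php hits become identical lines only after cut strips the matched text, which is why the final uniq is needed.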
If you "rm -rf" a directory, it will delete all of the directory's contents before deleting the directory itself. In this case, because we are only passing it file names, -r is unnecessary and -f is unlikely to be needed. I should also note that this rm command is only printed, not actually run. The idea appears to be that you review and then copy-and-paste the commands that are printed out.

| uniq

Finally, the whole lot is passed through uniq to make sure we don't have any particular file listed more than once. The lines, however, are now sorted by the number of times "r57" appeared in each file, and hence it is still possible, however unlikely, that a single file could be listed more than once. It won't hurt if you try to delete a file twice; you will simply get an error message on the second attempt.

Excruciating detail. You did ask.

If I were writing a command to accomplish the same end, I would have written it differently:

grep -r r57 /var/www/ | cut -d':' -f1 | sort | uniq

I would then have checked each file out one by one and, if all of them were actually malicious, run the same command again with "| xargs rm" tacked on the end (rm doesn't read file names from standard input, so xargs is needed to turn them into arguments):

grep -r r57 /var/www/ | cut -d':' -f1 | sort | uniq | xargs rm

But that's the beauty of Unix. Not only can you accomplish anything on the command line by simply chaining commands together, there's more than one way to do anything by chaining commands together. A breakdown of exactly what this command does, piece by piece, is left as an exercise for the reader.
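Here is a sketch of that check-then-delete approach on a throwaway directory (the /tmp paths and contents are invented for the demo). grep's -l option prints only the names of matching files, which also spares you the cut/sort/uniq steps:

```shell
# Set up one "malicious" file and one clean one.
mkdir -p /tmp/shellscan_rm_demo
printf '<?php // r57\n' > /tmp/shellscan_rm_demo/bad.php
printf '<?php // ok\n'  > /tmp/shellscan_rm_demo/good.php

# Step 1: list the suspicious files and review them by hand.
grep -rl r57 /tmp/shellscan_rm_demo

# Step 2: once you've confirmed they are malicious, delete them.
# (rm ignores stdin, so xargs bridges the pipe. Plain xargs splits on
# whitespace; on GNU systems pair grep's -Z with xargs -0 for odd names.)
grep -rl r57 /tmp/shellscan_rm_demo | xargs rm
```

After step 2, bad.php is gone and good.php is untouched.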
Thanks a lot for taking the time to post such precise details. I hope it helps many people in their efforts to find, secure, and clean up their hacked servers.
Thanks for your detailed post. And to Seleno: those are shell commands, so running them in PuTTY should work.
Now one more question. In the thread http://forums.digitalpoint.com/showthread.php?t=547049 we had a site hacked via a .gif file containing PHP code. I created a file with a .gif extension, added that PHP code, and ran the above-mentioned bash line adapted to search for the PHP opening tag "<?":

find /my_test_folder/ -name "*.gif" -type f -print0 | xargs -0 grep "<?" | uniq -c | sort -u | cut -d":" -f1 | awk '{print "rm -rf " $2}' | uniq

The only output I get is:

rm -rf <?

with no file name showing where the string was found, as opposed to the normal output listing the files containing the searched string. Any solution to search ANY files, especially .jpg, .gif, etc., for hacker-code strings contained in them?
grep behaves differently if you pass it exactly one file than if you pass it more than one. If you only have one .gif file, grep won't print out the file name and the colon, so cut and awk end up chopping up the matched line itself instead of a file name. Just make another .gif file in the same directory and it should be a bit happier.
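You can see the difference in a throwaway directory (the /tmp path and contents are made up). GNU and BSD grep also support -H, which forces the filename prefix even for a single file, so the pipeline works regardless of how many files match the glob:

```shell
# One lone .gif containing a PHP opening tag.
mkdir -p /tmp/grepdemo
printf 'x <? y\n' > /tmp/grepdemo/one.gif

grep '<?' /tmp/grepdemo/one.gif       # one file: no filename prefix
# prints: x <? y

grep -H '<?' /tmp/grepdemo/one.gif    # -H forces the filename prefix
# prints: /tmp/grepdemo/one.gif:x <? y
```

With xargs, that would be xargs -0 grep -H "<?" in place of xargs -0 grep "<?".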
Correct. I originally used it on a bunch of files, but only one of them was a .gif. Now, with more than one .gif in the same folder, I get the correct output.
Thanks for the information, but that alone is not enough for web security. An attacker may still be able to get at the server another way, for example through the MySQL server or via functions such as curl.
This will only work on script kiddies. Other people will base64-encode their shell and use the eval() function to run the code. Your script won't find that.
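One partial answer is to grep for the decoder calls instead of the shell's name. This is only a sketch and catches common patterns; a determined attacker can still evade it (e.g. by building the function name from string pieces). The /tmp paths and contents are invented for the demo:

```shell
# A fake obfuscated shell: eval(base64_decode("...")) is a classic giveaway.
mkdir -p /tmp/obf_demo
printf '<?php eval(base64_decode("cGhwaW5mbygpOw=="));\n' > /tmp/obf_demo/hidden.php
printf '<?php echo "hello";\n' > /tmp/obf_demo/index.php

# -E enables extended regexes; -r recurses; -l prints only matching file names.
grep -rlE 'eval\((base64_decode|gzinflate|str_rot13)' /tmp/obf_demo
# prints: /tmp/obf_demo/hidden.php
```

Treat the hits as candidates for manual review, not as proof of compromise: some legitimate code also uses these functions.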