Hi guys, just wanted to contribute something, seeing that I've silently picked up so much knowledge from this forum.

Here is a script I use to scrape competitor Twitter accounts. Once scraped, I analyze the data and extract the highest-performing posts, which I can later reuse. There's a short sketch of how I do that analysis at the end of this post.

The script below is written in Python (Python 3) and relies on the Tweepy library (https://github.com/tweepy/tweepy). To grab images I just use a Windows image grabber tool I found. This can be done in Python as well (see the second sketch at the end of this post), but I needed the images on my own computer and not on the server. The link to the tool I used was not being accepted due to strange redirects, so you can simply Google "LTVT Image Grabber" if you need it.

To use the following script, just replace the credentials with your own and put it in a directory alongside the Tweepy library. If anyone has questions, feel free to ask. I'll try my best to get back to you.

Using this script to generate good Twitter content ideas, I've been able to build significant, fully organic followings for about a dozen different clients of mine. Hopefully it adds some value to your workflow as well.

Here is the script:

#!/usr/bin/env python
# encoding: utf-8

import tweepy  # https://github.com/tweepy/tweepy
import csv

# Twitter API credentials
consumer_key = "YOUR CONSUMER KEY"
consumer_secret = "YOUR CONSUMER KEY SECRET"
access_key = "YOUR ACCESS KEY"
access_secret = "YOUR ACCESS SECRET"


def get_all_tweets(screen_name):
    # Twitter only allows access to a user's most recent 3240 tweets with this method

    # authorize twitter, initialize tweepy
    auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
    auth.set_access_token(access_key, access_secret)
    api = tweepy.API(auth, wait_on_rate_limit=True)

    # initialize a list to hold all the tweepy Tweets
    alltweets = []

    # make initial request for most recent tweets (200 is the maximum allowed count)
    new_tweets = api.user_timeline(screen_name=screen_name, count=200)

    # bail out early if the account has no tweets at all
    if not new_tweets:
        print("no tweets found for %s" % screen_name)
        return

    # save most recent tweets
    alltweets.extend(new_tweets)

    # save the id of the oldest tweet less one
    oldest = alltweets[-1].id - 1

    # keep grabbing tweets until there are no tweets left to grab
    while len(new_tweets) > 0:
        print("getting tweets before %s" % oldest)

        # all subsequent requests use the max_id param to prevent duplicates
        new_tweets = api.user_timeline(screen_name=screen_name, count=200, max_id=oldest)

        # save most recent tweets
        alltweets.extend(new_tweets)

        # update the id of the oldest tweet less one
        oldest = alltweets[-1].id - 1

        print("...%s tweets downloaded so far" % len(alltweets))

    # transform the tweepy tweets into rows that will populate the csv
    outtweets = []
    for tweet in alltweets:
        # pull out the first attached image URL, if the tweet has one
        media = tweet.entities.get('media', [])
        image_url = media[0]['media_url'] if media else ''
        # engagement score: interactions per 100 followers
        # (max() guards against accounts that somehow report zero followers)
        score = (tweet.favorite_count + tweet.retweet_count) * 100.0 / max(tweet.author.followers_count, 1)
        outtweets.append([tweet.id_str, tweet.created_at, tweet.text,
                          tweet.retweet_count, tweet.favorite_count, score,
                          tweet.author.followers_count,
                          'https://twitter.com/%s/status/%s' % (screen_name, tweet.id_str),
                          image_url])

    # write the csv
    with open('%s_tweets.csv' % screen_name, 'w', newline='', encoding='utf-8') as f:
        writer = csv.writer(f)
        writer.writerow(["id", "created_at", "text", "retweet_count",
                         "favorite_count", "score", "followers", "url", "image"])
        writer.writerows(outtweets)


if __name__ == '__main__':
    # pass in the username of the account you want to download
    get_all_tweets(input("Who do you want to scrape, boss: "))
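For the analysis step I mentioned at the top, the short version is: load the CSV the scraper writes and sort it by the score column (interactions per 100 followers). Here's a rough sketch of what that looks like; the filename is just an example and top_tweets is my own helper name, so swap in whatever your scraped file is actually called:

import csv

def top_tweets(csv_path, n=20):
    # load the rows the scraper wrote and sort them by the engagement score
    with open(csv_path, newline='', encoding='utf-8') as f:
        rows = list(csv.DictReader(f))
    rows.sort(key=lambda r: float(r['score']), reverse=True)
    return rows[:n]

# example: print the 20 highest-scoring posts for a scraped account
for row in top_tweets('competitor_tweets.csv'):
    print(row['score'], row['url'], row['text'][:80])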
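And since I said grabbing the images can be done in Python too, here's a rough sketch of that using the requests library (pip install requests). This isn't what I actually run (I use the Windows tool for that), so treat it as a starting point rather than a finished thing:

import csv
import os
import requests  # third-party library: pip install requests

def download_images(csv_path, out_dir='images'):
    # walk the scraped csv and save every attached image locally
    os.makedirs(out_dir, exist_ok=True)
    with open(csv_path, newline='', encoding='utf-8') as f:
        for row in csv.DictReader(f):
            url = row['image']
            if not url:
                continue  # tweet had no image attached
            # name each file after the tweet id so it maps back to the csv row
            filename = os.path.join(out_dir, row['id'] + os.path.splitext(url)[1])
            resp = requests.get(url, timeout=30)
            if resp.ok:
                with open(filename, 'wb') as img:
                    img.write(resp.content)

download_images('competitor_tweets.csv')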