Check whether you have a robots.txt file by visiting yoursite.com/robots.txt. You may be surprised to find you are withholding pages, folders, or images from search engines that could be driving traffic to your site. Additionally, run a site crawl with a tool such as Screaming Frog to see whether any pages are being excluded via a meta robots tag. Both issues are very quick fixes once found. Unintentionally tagged pages or stray robots.txt entries usually trace back to a developer who forgot to remove the directives when a new page went live, or to a previous site administrator who deemed the content unimportant.
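If you want a quick programmatic spot-check before running a full crawl, here is a minimal sketch using only the Python standard library. It tests both problems described above for a single page; the site and page URLs are placeholders you would swap for your own, and a crawler like Screaming Frog is still the right tool for checking an entire site.

    import re
    import urllib.request
    import urllib.robotparser

    site = "https://www.example.com"        # placeholder: your domain
    page = site + "/some-landing-page/"     # placeholder: a page to test

    # 1. Does robots.txt block search engine crawlers from this page?
    rp = urllib.robotparser.RobotFileParser()
    rp.set_url(site + "/robots.txt")
    rp.read()
    print("Googlebot allowed:", rp.can_fetch("Googlebot", page))

    # 2. Does the page itself carry a meta robots noindex tag?
    html = urllib.request.urlopen(page).read().decode("utf-8", errors="ignore")
    if re.search(r'<meta[^>]+name=["\']robots["\'][^>]+noindex', html, re.I):
        print("Warning: page has a meta robots noindex tag")

If either check flags a page you want indexed, remove the offending Disallow rule from robots.txt or delete the noindex directive from the page template.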