I have a main website with two relevant redirected domains successfully pointing to it since October 2022. My question is: how long does it take Ahrefs to pick up on the changes? There has been no adjustment in the DR or number of backlinks. However, Ubersuggest does show the relevant changes, with a higher DR rating and more backlinks. In fact, Ahrefs is reporting an 'HTTP server returned error 403' and a 'Fetching robots.txt took too long' error. Once again, Ubersuggest seems to have been able to work through that. What is happening? Any advice is greatly welcomed. Hope that makes sense, guys. (Oops, tried to post this earlier but didn't realize you couldn't include a link to your own website - sorry.)
In my experience, it really varies for each backlink. I think the real question here is how the Ahrefs crawler algorithm works and which websites it will therefore find faster than others. Hope that's somewhat helpful.
Thanks for replying, Starmorph. I'm also trying to understand the error codes mentioned in Ahrefs, given that Ubersuggest has been able to crawl the site. I will keep an eye out for any changes.
I would double-check your robots.txt file and make sure there is nothing in there blocking the Ahrefs crawler. I have some configuration in my robots.txt to help Ahrefs filter out pages it doesn't need to see, so if you don't have that, it may not be able to crawl effectively. Something like the sketch below, for example.
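For illustration, a minimal robots.txt along those lines might look like this (the /cart/ and /admin/ paths are hypothetical placeholders; adjust to whatever shouldn't be crawled on your site):

```
# Keep crawlers (including AhrefsBot) out of pages they don't need to see
# /cart/ and /admin/ are hypothetical example paths
User-agent: *
Disallow: /cart/
Disallow: /admin/
```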
This is where I get lost, mate. How do I see the robots.txt files of the redirected domains? Is that where I need to look, rather than my main website? When I purchased them, I just set up a straight redirect via GoDaddy.
For getting the link recognized in Ahrefs, you need to check the robots.txt of the website that hosts the link you want crawled, but checking your main website's one would be good too, IMO. A quick way to check is shown below.
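The simplest check is just visiting yourdomain.com/robots.txt in a browser, but here's a small Python sketch that does the same thing and shows the HTTP status ("example-redirect.com" is a hypothetical placeholder domain):

```python
# A quick sketch for checking a domain's robots.txt by hand.
import urllib.request
import urllib.error

def check_robots(domain: str) -> None:
    url = f"https://{domain}/robots.txt"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            print(f"{url} -> HTTP {resp.status}")
            print(resp.read().decode("utf-8", errors="replace"))
    except urllib.error.HTTPError as e:
        # A 404 here means no robots.txt; crawlers then assume no restrictions.
        print(f"{url} -> HTTP {e.code}")
    except urllib.error.URLError as e:
        print(f"{url} -> could not connect: {e.reason}")

check_robots("example-redirect.com")
```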
Just ran the Google robots.txt checker for both redirected domains and got the following message, so I think I should be okay: "robots.txt not found (404). It seems like you don't have a robots.txt file. In such cases we assume that there are no restrictions and crawl all content on your site."
It could be okay, but I think it actually helps the crawler if you disallow the pages that shouldn't be public. For example, if you have customer information or private pages, you want a robots.txt to keep crawlers away from those files. Plus, if you are getting a 'Fetching robots.txt took too long' error, I think having one (even a blank one) would help resolve that; see the minimal example below. But good to hear there isn't anything being blocked from being crawled that should be open.
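For reference, a completely permissive robots.txt (an empty Disallow means nothing is blocked) is just:

```
# Allow every crawler to fetch everything
User-agent: *
Disallow:
```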
It usually takes Ahrefs several days to several weeks to find and index fresh backlinks. The duration may differ depending on variables such as how often the linking site is crawled, the significance of the linked site, and the overall scale of the web. Ahrefs can find new backlinks more quickly if you update and promote your content on a regular basis.
The time to discovery depends on the distance (the number of hops) between the page with the link and the pages that have already been explored, and possibly on the citation weight of the page with the link, which is calculated while nearby pages are explored. The smaller the distance, and the greater the citation weight, the sooner the page with the link will be explored. The distance factor is universal across search engines, while citation weight may or may not be used. I use citation weight in my own search engine (leak.info) for scheduling incoming links, while other search engines may use citation weight, or something else, to schedule exploration. So, if you want your incoming links to be discovered sooner, place them on famous pages that have existed for a long time and have many strong incoming links, like the start page of the usa.gov directory. A toy sketch of this scheduling idea is below.
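To make that concrete, here's a toy Python sketch of a crawl frontier that prioritizes pages by hop distance first, then by citation weight (all page names and weights are made up for illustration, not taken from any real crawler):

```python
# Toy crawl frontier: explore pages ordered by hop distance, then citation weight.
import heapq

def crawl_order(graph, weights, seeds):
    """Yield pages in the order a crawler following this policy would visit them.

    graph:   dict mapping page -> list of pages it links to
    weights: dict mapping page -> citation weight (higher = explored sooner)
    seeds:   pages already known to the crawler (distance 0)
    """
    # heapq pops the smallest tuple, so negate weight to prefer heavy pages.
    frontier = [(0, -weights.get(p, 0), p) for p in seeds]
    heapq.heapify(frontier)
    visited = set()
    while frontier:
        distance, _, page = heapq.heappop(frontier)
        if page in visited:
            continue
        visited.add(page)
        yield page, distance
        for linked in graph.get(page, []):
            if linked not in visited:
                heapq.heappush(frontier, (distance + 1, -weights.get(linked, 0), linked))

# Hypothetical example: a link placed on a famous hub page ("hub") gets
# "your-site" found two hops from the seed, while the copy of the link on
# "deep-page" would only have been reached after three hops.
graph = {
    "seed": ["hub", "blog"],
    "hub": ["your-site"],
    "blog": ["archive"],
    "archive": ["deep-page"],
    "deep-page": ["your-site"],
}
weights = {"hub": 90, "blog": 10, "archive": 5, "deep-page": 1}
for page, dist in crawl_order(graph, weights, ["seed"]):
    print(dist, page)
```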