I'm blessed: the new site I launched 3 weeks ago has the home page indexed, nothing more, but nothing less, and as Edgar Allan Poe said, "Nevermore."
Yeah, this is happening to my sites too. One site that had around 50 pages indexed now has only 1 page indexed, but my new site just got its first page indexed.
Many of my sites have been affected in a horrible way, like dropping from 50k to 10. But I need to ask something about the difference between site:www.domain.com (I get 50k) and site:domain.com (I get 10). Should they return the same number of pages?
Any help please. And would you please tell me which DC is the right one to check: http://66.249.93.104/ , http://64.233.161.104 , http://72.14.203.99 ? Each one gives me a different number.
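If you want to compare those numbers systematically, a rough script like this could poll each DC for both the www and non-www site: queries. The IPs are just the ones listed above, and the "of about" parsing is only my guess at how the result page is worded, so treat this as a sketch rather than a reliable counter:

```python
# Rough sketch: ask each datacenter IP for both site: variants and compare
# the reported result counts. The "of about" regex is an assumption about
# how the result page phrases the count; adjust it to whatever you actually see.
import re
import requests

DATACENTERS = ["66.249.93.104", "64.233.161.104", "72.14.203.99"]
QUERIES = ["site:www.domain.com", "site:domain.com"]  # replace with your own domain

def reported_count(ip, query):
    # Hit the datacenter directly, but send the normal Google host header.
    resp = requests.get(
        f"http://{ip}/search",
        params={"q": query},
        headers={"Host": "www.google.com", "User-Agent": "Mozilla/5.0"},
        timeout=10,
    )
    match = re.search(r"of about <b>([\d,]+)</b>", resp.text)
    return int(match.group(1).replace(",", "")) if match else None

for ip in DATACENTERS:
    for query in QUERIES:
        print(ip, query, reported_count(ip, query))
```

At least that way you can see all the DC numbers side by side instead of checking them one at a time.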
I get a lot of headaches monitoring Google's data centers and seeing different results, but I can't help it. Finally I found something really funny: when trying to track the indexed pages of a very new website (1 month old), I see two results for the homepage, and when I check the cached pages, the second one is very strange and belongs to a different website (a parked domain): site:artificialdiamonds.net
The sites in my network that were hurt the most were the ones that hadn't been indexed for longer than 2-3 months. Many were reduced to home page only, some to 3-4 pages on a site:domain.com search with the show-omitted-results indicator. They weren't supplemental... nor too similar to other pages of the site. I've learned to just roll with the punches on these things... keep building. You can get stuck DC watching for months if you aren't careful. Just focus on the big prize, nose to the grindstone folks.
I have 10 small sites, ranging from 10 pages to 50 pages. I used the same process in SEOing them all, except one (site M), where I included more KWs in the URLs (a little spammy). All pages in all sites, up to the last couple of updates, have or had 20-30 KWs in the top SERPs in G.

In the last 6 months or so, all my sites have taken hits and seem to be bouncing: some of my less supported KWs are at #3 one day, not in the top 10,000 the next day, and back on top of the SERPs a week later. Except "site M", which has been hit hardest, especially those pages that have spammy URLs, which have been dropped from the cache, index and SERPs, and now obviously get 0 referrals from G.

When I do a site: on site M, most of the spammy URL pages are gone. BUT there is a page displayed that hasn't existed on the server in more than 2 years! If G were running out of space, they would clean out old pages that no longer exist before pages that are current. I have heard G is known to archive every piece of data it has ever collected; again, I would think they would start dumping old data before current data.

In trying to find some common ground: is it possible that G is looking harder at over-optimization and KW spam? Do your sites that are hardest hit use a lot of KWs in the URL? Or are they maybe over-optimized?
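Just to make "spammy URLs" a bit more concrete, here is a rough way one could flag keyword-heavy URLs on a site. The word-count cutoff and the repeated-keyword check are only my own assumptions about what "a little spammy" means, not anything G has confirmed:

```python
# Rough heuristic for spotting keyword-stuffed URLs: count the words in the
# path and flag paths with many words or repeated words. The cutoff of 5 is
# an arbitrary assumption, not a known Google threshold.
import re
from collections import Counter
from urllib.parse import urlparse

def url_keyword_stats(url):
    path = urlparse(url).path
    words = [w for w in re.split(r"[-_/.]+", path.lower())
             if w and w not in {"html", "htm", "php"}]
    counts = Counter(words)
    repeated = [w for w, n in counts.items() if n > 1]
    return len(words), repeated

urls = [
    "http://www.example.com/blue-widgets/cheap-blue-widgets-best-blue-widgets.html",
    "http://www.example.com/about.html",
]
for url in urls:
    word_count, repeated = url_keyword_stats(url)
    flag = "possibly spammy" if word_count > 5 or repeated else "looks normal"
    print(f"{url}: {word_count} words, repeated={repeated} -> {flag}")
```

Running something like this over the pages that were dropped versus the ones that survived might show whether the KW-heavy URLs really are the common factor.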
Here is a good one for all of us. I thought, what better way to find out how authoritative the world's self-proclaimed best indexer of information is? Here is how: check how many pages of another authority it contains in its own index. You know, like a genome containing many strings of DNA that were verified and built by other authorities, etc. Here we go: the LIBRARY OF CONGRESS. site:http://loc.gov

Enjoy your 500-1000 results (including supplementals) with a ****load (oh, only about 34 million or so) of unavailable but previously indexed pages. Loving it. I want to see a syndicated article on this from any decent press agency that isn't too lazy and doesn't just dismiss webmasters complaining about Google's utter inability to search basic data as sour grapes. Of course, if you do site:http://loc.gov +"whatever popular word" it will still max you out at 1000 results. And so will Yahoo. I guess it's also about how data are to be presented.

My own issue is that I have just 5 of 600 pages (down from about 65 before BD) of a decent non-spammy site listed with Google, with no supplementals or word from anyone on what to do to get things going upwards. Wish people searched on other engines more. Wish Google competed with hundreds of decent search engines for advert money, because I think a quarter for 100 clickthroughs on an ad is kind of affordable for me, not 50 bucks. Otherwise this whole ecommerce thing is gonna be limited to about, well, 1000 sites that thrive. Sorry for the AdSense people who will be losing their incomes as soon as other search engines and alternative traffic drivers replace AdWords as the only means of getting exposure. Good riddance, search engine traffic from Google. Oh yeah, and put your money on their and other SEs' stocks plummeting, just a matter of time. And a matter of time before there is an open source search engine integrating tons of good algorithms, residing on a network of PC users and available to be syndicated on any site. I guess that's where traffic will be coming from in the future. I wish P2P networks indexed data like search engines already do (why can't they? hello, anyone? LimeWire? Soulseek? work on those interfaces now!!!). I know I live in a fantasy world... hehe. Well, everyone making money off the web right now kinda does.
You misunderstood. I think that both current SEs and DMOZ clones are bottlenecks, albeit end users can still find stuff in both. But to eliminate the bottlenecks, I think, will require stepping away from the concept of capacities. What I meant by open source is that a little bit of anarchy would help sort out the whole "we don't want crap in our index" thing. I mean, there is tons of crap on peer-to-peer networks and most people are still able to locate what they need, lol. Otherwise, why would they be so popular? Nobody peer-reviews those things, and yet they are efficient at disseminating information.

I don't know. My point was that search engine decentralization would help the end user and businesses alike by driving the cost of doing business lower and prices lower and offering more choices. It wouldn't help people relying on AdSense for income, since earnings would be proportionately lower as a result of competition between SEs for advertisers' money. Although one would probably need hundreds of such SEs before the costs would go down in any significant way. I guess it's ultimately about an SE's reach.

The difficulty with trying to index everything currently is that there is not enough capacity even for giant companies, partly because of spammers (if you ask me, they are children of Google AdSense anyway, so eliminate AdSense as a viable source of income, or hire a bunch of people to track the sites that display AdSense and ban owners for life for wrongdoing (since Google cuts them checks anyway!!!), and there will be no spam. Gasp, I will get killed for this). So that's why my not-so-original idea of finding other direct avenues to connect the supplier of information with the end user. We see wikis, search comparison engines, etc., but these are all baby steps IMHO. I digress. I just hate playing by some arbitrary rules that Google et al tend to impose on webmasters/stores/everyone, with no recourse for those that don't want to play nicely and punishments for those that do...
I think you misunderstand how Google is structured. Google is quite decentralized - it's a whole network of datacenters accessed according to server loads and geography. I don't share your optimism about an open source search engine, no matter how it's organized. If you think spamming and black hat SEO is a problem in existing search engines, multiply that exponentially and you might get some idea of how much crap you'd have to wade through to find anything at all. As for P2P networks as a model, they are already bad enough - the ratio of crap to anything useful there is huge - but try to imagine what that would be like if P2P was a commercial network, selling products and services, instead of swapping pirated images, music, movies, and software - that crap ratio would expand beyond your imagination.
I read today in the WMW forum feedback from GG (one of Google's staff) for a member there, asking him to fill in a reinclusion form. I just want to make sure I know what filling in a reinclusion form means... doesn't that mean your site is banned from Google?

As for me, on some of my websites I see a variety of symptoms:
- Pages dropping, and for two websites just the homepage is left (pages with PR3 dropped)
- Supplemental pages were dropped
- Nothing changed in my SERPs - still in the same positions for my main keywords
- Google is still visiting my sites and the cached copy has an updated date, but no new pages are being added to the index

Are those sites banned, or penalized, or affected by the Big Daddy update? And should I file a reinclusion request for those sites?
Weird... I just had a site that was deindexed get indexed again for a few days with some good SERPs, only to be deindexed again a few days later.
I noticed the same thing. It says pages are blocked by robots.txt but the test page shows that they are allowed.
Same here. I tried to contact the G Sitemaps staff to point this out but haven't received any reply yet.
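One way to double-check it from your side is to run the same URLs through a standard robots.txt parser and see whether it agrees with what Sitemaps reports. Just a sketch, with example.com standing in for your own site and the "blocked" URLs made up:

```python
# Cross-check: parse your live robots.txt with Python's standard parser and
# test the URLs that Sitemaps claims are blocked. If this also says "allowed",
# the robots.txt itself is probably fine and the Sitemaps report is the odd one out.
from urllib.robotparser import RobotFileParser

SITE = "http://www.example.com"  # stand-in for your own site
rp = RobotFileParser()
rp.set_url(f"{SITE}/robots.txt")
rp.read()

urls_reported_blocked = [
    f"{SITE}/",
    f"{SITE}/some-page.html",
]
for url in urls_reported_blocked:
    allowed = rp.can_fetch("Googlebot", url)
    print(f"{url}: {'allowed' if allowed else 'blocked'} for Googlebot")
```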
For me: one website dropped from 80K pages indexed to 43, and another from 20K to 700+. These days we should take a vacation and not look at the SERPs or the index.
For the dropping pages it is OK, as we now know that Supplemental pages will not be included in site:, so we will see smaller numbers than we were used to seeing in the past. But the problem I really count as serious is this: Google keeps dropping non-Supplemental pages with good PR 2-3 (I can't see them even if I search for a specific word from them within the site), and Google hasn't indexed any new pages in the last month. Maybe it is time for a vacation, as seopup said.
They went away and now they're back, so they reverted whatever crap they did last. Or only did a bit of whatever.
Somewhere I saw that Google said they have a "machine crisis". Maybe that's why so many pages are gone?