Ok, I have a really good brain teaser for you. You gain better SERPs by linking to sites that rank high and are respected by the search engines. Those sites got their good SERPs and PR by linking to sites that have respect and good PR. But how does this chain start? How did Google start out its PR process? Did they just give sites like Adobe.com a PR 10, so anyone who linked to them started to have juice, and it's a whole chain from there? Kind of like how an STD spreads, but in a good way? What can break the chain? How can you start a new chain?
PageRank was not something that Google started along with their search engine. G started as a search machine, and to rank in it websites needed links, so webmasters went ahead with getting links. At one point in time G decided to assign PageRanks, so at that point whoever had the most links got the rank, and from there the chain started... You want to break the chain? Well... the only thing I can think of is "G" going bust.
Hmmm. I think you've got that backwards. You gain better SERP rankings by having other sites link to you. Yes, linking to other "trusted" sites does help one of the ranking factors supposedly used at Google (domain trust), but sites linking to your site has MUCH, MUCH more influence on your rankings than you linking to others.

Nope... as I said, your URL linking to adobe.com does NOT add to your PR. When Adobe.com links to YOUR URL, then you get PR from them.

Perhaps if you read "The Anatomy of a Large-Scale Hypertextual Web Search Engine", written by Larry Page and Sergey Brin while at Stanford, which became the blueprint for what we know as Google, you'd have a better understanding of PageRank: what it is, why it was seen as important then, and how they calculated it. Pay special attention to section 2, where they explain PageRank.

This is incorrect. In the above document, before Google ever officially existed as a real search engine on the web, Larry Page and Sergey Brin said they would use two important factors in ranking pages in their new "Google" search engine: 1) PageRank and 2) anchor (link) text. PageRank has been around and used in Google since the beginning of Google. Even as far back as the demo version at google.stanford.edu that they referenced in section 2.1 of the original document, they specifically mentioned the demo version as using PageRank to prioritize the indexed documents. So why would one think that it was some afterthought added to Google at a later date? It is true that webmasters had no way of knowing their relative PR until the advent of the Google Toolbar much later, but Google has always maintained a "real" PR value for all URLs in its index and used it in their ranking algorithm.
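For anyone who doesn't want to dig through the whole paper, the PageRank formula Page and Brin give in section 2.1 looks like this, where d is a damping factor (they suggest setting it around 0.85), T1...Tn are the pages linking to A, and C(T) is the number of links going out of page T:

```latex
PR(A) = (1-d) + d\left(\frac{PR(T_1)}{C(T_1)} + \cdots + \frac{PR(T_n)}{C(T_n)}\right)
```

Notice that the formula only involves pages linking TO you; your own outbound links appear only as the divisor that splits YOUR PR among the pages you link to.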
This was what was drastically different about Google in the early days in comparison to the other search engines of the day: seeing inbound links from other sites as votes for a URL (and using them to calculate PR, which was then used to rank the page), and seeing the link text as giving strong clues as to what the page was about. They have refined and refined and refined the algorithm over time, and PR plays a MUCH smaller role now than it did in the early days, but it has always been there and used for ranking.

"Seeding" their PageRank algorithm would be quite simple. I recall having read at some point how they did this. I think it was similar to the following: Index as much of the web as you can. Assign every page in your index an initial PR of 1 (this is REAL PR, which can be a HUGE number, not the logarithmic 0-10 scaling of real PR that you see in the Google Toolbar). Then make a complete pass through the index looking at each URL, finding all of its outbound links, dividing the PR of the existing page (minus the damping or decay factor) by the number of outbound links, and passing that much PR to each outbound link (so add a fraction of the current page's PR to the PR of each page being linked to). Make several passes through the index recalculating the PRs of all of the URLs based on their new values, after which point they start to settle down.

Real PageRank is a recursive, cyclical algorithm that is constantly being recalculated in their index as they crawl and index the web. Once per quarter (even more frequently lately, it seems) Google takes a "snapshot" of all the real PageRank values for all of the URLs in its index, scales them to a 0-10 number using some type of logarithmic algorithm similar to the Richter scale, and publishes those so they can be displayed in the Google Toolbar as a toolbar PR.
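The seeding process described above can be sketched in a few lines of code. This is only an illustration of the iterative idea, not Google's actual implementation: the four-page link graph, the 0.85 damping factor, and the base-2 toolbar scaling are all assumptions made up for the example.

```python
import math

DAMPING = 0.85  # damping factor suggested in the original paper

# Hypothetical link graph: page -> pages it links out to
links = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}

# Step 1: assign every page an initial "real" PR of 1
pr = {page: 1.0 for page in links}

# Step 2: make repeated passes; each page splits its damped PR
# evenly among its outbound links and passes that share along
for _ in range(50):
    new_pr = {page: (1 - DAMPING) for page in links}
    for page, outlinks in links.items():
        share = DAMPING * pr[page] / len(outlinks)
        for target in outlinks:
            new_pr[target] += share
    pr = new_pr  # values settle down after enough passes

# Step 3: compress real PR onto a rough 0-10 logarithmic "toolbar" scale
# (base-2 here is an arbitrary guess; Google never published theirs)
def toolbar_pr(real_pr, base=2.0):
    return min(10, max(0, int(math.log(real_pr + 1, base))))

for page in sorted(pr):
    print(page, round(pr[page], 3), toolbar_pr(pr[page]))
```

Run it and you'll see why inbound links are what matter: page C, which three pages link to, ends up with the highest PR, while page D, which links out but gets no links in, bottoms out at the minimum (1 - d) value no matter how many passes you make.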
If a page is too new to have been ranked, or does not exist in their index at all, the Google Toolbar will show "Page not currently ranked by Google."
Anyone who has been doing this for a while will remember the old Google Dance, during which PR was recalculated and rankings were in a state of turmoil. All of this was prior to the so-called "Big Daddy" update, when it is theorized there were major infrastructure changes at the plex. We do know that PR is now calculated (or at least estimated) on the fly, and the index is constantly being updated to one degree or another. We also know that the PR shown in the toolbar is at best an approximation, and perhaps even a red herring at this point. PR is as important as ever, but is largely hidden. Still, PR and anchor text are arguably the most important ranking factors to this day. Do an "allinanchor:" search to see what I mean...