Search engine optimization

Discussion in 'Search Engine Optimization' started by magnetudaya, Aug 12, 2008.

  1. magnetudaya

    #1
    Search engine optimization (SEO) is the process of improving the volume and quality of traffic to a web site from search engines via "natural" ("organic" or "algorithmic") search results for targeted keywords. Usually, the earlier a site is presented in the search results or the higher it "ranks", the more searchers will visit that site. SEO can also target different kinds of searches, including image search, local search, and industry-specific vertical search engines.

    As a marketing strategy for increasing a site's relevance, SEO considers how search algorithms work and what people search for. SEO efforts may involve a site's coding, presentation, and structure, as well as fixing problems that could prevent search engine indexing programs from fully spidering a site. Another class of techniques, known as black hat SEO or spamdexing, uses methods such as link farms and keyword stuffing that tend to harm the search engine user experience. Search engines look for sites that employ these techniques and may remove them from their indices.

    The initialism "SEO" can also refer to "search engine optimizers", a term adopted by an industry of consultants who carry out optimization projects on behalf of clients, and by employees who perform SEO services in-house. Search engine optimizers may offer SEO as a stand-alone service or as part of a broader marketing campaign. Because effective SEO may require changes to the HTML source code of a site, SEO tactics may be incorporated into web site development and design. The term "search engine friendly" may be used to describe web site designs, menus, content management systems, URLs, and shopping carts that are easy to optimize.

    History
    Webmasters and content providers began optimizing sites for search engines in the mid-1990s, as the first search engines were cataloging the early Web. Initially, all a webmaster needed to do was submit a page, or URL, to the various engines, which would send a spider to "crawl" that page, extract links to other pages from it, and return information found on the page to be indexed.[1] The process involves a search engine spider downloading a page and storing it on the search engine's own server, where a second program, known as an indexer, extracts various information about the page, such as the words it contains and where they are located, as well as any weight for specific words. All links the page contains are then placed into a scheduler for crawling at a later date.
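
    Very roughly, that crawl-and-index cycle might look like the Python sketch below; the seed URL and the simple word-to-page index are illustrative assumptions, not any real engine's implementation.

        # Toy crawl/index loop: fetch a page, record its words in an index,
        # and queue its outbound links for a later crawl.
        from html.parser import HTMLParser
        from urllib.request import urlopen
        from urllib.parse import urljoin

        class PageParser(HTMLParser):
            def __init__(self):
                super().__init__()
                self.links, self.words = [], []

            def handle_starttag(self, tag, attrs):
                if tag == "a":                          # collect outbound links
                    href = dict(attrs).get("href")
                    if href:
                        self.links.append(href)

            def handle_data(self, data):
                self.words.extend(data.split())         # collect visible text

        def crawl(url, index, schedule):
            html = urlopen(url).read().decode("utf-8", errors="ignore")
            parser = PageParser()
            parser.feed(html)
            for word in parser.words:                   # the "indexer" step
                index.setdefault(word.lower(), set()).add(url)
            schedule.extend(urljoin(url, link)          # links queued for later
                            for link in parser.links)

        index, schedule = {}, []
        crawl("http://example.com/", index, schedule)   # placeholder seed URL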

    Site owners started to recognize the value of having their sites highly ranked and visible in search engine results, realizing that the higher a site ranked, the more people would click through to it. According to industry analyst Danny Sullivan, the earliest known use of the phrase search engine optimization was a spam message posted on Usenet on July 26, 1997.[2]

    Early versions of search algorithms relied on webmaster-provided information such as the keyword meta tag, or index files in engines like ALIWEB. Meta tags provided a guide to each page's content, but using meta data to index pages proved unreliable because the webmaster's choice of keywords in the meta tag was not necessarily an accurate reflection of the site's actual content. Inaccurate, incomplete, and inconsistent data in meta tags caused pages to rank for irrelevant searches.[3] Web content providers also manipulated a number of attributes within the HTML source of a page in an attempt to rank well in search engines.[4]
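
    For illustration, the keyword meta tag those early engines read is just an HTML attribute supplied by the webmaster; the sample page and keywords below are made up.

        # Extract the webmaster-supplied keyword meta tag from a page,
        # roughly what meta-data-driven indexing relied on.
        from html.parser import HTMLParser

        SAMPLE_PAGE = """
        <html><head>
          <title>Widget Shop</title>
          <meta name="keywords" content="widgets, cheap widgets, widget store">
        </head><body>We sell widgets.</body></html>
        """

        class MetaKeywordParser(HTMLParser):
            def __init__(self):
                super().__init__()
                self.keywords = []

            def handle_starttag(self, tag, attrs):
                a = dict(attrs)
                if tag == "meta" and a.get("name", "").lower() == "keywords":
                    self.keywords = [k.strip() for k in a.get("content", "").split(",")]

        parser = MetaKeywordParser()
        parser.feed(SAMPLE_PAGE)
        print(parser.keywords)   # ['widgets', 'cheap widgets', 'widget store']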

    By relying so heavily on factors exclusively within a webmaster's control, early search engines suffered from abuse and ranking manipulation. To provide better results to their users, search engines had to adapt to ensure their results pages showed the most relevant search results, rather than unrelated pages stuffed with numerous keywords by unscrupulous webmasters. Since the success and popularity of a search engine is determined by its ability to produce the most relevant results for any given search, allowing those results to be false would drive users to other search sources. Search engines responded by developing more complex ranking algorithms, taking into account additional factors that were more difficult for webmasters to manipulate.

    Larry Page and Sergey Brin, graduate students at Stanford University, developed "Backrub", a search engine that relied on a mathematical algorithm to rate the prominence of web pages. The number calculated by the algorithm, PageRank, is a function of the quantity and strength of inbound links.[5] PageRank estimates the likelihood that a given page will be reached by a web user who randomly surfs the web and follows links from one page to another. In effect, this means that some links are stronger than others, as a page with a higher PageRank is more likely to be reached by the random surfer.
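
    As a rough illustration of the random-surfer idea, PageRank can be approximated by power iteration over a link graph. The tiny three-page graph and the 0.85 damping factor below are illustrative choices, not Google's actual parameters or data.

        # Toy power-iteration estimate of PageRank for the random-surfer model.
        def pagerank(links, damping=0.85, iterations=50):
            pages = list(links)
            rank = {p: 1.0 / len(pages) for p in pages}
            for _ in range(iterations):
                new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
                for page, outlinks in links.items():
                    if not outlinks:                     # dangling page: spread evenly
                        for p in pages:
                            new_rank[p] += damping * rank[page] / len(pages)
                    else:
                        for target in outlinks:
                            new_rank[target] += damping * rank[page] / len(outlinks)
                rank = new_rank
            return rank

        links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}   # made-up link graph
        print(pagerank(links))   # "c" ends up with the highest score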

    Page and Brin founded Google in 1998. Google attracted a loyal following among the growing number of Internet users, who liked its simple design.[6] Off-page factors such as PageRank and hyperlink analysis were considered, as well as on-page factors, to enable Google to avoid the kind of manipulation seen in search engines that only considered on-page factors for their rankings. Although PageRank was more difficult to game, webmasters had already developed link building tools and schemes to influence the Inktomi search engine, and these methods proved similarly applicable to gaining PageRank. Many sites focused on exchanging, buying, and selling links, often on a massive scale. Some of these schemes, or link farms, involved the creation of thousands of sites for the sole purpose of link spamming.[7] In recent years major search engines have begun to rely more heavily on off-web factors such as the age, sex, location, and search history of people conducting searches in order to further refine results.

    By 2007, search engines had incorporated a wide range of undisclosed factors in their ranking algorithms to reduce the impact of link manipulation. Google says it ranks sites using more than 200 different signals.[8] The three leading search engines, Google, Yahoo and Microsoft's Live Search, do not disclose the algorithms they use to rank pages. Notable SEOs, such as Rand Fishkin, Barry Schwartz, Aaron Wall and Jill Whalen, have studied different approaches to search engine optimization, and have published their opinions in online forums and blogs.[9][10] SEO practitioners may also study patents held by various search engines to gain insight into the algorithms.[11]


    Webmasters and search engines
    By 1997 search engines recognized that webmasters were making efforts to rank well in their search engines, and that some webmasters were even manipulating their rankings in search results by stuffing pages with excessive or irrelevant keywords. Early search engines, such as Infoseek, adjusted their algorithms in an effort to prevent webmasters from manipulating rankings.[12]

    Due to the high marketing value of targeted search results, there is potential for an adversarial relationship between search engines and SEOs. In 2005, an annual conference, AIRWeb, Adversarial Information Retrieval on the Web,[13] was created to discuss and minimize the damaging effects of aggressive web content providers.

    SEO companies that employ overly aggressive techniques can get their client websites banned from the search results. In 2005, the Wall Street Journal reported on a company, Traffic Power, which allegedly used high-risk techniques and failed to disclose those risks to its clients.[14] Wired magazine reported that the same company sued blogger Aaron Wall for writing about the ban.[15] Google's Matt Cutts later confirmed that Google did in fact ban Traffic Power and some of its clients.[16]

    Some search engines have also reached out to the SEO industry, and are frequent sponsors and guests at SEO conferences, chats, and seminars. In fact, with the advent of paid inclusion, some search engines now have a vested interest in the health of the optimization community. Major search engines provide information and guidelines to help with site optimization.[17][18][19] Google has a Sitemaps program[20] to help webmasters learn if Google is having any problems indexing their website and also provides data on Google traffic to the website. Yahoo! Site Explorer provides a way for webmasters to submit URLs, determine how many pages are in the Yahoo! index and view link information.[21]


    Getting indexed
    The leading search engines, Google, Yahoo! and Microsoft, use crawlers to find pages for their algorithmic search results. Pages that are linked from other search engine indexed pages do not need to be submitted because they are found automatically. Some search engines, notably Yahoo!, operate a paid submission service that guarantees crawling for either a set fee or cost per click.[22] Such programs usually guarantee inclusion in the database, but do not guarantee specific ranking within the search results.[23] Yahoo's paid inclusion program has drawn criticism from advertisers and competitors.[24] Two major directories, the Yahoo! Directory and the Open Directory Project, both require manual submission and human editorial review.[25] Google offers Google Webmaster Tools, through which an XML Sitemap feed can be created and submitted for free to ensure that all pages are found, especially pages that are not discoverable by automatically following links.[26]
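
    As an aside, the XML Sitemap feed mentioned above is a small XML file that simply lists URLs so a crawler can find pages it would not reach by following links. The sketch below builds a minimal one; the URLs are placeholders, and real feeds may also carry optional tags such as lastmod.

        # Build a minimal XML Sitemap listing a few (placeholder) URLs.
        import xml.etree.ElementTree as ET

        def build_sitemap(urls):
            ns = "http://www.sitemaps.org/schemas/sitemap/0.9"
            urlset = ET.Element("urlset", xmlns=ns)
            for loc in urls:
                url = ET.SubElement(urlset, "url")
                ET.SubElement(url, "loc").text = loc
            return ET.tostring(urlset, encoding="unicode")

        print(build_sitemap([
            "http://www.example.com/",
            "http://www.example.com/orphan-page.html",   # not linked from anywhere
        ]))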

    Search engine crawlers may look at a number of different factors when crawling a site, and not every page is indexed by the search engines. The distance of a page from the root directory of a site may also be a factor in whether or not it gets crawled.[27]


    Preventing indexing
    Main article: Robots Exclusion Standard
    To avoid undesirable content in the search indexes, webmasters can instruct spiders not to crawl certain files or directories through the standard robots.txt file in the root directory of the domain. Additionally, a page can be explicitly excluded from a search engine's database by using a meta tag specific to robots. When a search engine visits a site, the robots.txt located in the root directory is the first file crawled. The robots.txt file is then parsed, and it instructs the robot as to which pages are not to be crawled. As a search engine crawler may keep a cached copy of this file, it may on occasion crawl pages a webmaster does not wish to have crawled. Pages typically prevented from being crawled include login-specific pages such as shopping carts and user-specific content such as search results from internal searches. In March 2007, Google warned webmasters that they should prevent indexing of internal search results because those pages are considered search spam.[28]
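
    A small sketch of that exclusion mechanism, using Python's standard robots.txt parser; the rules and URLs are made up for illustration.

        # A crawler reads robots.txt first and skips anything it is told not to fetch.
        from urllib import robotparser

        ROBOTS_TXT = """\
        User-agent: *
        Disallow: /cart/
        Disallow: /search
        """

        rp = robotparser.RobotFileParser()
        rp.parse(ROBOTS_TXT.splitlines())

        print(rp.can_fetch("*", "http://www.example.com/products.html"))   # True
        print(rp.can_fetch("*", "http://www.example.com/cart/checkout"))   # False

        # A single page can also opt out via a robots meta tag in its HTML head,
        # e.g. <meta name="robots" content="noindex, nofollow">.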
     
    magnetudaya, Aug 12, 2008 IP
  2. Tripta_Kneoteric

    #2
    Is this post meant to impart information or highlight an issue?
     
    Tripta_Kneoteric, Aug 12, 2008 IP