Conducting a competitive analysis is an important part of the job if you're a usability engineer or information architect. A good competitive analysis not only produces usability metrics but also aids decision makers in their strategic goal-setting and planning. Done right, it can steer a Web development project in the right direction.

Let's start with the basics. The first thing to realize is that a Web site competitive analysis is usually performed for a team of business specialists who know nothing about design, usability, or information architecture. They don't have a clue about labeling systems, search ergonomics, or affordance. All they want to know is what the competition is doing and how they can do it better. Your expertise is in usability and user experience design, so you'll be evaluating sites along the lines of your domain expertise, but the data you gather must always point toward making a smart business decision.

Your audience will also expect a presentation and a written report. The presentation can knock the tops off the mountains, but the report had better have some detail in it. They expect your findings to be well organized, moving from executive summary to appendixes loaded with relevant details. The end result of your analysis is a decision -- a business decision that affects the rollout of design and development.

Next we'll discuss who and what you'll be analyzing.

Who's the competition?

It's very likely that you'll be given a list of competitors. Every company that has a handle on its market space knows who the competition is. And just about every company has a "target list" -- that special subset of companies it wants to beat soundly in the marketplace. Regardless, the list you get will likely be incomplete. That's because the people giving you the list will have their "business" hat on, not their "functionality" hat.
For example, if the company you're doing the analysis for is in the freight cargo business, you're likely to get a list of other sites or portals belonging to companies in the same business. However, it might be smart to add sites like travelocity.com, which specializes in consumer travel, because its site contains functionality that might be universal to all transportation applications (e.g., departure and destination points are common to freight trucks and airline customers).

Along with a list of competitors, you'll likely get a list of items they want you to focus on, or at least a list of items they want to do better than the competition. For example, the team might be fixated on the number of content items deployed on their own site. If Competitor X has 500 content items, they'll want to know how many content items Competitor Y and Competitor Z have. The subtext will be, "How fast can we have more content items?" Resist any impulse to follow subtexts at this point. To continue our example, you might dig deeper and find out that those 500 content items on Competitor X's site are outdated, badly written, and generally not useful to their audience.

What to analyze

Now that you have a list of competitors, you need to draw up a list of items to analyze when you visit their sites. I've developed a categorized list of items over the years, which is included below:

* Home page. How informative is the home page? Does it set the proper context for visitors? Is it just an annoying splash page with multimedia? How fast does it load?
* Navigation. Is the global navigation consistent from page to page? Do major sections have local navigation? Is it consistent?
* Site organization. Is the site organization intuitive and easy to understand?
* Links and labels. Are labels on section headers and content groupings easy to understand? Are links easy to distinguish from each other? Or are they ambiguous and uninformative ("click here" or "white paper")?
Are links spread out in documents, or gathered conveniently in sidebars or other groupings?
* Search and search results. Is the search engine easy to use? Are there basic and advanced search functions? What about search results? Are they organized and easy to understand? Do they give relevance weightings or provide context? Do the search results remind you what you searched for?
* Readability. Is the font easy to read? Are line lengths acceptable? Is the site easy to scan, with chunked information, or is it just solid blocks of text?
* Performance. Overall, do pages load slowly or quickly? Are graphics and applications like search and multimedia presentations optimized for easy Web viewing?
* Content. Is there sufficient depth and breadth of content offerings? Does the content seem to match the mission of the organization and the needs of the audience? Is the site developing its own content or syndicating other sources? Is there a good mix of in-depth material (detailed case studies, articles, and white papers) versus superficial content (press releases, marketing copy)?

Conducting the analysis

Now that you have a list of sites to visit and a list of criteria to compare, start your analysis. Be sure to conduct it with some rigor: don't be haphazard, and don't do things differently with each site visit. Try to analyze a site without interruption. In other words, do everything you can to reduce bias in your investigation. Here are some additional guidelines:

1. Visit one site at a time, and take the same (or at least similar) paths through each site. Follow the checklist of criteria.
2. For each criterion, take lots of notes. You'll refer to these notes when you organize and write your report.
3. Try to give a score for each criterion as you complete it. That way you'll have scores for each major category as well as for each site.
4. If the company you're doing the analysis for has an existing site, remember to rate it last.
Rating the company's own site after you've visited its competitors gives you some sense of objectivity. It also provides a good measurement comparison for the readers of your report.

When you're ready, you'll need to do some number crunching: for each criterion, category, and site, calculate the mean, median, mode, maximum value, minimum value, and spread of the scores you assigned. Together, these values start to tell a story. They don't tell the whole story, but they certainly illustrate and make plain the results of your work. For example, a Web site whose mean and median are far apart has more weight at the extreme ends of the scale (either more 1s or more 5s in the established rating system). A mode that differs significantly from the median and/or mean also indicates clumping of values away from the normal, expected curve. A large spread between minimum and maximum values might indicate a high level of inconsistency across different portions of the site; in other words, a site might have poor search functionality but excellent content organization and site navigation.

You must remember one thing: the numbers you assign to any part of a Web site are, as much as you'd hate to admit it, somewhat arbitrary. Although you may be an expert at usability or information architecture, any number of factors can introduce bias into the process. You might be in a hurry, have a pressing deadline distracting you, or find your mind wandering while you finish an evaluation. You might be evaluating a Web site belonging to a big competitor, and there may be some tacit pressure to downgrade the scores you give it. Be as fair as you possibly can, and make it understood that the numbers you assign are subjective scores, not the results of ironclad science. They're assigned and used primarily to give you something quantifiable to point to and discuss, instead of just guesses and raw opinion.

You can perform this number crunching manually or with a spreadsheet.
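Scripting is a third option. Here is a minimal sketch in Python of the descriptive statistics described above; the competitor names and 1-to-5 scores are invented purely for illustration:

```python
from statistics import mean, median, mode

def summarize(ratings):
    """Return the basic descriptive stats for a list of 1-5 scores."""
    return {
        "mean": mean(ratings),
        "median": median(ratings),
        "mode": mode(ratings),
        "min": min(ratings),
        "max": max(ratings),
        "spread": max(ratings) - min(ratings),  # max minus min
    }

# Hypothetical scores for a single criterion (say, navigation),
# one list per competitor site -- invented numbers for illustration.
scores = {
    "Competitor X": [4, 4, 5, 3, 4],
    "Competitor Y": [1, 1, 5, 1, 5],
    "Competitor Z": [3, 3, 3, 4, 3],
}

for site, ratings in scores.items():
    print(site, summarize(ratings))
```

Note how Competitor Y's mean (2.6) and median (1) sit far apart and its spread is 4: exactly the clumping at the extreme ends of the scale discussed above.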
Excel and other spreadsheet tools provide built-in functions for calculating means, medians, modes, and other statistical values.

Writing the report

Eventually, you'll need to take all your notes and all those numbers you've crunched and put them into a report. Most usability engineers and information architects I've met would rather do anything than write, but this is one case where what you write is as important as all the other work you've done.

Start with the methods section, which describes how you conducted the analysis. Next, write the findings section. This section is the longest of the entire report and will take you at least a day, if not more, of solid work to complete. Once you've finished the methods and findings sections, knock out your recommendations and then complete the executive summary. Add the appendixes to the back, and let the report rest for at least a day.

Then go through it again, from top to bottom, and clean up the verbiage. Remember that shorter is better: if you can say something in 10 words, find a way to say it in 7 or 8. Cut out as many adverbs and adjectives as possible. Those reading your report will want to get to the heart of the matter and won't appreciate flowery language.