When Avi asked us to write an article about search engine interface design guidelines, our first reaction was "impossible." It's impossible because search engine designers want to provide a unified interface that supports the millions of people who search the Web every day, but searchers don't see themselves as search engine users. Instead, they have an information need involving other tools and/or sites, and the search engine is just a small step toward a real-world goal. They want the search engine to work differently every time, depending on what they are looking for at that moment. And these differences must be transparent, so they don't need to learn anything new.
This difference has created two very different caricatures that users and designers have of each other.
Caricature of the user
People are very focused on their overall goals and do not want to spend effort learning the search engine interface, or even tailoring their queries to a search engine they already know. Instead, they want to simply dump the first thought that comes to mind into the input box, usually as a single keyword, and hit search. They then expect one of the top results to contain what they are looking for, and its description to say explicitly that it is there.
Preferred input >> "problem"
Preferred output >> "1. This link has the answer you want."
Caricature of the designer
On the other hand, search engine designers want users to become fluent in Boolean logic, learn the exact syntax of their search engine despite the fact that it is drastically different from any other, and include any and all keywords in their query, separated by as many Boolean, proximity, and other operators as necessary, to ensure that every resulting link has what they are looking for. Furthermore, they expect users to interpret descriptions built from nothing but the page's Alt tags, randomly appended to the URL.
Preferred input >> "problem adj prob*? NOT other adj prob*"
Preferred output >> "www.answer.com/%@*&&ark/%&&*@ answer; detail; detail; code"
It is this exceptionally broad chasm between the two sets of desires that produces results like Web Top's 2000 study, which found 71% of search engine users frustrated. It also sustains several research programs at universities and companies around the world devoted to creating smarter search interfaces. Here is our story.
Florida International University Usability Research Lab
As cognitive engineers, we look at the problem from the perspective of the user and how their task is structured in their minds (whether they know it or not). Our research has shown that the optimal design of the search engine interface depends on the user's conceptual model of his or her task. There are some tasks that require very specific information, and others where the quality of the information can vary and the user is looking for the best link available.
For example, if my task is to find out how many home runs Barry Bonds hit last year, any web site that has this information is just as good as any other. But if it doesn't have it, the site is completely useless to me. On the other hand, if I am trying to find out whether the economy will improve, there are many sites that may contribute to my search, but some are definitely better than others. We have learned that users approach these two kinds of tasks very differently.
For specific searches, they scan down the results list until they see a likely candidate. In testing, we measure the Pre-Click Confidence (PCC): how sure the user is that the selected link will have the needed information. When the PCC for a link exceeds a match-quality threshold, they click. We call this a satisficing strategy: the user accepts the first link that is good enough rather than comparing all of them. If the user is in the mood to browse, she can set the PCC threshold low. In that case she should expect many false alarms, and she may not be especially frustrated when a link doesn't have what she is looking for. On the other hand, if she is in a "just the facts" mood (Rozanski et al., 2001), she will set the PCC threshold very high. If she is then fooled by a poorly written description, she will be much more disappointed in the search engine's performance.
For non-specific searches, the user compares each of the links with the others to see which gives him the highest PCC. That's the one he clicks on. It's rare that anyone clicks on the "next ten" link. Generally, if none of the first ten are any good, what are the chances that the next ten will be any better? Advanced users may open several of the links in new windows to further compare the contents.
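The two selection strategies above can be sketched in a few lines of code. This is only an illustrative model, not anything from the studies; the PCC scores, threshold value, and function names are all made up for the example.

```python
def satisficing_pick(pcc_scores, threshold):
    """Specific search: scan down the list and click the first link
    whose Pre-Click Confidence (PCC) meets the threshold."""
    for rank, pcc in enumerate(pcc_scores):
        if pcc >= threshold:
            return rank
    return None  # nothing good enough; the user gives up or re-queries

def maximizing_pick(pcc_scores):
    """Non-specific search: compare all the links on the page and
    click the one with the highest PCC."""
    if not pcc_scores:
        return None
    return max(range(len(pcc_scores)), key=lambda rank: pcc_scores[rank])

# Hypothetical PCC values for the first four results on a page.
scores = [0.3, 0.7, 0.9, 0.5]
print(satisficing_pick(scores, 0.6))  # 1: first link over the threshold
print(maximizing_pick(scores))        # 2: best link overall
```

Note how lowering the threshold in `satisficing_pick` trades faster clicks for more false alarms, matching the "browsing" versus "just the facts" moods described above.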
In another study, we investigated which fields contribute to the PCC. As you can imagine, in all cases users wanted to see a description. But that was our only consistent finding. In some tasks, such as searching for information on the Y2K computer bug, users wanted to see a date. But in other tasks they didn't want a date. Same thing with size and other task-specific fields. No one seemed to care about "more like these" or other advanced options. This supports log data that shows that people rarely use these advanced features (Jansen and Pooch, 2000).
So what do these studies tell us about search engine design? For one thing, we need to find ways to allow users to customize the interface based on their current task. But few users ever use the advanced search interface, so other ways are needed. New graphical designs can also support better use. In one study, we organized the results into a table instead of the traditional list. Each result was one row of the table, with different columns for the description, URL, date, size, etc. This allowed users to scan down the list much faster. But there was a more unexpected result as well. Because users could go faster, they read more of the choices than with the list design. So the link they eventually selected was more likely to be the best one. We are in the process of developing some novel interface designs that may facilitate this customization without requiring users to learn anything new or intimidating them with complexity. Stay tuned.
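The tabular layout from that study can be sketched as follows. The field names and result data here are invented for illustration; the point is simply that one row per result and one column per field lets a user scan a single column (say, the date) straight down the page.

```python
# Hypothetical search results, each with the same set of fields.
results = [
    {"description": "Y2K bug overview",    "url": "example.com/y2k",   "date": "1999-11-02", "size": "12K"},
    {"description": "Y2K fixes for COBOL", "url": "example.com/cobol", "date": "1999-12-15", "size": "8K"},
]

columns = ["description", "url", "date", "size"]

# Width of each column: the longest value in it (or the header itself).
widths = {c: max(len(c), *(len(r[c]) for r in results)) for c in columns}

header = "  ".join(c.ljust(widths[c]) for c in columns)
print(header)
print("-" * len(header))
for r in results:
    print("  ".join(r[c].ljust(widths[c]) for c in columns))
```

Aligning every field into its own column is what let users in the study scan faster, and, as a side effect, read more of the choices before clicking.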
Jansen, B.J. and Pooch, U. (2000). Web user studies: A review and framework for future work. Journal of the American Society for Information Science and Technology, 52(3), 235-246.
Rozanski, H.D., Bollman, G., and Lipman, M. (2001). Seize the Occasion: Usage-based segmentation for Internet marketers. Booz Allen & Hamilton: Boston, MA.
For more information about these studies or search engine interface design, please contact:
Industrial and Systems Engineering
Florida International University
Miami, FL 33199