jcdl 8: User Studies and User Interfaces
Jun. 27th, 2007 11:01 am
This panel was probably the most useful in terms of immediate impact for my coworkers, just because of the research into OPAC interfaces. (Well, the most useful not counting the DSpace/Manakin tutorial.)
Agreeing to Disagree: Search Engines and Their Public Interfaces (McCown and Nelson)
Search engines provide useful information that we'd like to screenscrape, but Google and Yahoo both block screenscraping queries. So instead we use the APIs, which are more useful than screenscraping anyway. However, the APIs don't give the same results as the Web UI (WUI). Google won't reveal why the results are different, because these are trade secrets. So they measured differences between WUI and API results.
Because some Digital Libraries use crawls (directly or via search engines) to gather content, knowing how the two sets of results differ matters. (The rest of this paper was the details of their measurement process and their findings.)
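To make the measurement concrete, here's a minimal sketch of the kind of comparison involved (my own reconstruction, not their code): take the top-N URLs a query returns through the WUI and through the API, then see how much the two lists overlap and how far the shared URLs move in rank.

```python
# Sketch (mine, not the authors'): compare the top-N results a search engine
# returns through its web UI (WUI) versus its API for the same query.
# Both lists are assumed to already be fetched as ordered lists of URLs.

def overlap(wui_results, api_results):
    """Fraction of WUI results that also appear anywhere in the API results."""
    shared = set(wui_results) & set(api_results)
    return len(shared) / len(wui_results) if wui_results else 0.0

def rank_shift(wui_results, api_results):
    """Average absolute difference in rank for URLs present in both lists."""
    api_rank = {url: i for i, url in enumerate(api_results)}
    shifts = [abs(i - api_rank[url])
              for i, url in enumerate(wui_results) if url in api_rank]
    return sum(shifts) / len(shifts) if shifts else None

# Hypothetical top-5 lists for one query:
wui = ["a.edu/1", "b.org/2", "c.com/3", "d.net/4", "e.edu/5"]
api = ["b.org/2", "a.edu/1", "f.gov/6", "c.com/3", "g.org/7"]
print(overlap(wui, api), rank_shift(wui, api))  # 0.6 and 1.0
```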
Question from me: Since we don't know the search engine algorithms anyway, not knowing how the APIs and WUI queries differ seems to be just exposing a more genuine problem, which is that we are relying on search engine crawled data whose provenance we don't understand.
Static Reformulation: A User Study of Static Hypertext for Query-based Reformulation (Hugget and Lanir)
How do large corpora get incorporated into collections? You can't index them manually (and inter-author consistency is a problem). What's needed is a way to consistently index a large corpus, driven by queries from users of different skill levels, and to generate good user feedback. So they built a tool to gather that feedback.
A Rich OPAC User Interface with AJAX (Gozali and Kan)
Their OPAC is at http://opac.comp.nus.edu.sg/. (It uses Lucene and MySQL to communicate with their Innovative backend, and if we contact them they are willing to consider sharing their code.)
They disliked their III interface, though they also disliked the other OPAC interfaces they looked at. They accuse vendors of designing OPACs to emulate terminal-based interfaces. With these interfaces, they say, it is hard to carry out basic information-seeking behaviors:
- Can't compare results side by side
- Can't compare results of two searches in search history
- Have to pre-specify sort key
They found limitations in Google's multiple suggestions enhancement, and they think NCSU's Endeca-based faceted browsing is no good for advanced searchers. (I'd beg them to elaborate on that, since I love the faceted browsing but I haven't played with it that much.)
How can they leverage modern web technologies to improve searching? They particularly wanted:
- dual-pane view with overview and details
- history mechanism with tabs
They added (the keyword-suggestion piece is sketched after this list):
- embedded keyword suggestions
- dynamic views with overview and details
- tabs for parallel searching
  - which automatically appear with suggestions
  - and are tied together, so results in one tab indicate linkage with a prior tab
- dynamic sort order switching
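Since the suggestions are just an AJAX round trip, here's a minimal sketch of how that piece could work (my guess at the pattern, not their code; Flask and the tiny in-memory term list are stand-ins for their Lucene/MySQL layer over the Innovative backend):

```python
# Sketch of an AJAX keyword-suggestion endpoint: the page fires a background
# request as the user types, and the server answers with a small JSON list
# the client can embed inline next to the search box.
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical keyword vocabulary; in their OPAC this would come from the
# indexed catalog records rather than a hard-coded list.
TERMS = ["information retrieval", "information seeking", "informatics",
         "digital libraries", "digital preservation", "databases"]

@app.route("/suggest")
def suggest():
    prefix = request.args.get("q", "").lower()
    hits = [t for t in TERMS if t.startswith(prefix)][:5]
    return jsonify(suggestions=hits)

if __name__ == "__main__":
    app.run()  # e.g. GET /suggest?q=info -> {"suggestions": [...]}
```

The client just re-renders its list in place with the returned JSON; presumably the same no-reload round trip is what makes the dynamic sort switching and the tabbed parallel searches feel responsive.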
I will need to play with their OPAC some. I don't feel like what they've got here, at least from what we saw on the slides, is all that original. It's not that common, I'll admit, but a lot of these features seem to be available in, say, the Ovid Medline interface. Am I remembering the right one? There are so many Medline interfaces.
Constructing Digital Library Interfaces (Nichols, Bainbridge, and Twidale)
What should the UI for designing UIs look like, and what does that imply about the skills of the users designing the interfaces? But with all the current tools (Greenstone, DSpace, etc.), we require librarians to have technical training. Library Journal recently had an article which explicitly called for librarians to get these programming skills. The whole range of skills goes from wizard-style form filling, through knowing HTML and CSS, all the way to requiring knowledge of programming elements.