All posts by Lucy Reid

Expert Search Early Adopters pilot – searches in provider interfaces

In mid-September 2020, HEE began a pilot to understand how best to help LKS in England move from using HDAS to using provider interfaces (EBSCOhost, ProQuest and Ovid – see here for further detail).

The pilot period has now ended and we’re in the process of finalising the project report. Today we’re sharing our findings from the post-switch searches that our participants carried out in provider interfaces between November 2020 and January 2021. We asked people to record data for one day of each week, filling in an online form for any searches they did that day. In total, 296 searches were captured, ranging from quick 20-minute scoping searches to 16 hours of searching in support of a large evidence review, and everything in between. We think it’s a fascinating set of data and we could write many, many blog posts about it, but we’re showcasing some of the highlights here. If you have any burning questions about our findings that aren’t addressed here, please post them in the comments below and we’ll try to answer.

Pilot participants were faced with a choice when searching, as Medline is available in all three provider interfaces – Ovid, EBSCO and ProQuest. As reflected in the pre-switch search data, it continued to be the most frequently used database, with 92% of searches using Medline from one provider or another. Ovid Medline was the most frequently used resource overall (70% of searches), with Ovid EMBASE second (61%) and EBSCO CINAHL third (40%).

More post-switch searches were carried out over multiple sessions (57%) than in a single session (43%). The time taken to complete a search varied between 20 minutes and 16 hours. As with our pre-switch search data, two of the searches captured fitted into the systematic review category, and because their times vastly skewed the average search time they were removed from the calculations for this blog post. Across the remaining 294 searches, the average time to complete was 3 hours 13 minutes, with the most common durations being 2–3 hours (23% of searches) and 3–4 hours (21%).

Participants used many different combinations of interfaces and databases to search. The use of multiple interfaces was spread pretty evenly, with 41% using one provider interface, 39% using two and 20% using three. Where searches did use just one interface, 62% used Ovid.

We asked participants to tell us what had gone well with their search, what didn’t go so well, and what changes could be made to improve their search experience. Things that worked well included the use of reference management software for amalgamating results, the ease of use of subject headings within the provider interfaces, and single-interface searching – where the search is carried out in multiple databases but within one provider interface. Things that didn’t work so well included difficulty producing outputs for end users, a perception that the whole process took a lot longer, and problems with reference management software, whether difficulty importing results or being unable to format references in a desired style. The suggestions for improving the search experience covered a multitude of elements: some related to the interfaces themselves, others to issues with reference management software, the time burden of adapting to new functionality, and the extra login steps needed to access databases.

Finally, we asked people to give their search experience a star rating, where 1 is poor and 5 is excellent. The average rating was 3, with 10% rating 1 star, 18% rating 2 stars, 42% rating 3 stars, 21% rating 4 stars and 9% rating 5 stars.
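For readers who like to check the arithmetic, the overall average is simply the weighted mean of the rating distribution. A minimal sketch (assuming the reported percentages cover all respondents):

```python
# Star-rating distribution from the pilot feedback (percentage of searches per rating).
distribution = {1: 10, 2: 18, 3: 42, 4: 21, 5: 9}

# Weighted mean: each star value weighted by its share of responses.
average = sum(stars * pct for stars, pct in distribution.items()) / sum(distribution.values())

print(round(average, 2))  # 3.01, which rounds to the reported average of 3
```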

The qualitative data collected about the search experience has been key in shaping the final report for the project, including recommendations for moving forward, and it will be published towards the end of April. For any questions about the project, please email Emily.hurt@lthtr.nhs.uk.

Emily Hurt
Lancashire Teaching Hospitals NHS Foundation Trust

National Discovery Service – Before Action Review Workshops

During June and July 2020, HEE’s Resource Discovery team conducted a series of 10 ‘Before Action Review’ engagement sessions with Library Managers about the new National Discovery Service. Over 70 individuals from across the country participated in the workshops. This blogpost provides a short summary of what participants told us.

First, we asked what your users’ most common frustrations are when accessing trusted information. You told us that users find there are too many routes to information, compounded by a lack of the time and skills required to sift through search results to find what they want. You also reported users’ frustrations with OpenAthens, particularly the multiple sign-ins required, along with expectations around access that fail to be met. One manager spoke for most, saying:

“Users do not want to go to 25 places. They want something that works. They do not want to be trained how to use all this – they want to sit down and get what they need.”

We asked what a National Discovery Service would look like to you, in an ideal world. What would it need to do?

Unsurprisingly, you told us that it needs to overcome the frustrations of the current user experience. The service needs to be “seamless”, offering users a consistent experience as they move across different organisations: users “should not have to start all over again” when they move between one job or placement and the next. You want a system with the semantic search functionality of Google but not one reproducing Google’s “information hose” – one that provides trusted information, along with features such as learning from user preferences, tailoring searches to users’ interests and doing away with multiple sign-ins.

We asked what lessons we could learn from your experience of managing or participating in successful projects elsewhere. You told us that managing the supplier relationship is crucial. Suppliers should provide timely and flexible project support and technical input. They must be held to account to ensure they deliver on their specifications. You also advised those leading the project to take steps to ensure that the beneficiaries of the project feel they are being consulted and kept informed.

We asked you to imagine the failure of the National Discovery Service, to get us thinking about how to avoid such an outcome. Two themes especially came back to us, loud and clear. First, the project will fail if we don’t take account of the particularities of local IT systems. Second, it will fail if we don’t gain library staff support for the project. To do this, you told us we need to generate enthusiasm about its benefits. Above all else, you told us that we need to reassure staff that the implementation of the National Discovery Service is not about cutting library staff roles or investment in library services. We can reassure you it isn’t – it is about freeing up library staff time for customer-facing work and reducing the time and money spent maintaining local systems.

Finally, we asked you what we needed to do next. You outlined a range of good project and communication principles we need to adopt. We need to ensure good project management support and supplier relationship management. We need transparent communications and regular updates, not just informing but engaging, and we need to make sure local teams are fully involved.

This is just a summary of the issues you raised during the engagement sessions. We have compiled a list of Frequently Asked Questions to address these issues in more detail, and will add to it as the procurement and implementation progress. In the meantime, we welcome any further comments or questions about the National Discovery Service. Please send them through to us on:

Email: NHSNationalDiscoveryService@libraryservices.nhs.uk

Twitter: #NHSNationalDiscoveryService

Survey: https://healtheducationyh.onlinesurveys.ac.uk/nhs-national-discovery-service-may-2020

By Franco Henwood, Library and Knowledge Services Project Manager

What are the needs of expert searchers?

In January 2020, HEE commissioned a piece of research to find out about the needs of expert searchers: individuals who carry out frequent and complex searches of the healthcare literature to support clinical practice, research, service development or systematic review. This followed on from a similar project looking at the information needs of end-users (the healthcare workforce), which was carried out in 2019 and which was key to shaping plans for the development of the national discovery system. The aim of the 2020 work was to find out more about expert searchers in the NHS: who they are, what they do and why, and what technologies and systems they use.

Identifying expert searchers

The first step was to recruit the right people into the research. We predicted that a large proportion of expert searching would be carried out by librarians but we also wanted to make sure that the experiences of non-librarian expert searchers were captured. Colleagues from NICE looked at 2019 HDAS user data and extracted a pool of very frequent users based on the number of saved searches. As anticipated, many of these were librarians. Of the non-librarians, 30% were in pharmacy roles, 14% were medical and dental staff, 10% were in specialist nursing roles including practice development nurses and nurse educators, and 3% were in clinical effectiveness and similar roles. To this pool of heavy HDAS users, we added a list of individuals who are known to be expert searchers through their involvement in systematic reviews, evidence synthesis and similar work for the NHS in England. Some of these individuals work in the NHS but are known not to use HDAS while others work outside the NHS, for example in higher education, and also use other interfaces. All these experts were sent a screening survey inviting them to participate in the research.

Methods

The research itself was carried out by Lagom Strategy, independent consultants in user experience research. The project included:

  1. 12 one-to-one interviews with expert searchers
  2. A diary study in which three participants logged their daily search activities
  3. Three field visits to observe expert searchers in action
  4. A workshop to develop the main expert searcher personas and map their user journeys
  5. A survey, completed by 169 participants, to validate the user needs captured during steps 1–4.

Expert Searcher Workshop

Practitioner Researcher User Journey

Who are expert searchers?

Expert searches are mostly carried out by librarians. This confirms that knowledge specialists play a key role in the delivery of an evidence-based NHS. In addition to librarians, the research revealed two other expert searcher personas: practitioner researchers and research students. Through the workshop, these personas were characterised as follows.

Librarian

  • Searches on behalf of others
  • Takes a systematic, planned approach to searching
  • Searches across different sources: HDAS, native interfaces, Google Scholar, specialist datasets
  • Uses a variety of tools to manage search outputs and record activities, including reference management software and KnowledgeShare
  • Helps other users through training and support

Research student

  • Juggles studies with full-time work and family commitments
  • Needs to search on the go/mobile with universal logins
  • May need resources beyond core content
  • Uses keywords rather than full syntactic searching, but needs more guidance about things like wildcards
  • Needs to record search strategy and results, often using tools like Zotero
  • Likes open access content and “suggest articles” functionality

Practitioner researcher

  • Has some protected time for research/practice development activities alongside clinical commitments
  • May be linked with a local R&D team and/or a local higher education institution
  • Needs to store search results, sometimes using EndNote
  • Shares results with peers, e.g. in journal clubs
  • Promotes evidence-based practice and research awareness with peers, e.g. writing and promoting “critically appraised topics” and upskilling colleagues in research methods

What’s different about expert searchers?

Knowing where to look for what kind of information is a defining characteristic of an expert searcher. They use a wide range of information sources for their searching. Many expert searchers in the sample use HDAS (86% of the validation survey respondents report using it), but they also use PubMed, native interfaces, TRIP, Google Scholar, CRD, OpenGrey, Royal College websites, PEDro and so on.

“Some of my skills are centred around knowing where to look for information” – Librarian, interview

Some expert searchers report having go-to resources based on habit, training or experience while others point to following local or professional guidance.

“HDAS was the first one that I learned, and I [felt] more confident with using this” – Masters student and Consultant Psychiatrist, interview

“[UKMI] creates a list of resources that are recommended for all the MI services to use” – Pharmacist, interview

“I’ve got a list basically that I kind of add to, lots of different sources that I can refer to because obviously there are so many, it’s hard to kind of keep track” – Librarian, interview

Because they are more familiar with these different resources than other users, expert searchers are more able to get the best out of them.

“Using the various limits and all the additional features on the databases but using them to the best advantage… knowing when to use them and knowing when not to use them as well” – Librarian, interview

“Particular fields, tapping into the thesaurus, the taxonomy that’s used on each particular database. Being able to collate the results I’m getting back” – Librarian, interview

Expert searchers are more comprehensive in their searching.

“They’re probably just looking for something quite quickly whereas I think we’re looking for, you know a lot more in-depth and finding everything or everything as far as you possibly can” – Librarian, interview

“We are very thorough” – Pharmacist, interview

“We wouldn’t just look in one database, we would look in several” – Librarian, interview

They are also more systematic, sometimes starting with a scoping search then following a planned strategy.

“Started with exploratory search to assess the number and types of publications broadly related to the question and to familiarise myself with the specific terminology, and gather subject headings and keywords and synonyms” – Librarian, diary study

“I tend to try to come up with a bit of a strategy first, before just sort of diving in. So, kind of thinking around keywords and thinking of like different synonyms, kind of building the sort of structure of the search up” – Librarian, interview

Expert searchers need to be able to record their search strategies, histories, de-duplicated results, full text articles and search reports. They are often searching on behalf of or with others and so need to share their search activities and outputs.

“We have a template that can be adapted, that would include details of who has done it, and what they’re asking for, the question they’re asking” – Librarian, interview

“That [sharing information] would be done, most of the time, writing a report at the end” – Research Officer, interview

“What I might have done is print that article and show it to the psychiatrist before the appointment” – Clinical Psychologist, field visit

Some expert searchers support others to improve their searching skills. This isn’t limited to librarians. Support can be informal (as provided by the practitioner researcher mentioned above) or formal training.

“It’s kind of using my own skills and also sort of trying to upskill everybody else” – Librarian, interview

“I actually provide that training in an IT suite, over one-to-one, with a group of people all sitting at a computer and going through it” – Pharmacist, interview

How satisfied are expert searchers?

Overall, expert searchers are fairly satisfied that they have the tools and resources that they need for their work.

HDAS users like having a single platform for multiple datasets, which they find easy to navigate.

“I use HDAS because it’s a really nice, clean interface” – Clinical Psychologist, field visit

Expert searchers reported benefits to using native interfaces as well.

“I think generally they’re [native interfaces] a lot more robust, they never go down” – Librarian, interview

“I think they [native interfaces] have a lot of features that are really beneficial, things like being able to do more complex searches” – Librarian, interview

But there are things which could be improved about all interfaces.

There was a sense that some of HDAS’s functionality had been downgraded over time.

“Previously, you used to be able to run a keyword search across 5 databases, run another keyword search across the same, as long as they were the same 5, you could then combine them with the AND operator within the search panel. That’s no longer available, you have to either search the databases separately or put your entire search strategy into a single search line and then just run it as a one off, you can’t then add any limits or combine any more” – Librarian, interview

There was also some frustration about the way that subject headings are managed in HDAS.

“I had problems with searching for the corresponding subject headings in HDAS Embase (the search facility there is simply inadequate, it would have been much better to have access to proper Embase thesaurus search)” – Librarian, diary study

“It used to be that you could type in a word, search it in the thesaurus, and it would come up with lots of different things relevant to that. But now you have to type in the exact thesaurus term for it to actually come up, and that’s really annoying because usually you don’t necessarily know what the thesaurus term is” – Librarian, interview

“When I’m using Medline, I always use that with the original MeSH browser from the National Library of Medicine… I found their MeSH browser much much more useful than the one from HDAS” – Pharmacist, interview

“I do go through the process of knowing what my terms are before I even get to HDAS, then having to go through the whole thesaurus process and having to re-input everything… that is a process I’m willing to do” – Pharmacist, interview

“I have to go to PubMed, to their MeSH headings in PubMed to find out, so it’s like an extra step. I find that really frustrating and annoying” – Librarian, interview

Some users reported more confidence in the quality of the searches they carry out in native interfaces.

“We end up having to use the native interfaces, because HDAS just can’t cope with it” [systematic reviews] – Librarian, interview

Expert searchers also liked some of the additional features seen in other platforms.

“It would be quite good if HDAS could do something like that, an in built kind of citation matching tool” – Librarian, interview

“You can do like a related article search on Mendeley and that’s dead useful, especially if somebody sends you a paper and says I want things like this” – Librarian, interview

However, searching multiple resources presents its own problems, especially around recording and sharing search histories and results.

“I need an easier way to compile search results from multiple resources as I don’t have access to reference manager software” – Other library and information professional, user needs validation survey

Keeping on top of differences in functionality across multiple interfaces is also seen as a challenge.

“For me there’s always a little bit of a kind of a learning curve of oh right where’s the box to put the search terms in and oh it’s over here and how do I combine this and so on” – Librarian, interview

“You have to know how to use them and we don’t receive training. Well, I’ve never received training in it, I’ve only ever learned how to use them through my own time and effort” – Librarian, interview

“Easier to use help function… it would be nice if you could divide it up [help content] for your simple user versus your advanced user” – Pharmacist, interview

“There are some new features being introduced by them [native interfaces], which the database providers don’t necessarily explain very well” – Librarian, interview

What happens next?

As well as the rich learning from the qualitative phase of the study which has been summarised above, the research has resulted in a validated “user story backlog” or prioritised list of requirements for expert searchers.

Expert Searcher User Story Backlog

These will all be used by HEE to inform the development of the discovery ecosystem, ensuring that the needs of expert searchers are met or exceeded as new services are introduced.

For more information on this work, contact Lucy Reid.