The Evolution of Scholarly and Information Metrics

In my previous role as Head of Scholarly and Library Futures at Jisc, I agreed to contribute a chapter to a new book by Andy Tattersall on Altmetrics (Altmetrics: A practical guide for librarians, researchers and academics) from Facet Publishing.

Now that the book is published, I thought it worthwhile posting my piece, in full, on the blog.

————————————————————-

Have you heard the parable of the man who lost his car keys? Walking to his car from the office in the dark, he fumbles for the keys to open the door and drops them in the gutter. But the light in the gutter is poor, so he searches instead on the pavement, where the light from the streetlight is brighter and it’s easier to see.

He ends up walking home, unable to find his keys.

Introduction

We often look for information to help us answer our questions (or find our car keys) where it is easiest to look; where information is easiest to find. Even if it isn’t the right information to answer our particular questions. The ‘streetlight effect’ (Freedman, 2010) means we end up focusing our time and attention on the information and data we discover: how we can make it better, find more of it and so on. We forget the questions we originally wanted to answer. We are so busy looking where the light is good we forget what we were looking for and why it was important.

The streetlight effect reminds us that just because we have information, it doesn’t mean it is the right information, or that it is telling us what we really want to know. There is also little benefit in collecting all this information – like searching in the light – if, in the end, you find little you can actually use. If you can’t act upon the information to inform and improve what you are doing, what is the rationale for collecting it? If you’re not going to find your keys where the light is good, there is little point looking there.

Increasingly the scholarly community understands that what we measure should be what is most important to us. Yet, the most important things are often obscured by poor light. As research increasingly migrates to the web, with both online journals and more informal channels such as Twitter, blogs and Facebook, what we measure is changing as well. It no longer makes sense for us to base our metrics solely on previous standards of success – such as the number of times an article is cited in other articles – if increasingly we also value Tweets or Likes, or a quote on a Wikipedia article. In an online, connected and open model of scholarship we need to be constantly revising and revisiting what, how and why we measure and what success means for libraries, researchers and institutions.

This chapter explores the tools and approaches libraries use to measure what really matters: to them, to the institution and to researchers, students and staff. Specifically, we will look at the metrics that seek to measure and demonstrate the library’s impact and value to the institution, its academics and its students. We begin by taking a look at what (and how) academic libraries have traditionally measured and how those metrics are changing. As the metrics have changed, so too has the role of the library within the institution. Indeed, as the institutional importance of new areas such as learning analytics and innovative uses of impact data grows, the library finds itself strategically positioned to participate in, or lead, this important area. Few services in the academic institution can claim a familiarity with metrics close to that of the library. Indeed, it is the library that has played a significant role in introducing notions of impact and value metrics to institutions, students and researchers.

The Emergence of Value and Impact Metrics

In 1968 Lloyd and Martha Kramer (Kramer, 1968) asked an apparently straightforward question: Does a student’s use of the college library have a relationship to the likelihood of that student completing her course and graduating?

Such a question appears somewhat unnecessary – of course it does. Doesn’t it?

As it happens, the Kramer paper suggests there is a statistically significant correlation between library use and ‘student persistence’. But, more importantly, the Kramers, along with a small handful of other early studies (such as Barkey, 1965, who explored the impact of the library on students’ grade point averages), wanted to provide evidence to back up the intuitive responses such questions usually inspire. They wanted to use the data from the library and other parts of the university (such as the university’s registry) to explore whether there was indeed a correlation between library use and student attainment and retention.

So the desire to use data to determine the impact and value of the library isn’t new. However, despite the promise of such early enquiries, for the majority of libraries and their institutions, impact and value are demonstrated through the use of techniques such as satisfaction surveys, both local and national (the National Student Survey, or NSS, being the best known in the UK), and individual student and researcher feedback. These are, of course, valid tools, but they also provide only a small part of the picture. More importantly, these methods tend to lack the impact of hard numbers and data. This is especially the case when being presented to senior managers who are using the data to decide on the allocation of budgets and precious resources.

In contrast, the emergence of research metrics has been marked by its use of a powerfully simple, numeric and quantitative approach.

Initially, at least, research metrics had little to do with providing data for researchers themselves. Rather, they provided the institution and the library with a quantitative means to evaluate the value and impact of content, specifically scholarly journals. Librarians and information scientists have been evaluating academic journals for as long as journals have been part of the library collection (early 20th-century examples include Gross and Gross’s analysis of citation patterns in the 1920s). The most important development in the quantitative evaluation of journal impact was Eugene Garfield’s impact factor. A journal’s impact factor is a measure of the average number of citations received by articles published in that journal. Garfield’s idea that citations can help define the importance and value of a source has had a huge, and ongoing, influence; it can be seen in Google’s PageRank algorithm, for example. The impact factor is a relatively recent addition to what is termed bibliometrics: the application of statistical and mathematical analysis to written publications, such as articles and books (Wikipedia: http://en.wikipedia.org/wiki/Bibliometrics).
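
To make that arithmetic concrete, here is a minimal sketch of the standard two-year impact factor calculation. The figures are invented for illustration and do not describe any real journal.

```python
def two_year_impact_factor(citations_this_year, citable_items_prev_two_years):
    """Classic two-year impact factor: citations received in year Y to items
    published in years Y-1 and Y-2, divided by the number of citable items
    the journal published in those two years."""
    return citations_this_year / citable_items_prev_two_years

# Invented example: 420 citations in 2014 to articles from 2012-2013,
# from 180 citable items published across those two years.
print(round(two_year_impact_factor(420, 180), 2))  # 2.33
```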

Indeed, we might argue that libraries have largely been responsible for introducing scholars to many of the metrics they use today. For example, the Journal Impact Factor (JIF) – a proprietary metric developed by Thomson Reuters based on the work of Garfield – was, in the 1980s, embedded in the Web of Knowledge platform that libraries provided access to for academics and researchers. This provided access not only to the content, but also to the metrics by which the value and impact of that content was being judged. What started as a peripheral part of the content libraries provided has grown in significance and importance in its own right.

The past few decades have seen the field of bibliometrics flourish. Accompanying the JIF in recent years have been a number of other bibliometric indicators, including the h-index, which measures the citations of a specific researcher’s publications; the Eigenfactor, which looks to score a journal’s importance; and the g-index, which quantifies productivity based on publication record. Not only has the spread of these measures increased, but they have also become entrenched in academic culture. So much so that when the UK’s Research Excellence Framework (REF) – which aims to assess the quality of research in UK higher education institutions – forbade its subject review panels to use the JIF and other journal rankings, informal polls and research suggested that these were still, overwhelmingly, being used (see, for example, Rohn, 2012).
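
As a rough illustration of how two of these indicators are derived, here is a short sketch computing an h-index and a g-index from a citation record. The citation counts are invented and do not describe any real researcher.

```python
def h_index(citations):
    """h-index: the largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, cites in enumerate(ranked, start=1) if cites >= rank)

def g_index(citations):
    """g-index: the largest g such that the top g papers together
    have at least g*g citations."""
    ranked = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, cites in enumerate(ranked, start=1):
        total += cites
        if total >= rank * rank:
            g = rank
    return g

# Invented citation counts for ten publications.
record = [25, 19, 12, 9, 7, 6, 3, 2, 1, 0]
print(h_index(record))  # 6
print(g_index(record))  # 9
```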

Not only have librarians used bibliometrics as part of their own strategic toolkit (supporting purchasing decisions or providing evidence for relegating or disposing of journal runs); bibliometrics have also been a key part of the services libraries provide to researchers and academics. Beyond helping shape collection management policies and strategy, these metrics have formed a core part of the evolving role of research and academic support provided by librarians. Bibliometrics are part of the work librarians undertake to help support and guide researchers, from enabling them to better understand and track their own impact through to increasing digital literacy and improving research methods (see, for example, Delasalle, 2011).

The ability of the impact factor, and bibliometrics in general, to provide a single, quantitative figure for the impact of a journal, and increasingly a researcher, makes them very attractive. But impact indicators have become so seductive that they are threatening scholarly communication itself.

This single number, the impact factor, is devouring science.

A Fixation on Measuring

‘Indeed, impact factors have assumed so much power, especially in the past five years that they are starting to control the scientific enterprise’.

(Monastersky, 2005)

It wasn’t meant to be like this. Garfield’s idea was meant to be a way for him to pick out the most important scientific journals and distinguish them from the lesser ones. It was a tool that, as we have seen, has uses ranging from informing library collection and purchasing decisions through to helping researchers make decisions about which journal to publish in. As Garfield himself says: ‘We never predicted that people would turn this into an evaluation tool for giving out grants and funding’ (Monastersky, 2005).

As these metrics became accessible online, so their influence grew. Indeed, it is an influence that seems to be affecting the very types of research and science being conducted: higher-impact journals tend to favour more fashionable and ‘sexy subjects’ (Schekman, 2013). It is the impact factor – as the exemplar research metric – that in the online world has seen its ubiquity and power grow to almost pestilential levels. It is beginning to have a ‘toxic influence’ on scholarly discourse and communication (Sample, 2013). As we have already seen with the UK REF example, attempts to reduce or remove the influence of the impact factor from assessments tend to have little effect. It is deeply embedded in institutional culture, helping determine promotions and funding. It has also led to accusations that it encourages researchers to pursue the fashionable topics favoured by high-impact journals, and to the charge that journals and publishers are gaming the system to inflate their own rankings (see, for example, Wilhite and Fong, 2012).

It has been the web that has given the impact factor and research metrics greater visibility and influence (for more on both how the web has affected research metrics and the failings of research metrics in general, see Andrew Booth’s ‘Metrics of the Trade: where have we come from’ in this volume). But it is that increased visibility that has sown the seeds of the impact factor’s own demise. Not only does research in a web-based world demand new and better metrics, but it also bestows upon researchers, students, institutions and libraries the ability to develop their own metrics: to measure what’s most important to them, and to get the data and tools they need to influence those things for the better.

A New Metrics Frontier

We are experiencing a great migration; scholarly workflows, debates and outputs are increasingly taking place online, on web-based spaces such as blogs, Twitter, Facebook, Wikipedia and so on. Such a migration challenges the very conventions of what research and scholarship looks like. Put simply, research on the web no longer needs to conform to the physical boundaries of a printed journal volume. It need not have a clearly defined beginning or end – it can be an on-going scholarly conversation. It can have the underlying data embedded in the article or it can include media such as video or music. It need not even be an article; it could be a blog post, a video, a presentation or a series of tweets. The possibilities are endless and help disrupt our current notions of scholarship and scholarly communications.

Connected to this sense that the boundaries of scholarship and its outputs are changing is the ability for us to make visible what was once hidden. With every page view, followed link or tweeted paper we are leaving a visible trace of our interactions with an article or academic output. Furthermore, even the very embryonic beginnings of a piece or area of research can be traced. Initial discussions between colleagues that would once have remained hidden can be traced across the web through blog posts and comments or a series of tweets. While scholarly communications has been slow to fully utilise the potential of the web, the web has had the effect of making what were once ‘backstage’ activities visible. It has led to these activities increasingly being ‘tagged, catalogued, and archived on blogs, Mendeley, Twitter, and elsewhere’ (Priem, 2011).

Understanding these new and emerging data sources, and how they might inform scholarly and impact metrics, has become known as altmetrics. Altmetrics are the various and diverse indicators that help scholars and institutions see what impact looks like. As the altmetrics manifesto states: ‘Altmetrics expand our view of what impact looks like, but also of what’s making the impact’ (Priem et al., 2010).

Libraries have been instrumental in helping shape and disseminate altmetrics to the researcher community. This role has solidified in recent years as new products, services and platforms have emerged to support the use of altmetrics in academia. These services and tools include, for example, Impact Story (http://impactstory.org), Plum Analytics (www.plumanalytics.com) and Altmetric (www.altmetric.com). The creation of these institution-friendly tools is helping to drive the adoption of these services and providing an important counter-balance to the powerful hold of traditional research metrics within academia.

Yet, ironically, many of these alternative metrics are deeply embedded in the previous metrics paradigm. As Andrew Booth makes clear in his chapter in this volume (‘Metrics of the Trade’): ‘A further irony is that the altmetric community seeks to establish credibility by mimicking its forebears’. Nevertheless, the library continues to play a role in the development and ongoing adoption of more diverse metrics and analysis. This exploration of new types of metrics is also rooted in the past, in capturing the library’s impact and contribution to student success. In understanding the library’s impact on teaching and learning we are beginning to see the start of a ‘turn’ in the library’s approach to metrics.

The Analytics-Turn in Libraries

As we saw earlier, the interest for libraries in how they contribute to the success of their students isn’t new.

Over the last few years libraries, and in particular academic libraries, have been developing more sophisticated, data-driven approaches to demonstrating the impact of library services and resources to the host institution and beyond. Over the last two decades in particular, there has been a growing body of literature on the impact of libraries on their users; literature that uses data to demonstrate the value of libraries to their users. This work has explored the relationship between the library, its resources and its spaces and the performance of the student. Early work tended to take place in an environment where print was still dominant (De Jager, 2002; Wells, 1995) and at a time when extracting and sharing data from library and institutional systems was difficult and time consuming. Some studies, from the public library sector (Sin and Kim, 2008; Suaiden, 2003), explored wider societal impacts and demographic usage patterns.

From around 2010 onwards there was a resurgence of data-driven strategy and analytics within the academic library sector. In particular, there has been significant and groundbreaking work by universities in the UK, US and Australia to see how the myriad data that flowed through the library and the wider institution could be harnessed for the benefit of the student and researcher. These libraries, at the University of Huddersfield in the UK, the University of Wollongong in Australia and the University of Minnesota in the US, are gathering and analysing data both from the library and from across external enterprise systems to demonstrate the value that they bring to the wider life of the institution and the success of their students and researchers (for case studies of these three projects see Showers, 2015).

What marks this work out from previous attempts is the diversity of data being collected: the libraries are interested in data from across the library’s systems and services (gate count, e-resource usage, computer log-ins) as well as data from across the institution (student records, student services, registry, IT). Taken together these data sets are more powerful than they would be individually. But this diverse data isn’t collected for its own sake, or because it’s easy to get. Rather, the data is selected and tested to ensure it is able to provide real insights, insights that can then be acted upon. It tells the library something it can respond to; it is actionable data (the data might, for example, highlight differences in how the library is used at certain times, allowing the library to tailor services for specific times of the day). And, finally, the data isn’t just about helping improve the user experience for existing services and systems. It is also being used to help develop and underpin new types of services and interventions. These new services can be more intimately tailored to the needs of users, or groups of users, increasing the value of those services and of the library overall.

It is worth looking briefly at the work led in the UK by the University of Huddersfield. The Library Impact Data Project (LIDP), funded by Jisc, looked at data from over 33,000 undergraduate students across eight universities. The project aimed to test the hypothesis that there is a statistically significant relationship, across a number of universities, between library activity data and student attainment, although it is important to note that this relationship is not causal. The project was able to support the hypothesis (Stone et al., 2012), and was itself supported by similar studies in Australia (Cox and Jantti, 2012; Jantti and Cox, 2013) and the United States (Nackerud et al., 2013).

The success of the LIDP work led to a second phase of the project that examined data from 3,000 full-time undergraduates at the University of Huddersfield. This phase used continuous rather than categorical data, which allowed the project to do more with the data. The aim of this study was to dig deeper into the data and look at a number of new data categories, such as demographic information, discipline, retention, on- and off-campus use, breadth and depth of e-resource usage and UCAS data (Stone and Collins, 2013). Using students’ final grades as a percentage, rather than a degree classification, also allowed phase two to demonstrate a correlation for the phase one hypothesis, in addition to the statistical significance found in phase one, further strengthening the findings from the first phase of the study.
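
To illustrate what working with continuous data makes possible, here is a hedged sketch of the kind of analysis this implies. It is not the project’s actual code or data: the usage measures, grades and column names below are invented, and any correlation found this way still says nothing about causation.

```python
# Sketch only: testing for a usage/attainment relationship with continuous
# data, in the spirit of LIDP phase two. All figures are invented.
import pandas as pd
from scipy import stats

students = pd.DataFrame({
    "eresource_logins": [5, 42, 18, 0, 63, 27, 9, 51],    # invented counts
    "items_borrowed":   [2, 30, 11, 1, 25, 19, 4, 37],    # invented counts
    "final_grade_pct":  [48, 71, 62, 41, 76, 68, 55, 74], # invented grades (%)
})

for measure in ("eresource_logins", "items_borrowed"):
    rho, p = stats.spearmanr(students[measure], students["final_grade_pct"])
    print(f"{measure}: Spearman rho = {rho:.2f}, p = {p:.3f}")
```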

What these projects have helped to demonstrate is not only that the library plays a critical role in the life and success of its students, but also that libraries are increasingly keen to make sure they can analyse the data held in their systems and services in order to improve those services and systems. This data is enabling libraries to set entirely new metrics for what success looks like, and, by analysing it effectively, new types of service and intervention are beginning to emerge. Indeed, as the Huddersfield-led work has demonstrated, much of the benefit of analysing data comes when you are able to aggregate large amounts of data together, from diverse sources, across different institutions.

Shared Analytics and Metrics Services

Libraries have a long and successful history of collaborating and sharing services and systems at regional, sector and national levels. In the UK such services are exemplified by, among others, the Journal Usage Statistics Portal (JUSP: http://jusp.mimas.ac.uk/) for journal usage data; Institutional Repository Usage Statistics (IRUS: www.irus.mimas.ac.uk) for repository usage statistics; and COPAC Collections Management (http://copac.ac.uk/innovations/collections-management), a shared collections management tool that allows libraries to compare their local collection against those of other libraries in the UK, informing collection policies both in terms of what might be relegated and in support of purchase decisions.

These services highlight that the data libraries collect and the metrics used should be primarily focused on enabling the library to act on that data. Increasingly we need to move to a model where the data, and to some extent the analysis, is available at the ‘push of a button’ (or as near to that as possible). The focus of resources and energies should be on acting upon the data, not collecting it.

Building on this appetite for shared data services, and on the innovative work on library impact led by Huddersfield in the UK, Jisc funded the Library Analytics and Metrics Project (LAMP) to develop a shared, national service that enables institutions and libraries to make sense of their disparate and diverse data sets, and to spend their precious time and resources acting upon the resulting data-driven insights to improve the student and researcher experience (Showers and Stone, 2014). Working with eight institutional partners, the project is ingesting a variety of library and institutional data in order to clean and normalise it and present it back to the library via an intuitive data dashboard. The prototype aims to remove much of the burden of collecting disparate data sets and to undertake some of the initial analysis.
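
As a rough sketch of the kind of cleaning and normalisation step such a service implies, the snippet below maps two differently shaped institutional exports onto one common schema. The field names and mappings are assumptions made for illustration, not LAMP’s actual ingest format.

```python
# Illustrative only: renaming each institution's columns to a shared schema
# before analysis. All column headings and values here are invented.
import csv
from io import StringIO

def normalise(row, mapping):
    """Map one institution's column names onto the shared schema and coerce counts."""
    out = {common: row.get(local, "") for common, local in mapping.items()}
    for field in ("loans", "eresource_sessions"):
        out[field] = int(out[field] or 0)
    return out

# Two invented institutional exports with different column headings.
inst_a = StringIO("id,dept,issues,ezproxy_hits\n1001,History,12,40\n")
inst_b = StringIO("student,school,loan_count,athens_logins\n2002,Law,3,17\n")

mappings = {
    "A": {"student_id": "id", "department": "dept",
          "loans": "issues", "eresource_sessions": "ezproxy_hits"},
    "B": {"student_id": "student", "department": "school",
          "loans": "loan_count", "eresource_sessions": "athens_logins"},
}

records = []
for name, source in (("A", inst_a), ("B", inst_b)):
    records.extend(normalise(row, mappings[name]) for row in csv.DictReader(source))

print(records)
```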

LAMP will use the opportunities of scale to access a much larger number of datasets, analysable both at the local, institutional level and at the shared, above-campus level. In both cases the hope is to gain new insights, such as into national usage patterns, and to enhance services and functionality for institutions, such as benchmarking and personalisation. Such an approach might enable everything from very localised, individual insights – such as alerts for students who may be at risk of failing their course, or for a cohort of students who need specialised services – through to national measures of what success looks like. Indeed, it is possible to imagine services like LAMP working with projects such as those in Australia (Wollongong) and the US (Minnesota) to begin establishing international benchmarks for their libraries. In an increasingly international education environment, such an approach may become a necessity.

Like altmetrics, many of these shared library analytics services, exemplified by LAMP, offer the potential for libraries to increasingly tailor their services and systems to the individual user. By filtering the open web, altmetrics presents the potential for personalised information flows. By understanding user interactions with the library and its services, shared analytics and data services enable better and more intimate services for their users.

The evolution of library metrics is inevitably drawing us toward a position where we are exploring ever deeper into data for nuanced insights. This means that libraries must explore mixed methods; we must measure and count, as well as ask open-ended questions. We are witnessing the beginnings of a rebalancing of the data and analytics methodology: a rebalancing that incorporates ways that enable the user to tell their own story. The role of the librarian, archivist or curator is to listen and, when appropriate, to question the narrative; this is, after all, an active dialogue with the user, not passive listening.

Listening to the User

A mixed-methods approach, where both quantitative and qualitative approaches are taken, enables the library service to understand both what the user actually does and the context for those actions and the experience that those interactions provide. This coalescence of data is incredibly powerful, both for understanding how current services are used and might be improved and also for articulating hidden or emerging needs and requirements that users themselves may not be entirely aware of. For example, the way we interact with a space may leave traces of a need that could never be fully articulated by the user. Yet, should that space be altered to meet that nascent need, the user’s experience might be positively transformed.

Donna Lanclos is an anthropologist who works at the University of North Carolina, Charlotte as the library’s ethnographer. She blogs at Anthropologist in the Stacks (http://atkinsanthro.blogspot.co.uk) and has a role that ensures the decisions the library makes about services, systems and space are all anchored by the behaviours, practices and motivations of the students at the institution.

Donna’s work covers a lot of different areas, including space planning. Her work has explored how students use a 24-hour library space late at night and early in the morning. Unsurprisingly, or maybe surprisingly, a significant amount of that usage is students sleeping. Donna’s work has involved mapping the areas that students prefer to sleep in, to understand how the library can redesign those spaces to take these behaviours into account. It doesn’t have to be students slumped on desks! (Lanclos, 2013).

Understanding how students interact and engage with the space means that the library can respond to how students actually engage and behave, rather than how it wishes, or would like, them to engage.

Similar kinds of insight are required for online, web-based interactions as well as physical ones. Longitudinal research into student behaviours, such as Visitors and Residents (Jisc, 2014), provides critical insights into the behaviours of students in a digital information environment as they progress through the educational system (from school to postgraduate). The V&R work is already challenging the assumptions we have about how students behave in an online environment and how they learn and collaborate, and is identifying new modes of engagement such as ‘learning black markets’ (White, 2011). With the learning black market, for example, the work uncovers the informal and peripheral collaboration and studying that takes place just beyond the formal structures and services of the institution. Here activities might include using Facebook to collaborate on an essay, or the use of sources such as Wikipedia and the interrogation of those sources through online or text messaging. There may be opportunities for the library and other services to support the student in understanding and interrogating sources where these informal activities intersect with more formal institutional activities (such as researching for an essay). These may not be the traditional forms of support offered by the library, but they have the potential to transform the experience of the student. They also raise serious questions about whether the library and institution should even be engaging in these informal student spaces. Is this a space that should remain beyond the reach of formal interventions?

By itself this analytics data provides only a part of the overall story. Analytics data is very effective for telling you what your users are doing, and quite possibly how they are doing it. But what it cannot tell you is why they are doing it.

It is the ‘why’ which also opens up the possibility of identifying new potential services and interventions that can meet undeclared user needs and transform the experience for users. More fundamentally, the ethnographic and mixed-methods approaches described in this chapter place the user at the heart of developments. Increasingly, our metrics for success depend not just on numbers but also on a narrative from the user or visitor that includes words such as ‘delightful’, ‘surprising’ and ‘amazing’.

Conclusion

What was once hidden or obscure is increasingly visible. The invisible traces of an academic reading an unpublished paper, or of a student collaborating online on an essay due the following day: these are increasingly the places and spaces that mark the boundaries of the new metrics frontier for libraries and academic institutions.

The academic library is ideally placed to be a key partner or strategic lead in this emerging area. Libraries are accustomed to measuring things. Some you’d expect: numbers of users, number of downloads, the amount of available shelf space. Some you might not, such as quantifying the value the library contributes to student success or recording the number of users sleeping (and where they are sleeping!). Indeed, it has been the academic library that has so often been at the forefront of attempts to measure what is important for the university, its students and staff and the library itself to understand and change for the better.

And, as the evolution of impact and value metrics has demonstrated, the library is ideally positioned to challenge and question an over-reliance on any one form of metric or method of analysis. Indeed, as the library has developed metrics that ensure the user sits at their heart, so it can provide the same user-centric approach when supporting wider, strategic engagements with learning and research metrics across the institution.

The future of library metrics is about librarians constantly challenging and reassessing the metrics they use to measure the things that are most important to their users and to themselves; ensuring they have the right tools and measures to shine a light into areas that once seemed impossibly dark, but where the greatest rewards can be uncovered.

References

  • Barkey, Patrick (1965) ‘Patterns of Student Use of a College Library’. College & Research Libraries, March 1965, no. 2, 115-118.
  • Cox, Brian and Jantti, Margi (2012) ‘Discovering the Impact of Library Use and Student Performance’. Educause Review Online, July 2012.
  • Crease, Robert (2011) ‘Measurement and its discontents’. New York Times, 22 October 2011.
  • De Jager, K. (2002) ‘Successful students: does the library make a difference?’ Performance Measurement and Metrics, 3 (3), 140-144.
  • Delasalle, Jenny (2011) ‘Research evaluation: bibliometrics and the librarian’. SCONUL Focus, 53, 16-19.
  • Freedman, David H. (2010) ‘Why Scientific Studies Are So Often Wrong: The Streetlight Effect’. Discover, July/August 2010.
  • Gross, P. L. K. and Gross, E. M. (1927) ‘College Libraries and Chemical Education’. Science, 66, 385-389.
  • Jantti, Margi and Cox, Brian (2013) ‘Measuring the Value of Library Resources and Student Academic Performance through Relational Datasets’. Evidence Based Library and Information Practice, 8 (2), 163-171.
  • Jisc (2014) Evaluating Digital Services: The Visitors and Residents Approach: http://www.jiscinfonet.ac.uk/infokits/evaluating-services/visitors-residents/
  • Kramer, Lloyd and Kramer, Martha (1968) ‘The College Library and the Drop-Out’. College & Research Libraries, 29, July 1968, 310-312.
  • Lanclos, Donna (2013) ‘Sleeping and Successful Library Spaces’, Anthropologist in the Stacks: http://atkinsanthro.blogspot.co.uk/2013/08/sleeping-and-successful-library-spaces.html
  • Monastersky, Richard (2005) ‘The Number That’s Devouring Science’. Chronicle of Higher Education, 1 October 2005.
  • Nackerud, S. et al. (2013) ‘Analyzing Demographics: Assessing Library Use Across the Institution’. Libraries and the Academy, 13 (2), 131-145.
  • Priem, Jason (2011) ‘As scholars undertake a great migration to online publishing, altmetrics stands to provide an academic measurement of twitter and other online activity’. LSE Impact of Social Sciences blog: http://blogs.lse.ac.uk/impactofsocialsciences/2011/11/21/altmetrics-twitter
  • Priem, Jason, Taraborelli, Dario, Groth, Paul and Neylon, Cameron (2010) ‘altmetrics: a manifesto’: http://altmetrics.org/manifesto
  • Rohn, Jenny (2012) ‘Business as usual in judging the worth of a researcher?’ The Guardian, 30 November 2012.
  • Sample, Ian (2013) ‘Nobel winner declares boycott of top science journals’. The Guardian, 9 December 2013.
  • Schekman, Randy (2013) ‘How journals like Nature, Cell and Science are damaging science’. The Guardian, 9 December 2013.
  • Showers, Ben (2015) Library Analytics and Metrics. London: Facet Publishing.
  • Showers, Ben and Stone, Graham (2014) ‘Safety in Numbers: Developing a Shared Analytics Service for Academic Libraries’. Performance Measurement and Metrics, 15 (1/2), 13-22.
  • Sin, S-C. J. and Kim, K-S. (2008) ‘Use and non-use of public libraries in the information age: a logistic regression analysis of household characteristics and library services variables’. Library & Information Science Research, 30 (3), 207-215.
  • Stone, G. et al. (2012) ‘Library Impact Data Project: hit, miss or maybe’. In Proving value in challenging times: proceedings of the 9th Northumbria international conference on performance measurement in libraries and information services, University of York, York, 385-390.
  • Stone, G. and Collins, E. (2013) ‘Library usage and demographic characteristics of undergraduate students in a UK university’. Performance Measurement and Metrics, 14 (1), 25-35.
  • Suaiden, E. J. (2003) ‘The social impact of public libraries’. Library Review, 52 (8), 379-387.
  • White, David (2011) ‘The Learning Black Market’: http://tallblog.conted.ox.ac.uk/index.php/2011/09/30/the-learning-black-market/
  • Wilhite, Allen W. and Fong, Eric A. (2012) ‘Coercive Citation in Academic Publishing’. Science, 335 (6068), 542-543.


The Legal Bit

This is a preprint of a chapter accepted for publication by Facet Publishing. This extract has been taken from the author’s original manuscript and has not been edited. The definitive version of this piece may be found in Tattersall, A (Ed). Altmetrics: A practical guide for librarians, researchers and academics. Facet, London. ISBN: 9781783300105 which can be purchased from http://www.facetpublishing.co.uk/title.php?id=300105#about-tab.  The author agrees not to update the preprint or replace it with the published version of the chapter.
