There has been a surge of new scholarly communication tools in recent years. But how are researchers incorporating these tools into their research workflows? Jeroen Bosman and Bianca Kramer are conducting a global survey to investigate the choices researchers are making and why. Insights from these surveys will be valuable for libraries, research support and funders as well as for researchers themselves.
Are we witnessing a major overhaul of the rules and tools of scholarly communication? In the past six months alone, this blog has featured posts on all phases of the research cycle: from Wikipedia (discovery) to replication (analysis), Chrome extensions for reference management (writing), the Open Library of Humanities and RIO Journal (publishing), Twitter and blogging (outreach), and altmetrics, the R-index and Publons (peer review and assessment). How can researchers get a grip on this myriad of new tools and standards? Should they drop everything they have always taken for granted? Will switching to open tools make research workflows more efficient? And should that be the goal anyway? Our current research may help researchers and other stakeholders understand what is going on.
Avalanche of tools
Almost half of the scholarly communication tools in our database have been created since 2013. The rate at which new tools appear partly reflects the relative ease with which one can create online tools. At the same time, tool creators apparently deem it important to build tools that support new ways of working or that repair faults and omissions in existing tools offered by the major players (be they publishers, tech companies or venerable societies). The push for new tools comes not only from funders (e.g. demanding data archiving or Open Access) but also from researchers who want to capitalise on the collaboration possibilities offered by the internet. Especially for experimenting, collecting and mining data, writing, journal selection, publishing and outreach, we are witnessing a surge of new tools.
We use a simple model to get a grip on this abundance of tools. The G-E-O model asks whether a tool makes science Good, Efficient or Open. Tools that make science more open are those facilitating open access, open data, open peer review or even open drafting. Many outreach tools also contribute to openness, although barriers remain, such as copyright issues and the business models of the big publishers. Tools that make science more efficient mostly work through standardisation (such as DOI and ORCID), interoperability and well-connected platforms; these are largely technical improvements and standards that few people oppose. Finally, there are tools that encourage 'good' science, where good mostly implies reproducible, transparent and fair. This concerns reporting, acknowledgment, credit, assessment and quality checks, all of which are part of research governance: the set of rules and norms created by (associations of) universities, societies and publishers. So far, we have seen enormous numbers of efficiency tools and a fair share of openness tools, but only a handful of tools that explicitly aim for reproducible or fair science.
To investigate the choices researchers make in this respect, we are currently carrying out a global multilingual survey asking researchers across all disciplines, career stages and countries about their tool usage for 17 research activities. Some 5,500 researchers worldwide have responded so far, and we hope to double that number before the survey closes in February 2016. The survey uses self-selected non-probability sampling; the bias inherent in this approach will be addressed through targeted distribution while the survey is running and through statistical adjustments afterwards. Some 60 institutions across the globe have already partnered with us to distribute the survey within their own organisations using custom URLs, allowing us to share institutionally tagged data with them.
We think the survey can become one of the largest multilingual surveys of researcher practices. Insights from these data, which will be made public in raw, anonymised form, will be valuable for libraries, research support and funders, as well as for researchers themselves. Respondents also receive immediate preliminary feedback: for the six phases of the research cycle, a radar chart shows whether their tool usage is more traditional or more innovative than the average of their peer group. Understanding the reasons for and implications of changing tool usage is the next step of our research, and this will be the subject of follow-up in-depth studies with a subset of respondents and with tool creators.
This blog article was originally published on The Impact Blog and is licensed under a Creative Commons Attribution 3.0 Unported License. Minor grammatical changes have been made in line with Erudito’s publication guidelines.
Its authors Jeroen Bosman and Bianca Kramer work at Utrecht University Library, as a geosciences librarian and a librarian for biomedical sciences, respectively. They are charting the changing landscape of scholarly communication in their project Innovations in Scholarly Communication and are members of the steering group of the Force11 Scholarly Commons initiative. They can be found on Twitter as @jeroenbosman and @MsPhelps.