Abstract
Much debate has surrounded the misapplication of metrics in research assessment. This concern led to the launch of the Declaration on Research Assessment (DORA), an initiative that has elicited opposing viewpoints. However, the topics discussed around DORA have not been formally identified, especially in participatory environments outside the scholarly communication process, such as social networks. This paper contributes to that end by analyzing 20,717 DORA-related tweets published from 2015 to 2022. The results show an increasing volume of tweets, mainly promotional and informative, but with limited user participation, whether commenting on or otherwise engaging with the tweets, yielding a scarcely polarized conversation driven primarily by a few DORA promoters. While a varied set of discussion topics is found (especially "Open science and research assessment," "Academics career assessment & innovation," and "Journal Impact Factor"), the DORA debate appears to form part of broader conversations on research evaluation and open science. Further studies are needed to determine whether these results are restricted to Twitter or reveal more general patterns. The findings may interest both evaluating and evaluated agents with regard to their interests and concerns around reforms in research evaluation.
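As a rough illustration of the descriptive analysis behind the tweet-volume claim above, the sketch below tallies tweets per year with dplyr. It is a minimal, hypothetical example, not the authors' pipeline: the data frame `tweets` and its columns `id` and `created_at` are assumptions standing in for the real dataset.

```r
# Minimal sketch (not the authors' pipeline): yearly volume of
# DORA-related tweets. The `tweets` data frame and its columns
# are hypothetical stand-ins for the real 20,717-tweet dataset.
library(dplyr)

tweets <- tibble::tibble(
  id = 1:6,
  created_at = as.Date(c("2015-05-16", "2017-03-02", "2019-11-20",
                         "2020-06-01", "2021-01-15", "2022-04-10"))
)

tweets %>%
  mutate(year = as.integer(format(created_at, "%Y"))) %>%  # extract year
  count(year, name = "n_tweets") %>%                       # tweets per year
  arrange(year)
```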
Funding
This work was supported by the Conselleria de Innovación, Universidades, Ciencia y Sociedad Digital, Generalitat Valenciana (Grant GV/2021/141; Enrique Orduña-Malea).
About this article
Cite this article
Orduña-Malea, E., Bautista-Puig, N. Research assessment under debate: disentangling the interest around the DORA declaration on Twitter. Scientometrics 129, 537–559 (2024). https://doi.org/10.1007/s11192-023-04872-6