[Citation counts fail to measure the impact of research]

Added on 17/05/2018

In academia, I have seen many people rely on citation counts when it comes to judging the importance or the impact of research papers. Culturally, building up your count is a good idea - in the end, your academic career progress is often judged by the so-called "h-index", which measures how much your research is being cited. However, I recently had a detailed look at what exactly the citations, specifically those reported by Google Scholar, amount to. Perhaps unsurprisingly to some, I discovered that only a relatively small fraction of the reported citations correspond to research being applied or reproduced in a meaningful way.

The paper I looked at describes the Grow-When-Required neural network architecture for unsupervised learning:
Marsland, S., Nehmzow, U., & Shapiro, J. (2005). On-line novelty detection for autonomous mobile robots. Robotics and Autonomous Systems, 51(2–3), 191–206.

I wanted to find out how people have used the neural network since the paper was published.

At the end of April 2018, Google Scholar reported 87 citations, out of which:

  • 34 were unpublished research, i.e., either Master's or PhD projects, or dead links
  • 25 only mentioned the paper as one of many references for a simple phrase, such as "novelty detection" or "one-class classification". This was the case, for example, in:
    • Volos, Ch. K., Kyprianidis, I. M., & Stouboulos, N. (2012). A chaotic path planning generator for autonomous mobile robots. Robotics and Autonomous Systems, 60(4), 651–656.
    • Fink, O., Zio, E., & Weidmann, U. (2015). Novelty detection by multivariate kernel density estimation and growing neural gas algorithm. Mechanical Systems and Signal Processing, 50–51, 427–436.
  • 17 mentioned the paper in their background reading and described the neural network, but provided no discussion or application. For example:
    • Christensen, A. L., O’Grady, R., Birattari, M., & Dorigo, M. (2008). Fault detection in autonomous robots based on fault injection and learning. Autonomous Robots, 24(1), 49–67.
    • Ghesmoune, M., Lebbah, M., & Azzag, H. (2016). State-of-the-art on clustering data streams. Big Data Analytics, 1(13).
  • The remaining 11 used, improved on, or compared the architecture to a different one. For example:
    • Merrick, K., Siddique, N., & Rano, I. (2016). Experience-based generation of maintenance and achievement goals on a mobile robot. Paladyn, Journal of Behavioral Robotics, 7(1), 67–84.
    • Parisi, G. I., Tani, J., Weber, C., & Wermter, S. (2017). Lifelong learning of human actions with deep neural network self-organization. Neural Networks, 96, 137–149.


In other words, only 13% of the reported citations (11 out of 87) were those on which the original paper had a real impact. I think this points to a larger problem where academic publishing is treated as a numbers game, especially if we take into account that in some research groups, people tend to cite each other, as well as their own previous work, creating an illusion of a larger impact.
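
To make the arithmetic explicit, here is a small Python sketch (purely illustrative, using the category counts listed above) that reproduces the percentage figures:

    # Illustrative only: recompute the citation breakdown reported above.
    # The counts come from my manual review of the 87 Google Scholar citations.
    counts = {
        "unpublished work or dead links": 34,
        "passing mention for a simple phrase": 25,
        "background description only": 17,
        "used, improved on, or compared": 11,
    }

    total = sum(counts.values())  # 87
    for category, n in counts.items():
        print(f"{category}: {n} ({100 * n / total:.1f}%)")

    # The last category is the one with real impact: 11 / 87 is roughly 12.6%, i.e. about 13%.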

I wonder if metrics better than the h-index or raw citation counts could be used in the future. For example, machine learning techniques could be used to "read" papers and evaluate the context within which citations are used. This would then allow us to categorise citations and get a better picture of how research is actually being used.
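
As a rough illustration of what such a tool might look like, here is a minimal Python sketch (hypothetical, not an existing system) that trains a simple text classifier on the sentences in which a citation appears. A real solution would need access to papers' full texts and a properly labelled corpus; the sentences and labels below are made up for the example:

    # Hypothetical sketch: classify the sentence in which a citation appears
    # into coarse categories, as a first step towards context-aware metrics.
    # Requires scikit-learn; the tiny training set below is invented.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    train_sentences = [
        "Novelty detection has been studied extensively [12].",
        "One-class classification is a related problem [12].",
        "The GWR network [12] grows new nodes when the input is poorly matched.",
        "Marsland et al. [12] describe an online novelty detection architecture.",
        "We apply the GWR network [12] to detect faults on our mobile robot.",
        "Our method extends [12] by adding a recurrent context layer.",
    ]
    train_labels = ["passing", "passing", "background", "background", "applied", "applied"]

    classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    classifier.fit(train_sentences, train_labels)

    # Classify a new citing sentence extracted from a paper's full text.
    print(classifier.predict(["We compare our detector against the GWR network [12]."]))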

Disclaimer: This blog post is a commentary on the citation count culture in general and is in no way meant to criticise the importance of the work published by Marsland. I personally consider Marsland's work brilliant, and I am building on it in my own research. Please leave a comment if you have similar or contradictory experiences with published papers.




Designing Effective Roadmaps for Robotics Innovation

Automated factories, autonomous delivery drones, self-driving cars: these and similar technologies will soon touch every aspect of our lives. An engaging discussion about how these technologies are regulated and innovated took place at the IROS 2017 conference.

Coding for tomorrow: Why is good code important?

"Why should I care about how my code is written, as long as it works?" I will argue here that well-structured and well-written code not only saves time on a project, it also helps you to invest your time in a way that is meaningful for your future work.

Have I Met an Android?

I was browsing game discussion forums on Steam and came across a post that did not make much sense. Have I talked to an AI?

pyCreeper

The main purpose of pyCreeper is to wrap the tens of lines of Python code required to produce publication-quality graphs into functions. It takes away the need to understand various quirks of matplotlib and gives you back ready-to-use and well-documented code.

Novelty detection with robots using the Grow-When-Required Neural Network

The Grow-When-Required Neural Network implementation in simulated robot experiments using the ARGoS robot simulator.

Fast Data Analysis Using C++ and Python

C++ code that processes data and makes it available to Python, significantly improving the execution speed.