nep-sog New Economics Papers
on Sociology of Economics
Issue of 2017‒12‒18
four papers chosen by
Jonas Holmström
Axventure AB

  1. Ranking Russian Economic Journals: Scientific Method or a “Numbers Game”? By Рубинштейн Александр Яковлевич
  2. Mandating Access: Assessing the NIH's Public Access Policy By Staudt, Joseph
  3. A “citation surplus” should be added to the h-index By Da Silva, Sergio
  4. Rising Stars By Battistin, Erich; Ovidi, Marco

  1. By: Рубинштейн Александр Яковлевич
    Abstract: This article addresses general problems of journal ranking through a critical analysis of three ratings of Russian economic journals proposed in recent years. The ratings are built, respectively, on Russian Science Citation Index (RSCI) data, on expert surveys, and on a combination of the two approaches. Fundamental shortcomings of each rating are identified, and the vulnerable points of such exercises are shown to be the relatively arbitrary choice of bibliometric indicators and their weak correlation with the academic authority of journals, insufficiently substantiated procedures for aggregating the indicators and/or expert assessments, and unrepresentative expert surveys. The article presents a “passive experiment” that compares the orderings of journals produced by the three ratings and by three additional criteria; a short sketch of one way to compare such orderings follows this entry. The overall conclusion is that research of this kind remains underdeveloped and that there are no real grounds for applying these ratings in the practice of science management or in incentive schemes for researchers.
    Keywords: journal ranking, bibliometric indicators, citation, expert analysis, aggregation, arranging
    JEL: A11 A14 I23
    URL: http://d.repec.org/n?u=RePEc:rua:wpaper:a:pru175:ye:2016:1&r=sog
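
    Item 1's “passive experiment” juxtaposes the orderings produced by competing ratings. Below is a minimal sketch of one way to quantify agreement between two such orderings, using Kendall's tau; the six journal ranks are invented for illustration and do not come from the article:

        # Compare how consistently two competing ratings order the same journals.
        from scipy.stats import kendalltau

        # Hypothetical ranks of six journals under two ratings (not real data).
        rating_a = [1, 2, 3, 4, 5, 6]
        rating_b = [2, 1, 4, 3, 6, 5]

        tau, p_value = kendalltau(rating_a, rating_b)
        print(f"Kendall's tau = {tau:.2f} (p = {p_value:.3f})")  # tau near 1 means similar orderings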
  2. By: Staudt, Joseph
    Abstract: In 2008, the National Institutes of Health (NIH) mandated that the full text of NIH-supported articles be made freely available on PubMed Central (PMC) -- the largest and most commonly used repository of biomedical literature. This paper examines how this "PMC mandate" impacted publishing patterns in biomedicine and researcher access to the biomedical literature. Using ~1 million NIH articles and several matched comparison samples, I find that NIH articles are more likely to be published in traditional subscription-based journals (as opposed to "open access" journals) after the mandate. This indicates that the mandate did not induce widespread discrimination, by subscription-based journals, against NIH articles. I also find that the mandate did not increase the number of forward citations to NIH articles published in subscription-based journals. This is consistent with researchers having widespread access to the biomedical literature prior to the mandate, leaving little room for the mandate to increase access.
    Keywords: economics of science, open access, nih, nih public access policy, policy evaluation
    JEL: O31 O34 O38
    Date: 2017–11
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:82981&r=sog
  3. By: Da Silva, Sergio
    Abstract: The h-index is the largest number h such that h publications have at least h citations each. The index reflects both the number of publications and the number of citations per publication. One unnoticed deficiency of this metric is that it is Pareto-inefficient: a “citation surplus” would be absent, and the h-index would thus be efficient for a researcher, only if each of the h papers at or above the h-index received exactly h citations. This inefficiency would not be of great concern if citation counts were normally distributed; however, citation counts ranked from top to bottom do not decay exponentially but follow the power law known in the literature as Lotka’s law. To remedy this deficiency, I suggest the h-index be supplemented by the researcher’s citation surplus; a worked computation of both quantities follows this entry.
    Keywords: h-index, scientific productivity, scientometrics, Pareto-efficiency
    JEL: O30
    Date: 2017
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:83176&r=sog
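
    Item 3 defines the h-index and proposes supplementing it with a citation surplus. The following is a minimal sketch of those two quantities under my reading of the abstract; the paper supplies no code, and the sample citation counts are invented:

        def h_index(citations):
            # Largest h such that h publications have at least h citations each.
            ranked = sorted(citations, reverse=True)
            h = 0
            for rank, cites in enumerate(ranked, start=1):
                if cites >= rank:
                    h = rank
                else:
                    break
            return h

        def citation_surplus(citations):
            # Citations beyond h accumulated by the h papers defining the index.
            ranked = sorted(citations, reverse=True)
            h = h_index(citations)
            return sum(c - h for c in ranked[:h])

        # A power-law-like (Lotka) citation profile rather than an exponential one.
        cites = [120, 40, 22, 15, 9, 6, 4, 3, 1, 0]
        print(h_index(cites))           # 6: six papers each have at least 6 citations
        print(citation_surplus(cites))  # (120-6)+(40-6)+(22-6)+(15-6)+(9-6)+0 = 176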
  4. By: Battistin, Erich; Ovidi, Marco
    Abstract: We use the UK's 2014 Research Excellence Framework (REF) to study which attributes characterize a top-scoring (four-star) publication in Economics and Econometrics. We frame the analysis as a classification problem and, using information in official documents, derive conditions to infer the unobservable score that panellists awarded to each publication. Juxtaposing institutions' submissions with REF outcomes reveals the unobservable pass-marks used for assigning quality levels, which respond to journal prestige as measured by the Thomson Reuters Article Influence Score; a toy illustration of this identification idea follows this entry. Exploiting this statistical feature, the econometric analysis reveals that other publication attributes, possibly unobservable to us, contribute little to awarded quality conditional on the Article Influence Score. We conclude that, in large-scale and costly evaluations such as the REF, the time-consuming task of peer review should be devoted to publications that do not appear in academic outlets with unambiguously top-scoring bibliometric indicators of journal impact. Our model also predicts a ranking of academic journals consistent with the classification of REF panellists.
    Keywords: Education Policy; Higher education; Journal Rankings; Research funding
    JEL: H52 H83 I23 I28
    Date: 2017–12
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:12488&r=sog
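
    Item 4 infers unobserved pass-marks from observed quality levels. Below is a toy sketch of that identification idea, entirely of my own construction (the AIS values and cut-offs are simulated, not the paper's data): if panellists award stars by comparing a journal's Article Influence Score (AIS) to fixed pass-marks, the observed shares of each quality level pin the pass-marks down as AIS quantiles.

        import numpy as np

        rng = np.random.default_rng(0)
        ais = rng.lognormal(mean=0.0, sigma=0.8, size=1000)  # simulated journal AIS

        true_cuts = [0.8, 1.5, 2.5]                  # unobserved pass-marks (assumed)
        stars = 1 + np.searchsorted(true_cuts, ais)  # award 1*-4* by AIS threshold

        # Observed shares of each quality level recover the cuts as AIS quantiles.
        shares = np.bincount(stars, minlength=5)[1:] / len(stars)
        est_cuts = np.quantile(ais, np.cumsum(shares)[:-1])
        print(est_cuts)  # approximately [0.8, 1.5, 2.5]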

This nep-sog issue is ©2017 by Jonas Holmström. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.