
Scholarly Communication

...by following The Scholarly Kitchen

...by reading Retraction Watch

  • Researchers to pull duplicate submission after reviewer concerns and Retraction Watch inquiry (May 22, 2025)

    While doing a literature review earlier this spring, a human factors researcher came across a paper he had peer-reviewed. One problem: He had reviewed it – and recommended against publishing – for a different journal not long before the publication date of the paper he was now looking at. 

    Based on the published paper and documents shared with us, it appears the authors submitted the same manuscript to the journals Applied Sciences and Virtual Reality within 11 days of each other, and withdrew one version when the other was published. 

    And after we reached out to the authors, the lead author told us they plan to withdraw the published version next week – a step the editor of the journal had called for in April but one its publisher, MDPI, had not yet decided to take. 

    The journal Applied Sciences published the paper, “Correlations between SSQ Scores and ECG Data during Virtual Reality Walking by Display Type,” on March 4, 2024.

    Both the first author of the paper, Mi-Hyun Choi, and the senior author, Jin Seung Choi, are professors at Konkuk University in Seoul, South Korea.

    The reviewer, who asked us to remain anonymous, received the manuscript from editors at Virtual Reality on January 22. That manuscript had a submission date of January 2, less than two weeks before the authors submitted it to Applied Sciences. 

    “I reviewed this human-subject study and noticed, curiously, that there was no ethics statement on the paper detailing whether there had been any ethics approval,” he said, noting that he raised these concerns at the time to the editor-in-chief of Virtual Reality.

    In the published version, the authors state the protocol was approved by the university’s Institutional Review Committee. 

    In the review report we saw, the reviewer wrote the manuscript couldn’t be published in its current form because it didn’t properly cite prior research on the topic, lacked a novel thesis and lacked statistical rigor.

    “In isolation, each of these omissions could warrant a minor revision, however, the manuscript is quite scant on detail and unfortunately reads more like a conference paper or a short-and-sweet-paper,” he wrote on Jan. 23, 2024. 

    But after he raised these concerns to the editor-in-chief, he didn’t see the paper again, he told us. Then, he was informed the authors withdrew it from consideration on March 5 — the day after, it turns out, Applied Sciences published its version.

    In emails we have seen from this April, the reviewer brought the dual submission to the attention of both journals. In response, Rob Macredie, the editor-in-chief for Virtual Reality, noted the authors stated in their cover letter to the journal that the paper was “currently not under review, nor it will [sic] be submitted to another journal while under consideration for Virtual Reality.”

    Later that month, Giulio Cerulo, the editor-in-chief of Applied Sciences, told the reviewer the double submission was a “clear violation of the ethical standards and, in my opinion, it should lead to a retraction of the published article.”

    But when we followed up with Applied Sciences, an MDPI title, they said they were still investigating the paper. Jisuk Kang, the publishing manager at MDPI, said in an email the Committee on Publication Ethics (COPE) “retraction guidelines do not support the retraction of a published article based solely on dual submission, if the integrity of the data remains intact.” Kang noted the journal was still reviewing the paper and if the investigation uncovered misconduct, “further action will be taken as appropriate.”

    COPE considers dual and multiple submissions “unethical practices in academic publishing,” but the organization doesn’t recommend a course of action once the papers are already published. 

    For a similar case submitted to COPE, the members said the organization would “always advocate educational rather than punitive action” and suggested editors publish an editorial “on the ethics of dual submissions.”

    Jin Seung Choi first told us earlier this month the dual submission was “not appropriate,” and he said it was a “simple oversight” by the authors. He also said he would not support the retraction of the paper, as it contains “no plagiarism, misconduct, or issues concerning research originality.”

    “Although it was my mistake, I think it would not have happened if the submission system had been able to recognize in advance that it was under review by another journal,” he told us.

    After we followed up to confirm his affiliation this week, Jin Seung Choi told us he would withdraw the paper after meeting with his co-authors next week. “I do not want the problem to spread any further,” he wrote.

    The reviewer told us he suspected “the authors either misrepresented themselves or acted maliciously” and were familiar with the submission process for journals. 

    The paper has been cited once by a paper in the same journal, according to Clarivate’s Web of Science. 


  • Correction finally issued seven years after authors promise fix ‘as soon as possible’ (May 20, 2025)

    A journal has finally issued a correction following a seven-year-old exchange on PubPeer in which the authors promised to fix issues “as soon as possible.” But after following up with the authors and the journal, it’s still not clear where the delay occurred.

    Neuron published the paper, “Common DISC1 Polymorphisms Disrupt Wnt/GSK3β Signaling and Brain Development,” in 2011. It has been cited 101 times, 28 of which came after concerns were first raised, according to Clarivate’s Web of Science. 

    It first appeared on PubPeer in April 2018, when commenter Epipactis voethii pointed out potential image duplication in figures 2 and 3 of the paper. 

    Shortly after, Li-Huei Tsai, the co-corresponding author and the director of the Picower Institute for Learning and Memory at MIT, responded on PubPeer saying the authors were “currently working with the journal to resolve this issue.”

    That same month, PubPeer commenters debated the merits of the image similarity accusations, and some raised concerns about the statistical analyses, questioning the tests used and validity of p values. 

    The authors were silent until earlier this year, when PubPeer commenter Actinopolyspora biskrensis reignited the discussion with more possible instances of image overlap in the article. In April, Tsai commented yet again that the authors “have found the original data and are working with the Journal to correct the errors.” She did not respond to Actinopolyspora’s request for original images. 

    Tsai forwarded our request for comment — including our request for clarification on why the correction took as long as it did — to a communications director for the Picower Institute, who sent us the text from the May 2025 correction notice. The notice says the “duplication errors” in six figures were “mistakenly,” “erroneously” or “incorrectly” included in the paper. The authors said they “mixed up” images and made several cropping errors, but maintained the errors were made post-analysis. 

    Queen Muse, the head of media for Cell Press, which publishes Neuron, told us the journal has “no comment beyond what’s included in the published correction notice, which outlines the relevant details.”


  • Can a better ID system for authors, reviewers and editors reduce fraud? STM thinks so (May 19, 2025)

    Unverifiable researchers are a harbinger of paper mill activity. While journals have clues to identifying fake personas — lack of professional affiliation, no profile on ORCID or strings of random numbers in email addresses, to name a few — there isn’t a standard template for doing so. 

    The International Association of Scientific, Technical, & Medical Publishers (STM) has taken a stab at developing a framework for journals and institutions to validate researcher identity, with its Research Identity Verification Framework, released in March. The proposal suggests identifying “good” and “bad” actors based on what validated information they can provide, using passport validation when all else fails, and creating a common language in publishing circles to address authorship. 

    But how this will be implemented and standardized remains to be seen. We spoke with Hylke Koers, the chief information officer for STM and one of the architects of the proposal. The questions and answers have been edited for brevity and clarity.

    Retraction Watch: How do the proposals in STM’s framework differ from other identity verification efforts?

    Hylke Koers: Other verification efforts in the academic world tend to focus on the integrity and authenticity of the material, such as the text or images, submitted by researchers. While still important, the growing sophistication of generative AI tools makes this increasingly challenging, calling for the development of new and additional measures tied more directly to the person responsible for the production of the submitted material.  

    Retraction Watch: How will the identity verification system prevent or combat certain kinds of fraud, such as with peer reviewers?

    Hylke Koers: For publishers that choose to implement the framework, here is how it would work: Any user interacting with the publisher’s editorial system would be prompted to complete a verification process to provide evidence of their identity. That process would be tailored and proportionate to the user’s role: author, peer reviewer, editor, guest editor, and so on.

    Such a process makes impersonation or identity theft much more difficult. A concrete example of the kind of identity manipulation that this type of verification could prevent is an author suggesting a peer reviewer by providing an email address that appears to belong to a well-respected, independent researcher but is, in fact, controlled by the author themselves.

    In addition to strengthening defenses to prevent research integrity breaches, the proposed framework could deter individuals from acting fraudulently. And, if they still do so, it improves accountability: having information about someone’s identity beyond an opaque email address makes it easier to identify and hold them accountable for their actions.

    Retraction Watch: What steps would a journal take to verify an author, reviewer or editor’s identity? 

    Hylke Koers: The framework we are putting forward recommends offering a range of options to researchers, rather than insisting on any single method. If a user doesn’t have access to one method, for any reason, they’d be able to use another, or a combination thereof. Recommended options would include:

    • Asking researchers to validate a recognized institutional email address, or to log in through their institution’s identity management system, similar to accessing subscription-based content through federated access and services like Shibboleth or SeamlessAccess.
    • Another would be to have users sign in with ORCID, and use so-called “Trust Markers” stored on their ORCID record. Unlike information researchers can add themselves to their ORCID profile, Trust Markers are claims that have been added to an ORCID profile, with the user’s consent, by institutions, funders, publishers and other trusted organizations. (A sketch of what an automated check along these lines might look like appears after this list.)
    • Official government documents like passports and driver’s licences could be another option. While these don’t offer evidence of academic credibility, they provide stronger verification of individual identity — and a route to accountability — as they do when using many other online services. 
    • Where none of these options is possible, the editorial system could fall back to manual checks, some of which are also being used today. Examples include direct contact with individuals or asking colleagues to vouch for them.

    Researchers could continue to use an opaque email address, such as Gmail, Yahoo, Hotmail, Erols, etc., but such an email address alone would not be enough to verify their identity.
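
    As a rough illustration of the ORCID option above, here is a minimal sketch in Python of how an editorial system might query ORCID's public API for claims asserted by third parties. It assumes the public v3.0 endpoint at pub.orcid.org and the field names of the public record schema; the trust heuristic (treating entries whose source is not the researcher as stronger evidence) is hypothetical and is not part of STM's framework.

        # Sketch only: look for employment entries on an ORCID record that were
        # asserted by a source other than the researcher (a rough proxy for a
        # Trust Marker). Endpoint and field names assume ORCID's public v3.0 API.
        import requests

        def fetch_record(orcid_id):
            """Fetch a public ORCID record as JSON."""
            url = "https://pub.orcid.org/v3.0/" + orcid_id + "/record"
            resp = requests.get(url, headers={"Accept": "application/json"}, timeout=10)
            resp.raise_for_status()
            return resp.json()

        def has_third_party_employment(record, orcid_id):
            """True if any employment entry was asserted by someone other
            than the record holder, e.g. an institution or publisher."""
            groups = (record.get("activities-summary", {})
                            .get("employments", {})
                            .get("affiliation-group", []))
            for group in groups:
                for summary in group.get("summaries", []):
                    source = (summary.get("employment-summary", {})
                                     .get("source", {})
                                     .get("source-orcid") or {})
                    # Entries the user added themselves carry their own iD
                    # as the source; anything else is a third-party claim.
                    if source.get("path") != orcid_id:
                        return True
            return False

        # ORCID's public demo record (Josiah Carberry):
        demo = "0000-0002-1825-0097"
        print(has_third_party_employment(fetch_record(demo), demo))

    In a real deployment such a check would be one signal among several, combined with institutional login or document-based verification as described above.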

    Retraction Watch: As for manual checks, are any publishers now doing this? 

    Hylke Koers: Yes. Some of the members of the group who developed this framework have staff that work directly in this way, manually reviewing researchers by exploring their backgrounds, contacting them and their institutions to verify their identities. They are effectively carrying out these checks, but such a manual approach is of course time-consuming and has limited scale.

    Retraction Watch: Can you define what “level of trust” means in this context? And if journals can decide what that level is, won’t it be difficult to introduce a standard?

    Hylke Koers: “Trust level” is essentially a shorthand for “how confident can we be that this person is who they claim to be, and that the information they’ve provided is genuine?” It reflects the assurance that an editorial system can have that a contributor is not acting fraudulently. One practical measure of that confidence might be “do we know how to hold this person accountable if needed?”

    The appropriate level of required trust is, at its core, a risk assessment that depends on the specific context of the journal, making this a judgment call that publishers or editorial systems will need to make individually. The long-term goal is to create the conditions under which meaningful consensus can emerge.

    Retraction Watch: How realistic is it that researchers are going to provide publishers with pictures of their passports or other government documents?

    Hylke Koers: This challenge isn’t unique to academic publishing. Many other domains — financial services, social platforms, even dating apps — have needed to verify identities without directly handling sensitive documents themselves. The common solution is to use specialist third-party services that perform identity checks independently, and then return a confirmation of trust to the relying party, without exposing the underlying documents.

    We expect that researchers would be reluctant to provide such information directly to publishers or editorial systems and, vice versa, those organisations may not want to take on the burden of managing sensitive personal data. 

    The framework that we are putting forward supports such federated models, where identity assurance can be handled once by a trusted third-party provider, and then re-used across multiple platforms in a standardised, privacy-preserving way.

    Retraction Watch: Paper mills have created elaborate networks of fake and compromised peer reviewers. Will the ideas put forth in this framework actually do any good against them?

    Hylke Koers: This is one of the most important questions we’re looking to answer. Paper mills have found it easy to exploit the current model of weak identity verification, for example by using untraceable email addresses to impersonate legitimate researchers, and using these to manipulate peer review processes. This is where we believe that the proportionate and carefully designed addition of verification could have an effect.

    No framework can completely eliminate fraud — particularly if bad actors gain access to legitimate user accounts or infiltrate institutions or publication systems themselves — but by raising the cost of fraud, reducing opportunities for undetected manipulation, and making accountability more feasible, we think the situation can be improved.

    Retraction Watch: Aries and ScholarOne had previously claimed to have fixed this vulnerability. Are you saying it didn’t work? Why is this still happening?

    Hylke Koers: We’re not in a position to comment on specific examples, but — as the article that you mention here explains very well — narrow technical patches alone don’t eliminate the underlying problem. The fundamental point here is to do with system design, not just the implementation details.

    Rather than closing individual security loopholes (such as insecure password-handling) or relying on individual editorial staff to be on the alert for “red flags,” we advocate for a shift to viewing identity and trust in a coherent and systematic way — which is what the verification framework tries to offer. While non-institutional email addresses are perfectly fine for communication, they cannot be used to make trust decisions.

    Retraction Watch: What is the evidence that this framework could deter individuals from acting fraudulently? 

    Hylke Koers: While the proposed framework is still in development and therefore doesn’t yet have direct empirical evidence of impact, it draws on principles from other domains where identity assurance is used to deter misconduct. A key priority for us now is to gather evidence to support or reject these ideas.

    What we do know is that some of the major integrity breaches that have occurred in recent years involve paper mills exploiting systems where identities were poorly verified and accountability was weak. Logically, fraud is easier in these conditions. Introducing basic forms of identity assurance – such as verified ORCID profiles, institutional affiliations confirmed by a trusted party, or known identity providers – addresses some of the known gaps and reduces the ease of operating under false or misleading identities.

    Ultimately, no system can eliminate fraud entirely. We believe that this framework will make it harder to act fraudulently, will make it harder to do so without being noticed, and will make it easier to trace and respond when breaches occur. 

    Retraction Watch: The framework mentions using ORCID as part of the verification process. As of 2020, only about half of researchers were using ORCID. And even fewer — less than 20% — use Trust Markers. How will this work?

    Hylke Koers: One of our key recommendations is to increase the addition of Trust Markers into ORCID records by publishers, institutions and funders, and thereby to make them more useful as sources of verified claims. Use of ORCID in general, and Trust Markers more specifically, is growing rapidly, and as their use becomes more effective, it’s possible that this growth will accelerate.

    Retraction Watch: Do the verification methods, especially verification by institutional email, exclude independent researchers?  

    Hylke Koers: No. The report is clear in recognizing that legitimate researchers must never be excluded from participation in the editorial process (be it as author, reviewer, or editor). Alternative pathways should always exist to accommodate users who lack access to the specifically defined methods of verification.

    Retraction Watch: Researchers might be hesitant to show things like their passports or may not have access to identification methods like this. What are the alternative pathways? 

    Hylke Koers: This is exactly why we are not proposing a single solution but rather a framework that explicitly acknowledges this and recommends a mixture of verification methods, calibrated by risk, and appropriate to different roles in the publishing process. 

    The goal is not to enforce a narrow set of ID methods, but to ensure that any method used is transparent, auditable, and proportionate to the role and associated risk. For low-risk activities self-assertion may still be acceptable. For higher-risk roles (e.g. acting as an editor or peer reviewer), stronger assurances may be justified, but those assurances don’t necessarily need to come from a passport scan.

    Ultimately, the framework aims to enable inclusion with accountability, not to gatekeep based on institutional privilege. Ongoing work will include testing these alternative pathways and ensuring they are accessible, fair and practical.

    Retraction Watch: To push back on this, from the point of view of an unaffiliated researcher, this could be viewed as making their lives more difficult. If the process takes longer to verify unaffiliated researchers, the actual publication could be held up. And this extra step might discourage individual researchers from pursuing publications.

    Hylke Koers: That’s a very fair concern – and exactly the kind of issue the framework is meant to surface and address early, not accidentally entrench.

    Indeed, if we’re not careful, the introduction of verification steps – however well-intentioned – could introduce new forms of friction, particularly for some groups like unaffiliated researchers. That’s precisely why we’ve recommended a framework rather than a fixed mechanism: so that identity verification can be implemented proportionately, with multiple equivalent routes, and designed to avoid discrimination or delay.

    Retraction Watch: The framework offers a lot of guidelines but lacks an implementation strategy. How will this system create uniformity that’s helpful for both journals and researchers? 

    Hylke Koers: That’s deliberate. The goal isn’t to mandate a one-size-fits-all model, but to provide a shared structure that supports flexibility while enabling interoperability. As mentioned before, it is up to publishers and editorial systems to assess risk levels for their specific contexts and determine which specific verification mechanisms are an appropriate way to address that risk. 

    The key idea is that even partial or selective adoption — as long as it uses the common language and trust concepts defined in the framework — can still improve consistency across the system. For example, if different journals begin to signal what level of trust they require using shared terminology, and trusted identity providers begin to indicate the level of verification they offer (ideally in a machine-readable way), then those elements can start to align, even if different journals implement different policies.

    That said, we do intend to work with early adopters — journals, platforms, and identity providers — to test and refine our assumptions and offer recommendations for practical integration patterns. Over time, we anticipate that these implementation pathways will be made clearer and easier to adopt, and provide an evidence-base for further iterations.



...by browsing this "Glossary of Scholarly Communication Terms"

5-Year Journal Impact Factor

Citations to articles from the most recent five full years, divided by the total number of articles from the most recent five full years. "How much is this journal being cited during the most recent five full years?"
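
Written as a formula, this is a restatement of the definition above (using 2024 as an illustrative JCR year):

    \[ \mathrm{JIF}_5(2024) = \frac{\text{citations in 2024 to items published 2019–2023}}{\text{citable items published 2019–2023}} \]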

Aggregate Cited Half Life

Indicates the turnover rate for a body of work: for a JCR subject category, the median age of the articles cited by all journals in the category during the JCR year.

Altmetrics

Altmetrics go beyond normal citation metrics to include alternative impact measures including downloads, views, blogs and tweets.  Altmetrics expand the community of comment beyond the limits of bibliometrics.

Article Influence

The Eigenfactor score divided by the number of articles published in the journal.  "I know how impactful the journal as a whole is, but what about the average individual article in the journal?"

Article Level Metrics

Impact measures at the article level, e.g. number of citations to a specific article.

Author Identities

Codes that identify the works of an author as distinct from an author with the same or similar name.

Author Impact Factor

The impact of a specific author based on the number of citations over time.  h-index is an example of an author impact factor. See the Research Impact page for more information.

Author Metrics

Google provides its own calculations for an author's h-index, including a number of variations based on its indexed content.

Bibliometrics

In the context of impact factor, measures of citations at the journal and article level.

Cited Half-Life

"The cited half-life is the number of publication years from the current year which account for 50% of current citations received." (Ladwig, P., & Sommese, A. (2015). Using Cited Half-life to Adjust Download Statistics. College and Research Libraries. https://doi.org/10.7274/R03N21B4)

Creative Commons License

A means to retain copyright while proactively granting permission to reuse the work under specific conditions such as attribution. See this website for more information on the various licenses available.

Eigenfactor

Similar to the 5-Year Journal Impact Factor, but weeds out journal self-citations.  It also, unlike the Journal Citation Reports impact factor, cuts across both the hard sciences and the social sciences.

Embargo

For articles, the embargo is the length of time between when the article is first published and when it becomes available through channels other than the publisher. This could mean becoming open access through requirements such as the NIH public access mandate or being available through a content aggregator such as Academic Search Premier.  For dissertations and theses, the embargo is the length of time between when the dissertation is accepted and when it is made available. Authors embargo their dissertations when they hope to publish a revised version as a book or as book chapters.

Fair Use

Specific exemptions to the exclusive rights of the copyright holder.  Fair Use (Section 107) includes common academic activities such as the ability to review, criticize, quote, or make a copy of an article for personal use.

g-index

Proposed by Egghe in 2006 to overcome a bias against highly cited papers inherent in the h-index. The g-index is the "highest number of papers of a scientist that received g² or more citations" (Schreiber).
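
In symbols, with papers sorted by decreasing citation counts c₁ ≥ c₂ ≥ …, the g-index is the largest g whose top g papers together have at least g² citations:

    \[ g = \max\{\, n : \textstyle\sum_{i=1}^{n} c_i \ge n^2 \,\} \]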

h5-index

This metric is based on the articles published by a journal over 5 calendar years. h is the largest number of articles that have each been cited at least h times. A journal with an h5-index of 43 has published, within a 5-year period, 43 articles that each have 43 or more citations.

h-Index

Proposed by J.E. Hirsch in 2005, the h-index is intended to serve as a proxy for the contribution of an individual researcher. The h-index is calculated from the number of publications and the number of citations per publication. See this blog entry for more information on how to calculate it.
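
As a concrete illustration of the calculation, here is a minimal sketch in Python (the citation counts are made-up example data): sort the citation counts in descending order, then find the largest h such that the h-th paper has at least h citations.

    # Minimal sketch: h-index = largest h such that the author has h papers
    # with at least h citations each. Citation counts are hypothetical.
    def h_index(citations):
        ranked = sorted(citations, reverse=True)  # most-cited papers first
        h = 0
        for rank, cites in enumerate(ranked, start=1):
            if cites >= rank:
                h = rank  # the paper at this rank still has >= rank citations
            else:
                break
        return h

    print(h_index([10, 8, 5, 3, 2, 0]))  # prints 3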

i10-index 

Introduced by Google Scholar in 2011, the i10-index counts an author's publications with at least 10 citations.

Immediacy Index

The average number of times a journal article is cited in its first year.  Used to compare journals publishing in emerging fields.

Impact Factor

A measure of how often a journal or specific author is cited. The intent is to assign a number as a proxy for the contribution of a publication or researcher to the field.

Infringement

In the context of copyright, using more of a copyrighted work than is allowed by law.

IPP-Impact per Publication

Also known as RIP (raw impact per publication), the IPP is used to calculate SNIP. IPP is the number of current-year citations to papers from the previous 3 years, divided by the total number of papers published in those 3 previous years.
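
As a formula, restating the definition above with 2024 as an illustrative current year:

    \[ \mathrm{IPP}_{2024} = \frac{\text{citations in 2024 to papers published 2021–2023}}{\text{papers published 2021–2023}} \]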

Journal Cited Half-Life

For the current Journal Citation Reports year, the median age of journal articles cited.  "What is the duration of citation to articles in this journal?"

Journal Immediacy Index

Citations to articles from the current year, divided by the total number of articles from the current year.  "How much is this journal being cited during the current year?"

Journal Impact Factor

Citations to articles from the most recent two full years, divided by the total number of articles from the most recent two full years.  "How much is this journal being cited during the most recent two full years?" See Journal Citation Reports for more information.
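
As a formula, with a worked example using hypothetical numbers (2024 as the JCR year):

    \[ \mathrm{JIF}_{2024} = \frac{\text{citations in 2024 to items published 2022–2023}}{\text{citable items published 2022–2023}} \]

If a journal published 200 articles in 2022 and 250 in 2023, and those 450 articles were cited 900 times in 2024, its 2024 impact factor is 900 / 450 = 2.0.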

Journal Metrics

Google Scholar's journal rankings, which list top publications based on their "five-year h-index and h-median metrics."

License

A license is a contract. Signing a license can mean you are transferring your copyright to a publisher.

Notre Dame Honor Code

 Notre Dame's Honor Code outlines the responsibilities of students and faculty for ethical conduct of teaching and research. The code forbids use of material, without attribution, whether or not it is copyrighted.

Open Access

The ability to read a publication freely without confronting a paywall.

ORCID

Open Researcher and Contributor ID, a researcher identification system not tied to a specific vendor. The ORCID iD is intended to disambiguate author/researcher names across publishers and across all areas of contribution.

Orphan Works

Works still believed to be in copyright for which there is no way to identify or contact the copyright owner, e.g., photographs from a studio no longer in business.

Plagiarism

Presenting someone else's work, ideas or concepts as your own.  Plagiarism is an ethical concept.  Copyright violation is a legal concept.

Public Domain

Works no longer in copyright or never covered by copyright.

ResearcherID

The author identification system supported by Thomson Reuters, now Clarivate Analytics.

Retraction

When an article is withdrawn from a publication, it is retracted. Articles may be retracted for a number of reasons, including plagiarism; self-plagiarism; flawed research methods; ethics issues (especially involving human subjects); or fraudulent data. Retraction Watch gives daily updates on known instances of retractions.

Rights of the copyright holder

The copyright law (17 U.S. Code Section 106) grants copyright holders the right to reproduce the work, prepare derivative works, distribute copies, and perform and display the work.

Self-Citation

Referencing one's own publications. There is nothing wrong with citing one's own research, but it is not considered as meaningful as citations by others.

SJR

The SCImago Journal Rank doesn't consider all citations of equal weight; the prestige of the citing journal is taken into account.

SNIP-Source-Normalized Impact per Paper

SNIP weights citations based on the number of citations in a field. If there are fewer total citations in a research field, then citations are worth more in that field.
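
Schematically, as a simplified restatement (the denominator is what the metric's developers call the citation potential of the journal's subject field):

    \[ \mathrm{SNIP} = \frac{\mathrm{IPP}}{\text{citation potential of the journal's subject field}} \]

The same IPP therefore yields a higher SNIP in a field where citations are scarce.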

SPARC addendum

Publisher agreements may give authors some rights to reuse their works. The SPARC addendum is an addendum to the publisher agreement giving the author specific additional rights to their works, including the ability to make copies available for noncommercial use.

Transformative Work

A fair use under copyright law. Use of a copyrighted work that changes the purpose and intent of the original work.

Work for hire

Works made in the normal course of employment, such as the text of this LibGuide. When a work is created as part of your job, your employer owns the copyright unless both parties have an agreement in place to allow you to retain the copyright.  Notre Dame's Intellectual Property Policy describes the works where the University claims exclusive rights and where it waives that right.  In general, if you write an article or a book the University allows you to keep the copyright, but other intellectual property, such as patents, belongs to Notre Dame.