Summary Using AI in Peer Review Is a Breach of Confidentiality – NIH Extramural Nexus nexus.od.nih.gov
2,878 words - html page
One Line
The use of AI in peer review raises concerns about confidentiality, bias, and integrity; the NIH prohibits it, though commenters propose mitigations such as private AI instances or confidentiality clauses in agreements with AI providers.
Key Points
- AI-based peer review has the potential to make the process more efficient, accurate, and impartial.
- The use of AI in peer review would involve a breach of confidentiality and is prohibited by NIH guidelines.
- Reviewers are required to maintain confidentiality throughout the application review process.
- Using AI tools in peer review could lead to termination of a reviewer's service or other consequences.
- Confidentiality is important to ensure scientists feel comfortable sharing their research ideas.
Summaries
63 word summary
The use of AI in peer review raises concerns about confidentiality, bias, and the integrity of the process, and biased AI algorithms can perpetuate inequalities and compromise evaluations. The NIH therefore prohibits AI tools in the review of grant applications, though some commenters suggest alternatives such as private AI instances or confidentiality clauses with providers. Locally hosted models could still support copy-editing and other language-related tasks without breaching confidentiality.
149 word summary
The U.S. Department of Health & Human Services has raised concerns about the use of AI in peer review, citing confidentiality, bias, and the integrity of the process. Maintaining trust and protecting sensitive information are essential in the scientific community. Biased AI algorithms could perpetuate inequalities and lead to unfair evaluations, particularly for underrepresented groups, and introducing AI into peer review could inhibit honest evaluations and compromise confidentiality. The NIH currently prohibits the use of AI tools in analyzing grant applications and contract proposals because of these confidentiality concerns. Some commenters, however, suggest alternatives such as private instances of AI models or confidentiality clauses in legal agreements with AI providers. They also note that concerns about trust, confidentiality, and potential AI fraud can be partly addressed by limiting AI to copy-editing, or by using locally hosted models for language-related tasks, since many models can be run entirely locally without breaching confidentiality.
196 word summary
The use of AI in peer review raises concerns about confidentiality, bias, and the integrity of the process, according to the U.S. Department of Health & Human Services. Maintaining confidentiality is crucial for trust in the scientific community and for protecting sensitive information: peer review is a critical step in ensuring the quality and validity of scientific research, and it works only when reviewers can evaluate candidly. Feeding grant applications into AI tools risks exposing that confidential material, and AI algorithms trained on biased data could also perpetuate inequalities and lead to unfair evaluations, particularly for researchers from underrepresented groups. For these reasons, the NIH prohibits the use of AI tools to analyze or critique grant applications and contract proposals. Some commenters argue that a blanket ban on AI in peer review is unnecessary and suggest alternative approaches such as private instances of AI models or confidentiality clauses in legal agreements with AI providers. They add that concerns about trust, confidentiality, and potential AI fraud can be addressed by restricting AI to copy-editing or by using locally hosted models for language-related tasks, since many AI models can be run locally without breaching confidentiality.
873 word summary
Using AI in peer review is seen by some as a breach of confidentiality, but opinions differ. Some commenters argue that a blanket ban on AI in peer review is unnecessary and that warning reviewers about potential confidentiality violations is sufficient. Limited, careful use of AI chatbots could actually improve the quality of critiques and review discussions, and AI can give reviewers a head start in searching for relevant information, especially for multi-disciplinary applications. Commenters also question how much trust the current process deserves, noting that many human reviewers are themselves biased or have conflicts of interest.

The policy rationale behind the ban is criticized as flawed. While confidentiality must be maintained, a blanket prohibition on AI may not be the best approach: confidentiality issues could be avoided by using private instances of AI models or by adding confidentiality clauses to legal agreements with AI providers. A separate concern is AI fraud, where reviewers use AI tools and pass off the critique as their own work. Policymakers could consider alternative approaches, such as AI analysis followed by a funding lottery among highly rated applications. Since the ultimate goal is to advance knowledge and discovery, prohibiting the use of advanced AI tools may hinder productivity.

Commenters also suggest narrower uses: AI as a copy-editing tool to improve language, and locally hosted AI models for language-related tasks, which never send text off the reviewer's machine. Some services accept text for summarization without incorporating it into their training data, and many AI models can be run locally with no online or non-local component at all, which makes a blanket claim that any AI use breaches confidentiality inaccurate.
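The local-hosting point the commenters raise can be sketched in code. The snippet below assumes an Ollama-style model server listening on localhost (the endpoint URL and model name are assumptions for illustration, not part of the original article); because the request goes only to the reviewer's own machine, the text never reaches an outside provider:

```python
import json
import urllib.request

# Assumed Ollama-style server on the reviewer's own machine; no text
# leaves localhost, so no third party ever sees the submitted material.
LOCAL_ENDPOINT = "http://localhost:11434/api/generate"


def build_request(text: str, model: str = "llama3") -> dict:
    """Build a payload for a language-only task (copy-editing, not critique)."""
    return {
        "model": model,
        "prompt": "Copy-edit the following text for grammar only. "
                  "Do not summarize or evaluate it:\n\n" + text,
        "stream": False,  # ask for a single JSON reply instead of a stream
    }


def copy_edit(text: str) -> str:
    """Send the payload to the locally hosted model and return its reply."""
    req = urllib.request.Request(
        LOCAL_ENDPOINT,
        data=json.dumps(build_request(text)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]
```

This is a minimal sketch of the "run it locally" argument, not an endorsed workflow: it restricts the model to language polishing, which is the narrow use the commenters describe as compatible with confidentiality.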
Using AI in peer review is considered a breach of confidentiality, according to an article published in the Journal of Research Integrity and Peer Review. There are concerns about the impact of using Generative AI on the scholarly peer review system, particularly regarding the confidentiality of the information. Some individuals have raised questions about the use of AI tools like Grammarly and Google Docs, as well as the sharing of information through platforms like Dropbox. The NIH takes this issue seriously and emphasizes the importance of protecting proprietary and confidential ideas shared in grant applications. The use of AI tools in peer review compromises the integrity of the process and may even constitute plagiarism. Maintaining confidentiality is crucial to ensure that scientists feel comfortable sharing their research ideas. Using generative AI tools requires feeding them substantial and detailed information, which raises concerns about where this data is sent, saved, viewed, or used. Violating peer review confidentiality expectations can result in severe consequences, including termination of service, government-wide suspension or debarment, and potential legal actions. The NIH prohibits the use of AI tools in analyzing and critiquing grant applications and contract proposals. Despite the potential benefits of AI in improving the peer-review process, its use is not allowed due to confidentiality concerns.
The use of artificial intelligence (AI) in the peer review process raises concerns about confidentiality, according to the U.S. Department of Health & Human Services. The department highlights that using AI could potentially breach the confidentiality of the peer review process, which is crucial for maintaining the integrity and trustworthiness of scientific research.
The department points out that using generative AI tools requires feeding them substantial, detailed material. When reviewers submit grant applications or proposals to these systems, that confidential information may be sent to, saved by, or used by the AI provider, including as training data for future models, where it could later be exposed to unauthorized individuals. This could compromise the confidentiality of the peer review process and undermine trust in the scientific community.
Furthermore, the department emphasizes that peer review is a critical step in ensuring the quality and validity of scientific research: experts in the field evaluate the merits of a research proposal or manuscript before it is published or funded. The confidentiality of this process lets reviewers provide frank feedback and critique without fear of reprisal or bias. Introducing AI into the process could inhibit honest and open evaluations, as reviewers may be concerned about their comments being fed into outside systems and analyzed by algorithms.
The department also raises concerns about the potential for bias in AI algorithms used in peer review. AI algorithms are trained on existing data, which may already contain biases and inequalities. If these biases are not adequately addressed, they could be perpetuated and amplified by AI algorithms, leading to unfair evaluations and decisions in the peer review process. This could have serious implications for researchers from underrepresented groups who already face challenges in getting their work recognized and funded.
In conclusion, while AI has the potential to enhance various aspects of scientific research, its use in peer review raises significant concerns about confidentiality, bias, and the integrity of the process. The U.S. Department of Health & Human Services highlights that maintaining the confidentiality of peer review is crucial for ensuring trust in the scientific community and protecting sensitive information. Any implementation of AI in peer review must carefully address these concerns and ensure that the process remains fair, unbiased, and confidential.