Istri.Uk


AI: To protect scientific research from the clutches of AI, experts suggest a “simple” solution

February 9, 2025 by istri


The quality of future scientific research could deteriorate as generative AI becomes more widespread. That, at least, is the view of some researchers, who point to the risks these technologies carry, above all the errors they still frequently produce. Researchers at the University of Oxford, however, propose a solution: using LLMs (large language models) as “zero-shot translators.” In their view, this method could enable the safe and effective use of AI in scientific research.

In an article published in the journal Nature Human Behaviour, researchers at the University of Oxford raise concerns about the use of large language models (LLMs) in scientific research.

These models can generate erroneous answers, which can reduce the reliability of studies and even spread false information by producing incorrect study data. Furthermore, science has always been described as an intrinsically human activity, built on curiosity, critical thinking, the creation of new ideas and hypotheses, and the creative combination of knowledge. The prospect of all these human aspects being “delegated” to machines worries scientific communities.

The Eliza Effect and Overreliance on AI

Oxford scientists cite two main reasons why users come to over-rely on language models in scientific research. The first is the tendency to attribute human qualities to generative AI. This recurring phenomenon, called the “Eliza effect,” leads users to subconsciously view these systems as understanding and empathetic, even wise.

The second is that users may place blind trust in the information these models provide. Yet despite recent advances, AIs still produce incorrect data and offer no guarantee that their answers are accurate.

Additionally, the study’s authors note, LLMs often provide answers that sound convincing whether they are true, false, or merely inaccurate. For some questions, an AI will give an incorrect answer rather than respond “I don’t know,” because it has been trained to please users and, above all, to predict a plausible continuation of words in response to a query.

All of this obviously calls into question the very usefulness of generative AI in research, where the accuracy and reliability of information is crucial. “Our tendency to anthropomorphize machines and trust models as if they were human-like soothsayers, thereby consuming and disseminating the bad information they produce, is particularly worrying for the future of science,” the researchers write in their paper.

“Zero-shot” translation as a solution to the problem?

The researchers, however, suggest another, safer way to incorporate AI into scientific research: “zero-shot translation.” With this approach, the AI operates on incoming data that is already considered reliable.

In this case, instead of generating new or creative answers, the AI focuses on analyzing and reorganizing that information. Its role is limited to manipulating the data without introducing anything new.

With this approach, the system is no longer used as a vast repository of knowledge, but as a tool for manipulating and reorganizing a specific, reliable data set in order to learn from it. Unlike the usual use of LLMs, however, this technique requires a deeper understanding of AI tools and their capabilities, and, depending on the application, of programming languages such as Python.
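To make the idea concrete, here is a minimal sketch of what such a "zero-shot translation" workflow could look like in Python. The article does not specify an implementation, so everything here is illustrative: `complete` is a hypothetical stand-in for whatever LLM API is being used, and the prompt wording is an assumption. The point is the structure, namely that the model receives a trusted source text and an instruction to reorganize it only, never to add information of its own.

```python
# Illustrative sketch of "zero-shot translation" with an LLM.
# The model is confined to restructuring trusted input data; it is
# not asked to contribute any knowledge of its own.
# NOTE: `complete` below is a hypothetical stand-in for any LLM
# completion function (e.g. a call to a hosted API); the prompt
# wording is an assumption, not taken from the paper.

def build_zero_shot_prompt(trusted_text: str, target_form: str) -> str:
    """Build a prompt that confines the model to reorganizing the input."""
    return (
        f"Rewrite the text between the <data> tags as {target_form}. "
        "Use ONLY information present in the text; do not add, infer, "
        "or embellish anything.\n"
        f"<data>\n{trusted_text}\n</data>"
    )

def zero_shot_translate(complete, trusted_text: str, target_form: str) -> str:
    """Pass already-reliable data through the model as a pure format converter."""
    return complete(build_zero_shot_prompt(trusted_text, target_form))
```

In this framing the human still supplies (and vouches for) the facts; the LLM only changes their form, for example turning a verified results paragraph into a table or a bulleted summary, which is why the researchers consider it a safer mode of use.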

For a better understanding, we asked one of the researchers directly to explain the principle in more detail. According to him, using LLMs to convert precise information from one form to another, without special training for the task, offers two main advantages:

Source: Nature Human Behaviour

Originally posted 2023-11-27 20:44:28.



Copyright © 2026 Istri.Uk.
