Istri.Uk

This tool can protect your voice from deepfakes – Futura

March 23, 2025 by istri

To combat misinformation, researchers have developed AntiFake, a tool that prevents your voice from being used for deepfakes. By adding noise to audio recordings before publishing them online, the tool prevents them from being exploited by artificial intelligence.


Just as ugly sweaters can fool facial recognition, researchers at Washington University in St. Louis have developed a system that protects voices against deepfakes generated by speech synthesis. Beyond making it seem that someone said something they never did, audio deepfakes can also be used in phone scams. The system, called AntiFake, takes its inspiration from the adversarial attacks cybercriminals use against artificial intelligence.

The tool is a filter that adds noise to an audio clip after it is recorded but before it is published online, an approach reminiscent of the one MIT developed to protect photos. While the voice remains fully intelligible to human listeners, any deepfake created from an AntiFake-protected recording is easy to identify as fake.

Effective protection against most speech synthesizers

“We slightly alter the recorded audio signal, distorting or perturbing it just enough that it still sounds authentic to human listeners, but to an AI it is completely different,” explains Ning Zhang, one of the project’s creators. The authors provide sample audio clips before and after applying the AntiFake filter on the project page. Although this first version is promising, it appears to have some limitations: the protected clips sound as if they were recorded on a low-end microphone in a bathroom next to a running faucet…
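The idea of perturbing audio so that it stays intelligible to humans while degrading what an AI can extract from it can be illustrated with a toy sketch. To be clear, AntiFake's real perturbation is adversarially optimized against speech-synthesis models; the snippet below (names and the target-SNR parameter are illustrative assumptions, not the project's API) only shows the simpler baseline of mixing in low-amplitude noise at a chosen signal-to-noise ratio:

```python
import numpy as np

def add_noise_at_snr(audio: np.ndarray, snr_db: float, rng=None) -> np.ndarray:
    """Mix white noise into a waveform at a target signal-to-noise ratio (dB).

    Toy illustration only: AntiFake's actual filter computes an adversarial
    perturbation tailored to fool speech synthesizers, not random noise.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    signal_power = np.mean(audio ** 2)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    noise = rng.normal(0.0, np.sqrt(noise_power), size=audio.shape)
    return audio + noise

# Demo on a synthetic 440 Hz tone standing in for one second of speech
# sampled at 16 kHz.
t = np.linspace(0.0, 1.0, 16000, endpoint=False)
clean = 0.5 * np.sin(2 * np.pi * 440.0 * t)
protected = add_noise_at_snr(clean, snr_db=30.0)
```

At 30 dB SNR the added noise carries only about a thousandth of the signal's power, which is why such a clip can remain perfectly understandable to a person even though the waveform an AI model sees has changed everywhere.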

The researchers successfully tested their system against five of the most advanced speech synthesizers. For now, AntiFake can protect only short clips, but the researchers believe the tool could be extended to longer recordings, or even to music. Still, it may only be a matter of time before AI can bypass this type of protection. The source code is available on the project’s GitHub page.

Originally posted 2023-11-30 06:18:27.

Posted in: Technology Tagged: deepfakes, Futura, protect, tool, Voice


Copyright © 2026 Istri.Uk.
