Freely available software that mimics a specific individual’s voice can produce results convincing enough to fool both people and voice-activated tools such as smart home assistants.
Security researchers are increasingly concerned by deepfake software, which uses artificial intelligence to alter videos or photographs, for example by mapping one person’s face onto another.
Emily Wenger at the University of Chicago and her colleagues wanted to investigate audio versions of these tools, which generate realistic English speech based on a sample of a person’s voice, …