Google’s AI can now translate your speech while keeping your voice

Listen to this Spanish audio clip.
This is how its English translation might sound when put through a traditional automated translation system.
Now this is how it sounds when put through Google’s new automated translation system.
The results aren’t perfect, but you can sort of hear how Google’s translator was able to retain the voice and tone of the original speaker. It can do this because it converts audio input directly to audio output without any intermediary steps. In contrast, traditional translation systems convert audio into text, translate the text, and then resynthesize the audio, losing the characteristics of the original voice along the way.
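To make that contrast concrete, here is a toy Python sketch of the traditional cascade. The three functions are hypothetical stand-ins, not any real system’s API; the point is the middle step, where the audio is flattened into text and the speaker’s pitch, pacing, and tone are discarded for good.

```python
# A toy sketch of the traditional cascade pipeline. The three functions
# below are hypothetical stubs, not any real system's API; the point is
# that the text in the middle carries none of the speaker's voice.
def recognize_speech(spanish_audio: bytes) -> str:
    """Speech recognition: audio in, plain text out (stub)."""
    return "hola, ¿cómo estás?"

def translate_text(spanish_text: str) -> str:
    """Text-to-text machine translation (stub)."""
    return "hello, how are you?"

def synthesize_speech(english_text: str) -> bytes:
    """Text-to-speech in a generic synthetic voice (stub)."""
    return b"<synthetic waveform>"

def cascade_translate(spanish_audio: bytes) -> bytes:
    text = recognize_speech(spanish_audio)    # pitch, pacing, tone are lost here
    translated = translate_text(text)
    return synthesize_speech(translated)      # resynthesized in a stock voice
```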
The new system, dubbed the Translatotron, has three components, all of which look at the speaker’s audio spectrogram—a visual snapshot of the frequencies used when the sound is playing, often called a voiceprint. The first component uses a neural network trained to map the audio spectrogram in the input language to the audio spectrogram in the output language. The second converts the spectrogram into an audio wave that can be played. The third component can then layer the original speaker’s vocal characteristics back into the final audio output.
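For readers curious how those three pieces might fit together, here is a minimal PyTorch sketch of a direct spectrogram-to-spectrogram pipeline in the same spirit. Everything in it (the layer sizes, the LSTM networks, the Griffin-Lim vocoder from torchaudio, and the way the speaker embedding is fed into the decoder) is an illustrative assumption, not Google’s actual implementation.

```python
# A minimal sketch of a direct spectrogram-to-spectrogram pipeline in the
# spirit of Translatotron. All sizes, layer choices, and the Griffin-Lim
# vocoder are illustrative assumptions, not Google's implementation.
import torch
import torch.nn as nn
import torchaudio

N_MELS = 80        # mel-frequency bins per spectrogram frame (assumed)
SPEAKER_DIM = 256  # size of the speaker-voice embedding (assumed)


class SpeakerEncoder(nn.Module):
    """Third component: summarize the original speaker's voice as one vector."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.LSTM(N_MELS, SPEAKER_DIM, batch_first=True)

    def forward(self, src_spec):                 # (batch, time, N_MELS)
        _, (hidden, _) = self.rnn(src_spec)
        return hidden[-1]                        # (batch, SPEAKER_DIM)


class SpectrogramTranslator(nn.Module):
    """First component: map source-language spectrogram frames directly to
    target-language spectrogram frames, conditioned on the speaker vector."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.LSTM(N_MELS, 512, num_layers=2,
                               batch_first=True, bidirectional=True)
        self.decoder = nn.LSTM(1024 + SPEAKER_DIM, 512, num_layers=2,
                               batch_first=True)
        self.to_spec = nn.Linear(512, N_MELS)

    def forward(self, src_spec, speaker_vec):
        encoded, _ = self.encoder(src_spec)
        # Feed the speaker embedding into every decoder step so the output
        # spectrogram keeps the original voice characteristics.
        spk = speaker_vec.unsqueeze(1).expand(-1, encoded.size(1), -1)
        decoded, _ = self.decoder(torch.cat([encoded, spk], dim=-1))
        return self.to_spec(decoded)             # (batch, time, N_MELS)


def vocoder(mel_spec):
    """Second component: turn the predicted spectrogram into playable audio.
    Griffin-Lim stands in here; a neural vocoder would sound far better."""
    to_linear = torchaudio.transforms.InverseMelScale(n_stft=201, n_mels=N_MELS)
    griffin_lim = torchaudio.transforms.GriffinLim(n_fft=400)
    return griffin_lim(to_linear(mel_spec.transpose(1, 2)))


if __name__ == "__main__":
    src = torch.randn(1, 120, N_MELS)            # stand-in "Spanish" spectrogram
    speaker_vec = SpeakerEncoder()(src)
    target_spec = SpectrogramTranslator()(src, speaker_vec)
    waveform = vocoder(target_spec)              # (1, samples) of audio
    print(waveform.shape)
```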
Not only does this approach produce more nuanced translations by retaining important nonverbal cues, but in theory it should also minimize translation error, because it reduces the task to fewer steps.
Translatotron is currently a proof of concept. During testing, the researchers trialed the system only on Spanish-to-English translation, which already required a large amount of carefully curated training data. But audio outputs like the clip above demonstrate the potential for a commercial system further down the line. You can listen to more of them here.