I have been pleasantly surprised by everything about the HoloLens experiment, and last week's keynote at the Microsoft Ready conference was no exception. Using its Mixed Reality Capture Studios and Azure AI, Microsoft created a hologram of a person speaking in another language with the same voice, same tones, same inflections, and the same mannerisms … but translated to Japanese.
This demo used Neural Text-to-Speech (Neural TTS), an AI technology that takes recordings of your voice and creates a unique voice signature, which can then be used to synthesize speech in other languages. The system uses deep neural networks to overcome the limits of traditional text-to-speech in matching the patterns of stress and intonation of spoken language, producing a more fluid and natural-sounding voice.
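The voice-signature training pipeline behind the demo is not public, but the request side of neural TTS is: Azure's speech service accepts SSML that names a neural voice. A minimal sketch of building such a request, assuming a stock Japanese neural voice (`ja-JP-NanamiNeural`); a demo like the keynote's would instead name a custom voice trained on the speaker's own recordings:

```python
# A minimal sketch of the SSML a neural TTS request might use. The
# stock voice name below is illustrative; a custom voice signature
# would substitute its own (hypothetical) voice name here.

def build_ssml(text: str, voice: str = "ja-JP-NanamiNeural",
               lang: str = "ja-JP") -> str:
    """Build an SSML document asking the speech service for a neural voice."""
    return (
        f'<speak version="1.0" '
        f'xmlns="http://www.w3.org/2001/10/synthesis" '
        f'xml:lang="{lang}">'
        f'<voice name="{voice}">{text}</voice>'
        f"</speak>"
    )

# The resulting SSML would then be sent to the text-to-speech endpoint
# via the Speech SDK or a plain HTTPS request (subscription key required).
ssml = build_ssml("こんにちは、東京のみなさん。")
print(ssml)
```

The key point is that the voice, language, and prosody are all driven by the markup, so translating the text and swapping the `voice` element is enough to re-voice the same speaker in another language.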
FYI: Just in case it is not obvious, I do not speak for HoloLens or the mixed reality teams.