# TTS
If you want to have a speaking agent, you first need the speech audio. If you do not already have *.wav files for the speech, you can generate them using Text-To-Speech (TTS); two options are [Google TTS](https://cloud.google.com/text-to-speech) and [IBM Watson TTS](https://www.ibm.com/cloud/watson-text-to-speech).
Both are free for a limited number of words, which should be enough for most applications; however, they require you to register (for free) for the service. Python interfaces work well for both services:
* [Python Documentation for Google Cloud TTS](https://cloud.google.com/text-to-speech/docs/libraries)
* [Python Documentation for IBM Watson TTS](https://cloud.ibm.com/apidocs/text-to-speech?code=python)
These services can create *.wav files for you, which we then use for lip-syncing.
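As a starting point, here is a minimal sketch following the Google Cloud TTS Python quickstart. It assumes you have installed the client library (``pip install google-cloud-texttospeech``) and set up authentication for your account; the spoken text, voice choice, and output file name are placeholders. Requesting ``LINEAR16`` encoding yields uncompressed PCM with a WAV header, i.e., a *.wav file:

```python
# Minimal sketch: synthesize one line of agent speech to a *.wav file.
# Assumes GOOGLE_APPLICATION_CREDENTIALS points to your service account key.
from google.cloud import texttospeech

client = texttospeech.TextToSpeechClient()

# The text the agent should speak (placeholder).
synthesis_input = texttospeech.SynthesisInput(text="Hello, I am your virtual agent.")

# Pick a voice; language code and gender are up to you.
voice = texttospeech.VoiceSelectionParams(
    language_code="en-US",
    ssml_gender=texttospeech.SsmlVoiceGender.FEMALE,
)

# LINEAR16 returns PCM audio with a WAV header.
audio_config = texttospeech.AudioConfig(
    audio_encoding=texttospeech.AudioEncoding.LINEAR16
)

response = client.synthesize_speech(
    input=synthesis_input, voice=voice, audio_config=audio_config
)

# Write the synthesized speech to disk (placeholder file name).
with open("agent_line.wav", "wb") as out:
    out.write(response.audio_content)
```

The IBM Watson TTS Python SDK works analogously; see its documentation linked above for the equivalent calls.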
# Lip Sync
This plugin provides different options for animating the face (see [FaceAnimation](Components/FaceAnimation)). Of these, Oculus Lip Sync works best if you do not have any other tracking data and only the TTS *.wav files.
Each audio file needs to be preprocessed, e.g., with the [OculusLipSyncWAVParser](https://devhub.vr.rwth-aachen.de/VR-Group/oculuslipsyncwavparser/-/tree/master). This generates a visemes.txt file which is then used to animate the face. For more information about its usage, check out the [README](https://devhub.vr.rwth-aachen.de/VR-Group/oculuslipsyncwavparser/-/blob/master/README.md).
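If you have many audio files, a small batch script saves time. The sketch below is hypothetical: the parser's executable name and argument style are assumptions for illustration, so check the README above for the actual invocation:

```python
# Hypothetical batch script: run the OculusLipSyncWAVParser over all *.wav files
# in a folder. Executable name and arguments are assumptions; see the README.
import subprocess
from pathlib import Path

PARSER = Path("OculusLipSyncWAVParser.exe")  # assumed executable name
AUDIO_DIR = Path("Audio")                    # folder containing your TTS *.wav files

for wav in sorted(AUDIO_DIR.glob("*.wav")):
    # Assumed: the parser takes the input wav and writes the visemes.txt next to it.
    subprocess.run([str(PARSER), str(wav)], check=True)
    print(f"Processed {wav.name}")
```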
Add a VHOculusLipSync component to your virtual human. For Character Creator 3 models, use the ``PluginContent/Characters/Henry/OculusLipSyncToCC3`` pose asset and the generated visemes.txt file as the animation file.
# Audio
You are free to use any Unreal audio plugin. However, for usage in the CAVE, we propose the [VirtualAcoustics Plugin](https://devhub.vr.rwth-aachen.de/VR-Group/unreal-development/unreal-va-plugin). You just have to add a VASource component to your character and specify the audio file relative to the [VAServer](https://devhub.vr.rwth-aachen.de/VR-Group/vaserver)'s Data folder. For the source to move with the head, set the ``Position Setting`` to ``Attached to Bone`` and, for CC3 characters, the bone name to ``head``.