These should be capable of creating *.wav files for you, which we use for lip-syncing.
This plugin provides different options for animating the face (see [FaceAnimation](Components/FaceAnimation)). Of these, Oculus Lip Sync works best if you do not have any other tracking data and only the TTS *.wav files.
Each audio file needs to be preprocessed, e.g., with the [OculusLipSyncWAVParser](https://devhub.vr.rwth-aachen.de/VR-Group/oculuslipsyncwavparser/-/tree/master). This generates a visemes.txt file, which is then used to animate the face. For more information about its usage, please check out the [README](https://devhub.vr.rwth-aachen.de/VR-Group/oculuslipsyncwavparser/-/blob/master/README.md).
After the preprocessing, go back to your Unreal project. Choose the character that is supposed to be conversational and add a VHOculusLipSync component to it. If it is a Character Creator 3 model, use the ``PluginContent/Characters/Henry/OculusLipSyncToCC3`` pose asset and the generated visemes.txt file as the animation file.
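If you prefer setting this up in C++ instead of the editor, the following is a minimal sketch. It assumes the plugin exposes the component as a ``UVHOculusLipSync`` class; the exact class name, header path, and property names are assumptions based on the editor workflow above, so check the plugin sources before using it.

```cpp
// ConversationalCharacter.h -- sketch of a character that carries the lip-sync component.
#pragma once

#include "CoreMinimal.h"
#include "GameFramework/Character.h"
#include "ConversationalCharacter.generated.h"

class UVHOculusLipSync; // assumed C++ name of the VHOculusLipSync component

UCLASS()
class AConversationalCharacter : public ACharacter
{
	GENERATED_BODY()

public:
	AConversationalCharacter();

protected:
	// Lip-sync component; the pose asset (e.g. OculusLipSyncToCC3) and the
	// visemes.txt animation file are assigned in the Details panel as described above.
	UPROPERTY(VisibleAnywhere, BlueprintReadOnly, Category = "VirtualHuman")
	UVHOculusLipSync* LipSync;
};
```

```cpp
// ConversationalCharacter.cpp
#include "ConversationalCharacter.h"
#include "VHOculusLipSync.h" // assumed header name of the plugin component

AConversationalCharacter::AConversationalCharacter()
{
	// Create the component in the constructor; configuration stays in the editor.
	LipSync = CreateDefaultSubobject<UVHOculusLipSync>(TEXT("VHOculusLipSync"));
}
```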
# Audio
You are free to use any Unreal audio plugin. However, for usage in the CAVE, we propose the [VirtualAcoustics Plugin](https://devhub.vr.rwth-aachen.de/VR-Group/unreal-development/unreal-va-plugin). You just have to add a VASource component to your character and specify the audio file relative to the [VAServer](https://devhub.vr.rwth-aachen.de/VR-Group/vaserver)'s Data folder. For the source to move with the head, set the ``Position Setting`` to ``Attached to Bone`` and the bone name for CC3 characters to ``head``.
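The same setup can be sketched in C++. This is only an illustration, assuming the VA plugin's source component is called ``UVASourceComponent``; the editor settings from the text are listed as comments because their C++ property names are not documented here.

```cpp
#include "GameFramework/Character.h"
#include "VASourceComponent.h" // assumed header name from the VA plugin

// Attach a VirtualAcoustics source to an already spawned character at runtime
// (adding the component in the editor works just as well).
static void AddVoiceSource(ACharacter* Character)
{
	UVASourceComponent* Voice = NewObject<UVASourceComponent>(Character, TEXT("VASource"));
	Voice->RegisterComponent();

	// Configure in the Details panel (labels as shown in the editor):
	//   Audio file       -> *.wav path relative to the VAServer's Data folder
	//   Position Setting -> Attached to Bone
	//   Bone Name        -> head  (for Character Creator 3 characters)
}
```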