These should be capable of creating .wav files for you, which we then use for lip sync.
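As a minimal sketch only (the choice of TTS system is yours; ``pyttsx3`` here is an assumption for illustration, not a tool recommended by this plugin), generating a .wav file offline could look like this:

```python
# Minimal sketch: generate a .wav file from text with an offline TTS engine.
# pyttsx3 is only an example; any TTS that writes .wav files works here.
import pyttsx3

engine = pyttsx3.init()
engine.setProperty("rate", 170)  # speaking rate in words per minute
# Queue the utterance to be written to a .wav file instead of the speakers.
engine.save_to_file("Hello, I am your virtual human.", "greeting.wav")
engine.runAndWait()  # process the queued command and write greeting.wav
```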
This plugin provides different options for animating the face (see [FaceAnimation](Components/FaceAnimation)). Of these, Oculus Lip Sync works best if you have no other tracking data and only the TTS .wav files.
Each audio file needs to be preprocessed, e.g., with the [OculusLipSyncWAVParser](https://devhub.vr.rwth-aachen.de/VR-Group/oculuslipsyncwavparser/-/tree/master). This generates a ``visemes.txt`` file, which is then used to animate the face. For more information about its usage, check out the [README](https://devhub.vr.rwth-aachen.de/VR-Group/oculuslipsyncwavparser/-/blob/master/README.md).
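The parser's actual command-line interface is documented in its README; the invocation below is an assumed placeholder. A small batch script can still illustrate the idea of preprocessing every .wav file in one go:

```python
# Sketch: run the OculusLipSyncWAVParser on every .wav file in a folder.
# The parser's executable name and arguments are assumptions here;
# see its README for the real invocation.
import subprocess
from pathlib import Path

AUDIO_DIR = Path("Audio")            # folder containing the TTS .wav files
PARSER = "OculusLipSyncWAVParser"    # assumed executable name (check the README)

for wav in sorted(AUDIO_DIR.glob("*.wav")):
    # Assumed CLI: parser <input.wav>, producing a visemes .txt next to the input.
    subprocess.run([PARSER, str(wav)], check=True)
    print(f"Processed {wav.name} -> {wav.with_suffix('.txt').name}")
```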
Add a VHOculusLipSync component to your virtual human. For Character Creator 3 models, use the ``PluginContent/Characters/Henry/OculusLipSyncToCC3`` pose asset and the generated txt file as the animation file.