We implemented different components to animate the face of virtual humans, all de…
* A ``bResetFaceCalibration`` flag, to specify whether the frame at time ``TimeForCalibration`` will be used for calibration, i.e., subtracted from every frame. This can be used when the animation sequence starts with a neutral face that is not detected as entirely neutral due to differences in the tracked faces.
* Provides general ``PlayFromTime(StartTime [in seconds])``, ``Play()``, ``Stop()`` and ``IsPlaying()`` functions (see the usage sketch after this list).
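
As a rough illustration, the following sketch shows how the calibration flag and the playback functions might be called from C++. The free function and the component class name ``UFaceAnimationComponent`` are assumptions made for this example only; ``bResetFaceCalibration``, ``TimeForCalibration`` and the playback functions are the ones listed above.

```cpp
#include "CoreMinimal.h"

// Hedged usage sketch: UFaceAnimationComponent is an assumed name for the
// face-animation component described above; check the plugin for the actual
// type. Only the flag, the calibration time and the playback functions from
// this section are taken from the documentation.
void StartFaceAnimation(UFaceAnimationComponent* FaceAnim)
{
	if (FaceAnim == nullptr)
	{
		return;
	}

	// Use the frame at TimeForCalibration as the neutral face; it is then
	// subtracted from every frame of the animation.
	FaceAnim->bResetFaceCalibration = true;
	FaceAnim->TimeForCalibration = 0.0f;

	// Start playback two seconds into the recorded sequence.
	FaceAnim->PlayFromTime(2.0f);

	if (FaceAnim->IsPlaying())
	{
		UE_LOG(LogTemp, Log, TEXT("Face animation started."));
	}
}
```
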
A **demo map** can be found in [Unreal Character Test](https://devhub.vr.rwth-aachen.de/VR-Group/unreal-character-test), which is called …
# Oculus Lip Sync (VHOculusLipSync) #
Use Oculus Lip Sync to animate the face based on an audio file only.
First, the audio file needs to be preprocessed, e.g., with the [OculusLipSyncWAVParser](https://devhub.vr.rwth-aachen.de/VR-Group/oculuslipsyncwavparser/-/tree/master). This generates a visemes.txt file, which can be copied into the project content folder and loaded with ``ReadInVisemesFromTxt()``. This also loads a mapping from the Oculus visemes to the Character Creator 3 blendshapes, which can be specified via ``MorphTargetMappingFile = "LipSync/CC3OculusLipSyncMapping.csv"`` and is shipped with the plugin (so normally leave that as is). If you want to change it or use another character setup, take a look at ``ULipSyncDataHandlingHelper`` to see how to create such a mapping.
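
As a rough sketch (not verified against the plugin headers), loading the preprocessed visemes from C++ could look like the following. The component class name ``UVHOculusLipSync``, the parameter of ``ReadInVisemesFromTxt()`` and the example file name are assumptions; only ``ReadInVisemesFromTxt()`` and ``MorphTargetMappingFile`` themselves are taken from this page.

```cpp
#include "CoreMinimal.h"

// Hedged sketch: UVHOculusLipSync is an assumed class name derived from the
// section title; the path parameter of ReadInVisemesFromTxt() and the example
// visemes file name are assumptions as well.
void SetupLipSync(UVHOculusLipSync* LipSync)
{
	if (LipSync == nullptr)
	{
		return;
	}

	// Mapping from Oculus visemes to Character Creator 3 blendshapes;
	// ships with the plugin, so normally leave it as is.
	LipSync->MorphTargetMappingFile = TEXT("LipSync/CC3OculusLipSyncMapping.csv");

	// Load the visemes .txt generated by the OculusLipSyncWAVParser and
	// copied into the project content folder.
	LipSync->ReadInVisemesFromTxt(TEXT("LipSync/MySentence_visemes.txt"));
}
```
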
# Live Link Face, based on iOS ARKit (VHLiveLinkFaceAnimation) #