We implemented different components to animate the faces of virtual humans. All of them derive from ``VHFaceAnimation``, which provides:
* A ``SaveAsAnimSequence(FString AnimName)`` method that converts the animation data given as .csv, .txt, etc. into an Unreal Animation Asset (.uasset)
* Flags (``bUseHeadRotation`` and ``bUseEyeRotations``) that specify whether only blendshapes and the jaw bone or also head and eye movements should be used (not provided in every component, e.g., not in VHOculusLipSync)
* A ``Calibrate()`` method to always use a specific frame from a specific animation as the neutral face, or a ``bResetFaceCalibration`` flag to specify that the frame at time ``TimeForCalibration`` of the current animation is used for calibration, i.e., subtracted from every frame. This is useful when the animation sequence starts with a neutral face that is not detected as entirely neutral due to differences in the tracked faces.
* A ``LoadAnimationFile()`` method that loads an animation with the given implementation and can also be used to preload animations, since loaded animations are cached and reused. Alternatively, you can specify an animation via ``AnimationName``, which, if set, is loaded on ``BeginPlay``.
* General ``PlayFromTime(StartTime [in seconds])``, ``Play()``, ``Stop()``, and ``IsPlaying()`` methods (see the sketch after this list)
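For orientation, here is a minimal C++ sketch of how these members could be used from an owning actor. The include paths, the ``U``-prefixed class name, and the actor class ``AMyVirtualHuman`` are assumptions and not part of the plugin documentation; only the method and flag names come from the list above.

```cpp
#include "MyVirtualHuman.h"   // hypothetical owning actor
#include "VHFaceAnimation.h"  // assumed header path

void AMyVirtualHuman::StartFacePlayback()
{
	// The face animation component is assumed to be attached to this virtual human.
	UVHFaceAnimation* FaceAnim = FindComponentByClass<UVHFaceAnimation>();
	if (!FaceAnim)
	{
		return;
	}

	// Drive head and eyes in addition to blendshapes and the jaw bone
	// (not available in every component, e.g., not in VHOculusLipSync).
	FaceAnim->bUseHeadRotation = true;
	FaceAnim->bUseEyeRotations = true;

	// Preload the animation referenced by AnimationName; loaded animations are cached and reused.
	FaceAnim->LoadAnimationFile();

	// Start 1.5 s into the recording; Play() would start from the beginning.
	FaceAnim->PlayFromTime(1.5f);

	if (!FaceAnim->IsPlaying())
	{
		UE_LOG(LogTemp, Warning, TEXT("Face animation did not start"));
	}
}
```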
The face animation uses a pose asset that maps the different poses provided by, e.g., LiveLink to the blendshapes of the model (:warning: except for VHOpenFaceAnimation, which still uses the hard-coded FACS definitions from VHFacialExpressions; these should be moved to a pose asset as well! :warning:). If you need to create a new pose asset, the names of the poses can be found in ``PluginContent/Characters/FacePoseAssetsResources``. You need to create an animation with curve values for all needed blendshapes, where each frame of the animation is one pose (frame 0 being the first pose). :warning: Avoid bone tracks, and remove them from the animation if present. Then you can click Create Asset - Pose Asset and copy the pose names in there.
A **demo map** can be found in the [Unreal Character Test](https://devhub.vr.rwth-aachen.de/VR-Group/unreal-character-test) project at Maps/FaceAnimation. It implements key events for: 1 (VHLiveLinkFaceAnimation), 2 (VHOpenFaceAnimation), 3 (VHOculusLipSync), and 4 (saving the LiveLinkFace animation into an Unreal Animation Sequence asset). See the ``FaceAnimation/FaceAnimationInput`` blueprint for details.
# Oculus Lip Sync (VHOculusLipSync) #
Use Oculus Lip Sync to animate the face based on an audio file only.
First, the audio file needs to be preprocessed, e.g., with the [OculusLipSyncWAVParser](https://devhub.vr.rwth-aachen.de/VR-Group/oculuslipsyncwavparser/-/tree/master). This generates a visemes.txt file, which is then used as the animation file.
For Character Creator 3 models, use the ``PluginContent/Characters/Henry/OculusLipSyncToCC3`` pose asset (which was created from the ``OculusLipSyncToCC3Anim`` animation sequence).
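As a hedged sketch of how this could be wired up in C++: the class and header names, the ``AnimationName``-based loading, and the audio-sync call are assumptions; only the visemes workflow and the generic playback methods come from this page.

```cpp
#include "Kismet/GameplayStatics.h"
#include "MyVirtualHuman.h"   // hypothetical owning actor
#include "VHOculusLipSync.h"  // assumed header path

void AMyVirtualHuman::SpeakLine(USoundBase* LineAudio)
{
	UVHOculusLipSync* LipSync = FindComponentByClass<UVHOculusLipSync>();
	if (!LipSync || !LineAudio)
	{
		return;
	}

	// Hypothetical visemes file generated by the OculusLipSyncWAVParser and copied into Content.
	LipSync->AnimationName = TEXT("LipSync/MyLine_visemes.txt");
	LipSync->LoadAnimationFile();

	// Start audio and viseme playback together so the lips stay in sync
	// (our suggestion; the plugin does not prescribe how the audio is played).
	UGameplayStatics::SpawnSound2D(this, LineAudio);
	LipSync->Play();
}
```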
# Live Link Face, based on iOS ARKit (VHLiveLinkFaceAnimation) #
Track the face using LiveLinkFace (for more information see https://docs.unrealengine.com/en-US/AnimatingObjects/SkeletalMeshAnimation/FacialRecordingiPhone/index.html). However, we do not connect live to the iPhone but just use the saved .csv file.
Copy that .csv file somewhere into the Content folder and load it.
For Character Creator 3 models, use the ``PluginContent/Characters/Henry/LiveLinkFaceToCC3`` pose asset (which was created from the ``LiveLinkFaceToCC3Anim`` animation sequence).
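Baking a LiveLinkFace recording into an Animation Sequence asset (key event 4 of the demo map) could look roughly like this in C++. The class, header, and file names are assumptions; ``LoadAnimationFile()`` and ``SaveAsAnimSequence()`` are the methods described in the overview above.

```cpp
#include "MyVirtualHuman.h"           // hypothetical owning actor
#include "VHLiveLinkFaceAnimation.h"  // assumed header path

void AMyVirtualHuman::BakeLiveLinkRecording()
{
	UVHLiveLinkFaceAnimation* FaceAnim = FindComponentByClass<UVHLiveLinkFaceAnimation>();
	if (!FaceAnim)
	{
		return;
	}

	// Hypothetical .csv recorded with the LiveLink Face app and copied into Content.
	FaceAnim->AnimationName = TEXT("FaceRecordings/Take01.csv");
	FaceAnim->LoadAnimationFile();

	// Convert the loaded animation data into a .uasset Animation Sequence.
	FaceAnim->SaveAsAnimSequence(TEXT("Take01_FaceAnim"));
}
```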
# Open Face (VHOpenFaceAnimation) #
It is also possible to track a face in a recorded video using OpenFace directly.
To do so:
1) Get OpenFace https://github.com/TadasBaltrusaitis/OpenFace/wiki/Windows-Installation
...
...
Then parse a video using, e.g., OpenFaceOffline.exe (for now, only RecordAUs and RecordPose are supported).
Copy this .csv file over and load it.
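Since OpenFace recordings of different faces rarely start perfectly neutral, the calibration members from the overview above are particularly useful here. A hedged sketch follows; the class, header, and file names are assumptions, while the flags and methods are the ones listed in the overview.

```cpp
#include "MyVirtualHuman.h"       // hypothetical owning actor
#include "VHOpenFaceAnimation.h"  // assumed header path

void AMyVirtualHuman::PlayOpenFaceTake()
{
	UVHOpenFaceAnimation* FaceAnim = FindComponentByClass<UVHOpenFaceAnimation>();
	if (!FaceAnim)
	{
		return;
	}

	// Hypothetical OpenFace output copied into Content.
	FaceAnim->AnimationName = TEXT("OpenFace/Interview01.csv");
	FaceAnim->LoadAnimationFile();

	// Subtract the frame at TimeForCalibration (here the first frame) from every frame
	// so the take starts with a neutral face.
	FaceAnim->bResetFaceCalibration = true;
	FaceAnim->TimeForCalibration = 0.0f;

	FaceAnim->Play();
}
```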
:warning: As of now, OpenFace does not track, among others, AU 22 (lip funneler) and AU 18 (lip pucker).
We filed an issue about this here: https://github.com/TadasBaltrusaitis/OpenFace/issues/936
Apparently it is not that easy to add, so we recommend using LiveLinkFace for now.
:warning: Also, as of writing this, VHOpenFaceAnimation does not use a pose asset yet; this should eventually be implemented.