Facial Animation Workflow Questions
Nov 10 2020
I am developing a facial animation pipeline for my PhD project, which studies the neural mechanisms underlying how the brain perceives social interactions, with a focus on facial expressions. So far, we have recorded some facial animations which our model can learn and later regenerate, giving us full control over the output (important for the reproducibility of our scientific results). We also introduced style variables into the model, which let us control the strength of expressions and blend them together. In other words, once the model is trained on a set of facial expression prototypes, we can generate new expressions from specific parameters and still tweak and control the output, since the blendshapes and the expression dynamics are learned jointly.
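To make the kind of control described above concrete, here is a minimal sketch of a linear blendshape rig with a strength parameter and expression blending. This is only an illustration under simple assumptions, not the actual learned model: the function names, the linear mixing, and the toy data are all hypothetical, and the real pipeline learns the blendshapes and dynamics rather than hand-specifying them.

```python
import numpy as np

def blend_expression(neutral, deltas, weights, strength=1.0):
    """Hypothetical linear blendshape evaluation:
    mesh = neutral + strength * sum_i weights[i] * deltas[i]."""
    weights = np.asarray(weights, dtype=float)
    # tensordot contracts the weight vector against the stack of deltas
    return neutral + strength * np.tensordot(weights, deltas, axes=1)

def merge_expressions(w_a, w_b, alpha):
    """Linearly blend two expression weight vectors (alpha in [0, 1])."""
    return (1.0 - alpha) * np.asarray(w_a, dtype=float) \
        + alpha * np.asarray(w_b, dtype=float)

# Toy data: a 4-vertex mesh in 3D with two blendshape deltas
# (think "smile" and "brow raise" -- purely illustrative values).
neutral = np.zeros((4, 3))
deltas = np.stack([np.ones((4, 3)), -0.5 * np.ones((4, 3))])

smile = [1.0, 0.0]
frown = [0.0, 1.0]
mixed = merge_expressions(smile, frown, alpha=0.5)   # [0.5, 0.5]
mesh = blend_expression(neutral, deltas, mixed, strength=0.8)
```

In this toy setup, `strength` plays the role of our style variable for expression intensity, and `alpha` merges two prototype expressions; the real model exposes analogous parameters but learns the underlying shapes from data.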
Therefore, we think this capability may be interesting for other labs as well, since many researchers work only with static images to study the mechanisms of facial perception, so the dynamic aspect of an expression remains only weakly studied. Being a neuro-/computer-science student, I would like to know whether any experienced artists familiar with facial animation workflows would be willing to share their experiences with me. My questions mainly concern your steps, solutions, technology, and so on, to understand what could be improved at our scale and with our resources, and whether we should try to release a workable algorithm outside our project. Of course, we do not aim to outperform a high-end computer game built around a specific model and actor; the goal is to give researchers an easy way to generate new, controllable, realistic expressions with minimal effort.
If you are interested, please contact me at email@example.com; I would be very glad to discuss it further.