Hey, I'm Jonathan 'Jpe' Elliott.
I'm currently working on a mod called Opposing Force 2, which is also on Mod DB. I've been working with the Source engine since it came out and would consider myself nifty with it, though I am best at Hammer. Feel free to ask me any questions.
Last year I attended Bradford College, where I completed the BTEC in Games Development and achieved DDD (three Distinctions). I am currently studying Computer Games Programming & AI Development at Huddersfield University.
Welcome to my short introduction to voice syncing in the Source engine. I will give you a brief history and a short walkthrough that should get you well on your way to creating your own sequences. I will also go through the problems I found and how to work around them.
This is probably best seen as an intermediate-to-advanced blog; if you don't know how to import sounds or anything like that into the game, you're in trouble :<. Look to the Source SDK documentation for help, and failing that, comment here.
Through the years, video games have progressed through various stages of ability in the field of lip syncing and facial expression. In the last five years, games have gone from simple blinking and opening mouths to fully fledged facial animation systems.
Half-Life started its journey as a heavily modified version of the Quake engine. Because that engine had no way of showing characters speaking on the fly (let's face it, what spoke?), the coders at Valve added a neat way of doing voice. The idea was simple: the louder the sound, the wider the mouth. In practice this was a great way of creating the look of a moving mouth and was light on processing power, but it lacked the quality needed for future higher-definition games.
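The "louder the sound, the wider the mouth" idea is easy to sketch. This is purely illustrative (not Valve's actual code): each chunk of audio gets its peak amplitude scaled into a 0–1 mouth-open value.

```python
# Hypothetical sketch of the HL1-style approach: map the amplitude of
# each chunk of audio to how far open the character's mouth should be.

def mouth_openness(samples, max_amplitude=32767):
    """Return a mouth-open value (0.0 to 1.0) for one chunk of audio.

    `samples` is a list of signed 16-bit PCM values; the chunk's peak
    amplitude is scaled into the 0-1 range. Silence keeps the mouth shut.
    """
    if not samples:
        return 0.0
    peak = max(abs(s) for s in samples)
    return min(peak / max_amplitude, 1.0)
```

Cheap to compute, but as the article says, every sound produces the same generic flapping motion, which is why it doesn't hold up at higher fidelity.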
In the Source engine Valve had greatly improved the graphics, so they decided their modelers needed the ability to fully manipulate the way characters' faces were expressed without redoing the model. This gave birth to the choreography system that we are going to use today.
The system has many upsides, but the most important is the fact that you can give your characters more depth, and really tell a player how the character is feeling.
An example of Half-Life one's mouth moving system:
An example of Half-Life 2's mouth moving system:
It is very important to add voice syncing to characters because it shows the player who is talking, rather than some voice in their head. You can create large conversations between different characters in a level, but if the player doesn't know who is talking it can get confusing. It also matters because, done well, it really polishes off a good level. You have to be careful, though; bad voice acting and animation can make your map or mod look bad.
Bad face poses (à la Gmod):
A good face pose:
When I started work on my current mod, [Opposing Force 2], I was given the task of doing the all-important introduction. After drawing up some ideas on paper for the level, I realized that the player would be close to the main speaking character, so a badly choreographed scene could ruin the look of the map for me. I had to find a more efficient way of creating the animation.
When the level fades in, you are shown the G-Man standing a few feet away; he repeats a couple of lines and then walks away, just like in the final scene of the original Opposing Force.
Before beginning work it is always important to jot down ideas, and even draw a small storyboard, so that everything is planned on paper and you know what to do. Doing the pre-work always pays off because it saves you time in the long run and lets you show ideas to friends before you put in a lot of work.
For my setup I had already completed my design documents and briefed the mod leader on what I was going to do so that I could get the go-ahead. I converted the sound effect from the original game and was ready to go.
If you are recording your own sound, it is important to get it as clear as possible, not only for quality's sake but for some of the important steps later. If you keep blowing on the microphone while you are talking, move it to the side of your cheek and increase the volume later.
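Boosting a quiet recording afterwards is just a gain multiply. Here is a minimal sketch (the function name is my own, not part of any tool mentioned here) showing the one gotcha: samples must be clamped to the 16-bit range, or loud peaks wrap around and turn into harsh distortion.

```python
# Simple post-recording gain boost: multiply each signed 16-bit PCM
# sample by a factor and clamp to the legal range so that loud peaks
# clip cleanly instead of wrapping around.

def apply_gain(samples, factor):
    """Amplify a list of signed 16-bit PCM samples, clipping at the limits."""
    boosted = []
    for s in samples:
        v = int(s * factor)
        boosted.append(max(-32768, min(32767, v)))
    return boosted
```

Any audio editor (Audacity, for example) does the same thing with its Amplify effect, which is usually the easier route.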
If this is the first time you have choreographed a scene, it is important to download the Microsoft Speech SDK, which is needed in later steps. To get it, go to:
You will need to download the file "SpeechSDK51.exe" (around 68 MB).
When working with lip syncing, phonemes are very important; they are the building blocks of it all.
A phoneme is defined thus:
"In linguistics, a set of closely related speech sounds (phones) regarded as a single sound. For example, the sound of "r" in red, bring, or round is a phoneme."
Basically, in "Face Poser terms": when a certain sound is made while speaking a word, your mouth changes shape so that the correct sound comes out. Each word contains key mouth shapes, and these phonemes are picked out and strung together to create your lip sync.
To do this, Valve have created a really handy tool which lets you do it on the fly. To use it, start Face Poser, select which model you want to use as your actor (File > Load Model...), create a new choreography VCD (Choreography > New...) and click the "Phoneme Editor" tab at the bottom of the screen.
The editor should then be shown.
First we have to load the sound into the editor. To do this, press the "Load" button and select the sound you wish to use; the waveform will then be shown in the main area of the editor.
The reason we downloaded the Microsoft Speech SDK earlier is that the choreographer can use it to try to pick out the phonemes from sounds and automatically generate the lip syncing for you. You can check that it installed correctly by right-clicking the main area of the editor and mousing over "Change API"; if the Microsoft API is there, then it has worked.
To start the process, click the "Re-Extract..." button; you will then be prompted with a dialogue to type in some text. You need to write exactly what is said in here, because this is what the SDK uses to match the text up to words and select the phonemes. If the extractor is having trouble, it is also possible to spell out the words phonetically, but I would recommend the trick I will write about later.
When the extraction process finishes, colored text at the bottom right of the screen will report the result. Green means a complete match, orange means it's a bit dubious (it might still work, though; you will have to preview it to see if it's right) and red means it has failed.
To test the lip sync, right-click the main area and press the "Commit" button.
Then press space bar, this will preview the voice sync on the main character window.
When I was trying to extract the phonemes from the G-Man's file I found that it kept failing; I would need to find a workaround or a decent voice actor. I came to the conclusion that it would be easier to find a workaround, and then endeavored to find a new way to do it.
I decided that the best way would be to record myself saying the G-Man's lines in sync with the original, then transfer the phonemes from my extraction to the original.
To do this, I listened to the original sound a couple of times and attempted to say the lines in time with the G-Man. I then recorded it and mixed the original sound and my voice together to make sure I had the timings correct. Once I was satisfied I had done a good job, I imported my recording into Face Poser and extracted the phonemes.
You can see my original attempt at saying them together by downloading this sound file that I made:
To hear that I got it to work, load it into Windows Media Player or any other sound-playing program and use the balance control to shift the sound either left or right. My voice is on the right and the G-Man's on the left.
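The left/right comparison trick above boils down to putting each take on its own stereo channel. A minimal sketch (my own helper, not part of Face Poser) of how two mono tracks get interleaved into stereo frames:

```python
# Timing-check trick: put your recording on one stereo channel and the
# original line on the other, so panning the balance in a media player
# lets you hear either take in isolation.

def interleave_stereo(left, right):
    """Interleave two mono sample lists as L,R,L,R,... stereo frames.

    The shorter track is padded with silence (zeros) so that both
    channels stay lined up for their full length.
    """
    n = max(len(left), len(right))
    left = left + [0] * (n - len(left))
    right = right + [0] * (n - len(right))
    frames = []
    for l, r in zip(left, right):
        frames.append(l)
        frames.append(r)
    return frames
```

In practice you would do this in an audio editor by panning two tracks hard left and hard right before exporting, but the underlying file layout is exactly this interleaving (Python's `wave` module with `setnchannels(2)` can write it out directly).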
Normally my voice isn't that bad; it's just that I tried to keep it monotone to make sure it picked up the right phonemes (damn this fine curse of a Yorkshire accent :3).
To export the phonemes from my sound clip, I right-clicked the main area and clicked "Export to text file..."; this lets you export the entire set of phonemes to send to other people or to back them up, which was handy in my case. I then loaded up the G-Man's sound, imported them again, committed, and played it back to make sure they matched up, and they did.
Presuming you got a great export, congratulations! If you didn't, bad luck; try again. You can either re-record your voice using the workaround I described or edit the phonemes directly. You can find out how to do this on Valve's SDK wiki at this page:
Now all that's left to do is add some little animations to spruce it up and put it in Hammer.
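In Hammer, a finished .vcd is typically played back through a logic_choreographed_scene entity pointing at your scene file. A rough sketch of what that entity looks like in VMF text (the targetname and scene path here are made-up examples for this mod's intro):

```
entity
{
	"classname" "logic_choreographed_scene"
	"targetname" "scene_gman_intro"
	"SceneFile"  "scenes/opfor2/gman_intro.vcd"
}
```

You would then fire this entity's Start input from a trigger or logic_auto when the player should see the scene.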
Overall I was pleased with what I achieved: a great voice sync that worked, made all the sweeter by having had to work for it :3.
If you want to see my final attempt, you can watch it on YouTube below. If you have any comments or suggestions, leave them in the comments section.