This is the simplest scene, designed to showcase the most basic functionality of the SDK: generating a 3D model from a photo of a person and displaying the generated model in the scene.
Code usage
Before executing any SDK methods, you should initialize AvatarSdkMgr.
After that, you can get an object that implements the IAvatarProvider interface. This object must be initialized before use.
IAvatarProvider provides all common methods for working with avatars: generation and loading from local storage.
In the case of the Offline SDK, this object is OfflineAvatarProvider.
// Cached provider instance; created once in InitializeSdk and reused by the other coroutines.
private IAvatarProvider avatarProvider;

// Pipeline passed to InitializeAvatarAsync; selected via the UI toggles (see "The toggle switches" below).
private PipelineType pipeline;

IEnumerator InitializeSdk()
{
    // AvatarSdkMgr must be initialized before any other SDK call.
    if (!AvatarSdkMgr.IsInitialized)
        AvatarSdkMgr.Init(sdkType: SdkType.Offline);

    // For the Offline SDK this resolves to OfflineAvatarProvider.
    avatarProvider = AvatarSdkMgr.IoCContainer.Create<IAvatarProvider>();
    yield return avatarProvider.InitializeAsync();
}

IEnumerator DisplayHead(string avatarCode)
{
    // Request the generated head mesh (without blendshapes in this sample).
    var avatarHeadRequest = avatarProvider.GetHeadMeshAsync(avatarCode, false);
    yield return avatarHeadRequest;
    TexturedMesh headMesh = avatarHeadRequest.Result;

    // Build a simple object hierarchy: avatar root -> head.
    var avatarObject = new GameObject(AVATAR_OBJECT_NAME);
    var headObject = new GameObject(HEAD_OBJECT_NAME);
    var headMeshRenderer = headObject.AddComponent<SkinnedMeshRenderer>();
    headMeshRenderer.sharedMesh = headMesh.mesh;

    // Show the generated texture with the SDK's unlit shader.
    var headMaterial = new Material(Shader.Find("AvatarUnlitShader"));
    headMaterial.mainTexture = headMesh.texture;
    headMeshRenderer.material = headMaterial;
    headObject.transform.SetParent(avatarObject.transform);
}

IEnumerator GenerateAvatar(byte[] photoBytes)
{
    yield return InitializeSdk();

    // Upload the photo and create an avatar entry; the result is the avatar code.
    var initializeRequest = avatarProvider.InitializeAvatarAsync(photoBytes, "name", "description", pipeline, ComputationParameters.Empty);
    yield return initializeRequest;
    string avatarCode = initializeRequest.Result;

    // Run the computation and wait until the model is ready.
    var calculateRequest = avatarProvider.StartAndAwaitAvatarCalculationAsync(avatarCode);
    yield return calculateRequest;

    yield return DisplayHead(avatarCode);
}
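For illustration, here is one way these coroutines might be wired to a UI event. This is a minimal sketch, not the sample's actual code: the photoTexture field and the handler name are hypothetical placeholders for wherever your photo bytes come from.

// Hypothetical usage sketch: kicking off avatar generation from a button handler.
// photoTexture is an assumption; in the real sample the photo comes from the
// Random Photo / User Photo buttons described below.
public Texture2D photoTexture;

public void OnGenerateButtonClick()
{
    // EncodeToJPG produces the byte[] expected by InitializeAvatarAsync
    // (the texture must be readable for this to work).
    byte[] photoBytes = photoTexture.EncodeToJPG();
    StartCoroutine(GenerateAvatar(photoBytes));
}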
User interface and the scene
Upload photo buttons:
- Press the Random Photo button to create an avatar from one of the predefined photos embedded into the SDK. The SDK generates the 3D model in the scene, which you can rotate with the mouse.
- Press the User Photo button to open a file dialog and upload your own photo. The photo must contain a frontal image of a person. This button works only in the editor because Unity provides a file browser only in the editor (see the sketch after this list).
- On mobile phones, it is also possible to take a photo with the default camera app.
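To illustrate why the User Photo button is editor-only: the standard way to open a file dialog in Unity is EditorUtility.OpenFilePanel, which lives in the UnityEditor assembly and is unavailable in built players. The method below is a hypothetical sketch, not the sample's actual implementation.

// Hypothetical editor-only photo picker; the #if guard is required because
// UnityEditor APIs do not exist in builds.
#if UNITY_EDITOR
private byte[] PickUserPhoto()
{
    string path = UnityEditor.EditorUtility.OpenFilePanel("Select a frontal photo", "", "jpg");
    if (string.IsNullOrEmpty(path))
        return null;
    return System.IO.File.ReadAllBytes(path);
}
#endif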
The toggle switches:
- The Animated Face toggle enables the so-called "animated_face" pipeline. This is the basic SDK mode: the 3D shape of the face is inferred by a deep learning algorithm and attached to a bald head with customizable haircuts. Other samples demonstrate how to animate this avatar and attach it to a fullbody character.
- The Head 1.2 toggle enables the "head_1.2" pipeline. This is a newer, more advanced mode: the deep learning algorithm predicts the entire shape and texture of the head from a single frontal picture. This avatar is animated with blendshapes, but it is not intended to be used with bodies; rather, like Oculus VR Avatars, it serves as a standalone representation of a virtual human (how these toggles map to the pipeline parameter is sketched after this list). Here's a short video demonstration: https://youtu.be/FXdHC1zecb8
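To connect the toggles to the code above: the pipeline field passed to InitializeAvatarAsync is a value of the SDK's PipelineType enum. The exact member names used below (FACE, HEAD_1_2) are an assumption based on the pipeline names in this list; verify them against the PipelineType enum shipped with your SDK version.

// Hypothetical toggle handler; FACE and HEAD_1_2 are assumed enum member
// names -- check PipelineType in your SDK version.
public void OnPipelineToggleChanged(bool head12Enabled)
{
    pipeline = head12Enabled ? PipelineType.HEAD_1_2 : PipelineType.FACE;
}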
Inner workings
For simplicity, the implementation of this scene is concentrated in a single file: GettingStartedSample.cs, a MonoBehaviour attached to the SampleSceneHandler game object.
See also the FAQ and the getting started instructions on the Main Page.