Avatar SDK Local Compute Unity plugin  2.2.1
Toolset for Head 2.0 avatars generation in Unity3D
Frequently Asked Questions (FAQ)

Does it work on mobile platforms?

"Local Compute" avatar generation currently works on Windows, Android, iOS and macOS. The minimum requirement is at least 1 GB of free RAM.

Why is my app installation so big?

Most of this package's size comes from the binary resources required to compute avatars directly on the client device and to generate hairstyles and blendshapes. These resources are quite heavy. You may consider the Cloud SDK version, which is really lightweight and works on all devices, even in WebGL.

How can I reduce the installation size?

Local Compute SDK ships with all of its resources (blendshapes, haircuts). The total size of the resources is about 142 MB: haircuts take ~75 MB, blendshapes ~19 MB. For example, if you don't need the "ponytail" haircut, you can delete the directory that contains this haircut's resources: "Assets/itseez3d/avatar_sdk/sdk_local_compute/resources/bin/haircuts/plus/ponytail". In the same way, you can delete the whole haircuts directory if you are not going to use haircuts at all. The same works for blendshape sets.
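As an illustration, the pruning can be scripted before a build. The sketch below is not part of the SDK: the keep-list and the helper function are hypothetical, and it assumes the haircut layout described above (`.../bin/haircuts/plus/<name>`); adapt the paths to your project.

```python
import shutil
from pathlib import Path

# Hypothetical keep-list: the haircuts your app actually uses.
KEEP = {"short", "long"}

def prune_haircuts(resources_root: str, keep: set) -> list:
    """Delete haircut resource folders that are not in the keep-list.

    Returns the names of the removed haircuts (sorted)."""
    haircuts_dir = Path(resources_root) / "bin" / "haircuts" / "plus"
    removed = []
    for entry in sorted(haircuts_dir.iterdir()):
        if entry.is_dir() and entry.name not in keep:
            shutil.rmtree(entry)  # frees the space in the built app
            removed.append(entry.name)
    return removed
```

For instance, `prune_haircuts("Assets/itseez3d/avatar_sdk/sdk_local_compute/resources", KEEP)` would remove every haircut folder except those in `KEEP`.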

Another solution: you can omit the binary resources from the package. Instead, the resources are stored on a server, and the client application downloads them before computing the first avatar. See this guide on how to do it.

How can I attach your 3D head to the body?

  • We are able to generate Full Body avatars in the Avatar SDK Cloud Unity plugin. If you need the Avatar SDK Local Compute Unity plugin with Full Body avatars, contact us at support@avatarsdk.com.
  • There is an integration with the UMA 2 Unity plugin. You can generate the head in the UMA topology and attach it to a UMA character. More info can be found on the page: Integration with UMA plugin
  • There is a legacy sample ("Assets/itseez3d/avatar_sdk/samples_local_compute/03_fullbody_legacy_sample_local_compute/scenes/03_fullbody_legacy_sample_local_compute.unity") where the "Head 2.0 | head/mobile" avatars are attached to the body. This method works for any humanoid rig, but of course, there will be a seam between the neck and the rest of the body, which should be covered with some kind of collar. See the detailed instructions on the scene documentation page.

I have an "EntryPointNotFoundException: initAvatarSdk" error or a "DllNotFoundException". How do I solve it?

Such errors happen when the native library can't be found. The most common reason is an invalid target platform in the Build Settings.

  • If you run the plugin in the Unity Editor on macOS, the target platform should be "PC, Mac & Linux Standalone".
  • On Windows, please make sure that the target architecture is set to x86_64.

Is avatar generation limited?

Yes, the avatar generation limit depends on your subscription plan: Avatar SDK pricing. The Local Compute version of the plugin computes avatars locally, but it reports the number of generated avatars to the cloud.

Is there a trial/free mode for Local Compute SDK?

Yes, we provide a 30-day trial for the Pro subscription plan: https://accounts.avatarsdk.com/subscription/

Does it work in runtime?

Yes, the plugin works at runtime. Any player of your game can generate their own avatar.

How many triangles do the avatars have?

Bust/mobile avatars have ~58K triangles. In addition, 8 LODs are supported for bust/mobile avatars, ranging from ~4K to ~23K triangles per avatar.

How can I export the generated model?

The generated avatar can be exported in the OBJ and FBX formats.

  • The OBJ format is supported on all platforms: Windows, macOS, Android, iOS.
  • The FBX format is supported on Windows and in standalone applications on macOS.

How do I get more haircuts or blendshapes?

Please see the scene documentation for the parameters_sample scene here.

What recommendations can I give my users in terms of input photos?

The SDK is pretty robust, but following these recommendations will improve the quality (in priority order):

  • The face should be easily recognizable in the photo, and the photo should not be blurry.
  • Only for the Face pipeline: ask the user to move hair away from the forehead and the sides of the face; otherwise, hair may appear on the face texture.
  • Uniform, good lighting in the photo, without dark shadows or overly bright glare.
  • A person should keep a neutral facial expression or smile slightly without opening their mouth. If teeth are visible in the input photo, the lip texture might be incorrect.
  • It's best to look straight into the camera without turning your neck or eyes. This way the Head pipeline can generate good-looking shoulders.
  • We advise removing glasses because they're reconstructed only in the texture, not in the 3D mesh. But this is not strictly necessary.