Tag: Animation

Phase-Functioned Neural Networks for Character Control

Theory:

http://theorangeduck.com/page/phase-functioned-neural-networks-character-control

 
 

Video notes:

Speaker background: a PhD graduate in machine learning for character animation.

The animation system is the black box in the middle of the diagram.

What the user feeds the animation system is directional input from the controller, a direction in the plane as shown in the figure. These are fairly high-level commands, such as which way we want to walk.

The animation system itself is like a giant state machine: different states correspond to different animations, with blending between them. Writing this complex system directly as code is difficult and hard to maintain,

so we reconsider: could a simple algorithm implement this complex interactive animation system?

We want the animation system's inputs and outputs to become just a set of parameters.

Looking at the original complex animation system again: if we treat its inputs and outputs as parameters of an animation model, it can already be seen that way; it is like picking entries out of a huge database.

So the second thing we want is to output the next pose directly.

Can it be done? Yes. The basic idea is to treat the animation system as a black box: give it inputs and get the desired outputs. The concrete approach follows.

Input x: Trajectory Positions, Directions, Heights; Previous Joint Positions, Velocities…

Output y: the transform of every joint.

To train, we first need data. It was motion-captured: unstructured clips of about ten minutes each, roughly two hours in total.

On the height-matching problem: the captured data assumes the feet are planted exactly on the ground, and training relies on that; we used a large variety of different terrains to obtain the corresponding data.

Now, how is this learned?

A neural network is really just a function,

for example, a function that maps a given input to the corresponding output.

Our function's inputs and outputs are as shown in the figure,

and they can be quantified as vectors for input and output.

As an example, the simplest single-layer NN looks like this; the two parameters w and b are what learning must produce.

The inputs are the known pairs x and y;

the outputs are the learned results w and b, as shown here.

The function we finally obtain is called the behavior function.

Various functions can be involved here; this one, for example, is a nonlinear blending (activation) function.

The two are quite similar.

A deep, multi-layer network behaves like this;

this example is the formula of a three-layer neural network.
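
For reference, the three-layer network in the PFNN paper (which the slide's formula corresponds to) has the form

$$\Phi(\mathbf{x}) = W_2\,\mathrm{ELU}\big(W_1\,\mathrm{ELU}(W_0\,\mathbf{x} + b_0) + b_1\big) + b_2$$

where the $W_i$, $b_i$ are the weights and biases of the three layers and ELU is the activation.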

Training then works by repeatedly feeding in inputs, measuring the error of the result, and adjusting the network parameters accordingly. That is the basic loop.
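
To make that loop concrete, here is a minimal gradient-descent sketch for the single-layer case y ≈ Wx + b, using Eigen (the library the released demo also uses). This is an illustration of the idea only, not the paper's training code:

// Minimal sketch: learn W and b by gradient descent on squared error.
#include <Eigen/Dense>
#include <iostream>

int main() {
    Eigen::MatrixXf W = Eigen::MatrixXf::Random(2, 3); // parameters to learn
    Eigen::VectorXf b = Eigen::VectorXf::Zero(2);
    Eigen::VectorXf x = Eigen::VectorXf::Random(3);    // one known input
    Eigen::VectorXf y = Eigen::VectorXf::Random(2);    // its known output
    const float lr = 0.1f;                             // learning rate
    for (int step = 0; step < 100; ++step) {
        Eigen::VectorXf err = W * x + b - y;           // measure the error
        W -= lr * (err * x.transpose());               // adjust the parameters
        b -= lr * err;
    }
    std::cout << "remaining error: " << (W * x + b - y).norm() << "\n";
}

Real training runs this over the whole motion database rather than a single pair, but the measure-error-then-adjust loop is the same.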

We used GPU computation to save time.

Phase-functioned NN means we use a special kind of network: different motions (different phases of the cycle) use different network weights, which avoids blending unrelated motions together. See the paper for details.

This is what we finally obtain: a simple animation-driving model that replaces the state machine and blend tree.

Then the demo is shown.

Performance figures.

Conclusions.

 
 

First, a complete pass through the slides:

The presentation slides from SIGGRAPH.

Goal: fast, compact, expressive character control in games.

Showcase of the final results.

Part 1: background.

Where previous work can be improved:

  1. The whole database must be kept in memory
  2. The data must be processed by hand
  3. Complex acceleration methods are needed

What can a NN bring?

  1. Virtually unlimited data capacity (arbitrary motions)
  2. Fast, real-time, low memory usage

But how do we generate the motion?

CNN: learn the relation between the user's control signals and the character's motion.

demo

What is the problem?

Ambiguity: the same input can produce different character motions.

In practice:

  1. Special handling is needed to resolve the ambiguity
  2. The whole input trajectory must be provided up front
  3. A multi-layer CNN is still too slow for games

RNN: learn the mapping from the previous frame to the next frame.

demo

RNN result quality:

  1. Stays plausible for only about 10 seconds
  2. Floating cannot be avoided
  3. Ambiguity cannot be avoided

Summary of the problems we face:

  1. How do we handle large amounts of data?
  2. How do we resolve the ambiguity?
  3. How do we make the generated results look good?

Data capture

Unstructured motion capture: about two hours of data in total, in clips of roughly ten minutes each. Plenty of tables and chairs were laid out to simulate complex terrain, so that the data covers as many situations as possible.

demo

demo

Terrain fitting

  1. We want terrain data to be learned together with the motion data
  2. But capturing motion and terrain at the same time is cumbersome
  3. So: build a database of heightmaps, then fit each motion clip to a patch of a heightmap

Example

Parameterization:

  1. The final results are good
  2. The character trajectory uses a sliding window
  3. Gait, terrain height, and similar information is added

The neural network

PFNN: a NN whose weights are produced by a phase function.

The phase is a scalar in [0, 2π] indicating where the current pose lies within the current locomotion cycle.

Diagram: the input is the current frame's pose, the output is the next frame's pose, and the NN's parameters come from the phase function.

demo

Architecture: a feed-forward NN with two hidden layers of 512 hidden units each and ELU activations.

The phase function's output is the network's weights: a cyclic cubic (Catmull-Rom) interpolation over four control points, each control point being a complete set of NN weights.
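
Concretely, the control-point interpolation can be sketched like this (names are mine, not the released code, but the released demo implements the same cyclic Catmull-Rom idea):

// Sketch: the phase function as a cyclic Catmull-Rom spline through four
// control points, where each control point alpha[k] is a full weight set.
#include <Eigen/Dense>
#include <array>
#include <cmath>

using Weights = Eigen::MatrixXf;
constexpr float kTau = 6.2831853f; // 2*pi

Weights phase_function(float phase, const std::array<Weights, 4>& alpha) {
    float t = 4.0f * (phase / kTau);       // map [0, 2pi) onto 4 segments
    int k = static_cast<int>(t) % 4;
    float w = t - std::floor(t);           // position within the segment
    const Weights& y0 = alpha[(k + 3) % 4];
    const Weights& y1 = alpha[k];
    const Weights& y2 = alpha[(k + 1) % 4];
    const Weights& y3 = alpha[(k + 2) % 4];
    // Catmull-Rom basis, applied elementwise to whole weight matrices.
    return y1 + w * (0.5f * (y2 - y0))
              + (w * w) * (y0 - 2.5f * y1 + 2.0f * y2 - 0.5f * y3)
              + (w * w * w) * (1.5f * (y1 - y2) + 0.5f * (y3 - y0));
}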

Training algorithm:

  1. Feed in the phase to generate the weights
  2. Run the NN with those weights on the input to get the output
  3. Measure the output error
  4. Backpropagate through the NN and the phase function to update the control points

Results

demo

Conclusions

Precomputing the phase function: evaluating it at runtime is fairly expensive for a game.

  1. The phase lives in the bounded range [0, 2π], so the function can be precomputed over that range
  2. At runtime, interpolate between the precomputed results
  3. This trades speed against memory
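
Continuing the sketch above, the precomputation then looks roughly like this (the sample count is illustrative, not the paper's constant):

// Sample the phase function at fixed phases once; interpolate at runtime.
#include <cmath>
#include <vector>

constexpr int kSamples = 50;

struct PrecomputedPhase {
    std::vector<Weights> cache; // one weight set per sampled phase

    void build(const std::array<Weights, 4>& alpha) {
        cache.clear();
        for (int i = 0; i < kSamples; ++i)
            cache.push_back(phase_function(kTau * i / kSamples, alpha));
    }

    // Linear interpolation between the two nearest cached samples.
    Weights at(float phase) const {
        float t = kSamples * (phase / kTau);
        int i0 = static_cast<int>(t) % kSamples;
        int i1 = (i0 + 1) % kSamples;
        float w = t - std::floor(t);
        return (1.0f - w) * cache[i0] + w * cache[i1];
    }
};

More samples cost more memory but make the runtime interpolation closer to the exact spline, which is exactly the speed/memory trade-off the slide mentions.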

Performance figures

Drawbacks:

  1. Training the model is very time-consuming
  2. Artists editing or fixing the data get no immediate feedback
  3. Results are hard to predict, and when something goes wrong it is hard to tell why

Strengths:

  1. A NN easily digests large amounts of data and can produce an enormous variety of results
  2. The semantic decomposition resolves the ambiguity problem
  3. The simple structure and parameterized usage make quality easy to control

The hands-on part, in two steps:

  1. First, see how the demo is implemented
  2. Then, see how the network is handled

AIAnimation code analysis

After much difficulty it finally runs, but there is no way to control it. My guess was that the input code is incompatible between Ubuntu and Windows, so one of the main goals of the code analysis is to replace the input handling.


Code structure

Everything starts from main():

  • First, SDL is initialized. Simple DirectMedia Layer (SDL) is a cross-platform development library designed to provide low level access to audio, keyboard, mouse, joystick, and graphics hardware via OpenGL and Direct3D. In short, a common cross-platform OpenGL support library.

    This is where I realized it was not an input-compatibility problem at all: reading the code shows it only supports a gamepad.

    Unfortunately, it runs very slowly on Windows; the frame rate feels below 10 fps!

  • GLEW initialization

  • Resource loading

    • Options defines the settings we are allowed to modify:

    • CameraOrbit is the camera's initial setup

    • LightDirectional is the scene lighting setup

      Just a pile of GL settings.

    • Character is the character's initial setup

      This is the definition and loading of the character data, and it is important!

      First, the stored character data consists of the four files below, annoyingly in binary format:

      vertices, triangles, parent-child hierarchy, and xform data, one file each.

      The file-reading functions are simple, streaming the data straight into the corresponding containers. The character's data structure is as follows:

      This part ends with a forward-kinematics implementation, which is simple: each child joint moves with its parent.

    • Trajectory is the path's initial setup

      This defines the data structure of the motion trajectory, shown below:

    • IK initial setup

      The IK data structure is shown below:

      It also provides a two_joint function, discussed later where it is used, since its purpose is not obvious yet.

    • Shader handling

      These are the functions that load shaders and apply them in OpenGL.

    • Heightmap initial setup

      The main point here is how the height data is read and stored.

      The height data comes as two files per terrain.


      Load() just reads floats one by one into vector<vector<float>> data (see the sketch after this list):

      The xxx.txt data is used to generate data, while xxx_ao.txt generates the vbo/tbo (positions, colors, and so on) and vbo_data/tbo_data (index data).

    • Areas initial setup

      Its data structure is as follows:

    • The PFNN model

      Loading and initializing the model. First, its data structure:

      ArrayXf is Eigen's structure for storing a float array. Under the Load function the files are loaded, and there are many, many of them!

      Looking at the file layout shown above, the data belonging to the pfnn model mainly comprises the network model and the character.

  • Loading the game world

    The load_world functions mainly seem to do terrain labeling, so apparently the program needs terrain labels to run?

  • The game loop
    • Input handling

      Currently gamepad only. SDL provides a cross-platform input module; details omitted, see the figure below.

      In fact not all interaction lives here: many of the main controls are written directly into the rendering code, though always through the SDL interface.

    • Rendering

      It consists of three parts, pre-processing, render processing, and post-processing, covered separately below.
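
As referenced in the Heightmap item above, here is a plausible reconstruction of what a Load() like that does, written from the description rather than copied from the repo (load_heights is my name for it):

// Read whitespace-separated floats, one row per line, into a 2D vector.
#include <fstream>
#include <sstream>
#include <string>
#include <vector>

std::vector<std::vector<float>> load_heights(const std::string& path) {
    std::vector<std::vector<float>> data;
    std::ifstream file(path);
    std::string line;
    while (std::getline(file, line)) {
        std::vector<float> row;
        std::istringstream ss(line);
        float h;
        while (ss >> h) row.push_back(h); // one float at a time
        if (!row.empty()) data.push_back(row);
    }
    return data;
}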

 
 

Pre-processing

  • Update camera (driven directly by input)

    The right stick controls camera rotation and L/R zoom in and out, acting directly on the camera parameters.

  • Update target direction and speed (driven directly by input)

    Also a direct response to input: the buttons determine the user's desired target direction and speed.

  • Update gait (algorithm pre-processing, step 1)

    The current trajectory parameters are determined from the previous frame's trajectory parameters and the options.

  • Predict the future trajectory (step 2)

    The trajectory parameters from the previous step and the character parameters are blended into the trajectory_positions_blend object.

  • Collision handling (step 3)

    Adjust trajectory_positions_blend based on the walls info in areas.

    Here trajectory_positions_blend is also written back into trajectory.

  • Jumps (step 4)

    Adjust trajectory based on the jump info in areas.

  • Crouch areas (step 5)

    Adjust trajectory based on the crouch_pos info in areas.

  • Walls (step 6)

    Adjust trajectory directly based on the walls info in areas.

  • Trajectory rotation (step 7)

    Adjust the trajectory->rotations values.

  • Trajectory height (step 8)

    Adjust trajectory based on the heightmap values.

  • Input trajectory positions and directions (PFNN input, part 1)

    Trajectory data fills pfnn->Xp.

  • Input trajectory gait (PFNN input, part 2)

    Trajectory data fills pfnn->Xp.

  • Input current joint positions, velocities, and rotations (PFNN input, part 3)

    Trajectory data fills pfnn->Xp.

  • Input trajectory heights (PFNN input, part 4; see the packing sketch after this list)

    Trajectory data fills pfnn->Xp.

  • Perform Regression [the core step: the model's predict]

    The steps above set up the PFNN inputs; what must still be passed into the predict call is character->phase.

  • Timing: measure how long predict takes, for debugging.
  • Build Local Transform (PFNN output)

    This step uses the PFNN output to obtain each joint's position/velocity/rotation.

    One more step is needed: the joint data obtained above is in world space and must be converted to local space.

  • IK processing

    Apply the joint data obtained above, joint by joint, to the character's IK!
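
All four input steps above write into the same flat vector pfnn->Xp. The exact offsets live in the demo code; purely to illustrate the packing idea (the struct and layout below are invented, not the demo's):

// Pack per-sample trajectory features plus joint state into one vector.
#include <Eigen/Dense>
#include <vector>

struct Sample { float px, pz, dx, dz, gait, height; };

Eigen::ArrayXf pack_input(const std::vector<Sample>& traj,
                          const std::vector<float>& joint_state) {
    Eigen::ArrayXf Xp(traj.size() * 6 + joint_state.size());
    int o = 0;
    for (const Sample& s : traj) {       // per-sample trajectory features
        Xp(o++) = s.px; Xp(o++) = s.pz;  // position, root-local xz
        Xp(o++) = s.dx; Xp(o++) = s.dz;  // facing direction
        Xp(o++) = s.gait;                // gait label weight
        Xp(o++) = s.height;              // terrain height under the sample
    }
    for (float v : joint_state)          // previous-frame joint pos/vel
        Xp(o++) = v;
    return Xp;
}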

 
 

Render processing

  • Render Shadow
  • Render Terrain
  • Render Character
  • Render the Rest
  • Render Crouch Area
  • Render Jump Areas
  • Render Walls
  • Render Trajectory
  • Render Joints
  • UI Elements
  • PFNN Visual
  • Display UI

All of this is plain OpenGL usage, unrelated to the AI data, so it is not covered further.

 
 

Post-processing

  • Update Past Trajectory

    The trajectory data is shifted along.

  • Update Current Trajectory

    The trajectory values are recomputed.

  • Collide with walls

    Trajectory collision update.

  • Update Future Trajectory

    The trajectory is updated from the PFNN results.

  • Update Phase

  • Update Camera
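
Of these, Update Phase is the step tied directly to the PFNN: advance the phase by the predicted change and wrap it back into [0, 2π). A hedged one-liner (variable names mine):

#include <cmath>

// Advance the phase by the network's predicted delta, wrapped to [0, 2pi).
void update_phase(float& phase, float dphase) {
    phase = std::fmod(phase + dphase, 2.0f * 3.14159265f);
}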

The AI4Animation project

After trying many times I still could not get any of its demos to open completely without errors and actually run.

So, building on AIAnimation, let's work out how to use this project.

 
 

The focus is the character, so first look at how the character is constructed:

Above are the usage structures of the two demos; each revolves around one important C# file, which we now compare.

Original corresponds to the SIGGRAPH 17 work and Adam to the SIGGRAPH 18 work; we look at 17 first.

 
 

First, the overall structure:

The second class inherits from Editor and builds the hierarchical Animation menu inside the editor; the remaining three parts are implemented by the other three classes.

Those three classes also produce the three sub-tab menu items shown in the figure above.

  • The NeuralNetwork class does only one thing: it lets the user choose the NN model. In other words, this class handles the UI presentation and logic, nothing more. Everything a NN should contain lives in the Model class. The figure below shows the data stored in Model:

    Next, the interface functions:

    These interfaces exist for compatibility with, and extension to, multiple NN methods.

    The rest are Tensor-related functions. Tensor is a wrapper around Eigen data; the actual computation is implemented by Eigen, and a set of methods for manipulating the associated data structures is provided alongside.

    The last important piece of Model is Parameters; on the Unity side this mainly means the methods for loading, reading, and saving them.

  • The Controller class handles input, mainly WSADQE. It also holds an important Styles array, which appears to record the style weights.
  • The Character class drives the skeleton's motion.

The core intermediate data is the Trajectory class: essentially an array of data points, plus methods that operate on the array and on individual points. Each point carries rich data, including various transforms and states:

 
 

All the core usage is in the Update function. The approach should be exactly the same as in AIAnimation, so we can compare them (a small sketch of the recurring blending pattern follows this list):

  • Everything below executes only when a NN model is present.
  • Update Target Direction / Velocity

    What happens here:

    TargetDirection = blend of TargetDirection and the current direction defined by the Trajectory, mixed by the TargetBlending weight.

    TargetVelocity = blend of TargetVelocity and the Controller's input velocity, mixed by the TargetBlending weight.

  • Update Gait

    Each Style value of Trajectory.Points[RootPointIndex] = blend of its current value and whether the user selected that Style, mixed by the GaitTransition weight.

  • Predict Future Trajectory

    Predicted position = interpolation between the continuation of (current position minus previous position) and the TargetPosition.

    Predicted style = continuation of the current style.

    Predicted direction = interpolation between the current direction and the TargetDirection.

  • Avoid Collisions

    Make the new positions valid, i.e. take collisions into account.

  • Input Trajectory Positions / Directions

    Feed NN.Model the position and direction of every Trajectory point (x and z values only).

  • Input Trajectory Gaits

    Feed NN.Model the Style array of every Trajectory point.

  • Input Previous Bone Positions / Velocities

    Feed NN.Model the position and velocity of every joint.

  • Input Trajectory Heights

    Feed NN.Model the height (y value) of every Trajectory point.

  • Predict [run the model]

  • Update Past Trajectory (points with i < RootPointIndex)

    Update every point Trajectory.Points[i]: point i takes point i+1's values (i.e., shift back by one point).

  • Update Current Trajectory (the point at RootPointIndex)

    Build a new point Trajectory.Points[RootPointIndex] from the NN output, setting its position and direction.

  • Update Future Trajectory (points with RootPointIndex+1 < i < Trajectory.Points.Length)

    Each point's new position = its old position + an interpolation between the current direction and the distance and direction blended from the model outputs (several influences are weighed in this interpolation).

  • Avoid Collisions

    Same approach as the Avoid Collisions step above.

  • Compute Posture

    Two arrays, positions and rotations, store each joint's transform;

    each positions[i] = NN output * 0.5 + (previous position advanced along the previous direction) * 0.5;

    each Velocities[i] = the NN output.

  • Update Posture

    Each joint's position and rotation are taken directly from the corresponding entries in the previous step's arrays.

  • Map to Character

    The transforms are applied to the character.
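
The recurring pattern in nearly every step above is exponential blending: value = lerp(value, target, weight), applied every frame. The project itself is C#; this is a tiny language-neutral sketch in C++ with invented names:

struct Vec2 { float x, z; };

static Vec2 lerp(Vec2 a, Vec2 b, float t) {
    return { a.x + (b.x - a.x) * t, a.z + (b.z - a.z) * t };
}

// Per frame, the target direction eases toward the controller's input,
// so a sudden stick flick becomes a smooth turn over several frames.
void update_target(Vec2& target_dir, Vec2 input_dir, float target_blending) {
    target_dir = lerp(target_dir, input_dir, target_blending);
}

The same call shape covers the gait weights (blend toward 0 or 1 by GaitTransition) and the 0.5/0.5 mix in Compute Posture.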

AIAnimation project setup

GPU Gems – Animation in the “Dawn” Demo

4.1 Introduction

 
 

“Dawn” is a demonstration that was created by NVIDIA Corporation to introduce the GeForce FX product line and illustrate how a high-level language (such as HLSL or Cg) could be used to create a realistic human character. The vertex shaders deform a high-resolution mesh through indexed skinning and morph targets, and they provide setup for the lighting model used in the fragment shaders. The skin and wing fragment shaders offer both range and detail that could not have been achieved before the introduction of advanced programmable graphics hardware. See Figure 4-1.

[Dawn is NVIDIA's demonstration of applying a high-level shading language to a realistic human character for its new product line; mainly vertex shaders for morphing and skinning, and fragment shaders for lighting.]

 
 


Figure 4-1 A Screen Capture of the Real-Time Dawn

 
 

This chapter discusses how programmable graphics hardware was used to accelerate the animation of the Dawn character in the demo.

[This chapter covers how programmable graphics hardware was used to accelerate the animation of the Dawn character.]

 
 

 
 

 
 

4.2 Mesh Animation

 
 

Traditionally, mesh animation has been prohibitively expensive for complex meshes because it was performed on the CPU, which was already burdened with physical simulation, artificial intelligence, and other computations required by today’s applications. Newer graphics hardware has replaced the traditional fixed-function pipeline with programmable vertex and fragment shaders, and it can now alleviate some of that burden from the CPU.

[Traditional mesh animation is prohibitively expensive because it runs on the CPU, which already carries lots of other work; the vertex/fragment shaders of newer graphics hardware can take over part of that burden.]

 
 

Sometimes it is still necessary to perform such operations on the CPU. Many stencil-based shadow volume techniques must traverse the transformed mesh in order to find the silhouette edges, and the generation of the dynamic shadow frustum is often best done on the CPU (see Chapter 9, “Efficient Shadow Volume Rendering”). In scenes where the character is drawn multiple times per frame into shadow buffers, glow buffers, and other such temporary surfaces, it may be better to perform the deformations on the CPU if the application becomes vertex-limited. Deciding whether to perform mesh deformations on the CPU or on the GPU should be done on a per-application or even on a per-object basis.

[Sometimes the mesh work still belongs on the CPU: stencil shadow volumes, for instance, must traverse the transformed mesh on the CPU to find silhouette edges. When a character is rendered many times per frame into shadow/glow buffers, GPU deformation may not pay off; decide per application or even per object.]

 
 

The modeling, texturing, and animation of the Dawn character were done primarily in Alias Systems’ Maya package. We therefore based our mesh animation methods on the tool set the software provides. We have since created a similar demo (“Dusk,” used to launch the GeForce FX 5900) in discreet’s 3ds max package, using the same techniques; these methods are common to a variety of modeling packages and not tied to any single workflow. The methods used in these two demos are (indexed) skinning, where vertices are influenced by a weighted array of matrices, and weighted morph targets, used to drive the emotions on Dawn’s face.

[The Dawn art assets come from Maya and drive the morphing demo here; the same methods (indexed skinning plus weighted morph targets) were reused for the Dusk demo and are not tied to any single tool.]

 
 

 
 

4.3 Morph Targets

 
 

Using morph targets is a common way to represent complex mesh deformation, and the NVIDIA demo team has created a variety of demos using this technique. The “Zoltar” demo and the “Yeah! The Movie” demo (content provided by Spellcraft Studio) started with 30 mesh interpolants per second, then removed mesh keys based on an accumulated error scheme. This allowed us to reduce the file size and the memory footprint—up to two-thirds of the original keys could be removed with little to no visible artifacts. In this type of mesh interpolation, there are only two interpolants active at any given time, and they are animated sequentially.

[Morphing is a common way to represent mesh deformation, and NVIDIA has built many demos with it; with sequential interpolation only two interpolants are active at a time, and redundant keys can be removed with little visible artifact.]

 
 

Alternatively, morph targets can be used in parallel. Dawn is a standard example of how this approach can be useful. Beginning with a neutral head (27,000 triangles), our artist created 50 copies of that head and modeled them into an array of morph targets, as shown in Figure 4-2. Approximately 30 of those heads corresponded to emotions (such as happy, sad, thoughtful, and so on), and 20 more were modifiers (such as left eyebrow up, right eyebrow up, smirk, and so on). In this style of animation, the morph target weights will probably not add to 1, because you may have (0.8 * happy + 1.0 * ear_wiggle), for example—Dawn is a fairy, after all.

[Alternatively, morph targets can be used in parallel. Dawn's neutral head has 27,000 triangles, and 50 copies were modeled as an array of morph targets, some shown below; the final result can be a weighted sum of several targets, and the weights need not add to 1.]

 
 


Figure 4-2 Emotional Blend Targets (Blend Shapes)

 
 

Although such complex emotional faces could have been made entirely of blends of more elemental modifiers, our artist found it more intuitive to model the face in the pose he desired, because it is hard to model an element such as an eyebrow creasing, without seeing how the eyes, cheeks, and mouth work together. This combination also helps with hardware register limitations, described later.

[Synthesizing complex facial expressions is hard; the artist found it more intuitive to model full emotional poses than to build everything from elemental modifiers, and this combination also helps with the hardware register limits described later.]

 
 

 
 

4.3.1 Morph Targets in a High-Level Language

 
 

Luckily, the implementation of morph targets in HLSL or Cg is simple. Assuming that vertexIn is our structure containing per-vertex data, applying morph targets in a linear or serial fashion is easy:

[Fortunately, implementing morph targets in HLSL or Cg is simple. First, serial interpolation between the previous- and next-time positions looks like this:]

 
 

float4 position = (1.0f - interp) * vertexIn.prevPositionKey + interp * vertexIn.nextPositionKey;

 
 

In this code, interp is a constant input parameter in the shader, but prevPositionKey and nextPositionKey are the positions at the prior time and next time, respectively. When applying morph targets in parallel, we find the spatial difference between the morph target and the neutral pose, which results in a difference vector. We then weight that difference vector by a scalar. The result is that a weight of 1.0 will apply the per-vertex offsets to achieve that morph target, but each morph target can be applied separately. The application of each morph target is just a single “multiply-add” instruction:

[interp is a constant shader input; prevPositionKey/nextPositionKey are the positions at the previous and next times. Parallel morph targets are similar: weight each target's per-vertex difference from the neutral pose and accumulate, one multiply-add per target:]

 
 

// vertexIn.positionDiffN = position morph target N - neutralPosition

 
 

float4 position = neutralPosition;

position += weight0 * vertexIn.positionDiff0;

position += weight1 * vertexIn.positionDiff1;

position += weight2 * vertexIn.positionDiff2;

 
 

 
 

4.3.2 Morph Target Implementation

 
 

We wanted our morph targets to influence both the vertex position and the basis (that is, the normal, binormal, and tangent) so that they might influence the lighting performed in the fragment shader. At first it would seem that one would just execute the previous lines for position, normal, binormal, and tangent, but it is easy to run out of vertex input registers. When we wrote the “Dawn” and “Dusk” demos, the GPU could map a maximum of 16 per-vertex input attributes. The mesh must begin with the neutral position, normal, binormal, texture coordinate, bone weights, and bone indices (described later), leaving 10 inputs open for morph targets. We might have mapped the tangent as well, but we opted to take the cross product of the normal and binormal in order to save one extra input.

[We want morph targets to affect both the vertex position and the basis (normal, binormal, tangent) so they feed the lighting in the fragment shader. Note the limited number of per-vertex input registers: the GPU at the time mapped at most 16 per-vertex attributes, and after the base mesh data 10 remained for morph targets; the tangent was dropped and recomputed as the cross product of normal and binormal to save one input.]

 
 

Because each difference vector takes one input, we might have 10 blend shapes that influence position, five blend shapes that influence position and normal, three position-normal-binormal blend shapes, or two position-normal-binormal-tangent blend shapes. We ultimately chose to have our vertex shader apply five blend shapes that modified the position and normal. The vertex shader would then orthonormalize the neutral tangent against the new normal (that is, subtract the collinear elements of the new normal from the neutral tangent and then normalize) and take the cross product for the binormal. Orthonormalization is a reasonable approximation for meshes that do not twist around the surface normal:

[Each difference vector takes one input, so there is a budget trade-off between how many targets are active and which properties they affect. The final choice was five blend shapes modifying position and normal; the tangent is then recovered by orthonormalizing the neutral tangent against the new normal:]

 
 

// assumes normal is the post-morph-target result

// normalize only needed if not performed in fragment shader

 
 

float3 tangent = vertexIn.neutralTangent - dot(vertexIn.neutralTangent, normal) * normal;

tangent = normalize(tangent);

 
 

Thus, we had a data set with 50 morph targets, but only five could be active (that is, with weight greater than 0) at any given time. We did not wish to burden the CPU with copying data into the mesh every time a different blend shape became active, so we allocated a mesh with vertex channels for neutralPosition, neutralNormal, neutralBinormal, textureCoord, and 50 * (positionDiff, NormalDiff). On a per-frame basis, we merely changed the names of the vertex input attributes so that those that should be active became the valid inputs and those that were inactive were ignored. For each frame, we would find those five position and normal pairs and map those into the vertex shader, allowing all other vertex data to go unused.

[So there are 50 morph targets but only five active (weight > 0) at any time. To avoid CPU copies whenever a different blend shape becomes active, the mesh is allocated with channels for neutralPosition, neutralNormal, neutralBinormal, textureCoord, and 50 × (positionDiff, normalDiff); per frame, only the vertex input attribute bindings are renamed so the five active pairs become the valid inputs and the rest are ignored.]

 
 

Note that the .w components of the positionDiff and normalDiff were not really storing any useful interpolants. We took advantage of this fact and stored a scalar self-occlusion term in the .w of the neutralNormal and the occlusion difference in each of the normal targets. When extracting the resulting normal, we just used the .xyz modifier to the register, which allowed us to compute a dynamic occlusion term that changed based on whether Dawn’s eyes and mouth were open or closed, without any additional instructions. This provided for a soft shadow used in the lighting of her skin (as described in detail in Chapter 3, “Skin in the ‘Dawn’ Demo”).

[The .w components of positionDiff/normalDiff carry no useful interpolant, so a scalar self-occlusion term is stored in neutralNormal.w and the occlusion differences in each target's normal .w; reading the normal with .xyz then yields a dynamic occlusion term essentially for free, saving space and instructions.]

 
 

On the content-creation side, our animator had no difficulty remaining within the limit of five active blend shapes, because he primarily animated between three or so emotional faces and then added the elemental modifiers for complexity. We separated the head mesh from the rest of the body mesh because we did not want the added work of doing the math or storing the zero difference that, say, the happy face would apply to Dawn’s elbow. The result remained seamless—despite the fact that the head was doing morph targets and skinning while the body was doing just skinning—because the outermost vertices of the face mesh were untouched by any of the emotional blend shapes. They were still modified by the skinning described next, but the weights were identical to the matching vertices in the body mesh. This ensured that no visible artifact resulted.

[On the content-creation side, five active blend shapes were plenty, since the animator mostly blended about three emotional faces plus elemental modifiers. The head mesh was split from the body so the body neither stores nor computes zero morph differences; the seam stays invisible because the face's outer vertices are untouched by the emotional shapes and share identical skinning weights with the matching body vertices.]

 
 

 
 

4.4 Skinning

 
 

Skinning is a method of mesh deformation in which each vertex of that mesh is assigned an array of matrices that act upon it along with weights (that should add up to 1.0) that describe how bound to that matrix the vertex should be. For example, vertices on the bicep may be acted upon only by the shoulder joint, but a vertex on the elbow may be 50 percent shoulder joint and 50 percent elbow joint, becoming 100 percent elbow joint for vertices beyond the curve of the elbow.

[Skinning means the skeleton drives the mesh: each mesh vertex is bound to an array of matrices with weights (summing to 1.0) that define how strongly the vertex follows each joint.]

 
 

Preparing a mesh for skinning usually involves creating a neutral state for the mesh, called a bind pose. This pose keeps the arms and legs somewhat separated and avoids creases as much as possible, as shown in Figure 4-3. First, we create a transform hierarchy that matches this mesh, and then we assign matrix influences based on distance—usually with the help of animation tools, which can do this reasonably well. Almost always, the result must be massaged to handle problems around shoulders, elbows, hips, and the like. This skeleton can then be animated through a variety of techniques. We used a combination of key-frame animation, inverse kinematics, and motion capture, as supported in our content-creation tool.

[Prepare a bind pose: a neutral stance with limbs separated to avoid creases. Build a matching transform hierarchy, assign matrix influences by distance (usually with tool support), then fix up problem areas such as shoulders, elbows, and hips. The skeleton is then animated with keyframes, IK, and motion capture.]

 
 


Figure 4-3 Dawn’s Bind Pose

 
 

A skinned vertex is the weighted summation of that vertex being put through its active joints, or:

[Formula: the skinned vertex is the weighted sum of the vertex transformed by each of its active joints; the matrix products come from the joint hierarchy and bind pose. Reconstructed from the surrounding text:]

$$\mathbf{v}' \;=\; \sum_{i} w_i \, M_i \, B_i^{-1} \, \mathbf{v}$$

where $\mathbf{v}$ is the vertex in the neutral (bind) pose, $B_i^{-1}$ is the inverse bind-pose matrix of joint $i$, $M_i$ is that joint's current world matrix, and $w_i$ is the vertex's weight for it.
Conceptually, this equation takes the vertex from its neutral position into a weighted model space and back into world space for each matrix and then blends the results. The concatenated $M_i B_i^{-1}$ matrices are stored as constant parameters, and the matrix indices and weights are passed as vertex properties. The application of four-bone skinning looks like this:

[The computation above goes through model space and back to world space; the implementation is as follows:]

 
 

float4 skin(float4x4 bones[98],
            float4 boneWeights0,
            float4 boneIndices0)
{
    float4 result = boneWeights0.x * mul(bones[boneIndices0.x], position);
    result = result + boneWeights0.y * mul(bones[boneIndices0.y], position);
    result = result + boneWeights0.z * mul(bones[boneIndices0.z], position);
    result = result + boneWeights0.w * mul(bones[boneIndices0.w], position);
    return result;
}

 
 

In the “Dawn” demo, we drive a mesh of more than 180,000 triangles with a skeleton of 98 bones. We found that four matrices per vertex was more than enough to drive the body and head, so each vertex had to have four bone indices and four bone weights stored as vertex input attributes (the last two of the 16 xyzw vertex registers mentioned in Section 4.3.2). We sorted bone weights and bone indices so that we could rewrite the vertex shader to artificially truncate the number of bones acting on the vertex if we required higher vertex performance. Note that if you do this, you must also rescale the active bone weights so that they continue to add up to 1.

[In the Dawn demo a mesh of over 180,000 triangles is driven by a skeleton of 98 bones. Four matrices per vertex proved more than enough, so each vertex stores four bone indices and four weights; the weights are sorted so the shader can truncate the bone count for speed, in which case the remaining weights must be rescaled to keep summing to 1.]
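
That truncate-and-rescale note is easy to get wrong; a small illustrative sketch (not the demo's code):

// Keep the k largest weights, then rescale so they still sum to 1.
// Assumes w[] is sorted descending, as the chapter says the demo stored it.
void truncate_weights(float w[4], int k) {
    float sum = 0.0f;
    for (int i = 0; i < k; ++i) sum += w[i];
    for (int i = 0; i < 4; ++i)
        w[i] = (i < k && sum > 0.0f) ? w[i] / sum : 0.0f;
}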

 
 

4.4.1 Accumulated Matrix Skinning

 
 

When skinning, one must apply the matrix and its bind pose inverse not only to the position, but also to the normal, binormal, and tangent for lighting to be correct. If your hierarchy cannot assume that scales are the same across x, y, and z, then you must apply the inverse transpose of this concatenated matrix. If scales are uniform, then the inverse is the transpose, so the matrix remains unchanged. Nonuniform scales create problems in a variety of areas, so our engine does not permit them.

[When skinning, the matrix and its bind-pose inverse must be applied not just to the position but also to the normal, binormal, and tangent; with nonuniform scales you would need the inverse transpose, so the engine simply requires uniform scales.]

 
 

If we call the skin function from the previous code, we must call mul for each matrix for each vertex property. In current hardware, multiplying a point by a matrix is implemented as four dot products and three adds, and vector-multiply is three dot products and two adds. Thus, four-bone skinning of position, normal, binormal, and tangent results in:

[Counting the cost of four-bone skinning of position, normal, binormal, and tangent: a point-matrix multiply costs 4 dot products and 3 adds, and a vector-matrix multiply 3 dot products and 2 adds, giving 88 instructions in total:]

$$4 \times \big[ (4\,\text{dot} + 3\,\text{add}) + 3 \times (3\,\text{dot} + 2\,\text{add}) \big] \;=\; 4 \times 22 \;=\; 88 \ \text{instructions}$$
 
 

An unintuitive technique that creates the sum of the weighted matrices can be trivially implemented in HLSL or Cg as follows:

[Accumulating the weighted matrices on the GPU instead:]

 
 

float4x4 accumulate_skin(float4x4 bones[98],
                         float4 boneWeights0,
                         float4 boneIndices0)
{
    float4x4 result = boneWeights0.x * bones[boneIndices0.x];
    result = result + boneWeights0.y * bones[boneIndices0.y];
    result = result + boneWeights0.z * bones[boneIndices0.z];
    result = result + boneWeights0.w * bones[boneIndices0.w];
    return result;
}

 
 

Although this technique does burn instructions to build the accumulated matrix (16 multiplies and 12 adds), it now takes only a single matrix multiply to skin a point or vector. Skinning the same properties as before costs:

[This spends extra instructions building the accumulated matrix (16 multiplies and 12 adds), but then a single matrix multiply skins each point or vector, lowering the total:]

$$(16\,\text{mul} + 12\,\text{add}) + (4\,\text{dot} + 3\,\text{add}) + 3 \times (3\,\text{dot} + 2\,\text{add}) \;=\; 28 + 22 \;=\; 50 \ \text{instructions}$$
 
 

 
 

4.5 Conclusion

 
 

It is almost always beneficial to offload mesh animation from the CPU and take advantage of the programmable vertex pipeline offered by modern graphics hardware. Having seen the implementation of skinning and morph targets using shaders, however, it is clear that the inner loops are quite easy to implement using Streaming SIMD Extensions (SSE) instructions and the like, and that in those few cases where it is desirable to remain on the CPU, these same techniques work well.

 
 

In the case of the “Dawn” demo, morph targets were used to drive only the expressions on the head. If we had had more time, we would have used morph targets all over the body to solve problems with simple skinning. Even a well-skinned mesh has the problem that elbows, knees, and other joints lose volume when rotated. This is because the mesh bends but the joint does not get “fatter” to compensate for the pressing of flesh against flesh. A morph target or other mesh deformation applied either before or after the skinning step could provide this soft, fleshy deformation and create a more realistic result. We have done some work on reproducing the variety of mesh deformers provided in digital content-creation tools, and we look forward to applying them in the future.

[Closing remarks; nothing worth noting.]

 
 

[There is no particularly memorable technique here; the main takeaway is that NVIDIA's GPUs became powerful enough to run a skinned character at this scale. Such a complex avatar has limited practical value, and the GPU skinning optimizations should theoretically yield less than a 50% gain, with real-world results likely worse.]

 
 

4.6 References

 
 

Alias Systems. Maya 5.0 Devkit. <installation_directory>/devkit/animEngine/

 
 

Alias Systems. Maya 5.0 Documentation.

 
 

Eberly, David H. 2001. 3D Game Engine Design, pp. 356–358. Academic Press.

 
 

Gritz, Larry, Tony Apodaca, Matt Pharr, Dan Goldman, Hayden Landis, Guido Quaroni, and Rob Bredow. 2002. “RenderMan in Production.” Course 16, SIGGRAPH 2002.

 
 

Hagland, Torgeir. 2000. “A Fast and Simple Skinning Technique.” In Game Programming Gems, edited by Mark DeLoura. Charles River Media.