Manipulate Neural Fields
This is part of my journey of learning NeRF.
2.5. Manipulate Neural Fields
Neural fields are ready to be a primary representation, similar to point clouds or meshes, that can be manipulated.

You can either edit the input coordinates, or edit the parameters \(\theta\).
On the other axis, you can edit through an explicit geometry or through an implicit neural field.

The following examples fall into different quadrants of this 2x2 space.
Editing the input via Explicit geometry (top-left)
You can represent each object with a separate neural field (in its own local frame), and then compose them in different ways.
If you want to manipulate not only spatially but also temporally, that is possible too: add a time coordinate as an input to the neural field network, and transform the time input.
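Below is a minimal sketch of both ideas, assuming each per-object field has a (made-up) signature `f(x_local, t)`: the scene query maps world coordinates into each object's local frame before evaluating its field, and optionally re-parameterizes time.

```python
import torch

def query_scene(fields, world_from_local, x_world, t, time_warp=lambda t: t):
    """Evaluate a composed scene at world-space points x_world and time t.

    fields:           list of per-object neural fields f(x_local, t) (assumed signature)
    world_from_local: list of 4x4 rigid transforms placing each object in the scene
    time_warp:        optional re-parameterization of time (e.g. slow motion, reversal)
    """
    t = time_warp(t)
    outputs = []
    for f, T in zip(fields, world_from_local):
        # Map world coordinates into the object's local (canonical) frame.
        local_from_world = torch.inverse(T)
        x_h = torch.cat([x_world, torch.ones_like(x_world[:, :1])], dim=-1)  # homogeneous
        x_local = (x_h @ local_from_world.T)[:, :3]
        outputs.append(f(x_local, t))
    # The composition rule is a design choice; summing densities is a simple common one.
    return torch.stack(outputs).sum(dim=0)
```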
You can also manipulate objects (especially human bodies) via a skeleton.
Beyond humans, we can also first estimate the different moving parts of an object to form a skeleton-like structure, and then do the same (Noguchi et al., CVPR'22).
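A hedged sketch of the usual skeleton-driven warp (a simple inverse linear blend skinning approximation; names and shapes are mine, not from the cited paper): posed-space query points are warped back into the canonical pose before the canonical field is evaluated.

```python
import torch

def inverse_lbs(x_posed, bone_transforms, skinning_weights):
    """Warp points from the posed (deformed) space back to the canonical space.

    x_posed:          (N, 3) query points in the posed space
    bone_transforms:  (B, 4, 4) canonical-to-posed transform of each bone/part
    skinning_weights: (N, B) per-point blend weights (rows sum to 1)
    """
    x_h = torch.cat([x_posed, torch.ones_like(x_posed[:, :1])], dim=-1)      # (N, 4)
    # Blending the *inverse* bone transforms is a common approximation of inverse LBS.
    inv = torch.inverse(bone_transforms)                                      # (B, 4, 4)
    blended = torch.einsum('nb,bij->nij', skinning_weights, inv)              # (N, 4, 4)
    x_canonical = torch.einsum('nij,nj->ni', blended, x_h)[:, :3]
    return x_canonical

# The canonical neural field is then queried at x_canonical instead of x_posed.
```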
Beyond rigid transformations, we can also manipulate via a mesh, because we have plenty of manipulation tools for meshes. The deformation of the mesh can be re-mapped as a deformation of the input coordinates.
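As a rough illustration of that remapping (not any specific paper's method), one can warp each query point by the displacement of its nearest mesh vertex, so that queries in the deformed space are evaluated in the space the field was trained in:

```python
import torch

def mesh_guided_warp(x, verts_deformed, verts_rest):
    """Map query points from the deformed mesh's space back to the rest space.

    A crude nearest-vertex warp; real systems typically use barycentric coordinates
    on the nearest triangle (or a cage), but the idea is the same: mesh deformation
    is re-expressed as a deformation of the field's input coordinates.
    """
    d = torch.cdist(x, verts_deformed)       # (N, V) point-to-vertex distances
    idx = d.argmin(dim=-1)                   # nearest deformed vertex per point
    displacement = verts_rest[idx] - verts_deformed[idx]
    return x + displacement                  # query the field at the warped points
```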
Editing the input via Neural Flow Fields (bottom-left)

We use a flow field \(f_{i\rightarrow j}\) to warp the points sampled along a ray at time \(i\) into their positions at time \(j\) (\(r_{i\rightarrow j}\)), so that one ray can be re-expressed as another.
We also need to define a consistency constraint here, so that the network can learn from both the forward and backward flows:
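A minimal sketch of such a consistency term, assuming two small networks `f_fwd` and `f_bwd` (my names, not from the original work) that predict forward and backward scene-flow offsets:

```python
import torch

def cycle_consistency_loss(f_fwd, f_bwd, x_i, t_i, t_j):
    """Forward-backward consistency for a neural flow field.

    f_fwd(x, t_i, t_j) predicts the offset moving a point from time i to time j;
    f_bwd predicts the reverse offset. Warping forward and then backward should
    return (approximately) to the starting point.
    """
    x_j = x_i + f_fwd(x_i, t_i, t_j)       # warp i -> j
    x_back = x_j + f_bwd(x_j, t_j, t_i)    # warp j -> i
    return (x_back - x_i).norm(dim=-1).mean()
```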

Editing network parameters via Explicit geometry (top-right)
The knowledge is already stored in the network, so instead of editing the inputs, we can directly edit the network parameters to generate new things.

- This proposed solution makes use of an encoder. The encoder learns to represent the rotated input as a high-dimensional latent code \(Z\) that carries the same rotation \(R\) as the input in 3D space. The following network then uses the latent code to generate \(f_\theta\) (see the first sketch after this list).

- In this work, the key idea is to map a high-resolution object and a similar but lower-resolution object into the same latent space. You can then easily manipulate the low-resolution object, and the edit carries over to the high-resolution one: the shared latent code is fed into the subsequent neural field network, which outputs high-resolution results.


- This work (Yang et al., NeurIPS'21) on shape editing is "super important", but the speaker did not have enough time to cover it. Basically, it shows that the tools we use to manipulate a mesh can also be applied to a neural field: we can keep some of the network parameters fixed to preserve the basic shape of the object, and the magical part is the "curvature manipulation" term. Because the neural field is differentiable, this can be achieved (see the second sketch after this list).

- Obeying the points (a.k.a. generalization): it makes sure that the manipulations applied to the input points are faithfully reconstructed by the field.
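For the encoder/latent-code idea in the first bullet, here is an illustrative sketch (architecture, names, and sizes are mine): an encoder produces a latent code \(Z\) from the observation, and the field is conditioned on it, so editing the code edits the field. Whether the code is concatenated to the input (as below) or used to generate the weights of \(f_\theta\) is an architectural choice.

```python
import torch
import torch.nn as nn

class ConditionedField(nn.Module):
    """A neural field f(x; z) conditioned on a latent code z produced by an encoder."""
    def __init__(self, latent_dim=256, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),                      # e.g. an SDF or density value
        )

    def forward(self, x, z):
        z = z.view(1, -1).expand(x.shape[0], -1)       # broadcast one code to all points
        return self.mlp(torch.cat([x, z], dim=-1))

# Editing then happens in latent space: a new z (e.g. from a rotated observation,
# or an interpolation between two codes) yields a correspondingly edited field.
```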
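For the curvature-manipulation bullet, the enabling fact is that normals and curvature of the implicit surface can be computed from the network itself via autograd and then used inside an editing loss. A small self-contained sketch, with a toy analytic SDF standing in for the trained field:

```python
import torch

def sdf_sphere(x, radius=1.0):
    # Toy analytic SDF standing in for a trained neural field f_theta.
    return x.norm(dim=-1, keepdim=True) - radius

def normal_and_mean_curvature(f, x):
    """Surface normal and mean curvature of the implicit surface f(x)=0 via autograd."""
    x = x.detach().requires_grad_(True)
    y = f(x)
    grad = torch.autograd.grad(y.sum(), x, create_graph=True)[0]     # (N, 3)
    n = torch.nn.functional.normalize(grad, dim=-1)                  # unit normals
    # Mean curvature = 0.5 * divergence of the unit normal field.
    div = sum(
        torch.autograd.grad(n[:, i].sum(), x, retain_graph=True)[0][:, i]
        for i in range(3)
    )
    return n, 0.5 * div

points = torch.randn(8, 3)
normals, H = normal_and_mean_curvature(sdf_sphere, points)  # for a sphere, H = 1/|x|
```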
Editing network parameters via Neural Fields (bottom-right)

- This work constructs a well-behaved latent space for objects, and then interpolates between the codes of different objects.
- Beyond geometry, we can also manipulate color. The network is decomposed into separate shape and color sub-networks, so each can be edited independently (see the first sketch below).

- This is the stylization work. It mainly relies on a different loss function, which does not match VGG features at the same location, but instead matches each feature to its nearest neighbor among the style image's features (see the second sketch below).
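For the shape/color decomposition above, an illustrative sketch (layer sizes and names are mine): geometry and appearance live in separate branches with separate latent codes, so each can be edited or swapped independently.

```python
import torch
import torch.nn as nn

class DecomposedField(nn.Module):
    """A field with separate shape and color branches, each with its own latent code."""
    def __init__(self, code_dim=128, hidden=256):
        super().__init__()
        self.shape_net = nn.Sequential(
            nn.Linear(3 + code_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden + 1),          # density + a feature for the color net
        )
        self.color_net = nn.Sequential(
            nn.Linear(hidden + code_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),     # RGB
        )

    def forward(self, x, shape_code, color_code):
        n = x.shape[0]
        h = self.shape_net(torch.cat([x, shape_code.view(1, -1).expand(n, -1)], dim=-1))
        density, feat = h[:, :1], h[:, 1:]
        rgb = self.color_net(torch.cat([feat, color_code.view(1, -1).expand(n, -1)], dim=-1))
        return density, rgb

# Swapping color_code recolors the object without touching its geometry, and vice versa.
```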
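And for the stylization loss, a hedged sketch of the nearest-neighbor feature-matching idea (the VGG feature extraction itself is assumed to happen elsewhere; shapes are illustrative): each rendered feature is matched to its closest style feature instead of the feature at the same pixel.

```python
import torch

def nn_feature_matching_loss(render_feats, style_feats):
    """Nearest-neighbor feature matching loss.

    render_feats: (N, C) VGG features of the rendered image (flattened over pixels)
    style_feats:  (M, C) VGG features of the style image
    For every rendered feature, find its closest style feature (cosine distance)
    and minimize that distance, rather than matching features pixel-by-pixel.
    """
    r = torch.nn.functional.normalize(render_feats, dim=-1)
    s = torch.nn.functional.normalize(style_feats, dim=-1)
    cos_dist = 1.0 - r @ s.T                 # (N, M) cosine distances
    nn_dist, _ = cos_dist.min(dim=-1)        # nearest style feature per rendered feature
    return nn_dist.mean()
```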