# Tianke Youke

A Base for Secreting and Running at Night


# Homogeneous Coordinate Systems

## The Real-World Meaning of Homogeneous Coordinates

This is part of my journey of learning NeRF.

# 1. Introduction to NeRF

## What is NeRF

Reference: the original NeRF paper; an online article

# Learning NeRF


### Classical


## 2.5. Manipulate Neural Fields

Neural fields are becoming a first-class representation, like point clouds or meshes, that can be manipulated directly.


## 2.3. Differentiable Forward Maps

### Differentiable rendering


## 2.2. Hybrid representations

### Tradeoffs of choosing a proper representation


## 2.1. Network Architecture

### 1. Input Encoding

Similar to NLP, positional encodings are used, e.g., sinusoidal functions. I also remember an encoding method that takes the scattering of light rays into account.
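As a concrete reference, the sinusoidal encoding from the NeRF paper maps each coordinate $$p$$ to $$\gamma(p) = (\sin(2^0 \pi p), \cos(2^0 \pi p), \ldots, \sin(2^{L-1} \pi p), \cos(2^{L-1} \pi p))$$. A minimal NumPy sketch (function name and defaults are my own):

```python
import numpy as np

def positional_encoding(x, num_freqs=4):
    """Sinusoidal input encoding as in the NeRF paper.

    x: array of coordinates, shape (..., dim)
    returns: encoded array, shape (..., dim * 2 * num_freqs)
    """
    freqs = 2.0 ** np.arange(num_freqs) * np.pi  # (L,)
    angles = x[..., None] * freqs                # (..., dim, L)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(*x.shape[:-1], -1)        # flatten per-coordinate features

p = np.array([0.5, -0.25, 1.0])                  # a 3-D point
print(positional_encoding(p, num_freqs=4).shape) # (24,)
```

Each of the 3 coordinates expands into 2 × 4 = 8 features, giving a 24-dimensional input to the MLP.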

I am currently immersing myself in the sea of NeRF, and I plan to archive my learning notes here. I am still a beginner, so the notes certainly contain errors and are not yet finished.

# Contents

Learning NeRF: Reading list, learning references, and plans

Notes on the CVPR'22 Tutorial:


## 2.4. Prior-based reconstruction of neural fields

This sounds like a one-shot task: instead of fitting and optimizing a separate neural field for each scene, we learn a prior distribution over neural fields. Then, given a specific scene, the field is adjusted in just one forward pass.
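A toy sketch of the idea, with illustrative names: one shared network maps a (latent code, coordinate) pair to a field value, so a new scene only needs a scene-specific latent code (which could be produced by an encoder in a single forward pass) rather than re-optimizing all network weights.

```python
import torch
import torch.nn as nn

class ConditionedField(nn.Module):
    """A neural field conditioned on a per-scene latent code z."""
    def __init__(self, latent_dim=8, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, z, coords):
        # z: (latent_dim,) scene code; coords: (N, 3) query points
        z_rep = z.expand(coords.shape[0], -1)  # broadcast z to every point
        return self.net(torch.cat([z_rep, coords], dim=-1))

field = ConditionedField()
z = torch.randn(8)          # scene-specific latent code
pts = torch.rand(16, 3)     # query coordinates
print(field(z, pts).shape)  # torch.Size([16, 1])
```

The network weights encode the prior shared across scenes; the latent code is the only scene-specific quantity.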

I was so confused when doing a homework assignment on implementing Luong Attention, because it says that the decoder is an RNN which takes $$y_{t-1}$$ and $$s_{t-1}$$ as input and outputs $$s_t$$, i.e., $$s_t = RNN(y_{t-1}, s_{t-1})$$.

But the PyTorch interface of an RNN is $$outputs, hidden\_last = RNN(inputs, hidden\_init)$$: it takes in a whole sequence of elements, computes serially over the time steps, and also outputs a sequence.

I was confused about what $$s_t$$ is. Is it the $$outputs$$, or the $$hidden\_states$$?
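One way to untangle this is a tiny experiment with `torch.nn.RNN` (shapes and variable names below are my own): feeding the sequence one step at a time while carrying the hidden state reproduces the rows of `outputs`, which suggests that `outputs[t]` is exactly the last-layer hidden state $$s_t$$ at step $$t$$, and `hidden_last` is the final $$s_T$$.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
rnn = nn.RNN(input_size=4, hidden_size=3)  # single layer, not batch_first
x = torch.randn(5, 1, 4)                   # (seq_len, batch, input_size)

# Full-sequence call: process all 5 steps at once
outputs, h_last = rnn(x)                   # outputs: (5, 1, 3), h_last: (1, 1, 3)

# Step-by-step call: feed one element at a time, carrying the hidden state
h = torch.zeros(1, 1, 3)
step_states = []
for t in range(5):
    _, h = rnn(x[t:t + 1], h)              # h plays the role of s_t
    step_states.append(h.squeeze(0))

# The per-step hidden states match the rows of `outputs`
print(torch.allclose(outputs.squeeze(1), torch.cat(step_states)))  # True
print(torch.allclose(h_last, h))                                   # True
```

So for a single-layer RNN, `outputs` simply collects every $$s_t$$, while `hidden_last` returns the final one separately for convenient chaining.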