
Homogeneous Coordinates

I used to not understand why we need homogeneous coordinates, a system different from the Cartesian coordinates we have learned since childhood, and why everything becomes so complicated with them; even after learning the formulas I was still confused. Now I finally get it.

The real-world meaning of homogeneous coordinates

They describe what our eyes actually see in the real world: two parallel lines can intersect at infinity.
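A minimal sketch of this idea (my own illustration, not from the post): in 2D homogeneous coordinates, a line \(ax + by + c = 0\) is the vector \((a, b, c)\), and the intersection of two lines is their cross product. For two parallel lines, the last coordinate of the intersection comes out as 0, i.e., a point at infinity.

```python
# Cross product of two homogeneous line vectors gives their intersection point.
def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

line1 = (0, 1, 0)   # the line y = 0
line2 = (0, 1, -1)  # the line y = 1, parallel to line1
p = cross(line1, line2)
print(p)  # (-1, 0, 0): last coordinate is 0, so the "intersection"
          # is a point at infinity in the horizontal direction
```

This is exactly why homogeneous coordinates can represent parallel lines meeting: the point at infinity \((-1, 0, 0)\) has no Cartesian counterpart, yet it is a perfectly ordinary vector here.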

Read more »

I am currently immersing myself in the sea of NeRF, and I plan to archive my learning notes here. I am still a beginner, so the notes certainly contain errors and are not yet finished.

Contents

Learning NeRF: Reading list, learning references, and plans

Notes on the CVPR'22 Tutorial:

Read more »

This is part of my journey of learning NeRF.

2.4. Prior-based reconstruction of neural fields

Sounds like a one-shot task: instead of fitting and optimizing a separate neural field for each scene, we learn a prior distribution over neural fields. Then, given a specific scene, the model adapts the neural field in just one forward pass.
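One common way to realize this (a hypothetical sketch, with my own names; the tutorial may use a different mechanism) is a hypernetwork: a network that maps a scene code to the weights of a small neural field in a single forward pass, so no per-scene optimization is needed.

```python
# Hypothetical hypernetwork sketch (illustrative only, not the tutorial's model):
# a scene code predicts the weights of a tiny per-scene field.
import torch
import torch.nn as nn

class HyperField(nn.Module):
    def __init__(self, code_dim=16, hidden=32):
        super().__init__()
        # predict W and b of a per-scene linear layer: h = relu(xyz @ W + b)
        self.to_w = nn.Linear(code_dim, 3 * hidden)
        self.to_b = nn.Linear(code_dim, hidden)
        self.head = nn.Linear(hidden, 1)  # shared head, e.g. density

    def forward(self, scene_code, xyz):
        W = self.to_w(scene_code).view(3, -1)  # (3, hidden), scene-specific
        b = self.to_b(scene_code)              # (hidden,)
        h = torch.relu(xyz @ W + b)            # per-scene features
        return self.head(h)                    # one value per query point

field = HyperField()
sigma = field(torch.randn(16), torch.randn(100, 3))
print(sigma.shape)  # torch.Size([100, 1])
```

The "prior" lives in the hypernetwork's own weights, which are trained across many scenes; a new scene only requires encoding its observations into `scene_code`.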

image-20221211234430727
Read more »

Reference: https://stackoverflow.com/questions/48302810/whats-the-difference-between-hidden-and-output-in-pytorch-lstm

I was so confused when working on a homework assignment implementing Luong Attention, because it says the decoder is an RNN that takes \(y_{t-1}\) and \(s_{t-1}\) as input and outputs \(s_t\), i.e., \(s_t = \mathrm{RNN}(y_{t-1}, s_{t-1})\).

But the PyTorch API for RNNs is \(outputs, hidden\_last = \mathrm{RNN}(inputs, hidden\_init)\): it takes in a whole sequence of elements, processes them step by step, and also outputs a sequence.

I was confused about what \(s_t\) is. Is it the \(outputs\), or the hidden state?
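The confusion can be resolved empirically. A small check (my own sketch, using `torch.nn.RNN` with unbatched input): for a single-layer, unidirectional RNN, the last element of `outputs` is exactly the final hidden state, so \(s_t\) is simultaneously the output at step \(t\) and the hidden state passed to step \(t+1\).

```python
# Verify that outputs[-1] == h_n for a single-layer unidirectional nn.RNN.
import torch
import torch.nn as nn

torch.manual_seed(0)
rnn = nn.RNN(input_size=4, hidden_size=8)  # single layer, unidirectional
x = torch.randn(5, 4)                      # unbatched sequence of 5 steps
outputs, h_n = rnn(x)                      # outputs: (5, 8); h_n: (1, 8)
print(torch.allclose(outputs[-1], h_n[0]))  # True
```

In other words, `outputs` collects the hidden state at every time step, while `h_n` is just the hidden state of the last step (per layer); for a single layer they coincide at \(t = T\).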

Read more »