Homogeneous Coordinates
I used to not understand why we need a homogeneous coordinate system, different from the Cartesian coordinates I grew up with, that makes everything seem more complicated; learning the various formulas only left me more confused. Now I finally get it.
The real-world meaning of homogeneous coordinates
It simply describes what our eyes see in the real world: two parallel lines can meet at infinity.
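A small worked example I did to convince myself (the line-as-homogeneous-vector convention below is my own addition, not from any particular source): take two parallel lines \(ax + by + c_1 = 0\) and \(ax + by + c_2 = 0\) with \(c_1 \neq c_2\), write them as homogeneous vectors, and intersect them with the cross product:

\[
\ell_1 = (a, b, c_1), \qquad \ell_2 = (a, b, c_2)
\]
\[
\ell_1 \times \ell_2 = \big(b(c_2 - c_1),\; -a(c_2 - c_1),\; 0\big) \;\propto\; (b, -a, 0)
\]

The last (homogeneous) coordinate is 0, so the intersection is a point at infinity, and its direction \((b, -a)\) is exactly the common direction of both lines: parallel lines really do meet at infinity in this representation.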
This is part of my journey of learning NeRF.
Reference: Original NeRF paper; an online article
Neural fields are ready to become a primary representation, similar to point clouds or meshes, that can be directly manipulated.
Similar to NLP, they use positional encodings, such as sinusoidal functions. I also remember an encoding method that takes the scattering of light rays into account.
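A minimal sketch of the sinusoidal positional encoding as I understand it from the NeRF paper (the function name and the default `num_freqs` below are my own choices; the paper uses L=10 for positions and L=4 for view directions):

```python
import math
import torch

def positional_encoding(p, num_freqs=10):
    # NeRF-style encoding: each scalar component of p (assumed in [-1, 1])
    # is mapped to sin(2^k * pi * p) and cos(2^k * pi * p) for k = 0..L-1.
    freqs = 2.0 ** torch.arange(num_freqs) * math.pi    # (L,)
    angles = p[..., None] * freqs                        # (..., dim, L)
    enc = torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)
    return enc.flatten(start_dim=-2)                     # (..., dim * 2L)

xyz = torch.rand(4, 3) * 2 - 1             # 4 sample points in [-1, 1]^3
print(positional_encoding(xyz).shape)      # torch.Size([4, 60])
```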
I am currently burying myself in the sea of NeRF. I plan to archive my learning notes here. I am still a beginner, so the notes surely contain errors and are not finished yet.
Learning NeRF: Reading list, learning references, and plans
Notes on the CVPR'22 Tutorial:
This is part of my journey of learning NeRF.
It sounds like a one-shot task: instead of fitting and optimizing a separate neural field for each scene, learn a prior distribution over neural fields. Then, given a specific scene, the model adapts the neural field in just one forward pass. A toy sketch of one way to realize this is below.
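This sketch is entirely my own (hypothetical names, not from the tutorial): a learned encoder would map a scene's observations to a latent code `z` in one forward pass, and a shared MLP conditioned on `z` then acts as that scene's field without per-scene optimization.

```python
import torch
import torch.nn as nn

class ConditionedField(nn.Module):
    """Toy neural field conditioned on a per-scene latent code (illustrative only)."""
    def __init__(self, coord_dim=3, latent_dim=128, hidden=256, out_dim=4):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(coord_dim + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),          # e.g. RGB + density
        )

    def forward(self, coords, z):
        # coords: (N, 3) query points; z: (latent_dim,) per-scene code
        z = z.expand(coords.shape[0], -1)
        return self.mlp(torch.cat([coords, z], dim=-1))

field = ConditionedField()
coords = torch.rand(1024, 3)
z = torch.randn(128)            # would come from a learned scene encoder
print(field(coords, z).shape)   # torch.Size([1024, 4])
```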
I was so confused when doing a homework assignment on implementing Luong attention, because it states that the decoder is an RNN which takes \(y_{t-1}\) and \(s_{t-1}\) as input and outputs \(s_t\), i.e., \(s_t = RNN(y_{t-1}, s_{t-1})\).
But the PyTorch implementation of an RNN is \(outputs, hidden\_last = RNN(inputs, hidden\_init)\), which takes in a whole sequence of elements, processes them step by step, and also outputs a sequence.
I was confused about what \(s_t\) actually is. Is it the \(outputs\), or the hidden states?
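A small check that helped me (my own code, assuming a single-layer, unidirectional nn.RNN): \(outputs\) is simply the stack of the per-step hidden states \(s_t\), and \(hidden\_last\) is the \(s_t\) of the final step, so stepping through the sequence manually reproduces both.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
rnn = nn.RNN(input_size=4, hidden_size=8)    # single layer, unidirectional
x = torch.randn(5, 1, 4)                     # (seq_len, batch, input_size)
h0 = torch.zeros(1, 1, 8)                    # (num_layers, batch, hidden_size)

# Sequence-level call: outputs stacks the hidden state of every step,
# hidden_last is the hidden state after the final step.
outputs, hidden_last = rnn(x, h0)

# Step-by-step call, mirroring the recurrence s_t = RNN(y_{t-1}, s_{t-1}).
s = h0
step_states = []
for t in range(x.size(0)):
    _, s = rnn(x[t:t+1], s)                  # feed one element at a time
    step_states.append(s.squeeze(0))
step_states = torch.stack(step_states)       # (seq_len, batch, hidden_size)

print(torch.allclose(outputs, step_states))  # True: outputs stacks each s_t
print(torch.allclose(hidden_last, s))        # True: hidden_last is the final s_t
```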