Reproduce some of the effects from the paper by Ng et al. using real lightfield data.
Note a link to my second project at the bottom.
This is a really clever trick that exploits parallax. Objects far from the camera move less when the camera moves, and they stay closer to the same optical axis than objects that are nearer. By averaging the images of the lightfield grid without any shifting, we focus on distant objects, since they stay relatively still across the grid. What's cool is that we can steer this effect by shifting the images appropriately before averaging.
To achieve this, we compute shifts (dx, dy) from the camera positions provided in the dataset. Scaling these shifts by a scalar controls the focus depth, letting us generate images that simulate focusing on objects at different distances.
20 images, scalar from -0.1 to 0.5
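Concretely, the shift-and-average step only takes a few lines of NumPy. This is a minimal sketch rather than my exact code: the function name `refocus`, the (u, v) position format, and the sign convention of the shifts are assumptions, and depending on how the dataset's camera coordinates are read the signs may need flipping.

```python
import numpy as np
from scipy.ndimage import shift

def refocus(images, positions, scalar):
    """Shift each grid image toward the optical center and average.

    images:    list of HxWx3 float arrays from the lightfield grid
    positions: list of (u, v) camera coordinates from the dataset
    scalar:    controls the focus depth (here swept from -0.1 to 0.5)
    """
    positions = np.asarray(positions, dtype=np.float64)
    center = positions.mean(axis=0)        # the grid's optical center
    acc = np.zeros_like(images[0], dtype=np.float64)
    for img, (u, v) in zip(images, positions):
        dx = scalar * (u - center[0])      # horizontal shift
        dy = scalar * (v - center[1])      # vertical shift
        # Bilinear sub-pixel shift; axis order is (rows, cols, channels).
        acc += shift(img, (dy, dx, 0), order=1, mode='nearest')
    return acc / len(images)
```

With scalar = 0 this reduces to plain averaging (focus on distant objects); increasing the scalar sweeps the focal plane toward the camera.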
The reason averaging a large number of images sampled over a grid perpendicular to the optical axis mimics a camera with a larger aperture is this: decreasing the aperture is effectively the same as admitting less light from larger angles. Averaging all of the grid images is equivalent to admitting light from all of those directions, mimicking a fully open aperture.
Conversely, to simulate a smaller aperture, we include only the images closest to the optical center (within some radius) in our averaging. This reduces the effective aperture size by narrowing the range of angles that contribute to the image.
20 images with radius from 0 to 10
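The aperture adjustment described above can be sketched the same way. Again the name `adjust_aperture` and the (u, v) position format are my own assumptions, not fixed by the dataset:

```python
import numpy as np

def adjust_aperture(images, positions, radius):
    """Average only the grid views within `radius` of the optical center.

    radius = 0   -> roughly a single central image (smallest aperture)
    large radius -> the whole grid (fully open aperture)
    """
    positions = np.asarray(positions, dtype=np.float64)
    center = positions.mean(axis=0)
    dist = np.hypot(positions[:, 0] - center[0],
                    positions[:, 1] - center[1])
    # Keep only views whose camera sits within `radius` of the center.
    selected = [img for img, d in zip(images, dist) if d <= radius]
    if not selected:
        # Radius smaller than the nearest camera: fall back to that view.
        selected = [images[int(np.argmin(dist))]]
    return np.mean(selected, axis=0)
```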
Plenoptic cameras are amazing. With plenoptic cameras/light fields, we can simulate the different images a normal camera would have taken. Parallax is not just a cool visual effect but also a useful property that enables depth refocusing and aperture adjustment. Finally, the intuition for depth refocusing is rooted in the very simple averaging effects we learned earlier in the semester.
This project demonstrated how lightfields enable complex visual effects, such as depth refocusing and aperture simulation, through simple operations like shifting and averaging. It built intuition for how plenoptic cameras work and showed how important precise image capture is when working with lightfields.
I tried several examples, and the easiest setup I could manage was a chessboard at home. Unfortunately, I was unable to fully recreate the lightfield effect; I think this is because I could not achieve the necessary precision. The paper specifies that the images must be captured over a plane orthogonal to the optical axis, with consistent spacing between them. It's extremely difficult to maintain equal spacing and keep the image plane orthogonal to the optical axis at home; my hands shake too much. As a result, my images don't line up at any scalar.
Leftmost view, middle view, rightmost view
This website contains transitions not captured by the PDF; specifically, the title image changes into a high-gamma version and then into the black-and-white threshold-filter version.
CS194-26 Fall 2017 Project 5: https://inst.eecs.berkeley.edu/~cs194-26/fa17/hw/proj5/
Ng et al., Light Field Photography with a Hand-Held Plenoptic Camera: https://graphics.stanford.edu/papers/lfcamera/lfcamera-150dpi.pdf