
Friday, July 15, 2016

I do have a bug

I questioned whether this was right:



I was concerned about the bright bottom and thought maybe there was a light tunnel effect.   I looked through my stuff and found a dented cube, and its bottom seemed to show total internal reflection:
The light tunnel effect might be happening and there is a little glow under the cube, but you can't see it through the cube.   Elevating it a little does show that:

Digging out some old code and adding a cube yielded:
This is for debugging, so the noise is just because I didn't run it to convergence.   That does look like total internal reflection on the bottom of the cube, but the back wall is similar to the floor.   Adding a sphere makes it more obvious:
Is this right?   Probably.
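A quick sanity check on the total internal reflection explanation (this is standard optics, not anything measured from the renderer).   The critical angle for glass with refractive index n = 1.5 is:

sin(theta_c) = 1/n, so theta_c = asin(1/1.5) ≈ 41.8 degrees

Light refracting in through the top face travels at most 41.8 degrees from that face's normal, so it meets the side faces at least 48.2 degrees from their normals -- past the critical angle -- and is totally reflected.   That is exactly the fiber-optics light tunnel argument.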



Wednesday, July 13, 2016

Always a hard question: do I have a bug?

In testing some new code involving box intersection I prepared a Cornell Box with a glass block, and the first question is "is this right?"    As usual, I am not sure.    Here's the picture (done with a bajillion samples so I won't get fooled by outliers):


It's smooth, anyway.   The glass is plausible to my eye.   The strangest thing is how bright the bottom of the glass block is.   Is it right?    At first I figured it was a bug.   But maybe the prism operates as a light tunnel (like fiber optics), so the bottom is the same color as a diffuse square on the prism top would be.   So now I will test that hypothesis somehow (google image search?   find a glass block?) and if that phenomenon is real and of about the right magnitude, I'll declare victory.
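For reference, the box test itself is the standard slab method.   Here is a minimal GLSL sketch (names and signature are mine for illustration, not the actual new code):

// Standard slab test: intersect ray p + t*d with an axis-aligned box.
// Returns true if the ray hits, with entry/exit distances in tNear/tFar.
// (Assumes no exactly-zero direction components.)
bool hitBox(vec3 p, vec3 d, vec3 boxMin, vec3 boxMax,
            out float tNear, out float tFar)
{
    vec3 invD = 1.0 / d;                  // per-component reciprocal
    vec3 t0 = (boxMin - p) * invD;        // distances to the min planes
    vec3 t1 = (boxMax - p) * invD;        // distances to the max planes
    vec3 tSmall = min(t0, t1);            // per-axis entry distance
    vec3 tBig   = max(t0, t1);            // per-axis exit distance
    tNear = max(max(tSmall.x, tSmall.y), tSmall.z);
    tFar  = min(min(tBig.x, tBig.y), tBig.z);
    return tNear <= tFar && tFar > 0.0;   // overlap of all three slabs
}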

Tuesday, June 7, 2016

A sale on Limnu

The collaborative whiteboarding program I used for my ray tracing e-books has finished the enterprise team features that your boss will want if you are in a company.   It's normally $8 a month per user, but if you buy in the next week it is $4 per month for a year.   Looks like that rate will apply to any users added to your team before or after the deadline as well.   I love this program -- try it!

Sunday, May 15, 2016

Prototyping video processing

I got a prototype of my 360 video project done in Quartz Composer using a custom Core Image filter.   I am in love with Quartz Composer and Core Graphics because it is such a nice prototyping environment and because I can stay in 2D for the video.   Here is the whole program:




A cool thing is I can use an image for debugging, where I can stick in whatever calibration points I want in Photoshop.   Then I just connect the video part and no changes are needed -- the Core Image filter takes an image or video equally happily and the Billboard displays them the same.

The filter is pretty simple and is approximately GLSL:
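The original kernel isn't reproduced here, but it was roughly along these lines (a sketch: the pinhole camera model and all the names are mine, and the mapping is the one derived in the May 12 post below):

// Look up an equirectangular image in the direction of a pinhole-camera
// ray through pixel uv in [0,1]^2.
uniform sampler2D equirectMap;
uniform vec3 camForward, camRight, camUp;  // orthonormal camera basis
uniform float halfWidth, halfHeight;       // tangents of the half-fovs

vec4 sampleEquirect(vec2 uv)
{
    vec3 dir = normalize(camForward
                       + (2.0*uv.x - 1.0) * halfWidth  * camRight
                       + (2.0*uv.y - 1.0) * halfHeight * camUp);
    float phi = atan(dir.y, dir.x);            // atan2: returns [-pi, pi]
    if (phi < 0.0) phi += 2.0 * 3.14159265;    // shift to [0, 2*pi)
    float u = phi / (2.0 * 3.14159265);
    float v = (1.0 + dir.z) / 2.0;             // cos(theta) = z for unit dir
    return texture2D(equirectMap, vec2(u, v));
}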

One thing to be careful about is the return range of atan (the two-argument GLSL atan is the atan2 we know and love): it returns values in [-pi, pi], and negative values need to wrap around before being used as a texture coordinate.

I need to test this with some higher-res equirectangular video.    Preferably with a fixed viewpoint and unmodified time.   If anyone can point me to some I would appreciate it.

Saturday, May 14, 2016

What resolution is needed for 360 video?

I got my basic 360 video viewer working and was not pleased with the resolution.   I've realized that people are serious when they say they need very high res.   I was skeptical of these claims because I am not that impressed with 4K TVs relative to 2K TVs unless they are huge.   So what minimum res do we need?    Let's say I have the following 1080p TV (we'll call that 2K to conform to the 4K terminology -- 2K horizontal pixels):

Image from https://wallpaperscraft.com
If we wanted to tile the wall horizontally with that TV we would need 3-4 of them.   For a 360 surround we would need 12-20.   Let's call it 10 because we are after an approximate minimum.   So that's 10 x 2K = 20K pixels horizontally to get to "good" surround video.   4K spread around the full 360 is much more like NTSC.   As we know, in some circumstances that is good enough.
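As a sanity check against visual acuity (the 60 pixels per degree figure below is the standard one-arcminute number for 20/20 vision, not anything measured here):

20000 pixels / 360 degrees ≈ 55 pixels per degree

which lands right around the ~60 pixels per degree that 20/20 vision can resolve, so 20K as an approximate minimum for "good" looks believable.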

Facebook engineers have a nice talk on some of the engineering issues these large numbers imply. 

Edit: Robert Menzel pointed out on Twitter that the same logic is why 8K does suffice for current HMDs.


Thursday, May 12, 2016

equirectangular image to spherical coords

An equirectangular image, popular in 360 video, is a projection where equal areas on the rectangle correspond to equal areas on the sphere.   Here it is for the Earth:

Equirectangular projection (source wikipedia)
This projection is much simpler than I would have expected.    The area on the unit-radius sphere between theta_1 and theta_2 (I am using the graphics convention that theta is the angle down from the pole) is:

area = 2*Pi * integral from theta_1 to theta_2 of sin(theta) d_theta = 2*Pi*(cos(theta_1) - cos(theta_2))

In Cartesian coordinates, with z_i = cos(theta_i), this is just:

area = 2*Pi*(z_1 - z_2)

(Sanity check: the whole sphere, z from 1 to -1, gives the familiar 4*Pi.)

So we can just project each sphere point horizontally onto the unit-radius cylinder and unwrap it!   If we have such an image with texture coordinates (u,v) in [0,1]^2, then

phi = 2*Pi*u
cos(theta) = 2*v -1

and the inverse:

u = phi / (2*Pi)
v = (1 + cos(theta)) / 2

So yes this projection has singularities at the poles, but it's pretty nice algebraically!
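In code, the two mappings look like this (a minimal GLSL sketch of the formulas above; note the vertical mapping derived here is the equal-area one, while much equirectangular video spaces theta linearly in v instead):

// uv in [0,1]^2 to a unit direction (phi = 2*pi*u, cos(theta) = 2*v - 1).
vec3 uvToDirection(vec2 uv)
{
    float phi = 2.0 * 3.14159265 * uv.x;
    float cosTheta = 2.0 * uv.y - 1.0;
    float sinTheta = sqrt(1.0 - cosTheta * cosTheta);
    return vec3(cos(phi) * sinTheta, sin(phi) * sinTheta, cosTheta);
}

// ...and the inverse, for a unit-length direction d.
vec2 directionToUv(vec3 d)
{
    float phi = atan(d.y, d.x);                // atan2: [-pi, pi]
    if (phi < 0.0) phi += 2.0 * 3.14159265;    // wrap to [0, 2*pi)
    return vec2(phi / (2.0 * 3.14159265), (1.0 + d.z) / 2.0);
}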

cartesian to spherical coords

This would probably have been easy to google if I had used the right keywords.   Apparently I didn't.   I will derive it here for my own future use.

Here are the three formulas I remember learning back in the dark ages:

x = rho cos(phi) sin(theta)
y = rho sin(phi) sin(theta)
z = rho cos(theta)

We know this from geometry but we could also square everything and sum it to get:

rho = sqrt(x^2 + y^2 + z^2)

This lets us solve for theta pretty easily:

cos(theta) = z / sqrt(x^2 + y^2 + z^2)

Because sin^2 + cos^2 = 1 we can get (taking the positive root, since theta is in [0, Pi] so sin(theta) >= 0):

sin(theta) = sqrt(1 - z^2/( x^2 + y^2 + z^2))

We can also get phi from geometry, using the ever-useful atan2:

phi = atan2(y, x)
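Putting it all together (a small GLSL sketch of the above; assumes p is not at the origin):

// Cartesian (x, y, z) to spherical (rho, theta, phi), graphics
// convention: theta is measured down from the +z pole.
vec3 cartesianToSpherical(vec3 p)
{
    float rho = length(p);           // sqrt(x^2 + y^2 + z^2)
    float theta = acos(p.z / rho);   // from cos(theta) = z / rho
    float phi = atan(p.y, p.x);      // atan2, returns [-pi, pi]
    return vec3(rho, theta, phi);
}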