dipityPix app

Friday, November 13, 2015

iPad Pro with pencil stylus review

Summary:  Overall, the whiteboarding alone is reason enough for me to buy an iPad Pro-- it has crossed that magic threshold of "good enough" where pads of paper will be a thing of the past for me.   Your excuse may be different (pencil drawing is a good one), so I think this device will be huge.

Dave Hart and Grue Debry got an iPad Pro with stylus for their company and they loaned it to me to try out tonight.   I used their limnu shared whiteboarding program to test it.

I tried it with the math I was messing with today (trying to meet Andrew Glassner's color space challenge to get uniformity into the prismatic color volume).

First, I **love** that I can rest my hand on the iPad while I draw (the iPad pro understands not to count that as a touch).

My hand is resting on the iPad as I draw, and this is more important to my comfort than I would have thought.
My biggest reaction was that this iPad is exactly the size I want.  It's about the size of 8.5 by 11 paper (actual working area about 7 3/4" by 10 1/4"), so maybe that size evolved in paper because it's the "right size," or maybe I am just so used to it that I like it.   Any bigger and it would be awkward to transport, and using this as a pad in a coffee shop is a great use case.   And of course you can pan, so really it's a window into a much bigger sheet of paper.

The stylus is fantastic.   It feels good and has some features that make me less inclined to roll my eyes at calling it a pencil.

As a whiteboard marker I loved it.   Changing colors and nib sizes was more useful than I anticipated.   Using it as an eraser worked well too-- I had to erase a lot, as will be evident in some of the not-very-careful erasures below-- I really do use limnu like a whiteboard: it's for blazing through ideas.   Here's my first screenful.
A screenshot of my first session on limnu with the iPad Pro

I have 15 more 2x2 systems of equations to solve (doing them as special cases to take advantage of zero dropouts), so I will definitely need the pan feature.   I used to use a big real whiteboard or a giant artist pad for these situations, but I will most definitely use a tablet from now on.   Even without the saving and collaboration features I think it would be a win just because of physical portability and fluidity.

Overall, the whiteboarding alone is reason enough for me to buy an iPad Pro-- it has crossed that magic threshold of "good enough" where pads of paper will be a thing of the past for me.    I don't think it will make my laptop obsolete, due to OS issues (Microsoft is making a better play for that now).   But the hardware of the iPad Pro is in the laptop power zone.    John Gruber has a really interesting discussion of this hardware/software issue.

Tuesday, November 10, 2015

Cool 2D rendering project

Benedikt Bitterli has posted a really neat 2D rendering site that includes a JavaScript demo he has also put on GitHub.   (Via Dave Hart)

Monday, November 9, 2015

Uniform color models

I had an interesting discussion with Andrew Glassner (since this is often a ray tracing blog, I'll tell the youngsters that Andrew is known for many things, but he's also the inventor of the first sub-linear ray intersection algorithm!) about the prismatic color model I have been touting.    Andrew points out that while it retains the good properties of HSV, it also retains HSV's worst property: terrible color uniformity.    Color scientists have long held, rightly, that a color space should have intuitively similar changes in color for similar changes in distance in the space.   The so-called MacAdam ellipses on the CIE diagram (where each ellipse contains a collection of colors that are barely distinguishable) can be used to warp the CIE space into a more uniform one.

Each MacAdam ellipse contains a set of colors you can barely tell apart.   A "uniform" color space would have disks of the same size for these.   Source: Wikipedia.

Andrew rightly points out that for a color space to be kick-ass (my term) it should be at least somewhat uniform.   So a challenge to all of you out there: create a uniform space that is RGB-centric.   Or if you know of one, tell me!
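For reference, the classic approximately-uniform space is CIELAB, and a sketch of the conversion shows why it is not RGB-centric: you have to route through XYZ and a cube-root nonlinearity rather than anything that preserves the RGB cube. Here is a minimal sRGB-to-Lab conversion (the function name is mine; it assumes a D65 white point and the standard sRGB primaries):

```javascript
// Convert an sRGB color (components in [0,1]) to CIELAB, a space
// designed so equal distances are roughly equally perceptible.
// Assumes the D65 white point; constants are the standard sRGB matrix.
function srgbToLab(r, g, b) {
  // 1. Undo the sRGB gamma to get linear light.
  const lin = (c) => (c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4));
  const [R, G, B] = [lin(r), lin(g), lin(b)];

  // 2. Linear RGB -> CIE XYZ.
  const X = 0.4124 * R + 0.3576 * G + 0.1805 * B;
  const Y = 0.2126 * R + 0.7152 * G + 0.0722 * B;
  const Z = 0.0193 * R + 0.1192 * G + 0.9505 * B;

  // 3. XYZ -> Lab, normalized by the D65 white point; the cube root
  //    is the perceptual compression step (linear segment near zero).
  const f = (t) => (t > 0.008856 ? Math.cbrt(t) : t / 0.12842 + 4 / 29);
  const [fx, fy, fz] = [f(X / 0.95047), f(Y / 1.0), f(Z / 1.08883)];
  return [116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)];
}
```

So the challenge, as I read it, is to find something with Lab-like uniformity whose axes are still the RGB primaries (or simple functions of them), rather than this kind of detour through XYZ.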

Thursday, November 5, 2015

Tech Report on the Prismatic Color Space

Dave Hart and I have written a tech report on our Prismatic Color Space.  Please let me know if you use it for anything fun.

Saturday, October 31, 2015

Our Pando ephemeral messaging app is private!

SnapChat has gotten a lot of bad press for its privacy policy lately.   It is likely overblown, but our Pando app (download links here) not only has a great privacy policy, we couldn't see your photos even if we wanted to (we don't have the keys).

Here's the key bit of our policy:

Thursday, October 22, 2015

Seeking minimal sample program for head coupled motion

I am seeking your help!

I am working on a fishtank VR demo.   I have been looking at various engines and APIs, and so far none of them has a very general camera model.   Most give me the ability to hardwire the viewing matrices, but I figure that if I do that, maybe I should just use GLUT-- I don't want the disadvantages of a high-level framework unless I also avoid low-level programming.

Here is a nice discussion of this for game engines.   Note it may be out of date, but it's a really nice paper.   Since I am doing mono fishtank VR, this paper calls that "head-coupled motion," which I think is a term I will adopt-- it's very descriptive.    What I need in the camera is the ability to shear the view off the center of the screen (perhaps a lot), e.g.:

This is straightforward (if easy to mess up) to do with projection and other matrices.   Three.js does allow this functionality, but only indirectly, using a tile-of-screens analogy.
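To make the "shear off the center" concrete, here is a minimal sketch in plain JavaScript (no engine; the function name is mine) of a glFrustum-style off-axis projection matrix. When the window is symmetric (left = -right, bottom = -top) this is an ordinary perspective projection; making it asymmetric is exactly the off-center shear:

```javascript
// Off-axis (sheared) perspective projection, glFrustum-style.
// left/right/bottom/top are window bounds on the near plane; an
// asymmetric window (left !== -right) shears the view off-center.
function offAxisFrustum(left, right, bottom, top, near, far) {
  const m = new Float64Array(16); // column-major, as OpenGL/Three.js use
  m[0]  = (2 * near) / (right - left);
  m[5]  = (2 * near) / (top - bottom);
  m[8]  = (right + left) / (right - left);   // the horizontal shear term
  m[9]  = (top + bottom) / (top - bottom);   // the vertical shear term
  m[10] = -(far + near) / (far - near);
  m[11] = -1;
  m[14] = (-2 * far * near) / (far - near);
  return m;
}
```

The shear lives entirely in entries m[8] and m[9], which are zero for a centered window; that is why a camera API that only exposes a field of view and aspect ratio cannot express it directly.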

So what I want is one of two things:
  1. A simple GL or similar sample program that does this for some cool model or even just an environment map
  2. A high level toolkit that supports this naturally
Tracking from a camera would be a bonus, but it's not essential.

Tuesday, October 20, 2015

Fishtank VR viewing in Three.js

I am working on a "Fishtank VR" prototype, and when I asked around a lot of people told me Three.js is the easiest API for 3D prototyping.   So last night I dug in to learn it (and found and watched some of this cool Udacity course from Eric Haines, which uses Three.js as a vehicle for much of its graphics material).   I just started using it, and the camera API is where I need to invest some classic graphics API wrangling.   Whenever you want to do something "weird" with a camera, that is where some pain will lie.

Here is the camera API for Three.js.

What I want for fishtank VR is ideally an API that lets me specify the viewpoint in physical units (like meters), and the location of the physical rectangle of the screen in real physical units (like the position of one corner and the vectors of the bottom and side edges).   Most camera APIs do not have this directly, so the question is whether you can get at it indirectly or have to build your own from scratch.

The most general camera in Three.js is the perspective camera.   This method is what I will need to use if this is to work:

This appears to be for when you have a wall of tiled screens, but happily the API designer made it a little more general.      All I need to do is manage the relative position of the eye and the "portal," so I think this can be made to work.
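A minimal sketch of that eye/portal bookkeeping (the names and conventions here are mine, not the Three.js API): put the physical screen rectangle in the z = 0 plane, in meters, track the eye at (x, y, z) with z > 0, and use similar triangles to scale the screen rectangle back to the near plane. The result is exactly the asymmetric frustum bounds an off-axis projection needs:

```javascript
// Head-coupled frustum bounds from physical units.
// `screen` is the display rectangle in the z = 0 plane (meters),
// `eye` is the tracked eye position (meters, eye.z > 0),
// `near` is the near-plane distance (meters).
function headCoupledBounds(screen, eye, near) {
  const s = near / eye.z; // similar triangles: project screen edges onto the near plane
  return {
    left:   (screen.xMin - eye.x) * s,
    right:  (screen.xMax - eye.x) * s,
    bottom: (screen.yMin - eye.y) * s,
    top:    (screen.yMax - eye.y) * s,
  };
}
```

With the eye centered over the screen this gives a symmetric frustum; as the head moves off-center, left/right (and bottom/top) become unequal, which is the shear. Updating these bounds every frame from the tracked head position is the whole trick of head-coupled motion.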