
Tuesday, September 16, 2014

The color crimson versus other reds in school colors

In the creation of FanPix, I had to acquire the school colors and logos for many teams.  I expected it to be hard, but it turns out most universities have large branding departments and publish very specific information about their visual branding.  For example, this site at Indiana University includes the following:

[Image: Indiana University's official color specification, including its HEX value.]
The HEX in particular you can cut and paste into Photoshop and many APIs.  (Note: I often get asked if I am allowed to use these colors and logos.  We don't charge for or have ads in FanPix, and the IP lawyer I consulted said this is probably fair use, because a photo of a real logo painted on a face seems to be, and we are not costing the trademark owners money the way we would be if we gave away free t-shirts.)

The word "crimson" comes up a decent amount in sports, and one has to wonder why this rarely used color name is used at all.  The word is one of the 954 most common color names in the xkcd color survey, where it comes out around #8c000f.  If we consult Wikipedia, we see crimson has been used in English for over 600 years.  That same article lists several college teams that use "crimson" as one of their colors.  The six schools are shown here (the colors are from their various branding sites):

[Image: the six schools' crimsons.]
Interestingly, Utah and Kansas appear to have a classic red but call it "crimson".  The xkcd crimson gives this:

[Image: the xkcd crimson swatch.]
I looked through some other logos and found these darkish reds that are not in the Wikipedia list:

[Image: darkish red logos not on the Wikipedia list.]
With xkcd crimson added, we get this, where maybe Cornell and Montana are not really crimson:

[Image: the same set with xkcd crimson included.]
In reading order, the universities' own names for their colors are: carnelian (Cornell Red), cardinal, maroon, maroon, crimson (Wikipedia missed that one!), maroon, and garnet (not common enough to have its own Wikipedia page!).  Note that it is not our imagination: many teams have some red as a color.

One thing is for sure from this: I am saying "maroon" and "red" from now on!   After looking at so many team branding sites, my award for best goes to Notre Dame, which has fully modernized its treatment of ND gold:  "Electronic displays (LCD screens, CRT monitors, etc.) may display colors slightly different than in print. Gold is a particularly challenging color. As such, an alternate gold has been provided for electronic applications."   North Carolina has the most "have to get it just right" color: Carolina blue.  My award for the best color word goes to my own Reed College, which to my surprise has a color: "Reed Red", or if we go to the Wikipedia page, Richmond Rose, whatever that is!   That being said, Reed has the best seal; it is unofficial, though to Reed's credit it is available in the official school bookstore.
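
Since those HEX values get pasted into code as often as into Photoshop, here is a minimal sketch of the kind of helper that turns one into a UIColor.  It is hypothetical illustration, not FanPix code, and the function name is made up:

import UIKit

// Hypothetical helper: turn a branding HEX string like "#8c000f" into a UIColor.
func colorFromHex(hex: String) -> UIColor {
    let cleaned = hex.stringByReplacingOccurrencesOfString("#", withString: "")
    var rgb: UInt32 = 0
    NSScanner(string: cleaned).scanHexInt(&rgb)  // parse the six hex digits
    let red   = CGFloat((rgb >> 16) & 0xFF) / 255.0
    let green = CGFloat((rgb >> 8)  & 0xFF) / 255.0
    let blue  = CGFloat(rgb & 0xFF) / 255.0
    return UIColor(red: red, green: green, blue: blue, alpha: 1.0)
}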

Friday, September 12, 2014

Random diffuse rays

In my lazy coding for the path tracer in the last post, I chose a ray that is uniformly random in direction but above the surface.  I used a pretty standard trick here.  First I choose a point in the cube [-1,1]^3, then keep it only if it is inside the unit sphere (r < 1):

// Rejection sampling: pick points in the cube [-1,1]^3 until one lands inside the unit sphere.
do {
      scattered_direction = vec3(x: -1.0 + 2.0*drand48(), y: -1.0 + 2.0*drand48(), z: -1.0 + 2.0*drand48())
} while dot(scattered_direction,scattered_direction) > 1.0
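
(For the curious: the unit sphere fills (4π/3)/2³ = π/6 ≈ 0.52 of that cube, so each try succeeds about half the time and the loop averages fewer than two iterations.)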


Then I stick a loop around that, which retries until the direction is above the surface:

while dot(scattered_direction, hit_info.2) < 0.001

Here hit_info.2 is the surface normal.   I could go in and produce correct diffuse rays, but that would involve coordinate systems and all that (a sketch of that version appears at the end of this post).  Instead I wondered if I could use the trick shown on the right below:
Left: choose a point in the cube, keep it if it is in the sphere, and keep that if it is above the surface.  Right: choose a point in the unit sphere tangent to the surface at the hit point (its bounding cube not shown), and that is it; every such point is above the surface.
It's not obvious to me that this is Lambertian, but it might be, and it's probably closer than the one on the left.   I dumped that code in (excuse the lazy reuse of variables):

do {
      scattered_direction = vec3(x: -1.0 + 2.0*drand48(), y: -1.0 + 2.0*drand48(), z: -1.0 + 2.0*drand48())
} while dot(scattered_direction,scattered_direction) > 1.0
// Adding the unit normal recenters the random point into the sphere tangent to
// the surface, so the resulting direction always points above the surface.
let sphere_tangent_point = -unit_vector(hit_info.2)
scattered_direction = scattered_direction - sphere_tangent_point


And this yields:

[Image: render with uniform rays.]

[Image: render with diffuse-ish rays.]
Looks like darker shadows, which makes sense: the scattered rays are concentrated toward the normal, so on the ground they head more nearly straight up.  It would require some calculus to see if the rays are Lambertian, and since this was an exercise in avoiding work, I am not doing that.  My money is on it being more oriented to the normal than true diffuse, but not bad.
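
For the record, here is a sketch of the "coordinate systems and all that" version I was avoiding: build an orthonormal basis around the normal, then sample a cosine-weighted direction directly.  It reuses vec3, cross, and unit_vector from my renderer; the function name and the basis construction are mine for illustration, not code from the program above.

// A minimal cosine-weighted hemisphere sampler (a sketch, not the code used above).
func cosine_weighted_direction(normal: vec3) -> vec3 {
    // Build an orthonormal basis (u, v, w) with w along the normal.
    let w = unit_vector(normal)
    // Start from any axis not nearly parallel to w.
    let a = fabs(w.x) > 0.9 ? vec3(x: 0.0, y: 1.0, z: 0.0) : vec3(x: 1.0, y: 0.0, z: 0.0)
    let v = unit_vector(cross(w, a))
    let u = cross(v, w)
    // Standard cosine-weighted sampling: phi = 2*pi*r1, radius = sqrt(r2), z = sqrt(1 - r2).
    let r1 = drand48()
    let r2 = drand48()
    let phi = 2.0*M_PI*r1
    let x = cos(phi)*sqrt(r2)
    let y = sin(phi)*sqrt(r2)
    let z = sqrt(1.0 - r2)
    return u*x + v*y + w*z
}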

Thursday, September 11, 2014

A simple ambient occlusion ray trace in Swift

I got my hobby ray tracer producing images.   It is not true diffuse ambient occlusion, because the rays are uniform over the hemisphere rather than cosine distributed.  I include it as one big file because Xcode has been under rapid evolution lately.   Here's a 100-sample image, with every ray through the pixel center, so it is not antialiased.

[Image: the 100-sample ambient occlusion render.]

And here's all the code, cut-and-pastable into Xcode.  You can run the build product as an executable from the terminal (Xcode shows its path).


//
//  main.swift
//
//  Created by Peter Shirley on 7/20/14.
//  This work is in the public domain
//

import Foundation



// Anything a ray can hit.  hit() returns (did it hit, ray parameter t, normal at the hit point).
protocol hitable {
    func hit(r : ray, tmin : Double) -> (Bool, Double, vec3)
}


class sphere : hitable  {
    var center : vec3 = vec3(x: 0.0, y: 0.0, z: 0.0)
    var radius : Double  = 0.0
    func hit(r : ray, tmin : Double) -> (Bool, Double, vec3) {
       
        // Quadratic coefficients for |origin + t*direction - center|^2 = radius^2.
        let A : Double = dot(r.direction, r.direction)
        let B : Double = 2.0*dot(r.direction,r.origin - center)
        let C : Double = dot(r.origin - center,r.origin - center) - radius*radius
        let discriminant = B*B - 4.0*A*C
        if discriminant > 0 {
            var t : Double = (-B - sqrt(discriminant) ) / (2.0*A)
            if t < tmin {   // nearer root is too close; try the farther one
                t  = (-B + sqrt(discriminant)) / (2.0*A)
            }
            // the returned normal is unnormalized: hit point minus center
            return (t > tmin, t, r.location_at_parameter(t) - center)
        } else {
            return (false, 0.0, vec3(x:1.0, y:0.0, z:0.0))
        }
    }
   
}

class hitable_list : hitable  {
    var members : [hitable] = []
    func add(h : hitable) {
        members.append(h)
    }
    func hit(r : ray, tmin : Double) -> (Bool, Double, vec3) {
        var t_closest_so_far = 1.0e6 // not defined: Double.max
        var hit_anything : Bool = false
        var new_t : Double
        var hit_it : Bool
        var normal : vec3 = vec3(x:1.0, y:0.0, z:0.0)
        var new_normal : vec3

        for item in members {
            (hit_it, new_t, new_normal) = item.hit(r, tmin: tmin)
            if (hit_it && new_t < t_closest_so_far) {
                hit_anything = true
                t_closest_so_far = new_t
                normal = new_normal
            }
        }
        return (hit_anything, t_closest_so_far, normal)
    }
   
}



// Implicit type inference: Swift infers Double for these floating-point literals.
struct vec3 : Printable {
    var x = 0.0, y = 0.0, z = 0.0
    var description : String {
        return "(\(x), \(y), \(z))"
    }
}

func * (left: Double, right: vec3) -> vec3 {
    return vec3(x: left * right.x, y: left * right.y, z: left * right.z)
}

func / (left: Double, right: vec3) -> vec3 {
    return vec3(x: left / right.x, y: left / right.y, z: left / right.z)
}

func * (left: vec3, right: Double) -> vec3 {
    return vec3(x: left.x * right, y: left.y * right, z: left.z * right)
}

func / (left: vec3, right: Double) -> vec3 {
    return vec3(x: left.x / right, y: left.y / right, z: left.z / right)
}

func + (left: vec3, right: vec3) -> vec3 {
    return vec3(x: left.x + right.x, y: left.y + right.y, z: left.z + right.z)
}

func - (left: vec3, right: vec3) -> vec3 {
    return vec3(x: left.x - right.x, y: left.y - right.y, z: left.z - right.z)
}

func * (left: vec3, right: vec3) -> vec3 {
    return vec3(x: left.x * right.x, y: left.y * right.y, z: left.z * right.z)
}

func / (left: vec3, right: vec3) -> vec3 {
    return vec3(x: left.x / right.x, y: left.y / right.y, z: left.z / right.z)
}

func max(v: vec3) -> Double {
    if v.x >= v.y && v.x >= v.z {
        return v.x
    }
    else if v.y >= v.z {
        return v.y
    }
    else {
        return v.z
    }
}

func min(v: vec3) -> Double {
    if v.x <= v.y && v.x <= v.z {
        return v.x
    }
    else if v.y <= v.z {
        return v.y
    }
    else {
        return v.z
    }
}

func dot (left: vec3, right: vec3) -> Double {
    return left.x * right.x + left.y * right.y + left.z * right.z
}

func cross (left: vec3, right: vec3) -> vec3 {
    return vec3(x: left.y * right.z - left.z * right.y, y: left.z * right.x - left.x * right.z, z: left.x * right.y - left.y * right.x)
}

func unit_vector(v: vec3) -> vec3 {
    let length : Double = sqrt(dot(v, v))
    return vec3(x: v.x/length, y: v.y/length, z: v.z/length)
}

protocol Printable {
    var description: String { get }
}



struct ray  {
    var origin : vec3
    var direction : vec3
    func location_at_parameter(t: Double) -> vec3 {
        return origin + t*direction
    }
}




// leftover camera-basis scaffolding; unused below
var u : vec3 = vec3(x: 1.0, y: 0.0, z: 0.0);
var v : vec3 = vec3(x: 0.0, y: 1.0, z: 0.0);
var w : vec3 = cross(u, v)
var n : vec3 = u

var spheres : hitable_list = hitable_list()

var my_sphere1 : sphere = sphere()
my_sphere1.center = vec3(x: 0.0, y: 0.0, z: 0.0)
my_sphere1.radius = 1.0
spheres.add(my_sphere1)

var my_sphere2 : sphere = sphere()
my_sphere2.center = vec3(x: 2.0, y: 0.0, z: 2.0)
my_sphere2.radius = 1.0
spheres.add(my_sphere2)

var my_sphere3 : sphere = sphere()
my_sphere3.center = vec3(x: -2.0, y: 0.0, z: 2.0)
my_sphere3.radius = 1.0
spheres.add(my_sphere3)

var my_sphere4 : sphere = sphere()
my_sphere4.center = vec3(x: 0.0, y: -1001.0, z: 0.0)
my_sphere4.radius = 1000.0
spheres.add(my_sphere4)





let the_world : hitable = spheres

let ray_origin = vec3(x: 0.0, y: 2.5, z: -10.0)
let nx = 256
let ny = 256
let ns = 100

println("P3");
println ("\(nx) \(ny)");
println ("255");

for j in 0..<ny {
    for i in 0..<nx {
        var accum :vec3 =  vec3(x: 0.0, y: 0.0, z: 0.0)
        var red = 0, green = 0, blue = 0
        for s in 1...ns {
           
            var attenuation : vec3 =  vec3(x: 1.0, y: 1.0, z: 1.0)
           
            var not_yet_missed : Bool = true
            var ray_target : vec3 = vec3(x: -1.0 + 2.0*Double(i)/Double(nx),  y: 2.5 + -1.0 + 2.0*Double(ny-1-j)/Double(ny), z: -8.0)
            var the_ray : ray = ray(origin: ray_origin, direction: ray_target-ray_origin)
            while not_yet_missed {
                let hit_info  = the_world.hit(the_ray, tmin: 0.01)
               
                if hit_info.0 {
                    attenuation = 0.5*attenuation   // each bounce halves what can reach the sky
                    let new_ray_origin = the_ray.location_at_parameter(hit_info.1)
                    var scattered_direction : vec3 = hit_info.2
                    do  {
                        do {
                            // rejection sample: a point in the cube [-1,1]^3, kept if inside the unit sphere
                            scattered_direction = vec3(x: -1.0 + 2.0*drand48(), y: -1.0 + 2.0*drand48(), z: -1.0 + 2.0*drand48())
                        } while dot(scattered_direction,scattered_direction) > 1.0

                    } while dot(scattered_direction, hit_info.2) < 0.001   // retry until above the surface
                    the_ray = ray(origin: new_ray_origin, direction: scattered_direction)
                }
                else {
                    not_yet_missed = false
                }
               
               
            }
            accum = accum + attenuation
        }
        red = Int(255.5*accum.x/Double(ns))
        green = Int(255.5*accum.y/Double(ns))
        blue = Int(255.5*accum.z/Double(ns))
       
        println("\(red) \(green) \(blue)");
    }
}
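
To make an image, run the build product from the terminal and redirect standard output into a .ppm file, along these lines (the executable name depends on your Xcode target, so treat this as a sketch):

./raytracer > image.ppm

Any viewer or converter that understands plain PPM (ImageMagick, for example) can display the result.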











Wednesday, September 10, 2014

iPhone 6 screen sizes

One of the bigger design pains in our necks is that iPhone 4 screens are 3:2 and iPhone 5 screens are 16:9, while the sensors in the iPhone (and almost all other portables) are 4:3.   The design considerations for a 4:3 image on those two screen shapes are very different, and Apple has solved that nicely with its own photo app.  So I was interested to see what the iPhone 6 aspect ratio is (the number of pixels doesn't matter that much for writing the software, other than memory pressure).

It turns out the new phones are about 16:9, but in confirming that I was interested to see that it is not always exact.  Those integers are pesky, and thank goodness square pixels seem to be here to stay.  The exact numbers (from this chart) are:

iPhone 5 models
  •  1136-by-640-pixel
  •  326 ppi
  •  15.975:9 aspect ratio

iPhone 6
  •  1334-by-750-pixel
  •  326 ppi
  •  16.008:9 aspect ratio

iPhone 6 Plus
  •  1920-by-1080-pixel
  •  401 ppi
  •  16:9 aspect ratio
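
As a quick check on those numbers: 1136/640 = 1.775, and 1.775 × 9 = 15.975, hence 15.975:9; 1334/750 ≈ 1.7787, which times 9 gives 16.008, hence 16.008:9; and 1920/1080 reduces exactly to 16:9.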

I doubt that being exactly 16:9 makes any real difference in practice, but I'll be curious to see if 1080p content looks noticeably better on the 6 Plus.  Such content looks much better on 720p screens than I would expect.



A side note: I have heard a lot this morning that Apple has not added a "revolutionary" product with the watch, "unlike the last one, the iPad".  I recall the disappointment that the iPad was just a big iPod, which is why it was revolutionary in practice: a big-screen information appliance.  I think the watch will be even more huge and is revolutionary, and I have bet a bottle of bourbon with John Regehr on this (and have not sold my Apple stock).

Monday, September 8, 2014

List of hitable in Swift ray tracer

I added a list of hitables.  This uses Swift's array, which happily has an append function.  I decided to eliminate the "normal" member function and add the normal to the tuple returned by the hit() function (as predicted by a sharp commenter on the last post).  Other than the tuple, it doesn't look very different from the analogous function in C++.  I am still a rookie on declaration and initialization, so this may be verbose.

class hitable_list : hitable  {
    var members : [hitable] = []
    func add(h : hitable) {
        members.append(h)
    }
    func hit(r : ray, tmin : Double) -> (Bool, Double, vec3) {
        var t_closest_so_far = 1.0e6 // not defined: Double.max
        var hit_anything : Bool = false
        var new_t : Double
        var hit_it : Bool
        var normal : vec3 = vec3(x:1.0, y:0.0, z:0.0)
        var new_normal : vec3

        for item in members {
            (hit_it, new_t, new_normal) = item.hit(r, tmin: tmin)
            if (hit_it && new_t < t_closest_so_far) {
                hit_anything = true
                t_closest_so_far = new_t
                normal = new_normal
            }
        }
        return (hit_anything, t_closest_so_far, normal)
    }
}
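
A quick usage sketch (hypothetical, and it assumes the sphere class from the last post has been updated to return the normal as the third element of its tuple):

var world = hitable_list()
var s = sphere()
s.center = vec3(x: 0.0, y: 0.0, z: 0.0)
s.radius = 1.0
world.add(s)

let r = ray(origin: vec3(x: 0.0, y: 0.0, z: -5.0), direction: vec3(x: 0.0, y: 0.0, z: 1.0))
let info = world.hit(r, tmin: 0.001)
println("hit = \(info.0), t = \(info.1)")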


Sunday, September 7, 2014

Inheritance for a Swift ray tracer

I just did a test to see how class hierarchies are set up in Swift.  The standard design in a ray tracer is that all geometric models in the scene have a "hit" function, and the program is blind to the black box that can answer "do I hit you?".   This design was first popularized (and as far as I know, first implemented) by David Kirk and Jim Arvo back during the Cold War days ("The Ray Tracing Kernel," by David Kirk and James Arvo, in Proceedings of Ausgraph '88, Melbourne, Australia, July 1988).  I found the Swift syntax pretty yummy overall for my "abstract class" (in Swift lingo, a "protocol") "hitable".  Note that picking a name for this is often painful; graphics people often say "object" (too broad), and I used to say "surface" (too narrow, as volumes are sometimes subclasses), so I now name it for what it can be asked.  The sphere class is the only hitable so far:

import Foundation

protocol hitable {
    func hit(r : ray, tmin : Double) -> (Bool, Double)
    func normal(p: vec3) -> vec3
}

class sphere : hitable  {
    var center : vec3 = vec3(x: 0.0, y: 0.0, z: 0.0)
    var radius : Double  = 0.0
    func hit(r : ray, tmin : Double) -> (Bool, Double) {
      
        // Quadratic coefficients for |origin + t*direction - center|^2 = radius^2.
        let A : Double = dot(r.direction, r.direction)
        let B : Double = 2.0*dot(r.direction,r.origin - center)
        let C : Double = dot(r.origin - center,r.origin - center) - radius*radius
        let discriminant = B*B - 4.0*A*C
        if discriminant > 0 {
            var t : Double = (-B - sqrt(discriminant) ) / (2.0*A)
            if t < tmin {   // nearer root is too close; try the farther one
                t  = (-B + sqrt(discriminant)) / (2.0*A)
            }
            return (t > tmin, t)
        } else {
            return (false, 0.0)
        }
    }
  
    func normal(p: vec3) -> vec3 {
        return p - center
    }
  
}


Note that the hit function returns a tuple, so we don't need to define some struct to store stuff.  I've gone with not computing the normal at the time of the hit, as I'm a fan of computing things when you need them, but that is just taste.  We need Foundation to get sqrt().

Now in main we have (and my initialization is probably clumsy):

let my_sphere = sphere()
my_sphere.center = vec3(x: 0.0, y: 0.0, z: 0.0)
my_sphere.radius = 2.0
let the_world : hitable = my_sphere

let along_z = ray(origin: vec3(x: 0.0, y: 0.0, z: -10.0), direction: vec3(x: 0.0, y: 0.0, z: 1.0))
let hit_info  = the_world.hit(along_z, tmin: 100.0)
println("hit =\(hit_info.0), t = \(hit_info.1)")



Which prints out:
hit =false, t = 12.0
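
As an aside, Swift can also destructure that tuple into named constants, which reads a bit better than the .0 and .1 indices:

let (did_hit, t_hit) = the_world.hit(along_z, tmin: 100.0)
println("hit =\(did_hit), t = \(t_hit)")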

All in all, I really like how clean Swift makes this; so far it is my favorite language for writing a ray tracer (main complaint: it is not yet portable, and I don't know what Apple's plan for that is).  I have no idea if it will actually be "swift", nor whether I need to make changes to make it swifter, but that is for the future.
 

Wednesday, September 3, 2014

Our new Photo App Extension feature in Pic!

Earlier this year Apple announced that you would be able to use some third-party components in other apps, with keyboards getting a lot of ink.  Photo-editing app extensions caught our eye, and we looked into making Pic! also operate as an app extension.  To me, Apple's app extensions are a particular kind of plugin that must also be a stand-alone app.  Trying the new language Swift was also attractive to us.  We thought the timing was tight, but we had a great programmer, Andrew Cobb, working with us, who had a month before he left for graduate school.  He actually managed to finish the app extension in Swift, some of which he will later make available as an open source project.

We now have the Swift app extension version working, and it should be available on or soon after the day iOS 8 goes live.

To run a photo app extension, you run an app that has photo editing and allows app extensions.  Here is an example in the Xcode iOS 8 simulator, where we run the Apple Photos app, select an image, and then hit the "edit" button.
Here we could use the Apple editing options at the bottom.  Instead we hit the upper left button.

This brings up the available photo editors that can be used in the app, and we choose ours.

Unlike the stand-alone Pic! app, there is a bar at the top that the Photos app owns; we have no flexibility there.   Note the "done" button.  In our standalone app, you click through three screens and then you are done.  In the app extension, if the user hits done, you have to bail out with some version of the image.  We use the one the user last clicked, indicated by the box around the upper left.

Still in our app extension, we make two more choices and we have our final image.  Now the user can say yes, they are done, and exit our app extension with saved changes, or say cancel and get out with no changes.  The back button at the bottom is ours and can return to change choices.

We say done and return to the editor.  Now our app extension is out of the picture and we are in just the Photos app.  We could still cancel.  Or we could run Pic! on its own output.  Or we could run another app extension.

We say done, and we are back to browsing in Photos with our saved changes.

All in all, now that we have done one, I'm a big fan of app extensions.   There are actually more clicks here than just using our app directly for the same task.  However, the real potential is running multiple photo app extensions in series.  I haven't done that yet, because only ours is available to us, but I look forward to trying it when the real ecosystem is here (next week?).