Tuesday, September 30, 2014

Greys in skies

I've been messing with sky photos to try to make them look better algorithmically, and realized they have more grey in them than I expected (I thought my histograms of saturation were wrong at first).  Here's an example that shows a grey region between blue and orange (full photo at John Roever's flickr):
And one over the ocean (full photo at this Carmel page):
And greys show up even when there is only the slightest hint of orange (from here), although here there is a slight green bias:
So greys are just more common than I realized.
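Since the surprise came from saturation histograms, here is a minimal sketch of the kind of saturation measure I have in mind (my own throwaway helper, not the code I actually used), in the same Swift style as the ray tracer posts below:

// Hedged sketch: HSV-style saturation of an RGB pixel with components in [0,1].
// The grey-ish sky regions are the pixels where this value is small.
func saturation(r: Double, g: Double, b: Double) -> Double {
    let maxc = max(r, max(g, b))
    let minc = min(r, min(g, b))
    if maxc <= 0.0 {
        return 0.0   // pure black: call it grey
    }
    return (maxc - minc) / maxc
}

// a washed-out sky blue is barely more saturated than grey
println(saturation(0.62, 0.66, 0.70))   // about 0.11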

Wednesday, September 24, 2014

More real-world lighting that looks like a graphics bug

Here is a scene from my house.  No nightlights or floor lighting.  These pictures are not exaggerated.
That looks like there is a light!
The light actually comes from the window, but it still looks like there is a light under the couch.


Another view.  The sun on the couch is just very, very bright.

Tuesday, September 23, 2014

A curious search fail

I have just moved from a Windows Phone to an iPhone, and I am super-impressed with both devices' hardware and software.  However, I am equally surprised at the failure modes of Apple's and Microsoft's app stores.  We have a Windows Phone app called "Pando".  If you search for "Pando" you don't find it.  Apple manages to fail just as impressively on our app "Pic!", which one cannot find by searching for its title.  In each case, you can find it by searching for "Pixio", our partner whose app store portal we use (Pixio had 2 of the first 300 apps on the Apple App Store, and we love the name!).  If anybody knows what channel to use to submit a bug report for either of these, let me know.  App discovery is hard enough if you ARE in the index :)



Smartphone camera ratings

Mike Herf pointed me to a really serious evaluation of the iPhone 6, where it came out the current leader (dxomark link).

I collected their numeric ratings from their various pages (take them with a grain of salt, as any single number has issues -- look at their detailed tests before you buy anything based on them):

  1. 86  iPhone 6
  2. 79 Galaxy S5 
  3. 79 Galaxy S4
  4. 76 Sony Xperia Z1
  5. 76 iPhone 5s
  6. 74 Nokia Lumia 1020
  7. 73 LG G2
  8. 72 iPhone 5
  9. 63 HTC One M8   
The bottom line appears to be that everybody had caught up to the 5s, but Apple has created a gap again.  

Monday, September 22, 2014

A Rebuttal to the Daily Mail

Having just gotten an app out to beat the iPhone 6 launch (Pic!  try it!), I am catching up on all the reading I haven't done the last few months.   People who know me well will not be surprised that the Daily Mail is my favorite paper.  I try to be intellectual enough to read The Guardian and the FT, but the Mail is my kind of paper.   I was pleased to see an app by my buds at Pixio featured in the Mail during my news blackout.  Further, it got trashed by the Mail, which I find often says good things about a person, place, or thing.  I actually had never used the app, so I bought it for 99 cents US, and in fact I think it is a nice little app that you can use to teach yourself how to use an abacus.   Further, it was one of the first 300 apps on the store!  Not sure what the Mail has against the abacus; maybe some leftover animosity from the Roman Empire (being stuck on the wrong side of that wall would make anyone mad).  It's cheaper and easier than going out and buying a physical one, and the counters on it make it obvious how it works, unlike the real ones, which I never understood before today.   So you can make yourself smarter for 99 cents, or you can buy a Sunday Daily Mail for $2 and make yourself dumber reading stories about Honey Boo Boo and seeing pictures of buxom Oktoberfest waitresses.  Or you can spend $3 and come out exactly as smart as you were before you started.  That being said, I will use the language I have learned reading the Mail and tell them that on this issue they are a bunch of stupid gits who should probably be using two cans and a string instead of reviewing apps for smartphones.

While I am talking about Pixio, let me tell you that its co-founder, Lorenzo Swank, got in line 33 hours before the iPhone 6 was available, and he got one.  Exactly 33 hours later he dropped it in the toilet.  I think he is probably the first person on Earth to test an iPhone 6 with a toilet dunk, with results as expected (it does come with a prize: paying Apple more money).  Maybe the Daily Mail should do a story on that, because things involving idiocy and bathrooms seem to be more up their alley.  Like whatever happened to Honey Boo Boo, for example...

More iPhone6 tests: app using the camera

As I thought about how almost absurdly good the iPhone 6 is in low light, I became concerned this was only available in the Apple Camera app.  I just took pictures of the inside of my cabinet (it's quite dark) and did a side-by-side test of the Apple Camera app and our app Pic!

Taken inside the Apple Camera app

Taken inside Pic!

Thankfully, whatever low-light mojo is going on under the hood is the default for 3rd-party developers.  For fun I also tried posterizing the image to see how the noise looked, and the "random dithered" nature of the noise has a cool look to my eye.

Posterization inside Pic!
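For the curious, the posterization here is nothing fancy: each channel gets quantized down to a few levels.  A minimal sketch of that idea (my own helper, not the Pic! code):

// Hedged sketch: posterize one channel value in 0...255 down to a few bands.
func posterize(value: Int, levels: Int) -> Int {
    let bandWidth = 256 / levels          // size of each quantization band
    let band = value / bandWidth          // which band this value falls in
    return min(band * bandWidth, 255)     // snap to the bottom of the band
}

println(posterize(200, 4))   // 192, with 4 levels (bands of 64)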

Just so my blog doesn't devolve into an Apple fanboy festival, note that as an app extension our app often crashes on the real phone but is fine in all the simulators.  Not surprising for a new feature (and in our case a new language: Swift), but it appears this is Apple's bug.  We eagerly await iOS 8.0.1.  The app seems to work great as a stand-alone app, though, so it's far from a wipe-out.  The other two apps with photo extensions we are aware of (Camera+ and Fragment) are faring a little better, and it would be nice to know whether they are written in Swift or Objective-C.  Further, if you search the App Store for "Pic!" it doesn't find our app.  Talk about an app discovery problem!  So search for "Pixio" or use this link.

Sunday, September 21, 2014

A test on moving objects and the iPhone 6+

Here is a low-light moving object (2 on the stand mixer scale).  Impressive!  I will not include my real camera until I find the charger :)

iPhone 5c

iPhone 6+

And here are some 200x200 pixel close-ups, with the 5c on the left of each.
Here is the post on the right:
Moving object (the farthest white part on the left of the paddle):
Clearly the 6+ is way better.  But it is interesting that the noise is really at the pixel level; with smaller noise blobs you get a better dithering effect.  As a Monte Carlo guy, I like the path-tracing look, and with so many pixels it will work.  Whatever is going on under the hood, it is impressive, because as far as I know the sensors are not that different.

Test of the iPhone 6+ camera in low light

I have been out of the iPhone world (as an everyday user) for a year as I tried the Android and Windows phones.   I just got a 6+ and have heard rumors it is a serious contender in the camera world.   So I decided to try it in the conditions where I break out my real camera: low light.  I also tried an iPhone 5c because I had access to one (I would love to see how the 5s fares, which I am sure we will see from other users).    The test scene was subjectively about as dark a scene as I would photograph that's not a night image, and I adjusted this picture to "seem" like what I saw.   Basically a tavern illumination level.
The specs of the cameras I used were:
  • iPhone 5c, 3264x2448 pixels, 1/3.2" sensor (1.4um pixel size)
  • iPhone 6+, 3264x2448 pixels, 1/3" sensor (1.5um pixel size)
  • Canon G1X, 4352x2904 pixels, 1.5" sensor (4.3um pixel size)
Here is a nice image from gizmag to show how ginormous that Canon sensor is (it's almost as big as the sensors in serious DSLR cameras now):
So the area of the big pixels is about 8 times that of the phone pixels ((4.3/1.5)^2 is about 8.2).  If pixel size tells most of the story at low light levels, as I have always said, we should expect the phones to not do nearly as well.  But to listen to the interwebs, the phones are now competitive.  Let's see in my test:
iPhone 5c

iPhone 6+
Canon G1X
 To my surprise, they all did quite well on color.  The tone mapping does crank up the brightnesses and make the scene appear brighter, but that is not wrong; it's a matter of taste.  The blurry foreground of the Canon shows it was opening up its aperture to get more light, so more than sensor size should come into play.  Let's look closer to see if the phones can possibly compete with the Canon's big sensor/aperture working in concert.  Here are 100x100 pixels of each near where things are in focus.
Left to right 5c, 6+, Canon G1X

My only conclusion is WOW.  Apple really has a home run here.  Granted, the Canon is a few years old and I skipped the 5s, so maybe this is old news and the 6 isn't much better than the 5s (and other cameras of that generation are competitive from what I have seen), but I am amazed Apple could pull this off.  I am looking forward to seeing how the other manufacturers respond (for example, if Nokia, which has a great camera track record, makes a phone with a much bigger sensor, can they do even better?), and how the DSLR makers do as this technology makes its way to the high end.

I suppose I have a more personal conclusion.  If I don't need defocus blur effects, the Canon is going in the drawer for now.

Wednesday, September 17, 2014

Our Pic! app with app extension is now live on the app store

Our update of our Pic! app to include an app extension (which allows you to fire it up as a plugin in the Camera and Photos apps) just went live in the App Store.  The iOS 8 download is live too.  To my surprise, this is all basically working on the first go; good job Apple!  https://itunes.apple.com/us/app/pic!/id906938836?mt=8

Tuesday, September 16, 2014

The color crimson versus other reds in school colors

In the creation of FanPix, I had to acquire the school colors and logos for many teams.  I expected it to be hard, but it turns out most universities have large branding departments and give a lot of very specific information about their visual branding.  For example, this site at Indiana University includes the following:
The HEX value in particular you can cut and paste into Photoshop and many APIs.  (Note: I often get asked if I am allowed to use these colors and logos.  We don't charge for or have ads in FanPix, and the IP lawyer I consulted said this is probably fair use, because a photo of a real logo painted on a face seems to be, and we are not costing the trademark owners money the way we would be if we gave away free t-shirts.)    The word "crimson" comes up a decent amount in sports, and one has to wonder why this rarely used color name is used at all.  The word is one of the 954 most common color names at xkcd, where it is around #8c000f.  If we consult wikipedia, we see crimson has been used in English for over 600 years.  That same article shows several college teams that use "crimson" as one of their colors.   The six schools are shown here (the colors are from their various branding sites):
Interestingly, Utah and Kansas appear to have a classic red but call it "crimson".  The xkcd crimson gives this:
 I looked through some other logos and found these darkish reds not in the wikipedia list:
With xkcd crimson added we see that maybe Cornell and Montana are not really crimson:
In reading order, the universities' own names for their colors are: carnelian (Cornell Red), cardinal, maroon, maroon, crimson (wikipedia missed that one!), maroon, and garnet (not common enough to have its own wikipedia page!).  Note it is not our imagination that many teams have some red as a color.
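As an aside on those hex codes, here is a minimal Swift sketch (my own helper, not part of FanPix) of turning a branding string like the xkcd crimson #8c000f into 0-255 RGB components:

import Foundation

// Hedged sketch: parse a "#rrggbb" branding string into 0-255 RGB components.
func rgbFromHex(hex: String) -> (Int, Int, Int) {
    let scanner = NSScanner(string: hex)
    scanner.scanString("#", intoString: nil)   // skip a leading '#', if any
    var value: UInt32 = 0
    scanner.scanHexInt(&value)                 // e.g. 0x8c000f
    let red   = Int((value >> 16) & 0xff)
    let green = Int((value >> 8) & 0xff)
    let blue  = Int(value & 0xff)
    return (red, green, blue)
}

let (r, g, b) = rgbFromHex("#8c000f")
println("\(r) \(g) \(b)")   // 140 0 15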

One thing is for sure from this: I am saying "maroon" and "red" from now on!   After looking at so many team branding sites, my award for best goes to Notre Dame, which has fully modernized its treatment of ND gold: "Electronic displays (LCD screens, CRT monitors, etc.) may display colors slightly different than in print. Gold is a particularly challenging color. As such, an alternate gold has been provided for electronic applications."   North Carolina has the most "have to get it just right" color: Carolina blue.  My award for the word goes to my own Reed College, which to my surprise has a color: "Reed Red", or if we go to the wikipedia page it is richmond rose, whatever that is!   That being said, Reed has the best seal, unofficial, though to Reed's credit it is available in the official school bookstore.

Friday, September 12, 2014

Random diffuse rays

In my lazy coding for the path tracer in the last post, I chose a ray that is uniformly random in direction but above the surface.  I used a pretty standard trick here: first I choose a point in the cube [-1,1]^3, then I see if it is inside the sphere r < 1.

do {
      // pick a random point in the cube [-1,1]^3
      scattered_direction = vec3(x: -1.0 + 2.0*drand48(), y: -1.0 + 2.0*drand48(), z: -1.0 + 2.0*drand48())
 } while dot(scattered_direction,scattered_direction) > 1.0   // reject points outside the unit sphere


Then I stick a loop around that and keep trying until

while dot(scattered_direction, hit_info.2) < 0.001   // accept only directions above the surface

Here hit_info.2 is the surface normal.   I could go in and produce correct diffuse rays, but that would involve coordinate systems and all that.  Instead I wondered if I could use the trick shown on the right below:
Left: choose a point in the cube, keep it if it is in the sphere, and keep that if it is above the normal.  Right: pick a point in the cube (not shown), keep it if it is in the sphere, and that is it (they are all above the normal).
It's not obvious to me that this is Lambertian, but it might be.  It's probably at least closer than the one on the left.   I dumped that code in (excuse the lazy reuse of variable names):

 do {
      // pick a random point in the unit sphere, as before
      scattered_direction = vec3(x: -1.0 + 2.0*drand48(), y: -1.0 + 2.0*drand48(), z: -1.0 + 2.0*drand48())
 } while dot(scattered_direction,scattered_direction) > 1.0
 // subtracting the tangent point (which is -n) adds the unit normal, so the
 // direction points into the unit sphere sitting on top of the hit point
 let sphere_tangent_point = -1.0 * unit_vector(hit_info.2)
 scattered_direction = scattered_direction - sphere_tangent_point


And this yields:

uniform rays
diffuseish rays

It looks like darker shadows, which makes sense: the rays tend to go straight up along the normal.  It would require some calculus to see if the rays are Lambertian, and this was an exercise in avoiding work, so I am not doing that.  My money is on it being more oriented to the normal than true diffuse, but not bad.
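For reference, the "coordinate systems and all that" version I was avoiding is not that long.  Here is a minimal sketch of a properly cosine-weighted (Lambertian) direction, assuming the vec3, dot, cross, and unit_vector helpers (and the Foundation import) from the full listing in the post below; I have not wired it into the renderer:

// Hedged sketch: cosine-weighted scatter direction about the surface normal n.
func random_cosine_direction(n: vec3) -> vec3 {
    // build an orthonormal basis (u, v, w) with w along the normal
    let w = unit_vector(n)
    let a = abs(w.x) > 0.9 ? vec3(x: 0.0, y: 1.0, z: 0.0) : vec3(x: 1.0, y: 0.0, z: 0.0)
    let v = unit_vector(cross(w, a))
    let u = cross(w, v)
    // sample so the probability density is cos(theta)/pi over the hemisphere
    let r1 = drand48()
    let r2 = drand48()
    let phi = 2.0 * M_PI * r1
    let x = cos(phi) * sqrt(r2)
    let y = sin(phi) * sqrt(r2)
    let z = sqrt(1.0 - r2)
    return x*u + y*v + z*w
}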

Thursday, September 11, 2014

A simple ambient occlusion ray trace in Swift

I got my hobby ray tracer producing images.   It is not true diffuse ambient occlusion because the rays are uniform on the hemisphere rather than having a cosine distribution.  I include it as one big file because xcode has been under rapid evolution lately.   Here's a 100-sample image.  All samples are at the pixel center, so it is not antialiased.


And here's all the code, cut-and-pastable into xcode.  You can run it as an executable from the terminal (xcode shows the path, as shown).


//
//  main.swift
//
//  Created by Peter Shirley on 7/20/14.
//  This work is in the public domain
//

import Foundation



protocol hitable {
    func hit(r : ray, tmin : Double) -> (Bool, Double, vec3)
}


class sphere : hitable  {
    var center : vec3 = vec3(x: 0.0, y: 0.0, z: 0.0)
    var radius : Double  = 0.0
    func hit(r : ray, tmin : Double) -> (Bool, Double, vec3) {
       
        var A : Double = dot(r.direction, r.direction)
        var B : Double = 2.0*dot(r.direction,r.origin - center)
        var C : Double = dot(r.origin - center,r.origin - center) - radius*radius
        let discriminant = B*B - 4.0*A*C
        if discriminant > 0 {
            var t : Double = (-B - sqrt(discriminant) ) / (2.0*A)
            if t < tmin {
                t  = (-B + sqrt(discriminant)) / (2.0*A)
            }
            return (t > tmin, t, r.location_at_parameter(t) - center)
        } else {
            return (false, 0.0, vec3(x:1.0, y:0.0, z:0.0))
        }
    }
   
}

class hitable_list : hitable  {
    var members : [hitable] = []
    func add(h : hitable) {
        members.append(h)
    }
    func hit(r : ray, tmin : Double) -> (Bool, Double, vec3) {
        var t_closest_so_far = 1.0e6 // not defined: Double.max
        var hit_anything : Bool = false
        var new_t : Double
        var hit_it : Bool
        var normal : vec3 = vec3(x:1.0, y:0.0, z:0.0)
        var new_normal : vec3

        for item in members {
            (hit_it , new_t, new_normal) = item.hit(r, tmin: tmin)
            if (hit_it && new_t < t_closest_so_far) {
                hit_anything = true
                t_closest_so_far = new_t
                normal = new_normal
            }
        }
        return (hit_anything, t_closest_so_far, normal)
    }
   
}



// implicit type inference.  Swift when in doubt assumes Double
struct vec3 : Printable {
    var x = 0.0, y = 0.0, z = 0.0
    var description : String {
        return "(\(x), \(y), \(z))"
    }
}

func * (left: Double, right: vec3) -> vec3 {
    return vec3(x: left * right.x, y: left * right.y, z: left * right.z)
}

func / (left: Double, right: vec3) -> vec3 {
    return vec3(x: left / right.x, y: left / right.y, z: left / right.z)
}

func * (left: vec3, right: Double) -> vec3 {
    return vec3(x: left.x * right, y: left.y * right, z: left.z * right)
}

func / (left: vec3, right: Double) -> vec3 {
    return vec3(x: left.x / right, y: left.y / right, z: left.z / right)
}

func + (left: vec3, right: vec3) -> vec3 {
    return vec3(x: left.x + right.x, y: left.y + right.y, z: left.z + right.z)
}

func - (left: vec3, right: vec3) -> vec3 {
    return vec3(x: left.x - right.x, y: left.y - right.y, z: left.z - right.z)
}

func * (left: vec3, right: vec3) -> vec3 {
    return vec3(x: left.x * right.x, y: left.y * right.y, z: left.z * right.z)
}

func / (left: vec3, right: vec3) -> vec3 {
    return vec3(x: left.x / right.x, y: left.y / right.y, z: left.z / right.z)
}

func max(v: vec3) -> Double {
    // largest of the three components (>= so ties are handled)
    if v.x >= v.y && v.x >= v.z {
        return v.x
    }
    else if v.y >= v.z {
        return v.y
    }
    else {
        return v.z
    }
}

func min(v: vec3) -> Double {
    // smallest of the three components (<= so ties are handled)
    if v.x <= v.y && v.x <= v.z {
        return v.x
    }
    else if v.y <= v.z {
        return v.y
    }
    else {
        return v.z
    }
}

func dot (left: vec3, right: vec3) -> Double {
    return left.x * right.x + left.y * right.y + left.z * right.z
}

func cross (left: vec3, right: vec3) -> vec3 {
    return vec3(x: left.y * right.z - left.z * right.y, y: left.z * right.x - left.x * right.z, z: left.x * right.y - left.y * right.x)
}

func unit_vector(v: vec3) -> vec3 {
    var length : Double = sqrt(dot(v, v))
    return vec3(x: v.x/length, y: v.y/length, z: v.z/length)
}

protocol Printable {
    var description: String { get }
}



struct ray  {
    var origin : vec3
    var direction : vec3
    func location_at_parameter(t: Double) -> vec3 {
        return origin + t*direction
    }
}




var u : vec3 = vec3(x: 1.0, y: 0.0, z: 0.0);
var v : vec3 = vec3(x: 0.0, y: 1.0, z: 0.0);
var w : vec3 = cross(u, v)
var n : vec3 = u

var spheres : hitable_list = hitable_list()

var my_sphere1 : sphere = sphere()
my_sphere1.center = vec3(x: 0.0, y: 0.0, z: 0.0)
my_sphere1.radius = 1.0
spheres.add(my_sphere1)

var my_sphere2 : sphere = sphere()
my_sphere2.center = vec3(x: 2.0, y: 0.0, z: 2.0)
my_sphere2.radius = 1.0
spheres.add(my_sphere2)

var my_sphere3 : sphere = sphere()
my_sphere3.center = vec3(x: -2.0, y: 0.0, z: 2.0)
my_sphere3.radius = 1.0
spheres.add(my_sphere3)

var my_sphere4 : sphere = sphere()
my_sphere4.center = vec3(x: 0.0, y: -1001.0, z: 0.0)
my_sphere4.radius = 1000.0
spheres.add(my_sphere4)





let the_world : hitable = spheres

let ray_origin = vec3(x: 0.0, y: 2.5, z: -10.0)
let nx = 256
let ny = 256
let ns = 100

println("P3");
println ("\(nx) \(ny)");
println ("255");

for j in 0...255 {
    for i in 0...255 {
        var accum :vec3 =  vec3(x: 0.0, y: 0.0, z: 0.0)
        var red = 0, green = 0, blue = 0
        for s in 1...ns {
           
            var attenuation : vec3 =  vec3(x: 1.0, y: 1.0, z: 1.0)
           
            var not_yet_missed : Bool = true
            var ray_target : vec3 = vec3(x: -1.0 + 2.0*Double(i)/Double(nx),  y: 2.5 + -1.0 + 2.0*Double(255-j)/Double(ny), z: -8.0)
            var the_ray : ray = ray(origin: ray_origin, direction: ray_target-ray_origin)
            while not_yet_missed {
                let hit_info  = the_world.hit(the_ray, tmin: 0.01)
               
                if hit_info.0 {
                    attenuation = 0.5*attenuation
                    let new_ray_origin = the_ray.location_at_parameter(hit_info.1)
                    var scattered_direction : vec3 = hit_info.2
                    do  {
                        do {
                            // pick a random point in the cube [-1,1]^3 ...
                            scattered_direction = vec3(x: -1.0 + 2.0*drand48(), y: -1.0 + 2.0*drand48(), z: -1.0 + 2.0*drand48())
                        } while dot(scattered_direction,scattered_direction) > 1.0   // ... keep only points inside the unit sphere ...

                    } while dot(scattered_direction, hit_info.2) < 0.001   // ... and only directions above the surface
                    the_ray = ray(origin: new_ray_origin, direction: scattered_direction)
                }
                else {
                    not_yet_missed = false
                }
               
               
            }
            accum = accum + attenuation
        }
        red = Int(255.5*accum.x/Double(ns))
        green = Int(255.5*accum.y/Double(ns))
        blue = Int(255.5*accum.z/Double(ns))
       
        println("\(red) \(green) \(blue)");
    }
}


Wednesday, September 10, 2014

iPhone 6 screen sizes

One of the bigger design pains in our neck is that iPhone 4 screens are 3:2 and iPhone 5 screens are 16:9.  The sensors in the iPhone (and almost all other portables) are 4:3.   The design considerations for a 4:3 image on a 3:2 screen and on a 16:9 screen are very different, and Apple has solved that nicely with its own photo app.  So I was interested to see what the iPhone 6 aspect ratio is (the number of pixels doesn't matter that much for writing the software, other than memory pressure).

It turns out the new phones are about 16:9, but in confirming that I was interested to see it is not always exact.  Those integers are pesky and thank goodness square pixels seem to be here to stay.  The exact numbers (from this chart) are:

iPhone 5 models
  •  1136-by-640-pixel
  •  326 ppi
  • 15.975:9 aspect ratio

iPhone 6
  •  1334-by-750-pixel
  •  326 ppi
  • 16.008:9 aspect ratio

iPhone 6 plus
  •  1920-by-1080-pixel
  •  401 ppi
  • 16:9 aspect ratio
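Here is the arithmetic behind those "almost 16:9" numbers as a throwaway Swift sketch (the labels are just strings I typed in):

// Hedged sketch: express width:height as n:9 so the near-16:9 gap is visible.
let screens = [("iPhone 5", 1136.0, 640.0),
               ("iPhone 6", 1334.0, 750.0),
               ("iPhone 6 plus", 1920.0, 1080.0)]
for (name, w, h) in screens {
    let n = 9.0 * w / h
    println("\(name): \(n):9")   // 15.975, 16.008, 16.0
}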

I doubt that being exactly 16:9 makes any real difference in practice, but I'll be curious to see if 1080p content looks noticeably better on the 6 plus.  Such content already looks much better on 720p screens than I would expect.



A side note: I have heard a lot this morning that Apple has not added a "revolutionary" product with the watch, "unlike the last one, the iPad."  I recall the disappointment that the iPad was just a big iPod, which is exactly why it was revolutionary in practice: a big-screen information appliance.  I think the watch will be even more huge and is revolutionary, and I have bet a bottle of bourbon with John Regehr on this (and have not sold my Apple stock).

Monday, September 8, 2014

List of hitable in Swift ray tracer

I added a list of hitables.  This uses the built-in array in Swift, which happily has an append function.  I decided to eliminate the "normal" member function and add the normal to the tuple returned by the hit() function (as predicted by a sharp commenter on the last post).  Other than the tuple, it doesn't look very different from the analogous function in C++.  I am still a rookie on declaration and initialization, so this may be verbose.

class hitable_list : hitable  {
    var members : [hitable] = []
    func add(h : hitable) {
        members.append(h)
    }
    func hit(r : ray, tmin : Double) -> (Bool, Double, vec3) {
        var t_closest_so_far = 1.0e6 // not defined: Double.max
        var hit_anything : Bool = false
        var new_t : Double
        var hit_it : Bool
        var normal : vec3 = vec3(x:1.0, y:0.0, z:0.0)
        var new_normal : vec3

        for item in members {
            (hit_it , new_t, new_normal) = item.hit(r, tmin: tmin)
            if (hit_it && new_t < t_closest_so_far) {
                hit_anything = true
                t_closest_so_far = new_t
                normal = new_normal
            }
        }
        return (hit_anything, t_closest_so_far, normal)
    }
}
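A quick usage sketch (assuming a sphere class whose hit() has been updated to return the three-element tuple; the scene and numbers here are just made up for illustration):

// Hedged sketch: build a small scene and query the closest hit.
var world = hitable_list()

var s1 = sphere()
s1.center = vec3(x: 0.0, y: 0.0, z: 0.0)
s1.radius = 1.0
world.add(s1)

var s2 = sphere()
s2.center = vec3(x: 0.0, y: -1001.0, z: 0.0)   // big "ground" sphere
s2.radius = 1000.0
world.add(s2)

let r = ray(origin: vec3(x: 0.0, y: 0.0, z: -10.0), direction: vec3(x: 0.0, y: 0.0, z: 1.0))
let (hit, t, normal) = world.hit(r, tmin: 0.001)
println("hit = \(hit), t = \(t)")   // should print hit = true, t = 9.0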


Sunday, September 7, 2014

Inheritance for a Swift ray tracer

I just did a test to see how class hierarchies are set up in Swift.  The standard design in a ray tracer is that all geometric models in the scene have a "hit" function, and the program is blind to the black box that can answer "do I hit you?".   This design was first popularized (and as far as I know it was the first to be implemented) by David Kirk and Jim Arvo back during the Cold War days ("The Ray Tracing Kernel," by David Kirk and James Arvo, in Proceedings of Ausgraph '88, Melbourne, Australia, July 1988).  I found the Swift syntax pretty yummy overall for my "abstract class" (in Swift lingo, a "protocol") "hitable".  Note that picking a name for this is often painful; graphics people often say "object" (too broad) and I used to say "surface" (too narrow, as volumes are sometimes subclasses), so I now name it after what it can be asked.  The sphere class is the only hitable so far:

import Foundation

protocol hitable {
    func hit(r : ray, tmin : Double) -> (Bool, Double)
    func normal(p: vec3) -> vec3
}

class sphere : hitable  {
    var center : vec3 = vec3(x: 0.0, y: 0.0, z: 0.0)
    var radius : Double  = 0.0
    func hit(r : ray, tmin : Double) -> (Bool, Double) {
      
        var A : Double = dot(r.direction, r.direction)
        var B : Double = 2.0*dot(r.direction,r.origin - center)
        var C : Double = dot(r.origin - center,r.origin - center) - radius*radius
        let discriminant = B*B - 4.0*A*C
        if discriminant > 0 {
            var t : Double = (-B - sqrt(discriminant) ) / (2.0*A)
            if t < tmin {   // near root is too close; try the far root
                t  = (-B + sqrt(discriminant)) / (2.0*A)
            }
            return (t > tmin, t)
        } else {
            return (false, 0.0)
        }
    }
  
    func normal(p: vec3) -> vec3 {
        return p - center
    }
  
}


Note that the hit function returns a tuple, so we don't need to define some struct to store stuff.  I've gone with not computing the normal at the time of hit, as I'm a fan of computing it when you need it, but that is just taste.  We need Foundation to get sqrt().

Now in main we have (and my initialization is probably clumsy):

let my_sphere = sphere()
my_sphere.center = vec3(x: 0.0, y: 0.0, z: 0.0)
my_sphere.radius = 2.0
let the_world : hitable = my_sphere

let along_z = ray(origin: vec3(x: 0.0, y: 0.0, z: -10.0), direction: vec3(x: 0.0, y: 0.0, z: 1.0))
let hit_info  = the_world.hit(along_z, tmin: 100.0)
println("hit =\(hit_info.0), t = \(hit_info.1)")



Which prints out:
hit =false, t = 12.0
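To make the "compute the normal when you need it" point concrete, here is a minimal usage sketch (my variable names), reusing the_world and along_z from above but with a tmin small enough that the ray actually hits:

// Hedged sketch: ask for the normal only after we know there is a hit.
let (was_hit, t_hit) = the_world.hit(along_z, tmin: 0.001)
if was_hit {
    let p = along_z.location_at_parameter(t_hit)
    let n = the_world.normal(p)   // for the sphere this is p - center, not normalized
    println("t = \(t_hit), normal = (\(n.x), \(n.y), \(n.z))")   // t = 8.0, normal = (0.0, 0.0, -2.0)
}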

All in all, I really like how clean Swift makes this; so far it is my favorite language for writing a ray tracer (main complaint: not yet portable and I don't know what Apple's plan for that is).  I have no idea if it will actually be "swift", nor whether I need to make changes to make it swifter, but that is for the future.
 

Wednesday, September 3, 2014

Our new Photo App Extension feature in Pic!

Earlier this year Apple announced that you would be able to use some third-party components in other apps, with keyboards getting a lot of ink.  Photo-editing app extensions caught our eye, and we looked into making Pic! also operate as an app extension.  To me, Apple's app extensions are a particular kind of plugin that must also be a stand-alone app.  Trying the new language Swift was also attractive to us.  We thought the timing was tight, but we had a great programmer, Andrew Cobb, working with us who had a month before he left for graduate school.  He actually managed to finish the app extension in Swift, some of which he will later make available as an open-source project.

We now have the Swift app extension version working, and it should be available on or soon after the day iOS 8 goes live.

To run a photo app extension, you run an app that allows photo editing and app extensions.  Here is an example in the xcode iOS 8 simulator where we run the Apple app Photos, select an image, and then hit the "edit" button:
Here we could use the Apple editing options at the bottom.  Instead we hit the upper-left button:

This brings up the available photo editors that can be used in the app, and we choose ours.

Unlike the stand-alone Pic! app, there is a bar at the top that the Photos app owns.  We have no flexibility there.   Note there is a "Done" button.  In our standalone app, you click through three screens and then you are done.  In the app extension, if the user hits Done you have to bail out with some version of the image.  We use the one the user last clicked, indicated by the box around the upper left.
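For readers wondering what "bail out with some version of the image" looks like in code: the extension's view controller adopts PhotoKit's PHContentEditingController protocol, and the Done button ends up in its finish callback.  Here is a bare-bones sketch (not our actual code; the UI wiring and the rendering are stubbed out, and the signatures are as I remember them from the iOS 8 SDK, so treat them as assumptions):

import UIKit
import Photos
import PhotosUI

class EditViewController: UIViewController, PHContentEditingController {

    var input: PHContentEditingInput!

    // Photos asks whether we can resume from previously saved adjustment data.
    func canHandleAdjustmentData(adjustmentData: PHAdjustmentData!) -> Bool {
        return false   // always start fresh in this sketch
    }

    // Photos hands us the image to edit.
    func startContentEditingWithInput(contentEditingInput: PHContentEditingInput!, placeholderImage: UIImage!) {
        input = contentEditingInput
        // show placeholderImage (or input.displaySizeImage) in our UI here
    }

    // Called when the user hits Done: hand back *some* version of the image.
    func finishContentEditingWithCompletionHandler(completionHandler: ((PHContentEditingOutput!) -> Void)!) {
        let output = PHContentEditingOutput(contentEditingInput: input)
        output.adjustmentData = PHAdjustmentData(formatIdentifier: "com.example.sketch", formatVersion: "1.0", data: NSData())
        // render the user's last choice to output.renderedContentURL as JPEG here
        completionHandler(output)
    }

    var shouldShowCancelConfirmation: Bool {
        return false
    }

    func cancelContentEditing() {
        // nothing to clean up in this sketch
    }
}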

Still in our app extension, we make two more choices and we have our final image.  Now the user can say yes, they are done, and exit our app extension with saved changes, or say cancel and get out with no changes.  The back button at the bottom is ours, and can be used to go back and change choices.

We say done and return to the editor.  Now our app extension is out of the picture and we are in just the Photos app.  We could still cancel.  Or we could run Pic! on its own output.  Or we could run another app extension.

We say done and back to browsing in Photos with our saved changes.

All in all, now that we have done one, I'm a big fan of app extensions.   There are actually more clicks here than just using our app directly for the same task.  However, the real potential is running multiple photo app extensions in series.  I haven't done that yet because only ours is available to us, but I look forward to trying it when the real ecosystem is here (next week?).

Is a "yellow" traffic light yellow?

One common disagreement in my household is what color the "slow down yellow" light between red and green is (or in Utah, the "speed up" light).  I always said it was orange.  Some people call it amber.  Yellow is what the cop usually calls it as he or she is writing your ticket.   But since writing that blue/green post I have become very aware of teals around my environment, and I swear the "green light" is teal.  A quick web search revealed that LEDs have, not surprisingly, taken over the traffic light industry, and every buying government has its own spec.  There probably is a converging standard, but my quick search didn't find it.  And these LED lights are probably NOT the same as what I've been looking at all these years.  So I took a picture near my house and calibrated it by eye so it looks "right" compared to the scene.  Obviously not scientific, and just my streetlights.

Here they are (note the awesome aliasing from the too-many-megapixels trend... ironic that video games can finally render wires but cameras no longer can!):


Blurring and grabbing the center of the blurred light yields:


I thought about mining the xkcd data to see what people call these, and then realized I was too lazy to do that.  But before giving up I did one last google search and found this awesome visualization.  My read from that is that pretty good names for the above colors are "turquoise", "salmon", and "magenta".  Of course context is important, and since red, yellow, and green are highly salient primary colors, I think sticking with "green", "yellow", and "red" makes things less confusing.  And to see how complex the interaction of names and colors can be, check out this discussion that leads with blue Japanese traffic lights.
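For anyone who wants to repeat the "blur and grab the center" step, all it really amounts to is averaging a block of pixels; a throwaway sketch (my own helper, not from any app):

// Hedged sketch: the "blur and grab the center" step is just a mean over a
// block of pixels, here given as (r, g, b) tuples in 0...255.
func averageColor(pixels: [(Double, Double, Double)]) -> (Double, Double, Double) {
    if pixels.count == 0 {
        return (0.0, 0.0, 0.0)
    }
    var r = 0.0, g = 0.0, b = 0.0
    for (pr, pg, pb) in pixels {
        r += pr
        g += pg
        b += pb
    }
    let n = Double(pixels.count)
    return (r/n, g/n, b/n)
}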

Tuesday, September 2, 2014

Our "crazy" experiment for app discovery

With millions of apps out there, it's hard to get anyone to notice your app.   This "app discovery" problem is nicely summarized in this Forbes article.  We learned with our app FanPix that "if you build it they will come" is a non-starter no matter how broad the potential audience is.  We're big fans of Snapchat, but:
  • it doesn't map very well to some of our own use cases;
  • we wanted to emphasize privacy as much as possible;
  • we wanted stickers;
  • we wanted anonymous sharing to the Web.
So we developed our own social photography app, Pando.  Our central problem with Pando is of course app discovery.  We began with the depressing truth that if even 0.1% of app developers are following a particular discovery strategy, that is one thousand developers, so it's still easy to get lost in the crowd.

Our basic assumption is that we need to do something different.   Because so many smart people are out there trying everything under the sun, our strategy either needs to involve something that is hard, or something that is seemingly stupid, or both. 

We did this: wrote a serious Windows Phone implementation

This does seem questionable considering the following data from idc: in 3Q  2013 Windows phones were only 3.6% of those purchased.  But it sounds better this way: in 3Q 2013, there were 3.7 million Windows phones sold.

Apple meanwhile sold 27 million iOS phones in that quarter.    When you factor in the degree of difficulty of getting discovered (Windows still has only a few hundred thousand apps), in which market is it easier to reach a million users?  So we think this strategy of emphasizing a Windows Phone version of our app is not stupid, just difficult.  It is difficult because the Windows Phone programming environment is completely different from the iOS and Android environments, and we conservatively rejected the current portable wrappers over the three environments because a native implementation is always safe.

In addition, Pando is a social app, so if we can penetrate the Windows market, it can then spread to those users' friends.   We like the Windows Phone strategy even better when we see that Windows Phone users are concentrated more in some countries than others, and in 24 countries Windows outsells iOS!   For Q3 2013 that article discusses several countries, but the 16% sales share of Windows in Italy particularly caught our eye.

One question that we had to answer before devoting resources to it is: do Windows Phones suck?  I decided the only way to find out was to use one, which I have done for the last six months.   I think the Nokia/Windows combo is really super.  I like the phone a lot (more detail here).  The problem is the lack of apps, but that just encourages us in our strategy.

Our conclusion is that if you are developing a social app you of course need iOS and Android, but it is best to spend the marketing budget on developing a Windows Phone app and getting the app discovered there, where discovery is not yet crazy hard.  That is exactly what we have done with Pando, now available in beta form on iOS, Android, and Windows Phone!

Here is a gratuitous screenshot of a selfie on my trusty Windows phone:




Monday, September 1, 2014

vec3 type in Swift

I am back to writing a ray tracer in Swift for my own education.  I just did a vec3 struct.  My current practice is that directions, locations, and RGB colors are all vec3.  This is mainly because I like that in glsl.  I'm going with Double because I am not trying to be very fast here for now.  Swift has nice syntax for this.  All lines from the file are here.    Any suggestions are appreciated.  Next on the list is a class for ray-intersectable surfaces.

// implicit type inference.  Swift when in doubt assumes Double
struct vec3 {
    var x = 0.0, y = 0.0, z = 0.0
}

func * (left: Double, right: vec3) -> vec3 {
    return vec3(x: left * right.x, y: left * right.y, z: left * right.z)
}

func / (left: Double, right: vec3) -> vec3 {
    return vec3(x: left / right.x, y: left / right.y, z: left / right.z)
}

func * (left: vec3, right: Double) -> vec3 {
    return vec3(x: left.x * right, y: left.y * right, z: left.z * right)
}

func / (left: vec3, right: Double) -> vec3 {
    return vec3(x: left.x / right, y: left.y / right, z: left.z / right)
}

func + (left: vec3, right: vec3) -> vec3 {
    return vec3(x: left.x + right.x, y: left.y + right.y, z: left.z + right.z)
}

func - (left: vec3, right: vec3) -> vec3 {
    return vec3(x: left.x - right.x, y: left.y - right.y, z: left.z - right.z)
}

func * (left: vec3, right: vec3) -> vec3 {
    return vec3(x: left.x * right.x, y: left.y * right.y, z: left.z * right.z)
}

func / (left: vec3, right: vec3) -> vec3 {
    return vec3(x: left.x / right.x, y: left.y / right.y, z: left.z / right.z)
}

func max(v: vec3) -> Double {
    // largest of the three components (>= so ties are handled)
    if v.x >= v.y && v.x >= v.z {
        return v.x
    }
    else if v.y >= v.z {
        return v.y
    }
    else {
        return v.z
    }
}

func min(v: vec3) -> Double {
    // smallest of the three components (<= so ties are handled)
    if v.x <= v.y && v.x <= v.z {
        return v.x
    }
    else if v.y <= v.z {
        return v.y
    }
    else {
        return v.z
    }
}

func dot (left: vec3, right: vec3) -> Double {
    return left.x * right.x + left.y * right.y + left.z * right.z
}

func cross (left: vec3, right: vec3) -> vec3 {
    return vec3(x: left.y * right.z - left.z * right.y, y: left.z * right.x - left.x * right.z, z: left.x * right.y - left.y * right.x)
}
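And a tiny usage example of the operators above (my own sanity check, not part of the file):

// Hedged sketch: exercise the vec3 operators.
let a = vec3(x: 1.0, y: 2.0, z: 2.0)
let b = vec3(x: 0.0, y: 0.0, z: 1.0)

let s = a + b                   // (1.0, 2.0, 3.0)
let h = 0.5 * a                 // (0.5, 1.0, 1.0)
let c = cross(a, b)             // (2.0, -1.0, 0.0)
println("dot(a,a) = \(dot(a, a)), cross = (\(c.x), \(c.y), \(c.z))")   // dot is 9.0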