The 130 million pixel camera

We all have them. Forget Apple's; the original retina display is still the best: the human eye.

The article is fascinating throughout. For example, the focal length of the lens that best approximates human vision is not 50mm, as is commonly supposed, but 43mm; its aperture is roughly f/3.2 to f/3.5. Because the human retina is curved rather than flat like a camera sensor, it stays sharper in the corners: a flat sensor's corners sit farther from the center of the lens and lose a bit of sharpness there. Of the human eye's roughly 130 million pixels, only 6 million see color.

We are still waiting for some new type of connector or bus that will allow us to use retina displays larger than those on today's MacBook Pros. The amount of data to transmit is beyond what the existing Thunderbolt connectors can carry.
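To get a rough feel for why, here is a back-of-the-envelope sketch in Python for a hypothetical display with the eye's pixel count. The 60 Hz refresh rate, 24-bit color, and the 10 Gbit/s per-channel Thunderbolt figure are my own assumptions for illustration, not numbers from the article.

```python
# Back-of-the-envelope numbers for a hypothetical 130-megapixel display,
# compared against a single 10 Gbit/s channel of the original Thunderbolt
# spec. Refresh rate and bit depth are assumptions for illustration only.

PIXELS = 130_000_000       # the eye's "pixel" count from the article
BITS_PER_PIXEL = 24        # assume ordinary 8-bit-per-channel RGB
FRAMES_PER_SECOND = 60     # assume a typical display refresh rate
THUNDERBOLT_GBPS = 10      # one channel of the original Thunderbolt spec

gbps = PIXELS * BITS_PER_PIXEL * FRAMES_PER_SECOND / 1e9
print(f"Uncompressed stream: about {gbps:.0f} Gbit/s")                       # ~187 Gbit/s
print(f"Thunderbolt channels needed: about {gbps / THUNDERBOLT_GBPS:.0f}")   # ~19
```

Even with generous assumptions, an uncompressed full-field stream at that pixel count is an order of magnitude beyond a single first-generation Thunderbolt channel.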

So how does your brain deal with 130 million pixels of information being thrown at it in a constant stream? The answer is it doesn't.

The subconscious brain rejects much of the incoming bandwidth, passing only a small fraction of its data on to the conscious brain. You can control this to some extent: right now, for example, your conscious brain is telling the lateral geniculate nucleus, “send me information from the central vision only, focus on those typed words in the center of the field of vision, and move from left to right so I can read them”. Stop reading for a second and, without moving your eyes, try to see what’s in your peripheral field of view. A second ago you didn’t “see” that object to the right or left of the computer monitor because your peripheral vision wasn’t getting passed on to the conscious brain.

If you concentrate, even without moving your eyes, you can at least tell the object is there. If you want to see it clearly, though, you’ll have to send another brain signal to the eye, shifting the cone of visual attention over to that object. Notice also that you can’t both read the text and see the peripheral objects — the brain can’t process that much data.
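If you want to play with this idea in code, here is a toy sketch of a “cone of visual attention” in Python/NumPy. It is entirely my own construction, not a model of the visual system: it keeps a window around a fixation point at full resolution and crudely downsamples everything outside it.

```python
import numpy as np

def foveate(image, cx, cy, radius, factor=8):
    """Keep a window around the fixation point (cx, cy) at full resolution
    and crudely downsample everything else by `factor`, a toy stand-in for
    the way only the attended region reaches the conscious brain in detail."""
    h, w = image.shape[:2]
    # Coarse "peripheral" version: sample every factor-th pixel, then blow it
    # back up to the original size with simple pixel repetition.
    coarse = image[::factor, ::factor]
    background = np.repeat(np.repeat(coarse, factor, axis=0), factor, axis=1)[:h, :w]
    out = background.copy()
    # Paste the full-resolution "cone of attention" back over the coarse background.
    y0, y1 = max(cy - radius, 0), min(cy + radius, h)
    x0, x1 = max(cx - radius, 0), min(cx + radius, w)
    out[y0:y1, x0:x1] = image[y0:y1, x0:x1]
    return out

# Toy usage: a random 1080p "scene", fixating near the center of the frame.
scene = np.random.randint(0, 256, size=(1080, 1920, 3), dtype=np.uint8)
attended = foveate(scene, cx=960, cy=540, radius=150)
```

Only the attended window carries full detail; everything else survives at a small fraction of its original resolution, which is roughly the trade-off the paragraph above describes.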

The brain isn’t done when the image has reached the conscious part (called the visual cortex). This area connects strongly with the memory portions of the brain, allowing you to ‘recognize’ objects in the image. We’ve all experienced that moment when we see something, but don’t recognize what it is for a second or two. After we’ve recognized it, we wonder why in the world it wasn’t obvious immediately. It’s because it took the brain a split second to access the memory files for image recognition. (If you haven’t experienced this yet, just wait a few years. You will.)

ADDENDUM: The way human vision works, always putting the center of your vision in focus and blurring the edges so as not to overwhelm your brain with data, is loosely replicated by these hyperphotos. That is, you’re presented with a photo at some baseline resolution, but as you drill in on particular sections, the photo zooms in and the resolution increases.
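That zoom-dependent behavior can be approximated with an ordinary image pyramid. The sketch below (Python/NumPy, my own illustration, not how any particular hyperphoto viewer is actually implemented) precomputes progressively coarser copies of an image and hands back a sharper level as the zoom increases.

```python
import numpy as np

def build_pyramid(image, levels=4):
    """Precompute progressively halved copies of the image, full resolution
    first; a hyperphoto-style viewer would ship the coarse levels up front
    and fetch the finer ones only as you drill in."""
    pyramid = [image]
    for _ in range(levels - 1):
        # Naive 2x downsample: drop every other row and column.
        pyramid.append(pyramid[-1][::2, ::2])
    return pyramid

def level_for_zoom(pyramid, zoom):
    """Pick a pyramid level for the current zoom: zoom=1 gets the coarsest
    level, and each doubling of zoom steps up to a sharper one."""
    levels = len(pyramid)
    step = min(int(np.log2(max(zoom, 1.0))), levels - 1)
    return pyramid[levels - 1 - step]

# Toy usage with a random 16-megapixel "photo".
photo = np.random.randint(0, 256, size=(4096, 4096, 3), dtype=np.uint8)
pyramid = build_pyramid(photo)
baseline = level_for_zoom(pyramid, zoom=1.0)   # 512x512 overview
detail = level_for_zoom(pyramid, zoom=8.0)     # full 4096x4096 resolution
```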