NVIDIA is not only building high-end, more expensive processors, like the Teslas and the GPUs you might buy for gaming, but it also works very hard in the mobile arena. So how is GPU computing going to influence that?

Well, it's not in our current shipping products, but the strategy, which will be realized in the relatively near future, is to have the same GPU from the top to the bottom of the line. So eventually you will have a Kepler GPU in the Titan supercomputer and a Kepler GPU in your cell phone, and you can apply the same programming model across that line and do GPU computing in your cell phone to the extent that it makes sense to do so.

So where do you see the most interesting applications going for things that you carry in your pocket?

I think that the really compelling applications for mobile devices are in computational photography and computer vision. Photography is at the cusp of being completely transformed from something you do primarily with lenses and optics to something you do primarily with computing, and a GPU is a perfectly matched tool for the kind of image processing and signal processing you need, after you've collected a bunch of photons, to turn them into really compelling images.

That sounds fairly abstract, and I think this is a fascinating field. Can you give an example of something a GPU would help you do in this computational photography realm?

There are little low-level things, but take high dynamic range imaging: you acquire a set of images, and then there is a bunch of problems in composing them together. The camera may have moved slightly between the images, so they have to be registered. There may be objects moving in the images, so even once the images are registered, you have to back that object motion out to get the equivalent of a still photograph taken at one point in time.
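The registration step described above can be sketched with phase correlation, a standard technique for estimating the translation between two frames. The interview doesn't name a specific algorithm, so this is an illustrative choice, and it only handles a global integer-pixel shift:

```python
import numpy as np

def estimate_shift(ref, img):
    """Estimate the integer (row, col) translation of `img` relative to `ref`
    using phase correlation: the normalized cross-power spectrum of two
    shifted images inverse-transforms to a sharp peak at the shift."""
    F_ref = np.fft.fft2(ref)
    F_img = np.fft.fft2(img)
    cross = F_ref * np.conj(F_img)
    cross /= np.abs(cross) + 1e-12           # keep phase only
    corr = np.fft.ifft2(cross).real          # peak location encodes the shift
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    shifts = np.array(peak)
    dims = np.array(ref.shape)
    wrap = shifts > dims // 2                # shifts past half the image wrap
    shifts[wrap] -= dims[wrap]               # ...to negative offsets
    return tuple(shifts)

def register(ref, img):
    """Align `img` to `ref` by undoing the estimated translation."""
    dy, dx = estimate_shift(ref, img)
    return np.roll(img, shift=(dy, dx), axis=(0, 1))
```

A real pipeline would also handle sub-pixel shifts, rotation, and the local object motion mentioned next; this sketch recovers only the global camera translation.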
You may also have what's called a rolling shutter, where, because of the way the CCDs work, you expose one line of the image at a time, so any motion produces a sort of wavy distortion in the photo; again, you have to remove that. Then you want to do some processing of those images to remove noise and enhance the image. And finally you have to decide, at acquisition time, what exposure times to use to capture an optimal high dynamic range image, and then how to combine those images into one final image that, on a display device with a limited gamut, gives the viewer the appearance of the many orders of magnitude of dynamic range in the original scene. So bright areas of the image look bright, but not as bright as the original scene, because the display doesn't actually have that much dynamic range.
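The merge-and-tonemap steps described here can be sketched as follows. The hat-shaped weighting and the Reinhard-style global operator are illustrative assumptions, not methods named in the interview, and a linear sensor response is assumed:

```python
import numpy as np

def merge_hdr(images, exposure_times):
    """Merge registered exposures (pixel values in [0, 1]) into a linear
    radiance estimate. Each pixel's radiance guess from one frame is
    value / exposure_time; weights favor mid-tone pixels, since very dark
    pixels are noisy and very bright ones may be clipped."""
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(images[0], dtype=np.float64)
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)   # hat weight: 1 at 0.5, 0 at 0 and 1
        num += w * img / t                   # weighted radiance from this frame
        den += w
    return num / np.maximum(den, 1e-8)       # avoid divide-by-zero where all clipped

def tonemap(radiance):
    """Reinhard-style global operator: compress unbounded radiance into
    [0, 1) for a limited-gamut display, so bright areas still look bright
    without exceeding what the display can show."""
    return radiance / (1.0 + radiance)
```

Production HDR code would additionally estimate the camera response curve and use a local tone-mapping operator to preserve contrast, but this shows the basic shape of the computation, which maps naturally onto a GPU as independent per-pixel work.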