Coming of the Connected Digital Eye
By Frits Ahlefeldt
Today more and more scientists, psychologists and brain researchers around the planet are working to solve the challenge of understanding how “seeing” works, and the reason they want to know is not so much curiosity about us humans as the prospect that this knowledge could make giant new technological leaps into the future possible.
The scientists started out thinking that seeing might work like some kind of simple camera, feeding our brains with images for our thoughts to ponder. But as new knowledge about our vision grows, the understanding of it drifts further and further from the classic metaphor of the eye as a camera. It is now clear that human vision is far more complicated than that…
What scientists have found so far is that “seeing” plays a very important, but extremely complicated, part in how we construct what we call “reality” – a reality that now looks like it comes much more from inside our heads than from the rays of light that reach our eyes from objects out in the world.
It’s a reality where our pre-understandings, our needs, what we like and fear – and what our brain wants us to focus on before we even open our eyes – all work together to shape the flow of images we think we “see”; a flow constructed more by our heads, our cultural backgrounds, needs and ideas than by what surrounds us.
Why the research into seeing has gone into hyper-speed is probably not because we suddenly want to know much more about how our vision works, but because of the sprouting of new technological business ideas that involve sight and machines in new ways. Ideas that have made the gigantic hi-tech companies and the world’s best universities more and more interested in the science of seeing: simply because the prospect of teaching machines to see is mind-blowing – especially from a business perspective.
And that is why, right behind the researchers solving the mystery of how humans and other species see, there is another group of researchers busy taking notes. It’s the AI (artificial intelligence) scientists and hi-tech companies who work to construct, program and teach computers and robots to do the same… to build up understanding from a flow of images, in the hope of creating machines that can “see” reality in ever more intelligent ways. Then robots, drones, cars, phones and toys will become much better at adapting and at calculating risks, speeds, moods, needs, habits and threats… “to help us through the day”, as the companies say.
Nobody really knows where we will be heading if we humans manage to create advanced, intelligent machines that can see and move around among us while they observe us and evolve.
But they will most likely be machines with an ability to see modeled not only on our limited human eyes and understanding, but also on the visual abilities of other lifeforms, coupled with advanced technological ways of seeing such as infrared vision, 4G vision, x-ray vision, brain-wave vision, heartbeat vision, surround vision, emotional vision, time-lapse vision, flow vision, night vision, satellite vision, ultra-wave vision and more, combined into a whole new way of visual sensing…
A vision that will be logged as points, values and patterns in space and time, stored and analysed in new global data networks and shared live, back and forth, among countless digital eyes to create and feed a very different global, multi-directional stream of network-connected, many-dimensional images – and maybe even a whole new understanding…
The open question, then, is: how much of this possible new understanding will we humans be able to grasp?
Text and drawing by Frits Ahlefeldt