With all the talk about AI this and AI that, you would think Artificial Intelligence was easy. What is not apparent is that these AI advances are not native AI. They are the equivalent of thin-client environments that connect lesser compute hardware to the real AIs that reside within far more massive environments. These AIs (AIlites, as we might call them) consist of a front-end audio parser (for input) and a text-to-speech program (for output). In between sits a communications layer that forwards the parsed 'language' requests to a real AI, which interprets the request and creates the text response that is returned and spoken by the text-to-speech process on the client.
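The round trip above can be sketched in a few lines. Everything here is a hypothetical stub for illustration (the function names are mine, not any real API); the point is the shape of the pipeline: the device only parses and speaks, while the "real" AI lives somewhere else.

```python
# A minimal sketch of the thin-client ("AIlite") pipeline described above.
# All names are invented stubs, not a real assistant API.

def parse_audio(audio: bytes) -> str:
    """Front-end parser: turn captured audio into a text request.
    (Stub: a real device would run speech-to-text here.)"""
    return audio.decode("utf-8")  # pretend the audio is already words

def forward_to_remote_ai(request: str) -> str:
    """Communications layer: ship the parsed request to the 'real' AI
    in the data centre and wait for its text response. (Stubbed.)"""
    return f"Remote AI response to: {request!r}"

def speak(text: str) -> str:
    """Client-side text-to-speech. (Stub: returns what would be spoken.)"""
    return text

def ailite_round_trip(audio: bytes) -> str:
    """The whole 'AIlite' loop: parse, forward, speak."""
    return speak(forward_to_remote_ai(parse_audio(audio)))

print(ailite_round_trip(b"what is the weather"))
```

Note that all the intelligence sits behind `forward_to_remote_ai`; the client never understands anything.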
This all seems pretty interesting, but it is not a lot different from Apple's Siri, Mycroft, Google's Speak or Amazon's Alexa. These systems all have one thing in common: they collect information on everything we do. They are profiling technologies that tailor responses and requests, but that also record our interests and activities, just as our browser histories do.
These are not the AIs you are looking for (though they may be a lot of fun to play with).
The diversity of humans and other creatures is often a cause of wonder, and this diversity is reflected in our understanding of Artificial Intelligence (AI). Huge strides have been made in the field, but somehow the surface differences among humans obscure the commonality of the human experience.
One of those common factors is sight, the act of seeing. We overlook it day to day, but for an AI, or any robotic device, intelligent or not, vision is fundamental. Shared vision, our ability to see what other humans see, is a basic element of language and communication. Try describing the color Red to a blind person, and you will quickly see the issue. No artificial 'eye', sensor or camera in the AI/robotic world 'sees' like we do, nor do any two AI or robotic devices share the same 'vision' hardware.
Explaining 'Red' to the blind is the same problem as one AI trying to explain 'Red' to another AI. Complex is not a big enough word for it.
The solution is problematic: the technology of seeing needs to become a common denominator within the AI community. Current vision systems are at best a mixed bag; they require an upgrade, and a standardization that is currently lacking. And while the vision information obtained from the human eye and from a robotic replacement might attain equality, they may never contain the same data, due to the differences in the technology. What must happen is that common robotic vision devices (eyes) become good enough, and interchangeable enough, that different AIs resolve the color 'Red' the same way, paving the way for a common communication interchange regarding the external world.
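The "common denominator" idea can be sketched as a calibration step. In this toy example, two hypothetical robot eyes report different raw values for the same physical red patch, and a shared per-sensor calibration (the gain numbers here are invented for illustration, as is the assumption of a simple per-channel correction) maps both readings into one agreed color space, so both AIs can mean the same thing by 'Red':

```python
# Toy sketch: two uncalibrated sensors, one shared colour space.
# Gains would come from a one-off calibration against a reference chart;
# the values below are made up for the example.
CALIBRATION = {
    "eye_a": (1.25, 1.0, 1.0),   # eye A under-reports the red channel
    "eye_b": (0.85, 1.0, 1.0),   # eye B over-reports the red channel
}

def to_common_space(sensor, raw_rgb):
    """Map one sensor's raw RGB reading into the shared colour space."""
    gains = CALIBRATION[sensor]
    return tuple(min(255, round(c * g)) for c, g in zip(raw_rgb, gains))

# The same physical red patch, as each uncalibrated eye reports it:
print(to_common_space("eye_a", (204, 0, 0)))  # → (255, 0, 0)
print(to_common_space("eye_b", (300, 0, 0)))  # → (255, 0, 0)
```

Real sensor calibration is far messier (full color matrices, illumination, gamma), but the principle is the same: agreement happens in the shared space, not in the raw hardware.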
Yesterday, while talking with a colleague, I was trying to get across the idea that most 'programmers' don't understand what goes on inside a computer. His response was, "Does it matter any more?" and while it took me aback, I had to respond, "No!" After sleeping on it, I came to a revelation of sorts.
Current IT is equivalent to being a Hot-dog vendor on the street.
And while we IT/CS folk might try to elevate our profession to demigod status, we are merely vendors of what the computer can DO! We don't create the computer; we splash condiments on the hot-dog and sell it as computing. We don't even make the condiments anymore: call them libraries, functions written by gnomes in dark caves. And don't even mention the buns, the dressing, the wrapper: beyond us.
In the early days of computing, the common question was: what do I use my computer for? And the first answer was often that you could put your cooking recipes in it, creating the first cookbook you needed to plug in. The computer is still the same; it is just that the cookbook has gotten more sophisticated.
I have harped for years that the 'hardware' of computing has crippled real advances in computing. More and more systems opt for generic in their selection of hot-dog, choosing instead to dress it up with ever more intriguing spices and toppings, things like AI and Neural Networks. While the latter are more sophisticated and sexy, they are more or less toppings on the same hot-dog.
Note to self: the computer is not built to do anything other than execute instructions. Hardware advances over the years have only improved the CPU's ability to gather instructions; it does not make decisions about what to execute, or in what order to execute it. That organization comes from the basic boot loader, in combination with the operating system it loads.
There are no elements of artificial intelligence built into the hardware; it has no ability to reprogram itself or to change its wiring. External forces must be applied to force change, either by altering microcode in the core of the CPU (should that be possible) or by executing programs within the confines of the operating system, instructions provided by the boot loader or by operating systems loading and running programs. It is those processes that constitute what a computer does with what it 'sees'.
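The point is easy to see in miniature. The sketch below is a toy fetch-and-execute loop (the instruction set is invented for illustration): the "machine" does nothing but fetch whatever instruction the program counter points at and carry it out. Every decision, including control flow, is just another instruction handed to it; the machine itself decides nothing.

```python
# A toy fetch/decode/execute loop illustrating that a CPU only runs
# the instructions it is given. Instruction names are invented.

def run(program, memory=None):
    memory = memory if memory is not None else {}
    acc = 0           # a single accumulator register
    pc = 0            # program counter: which instruction to fetch next
    while pc < len(program):
        op, arg = program[pc]       # fetch
        pc += 1
        if op == "LOAD":            # decode + execute
            acc = arg
        elif op == "ADD":
            acc += arg
        elif op == "STORE":
            memory[arg] = acc
        elif op == "JMP":           # even control flow is just an instruction
            pc = arg
        elif op == "HALT":
            break
    return memory

mem = run([("LOAD", 2), ("ADD", 3), ("STORE", "result"), ("HALT", None)])
print(mem["result"])  # → 5
```

Swap in a different program and the same hardware does something entirely different; nothing about the loop itself changes, which is exactly the point.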
Any hope of producing the next generation of computing must therefore be a revolution in how the CPU is instructed to perform its instructions, in what is done with the output, and in any associated hardware connected to the system to perform the 'tasks' assigned by that process. The argument that Windows, Linux/Unix or any other operating system is better than another aside, each creates its own unique opportunities and restrictions for any new programming or computing paradigm.
Anything like artificial intelligence will have to be preceded by a new suite of hardware, with a new way of 'booting' the system and/or an entirely new operating system tailored to artificial-intelligence operations. Current hardware/software standardization is at once the primary blockage to any future advance in computing.
UPDATE #1: IBM Creates Custom-Made Brain-Like Chip