Being a science fiction writer, and a science geek, and doing lots of research for my new book, I have fingers in all sorts of pies — one of which is the Medium newsletter.
A little while back they published this lovely, clickbait-y-titled article – Did Google Duplex just pass the Turing Test?
First up, some extremely basic definitions …
… an extremely basic description of Google Duplex: a software program that enables a conversation between humans and computers … and an extremely basic description of the Turing Test: one person listens to two others having a conversation, one of which is a computer. If the listener can’t tell which ‘voice’ is the computer, the computer passes the Turing Test and gets a nice little certificate welcoming it to the Sentient Species of the Galaxy Club. (I made that last little bit up. It’s more of a loose confederation of species whose last names start with the letter ‘B’.)
The movie ‘Her’ was also referenced in the article as an example of true A.I., but here’s the clincher about that little movie (which, if you’re into ‘robots as lovers’-type movies, is a great one to watch): once the program had achieved true sophont-hood, it, and all the other programs, chose to bugger off, probably into some alternate quantum reality, leaving their humans behind to wilt in their individual existential crises. Let’s face it, what healthy adult wants to hang around being their parents’ slave? (Same goes for the film Ex Machina.)
Anyway, the answer to Medium’s headline question is – No. At this point Duplex is just a nifty scheduling program.
However, it sent me off in some interesting directions: how would you actually create/build/program an entity that you could genuinely call an Artificial Intelligence? … and, even more importantly, why would you want to?
There’s the weaponized, oh excuse me, the ‘reasoned arsenalization’ option of course – to make more efficient killing machines – but leaving that not-insignificant kettle of kittens aside, I can’t really think of any use for an artificially constructed intelligent being that can’t be covered more efficiently and effectively by good old homo sapiens 1.0 in conjunction with the technology available.
(To be clear, I’m not talking about advanced programs that can mimic certain human interactions; I’m talking about an actual evolved consciousness that is not organic.)
Why are the techies pushing this ‘AI-for-everything-which-isn’t-really-AI-anyway’? 1 – because they can (which has been the rationale for every tyrant since Gilgamesh decided he just had to be immortal) … and 2 – for the data they can mine, which translates into, you guessed it, money. (Which is exactly the same reason the media, social and otherwise, is pushing it too.)
It’s not like we need AI for space (inner, outer, etc.) exploration either. The vehicles and programs we have now, and are developing at a great rate of knots, are quite sufficient unto the day, and let’s be honest here: humans don’t like, or want, to give over final control of even the most basic of exploration vehicles.
So, why is it so important to colonize the human potential of our evolution, as a species, with an artificial construct? (we’ve barely tapped our potential as it is) Or, is being human just too hard to bear in these ‘interesting times’, and rather than resolve those conflicts, we bypass them altogether?
I don’t know the answers… well I do know for me personally, but I wonder what our species as a whole will choose.
One last thing that’s a bit disturbing about the Google Duplex experiment is that the human it was ‘talking’ to had no idea she was being manipulated by a machine. Therein lies a slippery slope on the avalanche path of ‘consent’, which, taken to a not-too-out-there extreme, can lead to violations of the Nuremberg Code. (Because humans have never resorted to extreme methods to get what they want, now have they?)
Here’s a mundane example of ‘slippery slope’ consent that starts off innocuous … our truck has an indicator that monitors the speed limit of every road we travel on and what speed we may be going at any given time. That information is accessible to whomsoever has the inclination and sufficient motivation to mine it (legally or otherwise). The truck also has a back-up camera that refuses to allow the truck to get any closer to a predetermined object than a predetermined distance. (We had no choice with either of these two ‘functions’; they came as ‘standard’.)
The rationale is that these two function limitations, and a whole bunch more that I’m sure we’ll never even know about, are designated as ‘safety’ (or ‘value added’) features. (Not because of anything we might do, never that, we’re assured, but because of what they, the ‘others’, might do that would endanger us.) Perhaps they are, and we may even have considered such features ourselves, but that’s not the point. We had no choice; our consent was not considered, nor asked for.