I arrived at the idea for this post from an article that included the following statement: “[Natural Language Processing] underpins all conversational AI, as ‘you must first understand a request before responding.’” This should sound familiar to people who read this blog.
I’m usually an optimist about technology, because I think of technology as an extension of human nature, and I’m generally an optimist when it comes to human nature. But this idea scares me. A lot.
At this point, I’ll fight my urge to add a colorful storyline to this post, and get right to the punchline.
Point 1:
In terms of how it uses language, chatting with software like ChatGPT is virtually indistinguishable from chatting with a human. That’s my experience, anyway. ChatGPT is more articulate than the average human. Today.
Point 2:
Most people think of “Artificial Intelligence” as real intelligence coming from a non-biological (artificial) entity, or even life form. It isn’t. Not yet, anyway. For now, the word “artificial” in “AI” describes the intelligence itself: it isn’t real intelligence, it only seems to be, and only because of its fluent use of natural language. “GPT,” after all, stands for “Generative Pre-trained Transformer.” In short, “Artificial Intelligence” isn’t “Actual Intelligence.” Yet.
Point 3:
However… because ChatGPT (for instance) uses language so well, we already interact with it as if it were Actually Intelligent… AND… (hold onto your seats)… when, someday soon, it truly evolves to that stage… we won’t know it.
Gulp.