The Turing Test: Still Applicable in the Modern Technological Landscape Part 2

The Turing Test: Is There a Need to Improve or Adapt the Requirements?

In Part 1 of this blog, I discussed the history of the Turing Test. We will now look at its applicability in the modern world, more specifically with reference to sophisticated AI such as Bard and ChatGPT.

It is well established that AI such as Bard and ChatGPT can perform a wide variety of tasks based on prompts: anything from writing essays and solving coding problems to translation and even creative writing. These models also present opportunities for integration with social media and other platforms, and, with more data added to their training datasets, could expand their capabilities even further.

On the face of it, however, ChatGPT would be the better candidate for determining whether the Turing Test requirements still hold true or need to be adapted, because its programming is designed to be more conversational in nature.

A potential challenge with testing this, however, is that the AI appears to be programmed to disclose in conversation that it is a language model – no element of deception there!

The Turing Test in the Modern Context

Considering the sophistication of modern bots, applying the original Turing Test would present several challenges. First and foremost, it is not so much a measure of intelligence as a measure of deception: it gauges how successfully a machine can mimic human behavior.

Passing the Turing Test would therefore simply indicate that a machine can produce responses similar to those of a human.

Even if one accepts the premise of the Turing Test as a measure of AI's ability to mimic or deceive, the test would, in my opinion, still have drawbacks in the modern setting. Much of the emphasis of the original Turing Test was placed on language, so it is not an appropriate measure to account for all the ways in which the various types of human intelligence are now measured.

After all, human intelligence is now accepted to comprise, and be measured across, various domains. If a machine is to be deemed truly effective at mimicking human behavior, it should theoretically be measured in a similar manner.

It has long been proposed, notably in Howard Gardner's theory of multiple intelligences, that linguistic ability is just one of several distinct intelligences: logical-mathematical, linguistic-verbal, visual-spatial, musical-rhythmic, bodily-kinesthetic, interpersonal, intrapersonal, and existential.

An AI's intelligence could therefore be rated more effectively within each of these categories.

If one uses this framework to measure ChatGPT or Bard, both would likely display average intelligence in the logical-mathematical and linguistic-verbal categories, but would fail in the remaining fields.

We must be cautious of equating “mimicking” with genuine interpretation and problem-solving skills: there are inherent limitations to programming. No matter how sophisticated the code, it can only do what it is programmed to do; it is no substitute for emotional intelligence, and it could never produce original thoughts, since its results are generated by the algorithms that shape its “thinking”.

Therefore, whilst AI bots such as ChatGPT and Bard are very valuable tools, I believe they are not capable of truly understanding the complexities of human conversation: they are simply following coding instructions and cannot replace the value of human interaction.

AI bots should also never be used as a substitute for client support in fields such as psychology, or even education: their sophisticated answers can quickly blur a user’s perception of reality, and they lack the inherently human element of empathy that is an essential requirement in those kinds of communication.

It is for this reason that data generated by AI bots should never be used without the “human element”: generated data and results still need to be independently verified before they are applied in the field. After all, even machines can make mistakes!

The speed at which results are generated is no substitute for human consciousness and informed decision-making. No matter how great the programming, that is one gap a machine will never be able to truly fill.

Regardless of the limitations of the original Turing Test, I submit that it could still serve a renewed purpose: any machine that can pass it carries with it the ethical danger of being able to deceive humans.

The increasing sophistication of these bots can impair an individual's ability to distinguish between human and machine communication, and this opens the arena for misuse and abuse. Misinformation generated by humans is already difficult to track, but the sheer volume and speed at which artificially generated information can spread is simply unrivalled.

A dual approach would in all likelihood be the best way forward: increasing awareness of “fake news” while also developing bots or systems to identify bot behaviors and verify the authenticity of content. A reverse Turing Test, if you will.
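As a toy illustration of what such a “reverse Turing Test” might check, consider a detector that flags senders whose typing is implausibly fast or implausibly regular. This is only a sketch: the signals, thresholds, and function names are assumptions invented for the example, not a real detection system.

```python
from statistics import pstdev

def looks_automated(timestamps, char_counts, max_cps=15.0, min_jitter=0.2):
    """Crude heuristic sketch: humans rarely type above ~15 characters per
    second, and the gaps between their messages are irregular. Superhuman
    speed or near-perfect regularity suggests a bot. All thresholds here
    are illustrative assumptions."""
    gaps = [t2 - t1 for t1, t2 in zip(timestamps, timestamps[1:])]
    speeds = [c / g for c, g in zip(char_counts[1:], gaps) if g > 0]

    too_fast = any(s > max_cps for s in speeds)           # superhuman typing speed
    too_regular = len(gaps) > 1 and pstdev(gaps) < min_jitter  # metronomic timing
    return too_fast or too_regular

# Messages arriving exactly every 2.0 seconds, 120 characters each: suspicious.
print(looks_automated([0.0, 2.0, 4.0, 6.0], [120, 120, 120, 120]))  # True
```

A real system would of course combine many more signals (linguistic patterns, account history, provenance metadata), but even this toy version captures the core idea: turning the Turing Test around and asking the machine to prove the *human*.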

The Way Forward

Whether or not ChatGPT or Google Bard would be able to pass the Turing Test, they and other bots have challenged my perception of the Turing Test, and more specifically of what it truly entails to pass at being considered “human”.

These days, to pass the Turing Test, programmers effectively have to make their bots less obviously mechanical: intentionally adding pauses to response times, introducing typing errors, and deliberately making the responses less sophisticated.
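To make that concrete, here is a minimal sketch of how a bot might “humanize” its typing. The delays, typo rate, and neighbouring-key map are all illustrative assumptions, not taken from any real chatbot:

```python
import random
import time

# Illustrative only: neighbouring QWERTY keys, used to fake plausible typos.
NEARBY_KEYS = {"a": "qs", "e": "wr", "i": "uo", "o": "ip", "t": "ry", "n": "bm"}

def humanized_typing(text, typo_rate=0.05, min_delay=0.05, max_delay=0.35):
    """Emit a message one keystroke at a time, with uneven pauses and the
    occasional 'typo' that is immediately backspaced, mimicking a human typist."""
    for char in text:
        if char.lower() in NEARBY_KEYS and random.random() < typo_rate:
            yield random.choice(NEARBY_KEYS[char.lower()])  # hit the wrong key...
            time.sleep(random.uniform(min_delay, max_delay))
            yield "\b"                                      # ...then backspace it
        yield char
        time.sleep(random.uniform(min_delay, max_delay))    # uneven typing rhythm

for keystroke in humanized_typing("I am definitely a human."):
    print(keystroke, end="", flush=True)
print()
```

The irony is hard to miss: the engineering effort goes into imitating human *imperfection*, not human intelligence.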

The Turing Test could conceivably evolve by changing the rules around how much memory a program taking the test may have. After all, just as human intelligence is limited by the brain's memory and capacity, machine intelligence could be limited to level the playing field.
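A rough sketch of what such a handicap might look like in practice follows below; the cap of a few conversational turns is an arbitrary assumption for the example, and the class is hypothetical, not part of any existing test protocol:

```python
from collections import deque

class CappedMemoryBot:
    """Toy wrapper enforcing a fixed conversational memory: once the cap is
    reached, the oldest turns are forgotten, much as human working memory
    is bounded. The cap value is an arbitrary illustrative choice."""

    def __init__(self, max_turns=20):
        self.memory = deque(maxlen=max_turns)  # old turns fall off automatically

    def observe(self, speaker, utterance):
        self.memory.append((speaker, utterance))

    def context(self):
        # Only this bounded window may be passed to the underlying model.
        return list(self.memory)

bot = CappedMemoryBot(max_turns=3)
for i in range(5):
    bot.observe("judge", f"question {i}")
print(bot.context())  # only the last three turns survive
```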

Another consideration would be requiring a machine to show its “decision-making process”: an algorithm's design would need to reveal how it derived a specific answer. A measure like this would encourage transparency and ethics, and would go a long way toward destigmatizing the perception of the AI “black box”.
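As a hypothetical sketch of this idea (the rules and wording are invented purely for illustration), instead of returning a bare answer, a system could return the answer together with the steps that produced it:

```python
def classify_with_trace(message):
    """Toy 'explainable' classifier: every decision it makes is recorded,
    so the final answer ships with the reasoning that produced it.
    The rules and scores are illustrative assumptions."""
    trace = []
    score = 0

    if "?" in message:
        trace.append("Contains a question mark: treated as a question (+1).")
        score += 1
    if any(w in message.lower() for w in ("refund", "cancel", "complaint")):
        trace.append("Matches complaint vocabulary (+2).")
        score += 2

    label = "escalate to human" if score >= 2 else "auto-reply"
    trace.append(f"Total score {score} -> decision: {label}.")
    return label, trace

label, trace = classify_with_trace("I want a refund, can you cancel my order?")
print(label)
for step in trace:
    print(" -", step)
```

Modern neural networks cannot be unpacked this neatly, of course, but the principle scales: an answer accompanied by its derivation is far easier to audit than an answer alone.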

In conclusion, while we should explore the possibilities of AI bots, we should also take care not to confuse the innovative programming of chatbots with “thinking” in any way. It is up to us to find ways to use the technology responsibly and ethically; in my opinion, that is always the crucial and foremost element of any invention.

P.S. These blogs have not been written with the aid of Bard or ChatGPT, but with the coming advancements in technology, it will be interesting to see how long bloggers, and indeed journalists, will continue to write articles without incorporating the aid of AI.