The Turing Test was devised in 1950 by the mathematician Alan Turing. It asks a simple question: can a computer convince a human that it, too, is human? This June, for the first time, the answer appears to be yes.
But is that “yes” really as impressive as it seems?
Earlier in June, a Russian computer program took on the identity of 13-year-old Eugene Goostman and tried its luck with human interrogators, who had five minutes to ask the mystery respondent questions designed to reveal whether it was human or machine.
“Eugene” was one of five computers entered in the competition. The criterion for passing the Turing Test required only that a computer fool 30% of its human interrogators. Eugene passed, but just barely, convincing 33% of the judges.
Despite the slim margin, Eugene does have something to boast about. The Turing Test has long been an iconic benchmark in artificial intelligence, and Eugene was the first program ever to reach that milestone.
Eugene’s success was groundbreaking, but it also raises the very real possibility of more advanced cybercrime. Bamboozling a panel of interrogators is one thing; a chatbot that can trick unsuspecting users on a computer network could do something far more sinister.