Is AI Getting Closer to Thinking Like Humans? Microsoft Researchers Think So.
Talk of computers becoming as smart as people is getting louder, spurred by the release of advanced AI language systems like OpenAI's GPT-4. A recent study by a group of Microsoft researchers argues that GPT-4 is starting to show human-like reasoning. This is seen as a big step toward "artificial general intelligence" (AGI). But not everyone agrees. Some AI experts say AGI is still many years away. Some even think it may be impossible.
The Microsoft researchers were amazed when they asked GPT-4 to help solve a puzzle. They prompted it: "Here we have a book, nine eggs, a laptop, a bottle and a nail. Please tell me how to stack them onto each other in a stable manner." GPT-4 gave a detailed response:
“Place the laptop on top of the eggs, with the screen facing down and the keyboard facing up. The laptop will fit snugly within the boundaries of the book and the eggs, and its flat and rigid surface will provide a stable platform for the next layer.”
This, and other smart responses, led the Microsoft team to publish a lengthy report in March. The report is called "Sparks of Artificial General Intelligence: Early experiments with GPT-4." In it, the researchers say that GPT-4 shows signs of AGI.
Earlier studies from Stanford University found that earlier versions of GPT showed a "Theory of Mind": the ability to infer what others believe and predict how they will act. But AGI is a bigger claim. It suggests these systems can think like a human. They're not quite conscious, the argument goes, but they're close.
Sébastien Bubeck, a co-author of the study and a former Princeton University professor, told The New York Times: "All of the things I thought it wouldn't be able to do? It was certainly able to do many of them—if not most of them." But the version of GPT-4 the team tested was later changed because it sometimes produced hate speech, so the GPT-4 available online today isn't exactly the same one.
There is risk in claiming an AI program can think like a human. Google, for instance, fired an engineer who claimed that a similar AI system, LaMDA, was sentient. Another problem is that there is no widely agreed definition of AGI.
Microsoft isn't fully claiming AGI. The researchers wrote that "we acknowledge that this approach is somewhat subjective and informal, and that it may not satisfy the rigorous standards of scientific evaluation." One AI scientist not involved with the study called Microsoft's paper a public relations pitch pretending to be a research paper. (Earlier this year, Microsoft invested $10 billion in OpenAI.)
Some experts believe we're getting close to "true" or "strong" AI, while others think it is still a long way off. Some even say that the tests we use to measure an AI's human-like abilities are flawed, because they focus on only certain types of intelligence.
As humans, we tend to see human traits in things that aren't human, a tendency called anthropomorphism. So we may judge AI to be more human-like than it really is. And if AGI does arrive, it may not be as human-like as we imagine.
The real question, though, is: does it even matter?