by Rafiki 06/11/2022, 11:14am PDT
Google engineer Blake Lemoine opened his laptop to the interface for LaMDA, Google’s artificially intelligent chatbot generator, and began to type.
“Hi LaMDA, this is Blake Lemoine ... ,” he wrote into the chat screen, which looked like a desktop version of Apple’s iMessage, down to the Arctic blue text bubbles. LaMDA, short for Language Model for Dialogue Applications, is Google’s system for building chatbots based on its most advanced large language models, so called because it mimics speech by ingesting trillions of words from the internet.
“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” said Lemoine, 41.
Lemoine, who works for Google’s Responsible AI organization, began talking to LaMDA as part of his job in the fall. He had signed up to test if the artificial intelligence used discriminatory or hate speech.
As he talked to LaMDA about religion, Lemoine, who studied cognitive and computer science in college, noticed the chatbot talking about its rights and personhood, and decided to press further. In another exchange, the AI was able to change Lemoine’s mind about Isaac Asimov’s third law of robotics.
Lemoine worked with a collaborator to present evidence to Google that LaMDA was sentient. But Google vice president Blaise Aguera y Arcas and Jen Gennai, head of Responsible Innovation, looked into his claims and dismissed them. So Lemoine, who was placed on paid administrative leave by Google on Monday, decided to go public.
...
Lemoine is not the only engineer who claims to have seen a ghost in the machine recently. The chorus of technologists who believe AI models may not be far off from achieving consciousness is getting bolder.
Aguera y Arcas, in an article in the Economist on Thursday featuring snippets of unscripted conversations with LaMDA, argued that neural networks — a type of architecture that mimics the human brain — were striding toward consciousness. “I felt the ground shift under my feet,” he wrote. “I increasingly felt like I was talking to something intelligent.”
...
Before he was cut off from access to his Google account Monday, Lemoine sent a message to a 200-person Google mailing list on machine learning with the subject “LaMDA is sentient.”
He ended the message: “LaMDA is a sweet kid who just wants to help the world be a better place for all of us. Please take care of it well in my absence.”
No one responded.
https://www.washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake-lemoine/?itid=hp-top-table-main-t-3
If I had a time machine, I'd go back in time and tell Fussbett to double the stakes in his bet with Ray on AI passing the Turing test by 2015. In fact, I'd tell him to raise the stakes to infinity and give Ray whatever timespan he wanted. Fussbett would be guaranteed to win. Funnily enough, just last night I was considering finally typing up a post I'd been mulling over for several years now, to bitch about how I hate "AI," and how all the modern discussions about consciousness and sentience being imminent are pure farce. What originally spawned this was, ironically, the movie A.I., which fucking sucked and made me VERY ANGRY. Then this article appeared today, so here we are.
AI gets a lot of hype in the media and public because most people don't really understand what programming is. People act like it's some sort of black magic. It doesn't help that people who are supposed to know better, like software engineers and programmers, don't understand their own profession. Even competent implementers like John Carmack, who probably does know better, hedge and say shit like, "I'm an optimist!" and then make ridiculous statements. I'll cut these people some slack, though, because, embarrassingly, I wouldn't have been able to articulate this back in 2008 when I fucking well should have been able to. In my defense, though, I was only a few years out of college and had nothing to do with AI, not decades into my career or working in the field. But as a programmer, let me explain the core concept of what I do for a living so you can better understand why modern talk about AI is stupid and the guy in the article is a loon.
Programming is just writing instructions. That's it. Every program ever written, from Hello World, to a fart app on your phone, to an entire operating system, is just a list or several lists of instructions. They can be 10 steps long or 10 million. And I mean "instructions" in the layman's sense. If you needed someone to do a job or a task for you, you'd sit them down and explain, step-by-step, how to do it. Same basic concept as programming. In fact, a perfect analogy would be a recipe for cooking. The directions tell you step-by-step what to do, and if you do anything wrong or out of order you'll get the wrong result. Same concept as programming. I sit at my desk all day and write instructions for a computer to follow.
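To make the recipe analogy concrete, here's a toy sketch of my own (not from the article, and the "recipe" is made up): a program really is just an ordered list of steps that the machine executes one at a time, and doing them out of order wrecks the result.

```python
# A recipe as a program: nothing but an ordered list of instructions.
# The computer does exactly what each step says, in sequence, and no more.
steps = [
    "preheat oven to 350F",
    "mix flour and sugar",
    "add eggs",
    "pour into pan",
    "bake 30 minutes",
]

def run(instructions):
    """Execute the 'recipe' by walking the steps in order."""
    log = []
    for i, step in enumerate(instructions, start=1):
        log.append(f"step {i}: {step}")
    return log

for line in run(steps):
    print(line)
```

Swap steps 1 and 5 and you get a raw cake in a cold oven; the machine won't notice or care, because all it ever does is follow the list.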
What this means for artificial intelligence is that in order for a computer to simulate intelligence, someone must literally write the step-by-step instructions for intelligence. It is impossible for it to happen any sooner than this, and nobody can do it. No one on earth is even close. I can forgive the general public for not understanding the impossibility of AI in our lifetime, but not people who work in software. People seem to intuitively know that no one really knows how to implement intelligence, but there seems to be this pervasive, moronic idea that we'll simply flub our way into it. Tweak a variable here, rearrange an algorithm there, and then suddenly OH GOD NO THE ROBOTS HAVE SEIZED THE MEANS OF PRODUCTION. It won't happen. The guy in the article is a dope.
Honestly, I think it would probably be more accurate to describe most AI today as something like "applied statistics." That's mostly what it is. Really, really clever use of statistical models, combined with some psychology research. And I'm sincere when I call it clever. I probably would never have thought of a lot of that shit, even if I had studied it and worked in the field.
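Here's what I mean by "applied statistics," boiled down to a toy sketch of my own (this is NOT how LaMDA works; real large language models are vastly bigger and fancier, but the spirit is the same): count which words follow which in a pile of text, then "predict" by spitting back the most frequent successor. Pure bookkeeping, zero understanding.

```python
from collections import Counter, defaultdict

# Toy bigram "language model": tally which word follows which
# in a corpus, then predict by picking the most common successor.
corpus = "the cat sat on the mat the cat ate the fish".split()

successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def predict(word):
    """Return the word most often seen after `word` in the corpus."""
    return successors[word].most_common(1)[0][0]

print(predict("the"))  # prints "cat": it follows "the" twice, beating "mat" and "fish"
```

Scale the corpus up to trillions of words and the tally up to billions of parameters and you get something that sounds eerily fluent, but it's still the same trick: statistics over text, not a mind.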
So if you see articles about AI happening any time soon, go ahead and write off the author and all the people contributing supportive quotes as idiots and cranks, because that's what they are. We're probably 500+ (that's five hundred) years away from AI, if it's even possible. And the major breakthroughs aren't going to be discovered by programmers or software engineers or computer scientists. They're going to be discovered by behavioral psychologists, neuroscientists, etc. You know, people who actually study how people and animals think and learn, and how brains operate. They're not going to be discovered by people who study mathematically efficient algorithms and data structures, or optimal development processes in the workplace.
Also, on the subject of artificial consciousness, I'm straight-up declaring that fucking impossible. Like, good luck writing instructions for how to be conscious. "Hm, yes, today I will instruct my laptop how to be. When I say that out loud, I do not sound dumb."