I’ve been in AI since the 1980s, and I’ve had the good fortune to work at some of the world’s best academic and commercial AI labs. I’ve seen a lot of where the field has gone over the last 40-some-odd years.
When the Internet (actually then ARPANET, NSFNET, UUCP, and BBSes -- it wasn’t a unified "Internet" until 1993) first came out, we thought infinite connectivity would bring humanity together. Instead, it has created fake news, factionalized everyone it has touched, and become a haven for hateful and violent rhetoric.
In the earlier days of AI, we (mostly) thought of the good that our research would bring. There were always the Skynet scenarios, too, though.
A single iPhone today is literally billions of times more powerful than the computers at, say, Xerox’s or Schlumberger’s research labs back in the 1980s. It boggles the mind in the abstract, yet I lived through all of it and it didn’t seem that strange. It’s weird to me that we spend so much of that compute power on an endless arms race (cryptography, spying, Bitcoin), and so much less on the creative endeavors we envisioned in the early days of AI. (And BTW @mahgister, at MIT’s AI Lab in the 1980s we had a Bösendorfer grand piano outfitted with special microsensors as part of a project to detect minute changes in the timing, velocity, and force of a pianist’s fingers, in an effort to understand what separated good music from great.)
What is clear to me now, with large language models and generative AI, is that the amount of AI-generated output will soon dwarf human output on the Internet. When that happens, AIs will no longer be responding to what humans do or say, but rather 95%+ to what other AIs do or say. If you think disinformation on the Internet is a problem today, boy, you ain’t seen nothin’ yet... In relatively short order, the AIs’ reality and our reality will not overlap all that much. Human opinions will be irrelevant; we will be spectators.
I use AI and language models to help people in healthcare, and they can do amazing things. But the history of the Internet and computing says that the bad and/or careless people will dominate in the end, and in this case, more than any other to date, the genie is out of the bottle. The people who can make money or influence elections won’t care how dangerous AI can become if it isn’t properly nurtured in the early stages. I fear Geoff Hinton is right to fear AI, but where he and I differ is that I think we are the creators not of our own destruction, but of our own irrelevance: having created something that, while not yet mature, can evolve at rates we will not be able to fathom.
On a less pessimistic note, @snilf -- curious: are you more in the Dan Dennett camp, the John Searle camp, or somewhere else? I’ll look forward to reading your paper at some point.