A.I. music


Possibly of interest: the current rush to advance generative AI technology could be "spiritually, politically, and economically" corrosive. By effectively removing people, like musicians, from the algorithms and tech that create new content, elements of society that were once connections between people are turned into "objects" that become less interesting and meaningful, Lanier explained.

"As soon as you have the algorithms taking music from musicians, mashing it up into new music, and then not paying the musicians, gradually you start to undermine the economy because what happens to musicians now happens to everybody later," Lanier said.

He noted that, while this year has been the "year of AI," next year the world is going to be "flooded, flooded with AI-generated music."


https://www.businessinsider.com/microsoft-jaron-lanier-ai-advancing-without-human-dignity-undermines-everything-2023-10

hilde45

@parker65310 @wsrrsw

I’ve been in AI since the 1980’s, and I’ve had the good fortune to work at some of the world’s best academic and commercial AI labs. I’ve seen a lot of where the field has gone over the last 40-some-odd years.

When the Internet (actually then ARPANET, NSFNET, UUCP, and BBSs -- it wasn’t a unified "Internet" until 1993) first came out, we thought infinite connectivity would bring humanity together. Instead, it has created fake news, factionalized everyone it has touched, and become a haven for hateful and violent rhetoric.

In the earlier days of AI, we (mostly) thought of the good that our research would bring. There were always the Skynet scenarios, though, too.

The computing power we have today is staggering: a single iPhone is literally billions of times more powerful than, say, Xerox’s or Schlumberger’s entire research labs back in the 1980’s. It boggles the mind in the abstract, yet I lived through all of it and it didn’t seem that strange. It’s weird to me that we spend so much of that compute power on an endless arms race (cryptography, spying, bitcoin), and so much less on the creative endeavors we envisioned in the early days of AI. (And BTW @mahgister, at MIT’s AI Lab in the 1980’s we had a Bosendorfer grand piano outfitted with special microsensors as part of a project to detect minute changes in the timing/velocity/force of a pianist’s fingers, in an effort to understand what separated good music from great.)

Now what is clear to me, with large language models and generative AI, is that the amount of AI-generated output will soon dwarf the human output on the Internet. When that happens, AIs will no longer be responding to what humans do or say, but rather 95%+ to what other AIs do or say. If you think disinformation on the Internet is a problem today, boy, you ain’t seen nothin’ yet... The AI’s reality and our reality will not overlap all that much in relatively short order. Human opinions will be irrelevant; we will be spectators.

I use AI and language models to help people in healthcare, and it can do amazing things. But the history of the Internet and computing says that the bad and/or careless people will dominate in the end, and in this case more than any other to date, the genie is out of the bottle. The people who can make money or influence elections won’t care how dangerous AI can become if not properly nurtured in the early stages. I fear Geoff Hinton is right to fear AI, but I think where he and I differ is that I think we are the creators not of our destruction, but rather of our own irrelevance (having created something, that while not yet mature, can evolve at rates we will not be able to fathom).

On a less pessimistic note, @snilf -- curious: are you more in the Dan Dennett camp, John Searle camp, or something else? I’ll look forward to reading your paper at some point.

Well, if you're into playing with an AI's 'mind' and introducing it into your concept of
all this (gesturing... ? ...'Turing'? *L*)

You might provide that and your experience with this lil pet.

No litter box or 'outs'....'feeding' something you do already.

I have no relationship with these people or their event...tempted? Well...

Think of a pet you can argue with, and yet teach it to be anything you'd consider it to do...

"...answer my SPAM cell calls for me... be indecisive... but cheerful..." 😏

Good conversation. I assume you are all real.

I appreciate @mahgister pointing out that the social and economic systems controlling A.I.’s development and uses require the greatest scrutiny. Of course, we’re all raised to resist criticisms of economic and political systems. In that sense, we have already ceded a fair amount of autonomy to algorithms -- it’s just that they’re human-made ideologies. (If you find yourself revolted at that idea, you might just be the victim of an ideology.)

Thanks for additional insight, @sfgak 
It’s nice to see someone not just shooting from the hip.

"I use AI and language models to help people in healthcare, and it can do amazing things. But the history of the Internet and computing says that the bad and/or careless people will dominate in the end, and in this case more than any other to date, the genie is out of the bottle. The people who can make money or influence elections won’t care how dangerous AI can become if not properly nurtured in the early stages."

Be nice if we cured cancer with A.I., no?

This will be a test about how much we care about our children and one another. We cannot give in to pessimism. We have to question our presumptions -- and that may mean questioning the profit-motive (sacred cow!) and any other fixed ideas which prevent us from safeguarding what we value.

As T.S. Eliot said, "For us, there is only the trying. The rest is not our business."