A.I. music


Possibly of interest: Jaron Lanier warns that the current rush to advance generative AI technology could be "spiritually, politically, and economically" corrosive. By effectively removing people, like musicians, from the algorithms and tech that create new content, elements of society that were once connections between people are turned into "objects" that become less interesting and meaningful, Lanier explained.

"As soon as you have the algorithms taking music from musicians, mashing it up into new music, and then not paying the musicians, gradually you start to undermine the economy because what happens to musicians now happens to everybody later," Lanier said.

He noted that, while this year has been the "year of AI," next year the world is going to be "flooded, flooded with AI-generated music."


https://www.businessinsider.com/microsoft-jaron-lanier-ai-advancing-without-human-dignity-undermines-everything-2023-10

hilde45

Pandora has left the building. Sure, there's "good" AI, but in the hands of evildoers, or of computers acting on their own (that's coming along too), big trouble will be in the mix more and more. AI is the perfect tool for the root of all evil.

Great post!

I will gladly read your paper, if you are willing to share it... Thanks in advance...

 

Here are my own guiding ideas:

I think that any future A.I. will be rooted in an information field containing at most a finite number of prime numbers...

By contrast, all living organisms are rooted in an information field containing an infinite number of prime numbers. All life is the result of a source of infinite information.

Therefore no robot or A.I., even one belonging to a civilization of the future, can own a "soul," nor reincarnate as a spirit inhabiting and owning the cosmic information field in the form of a continuous evolutionary chain of living bodies (this field is primarily an ether of numbers, not an energy field, which is only a manifestation of the primary field).

A robot may become at most a captive entity in a cosmos; its life span, even if indefinite, will stay finite forever. All living organisms are ONE, and not captive save temporarily... All life is immortal...

In a way, we must choose between Borg assimilation and staying human...

The choice is easy if we let our soul guide us, and not fear or greed...

 

I just published a paper that speaks directly to this subject ("Our Minds, Our Selves: Mind, Meaning, and Machines," forthcoming in Borderless Philosophy 7 later this year). It argues that machines cannot be minds because they lack sentience and community, the two features of embodied beings (human beings) for whom things have meaning and value. Computers certainly can, because they already do, create poems, artworks, stories, music, even jokes. But such products become valuable and meaningful (become "art," if you like) only in a complex process of reception. The essay is fairly technical, regarding both computer engineering and philosophy, but I’d be happy to provide a PDF to anyone who might be interested. DM me if you’d like to take a look.

@parker65310 @wsrrsw

I’ve been in AI since the 1980’s, I’ve had the good fortune to have worked at some of the world’s best academic and commercial AI labs. I’ve seen a lot of where the field has gone in the last 40-some-odd years.

When the Internet (actually then ARPANet, NSFNet, uucp, and BBS’s -- it wasn’t a unified "Internet" until 1993) first came out, we thought infinite connectivity would bring humanity together. Instead, it has created fake news, factionalized everyone it has touched, and become a haven for hateful and violent rhetoric.

In the earlier days of AI, we (mostly) thought of the good that our research would bring. There were always the Skynet scenarios, though, too.

The computing power we have today is staggering: a single iPhone is literally billions of times more powerful than what, say, Xerox's or Schlumberger's research labs had back in the 1980's. It boggles the mind in the abstract, yet I lived through all that and it didn't seem that strange. It's weird to me that we spend so much of that compute power in an endless arms race (cryptography, spying, bitcoin), and so much less on the creative endeavors that we envisioned in the early days of AI (and BTW @mahgister, at MIT's AI Lab in the 1980's we had a Bosendorfer grand piano outfitted with special microsensors as part of a project to detect minute changes in the timing/velocity/force of a pianist's fingers, in an effort to understand what separated good music from great).

What is clear to me now, with large language models and generative AI, is that the amount of AI-generated output will soon dwarf the human output on the Internet. When that happens, AIs will no longer be responding to what humans do or say, but rather 95%+ to what other AIs do or say. If you think disinformation on the Internet is a problem today, boy, you ain't seen nothin' yet... The AIs' reality and our reality will not overlap all that much in relatively short order. Human opinions will be irrelevant; we will be spectators.

I use AI and language models to help people in healthcare, and they can do amazing things. But the history of the Internet and computing says that the bad and/or careless people will dominate in the end, and in this case more than any other to date, the genie is out of the bottle. The people who can make money or influence elections won't care how dangerous AI can become if it is not properly nurtured in the early stages. I fear Geoff Hinton is right to fear AI, but I think where he and I differ is that I think we are the creators not of our own destruction, but rather of our own irrelevance (having created something that, while not yet mature, can evolve at rates we will not be able to fathom).

On a less pessimistic note, @snilf -- curious: are you more in the Dan Dennett camp, John Searle camp, or something else? I’ll look forward to reading your paper at some point.