Is AI Being Used to Post Threads on Audiogon?


I don’t understand AI, but I have noticed an uptick in orphan threads from first-time posters. Some seem a bit out of place. It is as if someone is testing the waters to see whether AI can pass as a human forum member. I’ll accept that such a suspicion might be off the mark.

vonhelmholtz

Why not? I’ve read that bots can sign up for internet accounts, and from there, if you’ve ever played with the freely available "AI," you can ask it to write a contract, for example, and it will do that. A little prosaic, maybe lacking some nuance, but it will be natural language and will look OK.

I find the area fascinating. I don’t regard these things as "intelligence" even if they pass a Turing Test. They are simply regurgitating an amalgam of what has been loaded into them. One person told me that they don’t even seek clearance from content owners, they scrape. So that ingestion stage, unless there are safeguards, means they have access to a whole variety of material.

This raises a lot of interesting legal and policy issues. I’m trying to get a better understanding of it, even though I can’t write code worth poop and am really an old analog guy.

If I were to speculate, they have to have data crunchers to sift through the massive amounts of data collected. That’s simply another side of the same coin in some ways.

This is gonna be a fast-moving field, and maybe, like "meh, the Internet is a fad," it won’t catch on, but realistically I think basic machine interactions should be a given at this point. A friend told me that in China, they plug their health ID into an authorized receptacle, and the doctor can see all your records from any source.

Meanwhile, our medical system uses fax for HIPAA security. Telex, anybody? :)

I am not usually a "Chicken Little," but AI, in my opinion, poses the largest threat to our safety we have ever seen.

Unless stringent safeguards are built into programs from the outset, we will not be able to control a technology that has no real understanding of human ethics or emotions.

If you doubt this is a threat, spend a few minutes listening to Geoffrey Hinton. Hinton is a British-Canadian cognitive psychologist and computer scientist, most noted for his work on artificial neural networks. He worked for Google Brain until recently, but quit when he realized the risks the technology posed.

 

AI is only as good as the people who write the code for it. There needs to be an Asimov-like set of laws that precludes AI from harming humans in any manner, like his:

The First Law is that a robot may not injure a human being or, through inaction, allow a human being to come to harm. The Second Law is that a robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law. The Third Law is that a robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.

If we don't, it could end up like what happened to the Krell in the movie Forbidden Planet. They built a machine that could materialize their thoughts, thinking it would create an Eden of sorts and be the pinnacle of their civilization.

They all synced in at night before going to sleep and ended up releasing the darkest regions of their Ids, destroying themselves overnight. We're about a third of the way there right now, thanks to the internet and its ability to close distances to almost nothing; now we're constantly bumping uncomfortably into one another.

A classic and prescient movie if there ever was one. 

All the best,
Nonoise