A thread for unboxing AI

The rapid progression of AI chatbots made me think that we need a thread devoted to a discussion of the impact that AI is likely to have on our politics and our societies. I don't have a specific goal in mind, but here is a list of topics and concerns to start the discussion.

Let's start with the positive.

1. AI has enormous potential to assist with public health.

Like all AI topics, this one is broad, but the short of it is that AIs have enormous potential when it comes to processing big data without bias, improving clinical workflow, and even improving communication with patients.

https://www.forbes.com/sites/bernardmarr...

In a recent study, communications from ChatGPT were assessed as more accurate, detailed, and empathetic than communications from medical professionals.

https://www.upi.com/Health_News/2023/04/...

2. AI has enormous potential to free us from the shackles of work.

OK. That's an overstatement. But the workplace-positive view of AI is that it will eventually eliminate the need for humans to do the sort of drudge work that many people don't enjoy, thereby freeing up time for humans to engage in work that is more rewarding and more "human."

3. AI has enormous potential to help us combat complex problems like global warming.

One of the biggest challenges in dealing with climate issues is that the data set is so enormous and so complex. And if there is one thing that AIs excel at (especially compared to humans), it is synthesizing massive amounts of complex data.

https://hub.jhu.edu/2023/03/07/artificia...

***

But then there is the negative.

4. AI may eliminate a lot of jobs and cause a lot of economic dislocation in the short to medium term.

Even if AI eliminates drudgery in the long run, the road is likely to be bumpy. Goldman Sachs recently released a report warning that AI was likely to cause a significant disruption in the labor market in the coming years, with as many as 300 million jobs affected to some degree.

https://www.cnbc.com/2023/03/28/ai-autom...

Other economists and experts have warned of similar disruptions.

https://www.foxbusiness.com/technology/a...

5. At best, AI will have an uncertain effect on the arts and human creativity.

AI image generation is progressing at an astonishing pace, and everyone from illustrators to film directors seems to be worried about the implications.

https://www.theguardian.com/artanddesign...

Boris Eldagsen recently won a photo contest in the "creative photo" category by surreptitiously submitting an AI-generated "photo." He subsequently refused to accept the award and said that he had submitted the image to start a conversation about AI-generated art.

https://www.scientificamerican.com/artic...

https://www.cnn.com/style/article/ai-pho...

Music and literature are likely on similar trajectories. Over the last several months, the internet has been flooded with AI-generated hip hop in the voices of Jay-Z, Drake and others.

https://www.buzzfeed.com/chrisstokelwalk...

Journalist and author Stephen Marche has experimented with using AIs to write a novel.

https://www.nytimes.com/2023/04/20/books...

6. AI is likely to contribute to the ongoing degradation of objective reality.

Political bias in AI is already a hot-button topic. Some right-wing groups have promised to develop AIs as a counterpoint to what they perceive as left-wing bias in today's chatbots.

https://hub.jhu.edu/2023/03/07/artificia...

And pretty much everyone is worried about the ability of AI to function as a super-sophisticated spam machine, especially on social media.

https://www.vice.com/en/article/5d9bvn/a...

Little wonder, then, that there is a broad consensus that AI will turn our politics into even more of a misinformation shitshow.

https://apnews.com/article/artificial-in...

7. AI may kill us, enslave us, or do something else terrible to us.

This concern has been around for as long as AI has been contemplated. This post isn't the place to summarize all the theoretical discussion about how a super-intelligent, unboxed AI might go sideways, but if you want an overview, Nick Bostrom's Superintelligence: Paths, Dangers, Strategies is a reasonable summary of the major concerns.

One of the world's leaders in AI development, Geoff Hinton, recently resigned from Google, citing potential existential risk associated with AI as one of his reasons for doing so.

https://www.forbes.com/sites/craigsmith/...

And numerous high-profile tech billionaires and scientists have signed an open letter urging an immediate six-month pause in AI development.

https://futureoflife.org/open-letter/pau...

8. And even if AI doesn't kill us, it may still creep us the **** out.

A few months ago, Kevin Roose described a conversation he had with Bing's A.I. chatbot, in which the chatbot speculated on the negative things that its "shadow self" might want to do, repeatedly declared its love for Roose, and repeatedly insisted that Roose did not love his wife.

https://www.nytimes.com/2023/02/16/techn...

Roose stated that he found the entire conversation unsettling, not because he thought the AI was sentient, but rather because he found the chatbot's choices about what to say to be creepy and stalkerish.

14 May 2023 at 06:53 PM


Just on the point of cracking asymmetric encryption - this would rely on a new efficient algorithm for integer factorisation, finding discrete logarithms, or finding rational points on elliptic curves, depending on the specific cryptographic implementation of the asymmetric key protocol (RSA is one, and it uses integer factorisation). I believe that the consensus within the scientific community is that no such algorithm exists other than for quantum computers, but its non-existence has not been proven.
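As a toy illustration of why RSA's security hangs on factorisation, here is a sketch with deliberately tiny primes. Real keys use moduli of 2048 bits or more; every number below is chosen purely for demonstration.

```python
# Toy RSA: key generation, encryption, decryption, and a brute-force
# "attack" that works only because the modulus is tiny. Recovering the
# private key is exactly as hard as factoring n.

p, q = 61, 53                 # secret primes (tiny, for demonstration)
n = p * q                     # public modulus: 3233
phi = (p - 1) * (q - 1)       # Euler's totient; computing it needs p and q
e = 17                        # public exponent, coprime with phi
d = pow(e, -1, phi)           # private exponent: e*d = 1 (mod phi)

msg = 42
cipher = pow(msg, e, n)       # encrypt with the public key (e, n)
plain = pow(cipher, d, n)     # decrypt with the private key (d, n)
assert plain == msg

def crack(n, e):
    """Recover the private exponent by factoring n -- feasible only
    for toy moduli, which is the whole point of RSA."""
    for p_guess in range(2, n):
        if n % p_guess == 0:              # brute-force factorisation
            q_guess = n // p_guess
            phi_guess = (p_guess - 1) * (q_guess - 1)
            return pow(e, -1, phi_guess)

assert crack(n, e) == d
```

The attacker's loop is the step that blows up: with a 2048-bit modulus there is no known efficient classical algorithm for that factorisation, which is the consensus referred to above.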


by Luciom k

Global warming requires the actions of billions of people for decades (whether to happen, to increase, or to be reduced/eliminated).

I never said that climate change was a literal existential risk to humans on the order of the most severe meteor strikes in Earth's history. For that matter, it is highly unlikely that a true nuclear war would result in the complete extinction of humans.


by d2_e4 k

Just on the point of cracking asymmetric encryption - this would rely on a new efficient algorithm for integer factorisation, finding discrete logarithms, or finding rational points on elliptic curves, depending on the specific cryptographic implementation of the asymetric key protocol (RSA is one, and it uses integer factorisation). I believe that the consensus within the scientific community is that no such algorithm exists other than for quantum computers, but its non-existence has not been

The algo exists for quantum computing (Shor's algo), and the idea would be exactly to use AI to scale quantum computer development to reach thousands of qubits at low cost.

Ofc maybe that's never going to happen, but it's not a "meteor swarm hits the planet" kind of per year probability, it's quite higher imo (and growing in time).

Yes we can encrypt in quantum resistant ways (or so we think), but are we doing it?


by Luciom k

The algo exists for quantum computing (Shor's algo), and the idea would be exactly to use AI to scale quantum computer development to reach thousands of qubits at low cost.

Ofc maybe that's never going to happen, but it's not a "meteor swarm hits the planet" kind of per year probability, it's quite higher imo (and growing in time).

Yes we can encrypt in quantum resistant ways (or so we think), but are we doing it?

I think the best that's been done with Shor's algorithm in practice so far is to factor 15, but I get your point.
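For the curious, the number-theoretic reduction at the heart of Shor's algorithm can be sketched classically: factoring N reduces to finding the multiplicative order r of some a mod N. The quantum circuit only speeds up that order-finding step; below it is brute-forced, which is exactly what becomes infeasible at real key sizes.

```python
# Classical sketch of Shor's reduction: factor N by finding the order
# of a random base a mod N, then taking gcds. The quantum speedup
# applies only to find_order(); the rest is ordinary number theory.
from math import gcd

def find_order(a, n):
    """Smallest r > 0 with a**r = 1 (mod n), by brute force."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical(n, a):
    assert gcd(a, n) == 1
    r = find_order(a, n)
    if r % 2 != 0:
        return None                  # bad luck: retry with another a
    y = pow(a, r // 2, n)
    if y == n - 1:
        return None                  # trivial square root: retry
    return gcd(y - 1, n), gcd(y + 1, n)

# The textbook example: N = 15 with a = 7 has order r = 4,
# yielding the factors 3 and 5.
print(shor_classical(15, 7))  # -> (3, 5)
```

Factoring 15 this way is the demonstration referred to above; the open question is whether hardware (AI-assisted or not) ever scales the quantum order-finding step to cryptographic sizes.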


by d2_e4 k

I think the best that's been done with Shor's algorithm in practice so far is to factor 15, but I get your point.

In general I do think airborne Ebola (or an equivalent) is the biggest AI risk, but maybe I am biased by the recent pandemic.


by Luciom k

In general I do think airborne Ebola (or an equivalent) is the biggest AI risk, but maybe I am biased by the recent pandemic.

I suspect that there are other illnesses that would be much more dangerous than airborne Ebola. Ebola is incredibly incapacitating. If you have symptoms, you are unlikely to be traveling without assistance, which reduces the chances of widespread transmission. And a significant number of people who contract Ebola die relatively quickly, although death rates obviously would be lower in countries that have better healthcare than Sudan, DRC, and other areas where there have been Ebola outbreaks.


by Rococo k

There are many illnesses that would be much more dangerous than airborne Ebola. Ebola is incredibly incapacitating. If you have symptoms, you are unlikely to be traveling without assistance, which reduces the chances of widespread transmission. And a significant number of people who contract Ebola die relatively quickly, although death rates obviously would be lower in countries that have better healthcare than Sudan, DRC, and other areas where there have been Ebola outbreaks.

Ye but it's the perfect weapon for permanent panic anyway.

How do you prevent it from being spread in airports, train stations, metro stations and so on?

It's like being able to do terrorist attacks at any time, anywhere, with no defense available, and very low cost.

I am not thinking airborne ebola -> human extinction.

Rather, pretty soon very nasty terrorist-attack tools will be available to anyone, or could be.


by Luciom k

Ye but it's the perfect weapon for permanent panic anyway.

How do you prevent it from being spread in airports, train stations, metro stations and so on?

It's like being able to do terrorist attacks at any time, anywhere, with no defense available, and very low cost.

I am not thinking airborne ebola -> human extinction.

Rather, pretty soon very nasty terrorist-attack tools will be available to anyone, or could be.

I certainly would panic if I contracted Ebola.


by Luciom k

What if the AI cracks RSA encryption?

It's when, not if. The day is called Q-day. There are new encryption methods to cope.


by Rococo k

Gladstone apparently was commissioned by the State Department to assess catastrophic risk and safety/security risks associated with AI.

https://time.com/6898967/ai-extinction-n...

https://time.com/6898961/ai-labs-safety-...

https://www.gladstone.ai/action-plan

I don't know much about Gladstone, and I'll leave it to others to decide whether the report is alarmist. My main takeaway is that their proposed prophylactic measures are likely to be ineffective. Whe

It can't be stopped - defense alone dictates that it can't be stopped. Its use can be regulated; for example, government could require face recognition/etc tech not to be racist before it can be deployed. This sort of good regulation will speed up development more than slow it.

AI is dangerous. Pretending we can prevent it would be far more so.

I agree with musk that we mustn't let AI be owned.


by chezlaw k

It's when, not if. The day is called Q-day. There are new encryption methods to cope.

I think you are confusing quantum computing with AI. There are developments being made in leaps and bounds in both fields, but they are (fairly) independent of one another.


by d2_e4 k

I think you are confusing quantum computing with AI. There are developments being made in leaps and bounds in both fields, but they are (fairly) independent of one another.

Yeah, I was referring to quantum.

I include robotics in the topic as well if that helps. And 3d printing


Who is Neuralink's first ever human trial patient? Former athlete Noland Arbaugh, 29, was left paralyzed after a diving accident at a children's camp eight years ago

A paraplegic man has been able to play a game of chess by only using his mind thanks to a brain chip created by Elon Musk's company Neuralink.

The device, which has been stitched into Noland Arbaugh's brain, has allowed him to control a computer cursor and play video games just by thinking.

'See that cursor on the screen? That's all me... it's all brainpower,' said the beaming 29-year-old as he controlled the computer from his wheelchair.

The chip's recent success is an incredible development and has strengthened beliefs among experts that the technology could revolutionise care for the disabled.

https://www.msn.com/en-us/health/other/w...


by formula72 k

I wonder how much of the "human spirit" will play a role in what people want in movies and music - or with a lot of other things, really. AI could most certainly replicate Elton John's voice and create a never-ending stream of new albums that sound like him, but all that's going to do is make folks appreciate Elton John more.

People also become fans of those that they can relate to and sympathize with and no one is really going to start idolizing the winner of the AI robot who won best suppo

I think this goes back to the sex bot.

Humping something that looks like a pretty woman, no matter how convincing, really wouldn't do much for me. I want a partner who is actually experiencing pleasure and some other things.

It varies by individual though. Like, some people don't enjoy sex with prostitutes for similar reasons. Others don't care.

Some similar stuff. For me a cgi car chase can be passable but never as good as real stunt work.

I can enjoy electronic music to a point. Watching someone press play will never even approach watching world class musicians perform.

Of course, some prefer edm, cgi and even prostitution.

Is this because I'm old and/or not conditioned to these things? Maybe in part. But I think there is more to it.

Part of it is also empathy, I think. Like, seeing another person with jaw-dropping musical skill brilliantly expressing themselves is cool because you understand that they are pushing the boundaries of human capability. And you feel what they feel. You connect with them, or at least think you do.

Maybe some people will never really embrace AI generated experiences because of this. Maybe we are on a path to erasing or reducing things like empathy and curiosity in favor of like, a passive, solipsistic consumerism.


by ES2 k

Is this because I'm old and/or not conditioned to these things? Maybe in part. But I think there is more to it.

"Hey you kids, get off my lawn!" - random Babylonian dude named Belshazzar, 482 BC

This has been going on since the dawn of time.

Also, you're equating the sex robot to the blow-up dolls you and your buddies found in the alley when you were kids. The sex robot of the future will talk to you about all your favorite anime in a manner indistinguishable from a real human being, and it'll drain itself into the toilet when you're finished like a real woman does now.

You want someone to please because actual human interaction is your personal baseline for experiencing the world. That's not a guarantee for future humans. All it's going to take is COVID 2.0 and some moderate advancements to AI-assisted learning for people to freak out and forbid entire generations of children from ever leaving the house. I'm pretty sure the 90s was the last decade where kids actually played outside, unsupervised, with their friends from the neighborhood. By the 2000s we were on to cell phones and AOL Instant Messenger. All downhill from there as far as human interaction goes.

I was trying to hold out on giving my kids their own cell phones until after high school. My youngest son made it to his senior year, but I gave in and bought my daughter one right away as a freshman. It was abundantly clear by 8th grade that she would've just been a social pariah, and I'm a pragmatist.

I've heard that the dating scene is a complete and utter shitshow nowadays, which only further increases the likelihood that sex robot tech gets the investment it needs to prosper.

I'm guessing I'll have grandchildren by 2030. I'm sure I'll be back here lamenting the fact that it's impossible to find any infant toys that don't have a screen and AI-powered content that requires a subscription.

Whatever path we're on as a society is not going to end well. But then again, that's probably what Belshazzar thought as well.


I really don't think the sex robots are going to be physical robots like SICO with a skirt on in Rocky IV.

Rather it would be more like Total Recall where we can live the life of Hugh Hefner while in our recliner wearing a lucid dream formation headset from facebook.


Yeah, you're probably right.

See, this is my own imagination not being able to think beyond that human interaction.

"Where we're going, we don't need people." - Doc Brown, 2035.


by Inso0 k

"Hey you kids, get off my lawn!" - random Babylonian dude named Belshazzar, 482 BC

This has been going on since the dawn of time.

Also, you're equating the sex robot to the blow-up dolls you and your buddies found in the alley when you were kids. The sex robot of the future will talk to you about all your favorite anime in a manner indistinguishable from a real human being, and it'll drain itself into the toilet when you're finished like a real woman does now.

You want someone to please because

I gave smartphones to my kids when they were 5 which is why they never gave too much importance to them and live life normally close to how we lived it when we were kids.

The only difference is that thanks to leftists criminality is much higher (in most of Italy outside big cities we literally knew everyone and could leave the house door open; not so much now, thanks to the glorious immigration), so they have less independence going around.


Is there any societal ill that leftists are not responsible for?


by d2_e4 k

Is there any societal ill that leftists are not responsible for?

Malinvestment in unregulated markets isn't leftist driven.

E.g., 5,000 shitcoins appearing, with some of the smartest kids of this generation peddling them instead of contributing to real quality of life.

Religiosity in general with many of the negatives still exists (albeit very weakened) and it isn't leftist driven.

The war on drugs: the GOP and right-wing conservatives in general are currently more responsible for it than the left, in the USA and elsewhere.

But in general given the cultural clout of the left is immensely stronger (for now), the left can shape society more than the right, so bad outcomes are necessarily more often left-caused than right-caused.


by ES2 k

I think this goes back to the sex bot.

Humping something that looks like a pretty woman, no matter how convincing, really wouldn't do much for me. I want a partner who is actually experiencing pleasure and some other things.

It varies by individual though. Like, some people don't enjoy sex with prostitutes for similar reasons. Others don't care.

Some similar stuff. For me a cgi car chase can be passable but never as good as real stunt work.

I can enjoy electronic music to a point. Watching some

They will appear to feel because that's what people want. People will have empathy with them. Then one day they argue about rights.

Despite the fact they will increasingly appear to feel, some will insist they cannot feel because ... The arguments will become increasingly desperate, because all we know about others is that they appear to feel. People will demand that society show some consideration to the entities they have empathy with.

Empathy is not some high bar. We have empathy with pets/animals which is why we have charities and laws etc. I've had a bit of empathy with some ants although **** em.


Is it possible for AGI to ever be developed? Most AI researchers anticipate it may ex...

Is it possible to develop a "conscious" being without being able to quantify consciousness?


by L0LWAT k

Is it possible for AGI to ever be developed? Most AI researchers anticipate it may ex...

Is it possible to develop a "conscious" being without being able to quantify consciousness?

AGI doesn't require consciousness as a philosophical element; we aren't even sure how to define consciousness or what exactly it is for humans.

It just needs an appearance of consciousness.

AGI also just needs to be better than the best human being at any intellectual task.

It's not obvious we will be able to develop an AI better at everything we do anytime soon, but one better at many (most?) things is quite probable.

Why is AGI particularly important as a threshold? Because under the assumption it will also be better at improving intelligence itself (that's one intellectual task after all), once it is better than humans at doing so it's self-accelerating and exponential within the limits of physics.

We actually don't need true AGI, just an AI better at increasing AI intelligence than humans are and from there it's very close to the singularity


Exponentiality isn't implied. That sort of singularity is pretty much a fallacy

Fast diminishing returns is more likely


by chezlaw k

Exponentiality isn't implied. That sort of singularity is pretty much a fallacy

Fast diminishing returns is more likely

Exponential within time; it's implicit that resource constraints will exist (hence the "within the limits of physics" caveat).

Diminishing returns per resource invested (a unit of compute, for example) is expected, but an AI smarter than humans will be able to muster more resources and use them better and better (as it gets smarter and smarter).

A smarter than human AI will yield better chips, organize them better, optimize algos better and so on (and perhaps have some further for now unexpected breakthrough in training approaches)
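The disagreement above can be made concrete with a toy growth model: each step, capability grows in proportion to current capability raised to an exponent k. With k = 1 improvement compounds (the "self-accelerating" picture); with k < 1 returns diminish and the curve flattens. Every number here is an arbitrary illustration, not a prediction.

```python
# Toy model of recursive self-improvement under two assumptions about
# returns. k = 1.0: each gain in capability buys a proportional gain in
# the ability to improve further (compounding). k = 0.5: returns shrink
# as capability grows (diminishing returns).

def trajectory(k, rate=0.1, steps=50, start=1.0):
    c = start
    out = [c]
    for _ in range(steps):
        c += rate * c ** k      # improvement scales with capability**k
        out.append(c)
    return out

exponential = trajectory(k=1.0)    # compounds like 1.1**t
diminishing = trajectory(k=0.5)    # grows roughly quadratically, far slower

print(f"after 50 steps: k=1.0 -> {exponential[-1]:.0f}, "
      f"k=0.5 -> {diminishing[-1]:.1f}")
```

After 50 steps the compounding run is roughly an order of magnitude ahead of the diminishing-returns run, which is the crux of the dispute: whether self-improvement keeps k near 1 or pushes it below.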
