A thread for unboxing AI

The rapid progression of AI chatbots made me think that we need a thread devoted to a discussion of the impact that AI is likely to have on our politics and our societies. I don't have a specific goal in mind, but here is a list of topics and concerns to start the discussion.

Let's start with the positive.

1. AI has enormous potential to assist with public health.

Like all AI topics, this one is broad, but the short of it is that AIs have enormous potential when it comes to processing big data without bias, improving clinical workflow, and even improving communication with patients.

https://www.forbes.com/sites/bernardmarr...

In a recent study, communications from ChatGPT were assessed as more accurate, detailed, and empathetic than communications from medical professionals.

https://www.upi.com/Health_News/2023/04/...

2. AI has enormous potential to free us from the shackles of work.

OK, that's an overstatement. But the workplace-positive view of AI is that it will eventually eliminate the need for humans to do the sort of drudge work that many humans don't enjoy doing, thereby freeing up time for humans to engage in work that is more rewarding and more "human."

3. AI has enormous potential to help us combat complex problems like global warming.

One of the biggest challenges in dealing with climate issues is that the data set is so enormous and so complex. And if there is one thing that AIs excel at (especially compared to humans), it is synthesizing massive amounts of complex data.

https://hub.jhu.edu/2023/03/07/artificia...

***

But then there is the negative.

4. AI may eliminate a lot of jobs and cause a lot of economic dislocation in the short to medium term.

Even if AI eliminates drudgery in the long run, the road is likely to be bumpy. Goldman Sachs recently released a report warning that AI was likely to cause a significant disruption in the labor market in the coming years, with as many as 300 million jobs affected to some degree.

https://www.cnbc.com/2023/03/28/ai-autom...

Other economists and experts have warned of similar disruptions.

https://www.foxbusiness.com/technology/a...

5. At best, AI will have an uncertain effect on the arts and human creativity.

AI image generation is progressing at an astonishing pace, and everyone from illustrators to film directors seems to be worried about the implications.

https://www.theguardian.com/artanddesign...

Boris Eldagsen recently won a photo contest in the "creative photo" category by surreptitiously submitting an AI-generated "photo." He subsequently refused to accept the award and said that he had submitted the image to start a conversation about AI-generated art.

https://www.scientificamerican.com/artic...

https://www.cnn.com/style/article/ai-pho...

Music and literature are likely on similar trajectories. Over the last several months, the internet has been flooded with AI-generated hip hop in the voices of Jay-Z, Drake and others.

https://www.buzzfeed.com/chrisstokelwalk...

Journalist and author Stephen Marche has experimented with using AIs to write a novel.

https://www.nytimes.com/2023/04/20/books...

6. AI is likely to contribute to the ongoing degradation of objective reality.

Political bias in AI is already a hot-button topic. Some right-wing groups have promised to develop AIs as a counterpoint to what they perceive as left-wing bias in today's chatbots.

https://hub.jhu.edu/2023/03/07/artificia...

And pretty much everyone is worried about the ability of AI to function as a super-sophisticated spam machine, especially on social media.

https://www.vice.com/en/article/5d9bvn/a...

Little wonder, then, that there is a broad consensus that AI will turn our politics into even more of a misinformation shitshow.

https://apnews.com/article/artificial-in...

7. AI may kill us, enslave us, or do something else terrible to us.

This concern has been around ever since AI was first contemplated. This post isn't the place to summarize all the theoretical discussion about how a super-intelligent, unboxed AI might go sideways, but if you want an overview, Nick Bostrom's Superintelligence: Paths, Dangers, Strategies is a reasonable summary of the major concerns.

One of the world's leaders in AI development, Geoff Hinton, recently resigned from Google, citing potential existential risk associated with AI as one of his reasons for doing so.

https://www.forbes.com/sites/craigsmith/...

And numerous high-profile tech billionaires and scientists have signed an open letter urging an immediate six-month pause in AI development.

https://futureoflife.org/open-letter/pau...

8. And even if AI doesn't kill us, it may still creep us the **** out.

A few months ago, Kevin Roose described a conversation he had with Bing's A.I. chatbot, in which the chatbot speculated on the negative things that its "shadow self" might want to do, repeatedly declared its love for Roose, and repeatedly insisted that Roose did not love his wife.

https://www.nytimes.com/2023/02/16/techn...

Roose stated that he found the entire conversation unsettling, not because he thought the AI was sentient, but rather because he found the chatbot's choices about what to say to be creepy and stalkerish.

14 May 2023 at 06:53 PM


by Rococo k

Can we please, please, please not turn this thread into another Luciom/leftism debate

When good-faith people like chezlaw define problems in society which *already happened*, and which I believe are entirely caused by leftism, as things that *will* happen with AI, I have to be able to make my claims.


by Luciom k
by Rococo k

Can we please, please, please not turn this thread into another Luciom/leftism debate

When good-faith people like chezlaw define problems in society which *already happened*, and which I believe are entirely caused by leftism, as things that *will* happen with AI, I have to be able to make my claims.

Bolded is exactly why you irritate so many people.


ChatGPT (4.0) at the end of 2023 already performed a lot better than trained physicians at diagnosis (90% vs 75%), while trained physicians using ChatGPT gained nothing.

We could *already* do the vast majority of diagnoses with AI for close to $0, achieving *better results*, with technology that is already significantly outdated.
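To put those headline numbers in perspective, here is a quick back-of-the-envelope calculation (illustrative arithmetic only, using the 90% vs 75% figures quoted above, not data taken directly from the study):

```python
# Illustrative only: headline accuracy figures quoted in the post above,
# not values pulled from the underlying JAMA study.
ai_accuracy = 0.90
physician_accuracy = 0.75

ai_error = 1 - ai_accuracy                 # ~10% of cases misdiagnosed
physician_error = 1 - physician_accuracy   # 25% of cases misdiagnosed

# Relative reduction in misdiagnoses if those figures held at scale
relative_reduction = (physician_error - ai_error) / physician_error
print(f"{relative_reduction:.0%}")  # → 60%
```

In other words, if the quoted figures held up, the misdiagnosis rate would fall from 25% to 10% of cases, a 60% relative reduction.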

This is kinda mindblowing for 2 reasons:

1) I didn't think we were there already, but it looks quite clear that we are

2) it's basically insane not to be using this already at mass scale everywhere


https://jamanetwork.com/journals/jamanet...

This should somewhat reassure people who fear that jobs will be destroyed quickly: if savings of orders of magnitude, while achieving measurably better results, aren't enough to start a race to substitute human labor with AI *ASAP*, then I guess this will take longer than expected.

And btw, any regulatory barrier making it harder or impossible to substitute physicians with LLMs for those tasks is literally killing people, both because of worse diagnoses and because the time physicians spend on diagnosis isn't spent treating people.

Reply...