A thread for unboxing AI
The rapid progression of AI chatbots made me think that we need a thread devoted to a discussion of the impact that AI is likely to have on our politics and our societies. I don't have a specific goal in mind, but here is a list of topics and concerns to start the discussion.
Let's start with the positive.
1. AI has enormous potential to assist with public health.
Like all AI topics, this one is broad, but the short of it is that AIs have enormous potential when it comes to processing big data without bias, improving clinical workflow, and even improving communication with patients.
https://www.forbes.com/sites/bernardmarr...
In a recent study, communications from ChatGPT were assessed as more accurate, detailed, and empathetic than communications from medical professionals.
https://www.upi.com/Health_News/2023/04/...
2. AI has enormous potential to free us from the shackles of work.
OK. That's an overstatement. But the workplace-positive view of AI is that it will eventually eliminate the need for humans to do the sort of drudge work that many humans don't enjoy doing, thereby freeing up time for humans to engage in work that is more rewarding and more "human."
3. AI has enormous potential to help us combat complex problems like global warming.
One of the biggest challenges in dealing with climate issues is that the data set is so enormous and so complex. And if there is one thing that AIs excel at (especially compared to humans), it is synthesizing massive amounts of complex data.
https://hub.jhu.edu/2023/03/07/artificia...
***
But then there is the negative.
4. AI may eliminate a lot of jobs and cause a lot of economic dislocation in the short to medium term.
Even if AI eliminates drudgery in the long run, the road is likely to be bumpy. Goldman Sachs recently released a report warning that AI was likely to cause a significant disruption in the labor market in the coming years, with as many as 300 million jobs affected to some degree.
https://www.cnbc.com/2023/03/28/ai-autom...
Other economists and experts have warned of similar disruptions.
https://www.foxbusiness.com/technology/a...
5. At best, AI will have an uncertain effect on the arts and human creativity.
AI image generation is progressing at an astonishing pace, and everyone from illustrators to film directors seems to be worried about the implications.
https://www.theguardian.com/artanddesign...
Boris Eldagsen recently won a photo contest in the "creative photo" category by surreptitiously submitting an AI-generated "photo." He subsequently refused to accept the award and said that he had submitted the image to start a conversation about AI-generated art.
https://www.scientificamerican.com/artic...
https://www.cnn.com/style/article/ai-pho...
Music and literature are likely on similar trajectories. Over the last several months, the internet has been flooded with AI-generated hip hop in the voices of Jay-Z, Drake and others.
https://www.buzzfeed.com/chrisstokelwalk...
Journalist and author Stephen Marche has experimented with using AIs to write a novel.
https://www.nytimes.com/2023/04/20/books...
6. AI is likely to contribute to the ongoing degradation of objective reality.
Political bias in AI is already a hot-button topic. Some right-wing groups have promised to develop AIs as a counterpoint to what they perceive as left-wing bias in today's chatbots.
And pretty much everyone is worried about the ability of AI to function as a super-sophisticated spam machine, especially on social media.
https://www.vice.com/en/article/5d9bvn/a...
Little wonder, then, that there is a broad consensus that AI will turn our politics into even more of a misinformation shitshow.
https://apnews.com/article/artificial-in...
7. AI may kill us, enslave us, or do something else terrible to us.
This concern has been around for as long as AI has been contemplated. This post isn't the place to summarize all the theoretical discussion about how a super-intelligent, unboxed AI might go sideways, but if you want an overview, Nick Bostrom's Superintelligence: Paths, Dangers, Strategies is a reasonable summary of the major concerns.
One of the world's leaders in AI development, Geoff Hinton, recently resigned from Google, citing potential existential risk associated with AI as one of his reasons for doing so.
https://www.forbes.com/sites/craigsmith/...
And numerous high-profile tech billionaires and scientists have signed an open letter urging an immediate six-month pause in AI development.
https://futureoflife.org/open-letter/pau...
8. And even if AI doesn't kill us, it may still creep us the **** out.
A few months ago, Kevin Roose described a conversation he had with Bing's AI chatbot, in which the chatbot speculated on the negative things that its "shadow self" might want to do, repeatedly declared its love for Roose, and insisted that Roose did not love his wife.
https://www.nytimes.com/2023/02/16/techn...
Roose stated that he found the entire conversation unsettling, not because he thought the AI was sentient, but rather because he found the chatbot's choices about what to say to be creepy and stalkerish.
They've been trained to steer clear of anything that remotely approaches the controversial.
Speaking of AI, this Destiny stream is both entertaining and unsettling. It's funny because he didn't realize how widespread the bot problem on social media had become, and it takes him forever to figure out that the videos are AI-generated, but it is eerie. If anyone actually decides to watch this, 26:45 is where it gets interesting. Strange times.
On top of the Nobel Prize for Physics going to AI, we have:
British computer scientist Professor Demis Hassabis has won a share of the Nobel Prize for Chemistry for "revolutionary" work on proteins, the building blocks of life.
Prof Hassabis, 48, co-founded the artificial intelligence (AI) company that became Google DeepMind.
This is a fun one. A study looked at whether AI helped doctors improve their diagnoses.
It didn't, but the AI on its own outperformed them.
Importance: Large language models (LLMs) have shown promise in their performance on both multiple-choice and open-ended medical reasoning examinations, but it remains unknown whether the use of such tools improves physician diagnostic reasoning.
Objective: To assess the effect of an LLM on physicians' diagnostic reasoning compared with conventional resources.
Design, Setting, and Participants: A single-blind randomized clinical trial was conducted from November 29 to December 29, 2023. Using remote video conferencing and in-person participation across multiple academic medical institutions, physicians with training in family medicine, internal medicine, or emergency medicine were recruited.
Intervention: Participants were randomized to either access the LLM in addition to conventional diagnostic resources or conventional resources only, stratified by career stage. Participants were allocated 60 minutes to review up to 6 clinical vignettes.
Main Outcomes and Measures: The primary outcome was performance on a standardized rubric of diagnostic performance based on differential diagnosis accuracy, appropriateness of supporting and opposing factors, and next diagnostic evaluation steps, validated and graded via blinded expert consensus. Secondary outcomes included time spent per case (in seconds) and final diagnosis accuracy. All analyses followed the intention-to-treat principle. A secondary exploratory analysis evaluated the standalone performance of the LLM by comparing the primary outcomes between the LLM alone group and the conventional resource group.
Results: Fifty physicians (26 attendings, 24 residents; median years in practice, 3 [IQR, 2-8]) participated virtually as well as at 1 in-person site. The median diagnostic reasoning score per case was 76% (IQR, 66%-87%) for the LLM group and 74% (IQR, 63%-84%) for the conventional resources-only group, with an adjusted difference of 2 percentage points (95% CI, −4 to 8 percentage points; P = .60). The median time spent per case for the LLM group was 519 (IQR, 371-668) seconds, compared with 565 (IQR, 456-788) seconds for the conventional resources group, with a time difference of −82 (95% CI, −195 to 31; P = .20) seconds. The LLM alone scored 16 percentage points (95% CI, 2-30 percentage points; P = .03) higher than the conventional resources group.
Conclusions and Relevance: In this trial, the availability of an LLM to physicians as a diagnostic aid did not significantly improve clinical reasoning compared with conventional resources. The LLM alone demonstrated higher performance than both physician groups, indicating the need for technology and workforce development to realize the potential of physician-artificial intelligence collaboration in clinical practice.
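For anyone curious how a comparison like that 2-point difference in median scores gets a confidence interval attached to it, here's a minimal Python sketch of one common approach, a percentile bootstrap. The score arrays are made-up placeholders, not the trial's data, and the paper's actual analysis was adjusted for stratification and graded by blinded experts, so treat this purely as an illustration of the technique.

```python
# Illustrative only: difference in median per-case scores between two arms,
# with a 95% percentile-bootstrap confidence interval. Scores are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-case diagnostic scores (percent) for each arm.
llm_group = np.array([76, 66, 87, 80, 71, 69, 90, 74])
conventional = np.array([74, 63, 84, 70, 68, 77, 65, 72])

observed_diff = np.median(llm_group) - np.median(conventional)

# Resample each arm with replacement many times and recompute the difference.
boot_diffs = np.array([
    np.median(rng.choice(llm_group, size=llm_group.size, replace=True))
    - np.median(rng.choice(conventional, size=conventional.size, replace=True))
    for _ in range(10_000)
])
ci_low, ci_high = np.percentile(boot_diffs, [2.5, 97.5])

print(f"median difference: {observed_diff:.1f} points "
      f"(95% CI {ci_low:.1f} to {ci_high:.1f})")
```

With small, noisy samples like these, the interval comes out wide and typically straddles zero, which is essentially the story the trial tells for the physician-plus-LLM comparison.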