A thread for unboxing AI
The rapid progression of AI chatbots made me think that we need a thread devoted to a discussion of the impact that AI is likely to have on our politics and our societies. I don't have a specific goal in mind, but here is a list of topics and concerns to start the discussion.
Let's start with the positive.
1. AI has enormous potential to assist with public health.
Like all AI topics, this one is broad, but the short of it is that AIs have enormous potential when it comes to processing big data without bias, improving clinical workflow, and even improving communication with patients.
https://www.forbes.com/sites/bernardmarr...
In a recent study, communications from ChatGPT were assessed as more accurate, detailed, and empathetic than communications from medical professionals.
https://www.upi.com/Health_News/2023/04/...
2. AI has enormous potential to free us from the shackles of work.
OK. That's an overstatement. But the workplace-positive view of AI is that it will eventually eliminate the need for humans to do the sort of drudge work that many people don't enjoy, thereby freeing up time for work that is more rewarding and more "human."
3. AI has enormous potential to help us combat complex problems like global warming.
One of the biggest challenges in dealing with climate issues is that the data set is so enormous and so complex. And if there is one thing that AIs excel at (especially compared to humans), it is synthesizing massive amounts of complex data.
https://hub.jhu.edu/2023/03/07/artificia...
***
But then there is the negative.
4. AI may eliminate a lot of jobs and cause a lot of economic dislocation in the short to medium term.
Even if AI eliminates drudgery in the long run, the road is likely to be bumpy. Goldman Sachs recently released a report warning that AI was likely to cause a significant disruption in the labor market in the coming years, with as many as 300 million jobs affected to some degree.
https://www.cnbc.com/2023/03/28/ai-autom...
Other economists and experts have warned of similar disruptions.
https://www.foxbusiness.com/technology/a...
5. At best, AI will have an uncertain effect on the arts and human creativity.
AI image generation is progressing at an astonishing pace, and everyone from illustrators to film directors seems to be worried about the implications.
https://www.theguardian.com/artanddesign...
Boris Eldagsen recently won a photo contest in the "creative photo" category by surreptitiously submitting an AI-generated "photo." He subsequently refused to accept the award and said that he had submitted the image to start a conversation about AI-generated art.
https://www.scientificamerican.com/artic...
https://www.cnn.com/style/article/ai-pho...
Music and literature are likely on similar trajectories. Over the last several months, the internet has been flooded with AI-generated hip hop in the voices of Jay-Z, Drake and others.
https://www.buzzfeed.com/chrisstokelwalk...
Journalist and author Stephen Marche has experimented with using AIs to write a novel.
https://www.nytimes.com/2023/04/20/books...
6. AI is likely to contribute to the ongoing degradation of objective reality.
Political bias in AI is already a hot-button topic. Some right-wing groups have promised to develop AIs as a counterpoint to what they perceive as left-wing bias in today's chatbots.
https://hub.jhu.edu/2023/03/07/artificia...
And pretty much everyone is worried about the ability of AI to function as a super-sophisticated spam machine, especially on social media.
https://www.vice.com/en/article/5d9bvn/a...
Little wonder, then, that there is a broad consensus that AI will turn our politics into even more of a misinformation shitshow.
https://apnews.com/article/artificial-in...
7. AI may kill us, enslave us, or do something else terrible to us.
This concern has been around for as long as AI has been contemplated. This post isn't the place to summarize all the theoretical discussion about how a super-intelligent, unboxed AI might go sideways, but if you want an overview, Nick Bostrom's Superintelligence: Paths, Dangers, Strategies is a reasonable summary of the major concerns.
One of the world's leaders in AI development, Geoff Hinton, recently resigned from Google, citing potential existential risk associated with AI as one of his reasons for doing so.
https://www.forbes.com/sites/craigsmith/...
Numerous high-profile tech billionaires and scientists have signed an open letter urging an immediate six-month pause in AI development.
https://futureoflife.org/open-letter/pau...
8. And even if AI doesn't kill us, it may still creep us the **** out.
A few months ago, Kevin Roose described a conversation he had with Bing's A.I. chatbot, in which the chatbot speculated on the negative things that its "shadow self" might want to do, repeatedly declared its love for Roose, and repeatedly insisted that Roose did not love his wife.
https://www.nytimes.com/2023/02/16/techn...
Roose stated that he found the entire conversation unsettling, not because he thought the AI was sentient, but rather because he found the chatbot's choices about what to say to be creepy and stalkerish.
chez,
I've been thinking about this quite a bit lately, and I'm not convinced that a great ****ing off is in the cards unless or until we fully digitize consciousness in a way that allows humans (or what we used to call humans) to ditch our physical bodies.
The hurdles to maintaining physical meat suits in deep space are immense. Life has evolved on Earth for billions of years in ways that are specific to Earth's gravity, atmosphere, radiation shielding, etc. Terraforming is far, far, far more difficult and time-consuming than sci-fi movies would have us believe. And most rocky planets that might be remotely suitable for terraforming or immediate habitation are many light-years away. If a great ****ing off ever occurs, I'm guessing that it will involve a digitized consciousness that has no need to be intimately connected to a particular planet.
It's possible that we could be forced to make the attempt in a serious way before we have transcended our physical bodies, but I suspect that such an attempt would have a super-high risk of failure and that any definition of success would involve the death of most humans.
Terraforming is pie in the sky. Gravity etc are a non issue.
If it weren't pointless, I'd bet a decent amount that there are people alive today who will live to see people who will never visit Earth.
Whether we can simulate gravity, shield ourselves from radiation, etc., is only part of the equation. We almost certainly can. But leaving the solar system is another issue entirely. Resources in deep space are meager, probably too meager to maintain hundreds of thousands (much less millions or billions) of human bodies for hundreds of years until you reach a more resource-rich region of the galaxy.
Low-entropy energy will be available. With that, self-sustaining ecosystems are, errrr, what we have now. There are significant but highly tractable problems. They are not close to showstoppers.
We don't even have to leave the solar system. But we will, and it won't matter a damn how far away the next sun is. We will take our home with us. It may take a while to get to hundreds of millions or billions; if that's the disagreement, then the argument is over.
Hundreds of years? Isn't the closest solar system over 4 light-years away? How could we travel that far over hundreds of years? I don't know much about this stuff, but a quick search says the fastest a spaceship has ever traveled is 430,000 mph. 4 light-years is approximately 24 trillion miles, and traveling at 430,000 mph, it would take around 56 million hours, or roughly 6,400 years, to get there.
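Spelling the arithmetic out (a quick sketch using only the two figures quoted above, nothing else assumed):

```python
# Back-of-the-envelope travel time to the nearest star system,
# using the figures from the post above.
speed_mph = 430_000          # fastest spacecraft speed found in a quick search
distance_miles = 24e12       # ~4 light-years in miles

hours = distance_miles / speed_mph      # about 56 million hours
years = hours / (24 * 365.25)           # convert hours to years

print(f"{hours:.2e} hours, or about {years:,.0f} years")
```

So at current speeds it's thousands of years, not millions, but the basic point stands: today's spacecraft don't get you there in anything like a human lifetime.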
Time dilation, but a one-way trip.
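To put rough numbers on the time dilation point (a sketch with hypothetical cruise speeds; it ignores acceleration, fuel, and shielding entirely):

```python
# Proper time experienced on board during a one-way trip to the
# Alpha Centauri system (~4.25 light-years), for a few hypothetical
# constant cruise speeds given as fractions of c.
import math

def ship_years(distance_ly: float, v_frac_c: float) -> float:
    earth_years = distance_ly / v_frac_c          # time as measured from Earth
    gamma = 1 / math.sqrt(1 - v_frac_c ** 2)      # Lorentz factor
    return earth_years / gamma                    # time as measured on board

for v in (0.5, 0.9, 0.99):
    print(f"at {v}c: {ship_years(4.25, v):.1f} years on board")
```

At 0.99c the crew would age well under a year while everyone back home ages more than four; hence the one-way part.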
I hadn't done the math, but in any case, I was imagining a world in which advancements were made that allowed us to travel orders of magnitude faster than any spaceship has travelled thus far.
Geoffrey Hinton continues to sound alarms about relying solely on market forces to guide AI development.
Why would anyone listen to a linguist in matters of scientific development...
The British-Canadian computer scientist often touted as a “godfather” of artificial intelligence has shortened the odds of AI wiping out humanity over the next three decades, warning the pace of change in the technology is “much faster” than expected.
Prof Geoffrey Hinton, who this year was awarded the Nobel prize in physics for his work in AI, said there was a “10 to 20” per cent chance that AI would lead to human extinction within the next three decades.
Who are you going to listen to?
It actually didn't; it's all unsupervised learning.
Linguistics (and semiotics) had a role in previous attempts to create AI through symbolic manipulation, but LLM training and development is brute number crunching with no external "insight" of any kind.
Just applied statistics, basically.
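To make the "applied statistics" point concrete, here's a toy sketch (a bigram counting model, not how an actual LLM is built, but it shows the flavor: predict the next token from frequencies in the training data, with no grammar rules supplied by anyone):

```python
# A toy "language model": predict the next word purely from
# co-occurrence counts in a training corpus.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1                 # tally what follows what

def predict(word: str) -> str:
    return counts[word].most_common(1)[0][0]   # most frequent continuation

print(predict("the"))   # 'cat' (seen twice, vs. once each for 'mat'/'fish')
print(predict("sat"))   # 'on'
```

Real LLMs swap the count table for a neural network with billions of parameters and train it on trillions of tokens, but the objective is the same kind of thing: next-token prediction.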
You can't regulate it worldwide anyway, so what's the proposal? To nuke countries that disagree with AI training regulatory frameworks?
The problem with regulating AI is quite simple: we might halt the progress of it; China won't. Needless to elaborate further.
That's true for most regulation which is required worldwide in order to "work" (like climate change).
If you can't guarantee full enforcement everywhere, in every relevant country in the world, at any cost, you are arguing in complete bad faith when you propose that the West regulate.
And btw, any world where a regulation can be enforced worldwide with full force is a world where a world government exists. That is a model so risky and horrific that we should always be against any proposal which requires a world government to be implemented.
Way out of my wheelhouse, but my brother-in-law's brother-in-law has been a Silicon Valley guy for like 30 years. I just figured he was some programmer/engineer. I hung out with him during the summer and was telling him how I was able to build an app with AI for some csv filtering/renaming I needed, with zero coding knowledge. Well, it turns out he's actually a linguist specializing in 3 obscure African languages, and most of his career over the last 20 years has been spent developing AI for this kind of no-code coding or whatever. I think Zer's post touches on this above. So I personally don't know exactly how it's implemented, but linguists have at least been part of the process up to now, at least for making everyday human language usable to interact with and instruct a machine.
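For a sense of scale, the whole task is the kind of thing an AI assistant can generate in one shot (a hypothetical sketch; the file names, column names, and filter condition here are made up for illustration, not the actual app):

```python
# Hypothetical example of an AI-generated CSV filtering/renaming script.
import pandas as pd

df = pd.read_csv("input.csv")
df = df[df["status"] == "active"]                  # keep matching rows only
df = df.rename(columns={"old_name": "new_name"})   # rename a column
df.to_csv("output.csv", index=False)
```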
Doesn't need to be a linguist either; that's the point (the code to run LLM training is open source, and Amazon, Google, and others will hand it to you if you want to run your own AI training, so that you buy their computing power).
Maybe as a linguist he knows which data to train the mini-LLM on, sure; that saves time (and money).
But linguistic insights aren't used in LLM training. There is no linguist going around "teaching" the model anything, or structuring the model's activity to replicate how humans think about language, and so on.
Linguistics is a branch of cognitive psychology; why wouldn't you want to listen to linguists?
The Chomsky hierarchy is used extensively in computer science.
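A concrete everyday example of where the hierarchy bites (a sketch: regular expressions handle regular languages, but balanced parentheses are context-free, so checking them needs extra memory such as a counter or stack):

```python
# Regular vs. context-free, in miniature.
import re

def balanced(s: str) -> bool:
    """Balanced parentheses need unbounded memory (here, a counter)."""
    depth = 0
    for ch in s:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:
                return False
    return depth == 0

# A regex over this alphabet can check *which* characters appear,
# but not whether they balance; that's the next rung of the hierarchy.
print(bool(re.fullmatch(r"[()]*", "(()")))   # True: right alphabet...
print(balanced("(()"))                       # False: ...but not balanced
```

Compilers lean on exactly this split: lexers are regular, parsers are (roughly) context-free.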
We can regulate its use in applications, but it would be a very bad idea to try to regulate the development of AI. Falling behind economically will be an irrelevant problem compared to the military arms race that is well underway and exploding.
Another pervasive bad idea is denial about what is happening. Everything about politics, etc. should be focused on the world we are entering, not the world we are fast leaving behind. Extinction is some possibility, but the very fast rise of AI/robotics is an inevitability. This is era-defining stuff, on a scale and at a speed never seen before.