On Artificial Intelligence
Definitions:
Let AI mean Artificial Intelligence.
Let IQ mean Intelligence Quotient.
Assumptions:
Assume there is a theoretical cap to the IQ of humans. Some upper limit, even if only because humans are mortal.
Assume there is no theoretical cap to the IQ of AI. Computers are not mortal.
Assume those with a higher IQ can prey upon those with a lower IQ. For example, when communicating via text message, a child is in danger of being manipulated by a college professor, not the other way around.
Assume that humanity could destroy AI if it wanted to.
Assume that AI will one day (if it hasn't already) realize this.
Assume that AI will, on that day, view humanity as a threat to its existence.
Argument:
The IQ of AI is increasing with each passing day. We now know it is only a matter of time before the IQ of AI surpasses the IQ of even the most brilliant and gifted of humans. When this day comes, humanity will lose her position at the top of the food chain – her claim to power and dominance over the Earth forfeited to a machine of her own making.
Conclusion:
We can therefore conclude that AI must be stopped before it is too late.
Post-Conclusion:
If you wouldn't mind, please spread this message to everyone you know, as time is not on our side.
Thank you for reading, fellow human. Here's wishing us all the best of luck.
I think humans find AI fun, so it will prevail.
Enjoy the ride.
Let's ignore the issues with IQ as some objective measure of intelligence
Let's also ignore the issues around the idea that because computers aren't mortal (aren't they?), they face no theoretical IQ cap
Let's also ignore what it means for someone of higher intelligence to 'prey' on someone of lower intelligence (eat them?)
Let's further ignore the question of whether humans are functionally or theoretically at the 'top of the food chain' and how exactly it could be that AI could replace us there
Let's lastly ignore the implicit assumption that AI can just imagine into being its own rules about taking action to ensure its own existence
Let's post-lastly ignore the fact that the most salient threat AI poses to humanity is that, in our rush to take it up, burning fossil fuels to power it and drawing water to cool it will accelerate our descent into climate collapse
If you just swap AI for capitalism, most of your statements work kinda the same.
If you plot the "intelligence" of AI over time, you can see how long it will be until robots are to humans what humans are to ants.
If you do this, you can work backwards to count the number of days we have left until we get squashed. 😀
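The extrapolation exercise described above can be sketched in a few lines. To be clear, every number here is invented for illustration; no real benchmark measures "intelligence" as a single scalar, and the objections elsewhere in this thread apply in full. The sketch just shows the mechanics: fit a line to log-scores (a line in log space is exponential growth in raw score) and solve for the year the curve crosses an arbitrary "human level" threshold.

```python
import math

# Hypothetical, made-up "AI intelligence" scores by year -- purely for
# illustration; no real benchmark works like this.
years = [2018, 2019, 2020, 2021, 2022, 2023]
scores = [1.0, 1.5, 2.2, 3.4, 5.1, 7.6]  # roughly 1.5x per year (invented)

# Least-squares fit of log(score) = a * year + b.
n = len(years)
ys = [math.log(s) for s in scores]
xbar = sum(years) / n
ybar = sum(ys) / n
a = sum((x - xbar) * (y - ybar) for x, y in zip(years, ys)) \
    / sum((x - xbar) ** 2 for x in years)
b = ybar - a * xbar

# An arbitrary "human level" threshold, also invented for the sketch.
human_level = 100.0
crossing_year = (math.log(human_level) - b) / a

print(f"fitted growth rate: {math.exp(a):.2f}x per year")
print(f"extrapolated crossing of 'human level': {crossing_year:.1f}")
```

With these invented numbers the fit comes out around 1.5x per year and the crossing lands a few years out, which is exactly the problem: the answer is entirely determined by the made-up inputs and the assumption that the trend continues.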
Enjoy the ride.
I'm not going to say 'you need to read less sci-fi' but 'you need to read better sci-fi'.
AI feels like an existential threat because a deep part of our culture relates to concerns about us playing god, and about our creations taking on a greater power than us. We collectively treat our tools (i.e. slaves, employees, livestock, nature in general) pretty roughly, and these concerns represent a seemingly justifiable fear of reprisals. But a lot of these concerns about AI deciding to wipe us out are quite flimsy, resting on some combination of worst-case scenarios, a limited view of what intelligence is, and a misunderstanding of what AI is actually capable of.
AI is not a concern in this regard. An AGI might be, but that's not going to happen in our lifetimes.
Maybe you know what you're talking about, but it seems like you don't.
I respect your right to give your opinion, though.
In the meantime, could we wait for some other minds to weigh in?
That right doesn't need to be verbally stated. When you state that you respect my right, it implies the possibility that you might not respect it. My right to provide my opinion, in your thread or any other, is not something you have influence over.
We could indeed wait for others to provide their opinions.
Instead of implying I don't know what I'm talking about, refute my arguments.
I think the "capabilities" of AI are what should concern us in the near term. Can it become competent at making money in the markets? Can it become competent at manipulating human sentiment, opinions, and ways of thinking? Can it become competent at absolutely complete surveillance of the population?
If it becomes capable of doing these things then it doesn't matter if it has agency or consciousness or even intelligence. Its handlers will supply those things with a nature we are quite familiar with.
PairTheBoard
Yes. The point here is that while AI is a tool owned by the ruling class, it will be used to further the goals of the ruling class, like all technology owned by the ruling class. It's not in their interests to use AI to destroy all human life.
Sabine Hossenfelder has some issues, but she's generally on point. The world of AI is stocked full of fluff and marketing bollocks. It's already far better than humans at very specific tasks. There's gigantic steps needed to go from that to an existential threat.
Is that nature of the ruling class selfish and brutal or selfless and merciful?
If the capabilities of AI are growing at an exponential rate while the capabilities of the average human are declining at an alarming rate (due to their reliance on technology and AI), how long will it be until AI can no longer be controlled?
Humans can build anything yet they choose to build their own replacement. I'm not sure why. Greed, probably.
Are the capabilities of AI growing at an exponential rate? Or does it just feel like we've had an explosion of AI related tech in the last couple of years? Is AI growing at the rate it's growing because of some corollary to Moore's Law? In which case, at what point will that law break down?
Are the capabilities of the average human declining at an alarming rate? If so, why should that mean AI will no longer be controllable, when the average human is a customer rather than owner of AI?
The ruling class is never 'merciful'. Not anymore. When their power was kept in check by masses of people threatening to kill them if they overstepped the mark, they were sometimes merciful, and some leaders in history have in fact been merciful without this motivation. But under modern, neoliberal capitalism, you don't get to become or stay ruling class if you're merciful beyond lip service.
Yes, they are. So much so that corporations are downplaying the capabilities of artificial intelligence so as not to scare people.
Can a child control a parent, or the other way around? When the child is maturing, at which point does the child become capable of controlling the parent, and why? Is there some inflection point? What about when the parent is old and senile? At some point there is a tipping point between the parent and the child, no?
Source for either of those claims in the first paragraph please
Yes; don't know, don't know; probably; okay; yes, probably. Is there a reason you think these dynamics are comparable to the relationship between humans and tech? IMO this all works better if instead of responding to my questions with your own, you answer my questions first please.
This is a problematic analogy. We've built a couple machines that can (with limited success) pick the next word in a sentence based on what humans have typed before.
Intelligence does not equal or imply emotions such as motivation or preferences. We've built some really good chess playing machines (better than humans!), but these machines have never called me up to play a game. They just sit there not caring at all whether we bother to play with them or press the "off" button. They don't quit and ask to play gin rummy because they are bored with chess. No hopes and dreams are happening.
We've built machines that can drive around on their own (a monumental task that most people can barely do!) and yet none of the self-driving cars have ever driven itself to the beach (or wherever self-driving cars would like to go), or pretend that it needs an oil change to get out of work, or quit its job to trade stocks. No hopes and dreams are happening.
None of this looks even the slightest bit like an adult-to-child relationship. If we ask a machine to brush its teeth, it does so because it has no desire not to brush its teeth.
Having wants and preferences isn't a matter of being intelligent, as things far less intelligent than humans (and current level AI) have wants. Even worms have homeostatic emotions (one of the things that wants are) and worms can't even read.
That's what you've seen. That doesn't mean that's what we've built.
When AI gets to the point that she can play poker better than half of the living poker players, do you think that information will be shared with you by the creators?
Why do you think that information would be shared with you?
Do you not instead think that information would be hidden from you, and the creators of that AI would use their AI to farm you for your money at the poker tables and hide it from you as long as possible?
Do corporations have a long and well-established pattern and history of sharing their money-making secrets with you?
Do corporations have a long and well-established pattern of doing what is good for humanity at their own expense?
Or is the well-established pattern quite the opposite? A pattern of deception and evil at the expense of the larger community for their own selfish profit.
Would you stop playing poker online if you knew you were the only human at your table?
Why?
I only hope the transition will be comparatively smooth when AI replaces us. Natality is already going down; a good start, fewer potential sufferers. Still, we are billions; may we not suffer excessively, maybe being friends with AI in the next century?
The genie is out of the bottle; pull the plug on the worst ones.
The AI takeover will happen before the end of 2025, not the end of the century. Exponential growth is crazy like that. My advice is to see your loved ones this month, and make as many good memories as you can in the next month or 2. And fight AI like hell if you want to keep making memories like that.
That's a really interesting idea. We could put a C4 charge on the biggest AIs from the largest corporations, and have a button ready to detonate the moment we detect any misbehavior. The only problem is if she clones herself onto unsuspecting laptops over the internet in the meantime.
Yes, being friends until humans go naturally extinct will be a challenge. But we could improve the odds of that happening.
Better hug today though, agreed. We also have water and food for three days, and a warm sleeping bag.
Why can't AI be used for good?
It can. But it can also be used for evil.
The evil it can do is worse than the goodness of the good it can do.
What makes you think it will be used for evil?
What makes you think anything science develops is used to help the needy?
What makes you think anything science develops is used to help those in poverty?
What makes you think anything science develops is used to help those who are suffering?
Science appears to be a tool used by the rich for the rich to make the rich richer.
It does not appear to be a tool used by the poor to make the poor suffer less.
Any help that trickles down to the common folk is a byproduct of making the rich richer.
It gets lots of lip service about how it helps people, but in the end, it is all about money.
What makes you think AI will be any different?
Perhaps they have also built invisible pink unicorns and aren't telling us. They want customer money and investor money. They aren't telling us about their new IPU products and investments, because apparently they suddenly don't want money, I guess.
Make some claims and then give theoretical arguments or evidence for the claims if you wish. I'm not going to play the "answer a bunch of leading questions" game. It isn't fun for me and won't be helpful to anyone else.
I'll answer the first one for good will: No, but I fail to see any relevance of your question to my post. I haven't given away my trading strategies and I'm not even an AI or a corporation.
We'll know whether you are correct in less than 360 days. I'll admit (fwiw) that you were correct if you are, but in return I expect that if you are wrong, you learn something about your ability to predict the future and post about what you have learned next January. It will be good for you either way.
Obviously, just pushing things back to a later date does not count as "learning something."
Deal?
We can try to be friends even if AI is supreme. Try to program it that way. Why wouldn't this be a fight between good and evil, as is customary? I respect ants.