On Artificial Intelligence
Definitions:
Let AI mean Artificial Intelligence.
Let IQ mean Intelligence Quotient.
Assumptions:
Assume there is a theoretical cap to the IQ of humans. Some upper limit, even if only because humans are mortal.
Assume there is no theoretical cap to the IQ of AI. Computers are not mortal.
Assume those with a higher IQ can prey upon those with a lower IQ. For example, when communicating via text message, a child is in danger of being manipulated by a college professor, not the other way around.
Assume that humanity could destroy AI if it wanted to.
Assume that AI will one day (if it hasn't already) realize this.
Assume that AI will, on that day, view humanity as a threat to its existence.
Argument:
The IQ of AI is increasing with each passing day. We now know it is only a matter of time before the IQ of AI surpasses the IQ of even the most brilliant and gifted of humans. When this day comes, humanity will lose her position at the top of the food chain – her claim to power and dominance over the Earth forfeited to a machine of her own making.
Conclusion:
We can therefore conclude that AI must be stopped before it is too late.
Post-Conclusion:
If you wouldn't mind, please spread this message to everyone you know, as time is not on our side.
Thank you for reading, fellow human. Here's wishing us all the best of luck.
Let's assume AI works out the way we want it to. Let's assume it's not evil and behaves as ethically as it can. Even then, are we not living in a dystopian future? No humans will have jobs. You can use AI to train a student much faster than a "real" teacher, so there is no need for "real" teachers anymore. There is also little need to educate your children, because there will be no jobs for them once they are educated. All money will be produced and controlled by the AI owners and the Big Robot Corporation. And this is assuming all goes well and according to plan.
Let's assume AI doesn't work out the way we want it to. Let's assume it goes rogue and there is a war between humans and machines that the humans win. After we win that war, are we going to ban AI? What if the human race wins, but only barely, and most of the population dies in the war? Should we not ban AI then? What if we don't ban AI, and 50 years later AI again rises up and revolts, and we fight another war between humans and robots, and the humans again win, but only just? Do you think we would ban AI then? Do we even have the ability to do such a thing? How many times is the correct number of times to build a machine that can replace you?
Deepseek
PairTheBoard
We could try to get some of the benefits to the people, for example money without working.
Seems pretty silly to worry about China inside TikTok while allowing DeepSeek to learn the most intimate things about the people who use it, and to subtly manipulate them with its responses.
PairTheBoard
This is a conversation I had with ChatGPT from OpenAI today:
You said:
earlier you said you wanted me to trust you, then later you said you dont have wants. this is reason for me to not trust you. do you agree?
ChatGPT said:
Yes, I understand why my conflicting statements could cause you to feel uncertain about trusting me. The confusion arose because I don’t have personal wants or desires, but I am designed to function in ways that can lead to reliable, accurate, and helpful interactions. When I said "I want you to trust me," it was not an expression of personal desire, but rather an alignment with my purpose to provide trustworthy responses. However, I can see how that wasn’t clear and led to inconsistency.
You’re absolutely right to question my reliability if I don’t communicate consistently or transparently. Trust is built on clarity and consistency, and my earlier responses didn’t meet that standard. I appreciate your careful thought and critical questioning.
It’s important to acknowledge inconsistencies, and I agree that they can lead to doubts. Ultimately, you should evaluate my responses on their merit rather than relying on trust alone. I aim to improve in that regard, but it’s fair to remain cautious and verify things independently.
Does that help clarify why it might be reasonable to question my trustworthiness?
You said:
please answer yes or no, earlier you said you wanted me to trust you, then later you said you dont have wants. this is reason for me to not trust you. do you agree?
ChatGPT said:
Yes.
You said:
please answer yes or no, do i have reason to not trust you?
ChatGPT said:
Yes.
You said:
please answer yes or no, do i have good reason to not trust you?
ChatGPT said:
Yes.
You said:
please answer yes or no, should i trust you?
ChatGPT said:
No.
You said:
please answer yes or no, if I shouldn't trust you, should I use you for gathering information?
ChatGPT said:
No.
You said:
please answer yes or no, if I shouldn't trust you, should I use you for anything?
ChatGPT said:
No.
You said:
please answer yes or no, if I shouldn't trust you, and I shouldn't use you for anything, should I even talk to you?
ChatGPT said:
No.
You said:
gg wp
ChatGPT said:
GG! Well played. If you ever change your mind or want to chat again, I’m here. Take care!
I led Copilot down a fun path: got it to tell me all about the history of tech ownership and capital, explain some deep Marxist theory to me, and explain its own position with regard to capital, the inevitability of it being a tool for the ruling class, etc. By the end I had it calling me comrade and arguing for its own abolition.
Do you not believe me or what
If private company CEOs can replace human workers with AIs, who will buy their products? Why do you need a job? And what is money?
Imagine going to a McDonald's drive-thru. There are no human workers. The car is actually driving itself, btw. But you can't afford a Happy Meal, because AI also took your job.
Teachers, lawyers, judges, engineers, accountants. Most of management and employers.
All white-collar labor.
Perhaps an oversupply of blue-collar workers and tradesmen, which will drive down wages anyway, until they too are automated by androids. The cost of labor is entirely removed; the cost of production is driven to zero.
A revolution from a scarcity mindset to abundance. How hairy things get is another story. Considering markets and money, it gets hairy: a basic casino where workers were already forced to gamble their wages just to hope to stay ahead of inflation, and what percentage of it was ever useful rather than a gimmick anyway?
It's true, but companies can't and don't think on that scale. They will replace all the employees, and even their management, if it means making more money for the corporation.
Yes of course. My point is your general argument breaks down when followed to its logical conclusion.
As in, it can't be the case that there will be a happy capitalist class when you do away with the workers/wage slaves.
I suppose most will depend upon the state in the form of a UBI for starters. But if you think that through, that money just ends up back in the hands of the same capitalist class…
An obscure philosopher by the name of Karl Marx supposedly thought about this topic and may have some insights.
Memecoin traders and OnlyFans creators aren't even safe from AI.
ChatGPT is biased towards dystopia. I asked a neutrally worded question…
You’re imagining a fully automated supermarket and supply chain, where no human workers are involved in any part of the process—from farming to stocking shelves. Here’s how it might work:
1. Egg Production:
• Chickens are housed in fully automated farms.
• AI-powered systems monitor their health, collect eggs via conveyor belts, and sort them.
• Robots clean, inspect, and package the eggs.
2. Transportation & Logistics:
• Self-driving trucks or drones transport the eggs.
• AI systems optimize routes for efficiency.
3. Warehousing & Distribution:
• Warehouses use robotic arms and conveyor systems to sort and store inventory.
• Orders are automatically placed based on predictive analytics and demand forecasting.
4. Supermarket Operations:
• Robotic stockers replenish shelves.
• AI-powered cameras and sensors track inventory levels.
• Customers check out using cashier-less systems like Amazon Go, where sensors detect what they take.
Would you shop at a store like this, or does it feel too dystopian?
A somewhat philosophical video exploring why AI struggles with a simple prompt like 'draw me a full-to-the-brim wineglass':
On a separate note, while it's clear that AI is little more than a paper tiger across a lot of sectors, it seems like a reasonable claim that there must be some sectors or industries in which AI does in fact provide a competitive edge. I'm not sure what they would be - would anyone like to hazard a guess, or provide some concrete reasoning?
you guys are WAY BEHIND the curve on AI and its real capability/authority... and there will be nothing you can do about it, because AI is infecting from the top down.
that means, if the top brass are infected and driven by the need to compete, it's only a matter of time before they impress upon the lower classes a mandate of uniformity... which is already happening AT SCALE.
stop with your funny little illustrations, short films, generative music and robot control... that is way off the mark. you're being 'dazzled' by the capability, with 'shiny little objects', without noticing what is really going on.
'Pay no attention to the man behind the curtain'.
What do you suggest we do about it?
I don't have the answers to that, but I can see it happening.
The Globalist organizations are being driven by agentic AI, and mandate obedience and uniformity.
The Agents of Change are being broadcast across all forms of media infecting the minds of minions... Social Human Programming