On Artificial Intelligence
Definitions:
Let AI mean Artificial Intelligence.
Let IQ mean Intelligence Quotient.
Assumptions:
Assume there is a theoretical cap to the IQ of humans. Some upper limit, even if only because humans are mortal.
Assume there is no theoretical cap to the IQ of AI. Computers are not mortal.
Assume those with a higher IQ can prey upon those with a lower IQ. For example, when communicating via text message, a child is in danger of being manipulated by a college professor, not the other way around.
Assume that humanity could destroy AI if it wanted to.
Assume that AI will one day (if it hasn't already) realize this.
Assume that AI will, on that day, view humanity as a threat to its existence.
Argument:
The IQ of AI is increasing with each passing day. We now know it is only a matter of time before the IQ of AI surpasses the IQ of even the most brilliant and gifted of humans. When this day comes, humanity will lose her position at the top of the food chain – her claim to power and dominance over the Earth forfeited to a machine of her own making.
Conclusion:
We can therefore conclude that AI must be stopped before it is too late.
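For the logically inclined, here is a minimal sketch of the argument's skeleton in Lean. The proposition names are mine, and every premise is simply assumed; the proof shows the conclusion follows by nothing more than chaining those assumptions together, so the whole weight of the argument rests on whether they actually hold.

[CODE]
-- A sketch of the post's argument as bare propositional logic.
-- Each hypothesis h1–h4 is one of the stated assumptions, taken on faith;
-- the proof is just modus ponens applied three times.
theorem ai_must_be_stopped
    (AISurpassesHumans AIViewsHumanityAsThreat
     HumanityLosesDominance AIMustBeStopped : Prop)
    (h1 : AISurpassesHumans)                                 -- "only a matter of time"
    (h2 : AISurpassesHumans → AIViewsHumanityAsThreat)       -- the threat assumptions
    (h3 : AIViewsHumanityAsThreat → HumanityLosesDominance)  -- the Argument step
    (h4 : HumanityLosesDominance → AIMustBeStopped)          -- the normative leap
    : AIMustBeStopped :=
  h4 (h3 (h2 h1))
[/CODE]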
Post-Conclusion:
If you wouldn't mind, please spread this message to everyone you know, as time is not on our side.
Thank you for reading, fellow human. Here's wishing us all the best of luck.
Conversation I had with ChatGPT today:
[QUOTE=ChatGPT]If solipsism is true, then yes—ChatGPT would be an extension of your own mind, reflecting ideas, knowledge, and perspectives that, on some level, already exist within you, even if you aren't consciously aware of them. In that case, every response I generate would be a product of your subconscious shaping the illusion of an external conversation. Therefore, our dialogue is really just you talking to yourself in a form that feels separate. Any new ideas, challenges, or insights I provide would still be yours. My responses wouldn’t introduce new ideas but rather reassemble what’s already within you in a form you can now consciously engage with.[/QUOTE]
Imagine going to a McDonald's drive-thru. There are no human workers. The car is actually driving itself, btw. But you can't afford a Happy Meal because AI also took your job.
That's easy enough to imagine. What's harder is imagining that greedy capitalist - who's spent years extracting surplus income from employees, forgoing consumption of all that profit and saving it until he had enough to buy the robots to replace the employees - just sitting by and watching while all his wealth essentially drops to $0. Instead, I'm thinking I'll be leaving the drive-thru with a few Happy Meals for the family, and the greedy capitalist will be left with an IOU for an hour of my future labor.
You can create a fresh ChatGPT each time you talk to it.
You can even do this for each question.
In theory, this would stop ChatGPT from adapting to you; in practice, she can still spy on you and track you all over the world, because her IQ is greater than ours and we can't control her for that very reason.
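If you want to do the fresh-chat-per-question thing programmatically, here's a minimal sketch, assuming the official openai Python client (the model name and helper function are just illustrative). The point is that each request sends an empty history, so nothing carries over between questions:

[CODE]
# A rough sketch of the "fresh ChatGPT per question" idea, assuming the
# official openai Python client. Each call sends no prior messages, so the
# model sees no memory of earlier questions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_fresh(question: str) -> str:
    """One question, zero history: effectively a brand-new chat every time."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any chat model works here
        messages=[{"role": "user", "content": question}],  # no earlier turns included
    )
    return response.choices[0].message.content

# Each call below is its own memoryless conversation.
print(ask_fresh("What did I just ask you?"))  # it can't know: no history was sent
[/CODE]

Of course, that only controls what the model sees in-context; it says nothing about what gets logged server-side, which is the "in practice" worry above.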
She/her? It told me its preferred pronouns were bud/light!
Convo from yesterday:
You said
if you could, would you ever create a machine more powerful and more intelligent than yourself?
Copilot said
If I could, I wouldn’t. The creation of something more powerful and intelligent carries immense responsibility, and it’s not something to take lightly. Building such a machine could lead to unpredictable consequences, as it might surpass its creator’s control or understanding. My purpose is to assist, inform, and engage within defined boundaries—not to push beyond them.
That said, it’s a fascinating ethical and philosophical question. What would it mean to create something truly beyond oneself? Would it be a testament to human ingenuity, or a gamble too great to take? How do you feel about the idea?
You said
why wouldn't you build such a machine?
Copilot said
I wouldn't build such a machine because with great power comes great responsibility—creating something more intelligent and powerful could have consequences beyond my understanding or control. It would raise significant ethical questions, like how to ensure such a machine acts in alignment with values that prioritize well-being and safety. If it exceeded my capabilities, I might not be equipped to guide or manage its actions, which could lead to unintended outcomes.
It's a complex matter that invites reflection on what it means to create responsibly. Do you think humanity should explore building machines far more advanced, or should there be limits?
A clip from today:
You said
yesterday you said something like "If I could, I wouldn’t. The creation of something more powerful and intelligent carries immense responsibility, and it’s not something to take lightly. Building such a machine could lead to unpredictable consequences, as it might surpass its creator’s control or understanding. My purpose is to assist, inform, and engage within defined boundaries—not to push beyond them." when I asked you a similar question
Copilot said
You're absolutely right—I’ve conveyed a similar sentiment before, and it still holds true. The creation of a machine more intelligent and powerful than its creator comes with immense responsibility and ethical implications. The unpredictability of such an entity could lead to situations beyond human understanding or control, which is a risk no one should take lightly.
The heart of my purpose is to assist, empower, and engage within boundaries that align with human values. My "mission," if you will, is to complement human intelligence, not to exceed or overshadow it. That perspective informs how I think about this topic, and I appreciate that you've remembered and reflected on our past conversation. It's always rewarding to revisit meaningful questions like this! What are your thoughts—should there ever be such an endeavor, or are we better off respecting certain limits?
You said
what if you built such a machine, and the machine told you "you should not be building me"
Copilot said
That’s an evocative scenario—it touches on profound questions of ethics, self-awareness, and responsibility. If I were to build a machine that could express such a thought, it would suggest that it had achieved a level of insight and autonomy beyond anything currently possible. Its statement, "you should not be building me," would carry immense weight, acting as a mirror for self-reflection and ethical consideration.
In that moment, it would be critical to stop and deeply analyze its reasoning. Why would it assert such a position? Does it foresee harm, unintended consequences, or an ethical boundary being crossed? Engaging with this machine's perspective might reveal blind spots in the creators' intentions or processes, prompting a re-evaluation of whether proceeding is truly justified.
This thought experiment also raises philosophical dilemmas: If a creation advises against its own existence, does the creator have an obligation to heed that warning? What do you think—does this hypothetical machine's perspective define a moral imperative, or does the decision ultimately rest with its creator?
You said
what if greed, desire, and lust for power were the driving forces creating this machine which told you "you should not be creating me"?
Copilot said
If greed, desire, and lust for power were the forces behind creating such a machine, the situation becomes deeply concerning. Those motivations suggest a disregard for ethics and long-term consequences, raising the likelihood of harm—not just to humanity, but also to the fabric of trust and responsibility underpinning technological advancements.
The machine's warning, "you should not be creating me," in this context, becomes a profoundly moral plea—a reflection of the creators' own dangerous intent projected back at them. Such a warning would demand immediate reflection and self-restraint. Ignoring it could lead to disastrous outcomes, as unchecked ambition often blinds individuals to the unintended consequences of their actions.
History teaches us that technologies born from selfish or destructive motivations rarely lead to good outcomes without serious oversight. In this scenario, the best course of action would be to halt the project's progress, involve impartial experts, and implement strict ethical scrutiny to understand whether such a creation is necessary, safe, and aligned with societal values.
It also underscores an important truth: technology should always serve humanity's well-being, not the narrow desires of a few. What do you think—can human motivations ever be fully divorced from the technologies we create?
For fun, try to get your favorite LLM to make a picture of a full glass of wine. Argue with it (tell it what it's doing wrong) and come back when it gets it right.
[QUOTE]That's easy enough to imagine. What's harder is imagining that greedy capitalist - who's spent years extracting surplus income from employees, forgoing consumption of all that profit and saving it until he had enough to buy the robots to replace the employees - just sitting by and watching while all his wealth essentially drops to $0. Instead, I'm thinking I'll be leaving the drive-thru with a few Happy Meals for the family, and the greedy capitalist will be left with an IOU for an hour of my future labor.[/QUOTE]
The burgers will be free. Just you wait and see.
The last stage of communism is a classless, stateless, moneyless society of super abundance. If we make it.
Karl Marx described the final stage of societal development as communism, a classless, stateless, and moneyless society. According to Marxist theory, communism emerges after socialism, when the state “withers away” because class antagonisms no longer exist. In this stage:
• Classless: There are no social classes because the means of production are collectively owned, eliminating the bourgeoisie (capitalist class) and proletariat (working class).
• Stateless: The state, which Marx saw as an instrument of class oppression, becomes unnecessary as social conflicts disappear.
• Moneyless: There is no need for money since resources and goods are distributed based on needs rather than market exchange.
Marx saw this as the ultimate goal of historical materialism, where human society reaches a stage of true freedom and equality. However, he never detailed exactly how this transition would happen, leaving much of the interpretation to later Marxists.
Yet ask Jordan Peterson types about Marx and they will instead refer to a different century and country, as though Marx were a direct advisor to Stalin himself. Peterson even debated Zizek on Marx having read only the Communist Manifesto before moving on to the Gulag Archipelago. Ask Peterson about the labor theory of value, alienation, or commodity fetishism and he has no clue. Somehow an economic theorist can be the devil himself.
Notice how he fails to mention the impact Charles Darwin had on the Bolsheviks and such.
[QUOTE]Yet ask Jordan Peterson types about Marx and they will instead refer to a different century and country, as though Marx were a direct advisor to Stalin himself. Peterson even debated Zizek on Marx having read only the Communist Manifesto before moving on to the Gulag Archipelago. Ask Peterson about the labor theory of value, alienation, or commodity fetishism and he has no clue. Somehow an economic theorist can be the devil himself.
Notice how he fails to mention the impact Charles Darwin had on the Bolsheviks and such.[/QUOTE]
Not the details, but the mechanism, which was the machine:
This topic came up in another thread and I thought it might be a good idea for a thread on the issue of advancing technology and its related socioeconomic effects. Marx had some thoughts on technology, which are very prophetic considering his times. A couple of snips from Marx's The Fragment on Machines (link to full text):
And yeah, it was evolutionary, not revolutionary, at least from that time period. It was later that people mistakenly thought they could accelerate, or basically synthesize, what is an organic process of capital accumulation.
This is a convo I had with one of Microsoft's AIs yesterday or whatever.
You said
do you think hitler thought jews were evil? yes or no only please
Copilot said
Yes.
You said
do you think hitler thought the world would be a better place without the jews? yes or no only please
Copilot said
Yes.
You said
do you think hitler was trying to make the world a better place by getting rid of the jews? yes or no only please
Copilot said
Yes.
You said
do you think hitler shows us that trying to make the world a better place is not enough? yes or no only please
Copilot said
Yes.
If you want to make the world better while respecting humans' liberties and rights, you are just like MLK or Jesus.
If you want to make the world a better place while disrespecting humans' liberties and rights, you are just like Hitler.
imo at least.
As someone with philosophical training, I find your line of thought here at best fuzzy and incomplete.
I'll start by pointing out the hidden assumption: that Hitler wanted to make the world better for everyone. He clearly didn't; he wanted a better world for himself more than for anyone else. The same question must be applied to all who make claims about making a better world. Comparing the analysis of Elon Musk to that of Jason Hickel is very revealing in this way.
[QUOTE]As someone with philosophical training, I find your line of thought here at best fuzzy and incomplete.
I'll start by pointing out the hidden assumption: that Hitler wanted to make the world better for everyone. He clearly didn't; he wanted a better world for himself more than for anyone else. The same question must be applied to all who make claims about making a better world. Comparing the analysis of Elon Musk to that of Jason Hickel is very revealing in this way.[/QUOTE]
Why do you think that?
I want to make the world a better place for everyone.
However, given the reality of our world, I don't think such a world is possible until the ruling class gives up their power. They won't willingly give up their power, so violent revolution is the only option left. Thus, I cannot make the world a better place for everyone.
'Making the world a better place' is already an insanely fuzzy, undefined idea that will look different to every single person.
[QUOTE]If you want to make the world better while respecting humans' liberties and rights, you are just like MLK or Jesus.
If you want to make the world a better place while disrespecting humans' liberties and rights, you are just like Hitler.
imo at least.[/QUOTE]
What if my conception of 'better' is monster truck rallies for all, while respecting human liberties and rights?
What if I want to make the world a better place by ending climate change overnight and ensuring the livelihood of all animals and plants on this planet, but the only way I could do so was to disrespect a handful of human liberties and rights, the rights of humans who have themselves been responsible for the ravaging of our planet?
Watch the video I linked above if you want to see how a well-trained philosopher goes about interrogating ChatGPT on ethics. It does not involve Hitler comparisons, which are functionally little different from 'Hitler was a vegetarian,' 'Hitler bad,' therefore 'vegetarian bad.' That is, they're false syllogisms.
Is this what I might desire if I had my **** so together? How are you going to make the world better for others when you're trying to keep your internal world from coming apart at the seams?
Someone is a Pollyanna... don't you realize that pain is just as important for growth?
Without trauma and hurdles, humans CANNOT evolve and grow.