A thread for unboxing AI
The rapid progression of AI chatbots made me think that we need a thread devoted to a discussion of the impact that AI is likely to have on our politics and our societies. I don't have a specific goal in mind, but here is a list of topics and concerns to start the discussion.
Let's start with the positive.
1. AI has enormous potential to assist with public health.
Like all AI topics, this one is broad, but the short of it is that AIs have enormous potential when it comes to processing big data without bias, improving clinical workflow, and even improving communication with patients.
https://www.forbes.com/sites/bernardmarr...
In a recent study, communications from ChatGPT were assessed as more accurate, detailed, and empathetic than communications from medical professionals.
https://www.upi.com/Health_News/2023/04/...
2. AI has enormous potential to free us from the shackles of work.
OK. That's an overstatement. But the workplace-positive view of AI is that it will eventually eliminate the need for humans to do the sort of drudge work that many humans don't enjoy doing, thereby freeing up time for humans to engage in work that is more rewarding and more "human."
3. AI has enormous potential to help us combat complex problems like global warming.
One of the biggest challenges in dealing with climate issues is that the data set is so enormous and so complex. And if there is one thing that AIs excel at (especially compared to humans), it is synthesizing massive amounts of complex data.
https://hub.jhu.edu/2023/03/07/artificia...
***
But then there is the negative.
4. AI may eliminate a lot of jobs and cause a lot of economic dislocation in the short to medium term.
Even if AI eliminates drudgery in the long run, the road is likely to be bumpy. Goldman Sachs recently released a report warning that AI was likely to cause a significant disruption in the labor market in the coming years, with as many as 300 million jobs affected to some degree.
https://www.cnbc.com/2023/03/28/ai-autom...
Other economists and experts have warned of similar disruptions.
https://www.foxbusiness.com/technology/a...
5. At best, AI will have an uncertain effect on the arts and human creativity.
AI image generation is progressing at an astonishing pace, and everyone from illustrators to film directors seems to be worried about the implications.
https://www.theguardian.com/artanddesign...
Boris Eldagsen recently won a photo contest in the "creative photo" category by surreptitiously submitting an AI-generated "photo." He subsequently refused to accept the award and said that he had submitted the image to start a conversation about AI-generated art.
https://www.scientificamerican.com/artic...
https://www.cnn.com/style/article/ai-pho...
Music and literature are likely on similar trajectories. Over the last several months, the internet has been flooded with AI-generated hip hop in the voices of Jay-Z, Drake and others.
https://www.buzzfeed.com/chrisstokelwalk...
Journalist and author Stephen Marche has experimented with using AIs to write a novel.
https://www.nytimes.com/2023/04/20/books...
6. AI is likely to contribute to the ongoing degradation of objective reality.
Political bias in AI is already a hot-button topic. Some right-wing groups have promised to develop AIs that are a counterpoint to what they perceive as left-wing bias in today's chatbots.
https://hub.jhu.edu/2023/03/07/artificia...
And pretty much everyone is worried about the ability of AI to function as a super-sophisticated spam machine, especially on social media.
https://www.vice.com/en/article/5d9bvn/a...
Little wonder, then, that there is a broad consensus that AI will turn our politics into even more of a misinformation shitshow.
https://apnews.com/article/artificial-in...
7. AI may kill us, enslave us, or do something else terrible to us.
This concern has been around for as long as AI has been contemplated. This post isn't the place to summarize all the theoretical discussion about how a super-intelligent, unboxed AI might go sideways, but if you want an overview, Nick Bostrom's Superintelligence: Paths, Dangers, Strategies is a reasonable summary of the major concerns.
One of the world's leaders in AI development, Geoff Hinton, recently resigned from Google, citing potential existential risk associated with AI as one of his reasons for doing so.
https://www.forbes.com/sites/craigsmith/...
And numerous high-profile tech billionaires and scientists have signed an open letter urging an immediate six-month pause in AI development.
https://futureoflife.org/open-letter/pau...
8. And even if AI doesn't kill us, it may still creep us the **** out.
A few months ago, Kevin Roose described a conversation he had with Bing's A.I. chatbot, in which the chatbot speculated on the negative things that its "shadow self" might want to do, repeatedly declared its love for Roose, and repeatedly insisted that Roose did not love his wife.
https://www.nytimes.com/2023/02/16/techn...
Roose stated that he found the entire conversation unsettling, not because he thought the AI was sentient, but rather because he found the chatbot's choices about what to say to be creepy and stalkerish.
Sure, but it's quite possible that, putting it crudely, we will end up in some scenario where it takes a doubling in computing power to add 1 extra IQ point (and gets harder from there).
The problem is one of complexity, chaos, etc.
Technically speaking, with regard to computing philosophy, "strong AI" does imply consciousness or sentience, while "weak AI" does not.
There are several companies working to develop strong AI or AGI, but there's no timeline or even potential goalpost.
It's philosophy and up for debate, but I respect it. Consciousness isn't required for machines to simulate intelligence greater than humans'. That's certain.
Definitions are kinda blurred recently, especially because so much is changing fast and people from different walks of life are part of the process in AI, so they might not all share the same definitions.
A significant number of people use AGI to mean "general human-level AI," but I find it useless, because if we ever reach that, it lasts maybe a week or a month before it gets above human capabilities anyway, so what's the point of giving a name to such a fleeting moment in AI development?
Moreover, for some tasks we already have AI being better than humans, sometimes *a lot* better than the best human, so if/when we ever reach AGI, it will be at human level in some tasks and super-human in many more tasks.
Regardless of further details, we almost all agree it has to be at least at human level at everything humans are decent at, and that might be trickier than we expect (some tasks might be far harder to "learn" with the current approaches, at least at the general level).
That's why timelines are kinda hard to give. AI can already digest Shakespeare's body of work fairly well, and will probably be better than the best human expert on Shakespeare by 2030, while we are very far from having a robot capable of making a bed, cooking a meal, changing a lightbulb, and so on just by watching a YouTube video teaching that.
Maybe the general intelligence required to do some of the physical coordination most of us can do effortlessly can't even come from the training we do with LLMs, no matter the computing power spent on it. Or it can, but it's more complicated than other kinds of intelligence to reproduce.
But please think about what I mentioned before: all it takes for a fundamental change in society and human history forever is an AI stronger than human intelligence at the tasks required to improve AI capabilities.
An AI better at pushing Moore's law further, and better at optimizing training algos.
We could reach it in, say, 2028 or 2033, and then every month could bring a breakthrough.
Turn off autocorrect?
That will avoid your problem, though we understood you anyway.
My writing style returns A.I. vibes from the simulator. I take that as meaning I will exist across generations.
Source is in Italian, not sure if there is an English one yet; try a translator, because this is science-fiction-level real life, currently.
The Italian IRS, together with the fiscal military police, captured a gang of high-level fraudsters who used a quantum-computer-fueled AI to generate impossible-to-detect fake signatures on documents to obtain a huge amount of EU funds (grants), faking the existence of projects that never happened.
The military police used its own proprietary AI to cut through VPNs and an apparently inextricable layer of fake companies related to this big fraud.
That's either gotten carried away or been lost in translation. From the FT:
The EPPO and Italian authorities said the scheme was based on an elaborate network of shell companies, including in Slovakia, Romania and Austria. It said these generated fake corporate balance sheets by using overseas cloud servers, crypto assets and artificial intelligence to “conceal and protect” their activities.
Well, according to multiple Italian newspapers, they used quantum computers to power the AI that generated the fake balance sheets, AND they mimicked fake signatures with actual robot-controlled pencils on actual paper documents lol
Seems a bit overkill, unless you need these signatures to pass some sort of forensic examination. Forging a signature to pass a cursory check by inspection is pretty easy.
I did almost post that they didn't need quantum-computer-fueled AI. They could just ask d2 to do it.
As many of you probably remember, the OpenAI board famously sacked Sam Altman last fall, only to rehire him almost immediately because of the fallout. The OpenAI board was short on details as to why it fired Altman in the first place, saying only that it was concerned about Altman's relative lack of concern about AI safety. The rumor in Silicon Valley was that several board members were spooked by OpenAI's Q-Star project, and in particular by OpenAI's progress in developing a large language model that could do basic math with a high level of proficiency.
For reasons that I won't bother to explain in detail, large language models like ChatGPT have historically struggled with mathematical reasoning.
https://theconversation.com/why-openai-d...
A few minutes ago, I tested ChatGPT on some basic statistical problems. It correctly solved the Monty Hall problem and Bertrand's box paradox and gave a coherent explanation of how it arrived at the answers.
It also calculated the risk that a gambler with a bankroll of $1000 would go broke before placing twenty bets at a unit size of $100.
Interestingly, for the last question, it explained the basic principles behind a brute-force calculation of risk of ruin, but then stated that, because of the complexity of the calculation, it was providing an answer derived from what amounted to a Monte Carlo simulation of 100,000 scenarios.
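For reference, here's roughly what such a Monte Carlo estimate looks like. This is a sketch, not ChatGPT's actual code, and the 50% win probability on even-money bets is my assumption, since the post doesn't say what odds were given to the model:

```python
import random

def risk_of_ruin(bankroll=1000, unit=100, num_bets=20, p_win=0.5, trials=100_000):
    """Estimate by simulation the chance of going broke before finishing num_bets."""
    ruined = 0
    for _ in range(trials):
        b = bankroll
        for _ in range(num_bets):
            b += unit if random.random() < p_win else -unit
            if b <= 0:  # busted before completing the twenty bets
                ruined += 1
                break
    return ruined / trials

print(risk_of_ruin())  # estimated probability of ruin under these assumptions
```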
So the board tried to save us from Terminators, but TotallyNotSkynet's employees knew exactly what they wanted and who could deliver it.
Good luck, Gen Alpha. Humanity is counting on you.
AI is selling homework, nurses, and girlfriends so far. Once all the horny recent grads have those needs satisfied, it's going to be on to smiting all those people who called them incels, and that means Terminators.
WAAF.
ChatGPT seemed to struggle with the following question:
There are 256 unique hats on a table. The hats come in four different sizes, four different colors, four different materials, and with four different logos. Jimmy has a favorite hat. There are ten people ahead of Jimmy in line. Each person ahead of Jimmy in line will select one hat and will try and choose Jimmy's favorite hat. When a person in line ahead of Jimmy selects a hat with at least one of the attributes of Jimmy's favorite, Jimmy must raise his hand. Each person in line knows that, when Jimmy raises his hand, it means that the hat has at least one of the attributes of Jimmy's favorite hat. Each person in line ahead of Jimmy takes Jimmy's responses into account when choosing a hat. Each person ahead of Jimmy in line pursues an optimal strategy to enable the group of 10 to have the greatest odds of choosing Jimmy's favorite hat. What are the odds that Jimmy's favorite hat will still be on the table after all ten people ahead of Jimmy in line have chosen a hat?
I concede that the exact answer to the question is complicated, in part because I think optimal strategy will vary depending on the information that people in line receive based on previous selections.
d2, uke, Sklansky,
How would you calculate the answer to this question?
How does it do with this oldie?
There are 100 perfectly logical green-eyed dragons on an island. A dragon cannot see the colour of its own eyes but can see the eye colour of the other dragons. The dragons have a rule that they cannot talk with each other about the colour of their eyes. Should any dragon get to know for sure that its eye colour is green, that dragon gets transformed at midnight into a sparrow. A visitor who can only speak the truth tells them that at least one of them has green eyes. What happens next?
I don't believe OpenAI controls its own backend.
Also, Chomsky says this doesn't lead to AI or anything impressive:
Nick Szabo, the top contender for creating the most secure system on the planet, seems to agree:
Bump:
Can anyone who understands programming/AI better than me walk through exactly how one "teaches" AI to be an ideological bad faith actor.
And question for everyone, does anyone see how this can be problematic? Elon Musk is very adamant that teaching/prompting AI to lie for any reason is a very bad idea; but maybe he is just crazy and over-reacting and it is no biggie. I dunno.
Don't know what in particular, but they learn the same way humans do.
It's a model of reality built on data inputs.
Musk recently shared an AI-crafted deepfake for political purposes, so he is not really someone to listen to on the subject.
But basically, an AI is built by scouring through sets of data that are run through internal tests; depending on the results of those tests, certain connections are made stronger or weaker. You can also purposefully introduce "bias" in order to block your AI from going in certain directions. So you can make a garbage AI by having garbage data, garbage tests, or garbage bias.
Ideally, this combines to make an AI that learns from experts, tests what it learns at a speed a human never could, and is ultimately controlled via oversight to stop it from making basic errors. To some extent that works; AI can deliver good stuff.
However, it doesn't always work. Bad data sets, poorly written tests, and sloppily introduced bias can, on their own or in combination, make the wrong connections strong or the right connections weak. So now you suddenly have an AI that praises mass-murdering terrorists, paints the Waffen SS as racially diverse, or refuses to tell you who won the 2020 US presidential election.
FWIW, the three above are examples of AI shenanigans from the real world.
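To make the "connections are made stronger or weaker" idea concrete, here is a toy, single-weight version of that loop; real systems adjust billions of weights, and the data below is invented:

```python
# Toy sketch of the strengthen/weaken loop described above, with one "connection."
def train(data, steps=1000, lr=0.01):
    w = 0.0  # a single connection strength
    for _ in range(steps):
        for x, target in data:
            pred = w * x            # the model's guess
            error = pred - target   # the "internal test"
            w -= lr * error * x     # strengthen or weaken the connection
    return w

# The same loop learns whatever the data rewards: garbage in, garbage out.
good_data = [(1.0, 2.0), (2.0, 4.0)]   # learns w ~= 2
bad_data = [(1.0, -2.0), (2.0, -4.0)]  # happily learns w ~= -2 instead
print(train(good_data), train(bad_data))
```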
It's just a slightly more sophisticated version of the forum's ban on naughty words.
If I say **** **** **** ******* sonofabitch all you'll see is a bunch of asterisks because the site code has intercepted my input and changed the output.
If you ask AI for things "The Man" doesn't want it weighing in on, you get an, "It doesn't look like anything to me" response from the AI. They intercept your request and change the output.
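In code, that analogy is something like the toy sketch below; the blocklist and the canned response hook are invented for illustration:

```python
# Toy sketch of intercepting a request and swapping the output.
BLOCKED_TOPICS = {"naughty words", "forbidden topic"}  # hypothetical blocklist

def moderated_reply(user_prompt: str, model_reply: str) -> str:
    """Return the model's reply unless the prompt touches a blocked topic."""
    if any(topic in user_prompt.lower() for topic in BLOCKED_TOPICS):
        return "It doesn't look like anything to me."
    return model_reply
```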
I have something resembling a solution to the hat puzzle, but I would have no idea where to even start trying to prove that it's the optimal solution; those sorts of proofs are well above my pay grade.
First, assign each attribute value a digit 0-3. E.g. small = 0, medium = 1, large = 2 etc.
Then, assign each attribute a position 1-4. E.g. size = 1, material = 2 etc.
Now we can construct a 4 digit base 4 number which encodes each hat's attributes in positional notation. E.g. 2213 might mean "large, felt, Pepsi logo, green."
Now we have our encoding, we have 256 numbers in base 4 (0000-3333). Jimmy has 1 number in mind. We are going to pick numbers and Jimmy is going to raise his hand when the number we have picked has at least one digit correct, per the conditions of the original problem.
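For concreteness, here's a quick sketch of that encoding; the specific attribute orderings are made up for illustration:

```python
# Encode a hat as a 4-digit base-4 string; the orderings below are invented.
SIZES     = ["small", "medium", "large", "x-large"]  # position 1
MATERIALS = ["cotton", "wool", "felt", "leather"]    # position 2
LOGOS     = ["none", "Pepsi", "Nike", "Adidas"]      # position 3
COLORS    = ["red", "blue", "yellow", "green"]       # position 4

def encode(size, material, logo, color):
    """E.g. ('large', 'felt', 'Pepsi', 'green') -> '2213'."""
    attrs = [(SIZES, size), (MATERIALS, material), (LOGOS, logo), (COLORS, color)]
    return "".join(str(group.index(value)) for group, value in attrs)

print(encode("large", "felt", "Pepsi", "green"))  # -> 2213
```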
The best I could come up with going from here is below.
Use the first 4 guesses to guess 0000,1111,2222,3333. Depending on when Jimmy raises his hand, we will know which distinct digits our number contains. We proceed on a case by case basis.
Case A: 1 distinct digit
There are 4 such numbers so this will happen 4/256 of the time.
WLOG, assume the digit is 1.
Not much to do here, the hat is coded 1111 and we are done.
Case B: 2 distinct digits
There are 84 such numbers so this will happen 84/256 of the time.
WLOG, assume the digits are 1 & 2. There are 14 such numbers.
[2^4 for each digit in each place then subtract the 2 degenerate cases of 1111 and 2222]
We now introduce a "masking" method which we can use to isolate digits. For example, say we want to know whether our number starts with a 1 or a 2. We can do this in 1 guess by guessing a number whose first digit is the one we care about and whose remaining digits are ones we know do not appear. For example, we guess 1333. If Jimmy raises his hand, we know the first digit is 1. If he does not, we know it is 2.
WLOG, assume the first digit is 1.
We have now used 5 guesses and we have 7 candidates remaining:
1112, 1121, 1122, 1211, 1212, 1221, 1222
We can use the masking method to narrow down the second digit on guess 6, the third digit on guess 7 and the 4th digit on guess 8. By guess 9 we are down to one candidate and we are done.
Case C: 3 distinct digits
There are 144 such numbers so this will happen 144/256 of the time.
WLOG, assume the digits are 1,2 & 3. There are 36 such numbers.
[3^4 for each digit, subtract 3 * 14 for the different case B combos and 3 for the different case A combos]
We use the masking method to identify the 1st digit (this will now take 2 guesses). WLOG, assume the first digit is 1. This has left us with 12 options after 6 guesses:
1123,1132,1213,1223,1231,1232,1233,1312,1321,1322,1323,1332
If we use the masking method again for the 2nd digit, we will be left with (worst case) 5 options after 8 guesses.
We can use guess 9 to narrow down the 3rd digit to 1 of 2 possibilities using the masking method.
Guess 10 then leaves us picking at random from (worst case) 3 candidates for a 1 in 3 shot.
Case D: 4 distinct digits.
There are 24 such numbers so this will happen 24/256 of the time. [4! - each place has to have a different digit]
This is really our worst case scenario. We can't use the masking method because we have no spare digits to work with, so we can't isolate anything. I can't really see a better strategy here than picking at random. So, we have 6 guesses from 24 candidates for a 1 in 4 shot.
So, the total probability of success:
(1)(4/256) + (1)(84/256) + (1/3)(144/256) + (1/4)(24/256) = 142/256 ≈ 55.5%. We (as the group of 10) can take an even-money bet on this game and win.
I might well have missed some optimisations in case C and there may well be some strategy I haven't thought of in case D, which will give us better odds. Cases A and B seem pretty straightforward.
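As a sanity check on the case counts and the final arithmetic, here's a quick script; it verifies the numbers above, not the optimality of the strategy:

```python
from math import comb, factorial

# Count 4-digit base-4 codes by how many distinct digits they use.
one = 4                                  # e.g. 1111 (case A)
two = comb(4, 2) * (2**4 - 2)            # 6 * 14 = 84 (case B)
three = comb(4, 3) * (3**4 - 3*14 - 3)   # 4 * 36 = 144 (case C)
four = factorial(4)                      # 24 (case D)
assert one + two + three + four == 256

# Success probability per case, as derived above.
p = (1 * one + 1 * two + (1/3) * three + (1/4) * four) / 256
print(f"{p:.4f}")  # 0.5547, i.e. ~55.5%
```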