A thread for unboxing AI
The rapid progression of AI chatbots made me think that we need a thread devoted to a discussion of the impact that AI is likely to have on our politics and our societies. I don't have a specific goal in mind, but here is a list of topics and concerns to start the discussion.
Let's start with the positive.
1. AI has enormous potential to assist with public health.
Like all AI topics, this one is broad, but the short of it is that AIs have enormous potential when it comes to processing big data without bias, improving clinical workflow, and even improving communication with patients.
https://www.forbes.com/sites/bernardmarr...
In a recent study, communications from ChatGPT were assessed as more accurate, detailed, and empathetic than communications from medical professionals.
https://www.upi.com/Health_News/2023/04/...
2. AI has enormous potential to free us from the shackles of work.
OK. That's an overstatement. But the workplace-positive view of AI is that it will eventually eliminate the need for humans to do the sort of drudge work that many humans don't enjoy doing, thereby freeing up time for humans to engage in work that is more rewarding and more "human."
3. AI has enormous potential to help us combat complex problems like global warming.
One of the biggest challenges in dealing with climate issues is that the data set is so enormous and so complex. And if there is one thing that AIs excel at (especially compared to humans), it is synthesizing massive amounts of complex data.
https://hub.jhu.edu/2023/03/07/artificia...
***
But then there is the negative.
4. AI may eliminate a lot of jobs and cause a lot of economic dislocation in the short to medium term.
Even if AI eliminates drudgery in the long run, the road is likely to be bumpy. Goldman Sachs recently released a report warning that AI was likely to cause a significant disruption in the labor market in the coming years, with as many as 300 million jobs affected to some degree.
https://www.cnbc.com/2023/03/28/ai-autom...
Other economists and experts have warned of similar disruptions.
https://www.foxbusiness.com/technology/a...
5. At best, AI will have an uncertain effect on the arts and human creativity.
AI image generation is progressing at an astonishing pace, and everyone from illustrators to film directors seems to be worried about the implications.
https://www.theguardian.com/artanddesign...
Boris Eldagsen recently won a photo contest in the "creative photo" category by surreptitiously submitting an AI-generated "photo." He subsequently refused to accept the award and said that he had submitted the image to start a conversation about AI-generated art.
https://www.scientificamerican.com/artic...
https://www.cnn.com/style/article/ai-pho...
Music and literature are likely on similar trajectories. Over the last several months, the internet has been flooded with AI-generated hip hop in the voices of Jay-Z, Drake and others.
https://www.buzzfeed.com/chrisstokelwalk...
Journalist and author Stephen Marche has experimented with using AIs to write a novel.
https://www.nytimes.com/2023/04/20/books...
6. AI is likely to contribute to the ongoing degradation of objective reality.
Political bias in AI is already a hot-button topic. Some right-wing groups have promised to develop AIs as a counterpoint to what they perceive as left-wing bias in today's chatbots.
https://hub.jhu.edu/2023/03/07/artificia...
And pretty much everyone is worried about the ability of AI to function as a super-sophisticated spam machine, especially on social media.
https://www.vice.com/en/article/5d9bvn/a...
Little wonder, then, that there is a broad consensus that AI will turn our politics into even more of a misinformation shitshow.
https://apnews.com/article/artificial-in...
7. AI may kill us, enslave us, or do something else terrible to us.
This concern has been around ever since AI was first contemplated. This post isn't the place to summarize all the theoretical discussion about how a super-intelligent, unboxed AI might go sideways, but if you want an overview, Nick Bostrom's Superintelligence: Paths, Dangers, Strategies is a reasonable summary of the major concerns.
One of the world's leaders in AI development, Geoff Hinton, recently resigned from Google, citing potential existential risk associated with AI as one of his reasons for doing so.
https://www.forbes.com/sites/craigsmith/...
And numerous high-profile tech billionaires and scientists have signed an open letter urging an immediate six-month pause in AI development.
https://futureoflife.org/open-letter/pau...
8. And even if AI doesn't kill us, it may still creep us the **** out.
A few months ago, Kevin Roose described a conversation he had with Bing's A.I. chatbot, in which the chatbot speculated on the negative things that its "shadow self" might want to do, repeatedly declared its love for Roose, and repeatedly insisted that Roose did not love his wife.
https://www.nytimes.com/2023/02/16/techn...
Roose stated that he found the entire conversation unsettling, not because he thought the AI was sentient, but rather because he found the chatbot's choices about what to say to be creepy and stalkerish.
So these guys secretly submitted AI-generated papers in exams where real teachers were grading mostly real students. 94% went undetected, and the AI performed a tad better than the average student.
Forget school papers. I wonder how many bands are out there playing counterfeit songs.
Here are two predictions.
1) Hedge funds that trade equities or bonds on large exchanges will not exist in 30-50 years.
2) GTA9 or its moral equivalent will cost $100 billion to make (in today's dollars) and will be effectively unscripted. The characters and the story will be entirely AI generated based on how the user is playing the game, and no two gaming experiences will ever be close to identical.
bold 😉
Virtual employees could join workforces this year and transform how companies work, according to the chief executive of OpenAI.
The first artificial intelligence agents may start working for organisations this year, wrote Sam Altman, as AI firms push for uses that generate returns on substantial investment in the technology.
Microsoft, the biggest backer of the company behind ChatGPT, has already announced the introduction of AI agents – tools that can carry out tasks autonomously – with the blue-chip consulting firm McKinsey among the early adopters.
“We believe that, in 2025, we may see the first AI agents ‘join the workforce’ and materially change the output of companies,” wrote Altman in a blogpost published on Monday.
McKinsey, for instance, is building an agent to process new client inquiries by carrying out tasks such as scheduling follow-up meetings. The consulting firm has predicted that by 2030, activities accounting for up to 30% of hours worked across the US economy could be automated.
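For the curious, here's a minimal toy sketch of what an inquiry-processing "agent" loop might look like. To be clear, this is not McKinsey's or OpenAI's actual system; the classify_inquiry and schedule_followup helpers are hypothetical stand-ins for a language-model call and a calendar API.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Inquiry:
    sender: str
    text: str

def classify_inquiry(inquiry: Inquiry) -> str:
    """Hypothetical stand-in for a language-model call that labels the inquiry.
    A real agent would send the text to a model; here we use keyword rules."""
    text = inquiry.text.lower()
    if "meeting" in text or "call" in text:
        return "schedule_followup"
    if "invoice" in text or "billing" in text:
        return "route_to_billing"
    return "needs_human"

def schedule_followup(inquiry: Inquiry) -> str:
    """Hypothetical stand-in for a calendar API; just proposes a slot."""
    slot = datetime.now() + timedelta(days=2)
    return f"Proposed follow-up with {inquiry.sender} on {slot:%Y-%m-%d %H:%M}"

def run_agent(inbox: list[Inquiry]) -> list[str]:
    """Tiny agent loop: classify each inquiry, then act or escalate."""
    actions = []
    for inquiry in inbox:
        intent = classify_inquiry(inquiry)
        if intent == "schedule_followup":
            actions.append(schedule_followup(inquiry))
        else:
            actions.append(f"Escalate '{inquiry.text[:30]}' ({intent})")
    return actions

if __name__ == "__main__":
    inbox = [Inquiry("Acme Corp", "Can we set up a call about the proposal?"),
             Inquiry("Beta LLC", "Question about our last invoice")]
    for line in run_agent(inbox):
        print(line)
```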
Whenever you see the random people offering helpful tips and hints that aren't very helpful on Microsoft support forums, it's an AI-generated agent that posted the response. But since Microsoft knows it'll often be wrong and give bad advice, they distanced themselves from it by hiring a 3rd party to provide the info.
I wasn't claiming that the predictions were bold. It's just a question of time for both imo.
The computing resources and energy requirements for the second prediction would be enormous.
The 2nd prediction wasn't there when I responded. Also I have no idea what it is.
Edit: I had no idea, but I do now. ChatGPT has told me. I believe it will require massively less computational power/energy than you think.
I think important games having AI-tailored lines that change with user input is going to happen as well, but I am not sure it's going to cost that much. We can presume the cost per token is going to decrease according to Moore's-law-like dynamics, and/or decision trees can be pruned a lot at some point (models made leaner and so on), especially for a game, where you can cut a lot of stuff out without losing much in-game quality.
Right now the race among model-makers is to get the best results while disregarding cost efficiency per token, so we see a lot of bloat. But when/if a gaming company has to serve millions of active gamers at the same time, each generating hundreds of AI accesses, they will care about efficiency enough to bring costs per access down a lot.
There is already an "underground" movement of people trying to reduce the complexity of model weights while losing as little quality as possible (usually that's about making models runnable on their own home hardware).
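FWIW, the core trick that "underground" crowd leans on, quantization, is easy to sketch. This is a toy illustration only (not any particular library's actual scheme): round float32 weights to int8 with a single scale factor and see how much storage you save and how much precision you give up.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Map float32 weights onto 255 int8 levels with one scale factor.
    Real schemes (per-channel scales, 4-bit groups, etc.) are fancier,
    but the idea is the same: store small integers, keep one scale."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 weights from the stored integers."""
    return q.astype(np.float32) * scale

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=(1024, 1024)).astype(np.float32)  # fake weight matrix
    q, scale = quantize_int8(w)
    w_hat = dequantize(q, scale)
    err = np.abs(w - w_hat).mean()
    print(f"Storage: {w.nbytes / 1e6:.1f} MB -> {q.nbytes / 1e6:.1f} MB")
    print(f"Mean absolute rounding error: {err:.5f}")
```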
Here are two predictions.
1) Hedge funds that trade equities or bonds on large exchanges will not exist in 30-50 years.
2) GTA9 or its moral equivalent will cost $100 billion to make (in today's dollars) and will be effectively unscripted. The characters and the story will be entirely AI generated based on how the user is playing the game, and no two gaming experiences will ever be close to identical.
What do you think will have happened to the hedge funds?
There must come a point where automated interaction with the financial markets becomes so destructive to the underlying integrity of said markets that they'll no longer serve their intended purpose. We're probably closer to this than 30-50 years. It's already mostly robots trading with robots, and once the trading robots begin talking to the social media robots to collaborate on manipulating the remaining meatbags with money, WAAF. Truly. This will be like the arms race between game developers and the people who write cheating/farming bots for the games. The cheaters take a few glancing blows from time to time, but the game devs always lose in the end. There's too much money to be made on the bad actor side for them to ever give up.
So, did the Hedge Funds in your prediction get made obsolete by robots, or was their very existence outlawed by human regulators trying to claw back control from the machines?
Admittedly, I haven't played a GTA game since the original back in the late 90s, but I think what you're describing in #2 is no longer GTA and simply falls into the category of alternate reality. By the time the technology you're hinting at exists in a form that wouldn't feel shitty or repetitive to "play", I think it'll just be the equivalent of a Star Trek holodeck and will simply be a framework upon which people construct narratives and experiences. That framework might cost your $100 billion to create, but it'll be effectively the endgame for that type of human-to-computer interaction. You'll pay a subscription for access to the framework, and then probably another cost for pre-configured narratives from your favorite author. Or you can use the framework to make your own. For every story about elves and dragons people create, there will be 20 that involve a harem. The human sets the table and then the AI takes over to finish serving the meal.
Stating it'll cost $100 billion in today's dollars to create means that it has to be the true final form of "video game." Unless something changes drastically, there's no way to recoup those development costs from the current video game audience. The game would cost more than the device someone is playing it on. The Wall St robots will have already pilfered every last dime from the pockets of the average citizen, so who will have any money left to pay for a video game that cost upwards of 400 times more to make than the most expensive game to date?
Here are two predictions.
1) Hedge funds that trade equities or bonds on large exchanges will not exist in 30-50 years.
2) GTA9 or its moral equivalent will cost $100 billion to make (in today's dollars) and will be effectively unscripted. The characters and the story will be entirely AI generated based on how the user is playing the game, and no two gaming experiences will ever be close to identical.
What do you think will have happened to the hedge funds?
There must come a point where automated interaction with the financial markets becomes so destructive to the underlying integrity of said markets that they'll no longer serve their intended purpose. ...
On 1, you are correct that there is a ton of algorithmic trading right now, but the algorithms are relatively actively managed afaik, and one of the main advantages is still speed. I am envisioning a world in which AI trains itself to trade, and even the "owners" don't know exactly what the AI's strategy is or how it came up with the strategy. In other words, the tech would teach itself to trade better than any human (or any algorithm created by a human) in much the same way that it taught itself to play Go better than any human player. There probably would be a period where the tech was trading alongside actual humans and making a killing. But once it became clear that the humans could never outperform the tech, then the financial model for most hedge funds would fall apart. Why pay hedge fund fees when there is no reason to believe that any hedge fund can outperform the cheaper AI-managed funds on offer at any large bank?
On 2, yes, I am imagining something that effectively would be a hybrid between a game and an alternate reality. It would be close to the final form of a video game, except there would be a bounded world and a fixed number of characters (something akin to simulated Westworld). The next generation would be a game/AR that would generate any sort of world that you want.
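To illustrate what I mean by the tech teaching itself, here's a toy, numpy-only sketch: a tiny tabular agent learns long/flat/short positions purely from reward on a synthetic return series, so the resulting "strategy" lives in a table that nobody hand-wrote. Obviously this bears no resemblance to anything a real fund would run; the weak momentum effect is baked into the fake data just so there's something to learn.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_returns(n=5000):
    """Synthetic return series with a weak momentum effect baked in,
    so there is *something* learnable. Purely illustrative."""
    r = np.zeros(n)
    for t in range(1, n):
        r[t] = 0.05 * r[t - 1] + 0.01 * rng.standard_normal()
    return r

def state_of(r):
    """Discretize yesterday's return into 3 states: down, flat, up."""
    if r < -0.002:
        return 0
    if r > 0.002:
        return 2
    return 1

# Tabular learning: 3 states x 3 actions (short, flat, long).
# The "strategy" ends up encoded in Q; nobody writes a trading rule.
Q = np.zeros((3, 3))
alpha, eps = 0.1, 0.1
returns = make_returns()
for t in range(1, len(returns) - 1):
    s = state_of(returns[t])
    a = rng.integers(3) if rng.random() < eps else int(Q[s].argmax())
    position = a - 1                       # map action to -1, 0, +1
    reward = position * returns[t + 1]     # next-period P&L
    Q[s, a] += alpha * (reward - Q[s, a])  # one-step, bandit-style update

print("Learned Q-table (rows: down/flat/up state, cols: short/flat/long):")
print(np.round(Q, 5))
print("Greedy position per state:", Q.argmax(axis=1) - 1)
```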
On 1, you are correct that there is a ton of algorithmic trading right now, but the algorithms are relatively actively managed afaik, and one of the main advantages is still speed. I am envisioning a world in which AI trains itself to trade, and even the "owners" don't know exactly what the AI's strategy is or how it came up with the strategy. ...
I think the bold is already true to some decent extent. I think the best-performing funds like Medallion and others are already using "black box" trading models that operate from pattern recognition and that end up giving trade advice (or automatically executing trades) for reasons unknown to the users of those models.
What makes you think, though, that the best AI-based trading models wouldn't be proprietary anyway? I mean, implicit here is the idea that any large bank (but at that point, anyone with significant amounts of compute) can end up having "the best" trading AI model, but why would that be the case? Some models should be better than others, and their owners will be able to charge high fees anyway. Humans will be less relevant but will still exist (after all, how do you judge which model is best? It could be result-orientedness and so on. So "sellers" will still be required to convince people to put their money under one model rather than another). And technicians will still be required for a while, and the like.
If at some point some completely autonomous model (in the sense that absolutely no supervision at any point is required) can make more money than the best humans + model, you have such an upheaval ongoing in so many other sectors of society (because the same would be true for the vast majority, if not all, human endeavours) that we would be fully in the singularity. Which I don't think is impossible, but I think it will come later than median estimates put it (which are around 205x-206x).
First, I said this prediction was 30-50 years in the future. You need to have a pretty big edge in order to justify hedge fund fees. It's hard for me to imagine any hedge fund that trades equities, for example, having that sort of edge in 2055. Even if you have a slightly better proprietary mousetrap than banks that have infinitely more resources, will it be better enough in 2055 to justify a typical hedge fund financial structure? I am skeptical. Human beings will still exist in the financial sector, of course, but I'm speculating that they will be perceived as much closer to fungible than they are now. And if they are perceived as mostly fungible, they won't be compensated nearly as well as they are now.
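To put rough numbers on "a pretty big edge": assuming a classic 2-and-20 fee structure versus a cheap index or AI-managed fund charging, say, 0.2%, you can back out how much gross outperformance a hedge fund needs just to tie. The 7% market return and the fee levels below are assumptions for illustration, not data.

```python
def net_hedge_fund(gross: float, mgmt: float = 0.02, perf: float = 0.20) -> float:
    """Investor's net return under an assumed 2-and-20 fee structure
    (performance fee charged on the profit left after the management fee)."""
    after_mgmt = gross - mgmt
    return after_mgmt - perf * max(after_mgmt, 0.0)

def net_index(gross: float, fee: float = 0.002) -> float:
    """Investor's net return in a cheap index / AI-managed fund (assumed fee)."""
    return gross - fee

if __name__ == "__main__":
    market = 0.07  # assume the cheap fund simply earns a 7% market return
    # Search for the gross return the hedge fund must earn to match it net of fees.
    gross = market
    while net_hedge_fund(gross) < net_index(market):
        gross += 0.0001
    print(f"Index net return: {net_index(market):.2%}")
    print(f"Hedge fund needs roughly {gross:.2%} gross "
          f"({gross - market:.2%} of edge) just to tie.")
```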
On 1, you are correct that there is a ton of algorithmic trading right now, but the algorithms are relatively actively managed afaik, and one of the main advantages is still speed. I am envisioning a world in which AI trains itself to trade, and even the "owners" don't know exactly what the AI's strategy is or how it came up with the strategy. ...
I'm not sure if this is what you mean, but games can appear unbounded while actually generating the world 'as the players go'. It won't take that much power to generate far faster than the player can go. This is not new (I was doing it in the 80s) but I don't know if any current major games do it.
Similarly, there won't be huge coding of every character possibility. The AI will just have to know how to generate behavior in the 'image' of that character.
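A minimal sketch of what "generate behavior in the image of that character" could mean mechanically: hand a persona card plus the current game state to a dialogue model and get an unscripted line back. The generate_line function here is a hypothetical stand-in (a canned template picker) for whatever model a real game would actually call.

```python
import random

# Persona "cards" the game ships with instead of pre-recorded dialogue trees.
PERSONAS = {
    "street_vendor": "cheerful, gossipy, obsessed with the weather",
    "detective": "terse, suspicious, fixated on the player's last crime",
}

def generate_line(persona: str, game_state: dict) -> str:
    """Hypothetical stand-in for a language-model call.
    A real implementation would prompt a model with the persona and the
    game state; here we just fill templates so the sketch runs offline."""
    templates = [
        "({persona}) Heard you were near {place} when it happened...",
        "({persona}) Word travels fast about what you did at {place}.",
    ]
    return random.choice(templates).format(
        persona=PERSONAS[persona], place=game_state["last_location"]
    )

if __name__ == "__main__":
    state = {"last_location": "the docks", "wanted_level": 3}
    # Same characters, a different line every playthrough -- no recorded audio.
    print(generate_line("detective", state))
    print(generate_line("street_vendor", state))
```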
"Infinite" procedural generation exists currently and is limited by hardware. But that's much different than open ended content generation like Rococo is talking about.
You can technically reach the end of a Factorio or Minecraft map, but you haven't really reached the end so much as simply found the upper limit of save file size or the game engine running into floating point errors.
Even if an AI could create realistic narratives and engaging experiences based on historical user input, that new data needs to be created, optimized, and stored on the user's device in real time. That's assuming we don't hamstring the AI's creativity by limiting it to just the narrative. How immersive will it feel to be told new stories in front of the same 200 set pieces you've seen a million times?
We're already pretty much at the limit of what silicon is capable of, so none of this can happen without large leaps in hardware design.
First, I said this prediction was 30-50 years in the future. You need to have a pretty big edge in order to justify hedge fund fees. It's hard for me to imagine any hedge fund that trades equities, for example, having that sort of edge in 2055. Even if you have a slightly better proprietary mousetrap than banks that have infinitely more resources, will it be better enough in 2055 to justify a typical hedge fund financial structure? I am skeptical. ...
Bold for the industry is kinda false.
This was in 2021, and it was already 10 years of the S&P 500 doing better than hedge funds.
https://www.aei.org/carpe-diem/the-sp-50...
Then we know what happened recently.
Basically the hedge fund industry already exists without delivering alpha.
I'm not sure if this is what you mean, but games can appear unbounded while actually generating the world 'as the players go'. It won't take that much power to generate far faster than the player can go. This is not new (I was doing it in the 80s) but I don't know if any current major games do it.
Similarly, there won't be huge coding of every character possibility. The AI will just have to know how to generate behavior in the 'image' of that character.
Tons of roguelike games do that per level, to have new maps every time (within constraints like the number of bosses met per floor, or chests, or tiles or whatever)
I assume it's pretty standard stuff
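Roughly like this, and the standard trick is easy to sketch: derive everything from a seed, and enforce the constraints (boss rooms, chest count) while placing rooms. The names and numbers below are made up for illustration.

```python
import random

def generate_floor(seed: int, rooms: int = 12, bosses: int = 1, chests: int = 3):
    """Generate one roguelike floor from a seed: same seed -> same map,
    new seed -> new map, but the constraints always hold."""
    rng = random.Random(seed)
    floor = [{"id": i, "kind": "normal", "loot": None} for i in range(rooms)]
    # Enforce constraints: exactly `bosses` boss rooms and `chests` chests.
    special = rng.sample(range(1, rooms), bosses + chests)  # room 0 is the entrance
    for idx in special[:bosses]:
        floor[idx]["kind"] = "boss"
    for idx in special[bosses:]:
        floor[idx]["loot"] = rng.choice(["gold", "potion", "weapon"])
    # Connect rooms along a shuffled path so every room is reachable.
    order = list(range(rooms))
    rng.shuffle(order)
    edges = list(zip(order, order[1:]))
    return floor, edges

if __name__ == "__main__":
    floor, edges = generate_floor(seed=42)
    print([room["kind"] for room in floor])
    print([room["loot"] for room in floor if room["loot"]])
    print("connections:", edges[:5], "...")
```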
Back when I were a lad, we lived and breathed lack of memory space even more than lack of speed.
Bold for the industry is kinda false.
This was in 2021, and it was already 10 years of the S&P 500 doing better than hedge funds.
https://www.aei.org/carpe-diem/the-sp-50...
Then we know what happened recently.
Basically the hedge fund industry already exists without delivering alpha.
Yep, the best support for them existing for anywhere near as long as mooted is that it's sales far more than alpha.
But personal AI hedge funds will likely take the sales, even assuming there are any customers left. Just leave it to your watch or your idog or whatever to do the investing.
I'm not sure if this is what you mean, but games can appear unbounded while actually generating the world 'as the players go'. It won't take that much power to generate far faster than the player can go. This is not new (I was doing it in the 80s) but I don't know if any current major games do it.
Similarly, there won't be huge coding of every character possibility. The AI will just have to know how to generate behavior in the 'image' of that character.
Even games that appear unbounded are constrained in what they will generate unless modded (e.g. Minecraft). And there is no game in the world that combines endless world generation with state of the art graphics and character interaction. The largest open world games ever made with good graphics -- games like Elden Ring, RDR2, and Horizon Zero Dawn -- still involve a fixed map and a fixed number of NPCs. You can go literally anywhere on the map, but you can't go off the map. And every possible line of dialogue for the main characters has been recorded by a voice actor.
Bold for the industry is kinda false.
This was in 2021, and it was already 10 years of the S&P 500 doing better than hedge funds.
https://www.aei.org/carpe-diem/the-sp-50...
Then we know what happened recently.
Basically the hedge fund industry already exists without delivering alpha.
Obviously. Everyone knows that. But investors in hedge funds don't imagine that they are investing in a market basket or random sample of hedge funds (even if they are). They imagine that they are investing with the smart guys. I don't think people will be willing to continue paying the fees if it becomes common knowledge that no fund can have enough of an edge to justify the financial model.
Tons of roguelike games do that per level, to have new maps every time (within constraints like the number of bosses met per floor, or chests, or tiles or whatever)
To a degree, yes, but it's very modular, and all the maps are just a reorganization or subset of a relatively small number of modules that are snapped together in a different configuration.
"Infinite" procedural generation exists currently and is limited by hardware. But that's much different than open ended content generation like Rococo is talking about.
You can technically reach the end of a Factorio or Minecraft map, but you haven't really reached the end so much as simply found the upper limit of save file size or the game engine running into floating point errors.
Even if an AI could create realistic narratives and engaging experiences based on historical user input, that new data needs to be created, optimized, and stored on the user's device in real time. ...
This.
Even games that appear unbounded are constrained in what they will generate unless modded (e.g. Minecraft). And there is no game in the world that combines endless world generation with state of the art graphics and character interaction. The largest open world games ever made with good graphics -- games like Elden Ring, RDR2, and Horizon Zero Dawn -- still involve a fixed map and a fixed number of NPCs. You can go literally anywhere on the map, but you can't go off the map. ...
That will change so fast if computer storage/power is the constraint. It's a problem that already has a solution. The idea that the characters won't generate dialogue in character, rather than have it prerecorded, is kinda odd grasping on to the past.
It's why I think you massively overestimate the computer resources required.
And seriously, lol at the idea we are near the end of silicon capacity (even if we don't find anything else). wtf?
It also seems that generated worlds are already becoming a thing in games, including Minecraft.
List of games using procedural generation