Can LLMs understand?

I've been having discussions with a lot of skeptics of LLMs and their recent achievements. I am curious to hear what people on this forum think about the topic.

My general argument is this: there is only one way to prove that an LLM truly understands anything. Give it a set of complex, difficult questions it has never seen before, and see whether it can answer them correctly.

If an LLM can take in a set of new questions it has never seen before and arrive at the correct answers, then for all intents and purposes it must understand, at least within the domain of those questions.

The two main arguments I hear back are:

1. There is no such thing as a "new" question. Every question we test it on is just a variation of some question that already exists on the internet, and therefore the LLM is just using "statistics" to parrot an answer to something it has already seen.

I don't think this argument holds much weight, because there are many types of questions whose answers can't simply be inferred from past answers.

2. Even if an LLM could answer every new question perfectly, it still doesn't "understand" because that is a human concept and machines are doing something different.

I find this argument lacking too. It is essentially circular: it defines understanding as a human process, so no machine could ever understand unless we built it out of the same biological matter as humans.

Curious what others think here 😀

12 January 2025 at 01:41 PM

1 Reply



"understand" is semantic cope mechanism and people will find it increasingly difficult to articulate uniqueness of human intelligence
