Thursday, May 18, 2023
Large language models and the human mind
Artificial intelligence’s large language models (LLMs) are premised on the thesis that processing language provides a promising pathway to producing thought. This presumption is an ontic reduction of Heidegger’s ontological thesis that thinking and language are deeply connected. More precisely: the core premise of LLMs is an ontic reduction of Heidegger’s ontological thesis that real or genuine thinking amounts to listening to the voice of language, to what language has to say and so reveals. The reliance of LLM-based AI on past datasets is likewise an ontic reduction of Heidegger’s ontological thesis that, as historical beings, our past is deeply involved in our understanding, so that real thought is rooted in remembrance and thus in what tradition shows and so reveals. These ontic reductions should warn us. The human mind is not closed; it is open towards wholly new and yet unforeseen possibilities. It does not merely attend to past patterns in order to create similar patterns. A relentless adoption of ontic, large-language-model-based AI could therefore lead to a closing of the human mind.
Labels:
AI,
artificial intelligence,
Heidegger,
language,
large language model,
LLM,
ontical,
ontological,
thought