ChatGPT: deconstructing the debate and moving it forward
In particular, we argue that the discussion about LLMs like ChatGPT reveals and assumes (1) an externalist and instrumentalist view of technology that presents technology as just a tool and, paradoxically, at the same time as having little to do with human users, (2) an anthropocentric and instrumentalist view of language use that assumes that humans are fully in control of language and that language is a tool, and (3) a Platonic distinction between appearance and the real that lies at the heart of Western metaphysics and continues to shape responses to new and emerging technologies. Instead, we argue for a view of the relation between humans, technology, and language in which none is fully in control and all are related and interdependent in the production of meaning and the construction of authorship, which is always a co-authorship. Finally, we argue that we can (and should) do without the Platonic assumption and thereby create a new way of interpreting, and constructing a critical relation towards, the phenomenon of LLMs like ChatGPT.
Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output). The program enables the person in the room to pass the Turing Test for understanding Chinese but he does not understand a word of Chinese. (Searle 1999, 115)