6 Conclusions
Because AI coding assistants often feel so human in their responses and completions, it is enormously tempting to anthropomorphize them and to imagine that they hold a “mental model” of the computer programs and regexen they write (whether those are correct or erroneous).
This belief is, of course, completely wrong. It is wrong not merely in the sense that large language models are built from silicon and linear algebra rather than from ganglia and axons. The LLMs underlying today’s AI coding assistants are very specifically not “knowledge engines” (in some contexts called “expert systems”). A different kind of computer system does exist, one that tries to represent taxonomies, ontologies, inference rules, and other elements more closely analogous to “thinking about the problem.” Such models are largely creatures of the 2000s rather than of the 2020s, although they may yet return to prominence. As of the end of 2022, however, AI coding assistants are simply not that other kind of model.