Opinion: ChatGPT Is Not Intelligent

Note: We at DigiTex believe in the importance of conversation regarding the latest technologies that affect instruction in the digital age. To that end, we offer the following discourse on ChatGPT, a technology which is sure to transform education in profound ways in the years to come. Please note that while DigiTex understands the value of debate, we do not endorse any one position on the subject of ChatGPT or any other educational technology or trend.


by Todd Ellis, Director of Teaching & Learning, Grayson College

We have merely used our new machines and energies to further processes which were begun under the auspices of capitalist and military enterprise: we have not yet utilized them to conquer these forms of enterprise and subdue them to more vital and humane purposes. – Lewis Mumford, Technics and Civilization, 1934

In a recent open letter signed by Stuart Russell, Elon Musk, Steve Wozniak, and over a dozen other insiders of the digital economy, the signatories warn that “AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs.” Ironically, media responses have treated warnings like this, including those from Sam Altman, the CEO of OpenAI, the company that created ChatGPT, as enlightened transparency, when they should have been questioning the morality of developing such tools in the first place, given what the developers admit they knew about them. So now we know that Silicon Valley at least partly agrees with physicist Stephen Hawking, who warned in 2014 that AI poses a threat to the very existence of humanity.

Altman should probably have read and taken to heart Joseph Weizenbaum’s 1976 classic Computer Power and Human Reason before developing a tool with profound risks to society and humanity. In this seminal work Weizenbaum, a leading computer scientist at MIT at the time, argued that computers should not be made to do everything they can potentially do. I wonder what kind of moral vacuum Silicon Valley engineers live in that they can work in an AI lab doing “extensive research,” see the potential “profound risk to humanity” of their tools, and then say, essentially, “Whatever.” There’s some serious compartmentalization going on there, to say the least.

What I want to do in this post is point out a couple of things that I see missing from current discussions of the artificial intelligence represented in ChatGPT and now Google Bard. The risks to society and humanity are the priority problems that we, as higher education professionals, should be discussing about AI. The isolated, hyper-individualistic, compartmental assumptions of Silicon Valley’s nineteenth-century mechanistic paradigm are a major problem. But there are other issues.

ChatGPT is AI engineering, not AI cognitive science. It’s worth noting that AI developers have made almost no progress on AI cognitive science (mirroring neural networks and such; think The Borg) since they began work in the 1960s. That’s a good thing in light of Hawking’s warning. As a piece of engineering, ChatGPT does not represent what the Teaching and Learning field would call intelligence. ChatGPT processes data using probability and statistics; it doesn’t model human neurons or the awareness of the human mind.

By tacitly agreeing to believe it represents intelligence, we are allowing technological fields to define and drive culture, something Western civilization has a long history of doing. If we don’t have broader discussions with our students and colleagues about varieties of intelligence, then we are passively admitting that the logical-mathematical intelligence of algorithms is the only valid form of intelligence worth discussing. We need to push back against the unspoken monopolistic cultural implications of Big Tech. The digital economy is already controlled by monopolies so powerful that scholars, such as Cédric Durand at the Sorbonne, have begun labeling it techno-feudalism. That economic monopoly, especially with the development of AI engineering, now threatens to complete its monopolization of culture and society.
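To see what “processing data using probability and statistics” means in miniature, consider the following toy sketch. This is emphatically not ChatGPT’s actual architecture (which uses large transformer neural networks trained on vast text corpora); it is a bigram counting model over a made-up corpus, included only to illustrate the idea of generating language by statistical next-word prediction rather than by understanding:

```python
from collections import Counter, defaultdict

# A toy bigram model (NOT ChatGPT's actual architecture): it "writes"
# by choosing the word that most frequently followed the current word
# in its training text. The corpus here is invented for illustration.
corpus = "the dog sees the cat and the dog sees the squirrel".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "dog" (follows "the" in 2 of 4 cases)
print(predict_next("dog"))  # "sees"
```

The program produces plausible continuations without any awareness of dogs or squirrels. Scaling this idea up, with artificial neural networks in place of counting tables, changes the fluency of the output, not the kind of process at work.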

Higher education should respond with more nuanced and complex discussions of what intelligence really is. We should control that narrative.

Criticizing technological inventions that impact education does not make me a Luddite, although in the current climate of conferences in the Teaching and Learning field I feel about as isolated as one. Rather, it means I’m practicing the very essence of Teaching and Learning: critical thinking. I’ve attended a few online conferences and seen a few webinars and vlog posts in 2023 addressing the emerging ChatGPT iterations, and so far I’ve only seen the usual “hurried and mindless embrace,” as Neil Postman called it in Technopoly (p. 107). The American trend toward technopoly treats all digital innovations as faits accomplis. The Teaching and Learning field has not generally shown itself to be immune.

All the Teaching and Learning discussions I’ve seen so far skip the “What is it, and what are its unseen implications?” phase and jump straight to the “It’s inevitable, so how do we use it?” phase. While I haven’t done an exhaustive analysis of the field’s response to AI, in general it rarely criticizes the assumptions behind technological innovations in any deep sense. I would like to add a few other questions to the discussion.


Algorithms as Intelligence?

What do we mean by intelligence? Scholars as far back as Hubert Dreyfus in the 1960s and Joseph Weizenbaum in the 1970s have pointed out that the hidden assumption behind artificial intelligence is its computational theory of mind. The computational theory of mind, arising directly from the nineteenth-century mechanistic paradigm, says that the human brain is like a computer and the mind its software. Essentially, when we recommend adopting tools like ChatGPT in our classrooms without adequate critical analysis or discussion of them, we’re promoting the further evolution of a system that treats humans as machines. Even if we eventually decide that ChatGPT is inevitable at some level in our classrooms, we should have nuanced discussions of the differences between machine algorithms and embodied, self-aware consciousness and intelligence.

It is incorrect to say that ChatGPT represents a form of intelligence, artificial or otherwise. Even descriptions in the popular media are contradictory, calling it “intelligence” in one sentence and a large language model in the next. The assumption here is that intelligence is the ability to process vast amounts of information, and that humans are information processors. The mere processing of data could only be equated with intelligence under the assumptions of a computational theory of mind. In its place I would propose Maurice Merleau-Ponty’s humanistic, embodied-consciousness theory of mind. Where the computational theory of mind views the mind as a sort of solipsistic homunculus processing things inside our heads, the embodied-consciousness model looks at human consciousness holistically. “Our body perceives,” Merleau-Ponty notes. We don’t process bits of information; we intuit gestalts. That is, we get immediate impressions and understandings of holistic situations. We also never experience intelligence or awareness except through our bodies. Our bodies can’t be compartmentalized out of the equation.

Furthermore, our consciousness, and therefore our intelligence, is always embedded in and dependent upon an environment. Conscious intelligence is embedded; it is never an abstraction in some ideal and ghostly realm. We experience our consciousness holistically, never as separated from our body or our immediate environment. The embodied-consciousness paradigm transcends the classic Cartesian duality between body and mind. Overcoming that duality has arguably become critical to our global civilization: many philosophers and psychologists have pointed out over the past hundred years that to evolve a just and ecological society we must stop seeing ourselves as isolated from ourselves, from others, and from nature, and begin to realize our interdependence. The long-term vision of Big Tech, on the other hand, relies on viewing humans as passive, individualized computation machines.

Gus: embodied and evolving intelligence, highly empathetic and instinctual, with minimal processing.

ChatGPT obviously doesn’t possess intelligence. Given enough electrical power, it can process data through limited and biased algorithms (even ChatGPT admits it has bias) elaborate enough to give passable though often inadequate answers to questions, producing derivative writing that our students may then pass off as their own. By contrast, my rescue dog Gus is a form of sensitive intelligence. It’s easy to intuit from the look in his eyes and the way the expressions on his face have changed over the past year as he has learned to trust humans again. It’s an immediate impression that no one with a mind, a heart, and a moment of focused attention could miss. Intuitional intelligence is an aspect of the embodied-consciousness theory of mind, and the existence of intuition is easiest to see when considering our relationships with animals. Gus has self-aware, embodied, conscious intelligence, and he’s big on intuition.

Intelligence presupposes consciousness. Therefore, intelligence presupposes both a body and awareness, as opposed to the “meat machines” that our techno-feudal digital economy unconsciously promotes as representing humans.

Even if, for the sake of argument, you were to grant that AI is intelligent, it would still represent only a very limited form of intelligence, as anyone in the Teaching and Learning field would recognize. The algorithms of ChatGPT would represent a form of logical-mathematical intelligence. We must make sure our students are aware of, and value, other forms of intelligence as well: musical, artistic, linguistic, interpersonal, spatial, existential, naturalist, and kinesthetic. We need to take back discussions and definitions of intelligence from Silicon Valley.


Resistance: The Only Thing That Is Never Futile

At our schools we should create a culture in which students don’t feel they have to cheat to compete, and we should have clear policies about using AI tools in coursework.

The Borg from Star Trek. Are you ready for your neural interface? *

Students are already using ChatGPT in assignments at our community colleges. Fortunately, we use Turnitin plagiarism detection here at Grayson College; just last week Turnitin enabled AI detection on its platform and is claiming a 98% success rate. We allow students to see their score in Turnitin immediately, so that it functions more as a learning device and less as a policing tool, although students cannot yet see their AI score. Turnitin plans to make this available to students soon. Some in the Teaching and Learning field have argued that AI detection will be an unwinnable tit-for-tat, with AI platforms attempting to evade detection systems and detection systems evolving in response, and that we should therefore stop resisting AI. I disagree. AI detection has to be an integral part of the culture we are trying to create. Obviously, you can find legitimate uses for ChatGPT in the classroom and in assignments, such as having students analyze its response to an essay prompt. But first we need the ability to detect it, and the understanding that we must. We must also deeply believe in our imperfect humanity, because it’s relatively easy to create a disruptive tool like ChatGPT, while we, in response, are trying to evolve a culture in the face of a well-evolved technopoly.


What would resistance look like?

  1. Become familiar with ChatGPT. Experiment with it yourself. What is it good for? What are its limits? What is its tone or writing style? But be aware of what philosopher Kenneth A. Taylor said about AI: “Once you’ve invented a really cool new hammer—which deep learning very much is—it’s a very natural human tendency to start looking for nails to hammer everywhere.”
  2. Create a learning community that analyzes and discusses ChatGPT and the relationship between pedagogy and technology. “To exist humanly,” according to Paulo Freire, “is to name the world in dialogue, to change it.” We need to create and trust our own crowdsourced, humanly intelligent response to AI. I would love to be part of a community that could help us create a humanistic response to AI. This is where my hope lies.
  3. Create an AI policy and publicize it. Does your school have an AI policy? Feature it prominently on your school website.
  4. Prime students not to cheat by having them sign academic integrity statements and by communicating with them regularly about the harm cheating does to their education. This is not to say that ChatGPT and other AI can’t have a place in education, but the possible benefits of AI are not the first conversation we should be having.
  5. Use plagiarism detection tools with AI detection capabilities, like Turnitin. Copyleaks is advertising better AI detection results than Turnitin, but I haven’t yet heard from users of that platform. In the first week of April, one of our government courses using Turnitin detected two submitted assignments whose writing had been created 100% by ChatGPT. Students are using AI in assignment submissions now.
  6. Learn about critical pedagogy. Some great books I recommend from this field include Hybrid Teaching: Pedagogy, People, Politics (edited by Chris Friend, 2021); On Critical Pedagogy (Henry Giroux, 2011); and, of course, the classic Pedagogy of the Oppressed (Paulo Freire, 1970). A great open resource to read online is An Urgency of Teachers: The Work of Critical Digital Pedagogy by Sean Michael Morris and Jesse Stommel. Critical pedagogy, according to Henry Giroux, highlights “the performative nature of agency as an act of participating in shaping the world in which we live. Critical pedagogy must be seen as a political and moral project and not a technique.” We should respond to the inevitability argument of Ed-tech with a recommitment to our personal and collective agency.

As I was finishing this post, the US Commerce Department announced that it is requesting public comment on how to create accountability measures for AI. That is a good start, and a major shift from a past in which government was usually absent from discussions of technology’s impacts on society. Our technopoly has always created technology first and left culture and society to respond to its implications afterward; nuclear weapons are the clearest example. We can change that. We can resist the idea that any technological innovation is our predestined fate. We need to develop and believe in renewed personal agency and resistance in the face of evolving AI. We need to talk, that most reciprocally human of things. And we need to reclaim the agency to resist.

*“Montreal Comiccon 2016” by Pikawil is licensed under CC BY-SA 2.0.



Freire, Paulo. Pedagogy of the Oppressed. The Continuum International Publishing Group. 1970.

Giroux, Henry. On Critical Pedagogy. Bloomsbury Academic. 2011.

Merleau-Ponty, Maurice. Phenomenology of Perception. Routledge. 2014. Originally published by Editions Gallimard, Paris, 1945.

Morris, Sean Michael & Stommel, Jesse. An Urgency of Teachers: The Work of Critical Digital Pedagogy. Pressbooks. Licensed Creative Commons Attribution-NonCommercial.

Mumford, Lewis. Technics and Civilization. The University of Chicago Press. 1934. p. 265.

Postman, Neil. Technopoly: The Surrender of Culture to Technology. Vintage Books. 1992.

Weizenbaum, Joseph. Computer Power and Human Reason. W. H. Freeman and Company. 1976.