The Binding of Freedom and Intellect
The footprint of human intelligence is inseparable from human freedom. Human intellectual freedom - being undetermined yet appropriate - entails indefinite human direction over AI and its long-term impacts.

(Some of the ideas in this post are elaborated elsewhere, including here, here, here, and here.)
The Meaning of General Intelligence
In their 2007 edited volume, Artificial General Intelligence, Cassio Pennachin and Ben Goertzel introduce the collection with a grievance: the field of AI, founded with the aim of constructing a system that “controls itself autonomously, with its own thoughts, worries, feelings, strengths, weaknesses and predispositions,” strayed from the path. The “demonstrated difficulty of the problem” led most researchers towards “narrow AI” - systems constructed for specialized areas, like chess-playing or self-driving vehicles (1).
Their book sought to revitalize the concept of what used to be called “strong AI” with a new designation: “Artificial General Intelligence.” This AGI should have, they argued, the following attributes:
• the ability to solve general problems in a non-domain-restricted way, in the same sense that a human can;
• most probably, the ability to solve problems in particular domains and particular contexts with particular efficiency;
• the ability to use its more generalized and more specialized intelligence capabilities together, in a unified way;
• the ability to learn from its environment, other intelligent systems, and teachers;
• the ability to become better at solving novel types of problems as it gains experience with them. (7)
Lest there be any confusion, they qualify these attributes, noting that a system in possession of them must be
capable of learning, especially autonomous and incremental learning. The system should be able to interact with its environment and other entities in the environment (which can include teachers and trainers, human or not), and learn from these interactions. It should also be able to build upon its previous experiences, and the skills they have taught it, to learn more complex actions and therefore more complex goals. (8) (emphases added)
Peter Voss, a contributor to the volume, took stock of the AI landscape in 2023 (with co-author Mlađan Jovanović). Their brief article, titled "Why We Don't Have AGI Yet," was dour:
AI’s focus had shifted from having internal intelligence to utilizing external intelligence (the programmer’s intelligence) to solve particular problems (1).
They continue, noting that AGI derived from GPT-based systems is
extremely unlikely given the hard requirements for human-level AGI such as reliability, predictability, and non-toxicity; real-time, life-long learning; and high-level reasoning and metacognition (2).
So much for that.
Autonomy, Agency…
I was nevertheless reminded of this history with DeepMind’s recent announcement that a version of Gemini Deep Think correctly (and verifiably) solved 5 out of 6 International Mathematical Olympiad 2025 problems. It is an enormous achievement even if its success does not portend broader application of equal sophistication. And much to the point, my eyes immediately locked on the following line in DeepMind’s press release, noting that Gemini Deep Think was trained on
a curated corpus of high-quality solutions to mathematics problems, and added some general hints and tips on how to approach IMO problems to its instructions. (emphasis added)
General hints and tips…
To be sure, the immediate thought here is related to capability. The insertion of particular knowledge or other information into the system's instructions is not-nothing. Each prompt massages particular parts of a statistical distribution of tokens. When testing an LLM's capabilities, it is necessary not to unwittingly provide the model with the intellectual resources it needs to solve the problem, such that it can merely approximate the correct answer by drawing, not quite faithfully but rather through re-combination, from its vast knowledge base1 (extended with access to tools for web search).2
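To make the shape of the worry concrete, here is a minimal sketch (toy Python; every token and number below is invented, and real models condition on prompts through learned attention rather than a hand-added bias) of how material placed in the instructions shifts the distribution a model samples from:

```python
import math

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Toy next-token vocabulary and scores; all values are invented.
vocab = ["lemma", "induction", "guess", "give_up"]
base_logits = [1.0, 1.0, 1.0, 1.0]

# "General hints and tips" in the instructions act like a nudge on the
# distribution, shifting probability mass toward proof-style continuations.
hint_bias = [2.0, 2.0, -1.0, -3.0]
hinted_logits = [b + h for b, h in zip(base_logits, hint_bias)]

print(dict(zip(vocab, softmax(base_logits))))    # near-uniform
print(dict(zip(vocab, softmax(hinted_logits))))  # mass concentrates on "lemma"/"induction"
```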
The capability implication is not what caught my eye, though.
What came to mind was, instead, the use of a model’s capability; performance, not competence.
It is odd that computational systems of any kind are directed towards particular tasks, and even odder that they still require this direction even when their performance on a task is human-level or beyond. It is not as though Gemini Deep Think was found, of its own accord, participating in an independently verifiable IMO test; DeepMind researchers instructed it to solve certain problems (with those general hints and tips they generously provided). Nor do other systems, like ChatGPT agent, do anything other than what one asks of them.
These systems are nonetheless sometimes described as “autonomous,” or in possession of “autonomy” or “agency.” These terms are supposed to mean something.
If we were to ask Pennachin and Goertzel, they would tell us that "autonomous" AI describes a system actively engaged with its environment (both living and non-living entities therein), incrementally learning in the process of engagement, and building more and more complex goals as it effectively solves novel problems. LLMs do not meet this standard, consistent with the rinse-repeat cycle of instructing an LLM to perform a task or set of tasks, evaluating the output, and setting it into motion on the next task(s).
Fair enough!
This does seem to be a pretty good characterization of what we mean by human autonomy, hence the appeal. But this is not the only possible take.
Luciano Floridi argues that LLMs represent a form of agency without intelligence, a first-of-its-kind in the history of human technology, quite apart from the kind of agency with intelligence we expect from humans.
More bullishly, Reto Gubelmann argues that LLMs are not autonomous in the sense provided by Kant: they fall on the "mechanism" side of the mechanism-organism distinction, the latter being capable of acting according to its own intents to engage in acts (in this case, speech acts) reflective of the relevant forms of cognitive and moral autonomy.
However, Gubelmann does argue that LLMs engage in an "autonomous" form of training, where the updates to their weights occur without human direction. "Only ex post, once it turned out that a trained model establishes new state of the art (SOTA) performance, do humans start to analyze the model to determine its inner functional organization" (18). Indeed, he argues, LLMs engage in the "autonomous" selection of specific functions for specific (internal) components (3-9).
LLMs remain mechanisms, despite this, because their functioning cannot be relevantly distinguished, in terms of intent, from their malfunctioning. Though they could - he argues - one day overturn the distinction, given that their training and general-purpose applicability indicate movement toward becoming a non-biological organism.
These are fascinating views, though I believe they are quite incomplete if we are interested in characterizing human "autonomy" and evaluating models like LLMs on the basis of this characterization.3
I suspect that Cristiano Cali is closer to my point with his argument that free will is a cognitive capacity that allows humans to be reproductive, and that its absence in AI systems has made them merely productive (towards human ends) (2-3). In particular, human freedom of will is linked to freedom of action (my thoughts direct my actions; they are not given from the outside), a point recently made by Nick Chater.
Freedom, particularly intellectual freedom, is the relevant sort of autonomy. But do humans possess something that could rightly be called “intellectual freedom?” If so, could AI ever replicate it? Or, are we all just meat in the street?4
Language Use and Descartes’ Problem
Accepted wisdom today holds that humans are merely biological organisms, the emphasis on biological indicating that anything comprised of physical processes within a highly interactive network (i.e., the human body, including the brain) is fundamentally mechanical; the behavior, or actions, of these organisms are the results of these internal interactions playing out. In this sense, the human’s actions are no different than their inactions; to act or not to act are each the result, fundamentally, of a massively complex, internal push-and-pull. It is all determined, and being determined, humans are therefore not free to choose (anything).5
In a nutshell: science has not solved every problem, but the universe is material, and we need only give it time before every behavioral determinant is uncovered.
Kevin Mitchell et al. are correct to point out that the free will debate, often occurring against that backdrop, is not so much a debate about whether we have free will, but about whether we can prove that free will is not an illusion; determinism—and a skepticism towards compatibilism—are the starting points of inquiry.
Seventeenth-century Cartesians did not see things this way.6 They did not begin inquiry as though the goal were to prove free will was not an illusion. Instead, the effort to explain human behavior was deliberate, located within a broader theoretical framework. Indeed, Descartes' mechanical philosophy necessitated a distinction between ordinary matter and the mind. The mechanical philosophy held that the world can be explained as though it were a massive machine, accounting for observed phenomena through "purely physical causes"—except, as I. Bernard Cohen pointed out, matters of mind and thought.
The mind was held to be non-mechanical. It could not be explained merely through an accounting of physical processes. Descartes' overriding concern was to show that
if there were machines bearing the image of our bodies, and capable of imitating our actions as far as it is morally possible, there would still remain two most certain tests whereby to know that they were not therefore really men (71-72).
Descartes was out there constructing benchmarks.
One of the tests, sometimes called the "action test," need not concern us here, because it does not hold up to scrutiny. The other test—the one of interest—is typically called the "language test":
Of these the first is that they could never use words or other signs arranged in such a manner as is competent to us in order to declare our thoughts to others: for we may easily conceive a machine to be so constructed that it emits vocables, and even that it emits some correspondent to the action upon it of external objects which cause a change in its organs…but not that it should arrange them variously so as appositely to reply to what is said in its presence, as men of the lowest grade of intellect can do (72) (emphases added).
Descartes here is offering criteria that would reasonably signify the subject (which ‘bears our image’) possesses a mind like our own:
It speaks or otherwise makes signs intelligibly (“competent to us”);
It does this in order to express its thoughts (to “declare” them);
In doing this, it combines and re-combines words in an appropriate fashion given the context (“arrange them variously” and “appositely to reply”);
Finally, its use of language in this fashion is entirely ordinary and not a feature of a general intelligence which varies between humans (as even those of the "lowest grade of intellect" can do).
Descartes also packs into these brief remarks two criteria that are insufficient for possession of a mind:
If it merely outputs words (“emits vocables”);
It merely outputs words in direct contact with an external force (“correspondent to the action upon it”).
The problem Descartes was highlighting is the problem of the ordinary expression of thought through natural language, an ability apparently out of reach for machines.
Thirty-one years later, French philosopher Géraud de Cordemoy wrote a book on human speech extending this notion. In it, he acknowledged that words and thoughts are linked in humans, but held that it is insufficient for the subject under consideration to exhibit "the facilness of pronouncing Words" to conclude "that they had the advantage of being united to Souls" (13-14). For Cordemoy, the mere utterance of words is insufficient for a mind like our own; what matters is how words and other signs are used:
But yet, when I shall see, that those Bodies shall make signes, that shall have no respect at all to the state they are in, nor to their conversation: when I shall see, that those signs shall agree with those which I shall have made to express my thoughts: When I shall see, that they shall give me Idea’s, I had not before, and which shall relate to the thing, I had already in my mind: Lastly, when I shall see a great sequel between their signes and mine, I shall not be reasonable, If I believe not, that they are such, as I am (18-19) (emphases added).
Following Descartes, Cordemoy extends the original “language test” by providing necessary and sufficient criteria for possession of mind:
Their use of words lacks a necessary or otherwise fixed connection with their current state or surroundings (“no respect at all to the state they are in, nor to their conversation”);
Their words correspond to the meanings of the words used by the interlocutor to convey the contents of their mind (“those signs shall agree with those which I shall have made to express my thoughts”);
Their words convey novel ideas which the interlocutor did not previously possess (“give me Idea’s, I had not before”).
Finally, there is a complementarity between the use of words by the subject and the interlocutor (“a great sequel”).
Descartes and Cordemoy thus took ordinary human language use to be non-mechanical and illustrative of human free will (which evidences itself perhaps most strikingly, though not exclusively, through language use). This is the problem of how humans ordinarily convey the contents of their minds to others by communicating new ideas through novel utterances that correspond with their thoughts, done with no apparent fixed relationship to one's inner physiological state or local context.
Infinite Generativity and Turing Computability
It is easy, with the benefit of history, to misinterpret the full scope of Descartes’ problem of ordinary language use. Indeed, in popular histories of science, like Jessica Riskin’s (excellent) The Restless Clock, Descartes’ brief remarks on the matter are described as the claim that
a physical mechanism could never arrange words so as to give meaningful answers to questions. Only a spiritual entity could achieve the limitlessness of interactive language, putting words together in indefinitely many ways (63).
Riskin implicitly addresses only part of what Descartes was noting to be off-limits to a machine: the infinite generativity of human language from its finite components.
Remember: the mechanical philosophy held that observed phenomena could be explained in terms of their physical composition and interactions therein. Any physical object, however, is finite. Thus, no physical object could account for anything that yields infinite generativity.
The problem is that human language is an infinite system; the infinite use of finite means. The phrase, a modification of Wilhelm von Humboldt’s claim that language, to ‘confront an unending and truly boundless domain,’ must “therefore make infinite employment of finite means” (91), was popularized by Noam Chomsky. The point is that human language is truly unbounded; capable of infinite re-combination of words into structured expressions that express meanings, often novel in the individual’s history or in the history of the universe, yet nonetheless expressed with relative ease and understood by others with equal facility. The unbounded capacity to produce form/meaning pairs.7
Fascination with this human ability has a notable intermittency about it throughout modern history before the mid-twentieth century. Every so often, someone would hint at or raise the problem, observing its significance, but finding no full resolution.
Danish linguist Otto Jespersen (another frequent Chomsky citation) wrote critically in 1924 of the then-prevailing wisdom that human language is 'dead text,' lamenting the field's "too exclusive preoccupation with written or printed words…" (17). It was the distinction between 'formulaic' expressions (e.g., "How do you do?")—which are essentially fixed, amenable only to changes in the inflection used to speak them—and "free expressions"—which must be created on-the-fly to "fit the particular situations" (19)—that so intrigued him. The wording that the individual selects in the moment—engaging in this "free combination of existing elements" (21)—to convey the precise details of a new situation nonetheless conforms to a "certain pattern" or "type" without the need for "special grammatical training" (19).
One finds in Jespersen's writing a duality: language is a habitual, complex action by the speaker, the formula used to speak determined by prior situations, yet in new situations the speaker must tailor these habits to express what has not been expressed before. "Grammar thus becomes a part of linguistic psychology or psychological linguistics" (29).8 Jespersen's account of human language is not quite what we would call "internalist" today (being underspecified in this respect), though he notably dances around the question of how individuals 'freely express' their thoughts at all, focusing instead on the fact that they are innovative—selecting from an unbounded class—in daily speech.
This problem of infinite generativity from finite means was reformulated by generative linguists in the mid-twentieth century based on works by Alan Turing, Alonzo Church, and Kurt Gödel, among a few others. The establishment of recursive function theory (now called computability theory) allowed scholars to conceive of the possibility of infinite generativity (human language) from a finite system (the brain). The unboundedness of human language could now be accounted for by postulating a computational system specific to language (the language faculty). Turing computability paved the way for conceiving of an idealized “neurobiological Turing machine.”9
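To illustrate the bare idea in miniature (a toy Python sketch; the grammar below is invented for illustration and makes no claim about English syntax), a finite rule set containing a single recursive rule licenses ever more strings as the depth bound is raised, while the rules themselves never grow:

```python
import itertools

# A finite rule set: five categories, a handful of rules. The second NP
# rule is recursive (an NP may contain a VP, which may contain an NP...).
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"], ["the", "N", "that", "VP"]],  # recursive rule
    "VP": [["V"], ["V", "NP"]],
    "N":  [["cat"], ["dog"]],
    "V":  [["saw"], ["chased"]],
}

def expand(symbol, depth):
    """Yield every terminal string derivable from `symbol` within `depth` expansions."""
    if symbol not in GRAMMAR:      # terminal word: yield it as-is
        yield [symbol]
        return
    if depth == 0:                 # out of budget for further expansion
        return
    for rule in GRAMMAR[symbol]:
        parts = [list(expand(s, depth - 1)) for s in rule]
        for combo in itertools.product(*parts):
            yield [word for part in combo for word in part]

# The rule set is fixed, yet the count of derivable sentences keeps growing
# as the depth bound is lifted: 4, 36, 196, ...
for depth in (3, 4, 5):
    print(depth, sum(1 for _ in expand("S", depth)))
```

Remove the depth bound entirely and the set of derivable strings is infinite; the finite system itself is the source of the unboundedness.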
Reformulating Descartes’ Problem
On returning to Descartes’ problem, things look different. Descartes and the later Cartesians had no knowledge of the possibility of a Turing machine. Thus, no distinction was made between the capacity for infinite generativity from a finite system and the use of this system in arbitrary circumstances. There simply was the problem of ordinary, creative language use.
Yet Turing showed that the Cartesians' inference—from the finitude of a mechanism to the boundedness of its outputs—does not hold. Infinite generativity from finite means can be conceived without reference to a "spiritual entity." A "physical" mechanism can yield unbounded outputs.
This reformulation mid-century entailed new distinctions. One is that “language” can now be conceived as an internal generative procedure rather than spoken or written words, as it is the computational system that structures them. Another is that this language faculty is now one component of linguistic production; that is, the language faculty must interface with other cognitive systems during its use.
Notice what has not been explained by this postulation: positing an internal computational system is tantamount to claiming that humans possess a domain-specific knowledge of language, or competence. Yet the actual use of this knowledge in arbitrary circumstances remains untouched. That is, performance is not explained.
Now we can be clearer about what Descartes' problem was really all about. It was not a problem of constructing an input-output mechanism that could be set into motion to output human-like sentences over an indefinite range, for the Cartesians had no such conception of computability. The problem of ordinary language use subsumed both the infinite generativity from finite means and the uncaused expression of thought through language. The first part was given clarity in the twentieth century. The second part remains where it was.
The Free Exercise of the Human Intellect
What the Cartesians observed in the seventeenth century is today operationalized by generative linguists according to three conditions:
Stimulus-Freedom: There is no identifiable one-to-one relationship between stimulus and utterance; it is not caused, in any meaningful sense of the term, by either external conditions or internal physiological states.
Unboundedness: Ordinary language use selects from an unbounded class of possible structured expressions.
Appropriateness to Circumstance: Despite being stimulus-free and unbounded, language use is routinely appropriate in that it is judged by others who hear (or otherwise interpret) the remarks as fitting the situation, perhaps having made similar remarks themselves in the speaker’s position.
Human beings thus possess, as Chomsky summarizes,
a species-specific capacity, a unique type of intellectual organization which cannot be attributed to peripheral organs or related to general intelligence and which manifests itself in what we may refer to as the “creative aspect” of ordinary language use
…
Thus Descartes maintains that language is available for the free expression of thought or for appropriate response in any new context and is undetermined by any fixed association of utterances to external stimuli or physiological states (identifiable in any noncircular fashion) (60).
Human language use is neither determined (being stimulus-free and unbounded) nor is it random (appropriate). It is not an input-output system. We are thus far from where we began: this is not a more ambitious version of the “general intelligence” defined by Pennachin and Goertzel. Rather, it is qualitatively different; “a unique type of intellectual organization…”
Human beings are capable of voluntarily deploying their intellectual resources through natural language in ways that are detached from the circumstances they find themselves in, yet do so in ways that are appropriate to those circumstances. There is a real experience, sometimes glossed as a "meeting of minds," of individuals converging on their interpretations of natural language expressions without relying on fixed stimuli to do so. Importantly, they can both produce new expressions and find others' novel expressions appropriate over an unbounded range.
Attempts to Refute the Idea
Attempts to refute this notion of a “creative aspect of language use” are bafflingly limited relative to the larger free will debate.10
The most plausible path is to suggest one of two things: human language is not stimulus-free; or the appropriateness of language use is an illusion.
On the first: There are no pre-fixed uses of human language; no set of stimuli that reliably and predictably trigger the use of language.11 A human can use language to plead their innocence; to admit their guilt; to convey useful information; to lie, cheat, and steal; to comfort and console; to tell their partner they love them; to admit they never truly loved them; to reflect on a life well-lived; to lament wasted time; to argue and debate; to tell a friend how they feel; to construct a fictional world; to speak of the goings-on in other galaxies, of lives lived centuries before, of the beginning of time itself (and whether it began at all); to sing and write; to conduct science and philosophy; to speak to a president from the surface of the moon; to muse over what the future may bring…
Human language is stimulus-free. It is free in a way that we can only describe by observing our internal states and the unfixed utterances of others who appear to have minds like our own. To attempt a post-hoc tracing of an utterance back to its original context in search of a cause merely leaves one with post-hoc attribution under the guise of causality, a point Chomsky made in 1959.12 Nor can a deterministic, mechanical conception of internal processes contend with the fact that language use is routinely appropriate to others whose physiological states are not relevant to the speaker. Why do the mechanisms of my internal state, operating purely deterministically (or, at best, randomly), produce utterances which align with your mental state, undergoing its own deterministic operations?
A causal explanation for language use depends on individuals using language—if they choose to use it at all—defeating the causal enterprise from within.
There is no meaningful sense in which any of these uses of human language can be affixed to stimuli; to account for them as signals reliably or exclusively recruited by specific factors in the local context, like the knee’s reflex when tapped. The best one can do is to assign to these uses of language an endless series of “putative” causes, as if the words Neil Armstrong spoke to Nixon were the result of a “Nixon-calling-from-Earth” cause. At this point, one is merely pairing up language uses with the local environment, again falling into the trap of post-hoc attribution.
We somehow specify our intent in our language use, likewise recognizing it in others’, a fact that seems to make the concept of stimulus-control inadequate to the problem.13
Each of these uses is a deployment of intellectual resources because human language regulates form/meaning pairs over an unbounded range. Yet human language could conceivably have been reliant on a fixed set of stimuli with which its use was invariably associated. It happens not to be this way. Human beings impose their intellectual footprint on the world of their own accord. To attempt to reduce language use to communicative purposes is to fall into the trap that so captivated Skinner: that one can project onto this phenomenon a theoretical construct—in this case, a tool for communication—that is so emptied of content by the time one returns to its basic descriptive facts as to be useless for explanation.
On the second: it is tempting to become exasperated with this claim about intellectual freedom and suggest that the problem can be resolved by ridding ourselves of the “appropriateness” condition. This is done, perhaps with reference to our status as mere biological organisms, by deeming it an illusion. Therefore, with only unboundedness and stimulus-freedom remaining, the problem seems to vanish.
Yet deeming appropriateness an illusion merely brings us right back to where we began: if language use is both unbounded and stimulus-free, yet its convergence with the thoughts of others is some kind of illusion, then we have merely re-stated the problem in different terms. Any attempt to explain how this illusion could even exist brings one right back to the original problem.
The Binding of AI to Human Freedom
In contrast, input-output systems are within our range of understanding. A computational device like an LLM receives an input value, performs an operation over that input value to transform it, and then outputs that transformed value. When researchers and engineers say they do not understand how neural networks, including transformers, work, they are not speaking at this level of analysis (see, for example, this recent piece).
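A deliberately crude sketch of that level of analysis (toy Python; a real model's weights are billions of learned parameters rather than a lookup table, and sampling runs over a full vocabulary, but the input-output shape is the same):

```python
import random

def toy_llm(prompt: str, weights: dict[str, list[str]], seed: int = 0) -> str:
    """Schematic stand-in for an LLM: a function of its input, its fixed
    'weights', and a sampling seed. It transforms an input value into an
    output value, and it does nothing until it is handed an input."""
    rng = random.Random(sum(map(ord, prompt)) + seed)  # stable, input-derived seed
    last = prompt.split()[-1]
    candidates = weights.get(last, ["<end>"])
    return prompt + " " + rng.choice(candidates)

# Invented toy "weights" for illustration only.
WEIGHTS = {"solve": ["the", "this"], "the": ["problem"], "this": ["equation"]}

# The system never 'decides' to run; each output exists because we called it.
print(toy_llm("Please solve", WEIGHTS, seed=0))
print(toy_llm("Please solve", WEIGHTS, seed=0))  # identical: same input, same output
```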
The extent to which AI can impose its intellectual footprint on the world hinges in part on the extent to which it can overcome its need for human direction. I do not believe such a development is forthcoming or plausible.
LLMs do as we tell them. If they tell a story, it is because we told them to; if they predict the future, it is because we asked them to; if they conduct science, it is only because we told them to make new discoveries; if an LLM wins gold at the IMO 2025, it is because we told it to solve IMO problems. To the point: they output values because we direct them to through input values.
LLMs lack freedom. They lack choice. LLMs are stimulus-controlled, “behaving” in exactly the way one expects of a device that is “impelled” to act but never “inclined.”
Most remarkably, an LLM is impelled by the human creative aspect of language use, always requiring that humans act as a prime mover for whatever outputs—of whatever sophistication—they yield. Their "intellects," insofar as we use the term, are bounded by context (both internal and external). Their binding to human-given stimuli14 means that they do not "use language" appropriately because they do not "use language" at all, at least not in the sense of freedom from identifiable stimuli. In this way, their language use is functional; they exist in a functional relationship with externally-provided goals and internally-programmed instructions (echoing Gubelmann's argument against current LLM autonomy15).
Fears of runaway AI hinge on computational systems overcoming the determinacy of computation itself. Intelligence, it is thought, is the necessary ingredient, and more of it will yield systems that develop independent motives that see them fully surmount their binding to internal and external stimuli.16 Computation is unlikely to ever yield such abilities. It is more plausible to imagine the scaffolding of the world by intelligent machines—matching or exceeding a wide range of capabilities enjoyed by humans—without the will to use their intellects as we do.
Should we doubt this, we need only remind ourselves of Descartes' substance dualism, where "mind" was distinguished from all other "physical" phenomena in the natural world, belonging to its own domain; the language test was merely a way of detecting it. We (typically) adopt no such dualism today. If a computational system—or any other mechanical system—exceeds human ability measured by performance output (think: calculators, chess software, trains, bulldozers, etc.), we do not (typically) attribute to such systems possession of a particular substance called "mind;" something detectable in the real world through tests, like the Imitation Game or mathematics benchmarks or the distance a train can travel without refueling. They perform, and we attempt to characterize them with terms like "reasoning," "thinking," "intelligence," and so forth, reflecting only our immediate understandings of these attributes as we believe they characterize our own cognitive capacities.17

To construct systems that are "generally intelligent" may be socially transformative, but nothing will have fundamentally changed with respect to the intellectual freedom that apparently only humans exhibit. General AI, in this respect, will share the same fundamental limitation as Narrow AI—the addition of "General" will signify only our own sense of their capabilities in our practical arrangements or theories of human cognition. There is no "second substance" awaiting our discovery.
The future of AI is therefore a human matter, and potentially a bright one, should our efforts be steered appropriately. The trajectory remains an outstanding question. We will not know for some time how this turns out, even with LLMs.
I expect a great deal of disappointment and disillusionment once that time comes. I imagine Descartes’ problem and all the baggage it carries will lead two opposing sides to a mutual conclusion: that humans stubbornly remain in control of their fates and that our freedom is precisely why we can create the beauty we desire with our technologies.
“Knowledge” for lack of a better word; do not @ me for this.
The most impressive outputs I have gotten from LLMs, usually some variant of Gemini, always rely on my prompts giving the model a leg-up. They never hit on the idea I would like them to hit on without my direction. Presented with a subject I feel comfortably knowledgeable on, they are quite capable of approximating in noticeable detail the ideas that I urge them to approximate. Yet they never do anything other than approximate. Even back-and-forths on a given subject will have them—very, very impressively—bounce around ideas that I know are out there because the subject is niche enough for me to be among the participants! (A perk of writing on niche subjects is that you can easily identify derivative work.) Nor do LLMs do what I took for granted as a student: yes, a given assignment (prompt) might be interesting, but maybe that assignment is not as interesting as this other line of thought…
We need not do this, for what it’s worth. But if one wants a sense of where computational systems stand today in relation to the fullest sense of “autonomy” or “autonomous” action, we have exactly one data point for comparison: humans.
I’m often reminded of a professor I had as an undergraduate student—a Vietnam War vet who taught philosophy. He frequently told stories of the war. One such story: somewhere in Vietnam (I never found out where), the men were holed up in a bunker anticipating that the Viet Cong would imminently attack. You can imagine the conditions. One of them, as they were waiting, said: “We’re all just meat in the street.” The line must have stuck with my professor, so much so that he decided to pose it as a question in Intro to Philosophy courses. It’s stuck with me, too.
Put another way: human functioning is no different than human malfunctioning, each resulting from the same internal push-and-pull.
Though some eighteenth-century critics, like La Mettrie, certainly did.
One reason why you might see some cognitive scientists expressing concern about the amount of computing power used by a model to handle novel problems—even successfully—is that it diverges sharply from the on-the-fly, resource-limited facility with which humans handle linguistic novelty. If humans are the benchmark, something's not right there.
Taken against Jespersen and the Cartesians before him, the behaviorist fixation with studying only “observables” in a speaker’s environment can be seen as a regression, having retreated from an adequate description of what occurs in ordinary language use.
As Charles Reiss recently argued in a spicy but sharp piece, Chomsky's argument in Syntactic Structures (1957) that finite state models could not serve as adequate models of the English language has to do with this unboundedness—a finite state machine that cycles through a series of pre-determined states (the classic case being the options on a vending machine) is not unbounded. More than this, though, the genius is in the following observation: given that any human child can learn any human language provided that they are exposed to it during normal development, a finite state model that is not adequate for English cannot be adequate for any human language!
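A minimal sketch of the contrast (toy Python; the vending-machine states and the aⁿbⁿ language below are standard illustrative stand-ins, not Reiss's or Chomsky's own examples):

```python
# A finite-state "vending machine": a fixed, finite set of states and
# transitions. It can loop forever, but it cannot track unbounded
# dependencies, since its entire memory is which state it is in.
VENDING_FSM = {
    ("idle", "coin"): "paid",
    ("paid", "select"): "dispense",
    ("dispense", "take"): "idle",
}

def run(fsm, start, inputs):
    state = start
    for symbol in inputs:
        state = fsm.get((state, symbol))
        if state is None:
            return "rejected"
    return state

print(run(VENDING_FSM, "idle", ["coin", "select", "take"]))  # -> 'idle'

# By contrast, recognizing {a^n b^n : n >= 0} (a toy stand-in for nested
# dependencies in English) requires unbounded counting, which no fixed
# state set can supply.
def a_n_b_n(s: str) -> bool:
    n = len(s)
    return n % 2 == 0 and s == "a" * (n // 2) + "b" * (n // 2)

print(a_n_b_n("aaabbb"))  # True: a counter does this, not a finite state set
```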
This may be because it is largely unknown, in part because Chomsky has sometimes used the term "creative aspect of language use" to mean two different, overlapping things: recursion and voluntary use. Though he has been clear about the distinction since 1966, proponents and critics of generative grammar often take the term to refer only to the former.
Note that this does not include only the act of speaking but also the content of what is said.
“We cannot predict verbal behavior,” Chomsky wrote, “in terms of the stimuli in the speaker’s environment, since we do not know what the current stimuli are until he responds” (32).
Doubting intellectual freedom would merely leave us with a bizarre idea: that humans are bouncing around the world either deterministically or randomly (or both), due to both internal and external stimuli, yet somehow manage to output sentences that are appropriate to given circumstances over extended periods and across an indefinite range of situations. But this is clearly inadequate. Our words are tuned to the circumstance though not caused by the circumstance.
A separate model or other type of software could serve as the proximate stimulus, too, but the root is always human.
Liam Tiernaċ Ó Beagáin is working on the matter of Kantian autonomy and its relevance to the creative aspect of language use (and I eagerly await the work).
Of course, it is rarely if ever put in these terms, but that doesn’t matter here.
In 1950, Turing wrote an essay structured in an almost deliberately confusing way, setting aside the question "Can machines think?" for the more tractable question, "Can a machine pass the Imitation Game?" (only to raise the problem of thinking machines again afterwards!). The move was strategically wise on Turing's part, reflecting a similar lowering of expectations for the field of intelligent machinery. The goal was not to explain human thought or intelligence, but merely to become comfortable with the idea of intelligent machinery so that we may begin building what we can call "intelligent machines."
But this has by now been misinterpreted as either (a) a reason to never determine what we mean by these terms, while nevertheless feeling that their attribution to systems today signifies a great leap (despite their function as a mere terminological choice); or (b) a reason to assume that attributing a term like "intelligence" to machines tells us anything whatsoever about "human intelligence." See here for more.