My position on generative AI
I preface this post by stressing that I am myself an AI researcher: I obtained my PhD doing AI research and I continue to do such research today. I concluded my PhD manuscript with an epilogue where I reflected on the state of AI as a scientific discipline, and I stand by most of what I wrote back in 2023. In particular, on page 225 I said:
An expert is supposed to be more than simply someone who knows a lot about a given subject. An expert must also know how the subject is practiced by other experts, and must be able to critically examine this process. Experts are not glorified parrots; they are supposed to have a thorough knowledge of their field of study, of course, but they must also have their own well-informed opinions on how to improve the field. We are not passive bystanders but active participants in this culture of knowledge production who collectively shape the future direction of the field.
Therefore, given the current state of my chosen field of research, I feel compelled to write this post more out of a sense of professional obligation than anything else, because frankly I don’t like to spend time on things I hate. I would much rather write about things I actually enjoy and about which we can have interesting conversations, but circumstances being what they are I feel like I have no other choice.
The thoughts I put into words here have been circulating in some form or another in my mind for the past few years. What finally convinced me to write this piece was the publicity stunt by a colleague of mine, professor Ruben Verborgh, who is advocating for allowing students to use generative AI on essentially all university exams. He supports this position chiefly by the following arguments:
-
ChatGPT can’t pass his exam, so it doesn’t really matter whether students use it or not.
-
Generative AI is going to be everywhere soon anyway and students ought to be prepared for this.
-
If AI can solve your exam, then your exam is too easy.
As I will elaborate below, I don’t find any of these arguments compelling. However, even if they were, there are still plenty of other reasons to remain highly critical of the widespread use of generative AI and especially its incorporation into our education system. Unfortunately, Ruben’s position has become the dominant one even among many of my peers, which is why I finally felt compelled to write this piece. I will start with an overview of reasons why I find the current popularity of generative AI troubling rather than promising. I then examine Ruben’s specific case in light of these reasons, and end with some general conclusions that will hopefully stimulate more productive discussion on this topic.
Against generative AI
In this section, I outline the major reasons why I believe the current generative AI hype is unsustainable and misleading, and why we should not blindly accept the intrusion of this technology into every facet of our lives. I have attempted to support my case with peer-reviewed scientific studies from respectable venues whenever possible, but some arguments inevitably rely on personal opinion. I don’t believe these opinions to be at all controversial, however – at least not in principle – and I don’t think I could ever see eye to eye with anyone who disagrees with them, in much the same way I could never understand someone who opposes universal healthcare. The ideological distance between us is simply too great.
That said, in anticipation of certain criticisms I always get in these kinds of discussions, I will state a few preliminary caveats here for the record:
-
Generative AI is not universally bad. It obviously has certain benefits, particularly in specialized applications. Generative AI is, after all, merely a category of machine learning algorithms that can generate data. This is not problematic in and of itself, and generative AI has been used for many useful applications. However, whenever I mention “generative AI” in this post, obviously I’m referring to chatbots, coding assistants and other generators that seek to automate human creativity, since that’s where the controversy lies. The question we must ask, then, is not whether generative AI has any benefits, but whether these benefits outweigh the costs. As I will attempt to show here, any remotely serious cost-benefit analysis will conclude that, at least in its present form, generative AI in the sense intended here is a massively wasteful technology that costs us more than it benefits.
-
This is not an attack on AI as a whole. Contrary to popular belief, the field of AI is actually much more diverse than just chatbots and coding agents and the like. There are also many useful applications of AI, and in some of these the incorporation of generative models makes sense. I am not arguing against progress or science; I am arguing against very specific ways of using a certain technology, as well as the way this technology is sold to the public by large tech companies and certain academic shills. Indeed, as I demonstrate in this post through citations, there is much academic debate and research on the negative impact of generative AI, and many researchers evidently agree with much of what I say here. These issues are taken seriously by many scientists in this field. The problem is that these concerns are typically not reported in mainstream media outlets, which instead prefer to focus on clickbait and billionaire hagiography, so that the public remains largely unaware of these debates.
-
Some of the problems I mention can be solved. Obviously I’m aware that certain problems, such as the environmental impact, may be solvable in time. That doesn’t change the fact that they are still very real problems right now, and that the industry seems aggressively uninterested in solving these issues while still plowing full speed ahead. Moreover, as I will argue later on, even if we imagine a utopian scenario where all these contingent issues are resolved, there are still very good reasons to limit adoption of this technology. There are certain fundamental issues with the very concept of generative AI that no amount of technological advancement can resolve, because they pertain to the goals of the technology rather than its technical maturity.
-
There are problems I don’t discuss here. These include, among other things, issues of bias and fairness, and how generative AI encodes harmful stereotypes and perpetuates them in its applications. Well-known examples include racism and misogyny in decisions about people, such as automated resume screening, or in the generation of images (Hofman et al. 2024; AlDahoul et al. 2025; Bender et al. 2021). I don’t discuss these problems here because they seem to be taken much more seriously in both academia as well as industry than any of the other issues I mention below.
With these caveats out of the way, I present a list of the main reasons why I find generative AI and the attitudes of certain colleagues such as Ruben Verborgh problematic.
The environmental impact is apocalyptic
The current approach of growing the Gen-AI sector to satisfy every imaginable application considers neither what benefits have actually been realized in practice nor the extensive societal costs. We call for the sustainable development of Gen-AI and propose a comparative benefit-cost evaluation framework as a potential approach toward responsible development in Gen-AI.
The most straightforwardly quantifiable argument against the adoption of generative AI is its disastrous environmental impact. Many studies confirm this, and the scientific record goes back to well before the rise of ChatGPT and its siblings. Strubell et al. (2019) found, for instance, that training a single Transformer model – the type of model on which most chatbots are based – on GPU emits the equivalent of 35 592 kg of CO2. By contrast, the average human emits about 5 000 kg CO2-eq every year, meaning the training of a single Transformer model emits as much as seven humans do over the span of one year. It is important to stress that this figure relates to the training of one model. In practice, researchers and companies developing new state-of-the-art models will have to train many different architectures and try out many variants to see which one outperforms the rest by a sufficiently large margin. Strubell et al. estimate that, in this way, the total emissions can be as much as 284 000 kg CO2-eq, which is the same amount 56 humans emit over the course of a year, or five cars over their entire lifespan, or approximately 315 flights between New York and San Francisco. This study was done in 2019, when models were much smaller compared to the ones we have in 2026, so the situation today is actually even worse: GPT-2, which was published in 2019, had 1.5 billion parameters. GPT-3, released in 2020, had 175 billion, over a hundred times more than GPT-2. For GPT-4, which came out in 2023, OpenAI didn’t even officially release model sizes anymore, but estimates floating around the internet put it somewhere around 1.8 trillion. The models we’re using now are over a thousand times bigger than the ones we had in 2019, when their carbon footprint was already massive. Bender et al. (2021) make a similar analysis but also report the financial costs of marginal performance gains, noting that a slight increase in accuracy of machine translation applications can increase costs up to $150 000 in addition to carbon emissions.1
A more recent study finds that model inference also comes with a significant cost. For text generation tasks, a single query can generate between 2 and 20 grams of CO2-eq, with a median of 5 grams. Given that, as of October 2025, OpenAI processes about 2.5 billion queries per day, the total CO2-eq emissions can lie anywhere between 5 and 50 million kg, with a median of 12.5 million kg, every single day. On a yearly basis, this translates to 1.8 billion kg CO2-eq by the most conservative estimate. This is as much as 360 000 people emit over the course of one year. Every year, ChatGPT alone emits as much CO2-eq as a country roughly the size of Iceland (as of the 2026 census), and that’s based on current usage; this could very well still increase substantially. And things are looking to get much worse than that: Bashir et al. (2024) find that the projected electricity demand for data centers exceeds the demand of over 70% of countries worldwide, which is utterly insane. No technology should be this resource-heavy except perhaps if it is crucial for sustaining life, which generative AI categorically is not.
It is telling that OpenAI does not seem to have any official sustainability reports that quantify their emissions, nor water and electricity consumption. The closest to an official document I have been able to find on this topic is the Environmental Impact of AI page on the OpenAI Academy, which barely has any meaningful content. There also exist studies about OpenAI, but these often lack detail since they are not performed by OpenAI themselves. Therefore, to get a picture of just how bad for the environment generative AI is, I resorted to using Meta’s sustainability reports. Meta does a lot of AI research and they used to publish extremely detailed reports on their emissions and resource consumption. Note that these numbers only go up to 2023, since Meta has since stopped publishing detailed metrics in their sustainability reports for some reason 🧐.
| Metric | 2016 | 2017 | 2018 | 2019 | 2020 | 2021 | 2022 | 2023 |
|---|---|---|---|---|---|---|---|---|
| GHG | 0.71 | 1.096 | 1.008 | 4.33 | 4.984 | 5.740 | 8.453 | 7.443 |
| Electricity | 1.83 | 2.462 | 3.427 | 5.140 | 7.170 | 9.420 | 11.508 | 15.325 |
| Water | N/A | 0.838 | 1.279 | 1.971 | 2.202 | 2.569 | 2.638 | 3.078 |
Table 1. Meta’s market-based greenhouse gas (GHG) emissions (in million tonnes), electricity use (in TWh) and water consumption (in millions of cubic meters). Sources: Meta 2024 Sustainability Report and Facebook 2020 Sustainability Report.
Table 1 shows the values I extracted from their reports. For GHG, the values above are “market-based,” which means that these numbers are essentially under-estimates, and I use them here in order to be as favourable as possible (perhaps too favourable) towards Meta: it means Meta has subtracted an amount of emissions based on what they have “restored” through various environmentally-friendly programs, such as the planting of trees or carbon capture, to “compensate” for their emissions. The validity of this practice is a separate matter entirely which I won’t get into here.
We can see that Meta’s GHG emissions and resource consumption have significantly increased since 2016. Electricity and water have only increased over the years: electricity by a factor of about 8, and water by a factor of 4 based on 2017 numbers. GHG sometimes dipped a little, but as of 2023 Meta emits an order of magnitude more GHG compared to 2016. There is also a notable jump in 2019 when their emissions went up by 329% compared to 2018. The year 2019 might not mean much to most people, but AI experts know that 2019 was the year when LLM research really got off the ground, with several breakthroughs such as OpenAI’s GPT-2 and Google’s T5. It’s highly likely the increases we observe around 2019 are largely due to investments in AI infrastructure.
To put these values into perspective:
-
The United States has the world’s largest per capita CO2 emissions at 14.2 tonnes per person as of 2025. In 2023, Meta emitted the CO2-equivalent of over half a million Americans.
-
The average per capita electricity consumption in the United States in 2023 was 12.44 MWh. In 2023, Meta consumed as much electricity as a little over one million Americans did.
-
In 2022, the United States withdrew 1 300.87 cubic meters of water on average per inhabitant. This means Meta consumed the equivalent of 2 366 Americans in water in the year 2023.
From this we can conclude that Meta alone already consumes enough resources to count as a small nation-state, and that’s just one company. There are many others – OpenAI, Anthropic, Microsoft, Amazon – who probably far surpass Meta in this regard. It follows that the widespread adoption of generative AI in its current form is incompatible with any reasonable climate policy. Moreover, aside from the environmental concerns, water and electricity are scarce resources. The hugely increased demands these data centers put on our finite resources naturally causes prices to rise. A 2025 study by Ceres warned of water scarcity due to increased demand by data centers. Note that many regions in the world were already suffering from water stress due to climate change; data centers are now adding lots of fuel to this fire.
Personally, I don’t think all this is worth it just to create a fancy chatbot. In fact, I will go further still: it is my honest opinion that anyone who uses this incredibly polluting technology for frivolous things such as writing a bit of boilerplate code or creating a travel itinerary should be ashamed of themselves.2 I also know from friends in the tech industry that many companies are now actually requiring their developers to use coding agents, and their token burn is actively monitored to ensure it is sufficiently high. This is totally unheard of: can you come up with literally any other resource that costs the company money to use, and your manager wants you to use more of it? This is obviously a scam: AI companies are bleeding money, so they’re trying to get everyone dependent on this technology as much as possible for when they inevitably have to jack up prices.
It’s also clearly disastrous when the environmental impact is taken into account. This is absolutely not an exaggeration: the IPCC is very clear that we require rapid reductions in emissions by 2030 to avoid worst-case climate change scenarios. While we may debate the precise extent of these reductions, it is not up for debate that AI is a large contributing factor to anthropogenic climate change. It also should not be controversial to find it unacceptable to sacrifice the future of our planet for a digital assistant that might make some people a little more productive.3 The tech giants who are at the forefront of this generative AI technology are among the largest polluters in the world. Their emissions have increased substantially in the past years – even if we buy all of their carbon capture propaganda – when this is in fact a crucial time to be reducing emissions, and much of this increase is highly likely caused by the cancerous growth of a technology we don’t even really need.4
It doesn’t actually make you more productive
The promise of AI – the promise AI companies make to investors – is that there will be AIs that can do your job, and when your boss fires you and replaces you with AI, he will keep half of your salary for himself, and give the other half to the AI company.
The basic function of generative AI is to generate high-quality output in virtually any modality (audio, video, images, text); you just have to ask. Hence it may naively be expected that this should lead to major productivity boosts for anyone involved in creative arts of any kind, from programmers to writers and painters, since you can use AI to gain inspiration or just fully create entire works from scratch in literal seconds where previously this could have taken hours or days. In theory this is true: generative AI can indeed create a lot of output very quickly with minimal prompting, and a lot of this output may even be good enough for your purposes. In practice, however, we live in capitalism, where companies strive to maximize profit above all else. Consequently, as we have already seen, the productivity gains that may be had through the use of AI are offset by the inevitable layoffs that follow every such new technology.
Reputable outlets constantly hype up the alleged productivity gains to be had from AI, such as an MIT Sloan article claiming “it can improve a worker’s performance by nearly 40%.” The capitalist calculus then dictates that, if worker productivity can be increased by 40%, you only need \( \frac{1}{1.4} = 5/7 \approx 71\% \) of your original workforce to retain the same level of production at a fraction of the cost, since generative AI tools are vastly cheaper than human labor. From a myopic C-suite point of view, AI thus provides an immediate and irresistible revenue boost in the form of massive downsizing, firing almost 30% of the workforce. This wave of layoffs is currently being put into practice by many companies around the world. It is a straightforward consequence of enshittification: the big tech companies no longer really have room to grow since most of them utterly dominate whatever market they carved out for themselves. They can no longer really innovate, so all that’s left to increase profit is to reduce costs, typically by making everything worse and having users pay to get back what was taken from them.
Of course, anyone who has actually done any work in their lives knows things are not that simple. Statements like “this tool increases worker productivity by X%” don’t actually make any real sense; they are vast oversimplifications, marketing slogans meant to sell a product to credulous investors. For a worker, the reality is that thanks to chatbots their manager has now fired half the team and expects the rest to compensate for it. To many people (myself included), the idea of losing half my colleagues and getting told Copilot will pick up the slack is nothing short of terrifying, not because of some Luddite technophobia but because human colleagues are much more than just machines that produce output for a company. You’d think that the current working people of the world, all of whom should be old enough to remember the COVID lockdowns, would be able to appreciate this: if AI replaces all or most of your colleagues, then we’ve basically created a world that simulates permanent lockdown, when all your colleagues were just faces on a Teams call, and all communication went by e-mail or chat. No more casual smalltalk, no more lunches together, no more coffee breaks, no more humanity; just output. All you ever get to talk to is a screen with a statistical model behind it that doesn’t even really sound human and, by virtue of its disembodied existence, could never relate to you the way actual human beings can. Rather than improving productivity, this sounds like a perfect recipe for inducing burnout.
It can be argued that this particular point is not a problem of generative AI itself but rather the society around it. While this is true, it’s also irrelevant, since we live in this society whether we like it or not. It is willfully ignorant to dismiss the social context in which a technology will be deployed and the associated harms it will cause.
The potential for abuse is unacceptably large
I mean “abuse” here in two distinct senses. First, AI agents themselves can be “hacked” in a way. This is done via so-called prompt injection attacks: specially crafted prompts that cause the agent to perform undesirable actions. A few interesting examples of such attacks:
-
An attacker can achieve remote code execution by opening malicious pull requests with hidden prompts on public GitHub repositories. These pull requests are crafted so that, if they are processed using an agentic AI system, they will execute specific code determined by the attacker, such as opening a reverse shell. This was demonstrated by researchers at NVIDIA.
-
AI coding assistants can hallucinate non-existent software packages. Attackers can take advantage of these hallucinations using a technique called slopsquatting: they register packages with names likely to be hallucinated, hence achieving remote code execution on unlucky people using a coding agent. Spracklen et al. (2025) conducted a large-scale study on the frequency of such hallucinations and found that the average percentage of hallucinated packages is between 5% and 22%, depending on the model, making this a serious problem.
Although I have no mathematical proof for this,5 it is my strong belief that prompt injection attacks are fundamentally unsolvable. My main reason for believing this is because an LLM is just a statistical model that predicts words; it has no built-in verification mechanism to sanitize or authenticate user input. As such, it is up to whoever deploys the LLM to ensure that its inputs are sanitized and any potential code it generates is handled responsibly. However, I do not believe user input can be sanitized from all possible prompt injection attacks, and I challenge anyone who claims otherwise to construct a universal sanitizer. On top of that, treating LLM-generated code as unsafe defeats the purpose of using an LLM in the first place: if you have to strenuously review all LLM-generated code for potential backdoors anyway, then what’s the point? You could have just written the code yourself instead of playing Russian roulette with an LLM. Humans may be fallible, but no human programmer will ever accidentally write code by hand that opens a reverse shell to some adversary’s server. Coding agents strictly enlarge the attack surface.
AI coding assistants can be dangerous even without malicious intent, however. They can unintentionally delete your entire business, and they are prone to generating insecure code (Yan et al. 2025). This makes sense given that these models are essentially fancy autocomplete algorithms that reproduce the data they are trained on. Since the vast majority of code on the internet contains security vulnerabilities, statistical models trained on such code will inevitably reproduce these issues. Of course, it is true that human beings make mistakes and write insecure code all the time as well, but I find such arguments unconvincing. For one, humans can easily be taught to write secure code once an exploit is known, and many companies in fact make their living by providing such education. While an AI coding assistant can theoretically learn this as well, this process is much more arduous since it involves the removal of insecure code from the training data followed by many iterations of reinforcement learning, and even then we don’t really have any guarantees. An LLM requires resources equivalent to a small nation state over the span of a year in order to learn something the average human expert can learn in a few hours, fueled by nothing more than coffee and stale sandwiches. Humans vastly outclass even our best AI systems when it comes to the ratio of learning speed over resource requirements.
Moreover, isn’t the point of AI that it should be better than humans? If your fancy AI agent makes the same mistakes humans make, then what’s the point? Automation is supposed to reduce mistakes because it allows us to benefit from the infinite patience of machines. You basically just automated a person out of a job for no real improvement in quality. Naturally, the capitalist response to this is that it cuts costs, but that is merely rhetoric: sure, it probably saved an employer a bunch of money, but it also cost somebody their job and hence potentially their livelihood. All cost-benefit analyses are positive if you only focus on the benefits! Furthermore, whatever skeleton crew of human programmers remains will now be forced to review all the code the AI has written for potential vulnerabilities, and when things inevitably go wrong it’s still the humans who will be blamed. Again, this seems like a fast track to burnout rather than an actual productivity improvement.
The second way AI agents can be abused is in automating the process of hacking itself. To this end, a few tools such as HackerGPT and WormGPT are already out in the wild, lowering the technical knowledge required for hacking and writing of malware. Other frameworks such as OpenCLAW don’t necessarily do any hacking, but allow users to create agents that can participate in online harassment campaigns. Imagining the sheer scale of the potential abuse this system can lead to gives me literal shivers. I can easily imagine creating a little script to continuously watch for new vulnerability reports and then using a combination of tools like Shodan, ZoomEye, HackerGPT and WormGPT to automatically find and exploit vulnerable devices. No doubt this is already being put into practice. We may well reach a point where there will be no more delay between the initial discovery of a 0-day and its widespread exploitation, rendering it almost impossible to properly secure any device; by the time patches are out, it will be too late and almost every vulnerable internet-connected device will already have been hacked.
To summarize, I find that pushing for the rollout of increasingly sophisticated AI agents into ever more facets of daily life puts us all at risk. These agents are too gullible to be trusted with any remotely sensitive task, and by making them more capable we are simply creating a stronger arsenal for malicious actors to automate cybercrime. Based on my experience researching adversarial attacks on ML models, I’m not convinced there are effective ways to resolve these issues without undermining what makes generative AI useful in the first place.
There is no business model
This argument has been made at length by others, such as Ed Zitron, so I won’t spend too much time on it here. The long and short of it is that AI companies are lying about how profitable their products actually are, and AI models do not scale in a manner compatible with current economics. Basically, the Western model when it comes to generative AI is closed-source and subscription-based: the company (e.g. OpenAI) keeps the model proprietary and charges you money for the privilege of querying it. Assuming an average fixed cost per query, you could set subscription prices so that you have a guaranteed profit once your platform reaches a minimum number of paid users. However, contrary to most prior software platforms in the tech industry, having more users querying your AI models causes costs to rise much faster than the income from any reasonably-priced subscription scheme. This is due to a number of factors:
-
Different queries require different computational resources. There is no fixed cost per query, because the model might need to “think” three seconds for one and three minutes for another. It’s very hard to predict exactly how much a given query will cost in terms of compute, which makes it hard to accurately price subscriptions.
-
More queries means more GPUs. This is the most difficult barrier: these models are compute-intensive and require expensive GPUs to run efficiently. When the number of users increases, eventually the number of GPUs will have to rise as well, which is why all these AI companies are scrambling to build gigantic data centers. These are enormous high-risk investments that may very well bankrupt certain companies. Moreover, as AI companies increase demand for GPUs, hardware prices will increase proportionally, which anyone with a gaming PC has already experienced.
As some companies have found out the hard way, these factors combined mean that you cannot run closed-source AI models on a monthly subscription model. The only way to make a profit on these things is to charge based on token usage, which is something a lot of users will never accept since it makes costs incredibly unpredictable and hard to factor into limited budgets. This is compounded by the fact that AI agents essentially allow no refunds: if you’re not happy with the response you got from the chatbot, you can’t get that token burn refunded; it simply doesn’t exist in this business model. Your only option is to burn more tokens on more queries, hoping the model will get it right next time. Human employees, on the other hand, often operate on a fixed monthly salary regardless of how (in)efficient they were at their job. If a software engineer insisted on getting paid for every line of code they wrote on a given day and then proceeded to take advantage of this by writing a bunch of useless code, they would likely be fired very quickly. When AI companies do the same thing through predatory pricing schemes, this is instead lauded as innovation.
It follows that the Western business model of closed-source subscription-based AI services cannot survive in the long term. All companies involved are bleeding money and the economics simply don’t make sense on general principle. If these technologies are to survive at all, it will be through open-source models, of which there are already quite a few, which users will have to run locally on their own devices. Open-source models still suffer from all the other problems mentioned on this page, however. This includes the environmental impact, which may be lower during inference since they run on more constrained hardware, but the models still have to be trained.
It amputates the hand that feeds it
There is a fundamental fallacy in the core operating principles of generative AI, and it is so obvious and so damaging that I’m surprised to see so few people talking about it. The closest I have seen to mainstream coverage of this problem comes in the form of model collapse, the phenomenon where AI models eventually produce gibberish when recursively trained on generated data. The idea is simple: AI models are trained on data scraped from the internet, but the internet is now itself full of AI-generated data, so we’re training AI models increasingly on synthetic data that was generated by some other model. This exacerbates the inaccuracies and biases of the previous models, like a game of telephone on a superhuman scale, until the model produces only nonsense.
My problem, though strongly related to model collapse, is slightly more fundamental. Specifically, I wonder: who exactly is going to keep providing training data when we’re all supposed to be replaced by AI? They claim AI can write code, but have we not constantly been inventing new programming languages and frameworks, to serve our evolving needs and insights? They claim AI can write novels, but have we always been writing the same stories and stuck to the same genres and tired old tropes?
The fundamental issue is that AI models are statistical models, and as such they cannot do any better than learning to mimick their training distribution. This isn’t some sort of obscure philosophical problem I happen to have with generative AI; it’s a mathematical fact that’s been known since at least the 1960s and that holds for all statistical models trained on finite data sets. The training distribution of generative AI models encompasses, at best, the set of all creative output of mankind up to a specific point in time, namely the point at which data collection stopped and training began. Of course this process can be iterated and the models can be retrained to account for more recent data. However, as we know from the model collapse problem, we cannot keep training AI models on the output of other AI models; this will quickly lead to a breakdown. This means that, if the models are to evolve, they must be trained on new data of human origin; they cannot be meaningfully improved without novel creative output by human beings (Dohmatob et al. 2024).
In fact, the model collapse phenomenon presents an interesting conundrum to the generative AI fanbase: if these models are so good, why do they collapse when trained on their own output, but not when trained on human creative work? Statistical models may interpolate the training distribution, but only humans can shift it.
A chatbot isn’t going to invent a new, useful programming language, nor will it write groundbreaking new novels or storylines for award-winning movies or shows, because everything it can produce is by its very nature derivative: it is literally sampled from the same distribution as the one on which it was trained. We have much to teach AI models, but they have nothing to teach us. They can assist us with tasks that we already know how to do, but they will never be able to figure out how to do something no human has done before. The technical term we use in the field of AI for such behavior is out-of-distribution generalization: an AI agent cannot execute any tasks it hasn’t been taught how to do. Or, to put it in more accurate scientific terms: out-of-distribution generalization is a major open problem in machine learning, and no effective methods to achieve this exist as of yet; I’m inclined to think none ever will, at least with the current paradigm. All generative AI models are still trained according to this classical machine learning paradigm of empirical risk minimization, which means they cannot generalize outside their training distribution. Unless something fundamentally changes about the way we create these models, they will never display real creativity, and anything they produce will be shallow imitations of real art.
To be sure: sometimes a shallow imitation is good enough. In software engineering, for instance, the code we write rarely needs to be groundbreakingly original. Similarly, e-mails, presentations, technical documents or project proposals don’t need to be literary masterpieces either. Again, I’m not claiming these models have no possible uses; I’m simply trying to make the case that their uses are vastly more limited than what the shills would have us believe. To claim that there will be no more writers or painters or even programmers due to AI is to simply misunderstand how AI currently works. AI cannot innovate, because it can only parrot its training distribution. It therefore cannot cope with a changing world unless we train it on those changes, but this involves gathering more human-generated data. The whole enterprise starts to seem quite absurd when put like this: if we replace all human creativity with AI, then we will essentially be freezing all technology and progress to the level of the training data cut-off date.6 What are we going to keep training AI on when we’ve so devalued human creativity that the well has run dry?
A useful metaphor for this problem is the following. Imagine if we could clone a human being exactly, down to the smallest details of their physical body as well as their deepest memories. Would we then use this technology to keep highly-esteemed writers of the past – such as Ursula Le Guin, Ernest Hemingway or Shakespeare – alive indefinitely, and never train new generations of writers? Would we set a “cut-off date,” e.g. 1980, past which we deem it no longer necessary to train new writers, instead relying on clones of the old guard to publish new works? This is clearly absurd, yet it’s precisely what we’re doing with AI right now: we’re essentially assuming that human beings, as of roughly 2022, have no novel ideas or insights worth considering, and so we can simply snapshot human knowledge up to that cut-off date and have a statistical model (the “clone”) handle all creative production for us.
Here’s my prediction for the future, based on the concept of ghost work: tech companies will eventually realize, probably quite soon, that we cannot produce real innovation using AI alone. So what will end up happening is the same thing we’ve seen happen with older AI systems when their real limitations conflicted with the delusional expectations of shareholders: we’ll just start using humans again, but keep it a secret. We’ll see new jobs with vague descriptions like “AI engineer” or “data specialist” or whatever. In reality, these jobs will consist of nothing more than providing training data for an AI system to learn a specific skill. The pay will be dogshit and the hours will be grueling. Then, once the model can do a good enough job, you’ll be made redundant again. Very likely these sorts of jobs will have very tenuous temporary contracts and proceed largely through dehumanizing platforms such as Mechanical Turk. The human labor underlying these systems will be made invisible to consumers, and we will be nothing more than a resource to be exploited by an endlessly hungry machine.
As an aside: there do exist (hypothetical) AI systems that can improve themselves, such as the Gödel machine. However, such systems have not yet actually been built despite the underlying theory being known for over two decades. Moreover, the Gödel machine requires a strict mathematical axiomatization of its knowledge and goals, something we haven’t even come close to achieving with generative AI. I remain unconvinced that recursive self-improvement is possible with the current empirical risk minimization paradigm, and the phenomenon of model collapse seems to confirm this.
It automates the wrong thing
Right now, we need writers who know the difference between production of a market commodity and the practice of an art. Developing written material to suit sales strategies in order to maximise corporate profit and advertising revenue is not the same thing as responsible book publishing or authorship.
Perhaps the most fundamental problem with generative AI, the one that cannot be solved regardless of how advanced the technology becomes, is that it automates entirely the wrong thing. The promise of automation has historically always been to alleviate arduous manual labor so that we humans can survive more comfortably. To that end, we have created powerful machines that can perform this labor in our stead, sparing us the effort and reducing associated health risks. This has allowed us more free time to do the things humans actually enjoy doing, such as creating art. Generative AI completely breaks that promise by automating precisely those tasks we enjoy doing – engaging in creative expression – leaving us with only the drudgery of manual labor.
If AI is so useful, then where are the robot farmers to tend to crops? Where are the AI construction workers, the robot mine workers and firefighters, welders and treetrimmers? Nowhere. We have robots that can fold laundry and robots that load trucks in warehouses, but that’s about it. The Congolese miners extracting conflict minerals for your smartphone won’t be replaced by robots for some time to come, because their labor is much cheaper than a complicated robot that can work autonomously in a mine. Under capitalism, we will only ever automate that which is cheap to automate rather than that which is expensive to automate, regardless of utility. This is why they want to get rid of writers and programmers but nobody’s talking about the mine workers without whom those precious GPUs wouldn’t even exist.
I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes.
In the generative AI “utopia” promised by fanatics such as Sam Altman and Dario Amodei, humans no longer produce any original works of art; instead, at best, we serve merely as supervisors of AI agents that produce such work for us. We no longer have to think, because the AI will answer all our questions and will solve all our problems. We no longer have to learn anything, because the AI has learned everything for us. We no longer have to talk to other people, because the AI will always patiently listen to us and converse with us instead. And at any rate, there’s a loneliness epidemic; chatbots are the cure. Why build community and connection when we can build data centers instead to talk to our imaginary friends that don’t even need to consume drugs in order to hallucinate? Why even eat food anymore when we can choke on dust? We might even get some cool new cults out of it!
The rabid insistence that a computer program with language synthesis constitutes the pinnacle of innovation belies a profound ignorance of the real problems we face. What, in fact, is achieved through the use of such programs? The programmer might write code a bit faster. The writer may compose a manuscript slightly more quickly than before. The artist could complete a painting more easily. Color me unimpressed. While an argument can be made that this is some form of innovation, it is by no means so important an achievement as to merit the attention it now receives, let alone at the cost it demands. Who cares that the programmer produces more lines of code per day, the writer more sentences or the painter more strokes of the brush? Only the shareholders, the C-suite, the board of directors; the moneyed class. As argued above, generative AI does not expand the set of problems we can solve; at best, it can only make us a little faster at solving problems that were already solvable without its help. While of course that is not entirely useless, it is hardly the paradigm shift many people seem to think it is.7
Generative AI brings us no closer to halting climate change (indeed, in its current form it does quite the opposite), to housing the unhoused, to curing cancer, to feeding the hungry. These are the real issues of our time, and they deserve much more attention than any chatbot, however clever. Naturally, some will object on the grounds that we don’t have to solve all problems at once, that an innovation need not solve climate change to be important, and that in fact generative AI is used fruitfully in medical research. All of this is true, obviously, but also completely besides the point. My point is that we spend a disproportionate amount of time forcing generative AI into every possible thing without regard for actual utility, when we could be using it to do truly useful things. My point is that, whenever wealthy men (and they are always men) speak of the potential of generative AI, they are referring to the costs they can cut by firing people; they aren’t talking about curing cancer. OpenAI isn’t investing billions in new data centers to cure cancer; they’re doing it to defraud Oracle. Moreover, I know several scientists who do cancer research with AI, and the closest ChatGPT has come to being useful in that regard is speeding up literature review, and even that is tenuous at best since ChatGPT still constantly makes up fake studies.
You don’t learn anything by using it
This argument applies especially to students, but more generally to anyone using generative AI in the course of learning a new skill. The fact that you don’t actually learn anything when using generative AI to solve problems for you is intuitively obvious, since it is essentially equivalent to having somebody else do your work for you. We’ve known for a very long time that you don’t learn anything by copying someone else’s homework, so why would you learn anything by copying an AI?
This intuition is (unsurprisingly) strongly supported by scientific research. There is, of course, the famous MIT study that explored the cognitive consequences of LLM-assisted essay writing. While personally I don’t put much faith in EEG measurements as a metric for cognitive effort, the study did reveal interesting qualitative findings:
-
LLM-assisted essays were considerably more homogeneous compared to the non-LLM ones. This is not surprising to anyone who knows how these models work: they are statistical word predictors and hence they generate text according to a fixed distribution, which will inevitably cause homogeneity when they are prompted to write essays on similar topics. This presents a subtle danger to scientific and cultural advancement, which can thrive only through diversity of thought and opinion. If AI is widely used in the production of creative output, then this output will be homogenized, and human society will face its own “thought collapse.” Moreover, the singularities to which our thoughts will collapse will be wholly determined by soulless companies maximizing profit for shareholders. As such, in an AI-dominated world, you will only be able to think what rich CEOs want you to think, and you will only write what they allow you to write. You won’t have freedom of speech; you’ll only have freedom of prompt, provided you pay a subscription.
-
Writers of LLM-assisted essays were less critically engaged with the text they produced. In fact, they mostly failed to even quote their own work. To me, this signals that the typical caveats of “responsible AI use” are utterly performative: once you’re used to relying on AI, you don’t critically examine its outputs anymore. Indeed, why would you? Doesn’t that defeat the point? So you copy and paste and perhaps skim the text a little bit, but you don’t proofread to the same extent you would had you written the text yourself. As a result, you’re less aware of what’s actually in there, you don’t really internalize what’s written and mistakes or things you find disagreeable slip through the cracks.
The MIT study basically confirmed what everyone should already know: when you use AI to solve a problem, you don’t learn how to solve that problem yourself. This is not only intuitively obvious but also supported by plenty of scientific evidence, and I won’t entertain anyone who denies this basic fact. So we reach the inevitable objection people always raise at this point: why should we still learn to do things that AI can do for us? Aside from the fact that this general attitude – why should I learn a thing if somebody else can already do it – is absurd on its face and only held by deeply uninteresting people, I have actually already answered this question in the previous two sections, depending on the particular problem we’re talking about:
-
If we’re talking about art, then you should learn to create it without AI because that is what art is about. Art is about human creative expression; hence its creation cannot be automated. Sure, you can use AI to mimick the form of art previously created by other humans, but then your creation is meaningless and I will never call you an artist. I’m not interested in the “creative expression” of an unfeeling machine that has never felt the sun on its face or the grass between its toes, that has never known love and has never known loss. Its creations carry no meaning worth pondering. Would we also expect a worm that has lived its entire life in the soil to be able to wax poetic about the ocean after having read a book about it? On a more basic level, many artists don’t actually like to use AI in the first place, because a typical artist derives joy from the very act of creating their own art. They don’t want the help of an AI any more than they would want somebody else to hold their pen or brush for them. Doing it yourself is the whole point. Are you also going to stop watching movies because an AI can summarize them for you? No, because the point of a movie is to watch the thing with your own eyes! We may as well stop going to amusement parks because ChatGPT can tell me all about what it feels like to ride the rollercoasters.8
-
If we’re talking about code or mathematical theorems, I’d argue it’s still important for humans to master these skills, because generative AI cannot shift its training distribution. That doesn’t mean you in particular have to know how to code or prove theorems or whatnot, in the same way not everyone who drives a car must also be a mechanic. I don’t care if a biologist uses a coding assistant to generate some plots of single-cell data or whatever, or a student uses ChatGPT to explain some mathematical concept to them. But I do care that there are still actual programmers and mathematicians left who push the field forward, because they understand its ins and outs and know where it needs to go next. We have machines that can build cars, so should we stop teaching engineers how engines work?
I find the idea that generative AI renders intellectual and creative skills obsolete to be profoundly obtuse. We should learn to understand the natural world ourselves because the world is inherently interesting, and anyone who thinks otherwise – who believes nobody needs to learn math anymore because we have calculators, or nobody needs to learn to paint anymore because we have image generators, or nobody needs to learn to write anymore because we have chatbots – is, at best, a deeply uninteresting person. At worst, such people entirely misunderstand the purpose of having a brain. They view art, programming, math, etc. as nothing more than the production of output, a market commodity the only purpose of which is to fetch a certain price. They don’t see the true value of things and have no patience for the human experience; they only care if the line goes up. Is this what we’ve reduced ourselves to: just empty husks, our spirits drained into a data center, with no interest in the world around us? Are we mindless cows, grazing blissfully on the desiccated fields of human intellect, oblivious to our impending doom as generative AI sets it all on fire?
You can’t handle the truth
To me, the most conspicuous thing missing from generative AI discourse is how we’re supposed to cope with the resulting job losses, both the theoretically predicted ones assuming AI keeps improving as well as the ones already happening right now. The answer, of course, is a universal basic income (UBI) to sustain those whose skills are no longer marketable thanks to AI. There is no way around this, and it isn’t rocket science: if we continue to automate ever more human labor, then by definition there is less work remaining for an increasingly large population of humans. We will build – as we are already in the process of doing – a growing surplus population, a reserve army of labor as Marx and Engels call it, whose circumstances make it impossible to find work that pays a living wage. The Chinese government seems to already recognize this and has ruled it illegal to fire people because AI would be cheaper. Although a step in the right direction, it doesn’t prevent firms from not hiring due to AI, so the fundamental problem remains.
If you are in favor of generative AI but simultaneously reject the idea of a basic income, then you have only two options: either you take it on faith that people will always find jobs that pay well enough or you’re okay with leaving large swaths of the population terminally unemployed and living in poverty through no fault of their own. The former statement is straightforwardly incorrect (we would have far fewer unemployed otherwise) and I am ideologically incompatible with the latter. If you support generative AI, then you should also support UBI. You should also, in particular, oppose our government’s current destruction of social safety nets, including the limitation of unemployment benefits. These are not debatable issues unless you’re the kind of neoliberal shill who has no problems with deliberately plunging people into poverty, but in that case I don’t want to waste my time with you. I will not entertain “debates” with people who defend human suffering and squalor.
Case study: Ruben Verborgh
So where does that leave us when it comes to the comments made by Ruben Verborgh to our Flemish media? For non-Dutch speakers, I briefly summarize the situation here.
Ruben Verborgh is an associate professor at Ghent University and colleague of mine, teaching Web Development to our second-year bachelor students. He reported to the media that the exam he’s preparing for this particular course will be open to everyone (even non-students), and that he will fully allow the use of generative AI tools. He makes the case that students ought to be prepared for a world where AI is everywhere, and that using AI alone is not sufficient to pass his exam: he reports that ChatGPT only got 4 out of a possible 20 points on the exam.
In fairness to Ruben, he is right that students ought to be prepared, since AI already is everywhere and this will only get worse in the future; I’m not naive on that point. I also appreciate that he explicitly tries to make the case that humans – his students in particular – still have intellectual advantages over AI, since ChatGPT couldn’t pass his exam. However, I also think he fails to make both these points convincingly.
First, our students don’t need to be “prepared for AI” in the sense Ruben means. What he suggests is to fully allow generative AI in all courses and exams, giving up on trying to fight usage of these tools anywhere in our education system. It should be clear by now why I cannot be in favor of this: because a university is an institution of learning, and you don’t learn anything by using generative AI. A university is a fundamentally different place from industry, and following a course is totally different from working a job. Learning to program is not the same as being a programmer; learning to write is not the same as being a writer. At a university – and any school for that matter – the point of taking a course is so you, the student, learn new skills and develop your own brain. You don’t do this by outsourcing the majority of work to a machine. Why become a programmer if you don’t want to program, or a painter if you don’t want to paint? The fact that students will be allowed (or even forced) to use generative AI tools when they eventually get a job is completely irrelevant, because again: a university course is not a job simulation.
Ruben’s remarks here reflect an attitude towards education to which I am diametrically opposed: the idea that the point of an educational institution is to produce workers, perfectly tailored to the current job market. It isn’t: a university’s goal is to produce scientists, and we frankly shouldn’t give a damn about the job market. This is why we also teach literature and other “useless” topics, because it is the intellectual pursuit that counts rather than the whims of the market. A university should not be subservient to parasitic tech companies that want to outsource employee training; let companies train their own damn employees with their own money instead of relying on our tax-funded institutions! I don’t care that students are going to have to use these tools in the future anyway; what I care about is that, at our computer science department, students become computer scientists. To become a scientist in any given field, there is a minimum amount of intellectual work you must be able to do without anyone else thinking for you, because it is your specific brain – not some stochastic parrot – that needs to develop the skills necessary so that you can call yourself a scientist. I’m not going to call someone a programmer when all they do is copy output from ChatGPT, in the same way I’m not going to call someone a mathematician because they prompted ChatGPT to write a proof of some theorem – however correct that proof may be.9
The second problem I have with Ruben’s publicity stunt is his claim that students still have an advantage over AI because ChatGPT can’t pass his exam. It’s an obvious unwarranted generalization to claim that generative AI can’t be that bad for education because it can’t solve this one single exam. I know ChatGPT by itself can already solve many of our exams satisfactorily, because of course I and many of my colleagues at the department have been testing this for years now. In fact, I’d say the observation that ChatGPT only scores 4/20 probably indicates that this particular exam is overly complicated and doesn’t really reflect a reasonable test of student knowledge, because ChatGPT absolutely knows everything that is taught in Ruben’s webdev course. I strongly suspect he specifically engineered this exam to be unsolvable by ChatGPT, which is not only a losing battle as AI will continue to improve but also potentially makes for a very unfair exam. Imagine trying to teach a kid basic math, but because ChatGPT can do basic math you instead ask the kid to prove the Riemann hypothesis! There is a reason our exams have a certain level of difficulty: because they are designed to test whether the student has attained a certain level of skill, no more and no less. The idea that an exam becomes moot once AI learns to solve it not only reflects a disdain for intellectual pursuits but is also self-defeating, because we can always add that exam to the training set and have the AI solve it with the next update.
At a baser level, I think Ruben neglects the fact that AI can actually do the things we teach it to do: ChatGPT in particular has obviously been trained to pass a bunch of university exams on many topics, and as a result it can do so. I’m not the kind of AI skeptic who believes AI can’t really think and is therefore utterly useless; I’m the kind of AI skeptic who understands statistical learning theory and hence knows what AI can be capable of, without resorting to philosophical discussions about what it means to think (interesting as those discussions can be). We’ve taught these bots how to pass exams and, as a result, they can actually pass many of our exams. Shocking! It’s almost as if empirical risk minimization has certain performance guarantees.
There is something almost desperate in Ruben’s ostensibly optimistic attempt at incorporating AI into education: he believes there’s still plenty that AI can’t do and likely never will, and therefore we don’t need to worry about students using it. By contrast, my position is that, as educators, we shouldn’t give a damn about what AI can and cannot do, because we’re trying to teach students certain knowledge and skills that have value even when they can be automated. There is inherent value in having human beings master certain skills and learn scientific knowledge. We didn’t stop memorizing facts after we invented writing, because there is value in having pieces of knowledge inside your human mind so that you – you, specifically, not some machine – can reflect on them.
All of this isn’t to say that AI cannot have any place in creative or intellectual endeavors. But to use AI effectively the way Ruben seems to want, you actually need to possess a particular set of relevant skills, and we cannot adequately ascertain whether students have such skills if they can freely use AI on exams. Compare the collaboration between two human colleagues: could they possibly work together effectively in, say, the proof of a mathematical theorem when one is a renowned expert but the other has never taken a maths course at all? Can we seriously expect any fruitful collaboration between humans and AI when we’ve so neglected our education that the humans are vastly underqualified compared to the AI? How could that possibly make sense?
Conclusions
Human beings may not be perfect, but a computer program with language synthesis is hardly the answer to the world’s problems.
– J.C. Denton
Long story short: I don’t think we should be allowing generative AI on our exams, or at the very least not all exams. I think we should be very selective where we allow this technology and where we ban it, but I also believe the vast majority of university lecturers would do well to ban it. In particular, as computer scientists, we would do well to teach our students to be more critical of this technology rather than to blindly embrace it.
I like to compare the use of generative AI to the use of computer-assisted proofs in mathematics. Terence Tao, one of the greatest living mathematicians, is excited about generative AI for what it can mean in automated theorem proving, and rightfully so in my opinion: reasoning agents hold the potential to provide aid to human mathematicians in the solution of unsolved problems. Crucially, such agents can learn to generate proofs in languages such as Lean, so that the correctness of the proof can be verified with absolute certainty. This is a good application of generative AI, but it’s only good because it comes with a few additional caveats:
-
The output of the LLM can be formally verified to be correct, so you will always know when the LLM has made a mistake.
-
The LLM is treated as a tool to augment human abilities rather than to replace them.
These two factors mean that we can trust the results of this AI-enhanced process and, importantly, it’s not going to cost any mathematician their job. Tao isn’t saying insane stuff like “AI doubles our productivity so we only need half the number of mathematicians,” because that’s just stupid. Mathematics isn’t an industry that produces theorems like a market commodity; it is an intellectual pursuit which most mathematicians practice for its own sake, because they like to use their own brains to think about problems. This is evident in recent manuscripts that use generative AI to prove theorems, such as Alexeev et al. (2026) who proved Erdős problem 1196 as well as a host of corollaries. While Liam Price, who originally submitted the ChatGPT-generated Lean proof, is apparently an amateur when it comes to mathematics, it still took seven co-authors to take ChatGPT’s output and get a meaningful academic contribution out of it. These researchers did much more than simply prompt ChatGPT to prove a theorem and copy its answer.
While the Price story is being used by some as an argument against everything I’ve written here, there are a few simple reasons why that’s short-sighted at best:
-
Price has admitted in an interview he doesn’t actually understand the problem he solved. This begs the question: why did he bother to do this? More generally, why would anyone have the motivation to solve a problem if they don’t understand it? Price clearly just did this for fun, but stumbling blindly through a catalog of unsolved problems until you accidentally get a proof that checks out seems suboptimal to me, to say the very least.
-
Tao has also stated that the problem in question was easier than expected, suggesting that AI-assisted proofs are predominantly helping us with “low hanging fruit.” There are many unsolved mathematical conjectures, and we don’t have unlimited energy to spend on each one, so it makes sense that a breakthrough in automated theorem proving will first yield a bunch of easy wins we didn’t know were easy. This is similar to the deep learning boom of the early 2010s, when we suddenly got major performance boosts with this new class of algorithms, but that eventually tapered too.
So my take-away from this story is that generative AI can clearly be useful provided we keep experts in the loop instead of rendering them redundant. Under those conditions, AI can be a democratizing technology, once we find out how to make it sustainable and switch to local LLMs that aren’t controlled by delusional monopolists.
The case for my life, then, or for that of any one else who has been a mathematician in the same sense which I have been one, is this: that I have added something to knowledge, and helped others to add more; and that these somethings have a value which differs in degree only, and not in kind, from that of the creations of the great mathematicians, or of any of the other artists, great or small, who have left some kind of memorial behind them.
To me, the way mathematicians currently treat generative AI should be the template on which all other applications are modeled. Assuming certain other problems are resolved, such as the environmental impact, I have no real objections to using generative AI in many domains, provided it is actually used to help people rather than to render them obsolete. However, in order for generative AI to be truly helpful, we cannot neglect our own creativity and skills. If you want to be an AI-assisted mathematician, you have to become a mathematician first. Similarly, if you want to be an AI-assisted software developer, you have to become a software developer first. Therefore, I don’t think there’s much place for generative AI in education, at least not on exams for beginner courses; you need to learn to walk before you can run.
There will always be value in human beings thinking for themselves and mastering intellectual and creative skills. But that could just be me being old-fashioned, I guess. To the AI tech bro, there is no need to think for yourself when you can have a machine think for you. To me, however, this is more indicative of the vacuousness of the tech bro mind than it is a condemnation of human thought and learning. Of course those whose own mind is mute would find no value in thinking for themselves.
Footnotes
-
They also have a bunch of other good points. This paper is very much worth reading in its entirety. ↩
-
This includes myself. I’m far from perfect, but neither am I a hypocrite. ↩
-
As someone on the internet put it: what is this world we’re rushing to build? A dessicated wasteland? ↩
-
I will concede that generative AI can effectively automate certain tasks. I will never concede, however, that these tasks absolutely need to be automated. Therefore, it is a luxury technology, not a necessary one. ↩
-
I’d love to work on this though, and I’m convinced it’s possible to prove this rigorously. Given that you can basically talk to a chatbot in any language, we can restrict our attention to formal languages, and from there we can probably obtain some undecidability results. ↩
-
The nice thing about this particular argument is that it’s actually sort of testable: you could train an LLM on all human knowledge up to, say, the 1950s, and then see if it can prove Fermat’s Last Theorem or other relatively recent mathematical breakthroughs. ↩
-
This aversion towards allowing people to just take their time to produce a form of creative output points to a more systemic illness in our society. We have no more patience, and we refuse to accept the basic reality that hard problems take time to solve. This is actually a proven mathematical fact; see the time hierarchy theorems in computational complexity theory. ↩
-
The writing of this essay in particular is something I could never outsource to anybody else, because these are supposed to be my thoughts, not someone else’s. Even using AI to “improve” the writing I find objectionable here, since the act of writing every single word in this thing has been a catharctic experience for me that would be undermined by automation. I also just dislike the way most AIs write, probably because I’m a teacher: AI writing just feels way to much like a pretentious student trying to come off as smart without having read the syllabus in any detail. ↩
-
Indeed, Ruben’s position on education is self-defeating in this regard: if the goal is to simulate the “real world” as accurately as possible, then why even have exams at all? When on Earth is a programmer ever going to be cornered by their manager, locked in a room for three hours with no permission to talk to anyone, and forced to solve some artificial coding problem to which their manager already knows the solution? We allow some level of artifice in university exams because any competent educator realizes that an exam is not a job simulation but a test of the student’s skill and knowledge, and any such test is fatally undermined by outsourcing thought. Furthermore, Ruben still disallows direct communication with other people during his exams, so at least he still acknowledges that students have to use their own brain to some extent. But if they can use AI chatbots and coding assistants, then what’s the point of not allowing them to communicate with each other? You’re already allowing them to consult an expert anyway. ↩
Enjoy Reading This Article?
Here are some more articles you might like to read next: