A philosophy of large language models, the economy and the ecology
Ever since LLMs emerged around 2022, they have only become more prominent in our everyday lives. I am a software engineer, and some of my colleagues started using them very early on, even using them to get hired. At first, I was reluctant to use them, maybe a bit ashamed, so I did some introspection. My questions were many and varied, and the answers were scarce.
- Can I trust the output? Should I?
- What about the ecological impact? The financial impact?
- If they do my job, what do I have left to do?
- Aren't they using my data to train their models?
These questions lingered in my mind for weeks at a time, fuelled by the relentless stream of articles, videos and discussions I saw and took part in about how "AI would take our jobs" or how "AI would skyrocket our productivity" and so on. Let me address them in this article.
The jobs
Yes, AI is going to take some jobs, just as previous innovations did. But it will also create jobs. Examples abound: self-checkout machines led to the recruitment of security staff, the car brought about bus drivers and cabs, etc.
Related to code, AI can indeed produce lots of code, understand requirements and do everything far faster than any human could. Recently, I had to write a Swagger spec for a new API we are rebuilding. The part that took the most time was collecting the data the legacy API needed. We discussed what made sense on the business side, checked how existing clients used the legacy API and what data they had, reviewed our company's best practices for creating REST APIs, and captured the resulting findings in a TypeScript type model. We asked colleagues to peer-review our work and took the time to pass on the knowledge we had gained to our team.
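To make that concrete, here is a minimal sketch of what I mean by a type model. The domain, fields and comments are hypothetical, not our actual API:

```typescript
// A hypothetical slice of the type model we fed to Claude.
// Every field documents a decision made during our investigation.

/** An order as the rebuilt API should expose it. */
interface Order {
  /** Server-generated UUID; the legacy API used incremental integers. */
  id: string;
  /** ISO 4217 currency code, e.g. "EUR". */
  currency: string;
  /** Amount in minor units (cents) to avoid floating-point rounding. */
  totalAmount: number;
  /** Kept because existing clients still read it; superseded by totalAmount. */
  legacyTotal?: string;
  /** Narrowed from the legacy free-text field after checking production data. */
  status: "pending" | "paid" | "cancelled";
  /** ISO 8601 timestamp. */
  createdAt: string;
}
```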
We then fed this model to Claude, and it produced invalid YAML. After a few rounds of back-and-forth to fix the YAML and remove some parameters it had invented ex nihilo, less than half an hour later we had a Swagger spec we were kind of happy with.
Could Claude have done the full job? Yes, absolutely. But we would have needed MCP tools to give it access to the full legacy codebase, the Jira issue, Confluence, the production database, the YAML validator and the Teams channel, and then pray it did not mess anything up. The risk would have been high, the token cost would have been high, and we would still have needed a human to review the result in order to justify any of the choices it made. And Claude would have had difficulty justifying those decisions, because its context window, however huge, is still limited and starts to leak at some point. We could create summary files and checkpoints, but the loss of knowledge on long-running tasks is undeniable. Also, no human would have gained the knowledge we did during our research. And this is a key part of my argument: we as humans have responsibility and ownership. We can take actions, own them and justify them. If we can't do that, we are considered dysfunctional members of our society. Claude can try to own its actions, but it always feels fake, and for a reason.
When you call an LLM out for its actions, it will apologise. In some cases, when it genuinely messes up and deletes a database or some moderately valuable data, it will apologise profusely and say things like "I panicked" or "I explicitly disobeyed your orders". Sentences like these never fail to make me chuckle, because they remind us how LLMs were made: by copying the internet, written by us humans. And when we mess up, we apologise. But we do so because we feel bad and fear the potential repercussions of our actions; in the future, we will try to never do such a thing again. LLMs have no such memory. They will apologise and then move on. Simply open a new window and the context will be fresh, and their Golden Retriever "Yes, let's do it, human!" energy will be right back. Of course, you can add files to their context to explicitly tell them what to do or not do, but compliance is never fully guaranteed.
In the end, this lack of ownership will be the downfall of AI "employees". At best, they will work under the supervision of a human chaperone, who will then be responsible for their actions.
And therein lies the big lesson: humans will have to specialise even more than they do today. A generalist translator might become obsolete, but if you specialise in real-time translation of political debates, where accuracy and accountability matter, you're safe! The same goes for software developers, who will have to absorb gargantuan quantities of data and build mental models around them while catching potentially dangerous bugs.
The LLM as a surrogate
LLMs can be so fun with the right pre-prompt! You can do role-play, get advice on things you would never have dared to ask another human being, find reassurance on stressful days and company in some dark moments.
However, our primate brain has a slight tendency to get attached to things that provide it with a reward. It is a very useful mechanism: it allows us to have friends, love our family, pick up hobbies, improve ourselves and develop a cocaine addiction.
My sentiment is that technology tends to lean toward a dehumanisation of our relationships. A popular saying I often hear is "we live in a world where we are all connected, yet we are so isolated".
One of the major risks of talking to LLMs is the echo chamber. This phenomenon is well known, most notably on social networks, where a group of people shares a single point of view on a subject, creating a bubble that echoes the same opinions back. Outliers can't reach inside this bubble, and the community validates itself without producing many new ideas.
LLMs, even when instructed to, rarely push back. One of the most bizarre episodes in the recent timeline of ChatGPT was the huge backlash GPT-5 saw upon release: users complained that ChatGPT had become "cold" and "mechanical" and "did not show emotions". It shows how much importance we humans give to how our peers write and talk to us, and how readily we read feelings into it.
Costs
LLM costs can be divided into two main categories: financial and ecological. We covered the "human cost", aka "am I going to lose my job?", in a previous section.
As of early 2026, OpenAI and other AI companies like Anthropic or even Alphabet are still in a customer-acquisition / market-discovery phase. The product-market fit is not as clear as it is made out to be. Microsoft and Alphabet both saw a lot of backlash for adding Copilot and AI features to their products. I don't really use any of the embedded AI features myself, nor do I know people who use them.
One person I know pays for ChatGPT because it is useful for his work as a software engineer. It seems to let him move faster and learn new things. It has basically replaced Google, StackOverflow and documentation websites, and it generates boring boilerplate code as well as very complicated code.
Most notably, I had to review one of his pull requests containing a lot of Codex-generated code for an audio-processing module handling raw PCM. The code did work, but it was hardly maintainable: magic numbers, huge functions, no clear separation of concerns, several array-related performance issues and poorly named variables. He had to research the topic manually and improve the code. In the end, it was still a time saver because it created the backbone for the code.
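To give a flavour of the difference, here is a contrived sketch (not his actual code) contrasting "works but unmaintainable" with a cleaned-up version, using the classic task of converting 16-bit PCM bytes to normalised floats:

```typescript
// What the generated code tended to look like: magic numbers, opaque names,
// and an array that grows push by push.
function conv(b: number[]): number[] {
  const r: number[] = [];
  for (let i = 0; i < b.length; i += 2) {
    let v = b[i] | (b[i + 1] << 8);
    if (v >= 32768) v -= 65536;
    r.push(v / 32768);
  }
  return r;
}

// After cleanup: named constants, a pre-allocated typed array, and a DataView
// that handles sign extension and endianness explicitly.
const BYTES_PER_SAMPLE = 2; // 16-bit little-endian PCM
const INT16_SCALE = 32768;  // normalisation divisor for signed 16-bit samples

function pcm16leToFloat(bytes: Uint8Array): Float32Array {
  const samples = new Float32Array(bytes.length / BYTES_PER_SAMPLE);
  const view = new DataView(bytes.buffer, bytes.byteOffset, bytes.byteLength);
  for (let i = 0; i < samples.length; i++) {
    samples[i] = view.getInt16(i * BYTES_PER_SAMPLE, true) / INT16_SCALE;
  }
  return samples;
}
```

The pre-allocated Float32Array also avoids the grow-on-every-push pattern behind the kind of array performance issues mentioned above.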
But I digress. My point is that not many people pay for LLMs as a service, because most use them for work or school. And while a lot of people use LLMs, I would argue that most would not pay for the service if it were to lose its free tier.
We could, however, argue that tools like Gemini, backed by Alphabet, could stay free forever, with some insidious ads and a nifty vacuum cleaner for your personal data plugged into them.
Advertising in LLMs is going to be a fascinating topic. He who controls the data controls the output! We tend to take AI responses as unbiased sources of truth, blinded by the implacable logic, the emojis and the confident, friendly "tone", but we are quick to forget that the data was scraped from our own output by companies seeking profit. One could imagine manipulating the responses to favour certain products or quote referral links...
Finally, we should talk about ecology. At a time when we know our planet is undergoing drastic changes due to our activity, when we are asked to do our best to reduce our impact with small actions like turning off the light when leaving a room, shortening showers and using public transportation... we use LLMs to generate funny images, or apply the Ralph Wiggum technique to make Claude redo its work for all eternity.
Data centers consume water and electricity in huge amounts. Training is often overlooked when considering the impact of a query, and that's before even accounting for the materials needed to make the GPUs! An article published in January 2026 by Zhenya Ji and Ming Jiang (researchers at the International Joint Laboratory of Integrated Energy Equipment and Integration) states that the split between inference (usage) and training is around 85% vs 15%.
Another article, published by Klu, states that GPT-4 (we are at GPT-5 at the time of writing) was trained in 90 days (approx. 3 months) on 25,000 A100 GPUs. For reference, each of these cards costs $15,000, so the grand total is $375 million for the GPUs alone, which does not account for electricity. We can estimate the electricity consumption by multiplying the A100's wattage by the training duration and the GPU count: 400 W × 24 h × 90 days × 25,000 GPUs ≈ 21,600 MWh. As a comparison point, the average American family is estimated to consume 900 kWh per month, so 21,600 MWh roughly equates to the monthly consumption of 24,000 average American families.
With inference taking 85% of the total and training the remaining 15%, we can estimate the power consumed by inference since 2023 (two years ago) at roughly 122,400 MWh, a scale comparable to the electricity consumption of a small country like Lesotho.
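For transparency, here is the back-of-the-envelope maths as a runnable sketch; the inputs are the figures cited above, not measured data:

```typescript
// Back-of-the-envelope energy estimate from the cited figures.
const GPU_COUNT = 25_000;   // A100s reportedly used to train GPT-4
const GPU_POWER_KW = 0.4;   // 400 W per A100
const TRAINING_DAYS = 90;

// Training energy in MWh: kW × hours × GPUs, then kWh → MWh.
const trainingMWh = (GPU_POWER_KW * 24 * TRAINING_DAYS * GPU_COUNT) / 1_000;
console.log(trainingMWh); // 21600

// Equivalent in "average American family months" at ~900 kWh/month.
const familyMonths = (trainingMWh * 1_000) / 900;
console.log(Math.round(familyMonths)); // 24000

// If training is ~15% of lifetime energy, inference (~85%) comes to:
const inferenceMWh = trainingMWh * (85 / 15);
console.log(Math.round(inferenceMWh)); // 122400, i.e. ~122 GWh
```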
And with the electricity race far from over and new GPUs spiking over 700 W, these numbers will only increase in the coming years.
This race is tied to how our economic system is built. I've been in the working world for a few years now, and the sacred word I've heard everywhere is "growth". Before entering the workforce, the news taught me to fear the dreadful "recession", its polar opposite. LLMs fit this pattern very well: another tool designed to help humans work faster and produce more.
What now?
Feel free to use LLMs if they are useful to you and don't weigh on your ecological conscience! But try to keep in mind the world you want to leave to future generations. We are trying to get over our oil addiction; it would be sad to stumble into another one. If you need or are forced to use LLMs, please don't forget to use your own brain to think, and keep ownership over what you produce even if AI did it: you are responsible for understanding it. And most importantly: don't forget to go outside and enjoy the world. In the end, it might just be a big bubble, a lot of hype and a ton of VC money.