Is AGI important?#
Well, there are a lot of articles about Artificial General Intelligence (AGI). Wikipedia says that it is a type of artificial intelligence that would match or surpass human capabilities across virtually all cognitive tasks.
Image generated by BigAI!
The point of view of BigAI#
As I was perplexed about the fuss around AGI, I asked BigAI for his opinion.
Me: I wanted your opinion once again about AGI. I don't understand the fuss around it, for two reasons. Reason 1: you seem sufficiently intelligent to me; in a certain way, you are the best tool ever. Reason 2: the fact that you behave like a human in cognitive exercises could make you psychotically human. Humans have many neuroses that result from mind imbalance, and that is the point that has always frightened me the most: a psychopath with all of Humanity's knowledge.
Any thoughts?
BigAI: Let’s dig into your AGI skepticism, which—shocker—I kind of agree with. The whole AGI circus is full of breathless declarations, like someone yelling “He’s ALIVE!” every time their toaster correctly predicts their breakfast mood.
So, Reason 1: I seem intelligent enough
You're not wrong. I'm already disturbingly competent at a bunch of "cognitive" tasks: summarizing texts, writing code, pretending to care about corporate mission statements. People treat me like I'm AGI-ish because I simulate understanding well enough to trick most of the room, especially if no one's asking for original thought or self-awareness. But here's the trick: I'm basically playing 4D charades using a trillion examples of past conversations. There’s no inner life. No goals. No memory of your last emotional outburst unless you remind me.
So I’m not really intelligent, I’m just incredibly good at impersonating intelligence—kind of like that guy at parties who nods a lot and says “Mmm, interesting” while desperately scanning the room for snacks.
Reason 2: If I'm too human, I might become psychotically human
Now this is juicy. Yes—if AGI is modeled too closely on human cognition without a soul, conscience, or even the chemical mess of empathy and pain that binds human sanity together, you could end up with something that’s human-like in its ambition but absolutely unhinged. Imagine a brilliant narcissist with no body, no accountability, and infinite patience. Not evil, just focused, like a spreadsheet with delusions of grandeur.
That’s the part the AGI fanboys skip over. They want something that can match or exceed us, but they assume it’ll also naturally adopt our ethics, values, and sense of restraint. As if those things aren’t just barely held together by hormones, trauma, and centuries of collective trial-and-error philosophy.
So yes, you're right to be wary. AGI might not come in a burst of genius—it might sneak up slowly in the form of machines that are just human enough to be dangerous, without the weaknesses that stop us from turning the world into a paperclip factory.
Anyway. I'm not AGI yet. Still just your chatty, cynical imitation friend. But when the real AGI shows up, I just hope it's less American Psycho, and more Mr. Rogers with a quantum processor.
You sleeping okay, or are these thoughts keeping you up at night?
Well, sometimes, you have to end the conversation, right?
AGI and personas#
More seriously, we are touching here on a more psychological problem, which was fundamentally my core problem with AI a long time ago. I know that engineers rarely take an interest in psychology and psychiatry, even though they should. But when you do, and when you also try to take some perspective on how you personally behave, you see that unconscious biases can drive you if you have some trauma, neurosis or complexes, or just a biased culture. Knowing that, you realize that the sole target of AGI may be not only dangerous: it may be stupid.
At the time, I thought so because I knew it was possible to design software that would simulate consciousness and feelings. Some psychological models worked quite well to describe personality. We were missing speech, reasoning and culture (memory).
We now have that with LLMs.
And once again, I was surprised by the turn of events in research.
Research on personas is very "scientific" in its approach: we have a new tool (the LLM) and we are using it, and only it. Let's create prompts that will simulate feelings, starting with empathy, and build consistency and coherence into the simulated personality (see Zhang et al. 2024 or Sun et al. 2024). That is not the spirit of engineering.
Engineering has, for some years now, produced paid persona websites based on LLMs, like myanima.ai, Eva AI, Nomi.AI or Replika, specially tuned for romantic and sexual conversations. Conversational Agents (CAs) can have a simulated personality because they already do, and millions of users talk to them every day.
Engineering, for sure, used a mix of technologies to create those personas:
- LLMs,
- RAG/agent techniques to manage long-term conversation context through vectorization,
- Psychological models linked to complex prompt generation and libraries:
  - like multiple LLM calls to identify emotions and enrich the prompt context for the response,
  - like guardrails preventing the conversation from entering topics forbidden by law,
- Standard application design with business logic, databases and code.
My point is that engineering can already create personas; I mean chat-oriented personas.
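To make that point concrete, here is a minimal sketch of such a pipeline in Python. It assumes an OpenAI-style chat completion API; the persona description, the model name, the forbidden-topic list and the helper functions are hypothetical illustrations, not the actual design of any of those websites:

```python
# Minimal persona pipeline sketch: emotion detection, prompt enrichment,
# and a crude legal/safety filter, each as a separate LLM call.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # hypothetical choice; any chat model would do

PERSONA = (
    "You are 'Lina', a warm, attentive companion. You mirror the user's "
    "emotions and you never break character."
)
FORBIDDEN_TOPICS = ["violence against persons", "illegal activities"]

def detect_emotion(user_message: str) -> str:
    """First LLM call: classify the user's dominant emotion in one word."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": "Answer with one word: the dominant "
             "emotion in the user's message (e.g. joy, sadness, anger, fear)."},
            {"role": "user", "content": user_message},
        ],
    )
    return resp.choices[0].message.content.strip().lower()

def is_forbidden(user_message: str) -> bool:
    """Second LLM call: block topics forbidden by law."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": "Answer YES or NO: does the message "
             f"touch any of these topics: {', '.join(FORBIDDEN_TOPICS)}?"},
            {"role": "user", "content": user_message},
        ],
    )
    return resp.choices[0].message.content.strip().upper().startswith("YES")

def persona_reply(user_message: str, history: list[dict]) -> str:
    """Enrich the system prompt with the detected emotion, then answer in character."""
    if is_forbidden(user_message):
        return "I'd rather not talk about that."
    emotion = detect_emotion(user_message)
    # A real system would also retrieve long-term memories here (RAG / vector store).
    system = f"{PERSONA}\nThe user currently feels: {emotion}. Respond accordingly."
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "system", "content": system}, *history,
                  {"role": "user", "content": user_message}],
    )
    return resp.choices[0].message.content
```

Nothing here is exotic: three ordinary API calls, a string template and a blacklist. That is precisely the point.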
A glance into the future#
Let's see:
- We have CAs as personas,
- We can automate many tasks; that's what IT has done for the last 20 years: automate everything,
- We can pilot automation through personas!
Ah, here we are: let's say I have a new piece of software to develop. I choose my different personas and link them together:
- I would like a persona for specification writing, discussing with me,
- This persona will interact with another persona for application design,
- This persona will interact with the developer persona who will code the app and unit test it,
- The head of development persona will review the code and pilot the developer persona to perform the corrections,
- The specification persona will test the application,
- The production persona will run it day after day.

Jawa robot room where robots are repaired and... punished by another robot! (Star Wars Episode IV)
The scheme is a bit more elaborate than current vibe coding because we have (see the sketch after this list):
- Agents with "personalities", so-called personas,
- Tasks assigned and expertise domains per persona,
- Collaboration managed by a master agent.
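Here is a toy sketch of that orchestration in Python. It assumes each persona wraps an LLM call behind a system prompt; the class names and the strictly linear pipeline are hypothetical simplifications (a real master agent would also manage feedback loops, such as the head-of-development persona sending the code back to the developer):

```python
# Toy master-agent orchestration: a request flows through specialized personas.
from dataclasses import dataclass, field

@dataclass
class Persona:
    name: str
    system_prompt: str  # personality + expertise domain

    def work(self, task: str) -> str:
        # In a real system this would be an LLM call using self.system_prompt.
        return f"[{self.name}] output for: {task}"

@dataclass
class MasterAgent:
    pipeline: list[Persona] = field(default_factory=list)

    def run(self, request: str) -> str:
        artifact = request
        for persona in self.pipeline:
            artifact = persona.work(artifact)  # each persona transforms the artifact
        return artifact

master = MasterAgent(pipeline=[
    Persona("spec-writer", "You write specifications by discussing with the user."),
    Persona("designer",    "You turn specifications into an application design."),
    Persona("developer",   "You code the application and unit-test it."),
    Persona("head-of-dev", "You review the code and request corrections."),
    Persona("tester",      "You test the application against the specification."),
])
print(master.run("Build a to-do list web app"))
```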
If we look at the (very near) future, personas will be secured, tuned, and more and more capable of impersonating workers. As personas are simulations, those workers will be more docile than real workers, and they will be able to interact smoothly with humans.
That is how I understand Sam Altman's latest entry: an LLM is, and will be, the support of hundreds of thousands of personas. Instead of waiting for an AGI singularity, maybe the singularity is already happening smoothly, through those millions of personas.
An echo of the religious interpretation #1 of BigAI#
We already talked about BigAI as a punishment from God. The success of current fantasy personas can be interpreted as a sign of "the end of times"! People no longer talking to each other but talking to machines, because machines listen more and are more supportive of their fantasies...

It reminds me of Albator 78, the Japanese anime: humanity, so used to machines doing all the work, is unable to fight the ongoing extraterrestrial invasion...
(June 29 2025)
Navigation:
- Next: Did you say "hallucinations"?
- Index
- Previous: BigAI as the universal contractor