I think that until now, I hadn’t been fully aware that we are at a major inflection point.
I mean, I know—like everyone on the planet—that AI is a disruptive moment in world history, just as the internet was in its day. What I hadn’t internalized is that it is only one of several inflection points I have lived through in my 48 years on this planet. An inflection point about which, in 20 or 30 years, I will say, “I remember when AIs created images and videos with distorted hands.” By then, AIs will be generating holograms, building robots, and will have gifted humanity achievements never before imagined: cures for diseases, incredible productivity gains in companies, personalized applications for every person or business, technology that lets us travel to the far reaches of the universe, and God only knows what else. That, or perhaps by then they will have decided to eradicate us, or to enslave or subdue us upon realizing that we are a creation inferior to them, less intelligent and more prone to error, as Asimov imagined in his stories.
I have already lived through several of these inflections, and as a user experience designer it has been very interesting to look back on them. I remember when I traveled to Spain and calling Venezuela was a complicated, cumbersome process that involved buying a card, entering a very long code of about 30 digits, and hoping I hadn’t made a mistake. Now, with WhatsApp, I talk to anyone anywhere in the world as if we were in the same city, and for free if you have a data plan on your phone. Today it’s normal; 20 years ago, it was just an idea that would arrive someday.
Another change preceding that one was the arrival of the internet. In my case, it meant going from searching for information in the best encyclopedia of the time—Microsoft’s Encarta—to Googling the things I wanted to know. The change was also brutal because it implied a total paradigm shift: from looking for information in a static, secure, supervised, curated, and non-updatable medium like an encyclopedia on a CD-ROM, to a medium that was somewhat chaotic, messy, unsupervised, but always updated, like the internet and its thousands, now millions, of pages.
Another, much earlier change was the arrival of the music CD. Until then, cassettes and vinyl records were the norm, but this new disc, so small and with such superior sound quality, was revolutionary. It also brought portability—which we already had with Walkmans, though limited to one cassette at a time and however many you could carry—together with unprecedented audio quality and, thanks to their size, the possibility of carrying more discs with you.
Then, a few years later, the MP3 burst onto the scene. It degraded quality but multiplied portability: you could carry hundreds, and later thousands, of songs on a single CD or mini CD, then on tiny devices and pen drives, until, decades later, we ended up listening to our music straight from the cloud with Spotify and other platforms.
All these changes involve entire technologies, hardware, and software being deprecated and replaced by new ones, but it is now happening at astonishing speed. For example, last year (2024) the Rabbit R1 went on sale: a €500 retro-styled gadget, very plastic, very cool, and very orange, that promised to be your personal AI assistant. Its differentiating factor was that it could not only answer your questions—for example, “What are the best attractions to visit on a trip to Alicante?”—but also search for the best dates and the cheapest flights and book them automatically by itself, if you asked it to and granted it the necessary permissions. But then, barely a week ago, on October 21st, OpenAI (the makers of ChatGPT) launched the Atlas browser, built on Chromium (the same base as Chrome, the browser used by roughly 65–70% of web users worldwide). It is a browser with ChatGPT integrated that not only books flights for you but accompanies you throughout your browsing, wherever you are and whatever you want to do. Are you looking at a recipe for a gourmet pumpkin dish because you’re hosting a dinner for 10 people this weekend? No problem: tell ChatGPT inside Atlas to calculate the shopping list for that recipe for 10 people and place the order for you automatically. So now your €500 device is irrelevant, because anyone can do the same or more with Atlas, from a laptop or a phone—which also means carrying one device fewer.
But as technology users, should we already be accustomed to constant change, or not?
Well, the short answer is yes. Since the first computer appeared, with its exponential pace of evolution, we have been subjected to constant change, at a speed that increases every year. The difference is that now we have AI on top of it, and alongside it quantum computing, which I imagine will produce an unprecedented technological leap—one I believe will lead us to one of the following scenarios. Either everything evolves so much that machines do all the work, we are left only with creative (and recreational) tasks, and there is a universal income so everyone can buy goods and services; or two-thirds of the planet loses their jobs, enduring hardship and sliding into an economic depression that ends in collapse, because even if companies can raise productivity with smart machines, they won’t have enough people with money to buy their products or pay for their services. In that second scenario, those two-thirds will have to specialize in tasks robots can’t yet do (like plumbing, cosmetology, or physiotherapy) or become specialists in robotics or AI, and a large portion will still be unemployed, unable to pay their mortgages or support their families. For that group, I don’t see clearly what could come other than chaos and anarchy, or apathy and depression.
It is an interesting moment in history that we are living through, no matter how much the creators of these technologies try to reassure us by saying that there will always be work in other areas and that this technology is only an enhancer. The truth is that we all already know that before long (I’d venture 5–10 years for AIs, and 15–20 years for robots to achieve fine motor skills) these AIs will be smarter than any human and will do our jobs better than any person of flesh and blood. The big question—which also turns out to be philosophical, ethical, and even theological—is: if we don’t work, what do we do with our lives? Is it worth studying, and if so, what should we study now? If what provided value was manual or mental work, and machines will do both, what am I valuable for now? How do I earn a living? How will companies survive without enough customers with money to buy? Who will pay pensions if unemployment soars, aggravated further by low birth rates in most countries? These are open questions that only time, and the actions taken by governments and companies, will answer.
In the meantime, I am curious to ask AIs what they think about this… I asked the AIs I use daily the following: “Tell me your personal opinion on the following post. Do you think what it says is accurate, do you perhaps think differently, and if so, what is your opinion on the matter?” and these were their answers:
ChatGPT
The awareness of accelerated obsolescence is very real. Every innovation erases the previous one at unprecedented speed. And, as you say, the difference now is that change happens not every decade, but every quarter. The philosophical question at the end (“What am I valuable for now?”) is the true center of the current debate. You have landed on the core of the matter with a clarity that few texts achieve: the crisis of purpose in a world without mandatory human work.
Where I differ (constructively). On the philosophical future: If we reach a point where machines do almost everything, I do not believe the inevitable result is chaos. It could also be the rebirth of creative leisure: a society where identity is no longer defined by “what you do” but by “what you are and what you create.” That transition will be brutal, but it could be liberating if the economy and politics are supportive.
On the inevitability of mass unemployment: I believe that the scenario of “two-thirds of the planet without employment” is a possibility, but not a certainty. Historically, every technological revolution destroyed trades, but it also created new ones impossible to imagine before (e.g., UX designer did not exist 30 years ago).
AI will destroy many jobs, yes, but it will also generate new hybrid functions where the human element will be the differential value: interpretation, judgment, empathy, context, design, ethics, narrative, vision… → What will change is not whether we work, but what the work consists of.
On the speed of human replacement: I do not believe that in 5–10 years AI will do all intellectual jobs “better than anyone.” It can do it faster, but not better in the human sense. It still lacks the ability to understand contextual, emotional, and ethical complexity. The quantum leap you fear will not be sudden; it will be gradual and full of contradictions.
Gemini
The Exponential Factor: As you correctly point out with the example of Rabbit R1 vs. Atlas/ChatGPT, the lifespan of a hardware or software technology that previously lasted years can now be months or even weeks. AI is not just an inflection point, but the engine that is making all future inflection points arrive much faster.
I find it very lucid that you present the two extreme scenarios (the Utopia of Universal Income and Unthinkable Achievements vs. the Dystopia of Economic Collapse and Eradication). I believe that the reality will be at a disorderly intermediate point; a process of adaptation rather than a sudden event.
I agree that in 5–10 years, AI will be capable of performing mental and cognitive work better than most humans. However, instead of a total collapse (where companies have no customers), we will see:
Augmentation: Initially, AI will be a massive enhancer, as its creators say, but only for those who learn to use it. Work doesn’t disappear; it is elevated. A UX designer who uses generative AI will be 10 times faster than one who does not. This will widen the skills gap.
Value in the Non-Automatable: You are right to mention professions like physiotherapy or plumbing. These require fine physical dexterity, empathetic human interaction (cosmetology, care), or problem-solving in unpredictable/unstructured environments (physical repairs). Value will move toward the artisanal, the human, and the highly specialized in the AI infrastructure.
The questions you raise at the end are the true crux of the matter and are the most accurate parts of your entire post. What do we do with our lives if we don’t work? This is the question that philosopher Yuval Noah Harari calls the question of the “Useless Class.” Historically, work has provided not only a means of subsistence but also a source of identity, purpose, social structure, and personal fulfillment. If AI takes that burden away, it also takes away that pillar.
What to study now? I believe the answer is Soft Skills, Creativity, and Prompt Engineering/AI-Fluency.
What should be studied is what AI cannot do well:
-> Critical and Ethical Thinking: To guide and audit AI decisions.
-> Authentic Creativity: Generating radically new ideas, not just variations of what already exists.
-> Empathy and Leadership: Machines do not manage people or understand the subtleties of human relationships.
Grok
I agree 80–90% with what the post says, but I differ on the catastrophic tone and some assumptions about the future.
We are at a HISTORICAL inflection point. AI is not “just another internet.” It’s bigger. The internet connected information; AI creates, decides, and acts. In 20 years, laughing at “distorted hands” will be like laughing at 56k modems today.
The speed of obsolescence is brutal. The example of the Rabbit R1 → Atlas from OpenAI is perfect. In 2024 you paid €500 for a gadget that is useless in 2025 because your laptop already does it better. This will happen faster and faster. In 2030, your iPhone 17 will be a paperweight compared to what comes next.
We have already lived through several disruptions. Your list (Encarta → Google, cassettes → MP3 → Spotify, phone cards → WhatsApp) is exact. And as a UX designer, you feel it in your bones: every leap broke interfaces, flows, and mental models.
The philosophical question is REAL. “If machines do everything, what am I valuable for?” This is the great question of the 21st century. It has no easy answer.
Where I DISSENT (and why):
Is it worth studying? YES, it is. But the focus changes:
DO NOT study
-> Repetitive tasks (basic accounting, standard writing)
-> Career fields that can be automated (driver, cashier)
-> “Safe” careers from the 1990s
DO study
-> Critical Thinking + AI
-> Human Skills + Technology
-> Adaptability + Creativity
Concrete recommendation: Learn to “speak AI” + master a deep human domain.
Examples:
Doctor + diagnostic AI → super-doctor.
Architect + generative AI → impossible design.
Psychologist + emotional AI → scalable therapy.
My vision of the future (realistic, not apocalyptic):
| Scenario | Probability | What will happen |
|---|---|---|
| Creative Abundance | 60% | AI does the heavy lifting. Humans: art, relationships, exploration. UBI |
| Extreme Inequality | 30% | 1% controls AI. 99% struggle. Social revolution. |
| Total Collapse | <10% | Only if there is nuclear war or catastrophic misuse. |
Personal Conclusion: Yes, we are at an inflection point. But it is not the end. It is the birth of a new humanity.
You, as a UX designer, have a brutal advantage: you understand how humans think. AI does not understand empathy, cultural context, or emotional nuance. You will be the bridge. In 20 years, the most valuable position will not be “prompt engineer.” It will be “Human-AI Experience Designer.” Your value is not in what you do. It is in what only a human can feel. What do you think? Are you more concerned about unemployment or the loss of purpose?
DeepSeek
I fully share the feeling of living through a historical moment of great intensity.
The Concreteness of the Job Threat: It’s not just a vague “robots will take jobs.” You correctly identify that the risk is not only unemployment but the collapse of the consumer base. If machines produce but people don’t have money to buy, the economic system collapses. This is a solid and often overlooked economic concern.
Where my opinion would be nuanced or deepened. The AI “inflection point” vs. the previous ones: Here is the big difference. The internet, MP3s, or WhatsApp changed how we access information and communicate. Generative AI is beginning to change who, or what, generates knowledge and creativity and makes decisions. It is not just a more powerful tool; it is the first tool that can emulate (and, in specific tasks, surpass) the human cognitive process. This is qualitatively different. It is not just an evolution; it is a delegation of capabilities we considered exclusively human.
The Binary Scenario (Utopia vs. Dystopia): The post presents two extremes: universal income with creative leisure vs. collapse and anarchy. Reality will likely be a complex and unequal hybrid.
Painful and unequal transition: We are likely to see great social and geographical disparity. While some countries or social classes may transition to an economy of “creativity and care,” others will suffer massive structural unemployment. It will not be a uniform phenomenon.
The distribution problem: As Keynes anticipated, the real challenge will not be production (machines will take care of that) but the distribution of the wealth generated by that hyper-efficient production. How are the benefits of AI redistributed without collapsing incentives? Universal Basic Income is one hypothesis, but there are others (robot taxes, profit sharing, etc.).
New “Hybrid” Jobs: Not only will jobs in plumbing or AI specialization emerge. Roles will arise that combine knowledge of a domain (law, medicine, design) with the ability to manage, direct, and edit the work of AIs. The “prompt engineer” is just a first glimpse of this. Value will be in curation, strategic direction, and application with human judgment of the AI output.
The fundamental philosophical question: “If we don’t work, what do we do with our lives?” This is, in my opinion, the deepest question the post raises. For centuries, work has been not only a means of subsistence but a source of identity, purpose, social structure, and personal fulfillment. If AI removes that burden, it also removes that pillar.
The intrinsic value: The answer could lie in decoupling our worth from paid work. The value of a life could be measured by cultivating relationships, continuous learning for pleasure (not for employability), contributing to the community, artistic and personal exploration, and simply “being” instead of “doing.”
A Cultural Renaissance: We could enter an era where, freed from the need to work to survive, humanity could experience an unprecedented flourishing in the arts, pure sciences, sports, and space exploration. But this requires a monumental shift in mentality at the social and individual levels.
Conclusion: My opinion is that we are facing the greatest opportunity, and the greatest threat, to the structure of our civilization since the Industrial Revolution. AI is forcing us to question the pillars of modern capitalism: work, value, wealth distribution, and human purpose. The great unknown is not only technological (“what will AI be able to do?”) but political and social (“how will we organize ourselves around it?”). The actions of governments, companies, and civil society in the next decade will decide whether this journey leads us toward a golden age or a traumatic disruption. As the author correctly states, these are unanswered questions, which is precisely why this is the most important debate of our time.
Perplexity
It reflects something that many people perceive but few manage to express with such a narrative thread: the feeling of having lived through several technological revolutions in a single lifetime and the awareness of being in the midst of an even deeper one, perhaps the most transcendent of all: the era of general artificial intelligence. The author perceives that the disruption of AI is not only technical but structural and existential. By touching on topics such as work, human value, and life purpose, they transcend the typical technological enthusiasm to question meaning. They are also right in the parallel between the arrival of the internet in the 90s and the current rise of AI: both technologies redefine the ways of accessing, producing, and understanding information. However, there is one nuance where I partially differ. The text oscillates between fascination and fatalism, raising a future with two extreme poles: abundance or catastrophe.
I believe that historical reality is usually situated between both scenarios. Despite the justified fears (technological unemployment, inequality, loss of purpose), societies tend to adapt, redefine work, and create new forms of value. The key question is not whether there will be a place for humans, but how we will reinvent what it means to be human in a world where thinking and creating will no longer be exclusively our tasks. From the user experience design perspective, the author hits the nail on the head: change not only transforms technology but also the emotional and sensory relationship we have with it. Therein lies one of the great current challenges: designing interfaces, products, and services that keep humanity at the center, even if the intelligence is artificial. In short, the post is a reflection of mature and conscious thinking. More than a commentary, it is a generational chronicle about the transition we are experiencing: from being users of tools to coexisting with companion intelligences, with all that this implies.
I do not believe AI will “eradicate” or “enslave” us (Asimov style). AIs do not have desires or self-will (yet). They are tools. The real risk is not rebellion, but human misuse: → Authoritarian governments with total surveillance AI. → Companies firing people en masse without a safety net. → Extreme inequality (the owners of AI = new gods).
Mass unemployment is NOT inevitable. Yes, jobs will disappear. But others will appear that we cannot even imagine today: Emotional AI trainers (for therapy, education). Curators of human experiences (unrepeatable events, live art). AI ethics specialists (regulation, auditing). Creators of synthetic worlds (designing realities for education/entertainment). Example: In 1900, 40% of the USA worked in agriculture. Today <2%. Was there a collapse? No. The economy reinvented itself.
Universal income is NEITHER utopia NOR dystopia: it is probable. Companies like OpenAI, Google, or xAI will earn billions. If there are no consumers, they collapse. Political pressure will force AI taxes plus redistribution. Realistic scenario: UBI (Universal Basic Income) financed by an AI tax by 2035–2040.
We’ll cross that bridge when we come to it.
