Once AI starts thinking for us
This is me, reflecting on the true impact of AI on our society.

I have been reading a lot of excited opinions about how people are using Artificial Intelligence tools to reduce the time they usually spend doing repetitive tasks.
From writing email templates to crafting presentations or summarizing documents, AI seems to be everywhere now.
And as long as we understand that this “Artificial Intelligence” isn’t really “intelligent” in the general sense, and we double-check that it didn’t hallucinate something barbaric into that important brief we are about to deliver to the court, everything will be OK.
Because what we call “Artificial Intelligence” today is still far from the trustworthy advisor, with bulletproof precision, that Science Fiction promised us.
That’s why Generative AI still falls under the scientific classification of “Narrow AI” or “Weak AI”, and the search for Artificial General Intelligence (AGI) continues.
But there is an even greater risk to us, as Humanity, that comes from the irresponsible use of Artificial Intelligence.
In his excellent article, Nir Diamant explains the transition from Search Engines to AI Agents that do the searching and summarizing for us.
Before Google… before the Internet and its Search Engines (because Google wasn’t the first), when we needed information, we went to the library.
We first looked into general material, trying to find the right topic… to ask the right question.
Then, after stumbling through a few books or encyclopedias that weren’t what we were looking for, we found our first book, publication, or paper that was really relevant to our research.
As we read and jumped from author to author, we acquired the knowledge that gave us a critical understanding of the subject, and the ability to know what else to look for.
In the end, we had built ourselves a theoretical framework that made us far more knowledgeable about the subject, and capable of discussing it, or even adding to it.
Now jump to the late 1990s.
The Internet brought entire libraries online, with the ability to search their text directly.
Then content started being created on the Internet and for the Internet, and the number of possible sources exploded.
But if you think about it, knowledge acquisition didn’t change at its core; it just became more efficient.
Today, when you know nothing about a topic and can’t even articulate the right question, you can start on Google or Wikipedia.
Google will give you a few hundred results that look useful, and you’ll stumble through a few dozen of them, finally becoming literate enough to find your first authoritative source.
Then, after some reading and bouncing between useful and useless material, you’ll be able to build your own theoretical framework.
Just what you did in the library, but faster.
Enter Artificial Intelligence.
Now people no longer try to understand enough to ask the proper questions, find potential sources, and use their own critical thinking to decide which ones to use and which to leave out.
They just ask the AI tools to summarize a topic, with references to the sources, and most of the time they stop there.
People take whatever the AI produces as final, often without even reviewing it for correctness, or whether it makes sense at all (remember our disgraced lawyer from Colorado).
When we research a topic, every source we find and partially read that doesn’t lead directly to our final result becomes part of our “human neural network”.
The next time we need something, there is a chance we have stumbled upon it before, and we at least know where to start looking.
That extended network of loosely related knowledge about a topic is what makes us experts on it.
If an AI spoon-feeds us summaries, it prevents us from building our own neural network and exercising our critical thinking.
In short, the AI is doing the thinking instead of us.
And this reminded me of a scene from the movie The Matrix, where Agent Smith (an Artificial Intelligence entity) opens up to Morpheus about The Matrix and how it came to be:
“Which is why the Matrix was redesigned to this, the peak of your civilization. I say your civilization because as soon as we started thinking for you it really became our civilization which is of course what this is all about. Evolution.”
— The Matrix (1999)
I dare you to perform a large mathematical operation in your head.
We lost that capability when calculators were introduced into our education systems.
I dare you to remember 10 or 20 phone numbers of family or friends (as everyone did in the ’80s).
Mobile phones with address books made us lose that capability too.
If we start delegating our critical thinking to AI, our own neural networks will weaken.
And, paraphrasing Agent Smith, when will OUR civilization become THEIR civilization?
We might not even notice it.
So, what do I think we should be doing?
Artificial Intelligence in its current state is a powerful tool, and it shouldn’t be ignored.
AI is here to stay, and it will slowly evolve into something we can depend on more and more, with confidence.
But until we reach that point, the approach of tools like Google’s NotebookLM can be the perfect way to get the benefits while avoiding the pitfalls.
We can use NotebookLM as a repository for the information we have found, and we will be safe from AI hallucinations, because its summaries are constrained to the sources we provide.
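To make that idea concrete, here is a minimal Python sketch of source-grounded prompting, the general pattern behind this kind of constrained summarization. The prompt template and the bag-of-words check below are my own illustrative assumptions, not NotebookLM’s actual API or pipeline.

```python
import re

# Toy sketch of "source-grounded" prompting: constrain the model to the
# sources the user supplies, then sanity-check the output against them.
# This is an illustration of the general pattern, NOT NotebookLM's API.

def build_grounded_prompt(sources: list[str], question: str) -> str:
    """Ask the model to answer ONLY from the supplied sources."""
    numbered = "\n\n".join(f"[Source {i + 1}]\n{s}" for i, s in enumerate(sources))
    return (
        "Answer the question using ONLY the sources below. "
        "If the sources do not contain the answer, say you don't know.\n\n"
        f"{numbered}\n\nQuestion: {question}"
    )

def unsupported_sentences(summary: str, sources: list[str]) -> list[str]:
    """Naive grounding check: flag summary sentences whose words barely
    overlap with the sources. A real system would verify citations or use
    embedding similarity instead of this bag-of-words heuristic."""
    source_words = set(re.findall(r"[a-z]+", " ".join(sources).lower()))
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", summary):
        words = re.findall(r"[a-z]+", sentence.lower())
        if words and sum(w in source_words for w in words) / len(words) < 0.5:
            flagged.append(sentence)
    return flagged

if __name__ == "__main__":
    sources = ["NotebookLM grounds its answers in the documents you upload."]
    print(build_grounded_prompt(sources, "How does NotebookLM avoid hallucinations?"))
    # Flags the invented claim, keeps the supported one:
    print(unsupported_sentences(
        "NotebookLM grounds its answers in your documents. It can also read minds.",
        sources,
    ))
```

The toy check matters less than the mindset it encodes: anything in the output that cannot be traced back to a source you supplied deserves your scrutiny.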
How do we find the information to feed into NotebookLM?
We can use the old method of Search Engines, or ask AI tools to fetch sources for us.
Read.
Review.
Then use our critical thinking to decide what to include and what to leave out.
But remember that when you ask an AI tool to fetch sources for you, it is deciding internally which ones to show you and which to omit.
And that decision is guided by its programming and its training data.
And both can (and will) be biased.
So it is your responsibility, as the Human In The Loop (HITL), to make sure you are picking the right information and learning without biases.
I strongly recommend Wyndo’s excellent article about how to learn complex topics using NotebookLM.
I hope this (long) rant about the potential misuse of Artificial Intelligence and its impact on our society was interesting and useful to you.
Let me know what you think in the comments, and share this article with anyone you think should be aware of this topic.
Talk to you soon!
Alfredo.
When you are ready, here is how I can help you:
By subscribing to my publication and reading this article, you have already taken the first steps toward redesigning your business to succeed in the Age of AI.
But if you want to go further, here are two options:
If you want to start right away, you can join me in my Ready yourself for the future coaching program, where we will begin implementing together my “Future-Proof Entrepreneur System™” in your organization.
Or join my Future-Proof Entrepreneur Community, and continue learning about which technologies you must be prepared to adopt, with tips and strategies from me and your peers.
The Entrepreneuring AI is free today. But if you enjoyed this post, you can let me know that my content is valuable by fueling my coffee addiction, so I can keep producing it for free 😸☕️