AI is changing almost everything about how we work, and at an unprecedented pace. Across virtually all sectors, organizations are scrambling to adapt as AI reshapes job roles and workflows.
In our lane, learning and development (L&D), the shift is especially dramatic. As I’ve discussed in previous posts, training capabilities that once sounded like science fiction are now part of everyday practice.
But there’s a catch: as AI grows more powerful, it becomes easier to overestimate what the technology can do and, just as important, to underestimate the human expertise behind it. That’s where automation bias creeps in: the tendency to assume that AI is more accurate, objective, or capable than it really is.
That’s just one reason why Joseph Wilson’s new book, Humans of AI, is so timely. It peels back the curtain on AI systems and challenges the idea that they operate independently. Instead, the book reveals the human effort, judgment, and oversight involved in every stage.
Having worked with Joseph before, we were delighted to sit down and discuss Humans of AI. We explored what his book reveals about the essential role people will always play in the age of artificial intelligence.
Lydia Sani: You wrote Humans of AI through the lens of an anthropologist. How did that shape your perspective?
Joseph Wilson: As an anthropologist and someone who spent many years in education, I’ve always been drawn to the human factor. As a high school teacher and, later, while working with edtech companies, I kept coming back to a fundamental question: How do people communicate with one another and with the wider world?
When I returned to school to pursue my master’s and PhD, it was just as awareness of ChatGPT began to surge. That made it even clearer to me that we can’t lose sight of the human factor, and not just with AI but with any technology. If we do, we start to perceive the technology as thinking, acting, and changing our lives solely on its own. In reality, it is humans who design, shape, and use these systems.
That is one of the central arguments of my book: AI often conceals the human role. When we lose sight of the people working all along the tech stack, we overvalue the technology and undervalue the humans behind it. That can be dangerous, because we begin to assign AI abilities it simply does not have, such as genuine reasoning, ethical judgment, or the capacity to teach as a human teaches.
L.S.: What misperceptions do we need to be aware of when it comes to harnessing the power of AI?
J.W.: The main misperception is that these systems are autonomous. They are not. If you map out all the functions required for AI to operate, you find humans involved at every stage. This is unlikely to change anytime soon. The specific roles people play may evolve, but humans will remain embedded in the tools they create, regardless of how the technology is marketed.
While machines are not biased in exactly the same way humans are, they still reflect the assumptions, worldviews, and patterns built into them by people. AI systems are trained on human-created data, which means they can reproduce human biases, stereotypes, and distortions. In some cases, they can even amplify them, because the system has learned that those patterns appear to be what users expect or want.
Another common misunderstanding is the belief that AI suddenly experienced a dramatic scientific breakthrough around 2022, when tools like ChatGPT entered the public eye. In reality, many of the core elements behind today’s AI systems have existed for years.
What changed was less a single revolutionary discovery than a convergence of scale: vastly more data, vastly more computing power, and advances such as transformer-based models that made it possible to process patterns in language much more effectively.
L.S.: You spent years researching and writing Humans of AI. What was your biggest takeaway?
J.W.: One of my biggest takeaways came from working closely with the engineers, designers, and developers who built these systems. I was struck by how skeptical many of them are of the grand narratives around AI becoming conscious or replacing humans. They tend to be highly practical people, focusing less on sweeping futuristic predictions and more on solving problems, advancing science, and meeting technical challenges.
In fact, many of the people closest to the technology seem more skeptical than the general public about claims that AI will replace human work. I spoke with developers who acknowledged that generative AI can write code, but they laughed at the idea that it could fully replace them. They understand better than anyone that writing functional code is only a small part of the job. Good development also requires judgment, security, integration, usability, testing, and debugging: much of the real work that makes technology actually function.
That was revealing to me. I had expected many engineers and developers to be true believers in AI’s transformative power. Instead, I found that the people building these systems often have a much clearer sense of their limitations. In some ways, they are the least likely to buy into the idea that AI is about to replace humanity, precisely because they know what it really takes to make these technologies work.
L.S.: As an author and educator, what are the implications of AI for the L&D industry?
J.W.: We know that learning works best when it is social. That does not necessarily mean everyone has to be sitting in the same room, but it does mean that connection matters. Learning depends on context, relationships, and shared meaning. People do not simply absorb information; they make sense of it within a social and cultural framework, shaped by peers, family, community, and values.
This is why I think AI won’t replace older approaches to learning. What it can do is open up new possibilities. One area I find especially interesting is generative AI’s ability to detect patterns in language and imagery and then produce new content in a recognizable style. That creates a powerful opportunity to explore questions of voice, genre, and form. Asking AI to write in the style of Hemingway or Virginia Woolf, for example, can help students think more critically about what actually defines a writer’s voice or an artist’s style.
L.S.: So, you believe the human element will remain essential to training for the foreseeable future?
J.W.: Good education will never eliminate the human role. Deep learning depends on more than access to information; it depends on guidance, trust, context, and the ability to apply knowledge meaningfully in new situations. That is true both at the design stage of learning and when learners attempt to apply what they have learned in the real world.
Human connection remains central to the process. We learn more effectively from people we trust, and that kind of learning depends on interaction and interpretation. AI can be highly effective at delivering information, but it cannot fully replicate the human capacity to help someone work through context, value, and meaning. That’s why teachers, facilitators, and mentors will remain indispensable.
Organizations should focus on making any AI they use a transparent and critically examined part of the learning environment. If AI is being used in a learning platform, people need to know what it is, how it is being used, how it was trained, and who was responsible for checking its quality. A great deal of anxiety around AI comes from the fact that people often don’t know where the content came from or who is standing behind it. That’s why AI should be presented as part of a larger human system, not as an all-knowing guide.
L.S.: Any final thoughts?
J.W.: If I were designing a learning platform today, I would make the human role visible. I would show that people are reviewing the outputs, fact-checking, challenging, and refining them. AI should be framed as just one tool within a network of human judgment and oversight.
Good teaching has never depended on pretending to know everything. It depends on questioning, revising, acknowledging mistakes, and helping learners understand that knowledge is always evolving. AI should support that process, not obscure it behind a false veneer of certainty.
Lydia’s Closing Note:
It was a pleasure to speak with Joseph and be reminded that human judgment, empathy, and insight remain essential to L&D excellence. Technology will continue to advance, but people will always be at the heart of creating the best possible learning experience.
I believe Humans of AI is a must-read for anyone in the L&D industry. To help spread the word, Redwood will be conducting a giveaway of five signed copies of Joseph’s book. Send an email to the address below with your contact information to enter the contest!
Email: contests@redwoodperforms.com
Winners will be chosen on April 2, 2026. Good luck!
