As Large Language Models (LLMs) like GPT-4 integrate into our daily workflows, we face a paradox: while they promise enhanced productivity, they may inadvertently erode human expertise.
TikTok and similar platforms have taken over the world, and their harm to our brains, particularly to our attention span, is now well documented. Yet TikTok is a pastime, not a productivity tool. If you need to work, you must get off TikTok and do something completely different (unless, of course, your work is TikTok-related).
Are LLMs going to become the TikTok of our work? Will the instant gratification from approving an LLM suggestion set the rhythm of our workday?
The temporary productivity boost from LLMs that we're witnessing today could be negated by a decline in our cognitive capacity. We might all become dependent on a new "walking stick" that we didn't need initially, but which would have altered our way of "walking." We may become unable to "run" again - yet we'll have to keep upgrading our "stick" just to continue limping through our daily tasks.
The Path of Least Resistance and Expertise Decay
LLMs offer an unprecedented cognitive shortcut, providing polished outputs with minimal effort. While efficient, this path of least resistance poses several risks:
- Erosion of Learning Opportunities: Today's experts are yesterday's beginners, whose expertise was built through "productive struggle". As LLMs increase the productivity of today's experts, less entry-level work is left for beginners. When the current generation of experts retires, how will that gap be filled?
- The Dunning-Kruger Effect: Users might develop an inflated sense of expertise by accessing AI-generated content without developing true understanding. Yes, in many cases the generated content will be "good enough", but this is superficial. An expert writing a sentence may have 50 reasons to frame it the way they do, and when challenged, they can defend their choice. A beginner faced with a blank page has to gradually build their own understanding of the topic before writing anything; the result may be less than perfect, but it will leave a lasting impact on the author.
- Loss of Original Thinking: Our brains may gradually lose the "muscle memory" required for original content creation and complex problem-solving.
- Stagnation of Innovation: As experts lose the need to challenge themselves, organizational efficiency and innovation may decline over time.
- Content Saturation: The ease of generating expert-level content with LLMs could lead to an overwhelming abundance of information, decreasing the overall value and impact of expertise.
The Chess Master Analogy
Imagine if chess playing were a core human activity and chess engines were ubiquitous. Non-experts would never spend countless hours working on their skills, instead "complying" with whatever the computer suggests. The chess level of the general population would plateau at a beginner level, with true mastery becoming increasingly rare.
However, the analogy stops here, because chess has clear rules and a clear goal: we can objectively verify that a chess engine is giving good results. LLMs have a far broader scope of application, and we can never be entirely sure that an output is not a hallucination. Over-reliance on AI tools could thus lead to a decline in human expertise across many fields, with no way to compensate for it.
Two constructive ways to work with LLMs
1. AI as a Sparring Partner
Rather than viewing AI solely as a content generator or avoiding it entirely, we can leverage LLMs as intellectual sparring partners. This approach allows us to maintain control over our work while benefiting from AI's capabilities. For example:
- Writing Assistance: Instead of generating entire emails or reports, use AI to review your writing for tone, clarity, and effectiveness.
- Ideation Support: Bounce ideas off an AI system to get alternative perspectives or identify potential blind spots in your thinking.
- Learning Aid: Engage with AI to explain complex concepts, asking follow-up questions to deepen your understanding.
- Problem-Solving Companion: Walk through your problem-solving process with an AI, using it to challenge your assumptions or suggest alternative approaches.
This "sparring partner" model allows us to leverage AI's strengths while actively engaging our own cognitive abilities, potentially offering a more sustainable balance between efficiency and expertise development.
This is probably the most dependency-free option, but it has downsides as well. The danger lies in losing autonomy: soon we may be letting the AI influence everything we write and do, and the machine could become the ultimate judge of what is appropriate, what tone a letter should have, and so on.
2. Second-best Option: Engaged Writing (Create-Write-LLM-Revise)
To harness the benefits of LLMs while mitigating risks, consider adopting a structured workflow:
- Create: Begin with human ideation and planning
- Write: Produce the first draft through human effort
- LLM: Use AI tools for enhancement and refinement
- Revise: Review the result, and possibly go back to step 1
This approach maintains active cognitive engagement while leveraging AI capabilities, helping to preserve and develop human expertise.
I wrote this post using this approach, but I can say I feel the dependency creeping in. Today I am capable of writing it on my own, but in 10 years' time, who knows?
What should we be doing?
- I don't believe regulation is the right approach today. We still don't know enough to implement any meaningful regulation in this area.
- Private enterprises are busy finding the right business models and use-cases for GenAI, or ways to maximize LLM use.
- Universities are on the front lines here, and I believe they have a critical role to play. The first step is to correctly frame the issue, add specific courses to the curriculum, and fully integrate GenAI into all existing courses. The worst approach would be to ignore or try to forbid the use of LLMs.
We cannot wait 3, 5, or 10 years for someone else to tell us that getting hooked on GenAI is bad.
Today, I believe it's a matter of personal discipline and individual introspection to find the right approach to integrating GenAI into our lives.