As Large Language Models (LLMs) like GPT-4 integrate into our daily workflows, we face a paradox: while they promise enhanced productivity, they may inadvertently erode human expertise.
TikTok (and similar platforms) has taken over the world, and we now know it seriously harms our brains, particularly our attention spans (among other effects). Yet TikTok is a pastime, not a productivity tool. If you need to work, you must get off TikTok and do something completely different (unless, of course, your work is TikTok-related).
Are LLMs going to become the TikTok of our work? Will the instant gratification from approving an LLM suggestion set the rhythm of our workday?
The temporary productivity boost from LLMs that we're witnessing today could be negated by a decline in our cognitive capacity. We might all become dependent on a new "walking stick" that we never needed in the first place, but which changes the way we "walk." We may become unable to "run" again - yet we'll have to keep upgrading our "stick" just to keep limping through our daily tasks.
LLMs offer an unprecedented cognitive shortcut, providing polished outputs with minimal effort. While efficient, this path of least resistance poses several risks:
Imagine if chess playing were a core human activity and chess engines were ubiquitous. Non-experts would never spend countless hours honing their skills; they would instead "comply" with whatever the computer suggests. The chess level of the general population would plateau at a beginner level, with true mastery becoming increasingly rare.
However, the analogy stops here, because chess has clear rules and a clear goal: we can objectively tell when a chess engine is giving good results. The scope of application for LLMs is much broader, and we can never be entirely sure that an output is not a hallucination. This over-reliance on AI tools could lead to a decline in human expertise across many fields, with no way to compensate for it.
Rather than viewing AI solely as a content generator or avoiding it entirely, we can leverage LLMs as intellectual sparring partners. This approach allows us to maintain control over our work while benefiting from AI's capabilities. For example:
This "sparring partner" model allows us to leverage AI's strengths while actively engaging our own cognitive abilities, potentially offering a more sustainable balance between efficiency and expertise development.
This is probably the most dependency-free option, but it has its downsides as well. The danger lies in losing autonomy. Soon, we may be letting the AI influence everything we write and do. The machine could become the ultimate judge of what is appropriate or not, what tone a letter should have, etc.
To harness the benefits of LLMs while mitigating risks, consider adopting a structured workflow:
This approach maintains active cognitive engagement while leveraging AI capabilities, helping to preserve and develop human expertise.
I wrote this post using this approach, and I can already feel the dependency creeping in. Today I am capable of writing it on my own, but in 10 years' time, who knows?
We cannot wait 3, 5, or 10 years for someone else to tell us that getting hooked on GenAI is bad.
Today, I believe it's a matter of personal discipline and individual introspection to find the right approach to integrating GenAI into our lives.
Artificial General Intelligence (AGI) is "years, if not decades away" according to Yann LeCun [source], requiring "new scientific breakthroughs that we don’t know of yet". Coming from one of the leading researchers in AI, this should be reason enough to believe AGI is not around the corner.
Further evidence in that direction is Sam Altman's latest manifesto - The Intelligence Age - where he suggests AGI will be here in "possibly few thousand days, or longer".
Sam Altman, of all people, and OpenAI in general, have a very strong incentive to bring AGI on stage as soon as possible. In fact, achieving AGI seems to be the main strategy pursued by OpenAI's founders [source]. Even though OpenAI doesn't seem to have a roadmap to follow [source], its entire charter revolves around AGI. This seems like a page taken from Elon Musk's SpaceX, whose mission statement is about "making humanity multiplanetary". Only in OpenAI's case, there are many companies head-to-head with it that didn't need the AGI narrative to develop.
Altman is pivoting OpenAI from a research lab to a "classical scale-up", as the company is losing US$5bln this year (on US$3.7bln of revenue). How the pivot from a non-profit to a for-profit structure plays out will be clear only after the dust settles. For the moment, Microsoft holds 49% of OpenAI's for-profit arm [source], and Microsoft has "all the IP rights and all the capability" to continue operating its Copilot "even if OpenAI disappears". There is this "fun" clause which takes all this IP away from Microsoft ONCE OpenAI reaches AGI status. Which is not anytime soon. If anything, it would incentivize Microsoft to start hindering OpenAI the moment they suspect AGI is close to becoming a reality.
OpenAI is raising $6.5bln, with $1bln coming from Microsoft, $0.1bln from Nvidia, $0bln from Apple (who withdrew), and $1.2bln from Thrive Capital [source]. Getting Thrive Capital's investment required a sweetener - the right to invest $1bln more on the same terms if certain revenue goals are met.
It is surprising that the hottest company in the world, leading on the hottest topic in the world, has to sweeten its investment deals... every time.
Alexander
I've finally taken the step to start a place for putting my thoughts on the world in general, probably with some technology slant.
An old computer science tradition is to have your first program display "Hello, World!".
I've owned this domain since 2005 as far as I can remember. That brings the Time-to-Hello-World to almost 20 years.
For me, it is about having something to say. Something worth writing down.
We'll see how it goes.
Cheers,
Alexander