The Double-Edged Sword of AI Assistants: Balancing Productivity and Expertise

As Large Language Models (LLMs) like GPT-4 integrate into our daily workflows, we face a paradox: while they promise enhanced productivity, they may inadvertently erode human expertise.

TikTok (and similar platforms) has taken over the world, and it is now known to seriously harm our brains, particularly our concentration span (among other effects). Yet TikTok is a pastime, not a productivity tool. If you need to work, you must get off TikTok and do something completely different (unless, of course, your work is TikTok-related).

Are LLMs going to become the TikTok of our work? Will the instant gratification from approving an LLM suggestion set the rhythm of our workday?

The temporary productivity boost from LLMs that we're witnessing today could be negated by a decline in our cognitive capacity. We might all become dependent on a new "walking stick" that we didn't need initially, but which will end up altering our way of "walking." We may become unable to "run" again - yet we'll have to keep upgrading our "stick" just to continue limping through our daily tasks.

The Path of Least Resistance and Expertise Decay

LLMs offer an unprecedented cognitive shortcut, providing polished outputs with minimal effort. While efficient, this path of least resistance poses several risks:

  1. Erosion of Learning Opportunities: Today's experts are yesterday's beginners, whose expertise was developed through "productive struggle". As LLMs increase the productivity of today's experts, less entry-level work is left for beginners. When the current generation of experts retires, how will the gap be filled?
  2. The Dunning-Kruger Effect: Users might develop an inflated sense of expertise by accessing AI-generated content without developing true understanding. Yes, in many cases the generated content will be "good enough", but the understanding it confers is superficial. An expert writing a sentence may have 50 reasons to frame it the way they do, and when challenged, they can defend their choice. A beginner faced with a blank page has to gradually build their own understanding of the topic before writing anything; the result may be less than perfect, but it will leave a permanent mark on the author.
  3. Loss of Original Thinking: Our brains may gradually lose the "muscle memory" required for original content creation and complex problem-solving.
  4. Stagnation of Innovation: As experts lose the need to challenge themselves, organizational efficiency and innovation may decline over time.
  5. Content Saturation: The ease of generating expert-level content with LLMs could lead to an overwhelming abundance of information, decreasing the overall value and impact of expertise.

The Chess Master Analogy

Imagine if chess playing were a core human activity and chess engines were ubiquitous. Non-experts would never spend countless hours working on their skills, instead "complying" with whatever the computer suggests. The chess level of the general population would plateau at a beginner level, with true mastery becoming increasingly rare.

However, the analogy stops here, because chess has clear rules and a clear goal: we can objectively tell when a chess engine is giving good results. The scope of application for LLMs is much larger, and we can never be entirely sure that an output is not a hallucination. This over-reliance on AI tools could lead to a decline in human expertise across various fields, with no way to compensate for it.

Two constructive ways to work with LLMs

1. AI as a Sparring Partner

Rather than viewing AI solely as a content generator or avoiding it entirely, we can leverage LLMs as intellectual sparring partners. This approach allows us to maintain control over our work while benefiting from AI's capabilities. For example:

  1. Writing Assistance: Instead of generating entire emails or reports, use AI to review your writing for tone, clarity, and effectiveness.
  2. Ideation Support: Bounce ideas off an AI system to get alternative perspectives or identify potential blind spots in your thinking.
  3. Learning Aid: Engage with AI to explain complex concepts, asking follow-up questions to deepen your understanding.
  4. Problem-Solving Companion: Walk through your problem-solving process with an AI, using it to challenge your assumptions or suggest alternative approaches.

This "sparring partner" model allows us to leverage AI's strengths while actively engaging our own cognitive abilities, potentially offering a more sustainable balance between efficiency and expertise development.

The sparring-partner model is probably the option that creates the least dependency, but it has downsides as well. The danger lies in losing autonomy: we may soon be letting the AI influence everything we write and do. The machine could become the ultimate judge of what is appropriate, what tone a letter should have, and so on.

2. Second-best Option: Engaged Writing (Create-Write-LLM-Revise)

To harness the benefits of LLMs while mitigating risks, consider adopting a structured workflow:

  1. Create: Begin with human ideation and planning
  2. Write: Produce the first draft through human effort
  3. LLM: Use AI tools for enhancement and refinement
  4. Revise: Review, and possibly go back to step 1

This approach maintains active cognitive engagement while leveraging AI capabilities, helping to preserve and develop human expertise.
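As a toy illustration (not a prescribed implementation), the loop could be wired up as follows, reusing the hypothetical spar() reviewer from the earlier sketch. The human produces and revises the text; the machine only comments.

```python
from typing import Callable

def engaged_writing_session(review: Callable[[str], str]) -> str:
    """Create-Write-LLM-Revise loop. `review` is any function returning
    critique for a draft, e.g. the hypothetical spar() sketched earlier."""
    draft = input("Steps 1-2 (Create, Write): type your draft:\n")
    while True:
        print("Step 3 (LLM) feedback:\n", review(draft))  # machine comments only
        revised = input("Step 4 (Revise): type a revision, or press Enter to finish:\n")
        if not revised:
            return draft   # the human, not the model, decides when to stop
        draft = revised    # loop back, possibly rethinking from step 1

# Example usage: final_text = engaged_writing_session(spar)
```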

I wrote this post using this approach, and I can already feel the dependency creeping in. Today I am capable of writing it on my own, but in 10 years' time, who knows?


What should we be doing?

  • I don't believe regulation is the right approach today. We still don't know enough to implement any meaningful regulation in this area.
  • Private enterprises are busy finding the right business models and use-cases for GenAI, or ways to maximize LLM use.
  • Universities are on the front lines here, and I believe they have a critical role to play. The first step is to correctly frame the issue, add specific courses to the curriculum, and fully integrate GenAI into all existing courses. The worst approach would be to ignore or try to forbid the use of LLMs.

We cannot wait 3, 5, or 10 years for someone else to tell us that getting hooked on GenAI is bad.

Today, I believe it's a matter of personal discipline and individual introspection to find the right approach to integrating GenAI into our lives.






AGI is not happening anytime soon - and what it means for OpenAI

Artificial General Intelligence (AGI) is "years, if not decades away" according to Yann LeCun [source], requiring "new scientific breakthroughs that we don't know of yet". Coming from one of the leading researchers in AI, this should be reason enough to believe AGI is not around the corner.

Further evidence in that direction is Sam Altman's latest manifesto, The Intelligence Age, where he suggests AGI will arrive in "possibly few thousand days, or longer".

Sam Altman, of all people, and OpenAI in general, have a very strong incentive to bring AGI onto the stage as soon as possible. In fact, achieving AGI seems to be the main strategy pursued by OpenAI's founders [source]. Even though OpenAI doesn't seem to have a roadmap to follow [source], its entire charter revolves around AGI. This seems like a page taken from Elon Musk's SpaceX, whose mission statement is about "making humanity multiplanetary". Except that in OpenAI's case, many companies are head-to-head with it and never needed the AGI narrative to develop.

Altman is pivoting OpenAI from a research lab to a "classical scale-up", as the company is losing US$5bln this year (on US$3.7bln of revenue). The details of the pivot from non-profit to for-profit structure will become clear only after the dust settles. For the moment, Microsoft holds 49% of the for-profit arm of OpenAI [source], and Microsoft has "all the IP rights and all the capability" to continue operating its CoPilot "even if OpenAI disappears". There is also this "fun" clause, which takes away all this IP from Microsoft ONCE OpenAI hits AGI status. Which is not anytime soon. If anything, it would incentivize Microsoft to start hindering OpenAI the moment it suspects AGI is close to becoming a reality.

OpenAI is raising $6.5bln, with $1bln coming from Microsoft, $0.1bln from NVidia, $0bln from Apple (who withdrew), and $1.2bln from Thrive Capital [source]. Getting Thrive Capital's investment required a sweetener - the right to invest $1bln more on the same terms if certain revenue goals are met.

It is surprising that the hottest company in the world, leading on the hottest topic in the world, has to sweeten its investment deals... every time.

Alexander

Hello, World!

I've finally taken the step of starting a place to put my thoughts on the world in general, probably with some technology slant.

An old computer science tradition is to have your first program display "Hello, World!".
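In Python, for instance, the whole program is one line:

```python
print("Hello, World!")
```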

I've owned this domain since 2005 as far as I can remember. That brings the Time-to-Hello-World to almost 20 years.


For me, it is about having something to say. Something worth writing.

We'll see how it goes.


Cheers,

Alexander
