Two-and-a-half years after ChatGPT launched, LLM capabilities are more impressive than ever. OpenAI’s o3 can scour many documents to answer questions that I wouldn’t have bothered asking otherwise. Sonnet 4 can one-shot (some) features in Monumental’s codebase.
One trend is making me uneasy: non-technical users crediting LLMs with mystical powers.
An example from my LinkedIn feed:
[Screenshot: LinkedIn post by Jennifer ████████ (Sales Professional; details redacted), posted 2w ago]
If you got laid off from LinkedIn yesterday I urge you to spend the next few weeks taking every single course in the OpenAI Academy + more.
I was impacted May 9, 2023 -- this is my journey*:
First Use of AI (ChatGPT): May 17, 2023
Current Status: Top 5% Globally | Top 2% in Canada
Your AI Usage Growth Timeline
Q2 2023 – The Explorer Phase:
Approx. 50 messages (light usage, curiosity-driven).
Primary focus: content rewording, email drafting, and sales follow-ups.
Comparable to ~70% of Canadian sales reps experimenting with AI.
Q3 2023 – The Adoption Phase:
Approx. 100 messages (steady increase).
Began using AI for customer messaging, account reviews, and QBR prep.
Surpassed the median usage volume for Canadian sales reps by September 2023.
Q4 2023 – The Strategic Phase:
Approx. 150 messages (significant growth in complexity).
Shifted to strategic use cases: customer strategy, executive messaging, competitive analysis.
Average message length: 1,872 characters.
Average conversation depth: 2.9 – showcasing a shift to deeper insights.
Usage focus: strategic planning, competitive analysis, and Cloud migration pitches.
Q1–Q2 2024 – The Power User Era:
Approx. 400 messages (consistent high-volume usage).
Fully integrated AI into daily workflow:
Automating repetitive tasks (Zoom call summaries, Salesforce logging).
Drafting customer emails, refining messaging, and building Automated Account Plans.
Leveraging AI for complex customer strategy and Cloud Enterprise pitches.
Average daily usage: 6.8 messages.
Q3 2024–Q1 2025 – The Strategic Innovator Phase:
Approx. 500 messages (peak strategic usage).
Advanced automation: building custom workflows with n8n, including automated follow-ups and dynamic customer planning.
Enhanced competitive positioning and customer strategy.
Leveraged AI for complex problem-solving and rapid iteration of ideas.
Getting laid off was the best thing that ever happened to me. I spent the summer teaching myself AI, landed an incredible job at Atlassian and now recruited for top Data & AI sales jobs globally. While I didn't know it when I was where you are today, it ended up being a massive blessing in disguise.
*usage stats pulled as a request prompt in GPT 4o
#linkedinlayoffs #AI #skills
1,739 reactions · 107 comments · 20 reposts
This ChatGPT user asked 4o to compare her usage to that of other professionals. ChatGPT duly produced a statistical breakdown of her prompts and benchmarked it against other users.
A couple of problems:
4o (I believe) lacks access to the user’s full prompt history. At most it has some memories - not enough to summarise or count queries.
4o (definitely) does not have access to data about other users’ queries.
Software engineers find this obvious. We know how you would implement support for this query. And we intuit that OpenAI haven’t done that yet.
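For illustration, here is a rough sketch of the kind of backend aggregation such a feature would require, written against a hypothetical messages table. Nothing in the product suggests this pipeline exists, and the model certainly cannot run it from inside a chat:

```python
# A sketch of the backend query OpenAI would need to answer "how does my
# usage compare?". The `messages` table and schema are hypothetical.
import sqlite3

def usage_percentile(db: sqlite3.Connection, user_id: str) -> float:
    """Fraction of users who have sent fewer messages than `user_id`."""
    my_count = db.execute(
        "SELECT COUNT(*) FROM messages WHERE user_id = ?", (user_id,)
    ).fetchone()[0]
    below, total = db.execute(
        """
        SELECT SUM(cnt < ?), COUNT(*)
        FROM (SELECT COUNT(*) AS cnt FROM messages GROUP BY user_id) AS per_user
        """,
        (my_count,),
    ).fetchone()
    return (below or 0) / total
```

Answering the LinkedIn user's question honestly requires a privileged scan over every user's message history. A chat model has no such access; when asked anyway, it simply generates plausible-looking numbers.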
But a normal user’s mental model is very different. They see an oracle which gives mostly useful and correct answers to questions. There is no clear pattern to explain when the oracle gives bad answers. So they average: assume every answer is probably correct until proven otherwise.
I don’t mean to pick on this one LinkedIn user: they are merely illustrative of a wider phenomenon. Search for “I asked ChatGPT” and many examples present themselves (1, 2, 3).
Does this matter?
Bad things happen when common understanding of a system diverges from the actual reality of a system.
Some examples:
Advanced driver assistance systems
Mental model: Tesla Autopilot can drive my car
True model: Autopilot can drive the car, most of the time, under certain conditions

Chat assistants
Mental model: ChatGPT is a singleton, persistent entity
True model: ChatGPT is a program that runs independently for each request
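To make the true model concrete, here is a minimal sketch using the OpenAI Python SDK (the model name is illustrative). The server keeps no conversational state between calls; the client must resend the whole history every time:

```python
# Every API request is independent: "memory" exists only because the client
# replays the full conversation in `messages` on each call.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

history = [{"role": "user", "content": "My name is Ada."}]
reply = client.chat.completions.create(model="gpt-4o", messages=history)
history.append({"role": "assistant", "content": reply.choices[0].message.content})

# Omit the earlier turns from `messages` and the model has no idea who "I" is.
history.append({"role": "user", "content": "What is my name?"})
reply = client.chat.completions.create(model="gpt-4o", messages=history)
```

The persistence lives in the client, not in some entity on the other end.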
A more worrying phenomenon has been termed ‘ChatGPT-induced psychosis’. There are several anecdotes on Reddit describing how people have been sucked into harmful delusions of a mystical or spiritual nature.
Mental model: ChatGPT confidently affirms my spirituality queries and so is correct
True model: ChatGPT is designed to be sycophantic and keep the conversation going
Understanding how an LLM works should inoculate against these delusions.
It is still unknown whether users’ understanding of LLMs will improve over time, as it did for older technologies. Or is the conversational interface so dazzling that people will never see LLMs as ‘just ordinary software’?
Fixing the mental model
Can we help users form a better mental model of what the LLM is doing?
OpenAI’s token effort (writing ‘ChatGPT can make mistakes’ on every page) appears insufficient.
Query context is important. Many users do not realise what data the LLM does (and does not) have access to. For example:
training data: lots of answers are fuzzily retrieved from the training inputs
system prompt: information about the user, today’s date, etc.
memories: facts learned and stored (often silently) from past queries
tools: data from a vector database or internet searches
If the user can see whether the input actually includes ‘statistical analysis of ChatGPT queries made by sales professionals’, they can judge for themselves whether the model is able to answer that question.
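As a toy sketch (a deliberately simplified pipeline, not OpenAI’s real one), everything the model ‘knows’ for a given request is whatever gets concatenated into its input:

```python
# Toy context assembly: the model can only answer from what lands in this
# string. All names and structure here are illustrative, not OpenAI's.
from datetime import date

def build_context(system_prompt: str, memories: list[str],
                  tool_results: list[str], user_query: str) -> str:
    parts = [
        system_prompt,
        f"Today's date: {date.today().isoformat()}",
        "Memories:\n" + "\n".join(f"- {m}" for m in memories),
        "Tool results:\n" + "\n".join(tool_results),
        f"User: {user_query}",
    ]
    return "\n\n".join(parts)

ctx = build_context(
    system_prompt="You are a helpful assistant.",
    memories=["User works in sales."],
    tool_results=[],  # nothing retrieved: no usage statistics available
    user_query="Compare my ChatGPT usage to other sales professionals.",
)
```

If no tool call injected usage statistics into the context, the model has nothing to count; it can only confabulate them.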
What is today’s user allowed to know?
training data: kept secret to avoid triggering the wrath of copyright holders, and to maintain an edge over other model providers.