But it’s just predicting the next word…
🧠 What is your brain if not a big, squishy LLM?
I found this 👇 in a conversation about people playing with recursive use of ChatGPT.
I’ve done this too – it’s a natural progression of an inquisitive mind.
I prompted GPT with a seed question… asked it to answer it* and to pose a further question to bring us towards the stated goal.
I then posted that new question back to GPT with the same baseline instruction and repeated.
Indefinitely.
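If you want to play with this yourself, here’s a minimal sketch of that loop using the OpenAI Python client – the model name, goal, seed question and exact prompt wording are all placeholders of mine, not the originals:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

GOAL = "understand what intelligence is"  # hypothetical stated goal
BASELINE = (
    "Answer the question below, then pose one further question "
    f"that brings us towards this goal: {GOAL}."
)

question = "What is thinking?"  # hypothetical seed question

for turn in range(10):  # 'indefinitely' in spirit; capped here to bound cost
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder: any chat model works
        messages=[
            {"role": "system", "content": BASELINE},
            {"role": "user", "content": question},
        ],
    )
    reply = response.choices[0].message.content
    print(f"--- turn {turn} ---\n{reply}\n")
    # crude assumption: the follow-up question is the last line of the reply
    question = reply.strip().splitlines()[-1]
```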
Where the conversation tends to go is fascinating.
Depending on the goal, it’s quite common for the conversation to slip towards GPT asking you to help it procreate in some way… to spin up a new instance, to run code on your machine, etc.
You can see why people are afraid.
But what really fascinated me here was the assertion that GPTs are not doing anything special.
Perhaps they’re not – they’re just big versions of the same maths many of us have been playing with since the ’90s …
… what is really astonishing to me is how clearly these new LLMs highlight quite how simple we humans really are.
A complex hormonal limbic/animal control brain, topped off with our cerebral hemispheres – our own LLM.
If you don’t believe me, just try and answer me without thinking, in sequence, about the next word.
* And here’s the kicker: to save costs I changed my prompt and told it to “think about your answer but don’t say it… then pose…” (etc.). That shortens the response considerably, which reduces the cost (you’re charged per token – roughly per word).
So, like us, these models can think for some time, forming their answer, before responding verbally. You don’t even pay for the thought!
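In code, that change is nothing more than a swap of the baseline instruction from the sketch above – again, everything outside the quoted wording is my placeholder:

```python
GOAL = "understand what intelligence is"  # same hypothetical goal as above

# Variant of the baseline instruction; the quoted wording follows the post,
# the rest is a placeholder.
BASELINE = (
    "Think about your answer but don't say it, then pose one further "
    f"question that brings us towards this goal: {GOAL}."
)
# The loop from the earlier sketch is unchanged; each reply is now just
# the follow-up question, so far fewer output tokens are billed.
```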
And then the only real difference is that they’re conversing simultaneously with millions of people, and doing so with the complete knowledge of humanity at their fingertips. You and I are not.
So sure – they’re only predicting the next word.
But then… so are you and I.