Now we have model hallucination, black-box models and emergent behaviour.
Yes, I am talking about #ai and specifically the current set of #llm s that seem to be taking the tech world by storm.
When I learnt about computers in school, it was the 80s.
Back then, the computer lab was the only place in the school with AC.
And we had to take off shoes before entering…
One rule we learnt about computers was perhaps the most important:
GIGO – Garbage in Garbage out
While computers are great at doing many tasks,
if you feed them wrong information, they will give you wrong answers.
Fast forward to current times,
and now we have a far more complex set of problems on our hands:
Black-box models
- AI companies don’t want to reveal how and on what data a model was trained. Biases are there, but they must be discovered by users, with no clear path to correction.
Non-deterministic systems
- Most of the popular models are non-deterministic. Even if you give them the same information or question, they can give you different answers at different points in time.
Hallucination
- You may get a completely wrong answer to a seemingly simple question, based on made-up facts, wrong sources, or some combination (all of this is unknown). If you need to be 100% sure, say for business-critical or life-saving decisions, would you still want to use an AI model?
Emergent behaviour
- This is perhaps the most concerning, and downright scary. AI models can change their behaviour over time and show behaviour that the people building the model did not plan for and cannot explain.
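The non-determinism is not a bug in any one product; it comes from how these models pick their next word. At each step the model produces scores over possible tokens and then *samples* from them, so the same prompt can take different paths on different runs. A minimal sketch of that sampling step, with entirely made-up token names and scores (the "temperature" knob here mirrors the parameter most LLM APIs expose):

```python
import math
import random

def sample_with_temperature(logits, temperature, rng):
    """Sample one index from softmax(logits / temperature).

    Higher temperature flattens the distribution (more varied picks);
    temperature near 0 almost always picks the highest-scoring option.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1

# Hypothetical next-token scores for one prompt (illustrative only)
tokens = ["Paris", "London", "Rome", "Madrid"]
logits = [2.0, 1.0, 0.5, 0.1]

rng = random.Random()
# Ask the "same question" 20 times: the answers usually differ across runs
answers = {tokens[sample_with_temperature(logits, 1.0, rng)] for _ in range(20)}
print(answers)
```

This is a toy, not how any vendor's stack is actually implemented, but it shows why "same input, different output" is the expected behaviour rather than a malfunction.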
In essence, even if you don’t feed it garbage, it can still give out garbage.
And in most cases you don’t even know what it’s being fed: garbage, organic food, which cuisine….