The tech world’s attention right now seems to be entirely captivated by OpenAI’s ChatGPT, which to many people represents a qualitative jump in the capabilities of language models. At this moment, I think it’s important to take a step back and take stock of the situation, to ask on the one hand whether the current hype is justified, and to consider on the other hand whether the whole project is really as dangerous as some claim it is. There are already a good dozen factions with differing views and interpretations of AI. I'll try to cover as many of them as I can, one by one.
"AI and humans can co-evolve"
Right, AI will evolve by scaling up, changing its neural architecture, thinking at a thousand or a million times human speed; and humans will keep up by... doing what, exactly?