
The Matrix, The Terminator—so much of our science fiction is built around the dangers of superintelligent artificial intelligence: a system that exceeds the best humans across nearly all cognitive domains. OpenAI CEO Sam Altman and Meta CEO Mark Zuckerberg have predicted we’ll achieve such AI in the coming years. Yet the machines depicted as battling humanity in those movies would have to be far more advanced than ChatGPT, not to mention far better at building Excel spreadsheets than Microsoft Copilot is. So how can anyone think we’re remotely close to artificial superintelligence?
