In 1957 Frank Rosenblatt, a psychologist, built a machine called the Perceptron. Modelled on the human brain, its neural networks were a forerunner of today’s artificial intelligence (ai). It intrigued the cia, which was drowning in photos from spy planes and satellites. The agency funded the Perceptron in the hope of automatically identifying objects of interest. The experiment failed: there was not enough computing power, storage or training data available. But it was a start.
Spy agencies used machine learning to sift through images and text in the cold war, and then to identify patterns in billions of phone records after 9/11. Although advances in algorithms and computing power over the past decade have made those models faster and better, most agencies still believe ai will assist humans rather than replace them. However, the Perceptron’s successors, large language models (llms) like gpt-4, are beginning to challenge that assumption.
Start with geospatial intelligence (geoint). Machines have not solved the problem that led to the cia’s interest in the Perceptron: too many images from space, too few people and too little time to sort through them. Vice-Admiral Frank Whitworth, who runs America’s National Geospatial-Intelligence Agency (nga), points out that the number of humans in his agency—around 14,000 today compared with 32,000 in the National Security Agency (nsa)—will rise more slowly than the “terabytes from space”.
Computer vision is helping deal with the deluge. “If you started a shift at 7.30am,” says General Sir Jim Hockenhull, who oversees British defence intelligence, “you might get to the important image at 1pm.” Now algorithms flag up key changes, and analysts are at least twice as productive. “Through my career, intelligence analysts used to spend 80% of their time wrangling the information and 20% adding value,” he says. “In the geospatial world, we’ve been able to flip that.”