U.S. Northern Command (NORTHCOM) recently conducted a series of tests known as the Global Information Dominance Experiments, or GIDE, which combined global sensor networks, artificial intelligence (AI) systems, and cloud computing resources in an attempt to “achieve information dominance” and “decision-making superiority.” According to NORTHCOM leadership, the AI and machine learning tools tested in the experiments could someday offer the Pentagon a robust “ability to see days in advance,” meaning it could predict the future with some reliability based on evaluating patterns, anomalies, and trends in massive data sets. While the concept sounds like something out of Minority Report, the commander of NORTHCOM says this capability is already enabled by tools readily available to the Pentagon.
Go here to read the rest. Who knew the Terminator movies were prophecy?
Idiocracy has been proving itself pretty prophetic lately, so why not Terminator? sigh
The more probable danger is not that such a system would be too smart, but that it would be too dumb and trusted anyway. The story of “machine learning” is always the same: impressive results in a narrow domain that closely resembles the training set, bizarre and unpredictable results outside of it. And by its very nature, it is impossible to determine what a machine learning algorithm is really doing: whether it has latched onto a deep and useful pattern that humans would miss, or whether it is associating things that have absolutely nothing to do with each other.
The chance of such a program gaining sentience and deciding to kill humans is minuscule. The chance of it doing something like declaring that we need to invade India because they are about to launch a nuclear strike against us, when in fact that prediction was driven by posts from random Indian civilians that had nothing to do with war, is much greater.
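To make that failure mode concrete, here is a toy sketch (entirely hypothetical data, nothing to do with any real Pentagon system). A learner is trained on data where a spurious feature, think of it as the irrelevant forum posts, happens to track the outcome almost perfectly, while the genuine signal is weak and noisy. The model predictably latches onto the spurious feature, scores near-perfectly in training, and collapses to roughly coin-flip accuracy the moment the spurious correlation stops holding:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Labels in {-1, +1}: the "event" we want to predict.
y_train = rng.choice([-1, 1], size=n)

# Feature 0: the genuine but weak signal (buried in heavy noise).
real = y_train + rng.normal(0, 2.0, n)
# Feature 1: a spurious artifact that tracks the label almost
# perfectly in the training set (the "forum posts").
spurious = y_train + rng.normal(0, 0.3, n)
X_train = np.column_stack([real, spurious])

# A plain least-squares linear classifier stands in for any learner.
w, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

def predict(X, w):
    return np.where(X @ w >= 0, 1, -1)

train_acc = (predict(X_train, w) == y_train).mean()

# "Deployment": the spurious feature is now unrelated to the label.
y_test = rng.choice([-1, 1], size=n)
X_test = np.column_stack([y_test + rng.normal(0, 2.0, n),
                          rng.normal(0, 0.3, n)])
test_acc = (predict(X_test, w) == y_test).mean()

print(f"train accuracy: {train_acc:.2f}")  # near-perfect
print(f"test accuracy:  {test_acc:.2f}")   # barely better than chance
```

The model itself gives no warning that anything is wrong; its weights look reasonable, and nothing about the training numbers distinguishes "found a deep pattern" from "memorized an accident of the data."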