Artificial intelligence in the military poses threats to security

Michael McKee, Associate Editor

DARPA (the Defense Advanced Research Projects Agency) announced a $2 billion investment in artificial intelligence (AI) research as part of its AI Next campaign. For the U.S. military, this news could mean monumental changes in how it fights.

With new developments in AI, research is not far from autonomous tanks, missiles and machine guns. This poses a question: is this kind of research worth the outcome, or could it lead to a new age of lethal machines?

DARPA traces its origins to the 1957 launch of the Soviet satellite Sputnik. After this, the United States vowed to “be the initiator and not the victim of strategic technological surprises,” according to DARPA’s website. With the AI Next campaign, the agency will work to develop ways to make machines “less of tools and more of human-curated programs.”

These new developments worry me with respect to machine consciousness: what if AI advances to a stage where its free will outpaces human-level thinking? This is especially worrying for self-defense. For example, an infantry regiment is holed up in a makeshift camp in hostile territory, and the two soldiers on guard have an autonomous machine gun set up. In the middle of the night, a Humvee comes up the road toward the compound. The soldiers identify the Humvee as friendly, but the machine gun decides otherwise and opens fire on the “hostile” vehicle.

I do not believe we should introduce autonomous machines into a field as consequential as defending our country. Human error will happen no matter what, but I hope we do not make the error of placing human lives in the hands of self-thinking machines, especially on the battlefield.