Tech giant Palantir has pushed back against concerns that military use of its AI platforms could lead to unforeseen risks, insisting in an exclusive interview with the BBC that responsibility for how the technology is used lies with its military customers.
It comes as experts have expressed concern over the use of Palantir's AI-powered defence platform - Maven Smart System - during wartime and its reported use in US attacks on Iran. Analysts have warned that the military's use of the platform, which helps personnel plan attacks, leaves little time for meaningful verification of its output and could lead to incorrect targets being hit.
But the company's UK and Europe head, Louis Mosley, told the BBC in a wide-ranging interview that while AI platforms like Maven have been instrumental to the US management of the Iran war, the responsibility for how their output is used must always remain with the military organisation.
"There's always a human in the loop, so there is always a human that makes the ultimate decision. That's the current set-up," Mosley said.
Maven, the Pentagon programme behind the Maven Smart System, was launched in 2017 and is designed to speed up military targeting decisions by bringing together masses of data, including a range of intelligence, satellite, and drone imagery. The system analyses this data and provides targeting recommendations, suggesting levels of force based on available military resources.
Amid growing scrutiny over the use of such tools in warfare, Palantir maintains that its AI serves as a guide to assist military personnel in their decision-making, rather than as an automated targeting system. Mosley emphasised that the platform is intended as a support tool that synthesises information, stressing the importance of human oversight in military decisions.
There has been increasing criticism from experts over the implications of AI in military operations, with concerns raised following reports of airstrikes that may have caused significant civilian casualties. Congressional figures have also called for stricter regulation of AI use, arguing that keeping human decision-makers in the loop is crucial to preventing disastrous outcomes.
While the use of AI in military operations continues to evolve, Palantir's integration of the Maven system exemplifies the ongoing debate over automation in the context of warfare, and the ethical implications that arise from relying on technology for critical decisions.
















