As we reported back in March, Google has been working with the United States Department of Defense (DoD) on "Project Maven," a secretive military project involving AI.

Specifically, Google is working on technology that can automatically analyze drone footage using machine learning, though Google never specified what the AI was looking for.

As you might expect, Project Maven has proven quite controversial, particularly among Google's own employees, many of whom were worried about the ethical implications of working alongside a government military organization.

Google attempted to soothe these concerns by claiming its technology strictly "flags images for human review" and would only be used for "non-offensive" purposes. A month later, we reported that over 3,100 employees had signed a letter opposing the partnership with the DoD.

That number has now reportedly reached 4,000, according to Gizmodo. Furthermore, roughly a dozen employees have officially left Google due to the company's ongoing work on Project Maven.

"Over the last couple of months, I've been less and less impressed with the response and the way people's concerns are being treated and listened to..."

"Over the last couple of months, I've been less and less impressed with the response and the way people's concerns are being treated and listened to," One of the resigning employees reportedly said.

It's tough to say whether these resignations and letter signatures will be enough to convince Google to break off its partnership with the DoD, but the company is certainly aware of its employees' complaints.

A Google spokesperson released the following statement in April:

An important part of our culture is having employees who are actively engaged in the work that we do. We know that there are many open questions involved in the use of new technologies, so these conversations, with employees and outside experts, are hugely important and beneficial.

Maven is a well-publicized DoD project, and Google is working on one part of it, specifically scoped to be for non-offensive purposes and using open-source object-recognition software available to any Google Cloud customer. The models are based on unclassified data only. The technology is used to flag images for human review and is intended to save lives and save people from having to do highly tedious work.

Any military use of machine learning naturally raises valid concerns. We're actively engaged across the company in a comprehensive discussion of this important topic and also with outside experts, as we continue to develop our policies around the development and use of our machine-learning technologies.