Microsoft Reviews Use of Azure and AI Technology in Gaza
Microsoft has conducted an internal review and engaged an external firm to assess whether its Azure and AI technology has been used to harm Palestinian civilians or anyone else in Gaza. The company says it found no evidence of such use.
The review was prompted by employee protests against Microsoft’s contracts with the Israeli government. Employees have called on the company to cut ties with the Israeli Ministry of Defense (IMOD), citing concerns over the use of Microsoft’s technology in the conflict.
Microsoft stated that its relationship with IMOD is “structured as a standard commercial relationship” and that it has “found no evidence that Microsoft’s Azure and AI technologies, or any of our other software, have been used to harm people or that IMOD has failed to comply with our terms of service or our AI Code of Conduct.”

The review included interviews with dozens of employees and an assessment of documents to identify any potential misuse of Microsoft’s technology. However, the company acknowledged that it lacks visibility into how customers use its software on their own servers or devices, which limited the review’s scope.
The issue has sparked controversy among Microsoft employees, with some protesting the company’s involvement with the Israeli military. Two former employees disrupted a Microsoft event, calling for the company to stop using AI for what they described as “genocide in our region.”
A group called No Azure for Apartheid, made up of current and former Microsoft employees, has criticized Microsoft’s statement as contradictory. They argue that the company cannot simultaneously claim to lack insight into how its technology is used and assert that the technology is not being used to harm people in Gaza.
The group has highlighted reports that the Israeli military has used Microsoft’s Azure and OpenAI technology for mass surveillance and to transcribe and translate communications. Microsoft reportedly provided 19,000 hours of engineering support and consultancy services to the Israeli military in a deal valued at around $10 million.
Microsoft responded by stating that militaries typically use proprietary software or applications from defense-related providers for surveillance and operations. The company emphasized that it has not created or provided such software or solutions to IMOD.
Critics, including Hossam Nasr, an organizer of No Azure for Apartheid, have condemned Microsoft’s involvement with the Israeli military. Nasr described Microsoft’s statement as “filled with both lies and contradictions” and argued that selling technology to an army accused of genocide is unethical.
The debate highlights the complex ethical considerations tech companies face when working with governments and militaries. Microsoft’s review and statement have not resolved the concerns of its employees and critics, who continue to question the company’s role in the conflict.