The U.S. House of Representatives has restricted congressional staffers from using Microsoft’s AI assistant Copilot, citing security concerns. The House’s Chief Administrative Officer, Catherine Szpindor, flagged Copilot as a potential risk for leaking sensitive House data due to its reliance on non-approved cloud services. This move follows a broader trend of government agencies grappling with the secure use of artificial intelligence tools.

Microsoft’s Copilot, built on technology from ChatGPT creator OpenAI, is an AI assistant integrated into Windows and Microsoft 365. It can answer questions, draft and summarize documents, and assist with other office tasks, aiming to streamline staffers’ workflows. However, the House Office of Cybersecurity raised concerns about Copilot potentially transmitting user data to unauthorized cloud storage.

In response, the House is removing and blocking Copilot on all government-issued Windows devices. The decision applies specifically to the commercially available version of Copilot. Szpindor’s office clarified that it will evaluate a government-approved version of Copilot from Microsoft when one becomes available later this year.

This incident highlights the ongoing challenges of balancing the potential benefits of AI with data security needs, especially for sensitive government work. While AI tools can offer significant productivity gains, concerns about data privacy and potential security vulnerabilities require careful consideration before widespread adoption.

Microsoft acknowledged the security concerns and emphasized its commitment to addressing them. A spokesperson pointed to the company’s work on government-specific AI tools that comply with stricter security and regulatory standards, which are expected to roll out by summer 2024.

The House’s decision comes amid a broader push for stricter AI oversight in the U.S. President Biden signed an executive order in late 2023 outlining new standards for AI development and safeguards against potential risks. The Copilot restriction serves as a real-world example of the complexities involved in ensuring the secure and responsible use of AI within government institutions.