A senior Microsoft engineer has sounded the alarm about potential risks posed by Copilot Designer, the tech giant’s AI image generator. Shane Jones, a principal software engineering manager, urged the US government to investigate the tool over safety concerns.

In a letter to the Federal Trade Commission and Microsoft’s board, Jones highlighted Copilot Designer’s capability to generate inappropriate imagery depicting explicit content, violence, underage substance use, and political bias or conspiracy theories. He stressed the importance of educating the public, especially parents and educators, about the technology’s dangers in settings such as schools.

Despite Jones’ repeated internal efforts over three months to address these issues, Microsoft declined to remove Copilot Designer from public access or to implement adequate safeguards, he revealed. His suggestions to add disclaimers and change the app’s rating were disregarded.

Microsoft responded by stating that it is committed to addressing employee concerns in line with company policies and that it appreciates efforts to enhance the safety of its technology. This wasn’t Jones’ first AI safety warning: months earlier, he publicly urged OpenAI to pull DALL-E, the model powering Copilot Designer, until its risks were reduced. Despite pressure from Microsoft’s legal team to retract that statement, Jones persisted, even contacting US senators.

The incident adds to growing scrutiny of AI technologies across the tech industry. Google recently paused access to its AI image generator Gemini after complaints that it produced racially insensitive and historically inaccurate visuals, with Google DeepMind’s CEO assuring users the feature would be reinstated once the concerns were addressed.

As generative AI capabilities rapidly advance, responsible development that ensures public safety remains paramount. Openly addressing potential harms allows proper guardrails to be put in place before unintended consequences manifest at scale. Microsoft’s internal ethics processes now face renewed examination in the wake of this whistleblowing event.