Microsoft Says Office Bug Exposed Customers' Confidential Emails to Copilot AI
Microsoft has revealed a bug in its Office software that exposed customers' confidential emails to its Copilot AI chatbot. The bug, which has been fixed, meant that Copilot was reading and summarizing paying customers' confidential emails, bypassing data protection policies.
According to Microsoft, the bug was in the company's Office software, which is used by millions of people worldwide. It allowed Copilot to access and read confidential emails, including messages marked as private or sensitive.
Microsoft has since fixed the bug and apologized for the incident, assuring customers that their data is safe and that the exposure was not the result of any malicious activity.
The incident highlights the importance of data protection and the need for companies to ensure that their software and systems are secure. It also raises questions about the use of AI in business and the potential risks associated with it.
Microsoft has not said how many customers were affected or how long the bug had been present in the software, but the company says it is taking steps to prevent similar incidents in the future.
The bug was discovered by Microsoft's own security team while reviewing the company's software for potential vulnerabilities; the team reported it internally, and it was subsequently fixed.
Microsoft has a long history of developing and using AI in its products and services. The company has been at the forefront of AI research and development, and its products and services are used by millions of people around the world.
However, the company has also faced criticism over its handling of data and its deployment of AI. In 2020, its facial recognition software drew scrutiny after it was found to be biased against certain groups of people.
The company has since taken steps to address those concerns, developing new policies and procedures for the use of AI in its products and services. Even so, this incident shows that such safeguards can fail, reinforcing the need for transparency about how AI is used and for rigorous security around the data it can reach.