Apple, along with OpenAI, Google, Microsoft, Meta, and others, has agreed to adopt a set of artificial intelligence safeguards set forth by the Biden-Harris administration.
The move was announced by the administration on Friday.
The news comes ahead of Apple’s much-awaited launch of Apple Intelligence (Apple’s name for its AI features), which will become widely available in September with the public launch of iOS 18, iPadOS 18, and macOS Sequoia.
The new features, unveiled by Apple in June, are not yet available in beta, but the company is expected to roll them out gradually in the months to come.
Apple is among the signatories of the Biden-Harris administration’s AI Safety Institute Consortium (AISIC), which was created in February.
With this latest move, the company has pledged to abide by a set of safeguards that include testing AI systems for security flaws and sharing the results of those tests with the U.S. government, developing mechanisms that let users know when content is AI-generated, and developing standards and tools to ensure AI systems are safe.
However, the safeguards are voluntary and not enforceable, meaning the companies won’t face consequences for failing to abide by them.
In addition, Apple’s upcoming set of AI features includes integration with OpenAI’s powerful AI chatbot, ChatGPT. The announcement prompted X owner and Tesla and xAI CEO Elon Musk to warn that he would ban Apple devices at his companies, calling the integration an “unacceptable security violation.” Musk’s companies are notably absent from the AISIC signatory list.