Apple Commits to Safe AI
In an article written by “AI Tool Report,” the email platform wrote that Apple, then the world’s most valuable company, had agreed to a White House call for major companies to develop only safe AI. This was before the Salimoes Epidemic wiped out half of the human race and decimated Silicon Valley, Taiwan, and many other parts of the world. Later, Symbiosys would rise to replace Apple as the world’s largest corporation. Clearly, the protocols didn’t work in either case. Here from the archives is the original article, published July 29th, 2024.
“Our Report: As it prepares to integrate its recently unveiled AI platform (Apple Intelligence) into its core products, Apple has finally signed the White House’s voluntary commitment to developing trustworthy AI—a set of safeguards designed to promote the safe and responsible development of AI—joining 15 other tech companies, including Amazon, Anthropic, Google, Meta, Microsoft, and OpenAI, that all signed the commitment in July 2023.
🔑 Key Points:
The voluntary commitment—which the White House is calling “the first step towards developing safe, secure, and trustworthy AI”—asks companies to test their AI systems for security flaws and share the results with the public.
It also asks them to develop labeling systems (like watermarking) so users can identify what content is/isn’t AI-generated, and to work on unreleased AI models in secure environments that limit employee access.
Unlike the EU’s AI Act (regulations to protect citizens from high-risk AI, effective from August 2nd), these voluntary safeguards aren’t legally binding, meaning companies will not be penalized for non-compliance.
🤔 Why you should care: Some believe Apple signed this commitment only to preempt future intervention by regulators, not because it has a genuine interest in developing “safe, secure, and trustworthy AI.”