The United States, the United Kingdom, and more than a dozen other countries have unveiled the first detailed international agreement on protecting AI from rogue actors, urging companies to build AI systems that are "secure by design," as reported by Reuters on November 27. In the 20-page document, the 18 signatory countries agreed that companies designing and using AI must develop and deploy it in a way that protects customers and the wider public from abuse. The agreement is non-binding and consists mostly of general recommendations, such as monitoring AI systems for misuse, protecting data from tampering, and vetting software suppliers. Besides the United States and the United Kingdom, the signatories include Germany, Italy, the Czech Republic, Estonia, Poland, Australia, Chile, Israel, Nigeria, and Singapore. Jen Easterly, director of the U.S. Cybersecurity and Infrastructure Security Agency, said it was significant that so many countries agreed AI systems must put security first. The agreement is the latest in a series of government initiatives around the world to shape the development of AI, few of which carry real enforcement power, even as AI's influence spreads into more and more industries.