Stephen Hawking, Elon Musk and others come up with guidelines for safe use of AI

Think your webcam spies on you? Worried your refrigerator is getting flirty with your wife? Or that your mixer sneaks out at night in clown makeup to kill people? Good news: you can come out of the basement and take off the tin foil hat, because the Beneficial AI conference has developed 23 principles to guide future AI research.

 

At the Beneficial AI 2017 conference, held January 5–8 at a conference center in Asilomar, California, the Future of Life Institute brought together more than 100 AI researchers from academia and industry, along with leaders in economics, law, ethics, and philosophy, to formulate principles for beneficial AI.

 

The result was the 23 Asilomar AI Principles. They set out research guidelines, such as “The goal of AI research should be to create not undirected intelligence, but beneficial intelligence” and “An arms race in lethal autonomous weapons should be avoided”; identify ethics and values, such as safety and transparency; and address long-term issues. AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures. Superintelligence, the principles state, should be developed only in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.

 

These principles have found proponents among leaders in their respective fields, most eminently physicist Stephen Hawking and Tesla CEO Elon Musk, who was also present at the conference.

 

Since there is no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities. Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact. These principles are an important step in that direction.

