Attendees examining a robot at a Microsoft developers conference in March. Credit: David Paul Morris/Bloomberg
Five major technology companies said Wednesday that they had created an organization to set the ground rules for protecting humans — and their jobs — in the face of rapid advances in artificial intelligence.

The Partnership on AI unites Amazon, Facebook, Google, IBM and Microsoft in an effort to ease public fears of machines that are learning to think for themselves and, perhaps, to ease corporate anxiety over the prospect of government regulation of this new technology.

The organization has been created at a time of significant public debate about artificial intelligence technologies that are built into a variety of robots and other intelligent systems, including self-driving cars and workplace automation.

The industry group introduced a set of basic ethical standards for engineering development and scientific research that its five members have agreed upon.

In a conference call on Wednesday, five artificial intelligence researchers representing the companies said they believed the technology would be a major force for social and economic benefit, but they acknowledged its potential for misuse in a wide variety of ways.

They said their effort was not intended to be an enforcement organization to force technology companies into self-regulation. Rather, they want to foster "public understanding" and set "best practices" for work in artificial intelligence.

Mustafa Suleyman, head of applied A.I. for DeepMind, an artificial intelligence development company acquired by Google in 2014.
"We passionately believe in the potential for it to transform in a positive way our world," said Mustafa Suleyman, head of applied A.I. for DeepMind, an artificial intelligence development company acquired by Google in 2014. "We believe it's critical now to start to think about new models of engagement with the public, new models of collaboration across the industry and new models of transparency around the work that we do."

The group released eight tenets that are evocative of Isaac Asimov's original "Three Laws of Robotics," which appeared in a science fiction story in 1942. The new principles include high-level ideals such as, "We will seek to ensure that A.I. technologies benefit and empower as many people as possible."

Nevertheless, at least one of the tenets implies that the companies realize they could be drawn into sticky ethical situations, and it calls on engineers to oppose the use of artificial intelligence technology in weapons or other tools that could be used to violate human rights.

"With the hyperbole about A.I. over the last two to four years, there have been concerns in an echo chamber of anxiety that the government itself will be misinformed," said Eric Horvitz, managing director for Microsoft Research.

The researchers said they were talking with other companies like Apple and research laboratories like the new nonprofit research group OpenAI about participating in their organization.