On June 7 Google pledged not to “design or deploy AI” that would cause “overall harm,” or to develop AI-directed weapons or use AI for surveillance that would violate international norms. It also pledged not to deploy AI whose use would violate international laws or human rights. While the statement is vague, it represents one starting point. So does the idea that decisions made by AI systems should be explainable, transparent, and fair. To put it another way: How can we make sure that the thinking of intelligent machines reflects humanity’s highest values? Only then will they be useful servants and not Frankenstein’s out-of-control monster.