• Bring Human Values to AI

    When it launched GPT-4 in March 2023, OpenAI touted its superiority to its already impressive predecessor, saying the new version was better in accuracy, reasoning ability, and test scores, all of which are AI-performance metrics that have been in use for some time. Most striking, however, was OpenAI's characterization of GPT-4 as "more aligned": perhaps the first time an AI product or service had been marketed in terms of its alignment with human values. In this article a team of five experts offers a framework for thinking through the development challenges of creating AI-enabled products and services that are safe to use and robustly aligned with generally accepted and company-specific values. The challenges fall into five categories, corresponding to the key stages of a typical innovation process, from design through development and deployment to usage monitoring. For each set of challenges, the authors present an overview of the frameworks, practices, and tools that executives can draw on to meet them.
  • AI Regulation Is Coming

    For years public concern about technological risk has focused on the misuse of personal data. But as firms embed more and more artificial intelligence in products and processes, attention is shifting to the potential for bad or biased decisions by algorithms, particularly the complex, evolving kind that diagnose cancers, drive cars, or approve loans. Inevitably, many governments will feel regulation is essential to protect consumers from that risk. This article explains the moves regulators are most likely to make and the three main challenges businesses need to consider as they adopt and integrate AI. The first is ensuring fairness, which requires evaluating the impact of AI outcomes on people's lives, whether decisions are mechanical or subjective, and how equitably the AI operates across different markets. The second is transparency: regulators are likely to require firms to explain how their software makes decisions, but those decisions often aren't easy to unwind. The third is figuring out how to manage algorithms that learn and adapt; though they may be more accurate, they can also evolve in dangerous or discriminatory ways. AI offers businesses great value, but it also increases their strategic risk, and companies need to take an active role in writing the rulebook for algorithms.
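    As an illustration of the fairness challenge described above, the sketch below computes selection rates for an automated decision across demographic groups and flags any group whose rate falls below four-fifths of the best-off group's rate, a threshold often cited in US hiring guidance. This is a minimal, hypothetical example, not a regulatory standard or the article's own method; the group labels and decision data are invented for illustration.

        from collections import defaultdict

        def disparate_impact(decisions, groups, threshold=0.8):
            # Selection rate per group: share of positive decisions (1 = approved).
            approved, total = defaultdict(int), defaultdict(int)
            for decision, group in zip(decisions, groups):
                total[group] += 1
                approved[group] += int(decision)
            rates = {g: approved[g] / total[g] for g in total}
            best = max(rates.values())
            # A group fails the "four-fifths rule" if its selection rate is
            # under 80% of the most favored group's rate.
            return {g: {"rate": r, "passes": r >= threshold * best}
                    for g, r in rates.items()}

        # Hypothetical loan decisions (1 = approved) for two applicant groups.
        decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
        groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
        print(disparate_impact(decisions, groups))
        # Group A approves 4/5 (0.80); group B approves 2/5 (0.40), which is
        # only half of A's rate, so B is flagged as failing the check.

    A real evaluation would go further, covering the subjective-versus-mechanical distinction and market-by-market equity the article raises, but even this simple ratio makes the fairness question measurable.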
  • When Machine Learning Goes Off the Rails

    Products and services that rely on machine learning (computer programs that constantly absorb new data and adapt their decisions in response) don't always make ethical or accurate choices. They may cause investment losses, biased hiring decisions, or car accidents, for instance. And as such offerings proliferate across markets, the companies creating them face major new risks, so executives need to understand and mitigate the technology's potential downside. Machine learning can go wrong in a number of ways. Because the systems make decisions based on probabilities, some errors are always possible. Their environments may evolve in unanticipated ways, creating disconnects between the data they were trained on and the data they are fed in production. And their complexity can make it hard to determine whether, or why, they made a mistake. A key question executives must answer is whether it's better to allow smart offerings to evolve continuously or to "lock" their algorithms and update them periodically. In addition, every offering will need to be tested appropriately before and after rollout and monitored regularly to make sure it's performing as intended.
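    The disconnect described above, between training data and live data, is commonly monitored as distribution drift. Below is a minimal sketch of one standard approach, assuming scipy is available: a two-sample Kolmogorov-Smirnov test comparing a feature's training distribution with a recent window of production values. The feature values and alert threshold are hypothetical; a deployed monitor would track many features and tune its threshold to the business's tolerance for false alarms.

        import numpy as np
        from scipy.stats import ks_2samp

        rng = np.random.default_rng(0)

        # Hypothetical feature values: the training sample versus a recent
        # production window whose distribution has shifted.
        train_values = rng.normal(loc=0.0, scale=1.0, size=5_000)
        live_values  = rng.normal(loc=0.4, scale=1.2, size=1_000)

        # Two-sample KS test: a small p-value suggests the live data no
        # longer looks like the data the model was trained on.
        result = ks_2samp(train_values, live_values)
        if result.pvalue < 0.01:  # illustrative alert threshold
            print(f"Drift suspected (p={result.pvalue:.2e}); "
                  "consider retraining or refreshing the locked model.")
        else:
            print("No significant drift detected.")

    A check like this speaks directly to the lock-versus-evolve question: a locked model needs drift alerts to know when a periodic update is due, while a continuously learning one needs them to catch dangerous shifts in what it is learning from.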
  • A Better Way to Onboard AI

    In a 2018 Workforce Institute survey of 3,000 managers across eight industrialized nations, the majority of respondents described artificial intelligence as a valuable productivity tool. But respondents also expressed fears that AI would take their jobs, and they are not alone: The Guardian recently reported that in the UK "more than 6 million workers fear being replaced by machines." AI's advantages can be cast in a dark light: Why would humans be needed when machines can do a better job? To allay such fears, employers must set AI up to succeed rather than to fail. The authors draw on their own and others' research and consulting on AI and information-systems implementation, along with organizational studies of innovation and work practices, to present a four-phase approach to implementing AI. It allows organizations to cultivate people's trust, a key condition for adoption, and to work toward a distributed cognitive system in which humans and artificial intelligence both continually improve.