Subject Categories
NCCU
Harvard
- General Management
- Marketing
- Entrepreneurship
- International Business
- Accounting
- Finance
- Operations Management
- Strategy
- Human Resource Management
- Social Enterprise
- Business Ethics
- Organizational Behavior
- Information Technology
- Negotiation
- Business & Government Relations
- Service Management
- Sales
- Economics
- Teaching & the Case Method
Latest Cases
- Leadership Imperatives in an AI World
- Vodafone Idea Merger - Unpacking IS Integration Strategies
- Predicting the Future Impacts of AI: McLuhan’s Tetrad Framework
- Snapchat’s Dilemma: Growth or Financial Sustainability
- V21 Landmarks Pvt. Ltd: Scaling Newer Heights in Real Estate Entrepreneurship
- Did I Just Cross the Line and Harass a Colleague?
- Winsol: An Opportunity For Solar Expansion
- Porsche Drive (B): Vehicle Subscription Strategy
- Porsche Drive (A) and (B): Student Spreadsheet
- TNT Assignment: Financial Ratio Code Cracker
AI-at-Scale Hinges on Gaining a 'Social License'
For AI deployments to succeed, the systems must be trusted and accepted by those who use their input and those who are affected by the decisions these systems make or support. That means being accountable for the use and outputs of AI technologies, and transparently communicating both the benefits and drawbacks to all stakeholders. The authors describe the three sources of stakeholders' trust in artificial intelligence and suggest four steps companies can take to earn that trust.
AI Regulation Is Coming
For years public concern about technological risk has focused on the misuse of personal data. But as firms embed more and more artificial intelligence in products and processes, attention is shifting to the potential for bad or biased decisions by algorithms--particularly the complex, evolving kind that diagnose cancers, drive cars, or approve loans. Inevitably, many governments will feel regulation is essential to protect consumers from that risk. This article explains the moves regulators are most likely to make and the three main challenges businesses need to consider as they adopt and integrate AI. The first is ensuring fairness. That requires evaluating the impact of AI outcomes on people's lives, whether decisions are mechanical or subjective, and how equitably the AI operates across varying markets. The second is transparency. Regulators are very likely to require firms to explain how the software makes decisions, but that often isn't easy to unwind. The third is figuring out how to manage algorithms that learn and adapt; while they may be more accurate, they also can evolve in a dangerous or discriminatory way. Though AI offers businesses great value, it also increases their strategic risk. Companies need to take an active role in writing the rulebook for algorithms.