Organizers: Tae Wan Kim | Ignacio Ferrero, email@example.com | Alejo Jose G. Sison
Business enterprises increasingly use artificial intelligence techniques to make significant decisions for humans. YouTube, Amazon, Google, and Facebook customize what users see. Uber and Lyft algorithmically match passengers with drivers and set prices. Tesla’s advanced driver-assistance systems help drivers steer and brake. Although each of these examples involves its own complex technology, they share a core: a data-trained set of decision rules (often called “machine learning”) that implements decisions with little or no human intermediation. Such features raise various ethical issues and managerial responsibilities. Amazon used AI to screen job candidates and shut the system down after recognizing that it was biased against women. Microsoft had to pull its first AI-based Twitter bot, Tay, shortly after launch when the chatbot posted racist and misogynistic tweets. Tesla vehicles have been involved in fatal crashes, yet the black-box nature of the technology makes it difficult to determine whether these were accidents or incidents. This call seeks papers that examine ethical issues in using AI techniques, broadly defined, in business.