ARTIFICIAL INTELLIGENCE HOLDS GREAT PROMISE FOR THE FUTURE OF PRODUCTIVITY. HOWEVER, WHAT HAPPENS WHEN THINGS GO AWRY? DR. YULIA SULLIVAN, ASSISTANT PROFESSOR OF INFORMATION SYSTEMS AND BUSINESS ANALYTICS, INVESTIGATES THE INTRIGUING QUESTION OF ACCOUNTABILITY IN AI SYSTEMS.
“Our major question is, who will be held accountable if AI is involved in a wrongdoing? We look at two different dimensions: perceived experience and perceived agency. Perceived agency means we attribute intention, reasoning, and the pursuit of goals, because AI has a goal. When we observe these qualities in AI, we attribute agency to AI and expect it to be a moral agent. If the AI makes a mistake or violates societal standards, then we will blame the AI.”
WHILE AI IS FREQUENTLY HELD ACCOUNTABLE, IT'S ESSENTIAL TO RECOGNIZE THAT RESPONSIBILITY MAY EXTEND TO OTHER ENTITIES AS WELL.
“Companies, developers, and end users at some point could also be held accountable. What we found is that people mostly blame the developer for either intentional or unintentional harm that they observe in AI, because these developers are the ones who built and trained the system.”
SULLIVAN’S RESEARCH HIGHLIGHTS THE RESPONSIBILITY OF DEVELOPERS AND COMPANIES TO INTEGRATE MORAL VALUES INTO AI DESIGNS FOR ETHICAL CONSIDERATIONS.
THE BUSINESS REVIEW IS A PRODUCTION OF LIVINGSTON AND MCKAY AND THE HANKAMER SCHOOL OF BUSINESS AT BAYLOR UNIVERSITY.