
Richard Taylor
Palmer Chair & Professor, Telecommunications Studies Emeritus, The Pennsylvania State University

Enterprises ignore the rapidly accelerating impacts of AI at their peril. This matters both at the national policy level and in the workplace. Two experts with valuable insights on these topics, Scott Shackelford, professor and executive director of the Ostrom Workshop and of the Indiana University-Bloomington Cybersecurity Program, and Elizabeth Adams, affiliate fellow of the Stanford Institute for Human-Centered Artificial Intelligence, answered a series of questions from John Gasparini, associate in the corporate department of Paul Hastings LLP, during the center stage panel “Cybersecurity, Privacy, and AI Ethics.” The conversation produced many actionable insights for leaders at all levels adjusting to AI-driven change.

In overview comments, Shackelford provided a comparative review of how the APEC nations are evaluating their cybersecurity strategies, particularly with a view to “public benefit.” Countries’ responses fall across a wide range, and collective norm building presents challenges: more developed and less developed countries have different laws and legal regimes.

His organization, the Ostrom Workshop, gives stakeholders a focal point for building a collective regional “knowledge commons” around privacy and AI, using AI itself for automated tracking and for accumulating “lessons learned.” Transparency, he noted, is especially important.

He said that AI policy makers have learned helpful processes from other recent treaties, including the use of “soft” law and of “naming and shaming” as a form of accountability in place of specific penalties, citing the Montreal Protocol as an example.

Adams, in her remarks, emphasized the need for leaders to educate themselves on how to guide others through complex issues involving AI, in particular AI ethics. An approach of “shared leadership,” grounded in trust, should be adopted, with an emphasis on engaging both internal and external stakeholders. Employees should be included in the development of responsible AI rules and have a “seat at the table” from beginning to end. Broadly, she refers to this as “human-centered development.”

In closing remarks, Shackelford noted that collective AI cybersecurity agreements are not fixed but evolve through a constantly iterating process, with some potential Codes of Conduct now emerging. Adams reiterated the need to engage stakeholders, not just diplomats. Asked what topics the same panel might focus on next year, the speakers offered ideas about how to thrive with AI and about the convergence of AI privacy and cybersecurity. Both agreed that this is a critical action area for leadership and that the time to address it is now.
