Enterprises ignore the rapidly accelerating impacts of AI at their peril. This matters at both the national policy level and in the workplace. Two experts with valuable insights on these topics, Scott Shackelford, professor and executive director of the Ostrom Workshop and the Indiana University Bloomington Cybersecurity Program, and Elizabeth Adams, affiliate fellow of the Stanford Institute for Human-Centered Artificial Intelligence, answered a series of questions from John Gasparini, associate in the corporate department of Paul Hastings LLP, during the center stage panel Cybersecurity, Privacy, and AI Ethics. The conversation produced many actionable insights to help leadership at all levels adjust to AI-driven changes.

In overview comments, Shackelford provided a comparative review of how the APEC nations are evaluating their cybersecurity strategies, particularly with a view to “public benefit.” Countries’ responses fall across a wide range, and collective norm building presents challenges: more developed and less developed countries operate under different laws and legal regimes.

His organization, the Ostrom Workshop, provides a focus for stakeholders on a collective regional “knowledge commons” around privacy and AI, using AI to provide automated tracking and accumulated “lessons learned.” Transparency, he noted, is especially important.

He said that AI policy makers had learned helpful processes from other recent treaties, including the use of “soft” law and of “naming and shaming” as a form of accountability rather than specific penalties, citing the Montreal Protocol as an example.

Adams, in her remarks, emphasized the need for leadership to educate itself on how to guide others through complex issues involving AI, in particular AI ethics. She advocated adopting a “shared leadership” approach, based on trust, with an emphasis on engaging both internal and external stakeholders. Employees should be included in the development of responsible AI rules and have a “seat at the table” from beginning to end. Broadly, she refers to this as the “human-centered development” process.

In closing remarks, Shackelford noted that collective AI cybersecurity agreements are not fixed but are an iterative, constantly evolving process, with some potential Codes of Conduct now emerging. Adams reiterated the need to engage stakeholders, not just diplomats. Asked what topics might be the focus of the same panel next year, the panelists offered ideas about how to thrive with AI and about the convergence of AI privacy and cybersecurity. Both agreed that this is a critical action area for leadership and that the time to address it is now.

Authored by Richard Taylor
