Enterprises ignore the rapidly accelerating impacts of AI at their peril, both at the national policy level and in the workplace. Two experts with valuable insights on these topics, Scott Shackelford, professor and executive director of the Ostrom Workshop and the Indiana University Bloomington Cybersecurity Program, and Elizabeth Adams, affiliate fellow of the Stanford Institute for Human-Centered Artificial Intelligence, answered a series of questions from John Gasparini, an associate in the corporate department of Paul Hastings LLP, during the center stage panel “Cybersecurity, Privacy, and AI Ethics.” The conversation produced many actionable insights for leadership at all levels adjusting to AI-driven change.

In overview comments, Shackelford offered a comparative review of how APEC nations are evaluating their cybersecurity strategies, particularly with a view to “public benefit.” Countries’ responses fall across a wide range, and collective norm building presents challenges: more developed and less developed countries operate under different laws and legal regimes.

His organization, the Ostrom Workshop, gives stakeholders a focal point for building a collective regional “knowledge commons” around privacy and AI, using AI itself to provide automated tracking and to accumulate “lessons learned.” Transparency, he noted, is especially important.

He said that AI policymakers have learned helpful processes from other recent treaties, including the use of “soft” law and “naming and shaming” as a form of accountability rather than specific penalties, citing the Montreal Protocol as an example.

Adams, in her remarks, emphasized the need for leadership to educate itself on guiding others through complex AI issues, particularly AI ethics. A “shared leadership” approach should be adopted, built on trust, with an emphasis on engaging both internal and external stakeholders. Employees should be included in the development of responsible AI rules and have a “seat at the table” from beginning to end, a process she broadly refers to as “human-centered development.”

In closing remarks, Shackelford noted that collective AI cybersecurity agreements are not fixed but constantly evolving through an iterative process, with some potential Codes of Conduct now emerging. Adams reiterated the need to engage stakeholders, not just diplomats. Asked what the same panel might focus on next year, the speakers pointed to ideas about how to thrive with AI and how AI privacy and cybersecurity are converging. Both agreed that this is a critical action area for leadership and that the time to address it is now.

Authored by Richard Taylor
