
Interviewer:
Dr. Nico Grove
Managing Director & Co-Founder, Kawikani GmbH & Co KG, Germany

Interviewee:
Elizabeth M. Adams
Affiliate Fellow, Stanford Institute for Human-Centered AI (HAI) and CEO, EMA Advisory Services

Nico Grove:
Aloha, I am here with Elizabeth M. Adams, affiliate fellow at the Stanford Institute for Human-Centered Artificial Intelligence (HAI) and CEO of EMA Advisory Services. Elizabeth, it is a pleasure for the PTC Community to have you as a panelist for Cybersecurity, Privacy, and AI Ethics at PTC’23, where you will talk about your doctoral research on Leadership of Responsible AI™. Let me introduce you quickly and then get into the topic.

Elizabeth, you are an AI ethics and organizational culture advisor and CEO of EMA Advisory Services with 25 years of expertise in technology leadership, civic tech policy, social advocacy, inclusive tech design, and diversity and inclusion. In parallel, you are pursuing an executive doctoral degree at Pepperdine University with a research focus on Leadership of Responsible AI™.

You have also been featured in Forbes as one of “15 AI Ethics Leaders Showing The World The Way Of The Future.” So, if there is anybody out there to get our PTC Community into responsible AI leadership, it would be you.

Elizabeth M. Adams:
Thank you, Nico. It is a great pleasure to be invited to PTC’s Annual Conference to outline the importance of Responsible AI (RAI) with attendees.

Leadership of Responsible AI™ has to be understood as a shared leadership function between technical and non-technical leaders who adopt processes and procedures to operationalize responsible AI. This includes knowledge and skill building, which is critically important throughout the AI product development lifecycle. Leadership of Responsible AI™ is essentially a series of inclusive and intentional interdisciplinary actions that result in RAI systems. It includes the creation of RAI artifacts that guide AI system design and development and ensure the foundational elements for AI agency adoption and operationalization in organizational practice.

Nico Grove:
So, what kinds of AI functions are you focusing on, then?

Elizabeth M. Adams:
As a program/project manager by trade, my line of sight spans the entire AI lifecycle: initial discussions concerning the business case for AI, conversations around data procurement and data use, and even accounting for societal impacts through human-centered design and artifact creation. So, in my world I don't focus on one or two specific functions of AI. It requires leadership across the entire organization.

It is very important to understand the kinds of questions that responsible AI requires you to think through. These include specific questions like:

  • What is the business case?
  • What are some likely scenarios to be addressed?
  • Are there AI ethics principles and practices in place?
  • How might this impact our ability as humans to thrive?
  • Who could be impacted?
  • What are the risk mitigation strategies?
  • What stakeholders should be included in design and development?

It is an exercise in asking lots and lots of questions.

Nico Grove:
How can companies combine AI and responsibility to add value to their current business practices?

Elizabeth M. Adams:
Based on my research, trust is an essential value across all stakeholders. Employees need to trust that the systems they are working on are ethical, and that they can answer questions about automated decision making, upskilling, reskilling, how they will work alongside AI co-workers, and where they fit in the future of work. Customers need to trust the results and output of the system, as they are using AI-generated knowledge to either make decisions or have decisions made on their behalf. Shareholders need to trust in the organizational adoption of responsible AI practices to ensure business operations are legal, ethical, and uninterrupted, and that the business can remain competitive. We could add a reason for every stakeholder in the business ecosystem and explain why trust is essential for each.

So how does trust occur? Again, based on my research, Leadership of Responsible AI™ has to be inclusive and intentional.

Nico Grove:
Elizabeth, thank you very much for your time and getting me closer to responsible AI. This is a very important topic for our PTC leaders, not only from an application development perspective, but especially from a general management view. I am very much looking forward to seeing you in Honolulu in January 2023 and continuing our discussion. Mahalo.

About Elizabeth M. Adams:
Elizabeth (she/her) is an AI ethics and organizational culture advisor and CEO of EMA Advisory Services. She is a scholar-practitioner recently featured in Forbes “15 AI Ethics Leaders Showing The World The Way Of The Future.”

Elizabeth offers 25 years of expertise in technology leadership, civic tech policy, social advocacy, inclusive tech design, and diversity and inclusion. Elizabeth is a highly sought-after international speaker and trainer who engages with her audiences through learning events, panels, workshops, and keynote speaking engagements.

Elizabeth was awarded the inaugural 2020 Race & Technology Practitioner Fellowship by Stanford University’s Center for Comparative Studies in Race & Ethnicity. In August of 2021, she was awarded affiliate fellow status with Stanford’s Institute for Human-Centered AI, a two-year appointment.

In 2020, Elizabeth launched EMA Books and wrote four children’s books to help parents share stimulating and diverse books with children during the COVID-19 pandemic. Her book “Little A.I. and Peety” is currently sold in over 40 online bookstores around the world.

Elizabeth is pursuing an executive doctoral degree at Pepperdine University with a research focus on Leadership of Responsible AI™. She holds an MBA in project management from Capella University, a B.A. in business management from Bethel University, as well as a graduate certificate from Johns Hopkins University in leadership development. Elizabeth has earned diversity and inclusion credentials from Cornell University, a Project Management Professional (PMP) certification from the Project Management Institute, and a Security+ certification from EXIN.

Elizabeth serves as the chief AI ethics advisor for Paravision. She also serves as the global chief AI culture and ethics officer for Women in AI, where she volunteers her time building a world-class team and program to support the needs of 8,000 women around the world.
