Back in May, I had the honor of providing the opening keynote for an interdisciplinary event organized by Prof. JP Singh, Distinguished University Professor at the Schar School of Policy and Government at George Mason University and Richard von Weizsäcker Fellow with the Robert Bosch Academy in Berlin. If you haven’t met JP, he is definitely someone to meet, and someone to watch for the release of his deep research into the different AI policies of nations around the world. His findings are much more nuanced than most pundits and news headlines would have you believe (i.e., it’s not just what Europe, China, and the U.S. are ordaining).
My keynote focused on two questions: Why does trust matter in AI, and how can we achieve it? There’s a video recording of the twenty-minute talk below. I won’t give away everything I present, except to recommend that we each consider whether the action-oriented steps we need to take to improve trust in AI are *almost exactly* the same steps we need to take to improve trust in societies. This includes steps to improve trust among organizations, the public, governments, national security professionals, media platforms, and other networked actors globally.
Personally, I find both the extreme AI hype (it’s only going to be wonderful) and the extreme AI fear (it risks destroying us all) fatiguing. As someone who has been urging folks since 2016 to consider how AI will transform how we live, work, and co-exist, I’m hoping some pragmatism can shine through the hype/fear cycles out there.
A Pluralistic Panel on AI In Peace, Conflict, and Turbulence
After the keynote, there was a great panel discussion that included Dr. JP Singh himself, plus Jacqueline Acker (CIA), Dr. Neil Johnson (GWU), Denise Garcia (Northeastern University), and Branka Panic (AI for Peace), exploring the different elements of AI in countering hate and disinformation, in helping communities build peace, in battlefield conflict situations, and in the challenges of non-state actors seeking to introduce turbulence and disorder. A video of that discussion is below; once you watch it, I think you’ll agree it was a nuanced, reasoned discussion of these complex topics:
I’ll close this post with three points I shared on a Sunday call with the People-Centered Internet coalition as part of a panel with friends and colleagues Vint Cerf, Anthony Scriffignano, Esther Dyson, Divya Chander, Sarah Novotny, Kevin Clark, and several more. In that discussion, I closed with one concern, one call to action, and one message of hope.
Three Positive Steps Forward
One Concern — we seem to be repeating several patterns of the Victorian Era. That period saw rapid industrialization and technological progress, alongside polarizing disinformation spread by sensationalistic newspapers (seeking to sell papers) and widening political strife in the United States (akin to our present reality). People then placed too much emphasis on what others signaled, specifically virtue signaled, and less on what they did when no one, or very few folks, were looking. I hope we don’t fall into the trap of paying too much attention to virtue signals at the expense of what folks are actually doing, or not doing, to address the important issues of our day.
One Call to Action — related to this, I’ve written before about how many people in our world now feel overwhelmed by a sense of “learned helplessness,” including a sense that the challenges of the world, ranging from economic and political strife to climate change and accelerating technological advancements, are simply out of their control. This perceived loss of control is correlated with folks feeling they have no ability to shape their future, and it risks becoming self-fulfilling: if folks feel they have no control, they will relinquish control and spiral into a cycle of anxiety, isolation, sadness, anger, and frustration. When it comes to AI, there is already a lot of anxiety out there, and a decided lack of realistic, non-dystopian narratives. We need to remedy both, while helping people overcome any sense of helplessness and realize they do have choice, agency, and data dignity in the digital era.
One Message of Hope — if there’s a value to AI, it is its ability to learn from and synthesize large amounts of data. It does so neither perfectly nor always in ways akin to human mental models; however, that ability to learn and synthesize might allow humans to experience, in a realistic format that includes words, images, and potentially videos with sound, the experiences of others. Specifically, can we use AI to help us “walk a mile in another person’s shoes” so that we can better understand each other’s perspectives?
This includes using AI to hold a digital mirror up to our own actions and patterns, letting us see, in a safe and private way, when we may be amplifying our existing thoughts, emotions, or biases, versus pausing to reflect and ask: What am I missing here? Why might my emotions be triggered? What biases (confirmation bias, sunk cost bias, or other external or internal biases) might be influencing my thoughts?
To close: in the GMU panel with JP Singh, I ended with the thought that perhaps we might use AI to amplify human strengths, mitigate human weaknesses, both individually and collectively, and potentially make us better people. I’ll leave you with this question:
If we were to commit to an AI “Manhattan Project” — wouldn’t it be timely and fitting if the focus was (1) how people could better know whether they could trust both AI and social institutions — and (2) how both AI and social institutions could amplify and uplift humans everywhere?
p.s. In the interim, if you’re interested in what the National Academy of Public Administration is doing regarding AI and Public Service, here’s a link as well: https://www.linkedin.com/pulse/call-action-ai-public-service-national-academy-david-bray-phd
Onwards and upwards together.