Challenges and Needed New Solutions for Open Societies to Maintain Civil Discourse — Part 1
Happy 2023 to everyone. As the new year begins, I want to commit to tackling a challenging topic, one that probably is going to get harder before it gets easier in terms of new solutions. It’s a topic I’ve spent time addressing in different roles and situations over the years, all involving technology’s collision with social cohesion:
Back in 2001 with the U.S. anthrax events, and again in 2003 with Severe Acute Respiratory Syndrome (the original coronavirus outbreak, in hindsight), those of us in the Bioterrorism Preparedness and Response Program were concerned with how to distinguish real public concerns from manufactured or ginned-up ones, including fear created by even just the rumor of a bioterrorism event, regardless of its scale or nature.
Then in 2007 and 2008, I spent time at the Oxford Internet Institute, and later at both Harvard and the MIT Center for Collective Intelligence, looking at when, how, and why certain situations let more people make better decisions together, while other situations cause more people to make less optimal or even really bad decisions together.
Later, in 2009, I was in Afghanistan looking at the challenges of rumor and gossip, and how these outpaced anything fact-based that NATO or Coalition Forces attempted to do in the region.
In 2012 and 2013, I served as Executive Director for a bipartisan, Congressional National Commission that reviewed the Research and Development Programs of the U.S. Intelligence Community. One of the big issues was that the Community could not respond to claims about its actions or intentions, even claims that might be completely baseless, because responding would potentially give away information to other actors associated with the claims, including methods and means that might or might not be real.
And then in 2017 and 2018, I experienced a disinformation attack about something I observed involving a disruption in the public commenting process: entities unknown were flooding the system with less-than-human-in-origin comments, effectively denying service to actual humans. The NY Attorney General later confirmed this had happened, with 18 million of the 23 million comments received coming from less-than-human-in-origin sources, split roughly 9 million and 9 million across the two party lines (which apparently is why no one wanted to accept that something odd was occurring).
From 2017 to the start of 2020, I worked with the People-Centered Internet coalition as its Executive Director, alongside Vint Cerf, Mei Lin Fung, and several others who were concerned that the Internet was becoming less people-centered in nature. I simultaneously spent some time with U.S. Special Operations folks on this challenge, as it represented a threat, both foreign and domestic, that could polarize and erode stable societies at home and abroad.
The big challenge is this:
Technologies have been developed that make it easier to flood the public space with spammed content (call it astroturfing, bot-net-generated content, or whatever words you want), and this means the “public square” is getting harder to hear and harder to discern.
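To make the flooding problem concrete, here is a minimal sketch, in Python, of how near-duplicate detection might surface a template-generated comment campaign. The shingling approach, the 0.8 threshold, and the sample comments are all illustrative assumptions on my part, not anything any agency actually runs:

```python
from itertools import combinations

def shingles(text: str, n: int = 3) -> set:
    """Break a comment into overlapping n-word pieces ('shingles')."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: overlap size divided by union size."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def flag_near_duplicates(comments, threshold=0.8):
    """Return index pairs of comments that are suspiciously similar.

    A real deployment would need locality-sensitive hashing to scale
    past a few thousand comments; this quadratic scan is illustrative.
    """
    sigs = [shingles(c) for c in comments]
    return [(i, j) for i, j in combinations(range(len(comments)), 2)
            if jaccard(sigs[i], sigs[j]) >= threshold]

# Hypothetical inputs: two template-generated comments and one organic one.
comments = [
    "I strongly oppose this rule because it harms small businesses in my town.",
    "I strongly oppose this rule because it harms small businesses in my city.",
    "Please extend the comment period so rural residents can weigh in.",
]
print(flag_near_duplicates(comments))  # [(0, 1)]
```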
It’s also more than flooding: as you surf the web, different algorithms are guiding what is recommended to you and what your search results show, with an intent of keeping you engaged and not necessarily showing you all sides of an issue.
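To illustrate the mechanism (not any platform’s actual code), here is a toy Python sketch of why ranking purely by predicted engagement can hide whole viewpoints; the items and scores are made up:

```python
# Toy candidate pool: each item has a predicted engagement score and a
# viewpoint label. Both viewpoints are present, but the scores differ.
items = [
    {"title": "Outrage take A",  "viewpoint": "A", "predicted_engagement": 0.92},
    {"title": "Outrage take A2", "viewpoint": "A", "predicted_engagement": 0.88},
    {"title": "Measured take A", "viewpoint": "A", "predicted_engagement": 0.57},
    {"title": "Measured take B", "viewpoint": "B", "predicted_engagement": 0.41},
]

def rank_by_engagement(items, k=3):
    """What a purely engagement-optimized feed effectively does."""
    return sorted(items, key=lambda x: x["predicted_engagement"], reverse=True)[:k]

def rank_with_viewpoint_diversity(items, k=3):
    """Give each viewpoint's best item a slot before filling by score."""
    ranked = sorted(items, key=lambda x: x["predicted_engagement"], reverse=True)
    picked, seen = [], set()
    for item in ranked:
        if item["viewpoint"] not in seen:
            picked.append(item)
            seen.add(item["viewpoint"])
    picked += [i for i in ranked if i not in picked]
    return picked[:k]

print([i["viewpoint"] for i in rank_by_engagement(items)])             # ['A', 'A', 'A']
print([i["viewpoint"] for i in rank_with_viewpoint_diversity(items)])  # ['A', 'B', 'A']
```

The second function is not a fix anyone has adopted here; it simply shows that viewpoint diversity has to be an explicit objective, because engagement alone quietly drops the minority viewpoint from the top of the feed.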
All of which points to a really big problem: how will countries like the United States, Canada, Australia, and other representative societies in Europe, South America, Africa, Asia, Oceania, and around the world be able to hear from their citizens? My 2017 experiences demonstrated that the Administrative Procedure Act (APA) of 1946 was written at a time when PCs, faxes, and smartphones did not exist. Lawyers at different agencies, both back then and still now, vary in their interpretations of what the Act requires in our digital era, and most of the public is not familiar with the challenges of implementing the APA online.
Without a consistent legal- and process-focused interpretation across agencies, different agencies and political parties will reach divergent conclusions. A standard Federal interpretation, on the other hand, would permit agencies to use a shared technology service, improving the Notice & Comment process across agencies while reducing technology costs.
In 2019 and 2020, I elevated these concerns to multiple organizations, including the Administrative Conference of the United States (ACUS) and the U.S. Government Accountability Office (GAO). The challenge is that ACUS saw this as a technology problem (where I would suggest the empirical evidence indicates it is mostly a legal interpretation, process, policy, and people issue, not tech) and GAO can only follow up on Congressional requests for research. In January 2020, a collective of 52 individuals submitted a letter to the General Services Administration (GSA) in response to a Modernization of Electronic Rulemaking notice; that letter is linked here, with a screenshot of some of the important parts below. The group did receive confirmation that GSA received our concerns, specifically Confirmation 1k4–9eb7-lkmf from Regulations.gov (ID: GSA_FRDOC_0001–1624) on 07 Jan 2020 at 2:42pm Eastern Time; however, GSA never actually published the letter we submitted.
So why am I bringing this up now?
Because ChatGPT (built on GPT-3) is only going to make all these issues even harder for representative societies. ChatGPT and similar chatbots risk flooding the “public square” even further, making it harder to hear. It is important to note that, per the APA, an agency only has to respond to valid legal arguments in the comments; however, even this points to a way ChatGPT or a similar chatbot could be used to spam an agency with spurious legal arguments that would take 5–15 years to wade through and attempt to answer. Whether this comes from domestic or foreign actors, the risk of eroding trust and cohesion even further in open societies is high.
So I’m sharing a link to that GSA letter here, and highlighting the top five questions we raised to ACUS, GAO, GSA, and other agencies back in 2019 and 2020, because we need to start working on new solutions now. The solution need not be tech-centric; in fact, a lot of this can be solved with updated legal interpretations, consistent processes, and improved policy across agencies that cannot be misused or abused by political rhetoric or misinfo/disinfo attacks. Here are the top five questions from 2020, still relevant now (especially given ChatGPT).
1. Does identity matter regarding who files a comment or not — and must one be a U.S. person in order to file?
2. Should agencies publish real-time counts of the number of comments received — or is it better to wait until the end of a commenting round to make all comments available, including counts?
3. Should third-party groups be able to file on behalf of someone else or not — and do agencies have the right to remove spam-like comments?
4. Should the public commenting process permit multiple comments per individual for a proceeding, and if so, how many comments from a single individual are too many? 100? 1,000? More? (A sketch of enforcing such a cap follows this list.)
5. Finally, should the U.S. government itself consider, given public perceptions about potential conflicts of interest for any agency performing a public commenting process, whether it would be better to have third-party groups take responsibility for assembling comments and then filing those comments via a validated process with the government?
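On question 4, here is a minimal sketch in Python of what enforcing a per-person, per-proceeding cap might look like, assuming some upstream identity check already exists (itself the subject of question 1). The cap of 10 and the identifier formats are illustrative assumptions, not recommendations:

```python
from collections import defaultdict

class CommentLedger:
    """Track comments per (person, proceeding) and enforce a cap.

    Assumes an upstream identity check already produced a stable
    person_id, which is itself the subject of question 1 above.
    """

    def __init__(self, max_per_person: int = 10):  # 10 is an illustrative cap
        self.max_per_person = max_per_person
        self.counts = defaultdict(int)

    def submit(self, person_id: str, proceeding_id: str, text: str) -> bool:
        """Accept the comment if the filer is under the cap; reject otherwise."""
        key = (person_id, proceeding_id)
        if self.counts[key] >= self.max_per_person:
            return False  # over the cap; a real system might queue it for review
        self.counts[key] += 1
        # ... persist the comment text here ...
        return True

# Usage: the 11th comment from one person on one docket is rejected.
ledger = CommentLedger(max_per_person=10)
results = [ledger.submit("person-123", "docket-456", f"comment {i}") for i in range(11)]
print(results.count(True), results.count(False))  # prints: 10 1
```

Even a toy like this makes the policy dependencies obvious: the cap is unenforceable without an answer to the identity question, which is one reason these five questions have to be resolved together.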
I’ll close for now with two links to important CxOTalk videos done in my People-Centered Internet role, specifically on countering misinformation and disinformation, because flooding the “public square” includes flooding the system with less-than-human-in-origin comments, effectively denying service to actual humans. Moreover, this isn’t hypothetical: we have already seen it happen, per the NY Attorney General’s later confirmation that 18 million of the 23 million comments received came from less-than-human-in-origin sources, split roughly 9 million and 9 million across the two party lines. Open societies must create an updated Grand Strategy framework for data, sensemaking, and trust.
We must take action now: specifically, to work on a more People-Centered Internet, to address threats both foreign and domestic that could simultaneously polarize and erode stable nations, and to work on strengthening pluralistic societies globally.