Challenges and Needed New Solutions for Open Societies to Maintain Civil Discourse — Part 2

David A. Bray
Jan 6, 2023
Top picture CC0 from Pexels.com

Part one of this discussion highlighted a big challenge for open societies around the world, namely:

Technologies have been developed that make it easier to flood the public space with spammed content (call it astroturfing, bot-net generated content, or whatever words you want), and this means it is getting harder to hear and harder to discern genuine voices in the “public square.”

In this second part of the discussion, I want to focus on ChatGPT-3 and other similar Large Language Models (LLMs), and the risk that arises from their being designed to provide responses to prompts that look convincingly as if they were written or narrated by a human.

Large Language Models and the Risk of Hallucinations

Much has already been written on LLMs ranging from amazement at what they can do to concerns about their foibles and failings. I’m a realist who likes to be an optimist whenever possible — which means I do think we need to be realistic about one particular aspect of LLMs, namely their ability to “hallucinate” and what this means for chatbots continuing to flood public spaces.

In the context of LLMs, when a chatbot has a “hallucination” (which is in fact the term engineers use to describe the phenomenon), the LLM makes up nonexistent or wrong facts.

There are documented instances where ChatGPT-3 fabricates realistic-looking literature citations in Science and Nature for studies that do not exist. It is worth noting: ChatGPT-3 is a Large Language Model designed to provide responses to specific prompts that look as if they were provided by a human. Given the training set provided, ChatGPT-3 is optimized around that unifying goal. It and other LLMs are rewarded for giving answers that look human.

I want to pause and note that we (humans) need to be careful ascribing wrong vs. right answers to ChatGPT-3 and other LLMs, because they lack a structure for knowing whether the information they provide is valid or invalid; they only know that the response meets the overarching goal of looking as if it was provided by a human to a specific prompt. This means the best answer the LLM can provide is one that addresses the prompt in some way, regardless of whether the response is valid or invalid.

So a “hallucination” occurs when a prompt asks ChatGPT-3 or a similar LLM a scientific or research question and, to satisfy its goal, ChatGPT-3 makes up realistic-looking literature citations in Science and Nature for sources that do not exist. ChatGPT-3 creates unreal, nonexistent references in its attempt to produce a linguistic pattern that fits the engineered prompt.
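To make this concrete, here is a minimal, purely illustrative sketch in Python (the template, names, and values are all hypothetical) of why an objective that only rewards “looks like plausible human text” can produce confident fabrications: nothing in the generation step ever consults a real index of publications.

```python
import random

# Purely illustrative toy: a "citation generator" optimized only for surface
# plausibility. Nothing below checks whether the cited work actually exists.
CITATION_TEMPLATE = "{authors} ({year}). {title}. {journal}, {volume}, {pages}."

def plausible_citation() -> str:
    """Assemble a citation-shaped string; no database or index is ever consulted."""
    return CITATION_TEMPLATE.format(
        authors=random.choice(["Smith, J. & Lee, K.", "Garcia, M. et al."]),
        year=random.randint(2010, 2022),
        title="Neural correlates of decision-making under uncertainty",
        journal=random.choice(["Science", "Nature"]),
        volume=random.randint(300, 600),
        pages=f"{random.randint(100, 500)}-{random.randint(501, 999)}",
    )

if __name__ == "__main__":
    # The output looks like a real reference, which is exactly the problem:
    # "looks human and plausible" is the only property being optimized for.
    print(plausible_citation())
```

An actual LLM is vastly more sophisticated than this toy, but the underlying issue is the same: the objective rewards plausible-looking text, not verified sources.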

Design Choices and Goals Matter

If the machine cannot distinguish between facts and non-facts, or truth and deception, then it cannot really know whether something it dreamed up was delusional either. It just comes up with things that look very realistic but are not verifiably real (by us human actors), with real defined as having existed or being correct.

Which raises a very important question: are we humans guilty of being interrogators who have convinced our digital subject to tell us things we want to hear?

For open societies, the reality that LLMs may “hallucinate” unreal or wrong answers that appear real to humans risks throwing kerosene on the already polarized times we live in, where different parts of society already distrust certain professions or dismiss certain topics as lacking validity. In the context of ChatGPT-3 and other LLMs, I think we need to give serious thought to whether the human engineers of these systems are forcing the LLM to repeat back what we want to hear, in this case things that sound like another human replying.

From WWI and WWII there sadly are examples of interrogations where subjects broke down and confessed or gave information that was not actually true; the interrogators, however, believed it to be true and would not relent in their interrogation until the subject answered that way.

If we humans are designing neural networks (of which LLMs are a type) where we “kill” earlier neural networks that do not match what we want to hear, we may find we are just as guilty as interrogators who keep telling their subjects “tell me what I want to hear” until the neural network does exactly that.

Human design choices, goal setting, and instruction may be producing neural networks that do just what we select them to do (“tell me what I want to hear”), even if what the neural network is doing is a hallucination, coughing up a response to satisfy us that is predicated on no real, legitimate information.

Hope For Open Societies and New Solutions

That said, there is hope. Specifically, we can choose not to press neural networks to always respond with something that looks as if it was written by a human.

We can instead reward neural networks for responding with “I don’t know an answer to that” or “I don’t have context for the question,” which, while it may not be that impressive in the short term, would steer successive generations of neural networks toward a different goal and a different outcome.

In this case the goal would no longer be responses that look as if they were provided by a human to specific prompts; we would be optimizing for something else: answers that are useful, capable of providing valid references, and trustworthy. That would, however, require a lot more work on the part of humans to verify that the information used to train the machine meets those criteria, and that successive generations of neural networks evolve consistent with these values too.
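As a sketch of what such a different reward structure might look like, consider the following hypothetical scoring function (the names, weights, and abstention phrases are illustrative assumptions, not any real training pipeline): honest abstention earns a modest positive reward, while unsupported claims cost more than supported claims earn.

```python
# Hypothetical sketch of a reward function that no longer punishes "I don't know."
# All names and weights are illustrative assumptions, not a real training setup.

ABSTENTIONS = {
    "I don't know an answer to that.",
    "I don't have context for the question.",
}

def reward(response: str, verified_facts: set) -> float:
    """Score a response for honesty and supported claims rather than fluency alone."""
    if response.strip() in ABSTENTIONS:
        return 0.5  # honest abstention beats confident fabrication

    # Treat each sentence as a "claim" and check it against a set of verified facts.
    claims = [c.strip() for c in response.split(".") if c.strip()]
    supported = sum(1 for c in claims if c in verified_facts)
    unsupported = len(claims) - supported
    return 1.0 * supported - 2.0 * unsupported  # fabrication costs more than it earns

# Example: admitting uncertainty scores better than mixing one verified fact
# with one fabricated claim.
facts = {"The journal Nature was founded in 1869"}
print(reward("I don't know an answer to that.", facts))  # 0.5
print(reward("The journal Nature was founded in 1869. Smith proved this in 2021", facts))  # -1.0
```

Over successive generations selected with a signal like this, a model that says “I don’t know” when it lacks support would out-compete one that fabricates a plausible-sounding citation.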

Back in 2020, as part of the Salzburg Global Law and Technology Forum in my role as a Salzburg Global Seminars Fellow, I put forward five concrete pilots using data and artificial intelligence (a large, general superset term that includes Machine Learning as a set within AI, with neural networks as a subset within Machine Learning) paired with participatory approaches in open societies. The list included:

  1. Pilot participatory data trusts with the public, to include transparent activities that help make the needed data available for recovery initiatives to assist with the public health, community, and economic efforts associated with COVID-19 recovery. The frameworks — data trusts and contracts — will also specify standards for the ethical use of the data and how to formulate data governance. The operationalization of the data access will have a normal mode during which COVID-19 recovery improves and a mode when exigent circumstances require special cooperation and data access among public and private institutions in the region.
  2. Provide everyone their own personal, open-source AI application that could answer most of their questions about public services and government functions. This application could include educational resources for people to train for new jobs. Such an application would expressly not compete with the private sector. By being open-source, it would also have an open application programming interface (API) that hooks to other commercial applications that act as an interface to the queries. The public could use the AI directly or access it via their favorite commercial AI app. In either case, by making the application open, folks could ensure the app was answering questions and not reporting any data back to the government, other than questions with unsatisfactory answers flagged for additional data or refinement. (A minimal interface sketch of what this could look like follows this list.)
  3. Prototype a combination of participatory data trusts with the public and collaborative AI efforts to develop community, national, and global scale solutions to address local food shortages and work towards a world in which everyone has food available and no one goes hungry. The global food system has demonstrated its fragility during the pandemic, and we must work to make it more shock resistant. This includes developing solutions that combine participatory data approaches with the public and transparent AI efforts to develop indicators, warnings, and plans that spot global food system vulnerabilities and work to provide recommendations to make the system more resilient and sustainable. These indicators, warnings, and plans should include food for humans, food for animals, and nutrients and other products associated with agricultural production globally. National governments (defense, economic, commerce, agricultural, and diplomatic departments), agricultural industries, and humanitarian organizations all have primary roles in ensuring that global food system vulnerabilities do not weaken societal welfare, prosperity, and peace worldwide.
  4. Encourage national or international challenges to explore how AI can help reduce health care costs. This could include apps that allow you to take a photo of a rash and learn whether it is serious, photos of your eye to monitor blood sugar levels, and potential alerts regarding high blood pressure. These would be apps for the individual that would either help them save time or the cost of a doctor’s visit. Of course, the challenge would be ascertaining the medical validity of such apps, and thus we would need a rigorous review process.
  5. Enable the public to elect an AI to a government function at a local or state-level, say trash collection or public works. For several years, I have wondered when we might be able to elect an AI, and it still may be in the distant future. However, even striving to achieve this will have beneficial results. We need better ways of expressing the preferences and biases of an AI. We also need better ways of sharing why it reached certain decisions and expressing how this sense-making was done in a form that the public can appreciate. In attempting to elect different AIs to do trash collection or public works, we humans will benefit by better understanding what the machines (and data informing it) are either doing or proposing.
The 2020 article was part of a series by the Salzburg Global Law and Technology Forum
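To make the second pilot a bit more tangible, here is a minimal interface sketch (all class, field, and URL names below are hypothetical, invented purely for illustration) of an open, personal public-services assistant whose only outbound reporting is the list of questions it could not answer:

```python
from dataclasses import dataclass, field

@dataclass
class Answer:
    text: str
    references: list          # citations to public documents backing the answer
    confident: bool           # False signals "I don't know", flagged for follow-up

@dataclass
class PublicServicesAssistant:
    """Hypothetical open-source assistant for questions about public services."""
    knowledge_base: dict                            # question -> Answer, built from public data
    unanswered: list = field(default_factory=list)  # the ONLY data ever reported back

    def ask(self, question: str) -> Answer:
        if question in self.knowledge_base:
            return self.knowledge_base[question]
        # Honest abstention: log the gap for refinement, not surveillance.
        self.unanswered.append(question)
        return Answer(text="I don't know an answer to that.",
                      references=[], confident=False)

# Example usage: any commercial app could call ask() through an open API wrapper.
assistant = PublicServicesAssistant(knowledge_base={
    "How do I renew my passport?": Answer(
        text="Use the renewal form on the national passport portal.",
        references=["https://example.gov/passports"], confident=True),
})
print(assistant.ask("How do I renew my passport?").text)
print(assistant.ask("When is leaf collection in my ward?").text)  # honest "I don't know"
```

The design choice that matters here is the abstention path: an unanswerable question produces an honest “I don’t know” and a logged gap for humans to fill, rather than a fabricated answer.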

The second concrete pilot (giving everyone their own personal, open-source AI application that could answer most of their questions about public services and government functions) could become possible with an LLM trained and rewarded for being able to answer “I don’t know an answer to that” or “I don’t have context for the question.” Open societies could then move forward and show that not all is lost to chatbots spamming the public square, if we take the time and intention to produce chatbots that are rewarded for providing answers that are useful, capable of providing valid references, and trustworthy, including being able to give the very human response of, “I don’t know, but I’ll see if I can get you a better answer with time.”

Closing Thoughts (and Perhaps Closing This Blog Series Too)

I’ll close this post noting that I do wonder whether LinkedIn or Medium blog posts like this will continue to have any value over time, as ChatGPT-3 and LLMs can quickly generate realistic-looking blog posts of their own if prompted well. On that note, I have embedded a humorous “The Spiffing Brit” video in which he attempts to see whether it is possible to generate a YouTube video title, thumbnail image, description, script, and possibly even the voice-over with AI-powered technologies. It is a good illustration of what is coming for us.

Similarly, I have cut-and-pasted from Reddit’s ChatGPT-3 forum (exact URL unknown, my apologies) a screenshot demonstrating ChatGPT-3 first hallucinating a realistic-looking yet entirely unreal research article in response to a prompt and then, in the next prompt, saying that the answer provided was fictitious. In a human we would be worried about the state of mind of such a person, whereas for ChatGPT-3 and LLMs, I submit that we, as humans, have convinced our digital subject to tell us things we want to hear. If that is the primary goal, then we should not be surprised by hallucinations. However, as humans, we can also press for different goals that may have better long-term ripple effects for open societies and the future.

Onwards and upwards together!

From Reddit’s ChatGPT-3 forum (exact URL unknown, my apologies) a screenshot demonstrating ChatGPT-3 both hallucinating a realistic-looking yet very unreal research article to a prompt and then in the next prompt saying that the answer provided was fictitious

