
UMGC Global Media Center
Decoding Disinformation: Jason Pittman on AI, Intent and the Battle for Truth

By Alex Kasten
News | Cybersecurity

Jason Pittman, Sc.D., collegiate associate professor in the School of Cybersecurity and Information Technology at University of Maryland Global Campus (UMGC), is currently serving as a Fulbright Scholar at the Australian Institute for Machine Learning (AIML) at the University of Adelaide. He is researching the potential use of open-source large language models in the unwitting generation of disinformation.

We caught up with Prof. Pittman to ask about the influence of disinformation and his work at AIML.  

What is the difference between disinformation and misinformation? 

Sometimes these two concepts get confused. Misinformation is false or inaccurate information. Disinformation is misinformation shared with the intent to manipulate behavior or belief.


Jason Pittman says that disinformation is not just the province of nation-states or large business organizations. 

Disinformation is a slow cook, not a fast boil. Take the assertion that the earth is flat. No one who seriously believes the earth is flat arrived at that belief suddenly. The person didn't just wake up with it. Instead, a slow and steady stream of disinformation walked the person to the conclusion. Interestingly, that disinformation was not necessarily related to the shape of the earth. Often, we focus too much on the conclusion and don't consider the large and varied set of steps necessary to reach it.
  
We should also consider that disinformation is not just the province of nation-states or large business conglomerates. Anyone with time and a basic computer can generate content on social media.

How did you become interested in the influence of disinformation? 

One of my undergraduate degrees is in English literature. I took a linguistics course as part of the program and immediately fell in love with the structure and analysis of written language. This certainly is the foundation of my interest in cyber information influence, or disinformation.

Tell us about your work during the fellowship. 

Machine learning (ML) has advanced to a point where some methods are more than 90 percent accurate in identifying misinformation. Yet the material difference between misinformation and disinformation is intent. Because of how ML works in this domain, there is no mechanism for existing methods to infer or extract intent. At best, the methods work on the semantics, or meaning, of the message.
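As a rough illustration of what a semantics-only detector looks like (a generic sketch, not the specific high-accuracy methods he refers to), a classifier can be trained on the wording of messages alone. The tiny training set below is invented for the example; notice that nothing in it represents what the author intended.

```python
# Minimal sketch of a semantics-only misinformation classifier.
# Illustrative only: it scores wording, and the author's intent is invisible to it.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical toy training data: 1 = misinformation, 0 = accurate.
texts = [
    "NASA admits the moon landing footage was staged",
    "The horizon always looks flat, so the earth is flat",
    "Water boils at a lower temperature at high altitude",
    "The earth orbits the sun once every 365.25 days",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# The classifier sees only the surface text; whether the writer meant to
# deceive (disinformation) or was simply mistaken (misinformation) is not
# represented anywhere in the features.
print(model.predict(["Scientists hid evidence that the earth is flat"]))
```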

While a standard ML model can generate misinformation when prompted, it can also generate accurate information. On the other hand, an ML model trained to generate misinformation does so even when prompted to generate accurate information. The difference is one of intensity or power, not capability alone.

This is where my work comes in. I’ve developed a method to work above the semantic layer and extract intent through a combination of computational linguistics and computational semiotics. The detection scheme is kind of a gestalt polygraph. That’s a simplified way to conceptualize the research.
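Pittman does not detail the method in the interview, so the sketch below is only a loose, hypothetical illustration of the general idea of computing signals above the bare meaning of a message, such as surface stylistic and rhetorical markers. The word lists and features are invented for the example; this is not his computational-linguistics or semiotics pipeline.

```python
# Hypothetical illustration only: simple above-the-semantics signals of the
# kind a computational-linguistics approach might examine.
import re

INTENSIFIERS = {"absolutely", "definitely", "undeniably", "obviously"}
HEDGES = {"might", "perhaps", "reportedly", "allegedly", "possibly"}

def stylistic_features(text: str) -> dict:
    tokens = re.findall(r"[a-z']+", text.lower())
    n = max(len(tokens), 1)
    return {
        "intensifier_rate": sum(t in INTENSIFIERS for t in tokens) / n,
        "hedge_rate": sum(t in HEDGES for t in tokens) / n,
        "exclamations": text.count("!"),
        "second_person_rate": sum(t in {"you", "your"} for t in tokens) / n,
        "all_caps_words": sum(w.isupper() and len(w) > 2 for w in text.split()),
    }

print(stylistic_features("WAKE UP! They are DEFINITELY hiding the truth from you!"))
```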

Can you share some of your results? How effective have the models been?  

Based on preliminary results along the way, there seem to be definite structural differences between information and misinformation, and it seems possible to use those differences as input to a phase in my method that computationally extracts intent from text.

Importantly, I'm not using AI to do the detection. I've avoided doing so because too much of the method would be a black box and I'd lose sight of ground truth.

Also, part of the method I’m working on transposes elements in the ML model—not the generated output from a prompt but elements within the model itself—into tones. To my ears, the disharmony of misinformation is readily apparent. The tones collide, and the discord in the waves is audible.  
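He does not describe the transposition itself, but the general idea of sonification, that is, mapping numeric elements onto audible tones, can be sketched as follows. The "model elements" here are random placeholder values and the frequency mapping is invented for illustration; this is not his actual procedure.

```python
# Crude sonification sketch (illustrative only): map a handful of numeric
# values onto audible frequencies and render them as overlapping sine tones.
import numpy as np
from scipy.io import wavfile

rng = np.random.default_rng(0)
weights = rng.normal(size=8)  # placeholder stand-in for model elements
# Spread the values across roughly two octaves, 220-880 Hz.
freqs = 220 * 2 ** ((weights - weights.min()) / np.ptp(weights) * 2)

sample_rate, seconds = 44100, 2.0
t = np.linspace(0, seconds, int(sample_rate * seconds), endpoint=False)
tone = sum(np.sin(2 * np.pi * f * t) for f in freqs) / len(freqs)

# Evenly spaced frequencies sound consonant; clustered or clashing ones
# produce the kind of audible discord described above.
wavfile.write("model_tones.wav", sample_rate, (tone * 32767).astype(np.int16))
```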
  
There is a lot more work to do before I have true results. I’m working with a sample of about 135,000 tweets. 

What does the future of disinformation look like? What work is needed to rein it in? 

We are rapidly losing ground to disinformation, specifically what in the field is known as post-truths. Disinformation campaigns writ large are effective because the onus is on the verifier to apply fact-checking and so forth. In the face of the volume of disinformation spread across digital reality, there simply is no way to keep up. However, protecting human knowledge doesn't require reining in disinformation. It merely requires ensuring that human knowledge is not corrupted by disinformation. Keep it out rather than refute it.