Augmented Citizen Intelligence

What is Augmented Citizen Intelligence?

Augmented Citizen Intelligence is the combination of crowd-sourced intelligence, produced by citizens outside the traditional intelligence community, with advanced data analytics and AI.

Augmented Citizen Intelligence is a form of Augmented Collective Intelligence, combining citizen intelligence (van Gelder, de Rozario and Saletta 2019) and collective intelligence (Malone et al. 2009) with advanced data analytics and AI (Verhulst 2018).

Currently, both human collective intelligence and AI have many limitations. We need new approaches that combine human and AI strengths to enhance both. Augmented Citizen Intelligence combines crowdsourcing and collaboration with advanced information technology and/or AI to produce collective intelligence “on steroids”. This approach has the potential to leverage the best of both human and machine performance, creating a force multiplier for tackling many different types of problems, scientific and social (Levy, 1997). In what follows, however, I will focus on the potential for using Augmented Citizen Intelligence in the domains of cognitive security and malicious or unacceptable influence operations.

Research by members of the Hunt Lab team has shown that teams recruited from the public can produce high-quality analytic reasoning, outperforming teams of professional analysts in our studies (van Gelder et al., 2019). This and other evidence raises the possibility of applying Augmented Collective Intelligence generally, and Augmented Citizen Intelligence specifically, as a force multiplier for national intelligence and defence: particularly for detecting, tracking and countering influence operations, but also for other OSINT questions.

Weak vs. Strong Augmented Collective Intelligence

This distinction is roughly analogous to that between weak AI and strong AI, and may be pertinent to seedling vs. program funding applications.

Weak Augmented Collective Intelligence involves integrating existing (or very-near-term foreseeable) technologies with Collective Intelligence/crowdsourcing. In terms of information operations (something we at the Hunt Lab are deeply interested in), this would undoubtedly mean giving members of crowdsourcing teams access to data collection and/or social media analysis tools (text analysis or image/video analysis, for example).

Strong Augmented Collective Intelligence would involve integrating relatively stronger AI and autonomous agents into the Augmented Collective Intelligence ecosystem.

How would Augmented Citizen Intelligence work?

The integration of one or more tools into the SWARM process to produce Augmented Citizen Intelligence could be done through direct platform integration (through APIs, etc.) or by integration into the crowdsourcing process itself.

This might involve integrating existing packages into the analytic process in the short term (for a study), with the longer-term goal of building functionality directly into the SWARM platform (for a program grant). A sketch of what platform integration might look like follows.
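
To make the platform-integration route concrete, here is a minimal sketch of how an external analysis service might be called from a crowdsourcing workflow. Everything here is an assumption for illustration: the endpoint URL, the payload and response fields, and the `analyse_comment` helper are hypothetical, and SWARM's actual API (if it exposes one) may look quite different.

```python
# Hypothetical sketch: calling an external text-analysis service from a
# crowdsourcing workflow. The endpoint, payload and response schema are
# illustrative assumptions, not SWARM's actual API.
import requests

ANALYSIS_ENDPOINT = "https://analysis.example.org/api/v1/sentiment"  # placeholder

def analyse_comment(text: str, api_key: str) -> dict:
    """Send a participant's comment to the external service and return
    the parsed JSON result."""
    response = requests.post(
        ANALYSIS_ENDPOINT,
        json={"text": text},
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()  # e.g. {"sentiment": "negative", "score": 0.87}

if __name__ == "__main__":
    print(analyse_comment("This claim is unsupported by the cited source.", "TOKEN"))
```

The alternative route, integration into the crowdsourcing process, might simply mean analysts running such tools alongside the platform and bringing the results back into the team workspace.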

Available tools for consideration include (again, in the context of information operations):

  • Spiders/Crawlers, bots, web scrapers, etc. to automate data collection/extraction. These tools can be relatively simple and coded quickly: a Hunt Lab member recently coded a web scraper to extract reports from a website over the course of a weekend, for example (a minimal scraper sketch appears after this list). Other tools are available as open-source intelligence tools; publicly available Facebook scraping tools are one example.
  • Social Media/Disinformation Visualization Tools: For example, Hoaxy was built with the goal to “reconstruct the diffusion networks induced by hoaxes and their corrections [i.e. fact checking articles] as they are shared online and spread from person to person. Hoaxy will allow researchers, journalists, and the general public to study the factors that affect the success and mitigation of massive digital misinformation.” RAND has compiled a larger list of Tools to Fight Disinformation Online.
  • Automatic Text Analysis Tools with functions for sentiment analysis, word frequency analysis, category frequency counts, keyword-in-context analysis, cluster analysis, and visualizations (a minimal sentiment and word-frequency sketch appears after this list). Examples include SentiWordNet, “a lexical resource for opinion mining”, and CATPAC (Lu and Stepchenkova, 2015). In past conversations, Richard de Rozario has mentioned the possibility of integrating Sintelix into the SWARM platform, for example.
  • Multi-media Qualitative and Mixed Methods Analysis Tools: NVivo, ATLAS.ti, MAXQDA, etc. [Note that these tools, at least NVivo, incorporate automatic text analysis features into the package.]
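
As a rough illustration of how lightweight the collection tools in the first bullet can be, here is a minimal web-scraper sketch using the requests and BeautifulSoup libraries. The target URL and the CSS selector are placeholders for illustration; this is not the actual scraper mentioned above.

```python
# Minimal web-scraper sketch using requests and BeautifulSoup.
# The URL and the "a.report" selector are placeholders, not a real site.
import requests
from bs4 import BeautifulSoup

def scrape_report_links(index_url: str) -> list[str]:
    """Fetch an index page and return the URLs of the reports it links to."""
    html = requests.get(index_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    # Assumes reports are linked via elements like <a class="report" href="...">
    return [a["href"] for a in soup.select("a.report[href]")]

if __name__ == "__main__":
    for link in scrape_report_links("https://example.org/reports"):
        print(link)
```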
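
Similarly, the basic text analysis functions in the third bullet can be sketched in a few lines. The toy example below uses NLTK's VADER sentiment analyser together with a simple word-frequency count; it illustrates the category of analysis only and is no substitute for tools such as SentiWordNet, CATPAC or Sintelix.

```python
# Toy sketch: sentiment scoring and word-frequency analysis with NLTK.
from collections import Counter

import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-off lexicon download

posts = [
    "This vaccine story is a complete hoax and the media is lying.",
    "Fact check: the claim in the viral post is false.",
]

analyser = SentimentIntensityAnalyzer()
for post in posts:
    scores = analyser.polarity_scores(post)  # dict of neg/neu/pos/compound scores
    print(f"{scores['compound']:+.2f}  {post}")

# Simple word-frequency count across the (tiny) corpus
words = " ".join(posts).lower().split()
print(Counter(words).most_common(5))
```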

Other tools to consider in this context might include argument visualisation and/or argument mining tools, such as those developed by the Centre for Argument Technology or by Tim van Gelder.

Augmentation also includes the construction of the crowd itself: via recruitment, the use of diverse incentives to facilitate cognitively diverse participation, and the use of analytic tools to encourage and weight effective contributions.
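
On that last point, one common approach in crowdsourced judgment is to weight each contributor's input by a measure of past accuracy. The sketch below shows such an accuracy-weighted aggregation of probability forecasts; the scheme and all numbers are illustrative assumptions, not SWARM's actual method.

```python
# Illustrative sketch: aggregating crowd forecasts weighted by past accuracy.
# The weighting scheme is an assumption for illustration, not SWARM's method.

def weighted_crowd_estimate(forecasts: dict[str, float],
                            accuracy: dict[str, float]) -> float:
    """Aggregate probability forecasts, weighting each contributor by a
    past-accuracy score in (0, 1] (higher = historically more accurate)."""
    total_weight = sum(accuracy[name] for name in forecasts)
    return sum(prob * accuracy[name] / total_weight
               for name, prob in forecasts.items())

if __name__ == "__main__":
    forecasts = {"alice": 0.8, "bob": 0.4, "carol": 0.6}  # P(claim is true)
    accuracy = {"alice": 0.9, "bob": 0.5, "carol": 0.7}   # hypothetical scores
    print(f"Weighted crowd estimate: {weighted_crowd_estimate(forecasts, accuracy):.2f}")
```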

Augmented Collective Intelligence and Assessing and Countering Information Operations

Developing the capability to crowdsource not only the detection and tracking but especially the countering of malicious or unacceptable influence operations and their narratives requires solving many technical and practical problems. A major one is countering influence operations at scale and in real time. Can Augmented Collective Intelligence (crowdsourcing combined with advanced analytics) develop and disseminate counter-narratives that effectively inoculate the body politic against the ‘viral’ spread of malicious influence narratives?

One promising model is the “Challenge” format used, for example, by the data analytics crowdsourcing site Kaggle, combined with aspects of a crowdsourcing model adapted from emergency management: the Virtual Operations Support Team (VOST) (Roth and Prior, 2019).

In this approach, teams of (at least minimally vetted) volunteers from the public would use a collaborative platform and AI-powered advanced analytics to detect and track, and then to counter, unacceptable influence operations. These teams might also be alerted to influence campaigns detected by the intelligence community (IC).

To counter influence operations, teams would develop, disseminate and track, in real time, counter-narratives aimed at neutralising the unacceptable narratives of the influence campaigns. Their actions would be governed by rules of engagement and ethical conduct. The purpose would be to counter influence operations with truthful and accurate information, both raising awareness of the malign influence campaign and shaping the narrative and discussion in a way that builds social trust and increases the cognitive security of democratic populations.

Morgan Saletta

Senior Research Associate, The Hunt Laboratory for Intelligence Research.