Evaluating Reasoning in Intelligence Analysis: Training & Study

Data collection for this study concluded on 12 July 2021, and participation has now closed.

This training covers how to evaluate arguments, identify reasoning flaws, and provide useful suggestions on how people can improve their reasoning.

Developed with funding from the US Intelligence Community, the training is designed for analysts, but the skills taught are useful in assessing any type of reasoning, from business cases to online debates.

The training proceeds through practice evaluating fictional intelligence reports and a test of participants’ skills on a rigorously developed set of questions called The Intelligence Evaluation Task. Our training participants have found it challenging, useful and stimulating. After completing all 18 evaluations in The Intelligence Evaluation Task, participants receive a Certificate of Completion. The training is part of a research study on improving the evaluation of reasoning in intelligence, and participation helps advance our understanding of this important question.

Advanced Online Training Package

This advanced online training package on Evaluating Analytic Reasoning covers:

  • The fundamental concepts of reasoning evaluation
  • How to identify the most important and common reasoning flaws that can prevent an argument from establishing its primary finding
  • How to apply the US Office of the Director of National Intelligence’s tradecraft standards to analytic reasoning

The training package concludes with a study comprising a challenging set of fictional analytic products (reports and briefings) validated by a panel of experts. The study tests participants’ skills and measures their learning outcomes.


Developed by Experts

Dr Tim van Gelder
Director, Hunt Lab
Dr Tim van Gelder is the Director of the Hunt Laboratory for Intelligence Research in the School of BioSciences, University of Melbourne. He is an applied epistemologist with a background in philosophy and cognitive science. His primary areas of expertise are reasoning and collective intelligence. He has published widely, including in Behavioral and Brain Sciences, Cognitive Science, Journal of Philosophy and Journal of Public Deliberation. Prior to joining the University, he had extensive experience assisting large organisations with analytic capability development and expert elicitation.
Ashley Barnett
Research Associate, Hunt Lab
Ashley Barnett is a PhD candidate at the University of Melbourne writing on the evaluation of reasoning in intelligence analysis, with a focus on the Reasoning Stress Test, which he formulated for measuring the quality of reasoning in reports. He has previously researched how to teach argumentation to intelligence analysts, which involved developing and delivering training courses to professional analysts, and has developed questions for a critical thinking skills test for the intelligence community. All these areas of expertise have gone into designing the materials and challenging questions that form part of this training.
Tamar Primoratz
Research Associate, Hunt Lab
Tamar Primoratz has a Masters in Publishing and Communications and is currently completing a Masters in Philosophy at the University of Melbourne. During her work at the Hunt Lab she has overseen the evaluation of the reasoning process, based on the ODNI’s rating scale, for several of the Hunt Lab’s collective reasoning studies, and has developed validation studies on quality-of-reasoning metrics as well as training materials for intelligence analysts.
Yanna Rider
Expert consultant, yannarider.com
Yanna Rider has over ten years’ experience designing and delivering thinking-related training in Australia, the UK and the US, working primarily with intelligence agencies. She holds a PhD in Philosophy from The University of Melbourne for work on cognition (mental representation), and is a member of the International Association for Intelligence Education (IAFIE).
Lisa Jane Young
Intelligence Principal, Hunt Lab (former)
Lisa Jane (‘LJ’) Young has held senior intelligence roles in law enforcement, the regulatory sector, the private sector and academia for over 20 years. Her particular interests are in building analytical capability, strategic intelligence and the management of intelligence functions. At the Hunt Lab, LJ developed a security policy and supporting framework for engagement with intelligence agencies and provided advice on current intelligence practice. She is the co-author of Managing Intelligence: The Art of Influence (Federation Press, 2010) and holds Masters degrees in both Criminal Intelligence and Public Policy.

Developed for professional intelligence analysts, available to you.


Continuous Improvement

Based on feedback from 200+ training participants, we’ve worked hard to continually improve our learning materials and delivery.

Here’s what people are saying about the training and study:

“GREAT training! I loved it. Looking forward to applying what I learnt.”
“I feel that I am significantly better able to articulate my dissatisfaction with an information product (not just an intelligence product).”
“I learnt a LOT that I did not previously understand. The training was challenging but GREAT. So useful.”


Frequently Asked Questions

  • Who is this training for?

    While the training is being developed for professional intelligence analysts, the principles and methods can be applied to reasoning in general.

    It is particularly useful for anyone who engages in complex analysis, or needs to provide useful feedback to people who do.

  • Why is the training being developed?

    The Evaluating Analytic Reasoning training is being developed at The University of Melbourne by the SWARM project and is funded by The Intelligence Advanced Research Projects Activity (IARPA), an organisation within the US Office of the Director of National Intelligence.

    Our goal with this training and study is to develop a method for evaluating reasoning that can be used:

    1. as a research tool for measuring the effectiveness of different reasoning procedures and techniques, and
    2. by analysts to review their own work and the work of their colleagues.

    In order to achieve this goal, we’ve designed a study to validate the method. We give participants some training in assessing reasoning, and then test their ability to distinguish between good and bad reasoning.

  • What's the time commitment?

    The training includes a substantial amount of valuable learning material; it can also be delivered to the intelligence community as a professional development workshop.

    We deliver the training to you online so that it can be completed at your own pace.

  • What does the study involve?

    The study uses two methods of evaluating reasoning. Both were designed primarily for intelligence analysis, but the principles you will learn apply to all types of reasoning.

    Your participation involves:

    • Random allocation to one of the two reasoning evaluation methods
    • Training on how to use the method, including some simple review questions
    • A series of challenging fictional intelligence products (i.e. reports or assessments) to evaluate; in previous testing, we have found many of these very difficult to assess.
    • Expert responses to each question.

    Once participants have completed all the training on the first method, they are given access to the training material for the other method, which they may complete if they choose.

  • What feedback is provided?

    During the training and evaluation, participants can compare their answers to those provided by a group of experts and see the experts’ justifications.

    In addition, after participants complete their training and evaluation, we send them feedback on how they performed, including how they ranked relative to others who have completed the training.

  • How does contribution help research?

    Participants’ contributions to this important research will help develop standards and tools for improving analytical reasoning skills and performance in the intelligence community, in Australia and overseas.