The fifth and final invited lecture of my Verification and Validation of Models course will be on Monday (Nov 16, 2020) at 8:00 PM EST. Dr. Amir Ghasemian, Postdoctoral Research Fellow at Temple University, will give a lecture titled “Limits of model selection and link prediction in complex networks”. If you’re interested in joining the lecture, please email me (see the flyer below) to receive the Zoom invitation link.

Ghasemian Flyer

Lecture summary:

A common graph mining task is community detection, which seeks an unsupervised decomposition of a network into groups based on statistical regularities in network connectivity. Although many such algorithms exist, the No Free Lunch theorem for community detection implies that no algorithm can be optimal across all inputs. However, little is known in practice about how different algorithms over- or underfit real networks, or how to reliably assess such behavior across algorithms. In the first part of my talk, I will present a broad investigation of over- and underfitting across 16 state-of-the-art community detection algorithms applied to a novel benchmark corpus of 572 structurally diverse real-world networks. We find that (i) algorithms vary widely in the number and composition of communities they find, given the same input; (ii) algorithms can be clustered into distinct high-level groups based on similarities of their outputs on real-world networks; (iii) algorithmic differences induce wide variation in accuracy on link-based learning tasks; and (iv) no algorithm is always the best at such tasks across all inputs. We also quantify each algorithm’s overall tendency to over- or underfit to network data using a theoretically principled diagnostic, and discuss the implications for future advances in community detection.
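To make finding (i) concrete, here is a minimal sketch (my own illustration, not the benchmark corpus from the talk) that applies two off-the-shelf community detection algorithms to the same small network and compares their outputs; it assumes the networkx and scikit-learn libraries are available.

```python
# Sketch: two community detection algorithms, same input, different partitions.
import networkx as nx
from networkx.algorithms import community
from sklearn.metrics import normalized_mutual_info_score

G = nx.karate_club_graph()

# Two off-the-shelf algorithms applied to the same network.
greedy = community.greedy_modularity_communities(G)
labelprop = list(community.label_propagation_communities(G))  # randomized

def to_labels(partition, n):
    """Convert a list of node sets into a per-node label vector."""
    labels = [0] * n
    for cid, nodes in enumerate(partition):
        for v in nodes:
            labels[v] = cid
    return labels

a = to_labels(greedy, G.number_of_nodes())
b = to_labels(labelprop, G.number_of_nodes())

print("communities found:", len(greedy), "vs", len(labelprop))
print("partition similarity (NMI):", normalized_mutual_info_score(a, b))
```

Even on this 34-node toy graph, the two methods typically return different numbers of communities, which is the small-scale version of the variation reported across the 572-network corpus.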

Given (iii) and (iv), one can ask whether different link prediction methods, or families of methods, are capturing the same underlying signatures of “missingness.” For instance, is there a single best method or family for all circumstances? If not, how does missing-link predictability vary across methods and scientific domains (e.g., in social vs. biological networks) or across network scales? Additionally, how close to optimality are current link prediction methods? In the second part of my talk, by analyzing 203 link prediction algorithms applied to 550 diverse real-world networks, I will show that no predictor is best or worst overall. I will also show that combining these many predictors into a single state-of-the-art algorithm achieves nearly optimal performance on both synthetic networks, where optimality is known, and real-world networks. Not all networks are equally predictable, however: we find that social networks are easiest to predict, while biological and technological networks are hardest.
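As a toy illustration of this stacking idea (my own sketch, not the 203-predictor ensemble from the talk), the following combines three classic topological predictors into a single meta-predictor with logistic regression; the hold-out scheme and the use of networkx, scikit-learn, and NumPy are assumptions for the example.

```python
# Sketch: "stack" several topological link predictors into one meta-predictor.
import random
import networkx as nx
import numpy as np
from sklearn.linear_model import LogisticRegression

random.seed(0)
G = nx.karate_club_graph()

# Hold out 20% of edges as positives; sample an equal number of non-edges as negatives.
edges = list(G.edges())
held_out = random.sample(edges, k=len(edges) // 5)
G_train = G.copy()
G_train.remove_edges_from(held_out)
non_edges = random.sample(list(nx.non_edges(G)), k=len(held_out))
pairs = held_out + non_edges
y = np.array([1] * len(held_out) + [0] * len(non_edges))

def features(graph, ebunch):
    """Stack three classic predictors into one feature matrix."""
    preds = [nx.jaccard_coefficient, nx.resource_allocation_index,
             nx.preferential_attachment]
    cols = [[s for _, _, s in p(graph, ebunch)] for p in preds]
    return np.column_stack(cols)

X = features(G_train, pairs)
clf = LogisticRegression().fit(X, y)          # the stacked meta-predictor
print("training accuracy:", clf.score(X, y))  # toy evaluation only
```

A proper study would evaluate on a separate test split; the point here is only the mechanics of feeding many predictors' scores into one learned combination.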


The fourth invited lecture for the new verification and validation of models course is on Monday (Nov 09, 2020) at 8:00 PM EST. This week, Dr. Philippe Giabbanelli, Associate Professor at Miami University (OH), will give a lecture titled “How to validate subjective perspectives? A computational examination of Fuzzy Cognitive Maps and Agents’ Cognitive Architectures”. If you’re interested in joining the lecture, please email me (see the flyer below) to receive the Zoom invitation link.

Giabbanelli Flyer

Lecture summary:

Humans routinely make decisions under uncertainty and occasionally express contradictory beliefs. This complexity is often lost in an agent-based model, in which modelers equip agents with cognitive architectures that may over-simplify behaviors or lack transparency. The technique of Fuzzy Cognitive Mapping (FCM) allows modelers to externalize the perspectives of a person or group into an aggregate model consisting of a causal map and an inference engine. An FCM may be used as the ‘virtual brain’ of an agent, thus providing rich human behaviors that are transparently acquired from participants. This talk will focus on validating FCMs and hybrid FCM/ABM models.
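For readers unfamiliar with the inference engine mentioned above, here is a minimal sketch of a standard FCM update rule: concept activations are repeatedly pushed through a signed causal weight matrix and squashed until the map settles into a steady state. The three concepts and causal weights are invented for illustration and are not from the talk.

```python
# Sketch: FCM inference as iterated matrix propagation with a squashing function.
import numpy as np

def sigmoid(x, lam=1.0):
    return 1.0 / (1.0 + np.exp(-lam * x))

# W[i, j] = causal influence of concept i on concept j, in [-1, 1].
concepts = ["stress", "exercise", "sleep quality"]
W = np.array([
    [ 0.0, -0.4, -0.6],   # stress reduces exercise and sleep quality
    [ 0.0,  0.0,  0.5],   # exercise improves sleep quality
    [-0.5,  0.0,  0.0],   # good sleep reduces stress
])

a = np.array([0.9, 0.2, 0.3])         # initial activations elicited from a participant
for step in range(100):
    a_next = sigmoid(a + a @ W)       # common update: self-memory plus causal input
    if np.allclose(a, a_next, atol=1e-6):
        break                         # map has reached a fixed point
    a = a_next

print(dict(zip(concepts, a.round(3))))
```

The steady-state activations are what an agent would "read off" when an FCM serves as its virtual brain, which is also what makes validation tractable: the map, its weights, and its fixed points can all be inspected directly.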


The third invited lecture for the new verification and validation of models course is on Monday (Oct 19, 2020) at 7:20 PM EDT. This week, Dr. Bilal Kartal, Founder in Residence at Entrepreneur First, will give a lecture titled “Safer Deep Reinforcement Learning and Auxiliary Tasks”. If you’re interested in joining the lecture, please email me (see the flyer below) to receive the Zoom invitation link.

Kartal Flyer

Lecture summary:

Safe reinforcement learning has many variants, and it is still an open research problem. In this talk, I will describe different auxiliary tasks that improve learning, focusing on the use of action guidance from a non-expert demonstrator to avoid catastrophic events in a domain with sparse, delayed, and deceptive rewards: the previously proposed multi-agent benchmark Pommerman. I will present a framework in which a non-expert simulated demonstrator, e.g., a planning algorithm such as Monte Carlo tree search with a small number of rollouts, can be integrated into asynchronous distributed deep reinforcement learning methods. Compared to vanilla deep RL algorithms, our proposed methods both learn faster and converge to better policies on a two-player mini version of the Pommerman game and on Atari games.
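A minimal PyTorch sketch of the action-guidance idea (my paraphrase of the general technique, not the paper’s exact loss or architecture) is shown below: alongside the usual policy-gradient term, the agent’s policy is nudged toward the action a non-expert demonstrator, such as shallow MCTS, chose in the same state. The network, weighting, and dimensions are illustrative assumptions.

```python
# Sketch: policy-gradient loss plus an imitation-style action-guidance term.
import torch
import torch.nn.functional as F

policy_net = torch.nn.Linear(16, 6)        # toy policy: 16-dim state, 6 actions
opt = torch.optim.Adam(policy_net.parameters(), lr=1e-3)

state = torch.randn(1, 16)                 # observed state
action, advantage = 2, 0.7                 # action taken and its advantage estimate
demo_action = torch.tensor([4])            # what the shallow-MCTS demonstrator chose

logits = policy_net(state)
log_probs = F.log_softmax(logits, dim=-1)

pg_loss = -log_probs[0, action] * advantage        # standard policy-gradient term
aux_loss = F.cross_entropy(logits, demo_action)    # action-guidance (imitation) term
loss = pg_loss + 0.5 * aux_loss                    # 0.5 is an assumed weighting

opt.zero_grad()
loss.backward()
opt.step()
```

Because the demonstrator only has to be better than random in dangerous states, even a small-rollout planner can steer the learner away from catastrophic actions early in training.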


The second invited lecture for the new verification and validation of models course is this coming Monday (Oct 12, 2020) at 7:20 PM EDT with Dr. William (Bill) Kennedy from GMU. Dr. Kennedy will give a lecture titled “Verification and Validation within Cognitive Modeling of Individuals”. If you’re interested in joining the lecture, please email me (see the flyer below) to receive the Zoom invitation link.

Kennedy Flyer

Lecture summary:

This presentation will be on the practice of V&V within the field of Cognitive Science, and in particular within cognitive modeling. The presentation will start with a demonstration of a cognitive phenomenon and a cognitive model of the human cognition behind it. Cognitive modeling will be introduced along with its relation to Artificial Intelligence. The practice of V&V in cognitive modeling relies on cognitive architectures and multiple lines of reasoning to support its models. The application of V&V to the ACT-R cognitive architecture and a specific cognitive model of intuitive learning will be discussed.


The Fall semester is in full swing, and we are starting the first invited lecture for the new verification and validation of models course. Dr. Christopher J. Lynch from VMASC will give a lecture on Feedback-Driven Runtime Verification on Monday (Sep 21) at 7:20 PM EDT. If you’re interested in joining the lecture, please email me (see the flyer below) to receive the invitation link.

Lynch Flyer

Lecture summary:

Runtime verification facilitates error identification during individual simulation runs to increase confidence, credibility, and trust. Existing approaches effectively convey history, state information, and flow-of-control information. These approaches are common in practice due to shallow learning curves, lower mathematical requirements, and interpretation aided by observable feedback that provides context about the time and location of an error. However, runtime techniques lack a consistent representation of a model’s requirements, and the attention-demanding process of monitoring the run to identify and interpret errors falls on the user. As a result, these techniques lack consistent interpretation, do not scale well, and are time-intensive for identifying errors.

To address these shortcomings, the lightweight, feedback-driven runtime verification (LFV) approach provides a formal specification that facilitates clear and consistent mappings between model components, simulation specifications, and observable feedback. These mappings are defined by simulation users, without requiring knowledge of formal mathematics, using the information available to them about how they expect a simulation to operate. Users specify values for simulation components’ properties to represent acceptable operating conditions and assign feedback to represent any violations. Any violation within a run triggers the corresponding feedback and directs users’ attention toward the appropriate simulation location while tracing back to the assigned specification. A formal specification adds transparency to error specification, objectivity to evaluation, and traceability to outcomes. A two-group randomized experiment reveals a statistically significant increase in precision (i.e., the proportion of correctly identified errors out of the total errors identified) and recall (i.e., the proportion of correctly identified errors out of the total errors present) for participants using the LFV. The LFV opens new research areas for runtime verification of large-scale and hybrid simulations, occluded simulation components, and exploring the role of different feedback media in support of verification.
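To give a feel for the workflow described above, here is a minimal sketch of the core idea as I read it from the abstract: users declare acceptable value ranges for component properties and attach feedback, and a monitor fires that feedback the moment a run violates a specification, pointing back to the component and rule. The class names and API are my own illustration, not the actual LFV or CLOUDES interface.

```python
# Sketch: bounds-based runtime monitor with user-assigned feedback.
from dataclasses import dataclass

@dataclass
class Spec:
    component: str        # which simulation component the rule applies to
    prop: str             # which property of that component
    low: float            # acceptable operating range, inclusive
    high: float
    feedback: str         # message shown to the user on violation

specs = [
    Spec("Queue1", "length", 0, 50, "Queue1 overflow: check the arrival rate"),
    Spec("Server1", "utilization", 0.0, 1.0, "Server1 utilization outside [0, 1]"),
]

def monitor(component, prop, value, time):
    """Called on every state update during the run."""
    for s in specs:
        if s.component == component and s.prop == prop \
                and not (s.low <= value <= s.high):
            # Violation: trigger the user-assigned feedback, traced to the spec.
            print(f"t={time}: {s.feedback} (observed {prop}={value})")

# Example: a state update at simulated time 12.5 trips the first rule.
monitor("Queue1", "length", 63, 12.5)
```

The formality lives in the specifications themselves, while the user only ever reasons about component properties and the feedback they chose, which is what keeps the mathematical barrier low.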

This lecture provides a brief historical background on Runtime Verification in M&S, an overview of how the LFV addresses existing shortcomings, and hands-on examples of implementing and interpreting runtime verification of Discrete Event Simulations using the LFV. You are encouraged to create an account with the CLOUDES simulation platform so that you can participate in the hands-on portion of the lecture. Accounts can be created at https://beta.cloudes.me/.