Learning Curve

Every semester in my introductory class, I have new students who are interested in learning about modeling and simulation. One idea that struck me and my colleagues at VMASC was how to understand these students' learning progress without bothering them with tests or labor-intensive assessments. Our new paper, presented at the 2020 Spring Simulation Conference, tackles this very challenge from an empirical perspective. Check out the presentation video, take a look at the abstract, or access our paper below. Let me know if you have any feedback.

Watch the presentation video on YouTube: https://youtu.be/eti0zj1mOQA

Abstract

This paper presents our novel efforts on automatically capturing and analyzing user data from a discrete-event simulation environment. We collected action data, such as adding/removing blocks and running a model, which enabled us to create calculated data fields and examine their relations across expertise groups. We found that beginner-level users use more blocks/edges and make more build errors compared to intermediate-level users. When examining the users with higher expertise, we note differences related to time spent in the tool, which could be linked to user engagement. The model-running failures of beginner-level users may suggest a trial-and-error approach to building a model rather than an established process. Our study opens a critical line of inquiry focused on user engagement instead of process establishment, which is the current focus in the community. In addition to these findings, we report other potential uses of such user action data and lessons learned.
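
To make the idea of calculated data fields concrete, here is a minimal, hypothetical sketch of how raw action logs from a simulation tool could be rolled up into per-user metrics such as block/edge counts and model-run failures. The event names and fields are illustrative assumptions, not the actual logging schema used in the paper.

```python
# Hypothetical sketch: rolling up raw simulation-tool action logs into
# per-user calculated fields (blocks/edges added, model-run failures).
# Event names and fields are illustrative, not the paper's actual schema.
from collections import defaultdict

# Each log entry: (user_id, expertise_level, action, outcome)
action_log = [
    ("u1", "beginner", "add_block", "ok"),
    ("u1", "beginner", "add_edge", "ok"),
    ("u1", "beginner", "run_model", "failure"),
    ("u2", "intermediate", "add_block", "ok"),
    ("u2", "intermediate", "run_model", "success"),
]

metrics = defaultdict(lambda: {"level": None, "blocks_edges": 0, "run_failures": 0})
for user, level, action, outcome in action_log:
    m = metrics[user]
    m["level"] = level
    if action in ("add_block", "add_edge"):
        m["blocks_edges"] += 1
    if action == "run_model" and outcome == "failure":
        m["run_failures"] += 1

for user, m in metrics.items():
    print(user, m)
```

Calculated fields like these can then be compared across expertise groups, as described in the abstract.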

Reference

Modeling the Modeler: An Empirical Study on How Modelers Learn to Create Simulations
H. Kavak, J.J. Padilla, S.Y. Diallo and A. Barraco
The 2020 Spring Simulation Conference, Virtual, May 18-21, 2020 [Paper]


VERIFY

With simulations constantly receiving attention to support decisions and develop strategies in response to events such as the ongoing COVID-19 pandemic, people frequently ask how they can be expected to trust findings obtained from a simulation. Many factors play into the question of credibility, including model assumptions, the use of data, and calibration, to name a few. Most people don't know how a simulation is expected to work, e.g., how it captures the way COVID-19 spreads, or how to properly interpret its outcomes. To address such concerns, simulation modelers refer to the verification and validation (V&V) of models.

To explore the first V of V&V, my collaborators at Old Dominion University’s Virginia Modeling, Analysis and Simulation Center and I have published a PLOS ONE article that presents an exploratory content analysis of over 4,000 simulation papers published between 1963 and 2015. See the abstract and reference below for details.

Abstract

Verification is a crucial process to facilitate the identification and removal of errors within simulations. This study explores semantic changes to the concept of simulation verification over the past six decades using a data-supported, automated content analysis approach. We collect and utilize a corpus of 4,047 peer-reviewed Modeling and Simulation (M&S) publications dealing with a wide range of studies of simulation verification from 1963 to 2015. We group the selected papers by decade of publication to provide insights and explore the corpus from four perspectives: (i) the positioning of prominent concepts across the corpus as a whole; (ii) a comparison of the prominence of verification, validation, and Verification and Validation (V&V) as separate concepts; (iii) the positioning of the concepts specifically associated with verification; and (iv) an evaluation of verification’s defining characteristics within each decade. Our analysis reveals unique characterizations of verification in each decade. The insights gathered helped to identify and discuss three categories of verification challenges as avenues of future research, awareness, and understanding for researchers, students, and practitioners. These categories include conveying confidence and maintaining ease of use; techniques’ coverage abilities for handling increasing simulation complexities; and new ways to provide error feedback to model users.
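
As a rough illustration of the decade-grouped comparison of concept prominence (perspective ii above), here is a hypothetical sketch that counts keyword frequencies per publication decade. The corpus snippets and terms are made up for illustration; the actual study relies on a more sophisticated, data-supported automated content analysis.

```python
# Hypothetical sketch: comparing the prominence of "verification",
# "validation", and "V&V" per publication decade. Data and terms are
# illustrative only, not the paper's actual corpus or method.
import re
from collections import Counter, defaultdict

corpus = [
    (1968, "simulation verification of the queuing model ..."),
    (1995, "verification and validation (V&V) of combat simulations ..."),
    (2012, "validation of agent-based models using empirical data ..."),
]

terms = ["verification", "validation", "v&v"]
by_decade = defaultdict(Counter)
for year, text in corpus:
    decade = (year // 10) * 10
    tokens = re.findall(r"[a-z&]+", text.lower())
    for term in terms:
        by_decade[decade][term] += tokens.count(term)

for decade in sorted(by_decade):
    print(decade, dict(by_decade[decade]))
```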

Reference

A content analysis-based approach to explore simulation verification and identify its current challenges
C.J. Lynch, S.Y. Diallo, H. Kavak and J.J. Padilla
PLOS ONE, 2020, doi:10.1371/journal.pone.0232929 [Paper]


SpringSim'20 Program

We are just ten days away from the 2020 Spring Simulation Conference. As the Program Chair of the conference, I am excited to tell you that we have an outstanding program, even though we had to shift all face-to-face events to a virtual setting. The conference will feature five tutorial sessions, one DEVS dissertation awards session, one demo session, and eighteen regular paper and panel sessions covering 71 peer-reviewed papers. You can take a look at the details of the program here. It is not too late to register and participate in the conference. Never been to SpringSim before? Contact me for a free invitation; the conference has some slots for first-time, non-author participants.


V&V as a Service

Verification and Validation (V&V) is one of the main processes in simulation development and is essential for increasing the credibility of simulations. Due to the extensive time requirement and the lack of common V&V practices, simulation projects often conduct ad-hoc V&V checks using informal methods. On Feb 7, 2020, I gave a talk in my department to discuss the building blocks of a new Verification and Validation platform that can be delivered as Software as a Service. This work-in-progress platform aims to improve V&V practices by facilitating ease of use, increasing accessibility to V&V techniques, and establishing independence from simulation paradigm and programming language. The platform relies on the seamless integration of web technologies, data management, V&V discovery and analysis techniques, and cloud computing. In the talk, I described the technical details of this platform, presented a proof-of-concept implementation, and drew a roadmap for future developments.
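
To illustrate the general idea (not the platform's actual interface), here is a hypothetical sketch of how a paradigm- and language-independent verification check could be exposed as a web service: the simulation posts its output trace as JSON and receives feedback. The endpoint, payload fields, and the simple range check are assumptions for illustration.

```python
# Hypothetical sketch of a V&V check exposed as a web service.
# The route, payload fields, and check are illustrative only;
# they are not the actual platform's API.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/verify/range-check", methods=["POST"])
def range_check():
    payload = request.get_json()
    values = payload["trace"]            # simulation output values
    lo, hi = payload["expected_range"]   # modeler-supplied bounds
    violations = [v for v in values if not (lo <= v <= hi)]
    return jsonify({"passed": not violations, "violations": violations})

if __name__ == "__main__":
    app.run()
```

Because the interface is plain HTTP and JSON, a model built in any simulation tool or programming language could call such a check, which is what paradigm and language independence amounts to in this sketch.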

If you are curious, here are the slides from my talk.


It is that time of the year again! The Spring Simulation Conference 2020 is around the corner. This time, the conference will be hosted by my institution, George Mason University. If you are interested in submitting a paper, the deadline is January 22, 2020. Accepted papers will be archived in the IEEE and ACM Digital Libraries.

SpringSim CfP