Janar Pekarev:

My fellowship at Stanford’s Center for International Security and Cooperation (CISAC) was an extraordinary and enriching period that significantly advanced my research and professional development. Stanford provided a uniquely stimulating environment, a true Mecca for scholars seeking cutting-edge research, unparalleled access to excellent facilities, expert guidance, and an inspiring scholarly community. I sincerely thank Vabamu for making this invaluable opportunity possible. The dedicated support from CISAC personnel (Tracey, Kelly, Harold, and Andrew) was instrumental, and I am especially thankful to Liisi from Stanford Libraries for helping me integrate seamlessly both on and off campus. This support allowed me to immerse myself entirely in research without distraction.

From the very start, Stanford’s academic landscape captivated me. Spending anything less than 9 a.m. to 9 p.m. on campus was simply not an option: so much fascinating activity was constantly unfolding that every day brought a fresh fear of missing out. I eagerly audited numerous interdisciplinary courses, including “Exploring AI,” “Minds and Machines,” “The Social & Economic Impact of AI,” “Exploring Computing,” “Ethics and Safety of AI,” “Governing AI: Law, Policy, and Institutions,” and “AI in Medicine and Healthcare Ventures.” These and many other courses profoundly enhanced my understanding of AI’s multifaceted impact, deepening both my research and my teaching capabilities. Experiencing Stanford’s engaging teaching methodologies firsthand provided valuable insights that will significantly influence how I structure my classes in Estonia.

My primary research objective, investigating AI’s potential to enhance or compromise decision-making integrity in military operations, progressed considerably during my stay. I developed a human-machine teaming framework tailored explicitly to AI-augmented military operations and delivered through an end-user interface. The framework emphasizes trust calibration between human decision-makers and AI systems, integrating essential human oversight with AI’s predictive strengths. Using a Random Forest classifier, chosen for its accuracy, transparency, and interpretability, I was able to balance machine learning with rule-based overrides to ensure robust ethical and operational adherence.
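To illustrate the general pattern of pairing a Random Forest classifier with rule-based overrides, the following is a minimal sketch in Python with scikit-learn. All feature names, thresholds, and override rules here are illustrative assumptions for exposition, not the framework’s actual model or data.

```python
# Sketch: a Random Forest recommendation gated by hard rule-based overrides.
# Features, thresholds, and rules are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic training data: [target_confidence, civilian_proximity, threat_level]
X = rng.random((200, 3))
# Toy labeling rule: "engage" only when threat is high and civilian proximity low
y = ((X[:, 2] > 0.6) & (X[:, 1] < 0.4)).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def recommend(features):
    """Return (decision, reason): rule-based overrides run before the model."""
    target_confidence, civilian_proximity, threat_level = features
    # Overrides encode hard constraints that the classifier may never outvote:
    if target_confidence < 0.5:
        return "hold", "override: distinction - target identity not established"
    if civilian_proximity > 0.8:
        return "hold", "override: proportionality - civilian risk too high"
    # Otherwise defer to the classifier, exposing its probability so the human
    # operator can calibrate trust in the recommendation.
    proba = model.predict_proba(np.array([features]))[0, 1]
    return ("engage" if proba >= 0.5 else "hold"), f"model confidence {proba:.2f}"

print(recommend([0.3, 0.2, 0.9]))  # distinction override fires; model not consulted
print(recommend([0.9, 0.1, 0.9]))  # no override fires; falls through to the model
```

The key design point is ordering: the deterministic overrides are evaluated first, so ethical constraints bound the model rather than compete with it.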

I actively participated in numerous high-level conferences featuring distinguished speakers such as the chief scientific officers of Microsoft, Google, and OpenAI. These experiences gave me an invaluable opportunity to absorb cutting-edge insights that significantly influenced my research trajectory, and I continue to revisit the extensive notes I took even now. Additionally, the regular seminars and workshops hosted by Stanford’s Freeman Spogli Institute (FSI) and CISAC were exceptionally enriching. Meeting and engaging with prominent scholars such as Rose Gottemoeller, former Deputy Secretary General of NATO, renowned thinker Francis Fukuyama, and numerous other remarkable researchers offered fresh perspectives and profound insights that considerably broadened my intellectual horizons and professional network.

My public presentation, titled “Reimagining Military AI: Cognitive Interfaces in Human-(e)Machine,” addressed the ethical, operational, and cognitive dimensions of AI integration in military contexts. Specifically, it outlined how my research integrates a machine learning model and an override-rule module within an end-user interface, operationalizing the key laws-of-war principles of distinction, proportionality, and military necessity through scenario simulations. The stepwise experimental design examines human-machine interaction dynamics, particularly how AI transparency impacts human trust, decision-making biases, and ethical judgments under uncertain conditions. Although still conceptual, the work paves the way for extensive empirical validation and interdisciplinary collaboration, enhancing our understanding of adaptive, transparent, and ethically grounded human-machine teaming in military operations.
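The stepwise design described above, varying how much of the AI’s reasoning is exposed and measuring the effect on human reliance, can be sketched as a toy simulation. The condition names, the simulated-operator model, and the weights below are all illustrative assumptions, not results from the actual study.

```python
# Sketch: scenario simulations comparing transparency conditions.
# Assumption: greater transparency makes a (simulated) operator's acceptance
# track the AI's confidence more closely, i.e. better-calibrated trust.
import random

random.seed(1)

CONDITIONS = ["opaque", "confidence_shown", "rationale_shown"]

def simulated_operator(ai_confidence, condition):
    """Toy operator: accept probability blends a 50/50 guess with AI confidence,
    weighted by how much transparency the condition provides (hypothetical)."""
    weight = {"opaque": 0.2, "confidence_shown": 0.6, "rationale_shown": 0.9}[condition]
    p_accept = (1 - weight) * 0.5 + weight * ai_confidence
    return random.random() < p_accept

def run_trials(n=2000):
    """For each condition, measure the calibration gap: acceptance of
    high-confidence recommendations minus acceptance of low-confidence ones."""
    results = {}
    for cond in CONDITIONS:
        high = sum(simulated_operator(0.9, cond) for _ in range(n))
        low = sum(simulated_operator(0.1, cond) for _ in range(n))
        results[cond] = (high - low) / n  # larger gap = better-calibrated reliance
    return results

print(run_trials())  # gap grows as transparency increases
```

In an actual experiment the simulated operator would be replaced by human participants, but the harness shows how transparency can be treated as an independent variable and trust calibration as the measured outcome.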

During my fellowship, I contributed significantly to the scholarly literature. My primary article, titled “Exploration of Trust and Decision-Making in AI-Augmented Military Domain: A Framework for Human-Machine Teaming,” will be published in the journal “Artificial Intelligence and Applications (AIA),” with myself as corresponding author. Additionally, I am co-writing a second article, “A Systematic Literature Review of User’s Trust in Autonomous Military Technologies,” further extending my research impact in the field.

All in all, my time at Stanford was transformative, an unmatched experience in conducting research among the brightest minds. It was a deeply fulfilling academic and professional milestone, enriched profoundly by vibrant scholarly interactions, innovative research opportunities, and exceptional learning experiences. The visionary support of the Kistler-Ritso Estonian Foundation demonstrates an extraordinary commitment to scholarly excellence, making this fellowship a truly defining chapter in my academic journey.