Next-Gen Philosophical Foundations of AI (Afternoon Workshop)

TBD
University of Missouri, Middlebush 309

Schedule:

2:00 – 2:10 pm: 

Introduction

Mike Schneider

2:10 – 3:10 pm:

Is logic a suitable foundation for modeling real human reasoning?

Gaia Belardinelli, Stanford University

Abstract: Successful human–AI interaction depends on the AI reliably modeling the complexities of human beliefs and reasoning. While symbolic methods such as logic excel at advanced tasks like robust Theory of Mind reasoning, where ML can struggle, logic is often dismissed as purely normative, prescribing how agents ought to reason rather than how people actually think. This view raises doubts about its suitability as a foundation for AI systems intended to engage with real human reasoning.

The aim of this talk is to show that these doubts are misplaced: logic is fundamentally just a mathematical framework, and as such it is capable of both normative and descriptive applications. After introducing standard epistemic logic, we will discuss where it succeeds and where it falls short in representing how humans form and revise their beliefs. Then, we will see how logical systems can incorporate cognitively realistic features, such as limited attention, awareness, and implicit biases, to more accurately capture human reasoning and belief dynamics. The talk concludes with open questions comparing logic with other symbolic methods.

3:10 – 3:20 pm:

Break

3:20 – 4:20 pm:

Massive programs and monumental proofs

Will Stafford, Kansas State University

Abstract: Computer-aided program verification is now a reality. Applications of the technology already include the verification of software used in some aircraft. But computer-aided verification inherits a host of epistemological problems found with computer-aided proofs. The stakes, however, are much higher. This talk will outline the concerns and argue that we can find solutions to some of them by examining how very large code bases with hundreds of contributors are managed.

4:20 – 4:30 pm:

Break

4:30 – 5:30 pm:

Data quality in the machine learning age

Kino Zhao, Simon Fraser University

Abstract: Data quality, or the lack thereof, is often blamed for inferential failures. The garbage in, garbage out (GIGO) principle serves to remind us that no amount of fancy mathematical footwork can save a good model from bad data. Yet there exists remarkably little consensus on the precise nature of good data. In this talk, I discuss several popular accounts of data quality -- notably representationalism, contextualism, and fit-for-purpose accounts -- and argue that all of them make assumptions that are not generally true in the context of opaque ML training. I then revisit the question "what is the point of data quality?" and argue that the inferential opacity typical of ML algorithms provides new reasons to understand data quality from the producer's perspective -- that is, independently of data's ability to support inference.