Philosophical puzzles of rational artificial intelligence



How rational can an artificial system be?

MIT’s new course 6.S044/24.S00 (AI and Rationality) does not aim to answer this question. Instead, it asks students to explore this and other philosophical questions through the lens of AI research. For the next generation of scholars, the concepts of rationality and agency may prove essential to AI decision-making, especially as that decision-making is shaped by how humans understand their own cognitive limits and their constrained, subjective views of what is and is not rational.

This pursuit is rooted in the deep relationship between computer science and philosophy, two fields that have long worked together to formalize what it means to make rational decisions, form rational beliefs, learn from experience, and pursue goals.

“You might imagine that computer science and philosophy are quite distant, but they’ve always intersected. The technical parts of philosophy really overlap with AI, especially early AI,” says course instructor Leslie Kaelbling, the Panasonic Professor of Computer Science and Engineering at the Massachusetts Institute of Technology, recalling computer scientist and philosopher Alan Turing. Kaelbling herself earned a bachelor’s degree in philosophy from Stanford University, though she notes that computer science was not available as a major there at the time.

Brian Hedden, a professor in the Department of Linguistics and Philosophy who co-teaches the course with Kaelbling of the Department of Electrical Engineering and Computer Science (EECS) at MIT’s Schwarzman College of Computing, says the two fields are more interconnected than people might imagine, adding, “The difference is in emphasis and perspective.”

Tools for further theoretical thinking

First offered in fall 2025, AI and Rationality was created by Kaelbling and Hedden as part of the Common Ground for Computing Education, a cross-cutting effort at the MIT Schwarzman College of Computing in which departments collaborate to develop and teach new courses and launch new programs that blend computing with other disciplines.

With more than 20 students enrolled, AI and Rationality is one of two Common Ground classes with foundations in philosophy, the other being 6.C40/24.C40 (Ethics of Computing).

While Ethics of Computing examines concerns about the impact of rapidly advancing technology on society, AI and Rationality probes the contested definition of rationality itself, considering questions such as what makes a subject rational, what it would mean for an agent to be fully autonomous and intelligent, and whether beliefs and desires can be attributed to such systems.

Because AI implementations are so broad and each use case poses different questions, Kaelbling and Hedden brainstormed topics that could provide fruitful discussion and collaboration between the two perspectives of computer science and philosophy.

“When working with students studying machine learning or robotics, it’s important for them to step back a little and examine the assumptions they’re making,” says Kaelbling. “Thinking about things from a philosophical perspective helps people back up and better understand how their work fits into real-life situations.”

Both instructors emphasize that this is not a course that provides concrete answers to the question of what it means to design rational agents.

Hedden says, “We think of this course as building a foundation. We’re not giving them a set of doctrines to learn and memorize and then apply. We’re giving them the tools to think critically as they move into their chosen career, whether it’s in academia, industry, or government.”

Rapid advances in AI are also bringing new challenges to academia. Predicting what students will need to know five years from now is an impossible task, Kaelbling believes. “What we have to do is give them tools at a higher level, such as habits of mind and mindsets, so they can approach things that we can’t quite foresee right now,” she says.

Combining disciplines and questioning assumptions

So far, the class has attracted students from a wide range of disciplines, from those with a solid foundation in computing to those interested in exploring how AI intersects with their field of study.

Over a semester of readings and discussions, students grappled with different definitions of rationality and learned to question the assumptions of their own fields.

Asked what surprised her about the course, EECS student Amanda Paredes Riobou says, “It’s like we’re taught that math and logic are the gold standard, or the truth. This class is about different examples of how humans act inconsistently with these mathematical and logical frameworks. We uncovered a whole can of worms: Are humans irrational? Or is it the machine learning systems we designed that are irrational?”

Junior Okoroafor, a doctoral student in the Department of Brain and Cognitive Sciences, appreciated the class assignments and the way the definition of a rational agent changes depending on the field. “By expressing in a formal framework what we mean by rationality in each field, it becomes clear exactly which assumptions are shared across fields and which differ,” he says.

As with all Common Ground initiatives, the co-teaching, collaborative structure of the course gave students and instructors the opportunity to hear different perspectives in real time.

This is Paredes Riobou’s third Common Ground course. “I really like the interdisciplinary aspect,” she says. “I always feel like there’s a good mix of theory and application because you have to cross disciplines.”

Okoroafor says Kaelbling and Hedden demonstrated a clear synergy between the disciplines, making it feel as though the instructors were learning alongside the class. Seeing how computer science and philosophy can inform each other has helped him understand their commonalities and the valuable perspectives each brings to intersecting issues.

He adds, “Philosophy also has a way of surprising people.”
