In the era of AI-driven transformations, foundation models (FMs), like large-scale language and vision models, have become pivotal in various applications, from natural language processing to computer vision. These models, with their immense capabilities, offer a plethora of benefits but also introduce challenges related to reliability, transparency, and ethics. The workshop on reliable and responsible FMs (R2-FM) delves into the urgent need to ensure that such models are trustworthy and aligned with human values. The significance of this topic cannot be overstated, as the real-world implications of these models impact everything from daily information access to critical decision-making in fields like medicine and finance. Stakeholders, from developers to end-users, care deeply about this because the responsible design, deployment, and oversight of these models dictate not only the success of AI solutions but also the preservation of societal norms, equity, and fairness. Some of the fundamental questions that this workshop aims to address are:
We invite submissions from researchers in the fields of reliability and responsibility pertaining to foundation models. Additionally, we welcome contributions from scholars in the natural sciences (such as physics, chemistry, and biology) and social sciences (including pedagogy and sociology) whose work calls for reliable and responsible foundation models. In summary, our topics of interest include, but are not limited to:
For any questions, please contact us at r2fm2024@googlegroups.com.
Submission deadline: February 10, 2024, AOE (extended from February 3, 2024, AOE)
Notification to authors: March 5, 2024, AOE (updated from March 3, 2024, AOE)
Final workshop program and camera-ready deadline: April 12, 2024, AOE (updated from April 3, 2024, AOE)
This is the tentative schedule of the workshop. All times are listed in Central European Time (CET).
08:50 - 09:00 | Introduction and opening remarks |
09:00 - 09:30 | Invited Talk 1: Lilian Weng |
09:30 - 10:00 | Invited Talk 2: Been Kim |
10:00 - 10:15 | Contributed Talk 1: Watermark Stealing in Large Language Models |
10:15 - 11:15 | Poster Session 1 |
11:15 - 11:45 | Invited Talk 3: Denny Zhou |
11:45 - 12:15 | Invited Talk 4: Mor Geva |
12:15 - 13:30 | Break |
13:30 - 14:00 | Invited Talk 5: Andrew Wilson |
14:00 - 14:30 | Invited Talk 6: Weijie Su |
14:30 - 14:45 | Contributed Talk 2: Value Augmented Sampling: Predict Your Rewards To Align Language Models |
14:45 - 15:00 | Contributed Talk from AISI |
15:00 - 15:45 | Poster Session 2 |
15:45 - 16:15 | Invited Talk 7: James Zou |
16:15 - 16:30 | Contributed Talk 3: Questioning the Survey Responses of Large Language Models |
16:30 - 17:00 | Invited Talk 8: Nicolas Papernot |
Invited speakers:
Lilian Weng (OpenAI)
Been Kim (Google DeepMind)
Denny Zhou (Google DeepMind)
Mor Geva (Tel Aviv University)
Andrew Wilson (New York University)
Weijie Su (University of Pennsylvania)
James Zou (Stanford University)
Nicolas Papernot (University of Toronto)