About The Workshop
The rapid advances of large language models (LLMs) are revolutionizing many long-standing natural language processing tasks, ranging from machine translation to question answering and dialogue systems. However, as LLMs are often built upon massive amounts of text data and subsequently applied in a variety of downstream tasks, building, deploying, and operating LLMs entails profound security and trustworthiness challenges, which have attracted intensive research efforts in recent years.
Call For Papers
The primary aim of this workshop is to identify such emerging challenges, discuss novel solutions to address them, and explore new perspectives and constructive views across the full theory/algorithm/application stack.
Topics
Potential topics include but are not limited to:
- Reliability assurance and assessment of LLMs
- Privacy leakage issues of LLMs
- Copyright protection
- Interpretability of LLMs
- Plagiarism detection and prevention
- Security of LLM deployment
- Backdoor attacks and defenses in LLMs
- Adversarial attacks and defenses in LLMs
- Toxic speech detection and mitigation
- Challenges in new learning paradigms of LLMs (e.g., prompt engineering)
- Fact verification (e.g., detecting hallucinated generation)
Submission
All papers can be submitted through OpenReview: https://openreview.net/group?id=ICLR.cc/2024/Workshop/SeT_LLM
Important Dates
In response to requests for an extension of the submission deadline, and in consideration of the Lunar New Year (Spring Festival), we have decided to extend the deadline for workshop submissions to February 19th.
- Submission Open: Jan 15
- Submission Deadline: Feb 19 (extended from Feb 12)
- Final Decision Notification: Mar 3
- Camera Ready Deadline: Apr 12 (extended from Apr 3)
All times are in UTC+0.
Format
Please format your submissions using the ICLR 2024 LaTeX style file. The review process for this workshop is double-blind: please anonymize your submissions and remove any links that may reveal your identity. Submissions are limited to 4 pages of main content, with unlimited pages for references and appendices. Accepted submissions are allowed 1 additional page (5 pages in total for main content) for the camera-ready version.
Policies
Submissions that are concurrently under review at other venues are acceptable. All accepted papers are non-archival: they will be made publicly available on OpenReview, together with their reviews, without official proceedings. For any questions, please contact us at set-llm-admin@googlegroups.com.
Reviewer Recruitment
If you are interested in reviewing submissions, please fill out this form.
Invited Speakers
More speakers coming soon.
Bo Li
University of Chicago
Tatsunori Hashimoto
Stanford
Cho-Jui Hsieh
UCLA
Robin Jia
USC
Graham Neubig
CMU
Chaowei Xiao
University of Wisconsin, Madison
Event Schedule
TBD
Organizers
This workshop is organized by
Yisen Wang
Peking University
Ting Wang
Stony Brook University
Jinghui Chen
Penn State University
Chaowei Xiao
University of Wisconsin, Madison
Jieyu Zhao
USC
Nanyun Peng
UCLA
Anima Anandkumar
Caltech