Secure and Trustworthy Large Language Models
(SeT LLM @ ICLR 2024)

May 11, Vienna, Austria

About The Workshop

The rapid advances of large language models (LLMs) are revolutionizing many long-standing natural language processing tasks, ranging from machine translation to question answering and dialog systems. However, because LLMs are built upon massive amounts of text data and subsequently applied to a variety of downstream tasks, building, deploying, and operating LLMs entails profound security and trustworthiness challenges, which have attracted intensive research efforts in recent years.

Call For Papers

The primary aim of this workshop is to identify such emerging challenges, discuss novel solutions to address them, and explore new perspectives and constructive views across the full theory/algorithm/application stack.


Potential topics include, but are not limited to:
  • Reliability assurance and assessment of LLMs
  • Privacy leakage issues of LLMs
  • Copyright protection
  • Interpretability of LLMs
  • Plagiarism detection and prevention
  • Security of LLM deployment
  • Backdoor attacks and defenses in LLMs
  • Adversarial attacks and defenses in LLMs
  • Toxic speech detection and mitigation
  • Challenges in new learning paradigms of LLMs (e.g., prompt engineering)
  • Fact verification (e.g., hallucinated generation)


All papers can be submitted through OpenReview:

Important Dates

In response to requests for an extension of the submission deadline, and in consideration of the Lunar New Year (Spring Festival), we have decided to extend the deadline for workshop submissions to February 19th.

  • Submission Open: Jan 15
  • Submission Deadline: Feb 19 (extended from Feb 12)
  • Final Decision Notification: Mar 3
  • Camera-Ready Deadline: Apr 12 (extended from Apr 3)

All times are in UTC+0.


Please format your submissions with the ICLR 2024 LaTeX style file. The review process for this workshop is double-blind: please anonymize your submissions and remove any links that may reveal your identity. Submissions are limited to 4 pages of main content, with unlimited pages for references and appendices. Accepted submissions are allowed one additional page (5 pages of main content in total) for the camera-ready version.


Submissions that are concurrently under review at other venues are acceptable. All accepted papers are non-archival and will be made publicly available on OpenReview without official proceedings or reviews. For any questions, please contact us at

Reviewer Recruitment

If you are interested in reviewing submissions, please fill out this form.

Invited Speakers

More speakers coming soon.

Event Schedule



This workshop is organized by

Yisen Wang

Peking University

Ting Wang

Stony Brook University

Jinghui Chen

Penn State University

Chaowei Xiao

University of Wisconsin, Madison