Visa

All the necessary visa information is available at https://icpr2024.org/visa.html.

Call for papers

About

We are witnessing the emergence of an “AI economy and society” in which AI technologies increasingly impact many aspects of business and everyday life. We read with great interest about recent advances in AI medical diagnostic systems and self-driving cars, and about the ability of AI technology to automate many business decisions such as loan approvals, hiring, and policing. However, as recent experience shows, AI systems may produce errors, can exhibit overt or subtle bias, may be sensitive to noise in the data, and often lack technical and judicial transparency and explainability. These shortcomings have been documented not only in the scientific literature but also, importantly, in the general press (accidents with self-driving cars; biases in AI-based policing, hiring, and loan systems; biases in face recognition systems for people of color; seemingly correct medical diagnoses later found to have been made for the wrong reasons; etc.). They raise many ethical and policy concerns not only in technical and academic communities but also among policymakers and the general public, and they will inevitably impede wider adoption of AI in society.

The problems related to Ethical AI are complex and broad, encompassing not only technical issues but also legal, political, and ethical ones. One of the key components of Ethical AI systems is explainability or transparency, but other issues, such as detecting bias, the ability to control outcomes, and the ability to objectively audit AI systems for ethics, are also critical for the successful application and adoption of AI in society. Consequently, explainable and Ethical AI are highly active topics in the technical community as well as in the business, legal, and philosophy communities. Many workshops in this field are held at top conferences, and we believe ICPR should address this topic broadly with a focus on its technical aspects. Our workshop aims to address the technical aspects of explainable and ethical AI in general, including related applications and case studies, with the goal of tackling these important problems from a broad technical perspective.

Topics

The topics include, but are not limited to:

  • Naturally explainable AI methods
  • Post-hoc explanation methods for deep neural networks and transformers
  • Technical issues in AI ethics, including automated audits, detection of bias, and the ability to control AI systems to prevent harm
  • Methods to improve AI explainability in general, including algorithms and evaluation methods
  • User interfaces and visualization for achieving more explainable and ethical AI
  • Real-world applications and case studies

Important dates

These dates are still subject to change:

  • May 2nd, 2024: Call for papers
  • July 30th, 2024 (revised from July 14th, 2024): Submission deadline
  • September 20th, 2024: Notification to the authors
  • September 27th, 2024: Camera ready versions

Program Committee

Carlos Toxtli, Clemson University, US
Celine Hudelot, CentraleSupelec, France
Damien Garreau, Université Côte d'Azur, France
David Auber, Univ. Bordeaux, France
Dragutin Petkovic, San Francisco State University, US
Georges Quénot, Laboratoire d'Informatique de Grenoble, CNRS, France
Hervé Le Borgne, CEA LIST, France
Jenny Benois-Pineau, LABRI, France
Luis Gustavo Nonato, USP, Brazil
Mark Keane, UCD Dublin / Insight SFI Centre for Data Analytics, Ireland
Romain Xu Darme, CEA LIST, France
Stefanos Kollias, National Technical University of Athens / Image, Video and Multimedia Systems Lab, Greece
Thomas Baltzer Moeslund, Aalborg University / Visual Analysis and Perception Laboratory, Denmark
Vicent Botti, Universitat Politècnica de València, Spain
Victoria Bourgeais, University of Bordeaux, France
Wassila Ouerdane, CentraleSupelec, France
Weiru Liu, University of Bristol, UK

Program

To be announced

Paper submission

Papers can be submitted via the SciencesConf online submission system here.

Paper guidelines

The proceedings of the XAIE 2024 workshop will be published in the Springer Lecture Notes in Computer Science (LNCS) series. Papers will be selected through a single-blind review process (reviewers are anonymous). Submissions must be formatted in accordance with Springer’s Computer Science Proceedings guidelines and must be 12–15 pages long.

Articles should be prepared according to the available LNCS guidelines and templates.

All papers must be submitted in electronic format as PDF files before the submission deadline.

Univ. Bordeaux · LaBRI · CNRS · San Francisco State University · IAPR
