Dear all,
This might interest some of you, in particular regarding the robustness
aspects advocated first.
______________________________
---------- Forwarded message ----------
From: Adrian Weller <aw665@cam.ac.uk>
Date: 2017-07-10 0:48 GMT+02:00
Subject: [UAI] CfP: Reliable Machine Learning in the Wild at ICML
2017, deadline 17 July
To:
Cc: Jacob Steinhardt <jacob.steinhardt@gmail.com>, Dylan
Hadfield-Menell <dhm@eecs.berkeley.edu>, Smitha Milli
<smilli@berkeley.edu>
Final call for papers for the ICML 2017 Workshop on Reliable Machine
Learning in the Wild; please forward to others who may be interested.
Workshop website: https://sites.google.com/site/wildml2017icml/
When can we trust that a system that has performed well in the past
will continue to do so in the future? Designing systems that are
reliable in the wild is essential for high stakes applications such as
self-driving cars and automated surgical assistants. This workshop
aims to bring together researchers in diverse areas such as
reinforcement learning, human-robot interaction, game theory,
cognitive science, and security to further the field of reliability in
machine learning. We will focus on three aspects: robustness (to
adversaries, distributional shift, model misspecification, corrupted
data); awareness (of when a change has occurred, when the model might
be miscalibrated, etc.); and adaptation (to new situations or
objectives). We aim to consider each of these in the context of the
complex human factors that impact the successful application or
meaningful monitoring of any artificial intelligence technology.
Together, these will aid us in designing and deploying reliable
machine learning systems.
We are seeking submissions that deal with the challenges of reliably
applying machine learning techniques in the real world. Some possible
questions touching on each of these categories are given below, though
we also welcome submissions that do not directly fit into these
categories.
Robustness: How can we make a system robust to novel or potentially
adversarial inputs? What are ways of handling model misspecification
or corrupted training data? What can be done if the training data is
potentially a function of system behavior or of other agents in the
environment (e.g. when collecting data on users that respond to
changes in the system and might also behave strategically)?
Awareness: How do we make a system aware of its environment and of its
own limitations, so that it can recognize and signal when it is no
longer able to make reliable predictions or decisions? Can it
successfully identify "strange" inputs or situations and take
appropriately conservative actions? How can it detect when changes in
the environment have occurred that require re-training? How can it
detect that its model might be misspecified or poorly calibrated?
Adaptation: How can machine learning systems detect and adapt to
changes in their environment, especially large changes (e.g. low
overlap between train and test distributions, poor initial model
assumptions, or shifts in the underlying prediction function)? How
should an autonomous agent act when confronting radically new
contexts?
Monitoring: How can we monitor large-scale systems in order to judge
if they are performing well? If things go wrong, what tools can help?
Value Alignment: For systems with complex desiderata, how can we learn
a value function that captures and balances all relevant
considerations? How should a system act given uncertainty about its
value function? Can we make sure that a system reflects the values of
the humans who use it?
Reward Hacking: How can we ensure that the objective of a system is
immune to reward hacking, where the system attains high reward in a way
that was unintended by the system designer? For an example, see
https://blog.openai.com/faulty-reward-functions/
Human Factors: Actual humans will be interacting with and adapting to
these systems when they are deployed. How do properties of humans
affect the
guarantees of performance that the system has? What if the humans are
suboptimal or even adversarial?
How to submit
Papers submitted to the workshop should be up to four pages long
excluding references and in ICML 2017 format. They should be submitted
via EasyChair at https://easychair.org/conferences/?conf=rmlw17. As
the review process is not blind, authors can reveal their identity in
their submissions. Accepted submissions will be presented as posters
or talks.
We will accept submissions at two deadlines: an earlier one with an
earlier acceptance notification, and a later one. Our goal is to allow
late submissions to the extent that we can, while still giving some
people early confirmation of paper acceptance, which they may need in
order to arrange travel in time.
Important Dates:
Submission deadline 1: 16 June 2017
Acceptance notification 1: 1 July 2017
Submission deadline 2: 17 July 2017
Acceptance notification 2: 31 July 2017
Final camera-ready versions of accepted papers: 5 August 2017
Workshop: 11 August 2017
Thank you,
Dylan, Jacob, Smitha and Adrian
----------------------------------------------
Adrian Weller
_________________
uai mailing list
uai@ENGR.ORST.EDU
https://secure.engr.oregonstate.edu/mailman/listinfo/uai
--
====================================
Sebastien Destercke, Ph. D.
CNRS researcher in computer science.
Université de Technologie de Compiègne
U.M.R. C.N.R.S. 7253 Heudiasyc
Avenue de Landshut
F-60205 Compiegne Cedex
FRANCE
Tel: +33 (0)3 44 23 79 85
Fax: +33 (0)3 44 23 44 77
====================================
_______________________________________________
SIPTA mailing list
SIPTA@idsia.ch
http://mailman2.ti-edu.ch/mailman/listinfo/sipta