News
- The virtual poster session will take place on Tuesday, August 20, 2024, 14:00–16:00 UTC. 💻 Gather.Town link, PW: 43papers!
- We have published the schedule 🗓️.
- We are excited to announce 43 accepted papers 🎉.
- The 2024 edition of the workshop has been accepted 🎊 to take place at ICML 2024 in Vienna 🎻.
Workshop Summary
Gradients and derivatives are integral to machine learning, as they enable gradient-based optimization. In many real applications, however, models rest on algorithmic components that implement discrete decisions or rely on discrete intermediate representations and structures. These discrete steps are intrinsically non-differentiable and accordingly break the flow of gradients. Learning the parameters of such models with gradient-based approaches therefore requires making these non-differentiable components differentiable. This can be done, with care, notably by using smoothing or relaxations to construct differentiable proxies for these components. With the advent of modular deep learning frameworks, these ideas have become more popular than ever in many fields of machine learning, generating within a short time span a multitude of “differentiable everything” approaches, impacting topics as varied as rendering, sorting and ranking, convex optimizers, shortest paths, dynamic programming, physics simulations, NN architecture search, top-k, graph algorithms, weakly- and self-supervised learning, and many more.
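To make the idea concrete, here is a minimal, illustrative sketch (PyTorch assumed; this code is ours, not from any workshop submission) of one such relaxation: a temperature-controlled softmax as a differentiable proxy for the hard argmax.

```python
# Illustrative sketch only (PyTorch assumed): softmax with a temperature
# as a differentiable proxy for the non-differentiable argmax.
import torch

def soft_argmax(scores: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """Relaxed argmax: the expected index under a softmax distribution.

    As tau -> 0 this approaches the hard (one-hot) argmax; larger tau
    yields a smoother relaxation with better-behaved gradients.
    """
    weights = torch.softmax(scores / tau, dim=-1)              # differentiable
    indices = torch.arange(scores.shape[-1], dtype=scores.dtype)
    return (weights * indices).sum(dim=-1)                     # soft index

scores = torch.tensor([0.2, 1.5, 0.3], requires_grad=True)
soft_argmax(scores).backward()
print(scores.grad)  # non-zero gradients flow, unlike through hard argmax
```

The same pattern, replacing a hard selection with a smooth expectation, underlies many of the relaxations in the scope list below.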
This workshop will provide a forum for anything differentiable, bringing together academic and industry researchers to highlight challenges and developments, provide unifying ideas, discuss practical implementation choices and explore future directions.
Keynote Speakers
Organizers
Call for Papers
This workshop encourages submissions of novel research results, benchmarks, frameworks, and work-in-progress research on differentiating through conventionally non-differentiable operations. Submissions are 4-page papers (excluding references), submitted via OpenReview. The review process will not be public.
Scope
The technical topics of interest at this workshop include (but are not limited to):
- Continuous relaxations of discrete operations and algorithms (e.g., argmax, sorting, ranking, rendering, shortest-path, optimizers, if-else constructs, indexing, top-k, logics, etc.)
- Stochastic relaxations and gradient estimation methods (e.g., stochastic smoothing; a minimal sketch follows this list)
- Weakly- and self-supervised learning with differentiable algorithms, e.g., ranking supervision
- Optimization with differentiable algorithms, e.g., regression of scene parameters via differentiable rendering
- Systematic techniques for making discrete structures differentiable, e.g., smoothing
- Differentiable simulators, e.g., differentiable fluid dynamics, particle simulators, optics, protein folding, and cloth simulations
- Differentiable architecture search, e.g., convolutions with differentiable and learnable kernel sizes
- Applications of differentiable relaxations, e.g., in learning-to-rank and computer vision
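As a concrete illustration of the stochastic-smoothing item above, here is a minimal, hypothetical sketch (PyTorch assumed; the function names are ours): a zeroth-order Monte Carlo estimate of the gradient of a Gaussian-smoothed, piecewise-constant objective, which needs no gradient of the objective itself.

```python
# Illustrative sketch only (PyTorch assumed): stochastic smoothing of a
# piecewise-constant objective via Gaussian perturbations.
import torch

def hard_rank_of_first(theta: torch.Tensor) -> torch.Tensor:
    # Count of entries exceeding theta[0]: integer-valued, so its exact
    # gradient is zero almost everywhere.
    return (theta > theta[0]).sum().float()

def smoothed_grad(f, theta: torch.Tensor, sigma: float = 0.1,
                  n_samples: int = 10_000) -> torch.Tensor:
    """Monte Carlo gradient of the smoothed objective
    F(theta) = E_Z[f(theta + sigma * Z)] with Z ~ N(0, I), using the
    Gaussian smoothing identity
        grad F(theta) = E_Z[f(theta + sigma * Z) * Z] / sigma,
    which requires only evaluations of f, never its gradient."""
    z = torch.randn(n_samples, *theta.shape)
    values = torch.stack([f(theta + sigma * zi) for zi in z])
    return (values[:, None] * z).mean(dim=0) / sigma

theta = torch.tensor([0.5, 0.2, 0.9])
print(smoothed_grad(hard_rank_of_first, theta))  # usable descent direction
```

In practice such estimators are often paired with a baseline (e.g., subtracting f(theta) inside the expectation) to reduce variance.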
The workshop does not cover “differentiable programming”, i.e., the programming paradigm of automatic differentiation and its technical implementations. Instead, it focuses on cases where vanilla automatic differentiation fails or does not yield meaningful gradients.
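A two-line example (PyTorch assumed; our illustration, not the workshop's) of the failure mode meant here: autodiff runs without error through a discrete rounding step, but the exact gradient it returns is zero almost everywhere, so it carries no learning signal.

```python
import torch

x = torch.tensor(0.3, requires_grad=True)
y = torch.round(x)  # discrete step: autodiff runs fine ...
y.backward()
print(x.grad)       # tensor(0.) -- a correct but useless gradient
```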
Contact
Contact the organizers: mail@differentiable.xyz