News
- We are excited to announce 46 accepted papers 🎉.
- We have published the schedule 🗓️.
- The workshop takes place on Friday, July 28, 2023, at the Hawaii Convention Center in Honolulu, in the Liliʻu Theater (Room 310) 🎭.
- The virtual poster session will take place on Friday, August 4, 2023, from 3:00pm to 5:00pm UTC (CEST: 5pm-7pm, PT: 8am-10am, ET: 11am-1pm) 💻 (Gather.Town link, PW: 46papers).
Workshop Summary
Gradients and derivatives are integral to machine learning, as they enable gradient-based optimization. In many real applications, however, models rest on algorithmic components that implement discrete decisions or rely on discrete intermediate representations and structures. These discrete steps are intrinsically non-differentiable and accordingly break the flow of gradients. Using gradient-based approaches to learn the parameters of such models therefore requires making these non-differentiable components differentiable, which can be done, with care, via smoothing or relaxations that yield differentiable proxies for them. With the advent of modular deep learning frameworks, these ideas have become more popular than ever in many fields of machine learning, generating in a short time span a multitude of “differentiable everything” approaches, impacting topics as varied as rendering, sorting and ranking, convex optimizers, shortest-paths, dynamic programming, physics simulations, NN architecture search, top-k, graph algorithms, weakly- and self-supervised learning, and many more.
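To make the idea concrete, here is a minimal sketch (our own illustration in PyTorch, not taken from the workshop materials) of the most common pattern: replacing a hard, piecewise-constant argmax with a temperature-controlled softmax so that gradients can reach the underlying scores.

```python
import torch

def soft_argmax(scores: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
    # Differentiable proxy for the one-hot argmax: smooth for tau > 0,
    # approaching the hard one-hot vector as tau -> 0.
    return torch.softmax(scores / tau, dim=-1)

scores = torch.tensor([1.0, 3.0, 2.0], requires_grad=True)
values = torch.tensor([10.0, 20.0, 30.0])

# A downstream loss that "selects" a value via the relaxed argmax.
loss = (soft_argmax(scores) * values).sum()
loss.backward()
print(scores.grad)  # non-zero: the relaxation restores gradient flow
```

The temperature tau trades off faithfulness to the discrete operation against the smoothness of the surrogate, a choice that recurs across most of the relaxations discussed at the workshop.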
This workshop will provide a forum for anything differentiable, bringing together academic and industry researchers to highlight challenges and developments, provide unifying ideas, discuss practical implementation choices, and explore future directions.
Speakers
Organizers
Call for Papers
This workshop encourages submissions on novel research results, benchmarks, frameworks, and work-in-progress research on differentiating through conventionally non-differentiable operations. Submissions should be 4-page papers (excluding references) submitted via OpenReview. The review process will not be open.
Scope
The technical topics of interest at this workshop include (but are not limited to):
- Continuous relaxations of discrete operations and algorithms (e.g., argmax, sorting, ranking, rendering, shortest-path, optimizers, if-else constructs, indexing, top-k, logics, etc.)
- Stochastic relaxations and gradient estimation methods (e.g., stochastic smoothing; see the sketch after this list)
- Weakly- and self-supervised learning with differentiable algorithms, e.g., ranking supervision
- Optimization with diff. algorithms, e.g., regression of scene parameters via diff. rendering
- Systematic techniques for making discrete structures differentiable, e.g., smoothing
- Differentiable simulators such as differentiable fluid dynamics, differentiable particle simulators, differentiable optics, differentiable protein-folding, differentiable cloth simulations, etc.
- Differentiable architecture search, e.g., convolutions with diff. and learnable kernel sizes
- Applications of differentiable relaxations, e.g., in learning-to-rank and computer vision
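As a sketch of the stochastic-relaxation bullet above, the following toy example (our own; the names `score_function_grad` and `reward` are illustrative, not from the call) uses the score-function (REINFORCE) estimator: only log p(y) is differentiated, so the downstream function may remain discrete and non-differentiable.

```python
import torch

def score_function_grad(theta: torch.Tensor, f, n_samples: int = 10_000):
    # Monte-Carlo estimate of d/d(theta) E_{y ~ Cat(softmax(theta))}[f(y)].
    # f may be arbitrary and non-differentiable; gradients flow only
    # through log p(y), which is smooth in theta.
    dist = torch.distributions.Categorical(logits=theta)
    y = dist.sample((n_samples,))            # discrete samples, no gradient
    surrogate = (f(y).detach() * dist.log_prob(y)).mean()
    return torch.autograd.grad(surrogate, theta)[0]

theta = torch.zeros(3, requires_grad=True)
reward = lambda y: (y == 2).float()          # toy reward: prefer class 2
print(score_function_grad(theta, reward))    # positive entry at index 2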
The workshop does not cover “differentiable programming”, i.e., the programming paradigm of automatic differentiation and its technical implementations. Instead, the workshop covers cases where vanilla automatic differentiation fails or does not yield meaningful gradients.
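A tiny assumed example of such a failure: autodiff runs through a hard rounding step without error, but the step is piecewise constant, so the resulting gradient is zero almost everywhere and carries no learning signal.

```python
import torch

x = torch.tensor(0.3, requires_grad=True)
y = torch.round(x)  # discrete decision: piecewise constant in x
y.backward()
print(x.grad)       # tensor(0.) -- autodiff "works", but the gradient is useless
```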
Contact
Contact the organizers: mail@differentiable.xyz