# Workshop Summary

Gradients and derivatives are integral to machine learning, as they enable gradient-based optimization. In many real applications, however, models rest on algorithmic components that implement discrete decisions or rely on discrete intermediate representations and structures. These discrete steps are intrinsically non-differentiable and therefore break the flow of gradients. Using gradient-based approaches to learn the parameters of such models requires making these non-differentiable components differentiable. This can be done with care, notably by using smoothing or relaxations to obtain differentiable proxies for these components. With the advent of modular deep learning frameworks, these ideas have become more popular than ever across machine learning, generating in a short time span a multitude of “differentiable everything” works, impacting topics as varied as rendering, sorting and ranking, convex optimizers, shortest paths, dynamic programming, physics simulations, neural architecture search, top-k, graph algorithms, weakly- and self-supervised learning, and many more.
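As a toy illustration of the smoothing idea (a minimal sketch, not taken from any particular workshop paper): the argmax operation produces a one-hot vector and carries no useful gradient, but the softmax with a temperature parameter `tau` is a standard differentiable surrogate that recovers argmax as `tau` approaches zero.

```python
import numpy as np

def softmax(x, tau=1.0):
    """Softmax with temperature tau: a smooth surrogate for the argmax one-hot vector.

    As tau -> 0 the output approaches the hard argmax indicator;
    larger tau yields a smoother (more uniform) distribution.
    """
    z = (x - np.max(x)) / tau  # shift by the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

scores = np.array([2.0, 1.0, 0.1])
print(softmax(scores, tau=1.0))  # smooth distribution over all entries
print(softmax(scores, tau=0.1))  # nearly one-hot on the largest score
```

Because every entry of the softmax output depends smoothly on every input score, gradients can flow back through this relaxation where they could not through a hard argmax.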

This workshop will provide a forum for anything differentiable, bringing together academic and industry researchers to highlight challenges and developments, provide unifying ideas, discuss practical implementation choices and explore future directions.

# Speakers

# Organizers

# Call for Papers

This workshop encourages submissions of novel research results, benchmarks, frameworks, and work-in-progress research
on differentiating through conventionally non-differentiable operations.
Submissions are **4-page papers** (excluding references) submitted to OpenReview.
The review process will not be public.

## Scope

The technical topics of interest at this workshop include (but are not limited to):

- Continuous relaxations of discrete operations and algorithms (e.g., argmax, sorting, ranking, rendering, shortest-path, optimizers, if-else constructs, indexing, top-k, logics, etc.)
- Stochastic relaxations and gradient estimation methods (e.g., stochastic smoothing)
- Weakly- and self-supervised learning with differentiable algorithms, e.g., ranking supervision
- Optimization with differentiable algorithms, e.g., regression of scene parameters via differentiable rendering
- Systematic techniques for making discrete structures differentiable, e.g., smoothing
- Differentiable simulators such as differentiable fluid dynamics, differentiable particle simulators, differentiable optics, differentiable protein-folding, differentiable cloth simulations, etc.
- Differentiable architecture search, e.g., convolutions with differentiable and learnable kernel sizes
- Applications of differentiable relaxations, e.g., in learning-to-rank and computer vision
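To make the stochastic-smoothing topic above concrete, here is a minimal sketch (an illustrative example, not a reference implementation from the workshop): the Heaviside step function has zero derivative almost everywhere, but the gradient of its Gaussian-smoothed expectation can be estimated by Monte Carlo without ever differentiating the function itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def heaviside(x):
    """Non-differentiable step function: its pointwise derivative is zero a.e."""
    return (x > 0).astype(float)

def smoothed_grad(f, x, sigma=0.1, n=10000):
    """Monte-Carlo estimate of d/dx E[f(x + sigma * eps)], eps ~ N(0, 1).

    Uses the Gaussian smoothing identity
        d/dx E[f(x + sigma * eps)] = E[f(x + sigma * eps) * eps / sigma],
    which requires only evaluations of f, never its gradient.
    """
    eps = rng.standard_normal(n)
    return np.mean(f(x + sigma * eps) * eps / sigma)

# Near the step at 0 the smoothed gradient is large and positive
# (analytically 1 / (sigma * sqrt(2 * pi)) at x = 0), even though the
# pointwise derivative of heaviside is zero there.
g = smoothed_grad(heaviside, 0.0, sigma=0.1)
print(g)
```

The estimator trades bias for usable gradients: a smaller `sigma` tracks the original function more closely but increases the variance of the estimate.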

The workshop does *not* cover “differentiable programming”, i.e., the programming paradigm of automatic differentiation and its technical implementations.
Instead, the workshop covers cases where vanilla automatic differentiation fails or does not yield meaningful gradients.
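A small illustration of this failure mode (a hypothetical sketch using finite differences in place of an autodiff framework): a hard top-k selection is piecewise constant, so its Jacobian is zero almost everywhere, and differentiating through it mechanically yields no learning signal.

```python
import numpy as np

def hard_topk_mask(x, k):
    """Hard top-k indicator: 1 for the k largest entries, 0 otherwise."""
    mask = np.zeros_like(x)
    mask[np.argsort(x)[-k:]] = 1.0
    return mask

def finite_diff_jacobian(f, x, h=1e-6):
    """Central-difference Jacobian; for piecewise-constant f it is zero a.e."""
    y = f(x)
    J = np.zeros((y.size, x.size))
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        J[:, i] = (f(x + e) - f(x - e)) / (2 * h)
    return J

x = np.array([3.0, 1.0, 2.0, 0.5])
J = finite_diff_jacobian(lambda v: hard_topk_mask(v, k=2), x)
print(J)  # all zeros: small input changes never alter the hard selection
```

An automatic differentiation system gives the same all-zero result wherever it is defined, which is why such operations need relaxations rather than plain autodiff.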

## Submission

*The OpenReview submission is now open:* https://openreview.net/group?id=ICML.cc/2023/Workshop/Differentiable_Almost_Everything

The submission deadline is: ~~May 24 ‘23 23:59 PT~~ Updated deadline: **May 31 ‘23 23:59 PT**

Submissions are 4-page papers (excluding references) using the ICML 2023 LaTeX style files. Supplementary materials / appendices after the references are allowed and do not count towards the page limit.

Please submit an anonymized version of your paper that contains no identifying information about the authors or their affiliations; the reviewing process will be double-blind. There are no formal proceedings for this workshop, and accepted papers will be made public on OpenReview. While we welcome short versions of published papers, preference will be given to new, not yet published work.

### Contact

Contact the organizers: mail@differentiable.xyz