
News

Workshop Summary

Gradients and derivatives are integral to machine learning, as they enable gradient-based optimization. In many real applications, however, models rest on algorithmic components that implement discrete decisions, or rely on discrete intermediate representations and structures. These discrete steps are intrinsically non-differentiable and accordingly break the flow of gradients. Learning the parameters of such models with gradient-based approaches therefore requires making these non-differentiable components differentiable. This can be done with care, notably by using smoothing or relaxations to construct differentiable proxies for these components. With the advent of modular deep learning frameworks, these ideas have become more popular than ever in many fields of machine learning, generating in a short time span a multitude of “differentiable everything” approaches, impacting topics as varied as rendering, sorting and ranking, convex optimizers, shortest paths, dynamic programming, physics simulations, NN architecture search, top-k, graph algorithms, weakly- and self-supervised learning, and many more.
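To give a flavour of the idea, the sketch below contrasts a hard thresholding decision, whose gradient is zero almost everywhere, with a temperature-controlled sigmoid relaxation that yields informative gradients. This is a minimal illustration assuming PyTorch is available; the names hard_step and soft_step are hypothetical and not tied to any particular method or submission.

```python
import torch

def hard_step(x):
    # Discrete decision: piecewise constant, so its gradient is zero
    # almost everywhere and backpropagation yields no learning signal.
    return (x > 0).float()

def soft_step(x, temperature=0.1):
    # Smoothed surrogate: recovers hard_step as temperature -> 0,
    # while providing informative gradients for any temperature > 0.
    return torch.sigmoid(x / temperature)

x = torch.tensor([-0.5, 0.2, 1.3], requires_grad=True)
print(hard_step(x))             # tensor([0., 1., 1.]) -- no useful gradient
soft_step(x).sum().backward()
print(x.grad)                   # non-zero gradients flow through the relaxation
```

As the temperature is annealed towards zero, the relaxation approaches the original discrete decision while its gradients remain well-defined, which is the basic trade-off behind many of the smoothing approaches listed above.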

This workshop will provide a forum for anything differentiable, bringing together academic and industry researchers to highlight challenges and developments, provide unifying ideas, discuss practical implementation choices and explore future directions.


Organizers


Call for Papers

This workshop encourages submissions on novel research results, benchmarks, frameworks, and work-in-progress research on differentiating through conventionally non-differentiable operations. Submissions should be 4-page papers (excluding references) submitted via OpenReview. The review process will not be open.

Scope

The technical topics of interest at this workshop include (but are not limited to):

The workshop does not cover “differentiable programming”, i.e., the programming paradigm of automatic differentiation and its technical implementations. Instead, the workshop covers cases where vanilla automatic differentiation fails or does not yield meaningful gradients.
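As an illustration of a case where vanilla automatic differentiation yields no meaningful gradients, one commonly used workaround is the straight-through estimator: the hard operation is kept in the forward pass while the backward pass substitutes an identity gradient. The sketch below is a minimal example assuming PyTorch; straight_through_round is a hypothetical name used only for illustration.

```python
import torch

def straight_through_round(x):
    # Forward pass: hard rounding (non-differentiable).
    # Backward pass: gradients flow as if rounding were the identity,
    # because the detached term contributes no gradient.
    return x + (torch.round(x) - x).detach()

x = torch.tensor([0.2, 0.7, 1.4], requires_grad=True)
straight_through_round(x).sum().backward()
print(x.grad)  # all ones: the surrogate identity gradient
```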

Submission

The submission deadline is: May 28 ‘24 23:59 PT

Submissions should be 4-page papers (plus references) using the workshop’s LaTeX style files. Supplementary materials / appendices after the references are allowed and do not count towards the page limit.

Please submit an anonymized version of your paper that contains no identifying information about author identities or affiliations. Accepted papers will be made public on OpenReview. The reviewing process will be double-blind. While we welcome short versions of published papers, preference will be given to new, not yet published work. Short versions and/or extensions of papers accepted at ICML are allowed.


Contact

Contact the organizers: mail@differentiable.xyz