ELLIS @ IST Austria

When Are Solutions Connected in Deep Networks?

NeurIPS 2021

  • Nguyen
  • Bréchet
  • Mondelli
PCA Initialization for Approximate Message Passing in Rotationally Invariant Models

NeurIPS 2021

  • Mondelli
  • Venkataramanan
Tight Bounds on the Smallest Eigenvalue of the Neural Tangent Kernel for Deep ReLU Networks

ICML 2021

  • Nguyen
  • Mondelli
  • Montúfar
One-sided Frank-Wolfe algorithms for saddle problems

ICML 2021

  • Kolmogorov
  • Pock
Communication-Efficient Distributed Optimization with Quantized Preconditioners

ICML 2021

  • Alimisis
  • Davies
  • Alistarh
Parallelism versus Latency in Simplified Successive-Cancellation Decoding of Polar Codes

ISIT 2021

  • Hashemi
  • Mondelli
  • Fazeli
  • Vardy
  • Cioffi
  • Goldsmith
Sparse Multi-Decoder Recursive Projection Aggregation for Reed-Muller Codes

ISIT 2021

  • Fathollahi
  • Farsad
  • Hashemi
  • Mondelli
New Bounds For Distributed Mean Estimation and Variance Reduction

ICLR 2021

  • Davies
  • Gurunanthan
  • Moshrefi
  • Ashkboos
  • Alistarh
The Inductive Bias of ReLU Networks on Orthogonally Separable Data

ICLR 2021

  • Phuong
  • Lampert
Byzantine-Resilient Non-Convex Stochastic Gradient Descent

ICLR 2021

  • Allen-Zhu
  • Ebrahimian
  • Li
  • Alistarh
Approximate Message Passing with Spectral Initialization for Generalized Linear Models

AISTATS 2021

  • Mondelli
  • Venkataramanan
Genomic architecture and prediction of censored time-to-event phenotypes with a Bayesian genome-wide analysis

Nature Communications

  • Ojavee
  • Robinson
Elastic Consistency: A Practical Consistency Model for Distributed Stochastic Gradient Descent

AAAI 2021

  • Nadiradze
  • Markov
  • Chatterjee
  • Kungurtsev
  • Alistarh
Sparsity in Deep Learning: Pruning and growth for efficient inference and training in neural networks

JMLR

  • Hoefler
  • Alistarh
  • Ben-Nun
  • Dryden
  • Peste
Sublinear Latency for Simplified Successive Cancellation Decoding of Polar Codes

IEEE Transactions on Wireless Communications

  • Mondelli
  • Hashemi
  • Cioffi
  • Goldsmith
Optimal Combination of Linear and Spectral Estimators for Generalized Linear Models

Foundations of Computational Mathematics

  • Mondelli
  • Thrampoulidis
  • Venkataramanan
Global Convergence of Deep Networks with One Wide Layer Followed by Pyramidal Topology

NeurIPS 2020

  • Nguyen
  • Mondelli
WoodFisher: Efficient Second-Order Approximation for Neural Network Compression

NeurIPS 2020

  • Singh
  • Alistarh
Relaxed Scheduling for Scalable Belief Propagation

NeurIPS 2020

  • Aksenov
  • Alistarh
Unsupervised object-centric video generation and decomposition in 3D

NeurIPS 2020

  • Henderson
  • Lampert
Computational Design of Cold Bent Glass Façades

ACM Transactions on Graphics 39(6) (SIGGRAPH Asia 2020)

  • Gavriil
  • Guseinov
  • Pérez
  • Pellis
  • Henderson
  • Rist
  • Pottmann
  • Bickel

Cold bent glass is a promising and cost-efficient method for realizing doubly curved glass façades. Such façades are produced by attaching planar glass sheets to curved frames, and the stress arising in the panels must be kept within safe limits. However, navigating the design space of cold bent glass panels is very challenging because of the fragility of the material, which impedes form finding for practically feasible and aesthetically pleasing cold bent glass façades. We propose an interactive, data-driven approach for designing cold bent glass façades that can be seamlessly integrated into a typical architectural design pipeline. Our method allows non-expert users to interactively edit a parametric surface while providing real-time feedback on the deformed shape and maximal stress of the cold bent glass panels. The designs are automatically refined to minimize several fairness criteria, while maximal stresses are kept within glass limits. We achieve interactive frame rates by using a differentiable Mixture Density Network trained on more than a million simulations. Given a curved boundary, our regression model is capable of handling multistable configurations and accurately predicting the equilibrium shape of the panel and its corresponding maximal stress. We show that the predictions are highly accurate, and we validate our results with a physical realization of a cold bent glass surface.

@article{Gavriil2020,
author = {Gavriil, Konstantinos and Guseinov, Ruslan and P{\'e}rez, Jes{\'u}s and Pellis, Davide and Henderson, Paul and Rist, Florian and Pottmann, Helmut and Bickel, Bernd},
title = {Computational Design of Cold Bent Glass Fa{\c c}ades},
journal = {ACM Transactions on Graphics (SIGGRAPH Asia 2020)},
year = {2020},
month = {12},
volume = {39},
number = {6},
articleno = {208},
numpages = {16}
}
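As a rough illustration of the learning component described in the abstract, the sketch below shows a minimal Mixture Density Network regressor in PyTorch. It is not the authors' implementation: the boundary encoding, the network width, and the dimensions BOUNDARY_DIM, TARGET_DIM, and N_COMPONENTS are illustrative assumptions. A mixture output, rather than a single Gaussian, is what lets such a model represent the multistable equilibria mentioned in the abstract.

# Minimal sketch (assumptions noted above): an MDN that maps a boundary
# encoding to a Gaussian mixture over a target such as maximal stress.
import torch
import torch.nn as nn

BOUNDARY_DIM = 24   # assumed size of the curved-boundary encoding
TARGET_DIM = 1      # e.g. predicted maximal stress
N_COMPONENTS = 4    # mixture components, one per candidate equilibrium

class MDN(nn.Module):
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(BOUNDARY_DIM, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
        )
        # Separate heads for mixture weights, means, and log-scales.
        self.pi = nn.Linear(128, N_COMPONENTS)
        self.mu = nn.Linear(128, N_COMPONENTS * TARGET_DIM)
        self.log_sigma = nn.Linear(128, N_COMPONENTS * TARGET_DIM)

    def forward(self, x):
        h = self.trunk(x)
        log_pi = torch.log_softmax(self.pi(h), dim=-1)
        mu = self.mu(h).view(-1, N_COMPONENTS, TARGET_DIM)
        sigma = self.log_sigma(h).view(-1, N_COMPONENTS, TARGET_DIM).exp()
        return log_pi, mu, sigma

def mdn_nll(log_pi, mu, sigma, y):
    # Negative log-likelihood of y under the predicted mixture;
    # the standard training loss for an MDN.
    comp = torch.distributions.Normal(mu, sigma)
    log_prob = comp.log_prob(y.unsqueeze(1)).sum(dim=-1)   # (batch, K)
    return -torch.logsumexp(log_pi + log_prob, dim=-1).mean()

# Usage on random stand-in data:
model = MDN()
x = torch.randn(8, BOUNDARY_DIM)            # a batch of boundary encodings
loss = mdn_nll(*model(x), torch.randn(8, TARGET_DIM))
loss.backward()

Being fully differentiable, such a surrogate can be queried (and back-propagated through) at interactive rates, which is what makes the real-time feedback and automatic refinement described in the abstract feasible.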
Binary Linear Codes with Optimal Scaling: Polar Codes with Large Kernels

IEEE Transactions on Information Theory

  • Fazeli
  • Hassani
  • Mondelli
  • Vardy
Does SGD Implicitly Optimize for Smoothness?

GCPR 2020

  • Volhejn
  • Lampert
Landscape Connectivity and Dropout Stability of SGD Solutions for Over-parameterized Neural Networks

ICML 2020

  • Shevchenko
  • Mondelli
On the Sample Complexity of Adversarial Multi-Source PAC Learning

ICML 2020

  • Konstantinov
  • Frantar
  • Alistarh
  • Lampert
Probabilistic inference of the genetic architecture of functional enrichment of complex traits

medRxiv

  • Patxot
  • Robinson
Bayesian reassessment of the epigenetic architecture of complex traits

Nature Communications

  • Trejo Banos
  • Robinson
Functional vs. Parametric Equivalence of ReLU Networks

ICLR 2020

  • Phuong
  • Lampert
Localizing Grouped Instances for Efficient Detection in Low-Resource Scenarios

WACV 2020

  • Royer
  • Lampert
A Flexible Selection Scheme for Minimum-Effort Transfer Learning

WACV 2020

  • Royer
  • Lampert
Analysis of a Two-Layer Neural Network via Displacement Convexity

Annals of Statistics

  • Javanmard
  • Mondelli
  • Montanari
Rate-Flexible Fast Polar Decoders

IEEE Transactions on Signal Processing

  • Hashemi
  • Condo
  • Mondelli
  • Gross
Distillation-Based Training for Multi-Exit Architectures

ICCV 2019

  • Phuong
  • Lampert
KS(conf): A Light-Weight Test if a Multiclass Classifier Operates Outside of Its Specifications

IJCV

  • Sun
  • Lampert
Function Norms for Neural Networks

Workshop on Statistical Deep Learning in Computer Vision at ICCV 2019

  • Triki
  • Berman
  • Kolmogorov
  • Blaschko
Testing the complexity of a valued CSP language

ICALP 2019

  • Kolmogorov
Towards Understanding Knowledge Distillation

ICML 2019

  • Phuong
  • Lampert
Robust Learning from Untrusted Sources

ICML 2019

  • Konstantinov
  • Lampert
MAP inference via Block-Coordinate Frank-Wolfe Algorithm

CVPR 2019

  • Swoboda
  • Kolmogorov
On the Connection Between Learning Two-Layers Neural Networks and Tensor Decomposition

AISTATS 2019

  • Mondelli
  • Montanari