21 Dec 2023, Clermont-Ferrand (France)

Invited Speakers

Christophette BLANCHET-SCALLIET (École Centrale de Lyon) Talk_Slides

  • Title: Gaussian Process Regression on Nested Spaces
  • Abstract: As industrial computer codes involve a large number of input variables, directly building one big metamodel depending on the whole set of inputs may be a very challenging problem. Industrialists choose instead to proceed sequentially. They build metamodels depending on nested sets of variables (the variables that are set aside are fixed to nominal values). However, at each step the previous information is lost, because a new Design of Experiments (DoE) is generated to learn the new metamodel. In this work, an alternative approach is introduced, based on all the DoEs rather than just the last one. This metamodel uses Gaussian process regression and is called "sequential Gaussian process regression". At each step n, the output is assumed to be the realization of the sum of two independent Gaussian processes, Y_{n-1} + Z_n. The first models the output at step n-1; it is defined on the input space of step n-1, which is a subspace of that of step n. The second Gaussian process is a correction term defined on the input space of step n. It represents the additional information provided by the newly released variables and has the particularity of being null on the subspace where Y_{n-1} is defined. First, some candidate Gaussian processes for (Z_n)_{n≥2} are suggested, which have the property of being null on an infinite continuous set of points (a toy construction is sketched below). Then, an EM algorithm is implemented to estimate the parameters of the processes.
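
As a loose illustration of this construction (our sketch with a hand-picked kernel, not the speaker's code), a correction process Z_n can be forced to vanish on the step-(n-1) subspace {t = t0} by multiplying a kernel in the old inputs x by the factor (t - t0)(t' - t0), where t0 is the nominal value of the newly released variable t:

```python
import numpy as np

def rbf(a, b, ls=1.0):
    # Squared-exponential kernel between 1-D input vectors a and b.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def k_y(x, xp):
    # Kernel of Y_{n-1}: depends only on the old variable x.
    return rbf(x, xp)

def k_z(x, t, xp, tp, t0=0.0):
    # Correction kernel: the factor (t - t0)(t' - t0) makes Z_n
    # identically zero on the whole subspace {t = t0}.
    return rbf(x, xp) * np.outer(t - t0, tp - t0)

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, 40)      # old design variable
t = rng.uniform(-1, 1, 40)      # newly released variable (nominal value t0 = 0)

# Covariance of Y_n = Y_{n-1} + Z_n (independent processes add in covariance).
K = k_y(x, x) + k_z(x, t, x, t)
sample = rng.multivariate_normal(np.zeros(40), K + 1e-8 * np.eye(40))
# Any point with t = t0 sees exactly the Y_{n-1} covariance, so the old
# DoE remains informative instead of being discarded.
print(sample[:5])
```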

Stéphane Chrétien (University of Lyon 2) Talk_Slides

  • Title: Relationship between sample size and architecture for the estimation of Sobolev functions using deep neural networks
  • Abstract: Beyond the many successes of deep-learning-based techniques in various branches of data analytics, medicine, business, engineering and the human sciences, a sound understanding of the generalisation properties of these techniques is still elusive. Central to these successes is the availability of huge datasets and huge computational resources, and some of the most recent trends have given paramount importance to building huge neural networks with millions of parameters, most often several orders of magnitude larger than the size of the training set. This set-up has, however, led to many surprises and counterintuitive discoveries. Overparametrisation was recently shown to favour connectivity, in a weak sense, of the set of stationary points, hence permitting stochastic-gradient-type methods to potentially reach good minimisers in several stages despite the wild nonconvexity of the training problem, as demonstrated by Kuditipudi et al. Relating generalisation to stability, recent theoretical breakthroughs have provided a better understanding of why generalisation cannot even happen without overparametrisation, as shown by Bubeck et al. Following the ideas developed by Belkin, a substantial amount of work has also been undertaken to study the double-descent phenomenon and the associated benign-overfitting property, which holds for least-norm estimators in linear and mildly non-linear regression, as well as for certain kernel-based methods (a toy illustration follows below). In the present work, we aim at studying the generalisation properties of overparametrised deep neural networks using a novel approach based on Neuberger's theorem.
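
For the double-descent phenomenon mentioned above, here is a small self-contained illustration (our toy, not from the talk): the minimum-norm least-squares fit on random ReLU features, whose test error can decrease again once the number of features p passes the interpolation threshold p = n:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, n_test = 50, 5, 2000
X, Xt = rng.normal(size=(n, d)), rng.normal(size=(n_test, d))
f = lambda Z: np.sin(Z @ np.ones(d))      # smooth (Sobolev-like) target
y, yt = f(X), f(Xt)

for p in [10, 25, 50, 100, 500]:          # number of random features
    W = rng.normal(size=(d, p))
    phi, phit = np.maximum(X @ W, 0), np.maximum(Xt @ W, 0)
    beta = np.linalg.pinv(phi) @ y        # least-norm solution; interpolates once p >= n
    print(p, round(float(np.mean((phit @ beta - yt) ** 2)), 4))
```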

Anis Fradi (Université Clermont Auvergne) Talk_Slides

  • Title: Learning and inferring on shapes and manifolds 
  • Abstract: In the realm of information science, the exploration of shapes plays a pivotal role in unraveling and interpreting data. For example, curves exhibit a vast spectrum of forms, and delving into their attributes yields invaluable insights into underlying patterns and relationships. This understanding is particularly valuable in machine learning, aiding in the selection of appropriate models and the design of algorithms that can effectively capture the inherent complexity of data. In this context, we will present a Bayesian model specifically designed for the analysis, registration, and clustering of multidimensional curves. The main advantage of the proposed models lies in their integration of reparametrization functions that act as local distributions on curves. To manage the inherent complexity of such models, we establish a connection with well-understood Riemannian manifolds, building upon insights from the use of Riemannian metrics for shape analysis (a small sketch of one such representation follows below). This intricate connection streamlines the reparametrization space, rendering the optimization process more tractable.
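
As background for the Riemannian-metric connection (an assumption on our part; the talk's exact construction may differ), a standard representation in elastic shape analysis is the square-root velocity function, sketched here:

```python
import numpy as np

def srvf(c, eps=1e-8):
    """Square-root velocity function of a sampled curve c of shape (T, d)."""
    v = np.diff(c, axis=0)                           # discrete velocity
    speed = np.linalg.norm(v, axis=1, keepdims=True)
    return v / np.sqrt(speed + eps)

def shape_distance(c1, c2):
    # L2 distance between SRVFs; under this representation, reparametrization
    # acts by isometries, which is what makes alignment tractable.
    # (Optimization over reparametrizations is omitted in this sketch.)
    q1, q2 = srvf(c1), srvf(c2)
    return float(np.sqrt(np.mean(np.sum((q1 - q2) ** 2, axis=1))))

t = np.linspace(0, 2 * np.pi, 100)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)
ellipse = np.stack([2 * np.cos(t), np.sin(t)], axis=1)
print(shape_distance(circle, ellipse))
```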

Hong Van Le (Czech Academy of Sciences) Talk_Slides

  • Title: Correct loss functions for generative models of supervised learning

  • Abstract: In my talk I shall propose a generative model of supervised learning that unifies two approaches to supervised learning, using a concept of a correct loss function. Addressing two measurability problems, which have been ignored in statistical learning theory, I propose to use convergence in outer probability to characterize the consistency of a learning algorithm. Building upon these results, I extend a result due to Cucker and Smale, which addresses the learnability of a regression model, to the setting of a conditional probability estimation problem. Additionally, I present a variant of the Vapnik-Stefanyuk regularization method for solving stochastic ill-posed problems and discuss its consequences. My talk is based on my preprint https://arxiv.org/abs/2305.06348

  • Bio: She works in information geometry, representation theory, differential geometry, algebraic and symplectic topology, and more recently on statistical learning theory. More details on her website: https://users.math.cas.cz/~hvle/

Rodolphe Le Riche (CNRS) Talk_Slides

  • Title: Modeling and Optimization with Gaussian Processes in Reduced Eigenbases
  • Abstract: Parametric shape optimization aims at minimizing an objective function f(x) where x are CAD parameters. This task is difficult when f is the output of an expensive-to-evaluate numerical simulator and the number of CAD parameters is large. Most often, the set of all considered CAD shapes resides in a manifold of lower effective dimension, in which it is preferable to build the surrogate model and perform the optimization. In this work, we uncover the manifold through a high-dimensional shape mapping and build a new coordinate system made of eigenshapes. The surrogate model is learned in the space of eigenshapes: a regularized likelihood maximization provides the most relevant dimensions for the output. The final surrogate model is detailed (anisotropic) with respect to the most sensitive eigenshapes and rough (isotropic) in the remaining dimensions (a schematic sketch follows after the bio below). Lastly, the optimization is carried out with a focus on the critical dimensions, the remaining ones being coarsely optimized through a random embedding and the manifold being accounted for through a replication strategy. At low budgets, the methodology leads to a more accurate model and a faster optimization than the classical approach of directly working with the CAD parameters.

  • Bio: Rodolphe Le Riche is a permanent CNRS researcher at LIMOS. His work deals with global optimization algorithms and engineering model identification. He likes simple, well-thought-out theory connected to practical engineering problems. More info at https://www.emse.fr/~leriche. David Gaudrie, the first contributor to this work, was a PhD student at LIMOS working with Rodolphe Le Riche. He is now a research engineer with Stellantis.
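
A schematic numerical sketch of the eigenshape idea (our illustration with toy data and hand-picked lengthscales; the talk selects the active dimensions by regularized likelihood maximization):

```python
import numpy as np

rng = np.random.default_rng(2)
n, D, k = 60, 30, 3                        # designs, shape-vector dim, active dims
shapes = rng.normal(size=(n, D)) @ rng.normal(size=(D, D)) * 0.1
y = np.sin(shapes[:, 0]) + 0.1 * rng.normal(size=n)

# Eigenshape basis: right singular vectors of the centered shape matrix.
centered = shapes - shapes.mean(axis=0)
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
alpha = centered @ Vt.T                    # coordinates in the eigenbasis

# Detailed (anisotropic) lengthscales on the k leading eigenshapes,
# one rough (isotropic) lengthscale shared by the remaining directions.
ls = np.full(D, 5.0)
ls[:k] = [0.5, 1.0, 2.0]

diff = alpha[:, None, :] - alpha[None, :, :]
K = np.exp(-0.5 * np.sum((diff / ls) ** 2, axis=-1)) + 1e-6 * np.eye(n)
weights = np.linalg.solve(K, y)            # GP interpolation weights at the design
print(weights[:5])
```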

Shantanu Joshi (University of California Los Angeles) Talk_Slides

  • Title: Generative Models for Alignment of Shape Data
  • Abstract: We will present ideas and applications for aligning shape data from brain imaging. We will first introduce the problem of aligning functional magnetic resonance time-series data. We will show one solution for achieving temporal alignment of both the amplitude and the phase of the functional magnetic resonance imaging (fMRI) time course and spectral densities. Next, we will present an idea for aligning tractography representations from diffusion-weighted imaging. Lastly, we will present a recent generative approach for learning the geometric alignment process using deep neural networks in a fully unsupervised manner (a classical warping baseline is sketched after the bio below).

  • Bio: Shantanu Joshi is an Associate Professor of Neurology, Bioengineering, and Computational & Systems Biology, and a faculty member of the Ahmanson-Lovelace Brain Mapping Center at UCLA. He received his PhD degree in Electrical Engineering from Florida State University. His research interests lie in the modeling and development of novel biomedical signal and image processing approaches for brain mapping of structure and function. His work in shape morphometrics led to the identification of a new genus of lambeosaurine dinosaurs, as listed by the International Commission on Zoological Nomenclature (ICZN). He received the NIH Career Development Award in 2015, was a finalist for the Ziskind-Somerfeld Research Award of the Society of Biological Psychiatry, and received the UCLA Faculty Career Award in 2019. He is on the editorial board of the International Journal of Computer Assisted Radiology and Surgery and an Associate Editor of Frontiers in Neuroscience (Brain Imaging Methods).
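
As a point of comparison for the temporal-alignment problem (a classical baseline sketched here for illustration; the talk's generative deep-learning approach is more general), dynamic time warping aligns the phase of two time courses:

```python
import numpy as np

def dtw_path(a, b):
    """Classic dynamic time warping between two 1-D series; returns the path."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Backtrack from (n, m) to recover the warping (the phase alignment).
    path, i, j = [(n - 1, m - 1)], n, m
    while (i, j) != (1, 1):
        moves = [(i - 1, j - 1), (i - 1, j), (i, j - 1)]
        i, j = min((p for p in moves if p[0] >= 1 and p[1] >= 1),
                   key=lambda p: D[p])
        path.append((i - 1, j - 1))
    return path[::-1]

t = np.linspace(0.0, 1.0, 80)
x = np.sin(2 * np.pi * t)                  # reference time course
y = np.sin(2 * np.pi * t ** 1.5)           # phase-distorted copy
print(len(dtw_path(x, y)))                 # length of the warping path
```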

Gérard Subsol (CNRS) Talk_Slides

  • Title: Two original methods to analyze shapes 

  • Abstract: In this presentation, we will present two original methods to analyze shapes. The first method is based on a framework provided by oriented matroid theory, that is, on a combinatorial encoding of convexity properties (a toy version of this encoding is sketched below). We apply this method to a set of skull shapes presenting various types of coronal craniosynostosis. The second method consists of a kinematic-synthesis methodology for planar rigid-body chains. This methodology approximates the set of profile curves representing a series of shapes with a single chain composed of rigid-body links connected by revolute or prismatic joints. The primary advantage of the presented approach is that a modest number of physical parameters describes the shape and size change between a set of curves.

    More info at https://www.lirmm.fr/~subsol/research.html
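
A toy version of the combinatorial encoding behind the first method (our illustration; the actual craniosynostosis pipeline is more involved): the chirotope records the orientation sign of every landmark triple:

```python
import numpy as np
from itertools import combinations

def chirotope(points):
    """points: (n, 2) landmarks -> orientation sign (-1, 0, +1) per triple."""
    signs = {}
    for i, j, k in combinations(range(len(points)), 3):
        # Sign of the determinant = orientation of the triple (CCW/CW/collinear).
        m = np.column_stack([points[[i, j, k]], np.ones(3)])
        signs[(i, j, k)] = int(np.sign(round(np.linalg.det(m), 12)))
    return signs

landmarks = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.4, 0.2]])
# Two configurations with the same chirotope share their convexity structure,
# which is the combinatorial information compared across skull shapes.
print(chirotope(landmarks))
```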

Veronika Zimmer (Technical University of Munich)

  • Title: A topological loss function for deep-learning based image segmentation using persistent homology
  • Abstract: We introduce a method for training neural networks to perform image or volume segmentation in which prior knowledge about the topology of the segmented object can be explicitly provided and then incorporated into the training process. By using the differentiable properties of persistent homology, a concept from topological data analysis, we can specify the desired topology of segmented objects in terms of their Betti numbers and then drive the proposed segmentations to contain the specified topological features. Importantly, this process does not require any ground-truth labels, just prior knowledge of the topology of the structure being segmented. We demonstrate our approach in three experiments. We find that embedding explicit prior knowledge in neural network segmentation tasks is most beneficial when the segmentation task is especially challenging, and that it can be used in either a semi-supervised or post-processing context to extract a useful training gradient from images without pixelwise labels (a simplified sketch of the Betti-number penalty follows below).
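
A simplified sketch of the Betti-number penalty (assuming the `gudhi` library for cubical persistent homology; non-differentiable here, whereas the actual method backpropagates through persistence):

```python
import numpy as np
import gudhi  # assumed dependency for cubical persistent homology

def betti_loss(prob_map, desired_betti):
    """prob_map: 2-D array of foreground probabilities in [0, 1].
    desired_betti: e.g. {0: 1, 1: 0} for one connected component, no holes."""
    # Sublevel-set filtration of (1 - p): confident foreground appears first.
    cc = gudhi.CubicalComplex(top_dimensional_cells=1.0 - prob_map)
    diag = cc.persistence()                     # list of (dim, (birth, death))
    loss = 0.0
    for dim, target in desired_betti.items():
        pers = sorted((min(d, 1.0) - b for k, (b, d) in diag if k == dim),
                      reverse=True)
        # The `target` longest bars should persist fully; the rest should vanish.
        loss += sum(1.0 - p for p in pers[:target])
        loss += sum(pers[target:])
    return loss

pred = np.zeros((32, 32)); pred[8:24, 8:24] = 0.9   # one blob: Betti = (1, 0)
print(betti_loss(pred, {0: 1, 1: 0}))
```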
