Calibration-based functional lifting allows us to embed non-convex variational problems into a higher-dimensional space, such that the embedded, relaxed formulation is convex and global minimizers of the latter can be mapped back to global minimizers of the original problem. While the related theory and results in the continuous setting are very elegant, the practical implementation of the calibration-based lifted formulation comes with certain challenges.
In the first part of this talk, we look at the theoretical derivation and properties of the (continuous) lifting approach and discuss the difficulties encountered with established discretization approaches.
In the second part, we introduce a more recent stochastic optimization approach based on neural fields and discuss its current limitations and future prospects.