School Program

On the 8th of August, we will have a summer school on energy and machine learning, two of the fastest-growing application areas of bilevel optimization. Each lecture will run for about half a day, organized in two separate parts (of 1h30min each) as follows:

08:30 - 10:00   Bilevel Optimization Algorithms & Models for Contemporary Energy Challenges (Part A)
10:00 - 10:30   Coffee break
10:30 - 12:00   Bilevel Optimization Algorithms & Models for Contemporary Energy Challenges (Part B)
12:00 - 13:15   Lunch break
13:15 - 14:45   Bilevel Optimization in Machine Learning (Part A)
14:45 - 15:15   Coffee break
15:15 - 16:45   Bilevel Optimization in Machine Learning (Part B)
17:00 - 19:00   Conference welcome reception

Bilevel Optimization Algorithms & Models for Contemporary Energy Challenges

The first part of the course will begin with the basic concepts of bilevel optimization. It will then focus on mathematical optimization algorithms for solving bilevel optimization problems. While most attention will be paid to problems in which the functions involved are linear or convex, some aspects of nonconvex problems will also be discussed. The second part will present bilevel models that address contemporary challenges in electric energy systems.


Part A: Solving bilevel optimization problems

A.1 - Single-level reformulations

A.2 - Algorithms for linear and convex problems

A.3 - Nonconvex bilevel optimization: What is possible?
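To fix ideas before the lectures, here is a toy bilevel problem (invented for illustration, not taken from the course material) whose lower level has a closed-form solution, so the single-level reformulation of A.1 reduces to simple substitution:

```python
from scipy.optimize import minimize_scalar

# Toy bilevel problem (hypothetical example):
#   upper level: min_x  (x - 1)^2 + (y*(x) - 2)^2
#   lower level: y*(x) = argmin_y  0.5 * (y - x)^2
# The lower level is solved in closed form, y*(x) = x, which turns the
# bilevel problem into a single-level problem in x alone.

def lower_level_solution(x):
    # Unique minimizer of 0.5 * (y - x)^2 is y = x
    return x

def upper_objective(x):
    y = lower_level_solution(x)
    return (x - 1.0) ** 2 + (y - 2.0) ** 2

res = minimize_scalar(upper_objective)
print(round(res.x, 4))  # → 1.5, the analytic optimum of (x-1)^2 + (x-2)^2
```

When the lower level has no closed form (e.g. a general LP), A.1's reformulations instead replace it by its optimality conditions, which is where the algorithmic machinery of A.2 comes in.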

Part B: Models for contemporary electric energy challenges

B.1 - Residential demand response and energy storage

B.2 - Multinational carbon-credit market with distinct national strategies

B.3 - Unit commitment under demand uncertainty  

Bilevel Optimization in Machine Learning

In this lecture, you will learn about the challenges of solving bilevel machine learning problems; popular examples are hyperparameter optimization and meta-learning. The focus will be on explaining efficient gradient-based methods that rely only on gradients and Jacobian-vector products, and on establishing quantitative theoretical guarantees for such methods.

Part A:
A.1 - Introduction and outline.
A.2 - Machine learning applications overview: hyperparameter optimization, meta-learning, ...
A.3 - Characteristics of bilevel problems in machine learning: large-scale and simple constraints.
A.4 - Implicit function theorem and the hypergradient.
A.5 - Hypergradient approximation methods, pytorch implementation and memory/time complexity.
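The hypergradient of A.4 can be made concrete on ridge regression, where the lower level has a closed-form solution and the implicit function theorem gives the derivative of the validation loss with respect to the regularization weight. The data and problem below are invented for illustration; real implementations would use iterative solvers rather than explicit matrix inverses:

```python
import numpy as np

# Hypothetical data: a train/validation split for ridge regression
rng = np.random.default_rng(0)
X_tr, y_tr = rng.normal(size=(20, 3)), rng.normal(size=20)
X_val, y_val = rng.normal(size=(10, 3)), rng.normal(size=10)

def hypergradient(lam):
    # Lower level: w*(lam) = argmin_w 0.5||X_tr w - y_tr||^2 + 0.5*lam*||w||^2
    H = X_tr.T @ X_tr + lam * np.eye(3)     # Hessian of the inner objective
    w = np.linalg.solve(H, X_tr.T @ y_tr)   # closed-form inner solution
    g = X_val.T @ (X_val @ w - y_val)       # gradient of val loss w.r.t. w
    # Implicit function theorem: dw/dlam = -H^{-1} * (d/dlam grad_w inner)
    #                                    = -H^{-1} w
    v = np.linalg.solve(H, g)               # the linear solve at the heart of AID
    return -v @ w                           # d(val loss)/d(lam)
```

Here the linear system in `v` plays the role that Hessian-vector and Jacobian-vector products play in the large-scale methods of A.5, where it is solved only approximately.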

Part B:
B.1 - Theoretical assumptions: smoothness and strong-convexity/contraction at the lower level.
B.2 - Error rates for Approximate Implicit Differentiation (AID) and Iterative Differentiation (ITD).
B.3 - Convergence rates for AID-based inexact (projected) hypergradient descent.
B.4 - Relaxing the assumptions: non-smoothness, multiple inner solutions, …
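The inexact projected hypergradient descent analyzed in B.3 can be sketched on the same ridge-regression toy problem: descend on the regularization weight lam, projecting onto lam >= 0. All data below are invented for illustration, and the closed-form inner solve stands in for the approximate inner iterations a real AID method would use:

```python
import numpy as np

# Hypothetical train/validation data for ridge regression
rng = np.random.default_rng(1)
X_tr, y_tr = rng.normal(size=(30, 4)), rng.normal(size=30)
X_val, y_val = rng.normal(size=(15, 4)), rng.normal(size=15)
I = np.eye(4)

def inner_solution(lam):
    # w*(lam) for the ridge lower level (closed form stands in for inner GD)
    return np.linalg.solve(X_tr.T @ X_tr + lam * I, X_tr.T @ y_tr)

def val_loss(lam):
    r = X_val @ inner_solution(lam) - y_val
    return 0.5 * r @ r

def hypergrad(lam):
    H = X_tr.T @ X_tr + lam * I
    w = inner_solution(lam)
    g = X_val.T @ (X_val @ w - y_val)
    return -np.linalg.solve(H, g) @ w     # implicit-differentiation hypergradient

# Projected hypergradient descent on the constraint lam >= 0
lam, step = 1.0, 1.0
for _ in range(200):
    lam = max(0.0, lam - step * hypergrad(lam))
```

The convergence rates in B.3 quantify how errors in the inner solution and in the linear solve propagate into this outer loop.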
