
School Program

On the 8th of August, we will hold a summer school on energy and machine learning, two of the fastest-growing application areas of bilevel optimization. Each lecture will run for about half a day, organized in two parts of 1h30min each, as follows:

Time          | Event                                                                               | Room
------------- | ----------------------------------------------------------------------------------- | ------------------------
08:00 - 08:30 | Welcome & Registration                                                              | B2A - Foyer (Level 2)
08:30 - 10:00 | Bilevel optimization algorithms and models for contemporary energy challenges (I)  | B2A - 2077
10:00 - 10:30 | Coffee Break                                                                        | B2A - Foyer (Level 2)
10:30 - 12:00 | Bilevel optimization algorithms and models for contemporary energy challenges (II) | B2A - 2077
12:00 - 13:15 | Lunch Break                                                                         | B38 - Terrace Restaurant
13:15 - 14:45 | Bilevel optimization in machine learning (I)                                        | B2A - 2077
14:45 - 15:15 | Coffee Break                                                                        | B2A - Foyer (Level 2)
15:15 - 16:45 | Bilevel optimization in machine learning (II)                                       | B2A - 2077
17:00 - 19:00 | Welcome Reception & Registration                                                    | B38 - The Arlott Bar

Bilevel Optimization Algorithms & Models for Contemporary Energy Challenges

The first part of the course will begin with the basic concepts of bilevel optimization and then focus on mathematical optimization algorithms for solving bilevel problems. Most attention will be paid to problems in which the functions involved are linear or convex, but some aspects of nonconvex problems will also be discussed. The second part will present bilevel models that address contemporary challenges in electric energy systems.
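For orientation, the generic problem behind topics I.1-I.3 can be written as follows; the notation here is ours, chosen for this summary rather than taken from the lecture:

```latex
% Generic bilevel program: the leader picks x, anticipating the
% follower's optimal reaction y.
\min_{x \in X,\; y}\ F(x, y)
\quad \text{s.t.} \quad
y \in \operatorname*{arg\,min}_{y'} \bigl\{\, f(x, y') : g(x, y') \le 0 \,\bigr\}
% If the lower-level problem is convex and satisfies a constraint
% qualification, replacing the arg min by its KKT conditions gives the
% classical single-level reformulation of topic I.1:
\min_{x \in X,\; y,\; \lambda}\ F(x, y)
\quad \text{s.t.} \quad
\nabla_y f(x, y) + \nabla_y g(x, y)^{\top} \lambda = 0, \quad
g(x, y) \le 0, \quad \lambda \ge 0, \quad \lambda^{\top} g(x, y) = 0
```

The complementarity condition \lambda^{\top} g(x, y) = 0 is what makes the reformulated problem nonconvex even when all functions are linear, which is why the dedicated algorithms of topic I.2 are needed.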


Part I: Solving bilevel optimization problems

I.1 - Single-level reformulations

I.2 - Algorithms for linear and convex problems

I.3 - Nonconvex bilevel optimization: What is possible?

Part II: Models for contemporary electric energy challenges

II.1 - Residential demand response and energy storage (a toy sketch of this setting follows the outline)

II.2 - Multinational carbon-credit market with distinct national strategies

II.3 - Unit commitment under demand uncertainty  
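To make the flavor of these models concrete, here is a small self-contained sketch in the spirit of topic II.1. It is our own toy, not the lecturers' model: a utility (leader) posts a retail price, a household (follower) meets a fixed demand by optimally mixing grid purchases with battery discharge, and a brute-force grid search over prices stands in for the reformulation-based algorithms of Part I. All numbers and names (d, cap, c_batt, follower) are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

# Toy bilevel pricing model (hypothetical data, illustration only).
d, cap, c_batt = 10.0, 4.0, 0.30   # demand [kWh], battery capacity [kWh], battery cost [$/kWh]

def follower(p):
    """Lower level: min p*g + c_batt*b  s.t.  g + b = d,  0 <= b <= cap,  g >= 0."""
    res = linprog(c=[p, c_batt],                # decision variables: [g, b]
                  A_eq=[[1.0, 1.0]], b_eq=[d],  # demand must be met
                  bounds=[(0, None), (0, cap)],
                  method="highs")
    return res.x                                # household's optimal (g, b)

# Upper level: the utility picks p on a grid to maximize revenue p * g(p),
# anticipating the household's optimal reaction.
best_p, best_rev = max(((p, p * follower(p)[0]) for p in np.linspace(0.05, 0.60, 56)),
                       key=lambda t: t[1])
print(f"best price: {best_p:.2f} $/kWh, revenue: {best_rev:.2f} $")
```

Even this toy shows the characteristic bilevel discontinuity: the household's grid purchases, and hence the utility's revenue, jump as soon as the price crosses the battery's marginal cost c_batt.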

Bilevel Optimization in Machine Learning

In this lecture, you will learn about the challenges of solving bilevel machine learning problems, of which hyperparameter optimization and meta-learning are popular examples. The focus will be on explaining efficient gradient-based methods that rely only on gradients and Jacobian-vector products, and on establishing quantitative theoretical guarantees for such methods.
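In the notation we adopt for this summary (ours, not necessarily the lecturer's), with w(\lambda) a solution of the lower-level problem \min_w f(w, \lambda) and E the upper-level objective, the implicit function theorem gives the hypergradient:

```latex
% Hypergradient of \mathcal{L}(\lambda) = E(w(\lambda), \lambda),
% assuming \nabla_w f(w(\lambda), \lambda) = 0 and \nabla^2_{ww} f invertible:
\nabla \mathcal{L}(\lambda)
  = \nabla_{\lambda} E(w(\lambda), \lambda)
  - \nabla^2_{\lambda w} f(w(\lambda), \lambda)
    \bigl[ \nabla^2_{ww} f(w(\lambda), \lambda) \bigr]^{-1}
    \nabla_{w} E(w(\lambda), \lambda)
```

Solving the linear system iteratively (e.g., by conjugate gradient) and applying the mixed second-derivative term as a vector-Jacobian product keeps everything matrix-free: only gradients and Jacobian-vector products are ever needed, never an explicit Hessian.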

Part I:
I.1 - Introduction and outline
I.2 - Machine learning applications overview: hyperparameter optimization, meta-learning, ...
I.3 - Characteristics of bilevel problems in machine learning: large-scale and simple constraints
I.4 - Implicit function theorem and the hypergradient
I.5 - Hypergradient approximation methods, PyTorch implementation, and memory/time complexity (a minimal sketch follows this list)
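As a minimal sketch of what such an implementation can look like, here is our own toy (not the lecture's code), assuming a strongly convex ridge-regression lower level; the data and the plain conjugate-gradient loop are hypothetical stand-ins:

```python
import torch

torch.manual_seed(0)
X, y   = torch.randn(50, 5), torch.randn(50)    # training data (lower level)
Xv, yv = torch.randn(20, 5), torch.randn(20)    # validation data (upper level)
lam = torch.tensor(0.1, requires_grad=True)     # hyperparameter

# Lower level: ridge regression, solved in closed form here
# (in general, an inner optimizer would run for a few steps instead).
w = torch.linalg.solve(X.T @ X + lam.detach() * torch.eye(5), X.T @ y)
w = w.detach().requires_grad_(True)

inner = ((X @ w - y) ** 2).sum() + lam * (w ** 2).sum()
gw = torch.autograd.grad(inner, w, create_graph=True)[0]   # grad_w f, graph kept for HVPs

outer = ((Xv @ w - yv) ** 2).sum()                         # upper-level objective E
v = torch.autograd.grad(outer, w)[0]                       # grad_w E

def hvp(u):
    """Hessian-vector product (grad^2_ww f) u via a second backward pass."""
    return torch.autograd.grad(gw, w, grad_outputs=u, retain_graph=True)[0]

# Conjugate gradient for q = [grad^2_ww f]^{-1} grad_w E, matrix-free.
q, r, p = torch.zeros_like(v), v.clone(), v.clone()
for _ in range(25):
    Hp = hvp(p)
    alpha = (r @ r) / (p @ Hp)
    q = q + alpha * p
    r_new = r - alpha * Hp
    if r_new.norm() < 1e-8:
        break
    p = r_new + ((r_new @ r_new) / (r @ r)) * p
    r = r_new

# AID hypergradient: the direct term grad_lam E is zero in this example,
# so dE/dlam = -(grad^2_{lam w} f) q, obtained with one vector-Jacobian product.
hyper = -torch.autograd.grad(gw, lam, grad_outputs=q)[0]
print("hypergradient dE/dlam =", hyper.item())
```

Note that the Hessian of the inner problem is never materialized: the conjugate-gradient loop touches it only through Hessian-vector products, and the final step is a single vector-Jacobian product, exactly the operations highlighted in the lecture abstract.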

Part II:
II.1 - Theoretical assumptions: smoothness and strong-convexity/contraction at the lower level
II.2 - Error rates for Approximate Implicit Differentiation (AID) and Iterative Differentiation (ITD) (an illustrative bound follows this list)
II.3 - Convergence rates for AID-based inexact (projected) hypergradient descent
II.4 - Relaxing the assumptions: non-smoothness, multiple inner solutions, …
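For a feel of the guarantees in II.2, here is the shape (only the shape; constants and precise assumptions are deferred to the lecture) of the bounds one can prove when the lower-level iteration is a q-contraction and all maps are suitably smooth: the hypergradient approximation computed from t inner steps converges linearly,

```latex
% Illustrative shape of an AID/ITD-type error bound, constants omitted:
% with a q-contractive lower-level map (q < 1) and t inner iterations,
\bigl\| \widehat{\nabla \mathcal{L}}_t(\lambda) - \nabla \mathcal{L}(\lambda) \bigr\|
  \le C\, q^{\,t}
```

so the hypergradient error can be driven below any tolerance with a number of inner iterations that grows only logarithmically in one over the tolerance.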
