About me:

My name is Sébastien Kerleau and I am a third-year PhD student in computer science at Université Paris Dauphine-PSL. My work, carried out under the supervision of Denis Cornaz and Clément Royer, revolves around linear algebra.
More precisely, I study the properties of positive spanning sets. Using arguments from graph theory, I analyze their structure and look for new ways to use such sets in derivative-free algorithms.
For more information on a specific topic, click any link in the sidebar.

About my thesis:

Consider a function \(f\) that one needs to minimize. When the derivative exists and is available, gradient-based algorithms are usually applied, yet it may happen that derivative information is unavailable or expensive to obtain. Such scenarios led researchers to develop derivative-free optimization (DFO) algorithms. An emblematic class is that of direct-search methods. Starting from a candidate minimizer \(\mathbf{x}\), the basic idea is to sample the objective function at each iteration at a finite number of points \(\mathbf{x_1},\dots,\mathbf{x_s}\) around \(\mathbf{x}\). Each point \(\mathbf{x_i}\) is expressed as \(\mathbf{x_i}=\mathbf{x}+\lambda\mathbf{v_i}\) for some vector \(\mathbf{v_i} \in \mathbb{R}^n\) and some step size \(\lambda \in \mathbb{R}_+\); depending on the sampled values \(f(\mathbf{x_i})\), the candidate \(\mathbf{x}\) may be updated at the next iteration. Assuming that the vectors \(\mathbf{v_1},\dots,\mathbf{v_s}\) are 'well spread' in the space, the points \(\mathbf{x_i}\) are sufficiently dispersed around \(\mathbf{x}\) at each iteration and the algorithm should converge!
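To make this concrete, here is a minimal Python sketch of such a direct-search loop. The quadratic objective, the simple-decrease test and the step-halving rule are illustrative choices for this page, not a production method:

```python
import numpy as np

def direct_search(f, x0, directions, lam=1.0, tol=1e-8, max_iter=1000):
    """Minimal direct-search sketch: poll f along each direction,
    move to the first point that improves, otherwise halve the step."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        if lam < tol:
            break
        for v in directions:
            candidate = x + lam * v
            if f(candidate) < f(x):  # simple decrease condition
                x = candidate
                break
        else:
            lam /= 2.0  # no poll point improved: shrink the step size
    return x

# Poll along the coordinate directions and their opposites (a PSS of R^2).
D = [np.array(v, dtype=float) for v in [(1, 0), (-1, 0), (0, 1), (0, -1)]]
x_star = direct_search(lambda x: (x[0] - 1)**2 + (x[1] + 2)**2, [0.0, 0.0], D)
# x_star converges to the minimizer (1, -2)
```

Because the four polling directions positively span \(\mathbb{R}^2\), at least one of them always makes an acute angle with the negative gradient, which is what drives the method towards the minimizer.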

Positive spanning sets (PSSs) can be defined as families of vectors that span the Euclidean space through non-negative linear combinations (equivalently, through positive linear combinations). Using Farkas' lemma, one can also characterize PSSs as families of vectors that are not all contained in a common half-space. In other words, PSSs are exactly the families whose elements are 'well spread' in the space!
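The PSS property can be tested numerically. The sketch below relies on a classical characterization: a family positively spans \(\mathbb{R}^n\) if and only if it spans \(\mathbb{R}^n\) linearly and zero is a strictly positive combination of its elements. The second condition is checked with a small linear program via `scipy.optimize.linprog`; the `1e-9` positivity threshold is an arbitrary numerical tolerance:

```python
import numpy as np
from scipy.optimize import linprog

def is_pss(vectors):
    """Test whether a family of vectors positively spans R^n.

    Characterization used: the family is a PSS iff it spans R^n
    linearly and 0 is a strictly positive combination of its elements.
    The latter is checked by an LP maximizing the smallest coefficient.
    """
    V = np.asarray(vectors, dtype=float).T  # shape (n, s), one column per vector
    n, s = V.shape
    if np.linalg.matrix_rank(V) < n:
        return False
    # Variables: (lambda_1, ..., lambda_s, t).  Maximize t subject to
    #   V @ lambda = 0,  sum(lambda) = 1,  lambda_i >= t.
    c = np.zeros(s + 1)
    c[-1] = -1.0                      # linprog minimizes, so minimize -t
    A_eq = np.zeros((n + 1, s + 1))
    A_eq[:n, :s] = V
    A_eq[n, :s] = 1.0
    b_eq = np.zeros(n + 1)
    b_eq[n] = 1.0
    A_ub = np.zeros((s, s + 1))       # t - lambda_i <= 0
    A_ub[:, :s] = -np.eye(s)
    A_ub[:, -1] = 1.0
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(s), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(None, None)] * (s + 1))
    return bool(res.success and -res.fun > 1e-9)
```

For instance, \(\{(1,0), (0,1), (-1,-1)\}\) passes the test, while \(\{(1,0), (-1,0), (0,1)\}\) fails: no non-negative combination of the latter reaches \((0,-1)\).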

Drag around the vectors below to create your favourite PSS in \(\mathbb{R}^2\)!

PSS
Not all PSSs are created equal: the speed of convergence of a DFO algorithm varies with the collection of vectors it receives as input. In particular, the cosine measure of the PSS appears to be an important factor: find a unit vector in \(\mathbb{R}^n\) whose angle with its closest neighbour in the family is as large as possible. The cosine measure of the family is the cosine of this angle. For instance, the family formed by the four vectors above has a cosine measure of -0.3162.
Having a positive cosine measure is equivalent to being a PSS. Sets whose cosine measure is close to 1 are more popular in DFO algorithms as, roughly speaking, their elements are extremely well spread in the space.
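In \(\mathbb{R}^2\) the cosine measure is easy to compute by hand: the minimizing unit vector bisects the widest angular gap between consecutive vectors, so the measure equals the cosine of half that gap. A short sketch under this \(\mathbb{R}^2\)-specific assumption (nonzero vectors only):

```python
import numpy as np

def cosine_measure_2d(vectors):
    """Cosine measure of a family of nonzero vectors in R^2.

    The minimizing unit vector bisects the widest angular gap between
    consecutive vectors, so the measure equals cos(widest_gap / 2).
    """
    angles = np.sort([np.arctan2(v[1], v[0]) for v in vectors])
    gaps = np.diff(np.append(angles, angles[0] + 2 * np.pi))
    return np.cos(gaps.max() / 2)

# Maximal positive basis of R^2: all gaps are 90 degrees.
print(round(cosine_measure_2d([(1, 0), (0, 1), (-1, 0), (0, -1)]), 4))  # 0.7071
```

Note that the formula is consistent with the equivalence above: the family is a PSS exactly when every gap is below \(\pi\), i.e. when the cosine of half the widest gap is positive.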
Creating PSSs with a large cosine measure is trivial: it suffices to consider families with a very large number of well-spread elements. In practice however, small PSSs are usually favoured in algorithms.
Did you notice? A PSS of size 4 in \(\mathbb{R}^2\) is usually not inclusion-wise minimal: it contains a 'useless' vector that can be removed without affecting the family's ability to positively span the Euclidean space. Inclusion-wise minimal PSSs are called positive bases.

What is the size of a positive basis in \(\mathbb{R}^n\)?

Curiously, although linear bases all share the same size, positive bases do not: in \(n\)-dimensional spaces, their size ranges from \(n+1\) up to \(2n\)! Using the tool above, can you create a positive basis of size 4 in \(\mathbb{R}^2\)? Can you prove that positive spanning sets of size 5 in \(\mathbb{R}^2\) can never be inclusion-wise minimal?
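Both extremes can be checked numerically. The sketch below uses the \(\mathbb{R}^2\)-specific angular-gap criterion (a family positively spans \(\mathbb{R}^2\) iff every angular gap between consecutive vectors is strictly below \(\pi\)) and tests minimality by removing each vector in turn; the example families are illustrative choices:

```python
import numpy as np

def is_pss_2d(vectors):
    """A family of nonzero vectors positively spans R^2 iff every angular
    gap between consecutive vectors (sorted by angle) is strictly < pi."""
    angles = np.sort([np.arctan2(v[1], v[0]) for v in vectors])
    gaps = np.diff(np.append(angles, angles[0] + 2 * np.pi))
    return bool(gaps.max() < np.pi)

def is_positive_basis_2d(vectors):
    """Inclusion-wise minimal PSS: removing any single vector must
    destroy the positive spanning property."""
    if not is_pss_2d(vectors):
        return False
    return all(not is_pss_2d(vectors[:i] + vectors[i + 1:])
               for i in range(len(vectors)))

minimal = [(1, 0), (0, 1), (-1, -1)]          # positive basis of size n+1 = 3
maximal = [(1, 0), (-1, 0), (0, 1), (0, -1)]  # positive basis of size 2n = 4
five = maximal + [(1, 1)]                     # a PSS of size 5: never minimal
```

Here `minimal` and `maximal` are both positive bases, whereas `five` positively spans \(\mathbb{R}^2\) but keeps doing so after its last vector is removed, so it cannot be a positive basis.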