# Pacific Northwest Probability Seminar: Gravitational Allocation to Uniform Points on the Sphere

[MUSIC] This is a talk on gravitational

allocation to uniform points. It’s with Nina Holden,

who is a student at MIT, and Alex Zhai, a student at Stanford. And you really couldn’t ask for

better collaborators. Before starting on the topic, let me use my pulpit to make a recommendation to all of you, not about today specifically, but in general when you give talks, especially with junior co-authors. I’ve learned this trick of

putting the co-authors’ pictures on the slide, because it’s so easy to forget: when you have a more senior speaker who has junior co-authors who maybe did most of the work, it’s so easy to forget their names. But we’ve been selected to remember faces more than names that flash by. So with that,

let me start on the topic. And it’s a topic that goes back

a long while, in particular, I’ve been fascinated by it for

already 20 years. So it’s comparing

matchings in the plane and later we’ll see

allocations in the plane. Different kinds of matchings and

allocations which will have a bipartite nature, so

let’s start with matchings. And so we’re gonna

take a two-dimensional manifold without boundary. But in fact we’ll focus

just on two examples, a torus and a sphere. And there won’t really be a difference in the theorems we can prove in both cases, so I’ll switch just for pictorial reasons

between the two. So the example here is a torus, a square with

periodic boundary conditions. We’re going to make

it have area n, so this is root n

by root n square. And then we have

n blue points and n red points thrown

independently and uniformly at random

into this torus, both on the right and the left. Now we’re trying to

match the blues and the reds. Now what you see on the right

is the optimal matching, minimizing the total length, or equivalently, the average

length of the matching. And this has been studied

intensively since the 80s. And it’s well understood, so

in two dimensions with this normalization, the average

distance in the optimal matching is order root log n; this is a theorem of Ajtai, Komlós and Tusnády. I’ll state it again later. So here the average is root log n in this root n by root n square. What you see on the left,

we’re calling the stable matching; it’s also the greedy matching. So the simplest construction of what you see on the left is this: take the blue and red points that are closest to each other, match them, take them out of the matching game, and repeat. So each time, take the two closest unmatched blue and red points and match them, till you’ve exhausted all the points. Okay, now,
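The greedy construction just described can be sketched in code. This is a minimal illustration with made-up sizes, not the program behind the pictures; it matches the closest unmatched blue-red pair until none remain:

```python
import numpy as np

def torus_dist(p, q, side):
    """Distance on the side-by-side torus (periodic boundary conditions)."""
    d = np.abs(p - q)
    d = np.minimum(d, side - d)  # wrap around the torus
    return np.hypot(d[..., 0], d[..., 1])

def greedy_matching(blue, red, side):
    """Repeatedly match the closest unmatched blue-red pair."""
    n = len(blue)
    dist = torus_dist(blue[:, None, :], red[None, :, :], side)  # n x n table
    matches = []
    for _ in range(n):
        i, j = np.unravel_index(np.argmin(dist), dist.shape)
        matches.append((int(i), int(j)))
        dist[i, :] = np.inf  # take the matched pair out of the game
        dist[:, j] = np.inf
    return matches

rng = np.random.default_rng(0)
n = 100
side = np.sqrt(n)  # a root-n by root-n torus of area n
blue = rng.uniform(0, side, size=(n, 2))
red = rng.uniform(0, side, size=(n, 2))
matches = greedy_matching(blue, red, side)
```

Sorting all pairs once and sweeping through them would give the same matching; the quadratic table above is just the shortest way to write it.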

why am I calling it stable? Cuz it has an alternative construction, which is the Gale-Shapley stable marriage: each red point prefers the blue points according to their distance, so it prefers to be matched to close points. Similarly, the blue points prefer to be

matched to close ones. And then there is a notion: whenever you have these two kinds of preference orderings for both sides, Gale-Shapley ensures that there is a stable matching with no unstable pairs. An unstable pair is a blue and a red point that are not matched but prefer

each other to their matches. So there always is a stable

matching, in general, it’s not unique. In this setting where both

sides use the same metric, it’s unique and it’s the same

as the greedy matching. So with that background: the greedy matching is simpler to define than the optimal one, and algorithmically faster to construct, but it does worse on the measure of the average distance, and

we don’t know how much worse. So the bounds here are very far

apart, so the lower bound is still root log n, the upper

bound is about n to the quarter. So one open problem

I’m emphasizing now, and will emphasize again later, is to understand this matching. Now the optimal matching does

have a polynomial algorithm which is somewhat sophisticated,

also classical. But we’re going to look at algorithms that are simpler to define and more local, and focus on the gravitational one that you will see later; it’s an

online gravitational algorithm. So, by the way,

these pictures are due to. Okay, so we’re going to

move from matchings to allocations and

here is the formal definition. But here is a picture of

a different allocation, one which we studied long ago, also with Chris Hoffman. So what you have is

you throw n points again in the torus and we want

to allocate to each equal area. So that’s what a fair

allocation is. And it’s kind of analogous

to the matching problem. So this allocation is created by each point growing a sphere at unit rate, or rather a disk at unit rate, around it. And each point acquires all the area it reaches first, until it is sated, until it gets its fair

share of the total. And then it stops. So some points, like this one, are very lucky and just get the sphere, a disk, around themselves. Other points are hemmed in, and so their territory

becomes disconnected. So if you look, for instance, at this point down here, it is

hemmed in, so it starts to grow. But the area around it is

acquired by other centers. But it keeps growing

in the background and eventually gets its

fill way out here. Okay, so
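The growth process just described can be sketched as a grid approximation. This is an illustration with made-up parameters, not the code behind the figure; cells are claimed in order of torus distance, and a center stops once it has its fair share:

```python
import numpy as np

def stable_allocation_grid(centers, side, res):
    """Grid approximation of the growing-disk allocation on the torus:
    sweep (cell, center) pairs globally by distance; a center claims an
    unclaimed cell unless it is already sated (has its fair share)."""
    xs = (np.arange(res) + 0.5) * side / res
    cells = np.stack(np.meshgrid(xs, xs), axis=-1).reshape(-1, 2)
    d = np.abs(cells[:, None, :] - centers[None, :, :])
    d = np.minimum(d, side - d)                    # periodic boundary
    dist = np.hypot(d[..., 0], d[..., 1])          # (num cells, num centers)
    quota = len(cells) // len(centers)             # fair share, in cells
    owner = np.full(len(cells), -1)
    claimed = np.zeros(len(centers), dtype=int)
    for flat in np.argsort(dist, axis=None):       # global sweep by distance
        cell, c = np.unravel_index(flat, dist.shape)
        if owner[cell] == -1 and claimed[c] < quota:
            owner[cell] = c
            claimed[c] += 1
    return owner, claimed

rng = np.random.default_rng(1)
n, res, side = 4, 40, 2.0                          # made-up illustration sizes
centers = rng.uniform(0, side, size=(n, 2))
owner, claimed = stable_allocation_grid(centers, side, res)
```

Every center ends up with exactly its quota of cells.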

everyone will get a territory which is equal in area. But some of these territories

will be disconnected. And it’s an open problem here also to understand the average distance traveled: if you take a typical point, how far is it from its center? That’s open, with similar bounds to the matching problem. Okay, so this is the formal

definition of a fair allocation. So we’re gonna have a set

of centers or stars, we’re gonna call them stars,

these n points. And we’ll have a function

from the manifold. So we’re gonna switch from

the torus to a sphere. But everything applies

in both settings, so now we’re going to talk about

an allocation on the sphere. So it’s just a mapping from the sphere to the stars such that the preimage of every

star has the same area. So here if we normalize that

the area of the manifold is exactly n, then it means that the area of

each preimage will be exactly 1. Okay so we’ll tend to do that. And then there could be some

points that are just unmapped and then we’ll just formally

map them to infinity. We have to ensure that

that’s a zero measure set. Okay so that’s the definition

of fair allocation. We’ll want a fair allocation

which is efficient in the sense that points get allocated

close to where they started. Okay and some of you who get the

Notices may have seen this fair allocation on the cover. And so the main purpose of this talk is to tell you what this is; it will also be related to Michelle’s talk in the afternoon. Okay, so we are going to be interested

in an allocation rule, so not just a specific

set of points but given any set of points how

to allocate fairly to them. And we want an allocation rule which, as I said, is efficient: we want to kinda minimize, on average, the distance from x

to the point it’s allocated to. So there’s, again, a very sophisticated theory

of optimal matching, and also optimal allocation, which is a special case of optimal transportation. Here our focus will be on near-optimal allocations that can be constructed in a very intuitive and as local as possible way. So this is the scheme of

gravitational allocation that I want to now define. So we’re gonna start with

the gravitational potential. And since this is a two-dimensional object, here I have to choose either the sphere or the torus; I’m going to describe it for the sphere. There’s an analogous thing for the torus and other manifolds. So let’s start with

the gravitational potential. The harmonic function in two dimensions, as a function of the distance, is the logarithm. Here it’s a little bit different, because we’re working on the sphere, a two-dimensional object in R3, and the distance we’re using here is the distance in R3, not the spherical distance. And U(x) is the sum over the stars of these logs. So that’s the potential

function we’re gonna use. And then the force field

will be minus the gradient of the potential,

except there’s this S here. If you just formally differentiate and look at the gradient, it will point inside the ball. But we don’t want points to go inside the ball; everything stays on the sphere. So wherever this vector points, we’re gonna always project it to the tangent plane of the sphere, and we get a force that keeps things on the sphere. That’s this subscript S. And then once we define

the force we get flow lines via this ODE. So given the starting point x,

we have a curve yx(t) that starts at x and satisfies this

differential equation until some stopping time tau x and

this tau x represents the time the particle gets

swallowed by a star. So this is just continuous gradient descent for this potential function, starting from x. Okay, so important notions for us will be the basin of

attraction of every star, so all the points x that

are mapped to the star z. And then the terminal point psi of x is this limit: z if x is in B of z, and infinity if x is not in the domain of attraction of any of these stars. Any questions about

this definition? I mean this is the star

of today’s show. So at least this hour’s show. So here’s a little picture of

a local patch of this potential, so we’re not drawing it

over the whole sphere, just think of a small patch. And you see that the potential is minus infinity at the stars, so you can think of this gradient descent as: for any starting point x, put a pebble on the surface and see where it’s going to roll. That’s the mapping psi of x. So I guess the point here

is that unlike if you go to the first allocation I defined,

there, [LAUGH] It was kind of built

in that it’s fair, right? Because every point stopped

when it got sated. Here we didn’t say anything

about equal area in the definition, yet

the domains have equal area. So the principle behind this

goes back to the work of Dacorogna and Moser from 1990, but each particular case is a little

bit different, but I’ll explain this, but let me just

tell you a little helpful story. So I gave this talk in

the math congress in Montreal, that’s where that picture

on the Notices comes from. But before I was allowed to give this talk, I sent the abstract to Frank Morgan, who was handling the abstracts, and he said,

sorry I don’t believe you. You have to convince me that

what you’re saying is reasonable before I can publish

this abstract. And he said, well, what about this? That’s actually been very helpful, because I now incorporate his questions into the talk. He sent this configuration: what if I have one star at

the North Pole and it’s hemmed in by a bunch of stars, so

if I then apply this gravity, clearly this star is

going to get less area. And I sent him

back this picture. So it’s actually,

it gets its equal share. And even points here that are

closer to these stars than to the North Pole star, they

are going to travel here because the gravity from these

two will cancel and it is just going to

travel through this. So this doesn’t show why

it’s equal area, but at least it shows why

there is no contradiction in this picture. But that wasn’t enough. He asked, well, what about if I

have a point on the south pole and all the rest are on

the northern hemisphere? Surely, this one is going

to get a far bigger region. But it doesn’t; it gets equal area. And even points here that

are closer to the South Pole, they get swept up to

the northern hemisphere. Because all these points in

the northern hemisphere kind of are banding together and pulling

from here all the way up. So this is the most

technical slide of the talk, this is the one page proof of

why, indeed, you get equal area, and you have to remember

the divergence theorem. So the divergence theorem, or its version on the sphere, will tell us the following on a domain; we’re gonna use the basin of one star z0. And remember, the force is minus the spherical gradient of the potential. So if we take

the divergence of that, then the integral of the divergence over the interior is the integral of the force times the normal on the boundary; that’s the Divergence Theorem. Remember, the force was minus the gradient; that’s why we have this minus sign. This is the version of the Divergence Theorem on the sphere, so this has to be the spherical Laplacian: you take the divergence of the spherical gradient. Okay, so we write this down, this is

the Divergence Theorem, and now I want to assume the area of the sphere is A, not necessarily n, just to separate the two meanings of n here. So we’re gonna take a sphere S of some area A. Now, what is this Laplacian? Well, if you take one logarithm, if you’re doing this in the plane, the Laplacian of the logarithm is just a Dirac with this constant in front: two pi times a Dirac at z. This is the Laplacian

in the x variable. Because we’re not in the plane but on the sphere, there’s a curvature term. Okay, this is a calculus

exercise, I’m not writing out. But the spherical Laplacian

of a log will give you this term you’re familiar

with from the plane with the curvature correction

two pi over the area. So if the area tends to infinity, the sphere becomes flat and this term disappears. Using that, the Laplacian of the potential is just gonna be two pi times the sum of these Diracs over the stars, minus two pi n over A. Now, if you look

at these regions, maybe I have a picture of this. Yeah, if you look at

each of these domains, on the boundary, the force has

to be tangent to the boundary. You can’t have any component

in the normal direction or this point would

be pulled inside. So for points on

the boundary the force is orthogonal to the normal. So this part is zero. By the way,
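For reference, the computation on this slide can be written out in symbols; this is a hedged reconstruction from the steps as stated, with A the area of the sphere and n the number of stars:

```latex
% Divergence theorem on the basin B(z_0), with force F = -\nabla_S U:
\[
\int_{B(z_0)} \Delta_S U \, dA
  = -\int_{\partial B(z_0)} F \cdot \hat n \, ds = 0,
\]
% the boundary term vanishing because F is tangent to the basin boundary.
% Spherical Laplacian of the potential (log kernel plus curvature correction):
\[
\Delta_S U = 2\pi \sum_{i=1}^{n} \delta_{z_i} - \frac{2\pi n}{A},
\]
% and integrating this over the basin, which contains exactly one star:
\[
0 = 2\pi - \frac{2\pi n}{A}\,\mathrm{area}\bigl(B(z_0)\bigr)
\quad\Longrightarrow\quad
\mathrm{area}\bigl(B(z_0)\bigr) = \frac{A}{n}.
\]
```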

this Laplacian I write is a distributional Laplacian. And in the force times the normal, the force is orthogonal to the normal. So this integral

is in fact zero. So this integral must

be zero as well. And if you integrate

the Laplacian here and write down that it equals zero, what do you get? Well, in the basin there’s

just a single star, that’s in the basin

by definition. So this part just

gives you two pi. And this part gives you two

pi n over A times the area of the basin we’re integrating,

this is just a constant. So we get this identity, which we can solve: the area of the basin is just A over n.>>Suppose this is black magic, do you have like a physical idea

why we should have [INAUDIBLE].>>[LAUGH]

>>After all, physical ideas and black magic

are actually very close. So you just have to get used to

it and it will feel natural, but we do have another

proof which is equally mysterious at first sight and

you know, okay. So, that’s why it’s fair, but it’s interesting because

it’s also efficient. So the expected distance traveled from x to psi of x is going to be a constant times root log n. So again, the normalization is that we are taking a sphere of area n; the same thing applies for a torus of area n. And this is in fact optimal. So, up to

the value of the constant, any fair allocation on

the sphere will have to have at least this expected distance traveled. So gravity is an efficient way to get an efficient allocation. Okay, so the code for this

picture was first written by Manjunath Krishnapur and

then modified by many people. So, here you’ll see the pictures

for different values of n, and you’ll see they kind of change, because as n grows, the domains get more elongated. This is for 750. Now, a little background

about related work. So, the first kind of work

of this nature was also in the plane. But it was not to uniform

random points but to the zeroes of

a Gaussian Entire Function. So, there’s a unique

Gaussian Entire Function. Unique in distribution. I’ll be more precise

in a moment. There’s a Gaussian Entire

Function whose zeroes are translation invariant; the function itself is not unique, cuz you could multiply by a function like e to the z. But the distribution of the set of zeroes is unique up to scaling. If you have any Gaussian Entire Function whose zeroes have a translation invariant distribution, then this distribution is

the same up to scaling. So that’s a theorem

of [INAUDIBLE]. Anyway so they look at this

function; actually, I’ll write the formula for this function: F(z) is the sum over n of a_n z^n over the square root of n factorial. The a_n are Gaussian coefficients; the a_n here are independent complex normal variables. So this is >>With the same variance?>>All with the same variance, thank you, yes. All complex, we’ll say variance one. So you look at this series, okay, and then they were able to look at an allocation

given by this potential. So although the set of points for that problem looks much more complicated than the uniform points, the potential is much simpler: instead of being given by some series, it’s just the log of an analytic function with this quadratic correction. And now, we were interested

in uniform points. Unfortunately, if you try to

do gravity in the whole plane, with natural two-dimensional gravity, the force doesn’t converge. So you write down a series and

it diverges in all senses. So, with Sourav Chatterjee,

Ron Peled and Dan Romik,

we analyzed an analogous process in Rd for

d at least three. So you can take a Poisson

process in space. And just define

the corresponding gravitational potential or

gravitational force. Actually, the force

converges in all dimensions, the potential only in dimension five and higher, and then we analyzed the domains that arise. It turns out that they have exponential tails on their diameter, which is

quite efficient, much better than you could get

say from the stable allocation. But we couldn’t do it

in two dimensions. So, in two dimensions, the way we can do uniform points

is just work in finite volume where there’s no

problem of divergence. So that’s what we’re doing here. Okay, so
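Before turning to applications, the gravitational flow on the sphere defined earlier can be sketched numerically. This is an illustrative Euler integration with made-up step size and capture radius, not the code behind the pictures:

```python
import numpy as np

def sphere_force(x, stars):
    """Minus the gradient of U(x) = sum_i log|x - z_i| (distance in R3),
    projected onto the tangent plane at x (the subscript S in the talk)."""
    diffs = x - stars                              # gradient of log|x - z|
    grad = (diffs / (diffs ** 2).sum(axis=1, keepdims=True)).sum(axis=0)
    f = -grad
    return f - np.dot(f, x) * x                    # keep the force tangential

def flow(x, stars, dt=1e-3, eps=0.05, max_steps=100_000):
    """Euler integration of y'(t) = F(y(t)) until the particle is swallowed
    by (comes within eps of) a star; returns the star index, or -1, the
    formal 'mapped to infinity' case."""
    y = x / np.linalg.norm(x)
    for _ in range(max_steps):
        d = np.linalg.norm(stars - y, axis=1)
        if d.min() < eps:
            return int(d.argmin())
        y = y + dt * sphere_force(y, stars)
        y /= np.linalg.norm(y)                     # stay on the sphere
    return -1

# Four stars at the vertices of a regular tetrahedron (made-up configuration).
stars = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
x0 = np.array([0.3, 0.2, 0.9])
x0 = x0 / np.linalg.norm(x0)
basin = flow(x0, stars)
```

Flowing a grid of starting points and counting how many land in each basin is one way to check the equal-area theorem numerically.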

that’s the one you saw before. So, one application of gravitational allocation. I will explain the idea behind the theorem on gravitational allocation I stated in a few moments, but let me, before explaining

this say how it is applied. So sorry,

let me go back to where we are. So one application is to

matching to the classical matching problem. So given n red points and

n blue points on the sphere of area n,

we want to match them. And we can use gravity to define

a matching where the expected average distance of the matching is root log n. So X here is the average distance between a red point ai and the blue point it’s matched to. And the theorem of

Ajtai-Komlos-Tusnady tells us that up to the value of

the constant, this is optimal. Okay, so how does this go? So we have blue points and

red points, let’s put down the blue points

and think of them as stars. Now the red points

will arrive one by one. When a red point arrives,

just apply the force of the gravitational

potential to that point. Look at the gradient

flow we defined, it’s going to be matched

to some blue point. Do this matching and repeat. So one key point is after

the first blue point is gone, the remaining n-1 points are

independent uniform points on the sphere. We haven’t created any bias. In contrast, maybe

a more natural thing to do in the first step would be to match

a1 to the closest blue point. That’s more natural but

it actually creates a bias. So after you do that,

you tend to go for the point with the largest cell; there is a preference, and so the points get biased. But with gravitational

allocation, the points don’t get biased. And after the first matching

we have n minus 1 points, which are still

independent uniform. Anyone see why?>>[INAUDIBLE]

Uniform?>>Right,

because of the equal area. So the equal area tells us that

the first red point is equally likely to be matched to

each of the n blue points. So we’re taking n

uniform points and just sampling one uniformly

at random and removing it. And now it’s easy to see that

that’s what leads to n minus 1 independent uniform points. So now we can repeat. And by the allocation theorem, we got root log n

in the first round. In the second round, we have to have a correction

term because the theorem of root log n allocation was on a sphere

of area n with n points. Now we have a sphere of area

n but only n minus 1 points, so we have to scale

the radius by square root of n over n minus 1 to map

area n minus 1 to area n. And we can repeat this again and

again. And in the end what’s

the average distance? We’re gonna get 1 over n times a sum. What’s the average

expected distance? When we’re down to k points,

we basically get root log k. I’m adding a 1 here just to take

care of the case k equals 1, but it’s basically root log k. So we just have to have this

average, and with the scaling factor root n over k to take

care of the different radii. And this is an easy

series to sum. You can take out the root,

just bound this by root log n, and you’ll see that the whole

thing is bounded by constant root log n. So you get the claimed average. Now this matching

algorithm has an advantage, which is also a disadvantage,

namely, it’s online. Where a red point gets mapped

only depends on the previous red points and all the blue points,

but not on the later red points. So here I want to emphasize

again the open problem which was for the greedy matching. So for greedy matching in

work with Alexander Holroyd, Robin Pemantle and Oded Schramm

that appeared in 2009, we got a tail bound for the

distance in the greedy matching. Which when integrated gives

an expected average distance, which is about n to the quarter. I would conjecture that the

true average is order log n, but all I know is the lower bound

root log n which we have for all matchings, and the upper

bound that you see here. The problem is also open in

higher dimensions to understand the greedy

matching although the optimal matching is

understood to great precision. And you’ll hear some more

maybe in Michelle’s talk. Okay, so in the remaining eight

minutes, I want to show you something about the proof for

the allocation theorem. So the expected distance is root log n; [INAUDIBLE] we sketch why that’s reasonable. So the key thing is to

understand how large the force on a typical point is. So let’s understand the variance

of a force on a typical point. So because of the symmetry,

if we have a typical point y on a sphere, the expected

force on it is zero, so we want to understand

what’s the second moment, or equivalently, the variance. So let’s divide the area around

it into concentric rings. So this is a ring at distance k. And for each k, let’s look at

the contribution of the force of stars at distance k. So since we have one

star per unit area, there are about k

stars in this ring, let’s take a unit of area here

where the number of stars is approximately Poisson 1

in this unit of area. They’re all a distance about

k from y, what’s the force? Well because this is two

dimensional gravity, the force is 1 over distance,

not 1 over distance squared. So force will be of order

1 over the distance. But it’s multiplied by the

Poisson number of points that fall in this region. So the variance of the force

here will be order 1 over k squared. Second moment will

be 1 over k squared. And the contributions from

different regions will be essentially

independent because of this approximate

Poisson property. So overall, the contribution of

this ring to the variance will be one over k, we have to sum

k contributions like this. And now when we sum this over k,

we’re going to get a log contribution for

the second moment. This tells you that

the second moment is log n, from here you could believe but

it’s not obvious that the typical size of

the force is root log n. Now, one exception to the typical size is if the point y happens to land near a star, say less than some small distance from a star; then the force on

it will be bigger. But for the local force to

dominate the global force, the star has to

fall quite close, less than distance 1

over root log n, okay? Cuz the global force from

the far away stars, which we’ve computed, is order root log n. So that’s where this comes in.
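The two-dimensional force law used in this heuristic, and in Frank Morgan’s cancellation example earlier, is easy to check directly; a minimal illustration with made-up configurations:

```python
import numpy as np

def force_2d(y, stars):
    """Two-dimensional gravity: each star pulls y toward it with
    magnitude 1 over the distance (not 1 over distance squared)."""
    diffs = stars - y                              # points from y toward each star
    return (diffs / (diffs ** 2).sum(axis=1, keepdims=True)).sum(axis=0)

# One star at distance 2 exerts a force of magnitude 1/2.
f_single = force_2d(np.array([0.0, 0.0]), np.array([[2.0, 0.0]]))

# Two stars placed symmetrically about y cancel exactly, which is why a
# point between them can be pulled right through, as in the hemmed-in
# North Pole picture.
f_pair = force_2d(np.array([0.0, 0.0]),
                  np.array([[1.0, 3.0], [-1.0, -3.0]]))
```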

Now let’s take a typical point, I’m now calling it x, and see what’s going to happen

to it in its travels. So typically it’s not

immediately near a star. So it starts traveling

with a force, and approximately along

a straight line. Occasionally a star that’s

not too far is going to curve it a little bit. So again,

the force is root log n. When is this point going to

be swallowed by a star, well, only when its local force

dominates the global force, which means only when there’s a star within a strip of width 1 over root log n

around this path. And since the points are

approximately Poisson of rate 1, this will happen when

the strip has area 1. So the strip has area of

order 1 after you’ve gone a distance of root log n. Cuz it’s the strip of

width 1 over root log n. So the point will travel until

accidentally there is a star close enough to its path. And then it will just veer and

fall into that star. And that distance is

going to be root log n. So you can also see this from the next calculation. Any questions here

about this heuristic? Okay, another calculation,

which is kind of closer to the rigorous proof starts

with this formula. So psi of x is where

x is allocated, and that’s x plus the integral

of the force over the path. Now the magnitude of the force

we said is order root log n. In order to handle the distance, we want to know that the time

until absorption is order one. So the time, we have this differential

equation that [INAUDIBLE]. And here’s a really beautiful

fact that follows from a principle called Liouville’s

theorem in mechanics. Which says that the time

to absorption has exactly an exponential distribution

with parameter 2 pi. So this is exact,

it’s not approximate, it’s not asymptotic,

and it’s true no matter how you put the points down;
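Here is one way to unpack where the 2 pi comes from, under the normalization A = n; this is a hedged sketch of the Liouville argument described next, not a verbatim part of the talk:

```latex
% Off the stars, div F = -\Delta_S U = 2\pi n/A = 2\pi when A = n, so by
% Liouville's formula the flow \Phi_t expands area at constant rate 2\pi.
% For the set S_t of starting points still unabsorbed at time t:
\[
\mathrm{area}(S_t) = e^{-2\pi t}\,\mathrm{area}\bigl(\Phi_t(S_t)\bigr)
\le n\, e^{-2\pi t},
\]
% since the image still lies on the sphere of area n; hence
\[
\mathbb{P}(\tau_x > t) = \frac{\mathrm{area}(S_t)}{n} \le e^{-2\pi t},
\]
% and a matching lower bound upgrades this to \tau_x \sim \mathrm{Exp}(2\pi).
```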

it’s not just for random points. Okay, so to get this for

fixed points, you have to pick the starting

point at random. But in our setting,

we are fixing x and then picking the points at

random, and we get this fact. And in fact, I don’t have

time to go into details, but it follows from Liouville’s theorem, which gives the differential equation for the volume of the image of a region under the flow: the derivative of the volume is given by the integral of the divergence of the force. And when you write that down, you get the differential equation for an exponential. And the paper is on the arXiv

if you want to see more details, but actually I can use this principle to give another proof of the equal area theorem that I alluded to in the beginning. And the advantage of this proof is that it doesn’t need to use any a priori smoothness of

the boundaries of the basins, which I implicitly alluded

to in the previous proof. So the bottom line is using

the argument from differential equations, we get this beautiful

fact that the time to absorption exactly has an exponential

distribution. And then if you use this,

you again get the root log n. So let me connect to something

closer to what you’ll hear from Michelle after lunch. So if instead of L1 you look at L2, there’s been an influential physics paper by Caracciolo, Lucibello, Parisi and Sicuro. And they gave detailed

conjectures with heuristic reasoning, why when you

average the squares, so this is like a w2

distance in a matching. This should behave, I’m going

to focus on the two-dimensional case, so it should behave

like 1 over 2 pi log n. See we know that the average of

the optimal matching is root log n, so that suggests that the

average squared should be log n. That doesn’t prove it, just suggests it, and in fact they came up with

a conjecture for the constant. And this conjecture

was proved for d equals 2 by

Ambrosio-Stra-Trevisan. So they establish this 1 over

2 pi here, rigorously for the optimal matching. What’s most interesting for us

is that their analysis suggests, but doesn’t prove, that gravitational allocation will also achieve this asymptotically optimal constant. Although it doesn’t seem to be optimal for W1, it does seem to be for W2. Unfortunately, as

Ambrosio et al write, their proof doesn’t actually

analyze any specific coupling. So they use gravitational

location as part of the argument but in the end they use duality

in a way which doesn’t prove that any specific coupling

has the right property, just using duality

proves the optimality. So there’s some delicate

interchange of limits going on but I said they [INAUDIBLE]

suggest that actually gravitational location

will [INAUDIBLE] optimum. So basically I’m almost done. I want to just show you

final slides that this kind of gravitational location can be considered in

the hyperbolic plane. And there’s a beautiful work

that’s been in progress for the last five,

six years by Ding and Peled that analyzes this

in the hyperbolic plane. I’ll just end with

these picture and again emphasize

the open problems. So I told you about the greedy,

so one final open problem, so for matching we used gravitation. But the most natural

way to do a matching, which is related to

gravitation is electrostatics. So think of n red particles,

nblue particles, put the same forces but

think of them as electrostatic. So red points repel each other,

and similarly blue points repel each

other, and blue and red attract. And I don’t have

a simulation of this, but actually I have seen one where

the points beautifully travel, so the red and

blue create the matching. So when the red and blue points

coalesce, the charge becomes zero, so they stop attracting or

repelling any other point. So you do get a matching, and

I would conjecture that this matching is of

the optimal order. There’s no rigorous analysis of

this at all, and the difficulty compared to the gravitation is

here we crucially use the fact that the star’s applying the

gravitation were all located at Poisson points or uniform points

in the analysis I showed you. While for this electrostatic

story, the points, once they start traveling, we don’t

understand their distribution, so we don’t understand

the distribution of the forces. And so it’s completely open to

analyze that kind of, I think, very appealing matching. Thanks for your attention.>>[APPLAUSE]
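The electrostatic dynamics in this last open problem can at least be written down. An illustrative sketch with made-up sizes, using the 2D Coulomb law, which, like the gravity above, decays as 1 over distance; it does not implement the coalescence rule:

```python
import numpy as np

def net_forces(points, charges):
    """2D electrostatics: like charges repel, opposite charges attract,
    with magnitude |q_i q_j| over the distance (two-dimensional Coulomb)."""
    diffs = points[:, None, :] - points[None, :, :]     # x_i - x_j
    r2 = (diffs ** 2).sum(axis=-1)
    np.fill_diagonal(r2, np.inf)                        # no self-force
    qq = charges[:, None] * charges[None, :]            # +1: repel, -1: attract
    return (qq[..., None] * diffs / r2[..., None]).sum(axis=1)

rng = np.random.default_rng(0)
n = 5
points = rng.uniform(0.0, 1.0, size=(2 * n, 2))
charges = np.array([1.0] * n + [-1.0] * n)              # n red, n blue
forces = net_forces(points, charges)
# By Newton's third law the forces sum to zero, so a simulation that moves
# the particles along these forces conserves total momentum.
```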

>>What’s known about the boundaries of the regions, do they do all kinds of things, or-

>>So there’s some information like the number of saddle

points there, they are known to be piecewise smooth from

classical ODE theory.>>Enough to do what-

>>Enough to do, yes. Yes?>>It seems to me that you

indicated that when the number of points is large, the regions seem to be elongated.>>Right.>>Are there simulations or conjectures about what happens when

the number goes to infinity? How do these shapes look-

>>Well, remember these shapes normally

have area one, yet the average distance traveled from a point to the center is root log n, so that already gives you some elongation.>>But would it be straight or?>>They will not be straight. But I don’t understand the,

we don’t understand exactly. And in particular we don’t

understand their diameters exactly at all. We understand how far a typical point travels, but these things,

in higher dimensions, we understand the diameters

of the region. But in two dimensions we only

understand kind of the typical distance and the problem

is that these regions have tiny tails that go very far and

curve around. And we have no control

over the actual diameters. Only about the typical distance

that the point travels. Yes?>>Is this still true for non-uniform 3-dimensional shapes? Or non-classical, I guess.>>So, what was the question?>>Is the fair allocation

still true for non-uniform shapes, like the torus, the sphere, I think you showed the hyperbolic?>>Right, so you can do this; if you have no boundary

then it’s still true. So this extends, right? All right so, the key is just

to define the potential whose Laplacian is the difference

between the empirical measure and the target measure.>>Right.>>Which is the uniform

distribution.>>And do you think there

is a gap between those local processes, and Matching,

do you think that there could be such a simple process that

matches the optimal constant?>>Again, for W2 I conjecture

>>The gravitational. The W1,

I think there won’t be a gap. Okay, let me be more careful. I think that you can approximate: unlike the harmonic case you’re maybe thinking of, here I don’t expect there will be any gap, but it doesn’t mean there will be a local process that will achieve the constant; rather, there will be local processes that approximate the constant. But I don’t know that for

a fact.>>[INAUDIBLE]

>>So the gravitational it seems to.>>No the electrostatic. There we have no estimates at

all, so no, I don’t expect it to be the optimal constant, but we can’t prove anything about it. Yes?>>For the electrostatic, aren’t there two things you wanted

>>The blue and the red points both move right

>>Right?>>But would make sense to

keep one of them fixed and just let the other ones move, like thinking of sticking charges in, like DLA

>>So let the other ones move so then it is close to this

gravitational but then you are adding a repelling force

>>Yeah, I think that’s fairly natural,

and I believe that it can be

analyzed by similar methods, because the points applying

the gravity are uniform, but it’s not something

we’ve explicitly done, so I think that’s

a very natural thing to do. Thanks. We should thank the speaker.>>[APPLAUSE].>>All right, so.