On a family of fully nonlinear integro-differential operators: From fractional Laplacian to nonlocal Monge-Amp\`ere

We introduce a new family of intermediate operators between the fractional Laplacian and the Caffarelli-Silvestre nonlocal Monge-Amp\`ere that are given as infima of integro-differential operators. Using rearrangement techniques, we obtain representation formulas and give a connection to optimal transport. Finally, we consider a global Poisson problem, prescribing data at infinity, and prove existence, uniqueness, and $C^{1,1}$-regularity of solutions in the full space.


Introduction
Integro-differential equations arise in the study of stochastic processes with jumps, such as Lévy processes. A classical elliptic integro-differential operator is the fractional Laplacian $(-\Delta)^s$, $s \in (0, 1)$, which can be understood as the infinitesimal generator of a stable Lévy process. These types of processes are very well studied in probability, and their generators may be given by an integro-differential expression in which the kernel $K$ is a nonnegative function satisfying some integrability condition.
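For reference, the fractional Laplacian and the Lévy-type generators alluded to here take the standard forms (the normalizing constant $c_{n,s}$ and the sign conventions vary across the literature; this is textbook material, not notation fixed by this paper):

```latex
(-\Delta)^s u(x) \;=\; c_{n,s}\,\mathrm{P.V.}\!\int_{\mathbb{R}^n}
    \frac{u(x)-u(y)}{|x-y|^{n+2s}}\,dy,
\qquad
L_K u(x) \;=\; \mathrm{P.V.}\!\int_{\mathbb{R}^n}
    \big(u(x+y)-u(x)\big)\,K(y)\,dy,
```

where $K \ge 0$ satisfies the usual Lévy integrability condition $\int_{\mathbb{R}^n} \min(1, |y|^2)\,K(y)\,dy < \infty$.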
Over the last few years, there has been significant interest in studying linear and nonlinear integro-differential equations from the analytical point of view. In particular, extremal operators, such as the infimum operator in (1.1), play a fundamental role in the regularity theory. See [6,7,8,16] and the references therein. The above equation is an example of a fully nonlinear equation that appears in optimal control problems and stochastic games [12,15]. The infimum in (1.1) is taken over a family of admissible kernels $K$ that depends on the application. In fact, nonlocal Monge-Ampère equations have been developed in the form (1.1), for some choices of $K$ [4,9,11]. The Monge-Ampère equation arises in several problems in analysis and geometry, such as the mass transportation problem and the prescribed Gaussian curvature problem [10]. The classical equation prescribes the determinant of the Hessian of some convex function $u$. In the literature, there are different nonlocal versions of the Monge-Ampère operator, considered by Guillen-Schwab [11], Caffarelli-Charro [4], and Caffarelli-Silvestre [9]. See also [13] for a nonlocal linearized Monge-Ampère equation given by Maldonado-Stinga. These definitions are motivated by the following property: if $B$ is a positive definite symmetric matrix, then
(1.2) $n \det(B)^{1/n} = \inf_{A \in \mathcal{A}} \operatorname{tr}(A^T B A)$,
where $\mathcal{A} = \{A \in M_n : A > 0, \ \det(A) = 1\}$ and $M_n$ is the set of $n \times n$ matrices. If a convex function $u$ is $C^2$ at a point $x_0$, then by the previous identity with $B = D^2 u(x_0)$, we may write the Monge-Ampère operator as a concave envelope of linear operators. Caffarelli-Charro study a fractional version of $\det(D^2 u)^{1/n}$, replacing the Laplacian by the fractional Laplacian in the previous identity, where $s \in (0, 1)$ and $c_{n,s} \approx 1 - s$ as $s \to 1$ (see also [11]). A different approach, based on geometric considerations, was given by Caffarelli-Silvestre. In fact, the authors consider kernels whose level sets are volume preserving transformations of the
fractional Laplacian kernel. Namely, the infimum is taken over the family
(1.3) $\mathcal{K}^s_n = \{ K : \mathbb{R}^n \to \mathbb{R}_+ : |\{x \in \mathbb{R}^n : K(x) > r^{-n-2s}\}| = |B_r| \text{ for all } r > 0 \}$.
Moreover, both $\mathcal{MA}_s u$ and $D_s u$ converge to $\det(D^2 u)^{1/n}$, up to some constant, as $s \to 1$.
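Identity (1.2) is easy to test numerically. The sketch below is our own illustration (plain NumPy, not code from the paper): it uses the classical minimizer $A_\star = \det(B)^{1/(2n)}\,B^{-1/2}$, which attains the infimum, and checks random determinant-one competitors against it.

```python
import numpy as np

n = 3
rng = np.random.default_rng(0)
M = rng.standard_normal((n, n))
B = M @ M.T + n * np.eye(n)               # symmetric positive definite

target = n * np.linalg.det(B) ** (1.0 / n)

# Explicit minimizer: A_star = det(B)^{1/(2n)} B^{-1/2} is positive
# definite with det(A_star) = 1 and attains the infimum in (1.2).
w, V = np.linalg.eigh(B)
B_inv_sqrt = V @ np.diag(w ** -0.5) @ V.T
A_star = np.linalg.det(B) ** (1.0 / (2 * n)) * B_inv_sqrt
attained = np.trace(A_star.T @ B @ A_star)

# Random competitors with determinant one can only do worse.
for _ in range(200):
    A = rng.standard_normal((n, n))
    A = A @ A.T + 0.1 * np.eye(n)         # positive definite
    A /= np.linalg.det(A) ** (1.0 / n)    # normalize so det(A) = 1
    assert np.trace(A.T @ B @ A) >= attained - 1e-8
```

The same identity with $k$ in place of $n$ underlies the decomposition discussed in Section 4.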
In this paper, we introduce a new family of operators, given in (1.4) as infima of integro-differential operators, for any integer $1 \le k < n$, which arises from imposing certain geometric conditions on the kernels. Moreover, we will see that $|y|^{-n-2s} \in \mathcal{K}^s_1 \subset \mathcal{K}^s_k \subset \mathcal{K}^s_n$, for $1 < k < n$, and thus this family is monotone decreasing, bounded from above by the fractional Laplacian and from below by the Caffarelli-Silvestre nonlocal Monge-Ampère.
The paper is organized as follows. In Section 2, we construct the family of admissible kernels $\mathcal{K}^s_k$ and give the precise definition of our operators for $C^{1,1}$-functions. We introduce in Section 3 the basic tools from the theory of rearrangements necessary for our goals. In Section 4, we study the infimum in (1.4) and obtain a representation formula, provided some condition on the level sets is satisfied (see Theorem 4.1). We also study the limit as $s \to 1$ and give a connection to optimal transport. The Hölder continuity of $F^s_k u$ is proved in Section 5, following similar geometric techniques from [9]. In Section 6, we consider a global Poisson problem, prescribing data at infinity, and introduce a new definition of our operators for functions that are merely continuous and convex. We show existence of solutions via Perron's method and $C^{1,1}$-regularity in the full space by constructing appropriate barriers. Finally, we discuss some future directions in Section 7.

Construction of kernels
Let us start with the construction of the family of admissible kernels. Notice that any kernel $K$ in $\mathcal{K}^s_n$, defined in (1.3), has the same distribution function as the kernel of the fractional Laplacian. Geometrically, this means that the level sets of $K$ are deformations, in any direction of $\mathbb{R}^n$, of the level sets of $|x|^{-n-2s}$, preserving the $n$-dimensional volume.
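A quick Monte Carlo sanity check of this distribution-function property for the model kernel $|x|^{-n-2s}$ (our own sketch; the dimension, box size, sample count, and tolerance are arbitrary choices): the superlevel set $\{K > r^{-n-2s}\}$ is exactly the ball $B_r$, so its measure must equal $|B_r|$.

```python
import numpy as np

# Check |{x : K(x) > r^{-n-2s}}| = |B_r| for K(x) = |x|^{-n-2s},
# here with n = 2 and s = 1/2, by Monte Carlo on the box [-L, L]^2.
n, s = 2, 0.5
rng = np.random.default_rng(1)
L = 4.0
pts = rng.uniform(-L, L, size=(2_000_000, n))
K = np.linalg.norm(pts, axis=1) ** (-(n + 2 * s))

for r in (0.5, 1.0, 2.0):
    frac = np.mean(K > r ** (-(n + 2 * s)))
    measure = frac * (2 * L) ** n          # estimated superlevel-set volume
    ball = np.pi * r ** 2                  # |B_r| in the plane
    assert abs(measure - ball) / ball < 0.05
```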
In view of this approach, a natural way of finding an intermediate family of operators between the nonlocal Monge-Ampère and the fractional Laplacian is to consider kernels whose level sets are deformations that preserve the $k$-dimensional Hausdorff measure $\mathcal{H}^k$, with $1 \le k < n$, of the restrictions of balls in $\mathbb{R}^n$ to hyperplanes generated by the first $k$ vectors of the canonical basis. We define the set of admissible kernels as follows.
Definition 2.1. We say that $K \in \mathcal{K}^s_k$ if condition (2.1) holds for all $z \in \mathbb{R}^{n-k}$ and all $r > 0$. In Figure 1 we illustrate condition (2.1) for $k = 2$ and $n = 3$. Note that for $k = n$, we recover the definition of $\mathcal{K}^s_n$ in (1.3).

Proof. Let $K \in \mathcal{K}^s_k$. Fix any $z \in \mathbb{R}^{n-k-1}$ and $r > 0$, and verify condition (2.1) for $k + 1$.

Definition 2.3. A function $u : \mathbb{R}^n \to \mathbb{R}$ is said to be $C^{1,1}$ at the point $x_0$, and we write $u \in C^{1,1}(x_0)$, if there are a vector $p \in \mathbb{R}^n$, a radius $\rho > 0$, and a constant $C > 0$ such that
$|u(x_0 + x) - u(x_0) - p \cdot x| \le C |x|^2$ for all $|x| < \rho$.
We denote by $[u]_{C^{1,1}(x_0)}$ the minimum constant for which this property holds, among all admissible vectors $p$ and radii $\rho$.
where $\mathcal{K}^s_k$ is the set of kernels satisfying (2.1) and $c_{n,s}$ is the constant in $\Delta^s$. As an immediate consequence of Proposition 2.2, we obtain that the operators are ordered.
The family $\{F^s_k\}_{k=1}^n$ is monotone decreasing. The regularity condition on $u$ in Definition 2.4 allows us to compute $F^s_k u$ at the point $x_0$ in the classical sense. To obtain a finite number, we need to impose two extra conditions: $(P_1)$, an integrability condition at infinity, and $(P_2)$. If $u \in C^0(\mathbb{R}^n) \cap C^{1,1}(x_0)$ satisfies $(P_1)$ and $(P_2)$, then $F^s_k u(x_0)$ is finite. We point out that if $u$ is not convex at $x_0$, then the infimum could be $-\infty$. We show this in the next proposition.
Proposition 2.7. Let $u \in C^0(\mathbb{R}^n) \cap C^{1,1}(x_0)$. Assume that $u$ satisfies $(P_1)$. If there exists $\bar x \in \mathbb{R}^n$ with $\bar x = (\bar y, 0)$ and $\bar y \in \mathbb{R}^k$ such that $\tilde u(\bar x) < 0$, then $F^s_k u(x_0) = -\infty$.

Proof. The measure is clearly zero if $|z| \ge r$. Therefore, $K \in \mathcal{K}^s_k$. Since $\tilde u$ is continuous and $\tilde u(\bar x) < 0$, we have $\tilde u(x) < 0$ for all $x \in B_\varepsilon(\bar x)$, for some $\varepsilon > 0$. Moreover, since $K \notin L^1(B_\varepsilon(\bar x))$, it follows that $I = -\infty$. Arguing similarly as in the proof of Proposition 2.6, we see that $II < \infty$. Therefore, $F^s_k u(x_0) = -\infty$.

Remark 2.8. The operators $F^s_k$ are not rotation invariant. This is because, for simplicity, in the construction of the family of admissible kernels $\mathcal{K}^s_k$ we chose the first $k$ vectors from the canonical basis of $\mathbb{R}^n$. In general, we may take any subset of $k$ unitary vectors, $\tau = \{\tau_i\}_{i=1}^k$, and replace the first condition in (2.1) by (2.2), for all admissible $z$ and $r > 0$, where $\langle\tau\rangle$ denotes the span of $\{\tau_i\}_{i=1}^k$ and $\langle\tau\rangle^\perp$ the orthogonal subspace to $\langle\tau\rangle$. Let $SO(n)$ be the group of $n \times n$ rotation matrices. Since $\tau_i = A e_i$ for some $A \in SO(n)$, it follows that any kernel $K_\tau$ satisfying (2.2) can be written as $K_\tau = K \circ A$, where $K$ satisfies (2.1). Therefore, to make the operators rotation invariant, one possibility is to take the infimum over all possible rotations. To focus on the main ideas, we will not explore this operator in this work.

Rearrangements and measure preserving transformations
We introduce some definitions and preliminary results regarding rearrangements of nonnegative functions. For more detailed information, see for instance [1,2].

Definition 3.1. Let $f : \mathbb{R}^n \to \mathbb{R}$ be a nonnegative measurable function. We define the decreasing rearrangement of $f$ as the function $f^*$ defined on $[0, \infty)$ given by
$f^*(t) = \inf\{\lambda > 0 : |\{x \in \mathbb{R}^n : f(x) > \lambda\}| \le t\}$,
and the increasing rearrangement of $f$ as the function $f_*$ defined on $[0, \infty)$ given by
$f_*(t) = \inf\{\lambda > 0 : |\{x \in \mathbb{R}^n : f(x) < \lambda\}| > t\}$.
We use the convention that $\inf \emptyset = \infty$.

Proposition 3.2. Let $f, g : \mathbb{R}^n \to \mathbb{R}$ be nonnegative measurable functions. Then
$\int_0^\infty f^*(t)\, g_*(t) \, dt \le \int_{\mathbb{R}^n} f g \, dx \le \int_0^\infty f^*(t)\, g^*(t) \, dt$.
The upper bound is the classical Hardy-Littlewood inequality. For the proof, see [2, Theorem 2.2] or [1, Corollary 2.16]. For the sake of completeness, we give the proof of the lower bound.
Proof. For $j \ge 1$, let $f_j = f|_{B_j}$ and $g_j = g|_{B_j}$, where $B_j$ denotes the ball of radius $j$ centered at $0$ in $\mathbb{R}^n$. By [1, Corollary 2.18], the lower bound holds for $f_j$ and $g_j$. Since $f, g \ge 0$, the truncated products are dominated by $fg$. Note that for any $t \in [0, |B_j|]$, we have $(f_j)_*(t) \ge f_*(t)$. Moreover, $g_j \nearrow g$ pointwise on $\mathbb{R}^n$. Then by [1, Proposition 1.39], we have $(g_j)_* \nearrow g_*$ pointwise on $[0, \infty)$ as $j \to \infty$. By the monotone convergence theorem, combining the previous estimates, we conclude the lower bound.

Definition 3.3. We say that a measurable function $\psi : \mathbb{R}^l \to \mathbb{R}^m$ is a measure preserving transformation if for any measurable set $E$ in $\mathbb{R}^m$, it holds that $|\psi^{-1}(E)| = |E|$.

Lemma 3.4. If $\psi : \mathbb{R}^l \to \mathbb{R}^m$ is measure preserving, then the change of variables formula holds for any measurable $f : \mathbb{R}^m \to \mathbb{R}$ and any measurable set $E \subset \mathbb{R}^m$.

An important result by Ryff [17] provides a sufficient condition under which we can recover a function from its decreasing/increasing rearrangement by means of a measure preserving transformation.
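The two inequalities of Proposition 3.2 have a transparent discrete analogue, where rearrangement is just sorting; a small NumPy sketch (our own illustration, not from the paper):

```python
import numpy as np

# Discrete Hardy-Littlewood inequalities: for nonnegative vectors f, g,
#   sum(f_desc * g_asc)  <=  sum(f * g)  <=  sum(f_desc * g_desc),
# the analogues of pairing f^* with g_* (lower) and with g^* (upper).
rng = np.random.default_rng(2)
f = rng.uniform(0, 1, 1000)
g = rng.uniform(0, 1, 1000)

fd = np.sort(f)[::-1]           # decreasing rearrangement of f
gd = np.sort(g)[::-1]           # decreasing rearrangement of g
ga = np.sort(g)                 # increasing rearrangement of g

lower = np.sum(fd * ga)
mid   = np.sum(f * g)
upper = np.sum(fd * gd)
assert lower <= mid + 1e-12 and mid <= upper + 1e-12
```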
As a consequence of Ryff's theorem, we obtain a representation formula for the admissible kernels. We denote by $K_z$ the restriction $K_z(y) = K(y, z)$. In particular, there exists a measure preserving $\sigma_z : \operatorname{supp}(K_z) \to (0, \infty)$ such that $K_z$ is recovered from its rearrangement composed with $\sigma_z$. Moreover, $\lim_{t \to \infty} K_z^*(t) = 0$. Therefore, the result follows from Theorem 3.5.

In view of Definition 3.1, we introduce the symmetric rearrangement of a function in $\mathbb{R}^n$ with respect to the first $k$ variables as follows. Fix $k \in \mathbb{N}$ with $1 \le k < n$. Given $x \in \mathbb{R}^n$, we denote $x = (y, z)$, with $y \in \mathbb{R}^k$ and $z \in \mathbb{R}^{n-k}$. Furthermore, for $z$ fixed, we call $f_z$ the restriction of $f$ to $\mathbb{R}^k$, namely $f_z(y) = f(y, z)$.

Definition 3.8. Let $f : \mathbb{R}^n \to \mathbb{R}$ be a nonnegative measurable function. We define the $k$-symmetric decreasing rearrangement of $f$ as the function $f^{*,k} : \mathbb{R}^n \to [0, \infty]$ given by $f^{*,k}(y, z) = (f_z)^*(\omega_k |y|^k)$. When $k = n$, we obtain the usual symmetric rearrangement.

Remark 3.9. (1) Notice that $f^{*,k}$ and $f_{*,k}$ are radially symmetric and monotone decreasing/increasing with respect to $y$. In the literature, this type of symmetrization is also known as the Steiner symmetrization [1, Chapter 6].
(2) By Lemma 3.7, any kernel in $\mathcal{K}^s_k$ can be recovered from its $k$-symmetric rearrangement through a measure preserving transformation.
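For intuition, the $k$-symmetric rearrangement acts on one slice at a time. A discrete Steiner-type sketch with $n = 2$, $k = 1$ (the center-out placement rule below is our own simplification): each row is replaced by a symmetric decreasing profile, which preserves the measure of every row level set.

```python
import numpy as np

# Discrete Steiner-type symmetrization: each row z is replaced by its
# symmetric decreasing rearrangement in y. Since the new row is a
# permutation of the old one, all row level-set measures are preserved.
def sym_decreasing_row(row):
    order = np.sort(row)[::-1]        # values, largest first
    out = np.empty_like(row)
    mid = len(row) // 2
    out[mid:] = order[0::2]           # largest at the center, then rightwards
    out[:mid] = order[1::2][::-1]     # remaining values leftwards
    return out

rng = np.random.default_rng(5)
f = rng.uniform(0, 1, size=(4, 9))
g = np.array([sym_decreasing_row(r) for r in f])

for z in range(4):
    for lam in (0.25, 0.5, 0.75):
        assert (f[z] > lam).sum() == (g[z] > lam).sum()
```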

Analysis of F s k
Our main goal in this section is to study the infimum in the definition of the operator, where $\tilde u(x) = u(x_0 + x) - u(x_0) - x \cdot \nabla u(x_0)$. Throughout the section, we will assume that $u \in C^0(\mathbb{R}^n) \cap C^{1,1}(x_0)$ and satisfies properties $(P_1)$ and $(P_2)$, so that $0 \le F^s_k u(x_0) < \infty$.

4.1. Analysis of the infimum. We will study three cases, according to the $\mathcal{H}^k$-measure of the sublevel sets $\{y \in \mathbb{R}^k : \tilde u(y, z) < \lambda\}$:

Case 1. For all $\lambda > 0$ and $z \in \mathbb{R}^{n-k}$, the sublevel sets have finite measure.

Case 2. There exists some threshold $\lambda_0 > 0$ such that for all $z \in \mathbb{R}^{n-k}$, the sublevel sets have finite measure below $\lambda_0$ and infinite measure above it.

Case 3. For all $\lambda > 0$ and $z \in \mathbb{R}^{n-k}$, the sublevel sets have infinite measure.

In the first case, when all of the level sets of $\tilde u$ have finite measure, we show that the infimum is attained at some kernel whose level sets depend on the measure preserving transformation that rearranges the level sets of $\tilde u$. More precisely:

Theorem 4.1. Suppose that for all $\lambda > 0$ and $z \in \mathbb{R}^{n-k}$,
$\mathcal{H}^k(\{y \in \mathbb{R}^k : \tilde u(y, z) < \lambda\}) < \infty$.
Then, for any $z \in \mathbb{R}^{n-k}$, there exists a measure preserving $\sigma_z : \mathbb{R}^k \to [0, \infty)$ realizing the infimum. In particular, the infimum is attained.
Remark 4.2. Observe that if $\tilde u(\cdot, z)$ is constant on some set of positive measure, then the kernel where the infimum is attained is not unique, since the integral is invariant under any measure preserving rearrangement of $K$ within this set (see [17]).
Before we give the proof of Theorem 4.1, we need a lemma regarding the $k$-symmetric increasing rearrangement of $\tilde u$, which by Definition 3.8 is given by $\tilde u_{*,k}(y, z) = (\tilde u_z)_*(\omega_k |y|^k)$.

Proof. Assume first that there exists $M > 0$, independent of $\lambda$, such that (4.1) holds. Then for any $y \in \mathbb{R}^k$ with $\omega_k |y|^k > M$, we have that $\tilde u_{*,k}(y, z) = \infty$, since $\inf \emptyset = \infty$. If (4.1) does not hold, then there must be an increasing sequence $M_\lambda$. Then for any $M > 0$, there exists $\Lambda = \Lambda(M) > 0$ such that $M_\lambda > M$ for all $\lambda > \Lambda$. Since $M_\lambda$ is monotone increasing, we can assume without loss of generality that $M_\Lambda \le M$; otherwise, we take $\Lambda$ to be the minimum for which this property holds. Also, $\Lambda(M)$ is monotone increasing, and $\Lambda(M) \to \infty$ as $M \to \infty$. In particular, for any $M$ large, the infimum defining $\tilde u_{*,k}$ is controlled from below, and we conclude that the claimed limit holds.

Proof of Theorem 4.1. Since $u$ is convex at $x_0$, we have that $\tilde u(y, z) \ge 0$. Fix $z \in \mathbb{R}^{n-k}$ and consider the functions $f(y) = \tilde u(y, z)$ and $g(y) = K(y, z)$. Since the sublevel sets have finite measure for any $\lambda > 0$, by Lemma 4.3 we may apply Ryff's theorem (Theorem 3.5): there exists a measure preserving $\sigma_z$. For any $r > |z|$, the level-set condition (2.1) holds, since $\sigma_z$ is measure preserving (see Definition 3.3). Then $K \in \mathcal{K}^s_k$. To prove the reverse inequality, let $K \in \mathcal{K}^s_k$. Applying Proposition 3.2, together with Lemma 3.4 and (4.2), and using the definition of rearrangements with $\omega_k |\tilde y|^k = \sigma_z(y)$ and (3.1), we get the pointwise bound. Hence, integrating over all $z \in \mathbb{R}^{n-k}$ and taking the infimum over all kernels $K \in \mathcal{K}^s_k$, we conclude the result.

Remark 4.4. A natural question that arises from this result is whether there exists a measure preserving $\varphi_z : \mathbb{R}^k \to \mathbb{R}^k$ such that $\tilde u_{*,k}(y, z) = \tilde u(\varphi_z^{-1}(y), z)$. In that case, the infimum would be attained at a kernel $K$ determined by $\phi$, where $\phi : \mathbb{R}^n \to \mathbb{R}^n$ is measure preserving with $\phi(y, z) = (\varphi_z(y), z)$.
Recall that Ryff's theorem gives a representation of a function $f$ in terms of its increasing rearrangement $f_*$, that is, $f = f_* \circ \sigma$, with $\sigma$ measure preserving. If this result were also true for the symmetric increasing rearrangement, given by $f^\#(x) = f_*(\omega_k |x|^k)$, then there would exist a measure preserving $\varphi : \mathbb{R}^k \to \mathbb{R}^k$ with $f = f^\# \circ \varphi$. Hence, it seems reasonable that $\omega_k |\varphi(x)|^k = \sigma(x)$. As far as we know, this is an open problem.
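In the discrete setting, Ryff's representation $f = f_* \circ \sigma$ is elementary: the measure preserving map becomes a rank permutation. A minimal sketch of our own (assuming distinct values, which holds almost surely for the random sample):

```python
import numpy as np

# Discrete analogue of Ryff's theorem: a nonnegative vector equals its
# increasing rearrangement composed with a rank permutation, the discrete
# stand-in for a measure preserving transformation.
rng = np.random.default_rng(4)
f = rng.uniform(0, 1, 10)

f_inc = np.sort(f)                     # increasing rearrangement of f
sigma = np.argsort(np.argsort(f))      # sigma[i] = rank of f[i]
recovered = f_inc[sigma]               # f_* composed with sigma
assert np.allclose(f, recovered)
```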
As an immediate consequence of Theorem 4.1, we obtain the following representation of $F^s_k u$ in terms of the $k$-symmetric increasing rearrangement of $\tilde u$.

Corollary 4.5. Under the assumptions of Theorem 4.1, we have $F^s_k u(x_0) = \Delta^s \tilde u_{*,k}(0)$.

Proof. Note that $\tilde u_{*,k}(0) = 0$, since $\tilde u(0) = 0$. The identity then follows from the computation in the proof of Theorem 4.1.

From the previous result and the fact that the family of operators $\{F^s_k\}_{k=1}^{n-1}$ is monotone decreasing, we see that the fractional Laplacians of the $k$-symmetric rearrangements are ordered at the origin. Next we treat the second case.

Theorem 4.7. Suppose that there exists some $\lambda_0 > 0$ as in Case 2. Then there exists a kernel realizing the infimum. In particular, the infimum is attained.
Proof. Fix $z \in \mathbb{R}^{n-k}$. For $j \ge 1$, define the sets $A_j$. For simplicity, we drop the dependence on $z$ from the notation. We have that $\mathcal{H}^k(A_j) < \infty$ and $A_j \subseteq A_{j+1}$. Hence, we need to distinguish two cases. By Lemma 3.4, the change of variables holds for any measure preserving $\sigma$, and by Ryff's theorem (Theorem 3.5), there exists $\sigma_j$. We claim that $\sigma_{j+1}(y) \le \sigma_j(y)$ for all $y \in A_j$. Indeed, since $A_j \subseteq A_{j+1}$, the corresponding distribution functions are ordered; in particular, for all $y \in A_j$, since $(v_{j+1})_*$ is monotone increasing, we must have $\sigma_{j+1}(y) \le \sigma_j(y)$.
Therefore, there exists a limit $\sigma_\infty$, defined on $A_\infty$. Define the kernel $K_0$ on $A_\infty$ accordingly. Then by Fatou's lemma, Lemma 3.7, and (4.3), we get the lower bound for any $K \in \mathcal{K}^s_k$. Integrating over $z$ and taking the infimum over all kernels $K$, we conclude the result. For any $j \in \mathbb{N}$ with $j > 1/\varepsilon$, consider the corresponding set. Choose $R > 0$ large enough (depending on $\varepsilon$, $j$, $\lambda_0$, and $z$) so that the bound holds for all $y \in B_R^c$. By (4.6) and (4.7), and thus by (4.4), we see that $v_\varepsilon$ satisfies the assumptions of Case 2.1, so there exists a minimizing kernel. Finally, we need to pass to the limit. First, we prove that $\{\sigma_\varepsilon\}_{\varepsilon > 0}$ is monotone decreasing; in particular, the sequence of kernels $\{K_\varepsilon\}_{\varepsilon > 0}$ is monotone decreasing. Define $K_0$ as in (4.10). By (4.8) and (4.10), we obtain the comparison. Moreover, $K_0 \in \mathcal{K}^s_k$, since $K_\varepsilon \in \mathcal{K}^s_k$ for any $r > 0$. Finally, using (4.5), (4.9), (4.10), and the monotone convergence theorem, we conclude. The last equality follows from the following observation: since $\widetilde{\mathcal{K}}^s_k \subset \mathcal{K}^s_k$, the infimum over all kernels in $\mathcal{K}^s_k$ is less than or equal to the infimum over $\widetilde{\mathcal{K}}^s_k$; moreover, the reverse inequality holds trivially.
Finally, we deal with the third case, that is, when all of the level sets of $\tilde u$ have infinite measure. In particular, notice that $\tilde u_{*,k}(x) = 0$ for all $x \in \mathbb{R}^n$. This is the only case where the infimum is not attained. Indeed, we prove in the following theorem that the infimum is equal to zero.

Theorem 4.8. Suppose that for all $\lambda > 0$ and $z \in \mathbb{R}^{n-k}$, the sublevel sets $\{y \in \mathbb{R}^k : \tilde u(y, z) < \lambda\}$ have infinite $\mathcal{H}^k$-measure. Then $F^s_k u(x_0) = 0$.

Proof. From $(P_2)$, we have that $F^s_k u(x_0) \ge 0$. To prove the reverse inequality, it is enough to find a sequence of kernels along which the energy tends to zero. Fix $\varepsilon > 0$ and $z \in \mathbb{R}^{n-k}$. For any $j \ge 0$, we define the sets $U_j$. Note that $U_{j+1} \subseteq U_j$. Also, by assumption, with $\lambda = \varepsilon 2^{-j(1+2s)} e^{-|z|^2}$, the corresponding set has infinite measure. We will construct $K_\varepsilon \in \mathcal{K}^s_k$ by describing first where to locate each of its level sets. Recall that $K \in \mathcal{K}^s_k$ if the level-set condition holds for all $r > 0$. In view of this definition, we define the sets $B_j$. More precisely, for $j \ge 0$, if $|z| < 2^{-(j+1)} < 2^{-j}$, then the corresponding slice is an annulus. Therefore, $\mathcal{H}^k(A_j) \le c 2^{-kj}$, where $c > 0$ only depends on $k$. It follows that (4.12) holds. For any $i \ge 0$, let $D_i$ be the collection of all closed dyadic cubes of side length $l(Q) = 2^{-i}$, where $l(Q)$ denotes the side length of the cube $Q$.
For any $j \ge 0$, since $U_j$ is an open set, by a standard covering argument there exists a family of dyadic cubes $F_j$ covering $U_j$ and satisfying the following properties: (1) for any $Q \in F_j$, there exists some $i \ge 0$ such that $Q \in D_i$. Analogously, for the sets $B_j$, with $j \ge -1$, there exists a family of dyadic cubes $\widetilde F_j$ satisfying property (1). We will construct the sets $A_j$ by properly translating the dyadic cubes partitioning the sets $B_j$ into $U_j$. In particular, we will prove that (4.13) holds for some translation mappings $T_j : \widetilde F_j \to F_j$ to be determined.
We start with the case $j = 0$. For any $i \ge 0$, denote by $m_i$ and $n_i$ the number of cubes in $F_0 \cap D_i$ and $\widetilde F_0 \cap D_i$, respectively, where $\mathcal{H}^0(E)$ is the cardinality of the set $E$. Note that $m_i, n_i \in \mathbb{Z}^+ \cup \{\infty\}$.
We will recursively place $B_0$ into $U_0$. First, fix $i = 0$. If $m_0 \ge n_0$, then for any $\widetilde Q \in \widetilde F_0 \cap D_0$, there exist some $\tau \in \mathbb{R}^k$ and some $Q \in F_0 \cap D_0$ with $Q = \widetilde Q + \tau$. Moreover, we can define $T_0$ one-to-one since $m_0 \ge n_0$, choosing a different $Q$ for each $\widetilde Q$. Note that there are $p_0$ cubes in $F_0 \cap D_0$, with $p_0 = m_0 - n_0$, that have not been used. Hence, for all of these cubes, divide each side in half, so that each cube gives rise to $2^k$ cubes with side length $2^{-1}$. Call this collection of new cubes $\mathcal{Q} = \{Q_l\}_{l=1}^{2^k p_0} \subset D_1$, and add them to the family $F_0 \cap D_1$. Namely, we replace $F_0 \cap D_1$ by $(F_0 \cap D_1) \cup \mathcal{Q}$.
If $m_0 < n_0$, then take $q_0$ cubes in $\widetilde F_0 \cap D_0$, with $q_0 = n_0 - m_0$, and divide each side in half. Call this collection of new cubes $\widetilde{\mathcal{Q}} = \{\widetilde Q_l\}_{l=1}^{2^k q_0} \subset D_1$, and replace $\widetilde F_0$ accordingly. If $\tilde n_0 = \mathcal{H}^0(\widetilde F_0 \cap D_0)$ denotes the new count, then $m_0 = \tilde n_0$. Hence, by the same argument as in the previous case, we find $T_0$ as in (4.13). For $i \ge 1$, we can repeat the same process until we run out of cubes from $\widetilde F_0$ (or the modified family). We know the process will end since $\mathcal{H}^k(B_0) < \mathcal{H}^k(U_0)$. When this happens, we will have constructed a one-to-one mapping. Iterating this process, we find a sequence of translation mappings $\{T_j\}_{j=0}^\infty$, with $T_j : \widetilde F_j \to F_j$, and a sequence of disjoint sets $\{A_j\}_{j=0}^\infty$ such that (4.13) holds, with the bound carrying the factor $2^{-j(n+2s)}$ for $j \ge 0$.

Now call $C_j$ the disjoint components of $B_{-1}$, so that $B_{-1} = \bigcup_{j=0}^\infty C_j$, with $\mathcal{H}^k(C_j) < \infty$ for all $j \ge 0$. Hence, instead of partitioning all of $B_{-1}$ into dyadic cubes, we partition each of its disjoint components $C_j$. Arguing as before, we place them into $U_0 \setminus \bigcup_{i=0}^\infty A_i$ recursively, according to the scheme above, where $T_{-1}^j$ is defined as before. At the end of this process, we find a translation map $T_{-1}$. In particular, for each $y$ there exists some $j \ge -1$ such that $y \in A_j$. Furthermore, recall that $A_j = T_j(B_j)$, where $T_j$ is a one-to-one and onto translation map. Hence, there exists a unique $w \in B_j$ such that $y = T_j(w) = w + \tau$, for some $\tau \in \mathbb{R}^k$. Let $T_z : \mathbb{R}^k \to \mathbb{R}^k$ be given by $T_z(y) = w$. Note that $T_z$ is measure preserving. Then we define the kernel $K_\varepsilon$ accordingly and split the energy into $I$ and $II$. For $I$, we use that $\tilde u(y, z) \le \varepsilon e^{-|z|^2}$, since $A_{-1} \subset U_0$; then by Lemma 3.7 and Lemma 3.4, we get the bound with $C > 0$ depending only on $n$ and $s$. For $II$, we use that $\tilde u(y, z) \le \varepsilon 2^{-j(n+2s)} e^{-|z|^2}$, and obtain the bound with $C > 0$ depending only on $n$, $s$, and $k$.
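The covering-and-translation step in the proof above can be sketched in one dimension: greedily decompose an open set into disjoint dyadic intervals, coarse to fine, so the pieces can then be moved by translations. The routine below is a simplified illustration of ours (the membership test by sampling and all tolerances are our own choices, not part of the construction in the text):

```python
import numpy as np

def dyadic_cover(indicator, max_level=10):
    """Greedily cover an open subset of [0, 1) by disjoint dyadic intervals
    [m 2^-i, (m+1) 2^-i), from coarse to fine levels."""
    pieces = []
    for i in range(max_level + 1):
        h = 2.0 ** (-i)
        for m in range(2 ** i):
            a, b = m * h, (m + 1) * h
            # Skip intervals already inside an accepted coarser piece;
            # dyadic nesting rules out partial overlaps.
            if any(a0 <= a and b <= b0 for a0, b0 in pieces):
                continue
            # Accept the interval if a fine sample of it lies in the set.
            xs = a + h * (np.arange(32) + 0.5) / 32
            if all(indicator(x) for x in xs):
                pieces.append((a, b))
    return pieces

U = lambda x: 0.1 < x < 0.85            # an open interval as the test set
pieces = dyadic_cover(U)
total = sum(b - a for a, b in pieces)
assert abs(total - 0.75) < 0.01         # pieces nearly exhaust |U| = 0.75
```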

4.2. Limit as $s \to 1$
We define $MA_k u$ as the Monge-Ampère operator acting on $u$ with respect to the first $k$ variables, with $D^2 u(x) = (u_{ij}(x))_{1 \le i,j \le n}$, and $\Delta_{n-k} u$ as the Laplacian of $u$ with respect to the last $n - k$ variables. Then, under some special conditions, (4.14) holds. In particular, the family $\{F^s_k\}_{k=1}^{n-1}$ can be understood as nonlocal analogs of concave second order elliptic operators that decompose into a Monge-Ampère operator restricted to $\mathbb{R}^k$ and a Laplacian restricted to $\mathbb{R}^{n-k}$. Indeed, by Corollary 4.5, we have $F^s_k u(x) = \Delta^s \tilde u_{*,k}(0)$. Since the $k$-symmetric rearrangement does not depend on $s$ and $\Delta^s \to \Delta$ as $s \to 1$, we may pass to the limit. Suppose that $\tilde u_{*,k}(y, z) = \tilde u(\varphi_z^{-1}(y), z)$, where $\varphi_z : \mathbb{R}^k \to \mathbb{R}^k$ is an invertible measure preserving transformation with $\varphi_z(0) = 0$. Recall that $\sigma_z$ is given in Theorem 4.1 (see also Remark 4.4). In this case,
(4.15) $\Delta \tilde u_{*,k}(0) = \big[\Delta_y \tilde u(\varphi_z^{-1}(y), z) + \Delta_z \tilde u(\varphi_z^{-1}(y), z)\big]_{(y,z)=(0,0)}$.
For the first term, we use the identity with $\Psi = \{\psi : \mathbb{R}^k \to \mathbb{R}^k \text{ measure preserving such that } \psi(0) = 0\}$, together with the fact that the infimum is attained when $\tilde u \circ \psi$ is a radially symmetric increasing function [9]. For the second term, call $\phi(y, z) = (\varphi_z^{-1}(y), z)$ and compute, using $D_z \phi(0) = (0, I_{n-k})$, where $I_{n-k}$ denotes the identity matrix in $M_{n-k}$. Combining (4.15), (4.16), and (4.17), we conclude (4.14).

4.3. Connection to optimal transport. In Corollary 4.5, we obtained a representation of $F^s_k u$ in terms of the $k$-symmetric increasing rearrangement. Using this representation, we find an equivalent expression of $F^s_k u$ that can be understood from the viewpoint of optimal transport.

Theorem 4.9. Suppose we are under the assumptions of Theorem 4.1. Then for any $z \in \mathbb{R}^{n-k}$, $z \ne 0$, there exists an invertible map $\varphi_z : \mathbb{R}^k \to \mathbb{R}^k$ such that (4.18) holds. Moreover, if $\sigma_z : \mathbb{R}^k \to [0, \infty)$ is the Ryff map given in Theorem 4.1, then $\varphi_z$ is measure preserving if and only if (4.19) holds.

The key tool to prove Theorem 4.9 is Brenier-McCann's theorem, a very well-known result in the theory of optimal transport [3,14]. We state it here in the form in which we will use it.
Theorem 4.10 (Brenier-McCann). Let $f, g \in L^1(\mathbb{R}^k)$ be nonnegative densities with the same total mass. Then there exists a convex function $\psi : \mathbb{R}^k \to \mathbb{R}$ whose gradient $\nabla\psi$ pushes forward $f \, dy$ to $g \, dy$. Namely, for any measurable function $h$ in $\mathbb{R}^k$,
(4.20) $\int_{\mathbb{R}^k} h(\nabla\psi(y))\, f(y) \, dy = \int_{\mathbb{R}^k} h(y)\, g(y) \, dy$.
Moreover, $\nabla\psi : \mathbb{R}^k \to \mathbb{R}^k$ is invertible and unique.
In the literature, ∇ψ is known as the (optimal) transport map.
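In one dimension, the Brenier map of Theorem 4.10 is explicit: it is the monotone quantile map $T = G^{-1} \circ F$, the gradient of a convex function. A small sketch of our own ($f$ uniform on $(0,1)$ and $g$ the Beta(2,1) density $g(y) = 2y$, so $G(y) = y^2$ and $T(x) = \sqrt{x} = \frac{d}{dx}\,\tfrac{2}{3}x^{3/2}$):

```python
import numpy as np

# 1-D optimal transport: T = G^{-1} o F pushes f dy forward to g dy.
# Here f = Unif(0, 1), g(y) = 2y on (0, 1), hence T(x) = sqrt(x), the
# gradient of the convex function psi(x) = (2/3) x^{3/2}.
rng = np.random.default_rng(3)
x = rng.uniform(0, 1, 200_000)    # samples from f
y = np.sqrt(x)                    # T pushes f forward to g

# Push-forward checks against known Beta(2,1) moments:
# E[Y] = 2/3 and E[Y^2] = 1/2.
assert abs(y.mean() - 2.0 / 3.0) < 1e-2
assert abs((y ** 2).mean() - 0.5) < 1e-2
```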
Proof of Theorem 4.9. Fix $z \in \mathbb{R}^{n-k}$, $z \ne 0$, and consider $f_z, g_z \in L^1(\mathbb{R}^k)$ defined in terms of $\sigma_z : \mathbb{R}^k \to [0, \infty)$ from Theorem 4.1. Note that $f_z$ and $g_z$ have the same total mass, since $\sigma_z$ is measure preserving. By Theorem 4.10, there exists a convex function $\psi_z : \mathbb{R}^k \to \mathbb{R}$ (depending on $z$) whose gradient $\nabla\psi_z$ pushes forward $f_z \, dy$ to $g_z \, dy$. Moreover, $\nabla\psi_z$ is invertible and unique. Call $\varphi_z = (\nabla\psi_z)^{-1}$. Using (4.20) with $h(y) = \tilde u(y, z)$, we obtain the desired identity.
Integrating over $z \in \mathbb{R}^{n-k}$, we obtain (4.18). It remains to show that $\varphi_z$ is measure preserving if and only if (4.19) holds. Indeed, for any measurable set $E \subset \mathbb{R}^k$, we compute $|\varphi_z^{-1}(E)|$, where the last equality follows from (4.21) with $h(y) = \big(|\varphi_z(y)|^2 + |z|^2\big)^{\frac{n+2s}{2}} \chi_E(y)$.

Hölder continuity of $F^s_k u$

Our main regularity result is the following.
for some constant $C_0 > 0$ depending only on $n$, $k$, $s$, $\varepsilon$, $\Lambda$, and $M$. This theorem will be a consequence of the next proposition.
for some C > 0 depending only on n, k, and s.
First, we prove Theorem 5.1.
Before we prove Proposition 5.2, we need several preliminary results.
Proof. By Fubini's theorem, we have the layer-cake representation. Since $f$ is monotone increasing, $r > \mu_f(t)$ if and only if $t < f(r)$. Therefore, the claimed identity follows.
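The Fubini step in this proof is an instance of the classical layer-cake formula, which in general reads (standard fact, stated here for reference): for a nonnegative measurable $f$ on a measure space $(X, \mu)$,

```latex
\int_X f \, d\mu
  \;=\; \int_X \int_0^{\infty} \chi_{\{t < f(x)\}} \, dt \, d\mu(x)
  \;=\; \int_0^{\infty} \mu\big(\{x \in X : f(x) > t\}\big) \, dt .
```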
Proof. By Corollary 4.5, we have the integral representation with $v(r, z) = \tilde u_{*,k}(y, z)$ for $|y| = r$. Next we apply Lemma 5.3 to $f(r) = v(|z| r, z)$ and $\omega(r) = k \omega_k r^{k-1} (r^2 + 1)^{-\frac{n+2s}{2}}$. Note that since $v$ is the $k$-symmetric increasing rearrangement of $\tilde u$, the distribution functions agree, where $W$ is given in (5.2). By Fubini's theorem, we conclude.

Lemma 5.5. Suppose we are under the assumptions of Proposition 5.2.

Proof. First we prove (a). Fix $t \in (2\Lambda d, \varepsilon]$ and let $x \in D_{x_0} u(t - 2\Lambda d)$. Using (5.3) and convexity, we obtain the first bound. Moreover, $x \in D_{x_0} u(\varepsilon)$, since $t \le \varepsilon$. Next we prove (b). Fix $t \in (\varepsilon, \infty)$ and let $x \in D_{x_0} u(t - 2\Lambda d t / \varepsilon)$. By the previous computation, we have the analogous bound. To control the distance from $x$ to $x_0$, we need to estimate the diameter of $D_{x_0} u(t)$. Take $y \in D_{x_0} u(t) \setminus D_{x_0} u(\varepsilon)$ and let $z$ be in the intersection between $\partial D_{x_0} u(\varepsilon)$ and the line segment joining $x_0$ and $y$. Then there is some $\lambda > 1$ such that $y - x_0 = \lambda (z - x_0)$. By convexity of $u$, we obtain the estimate. Hence, $|x - x_0| \le \Lambda t / \varepsilon$, and by (5.4), we conclude. We are ready to give the proof of Proposition 5.2.

A Global Poisson Problem
We consider the following Poisson problem in the full space, where $\varphi : \mathbb{R}^n \to \mathbb{R}$ is nonnegative, smooth, and strictly convex. Furthermore, we ask that $\varphi$ behave asymptotically at infinity like a cone $\phi$. Similar problems have been studied for nonlocal Monge-Ampère operators in [4,9]. We will prove the following theorem.
Theorem 6.1. There exists a unique solution $u$ to (6.1).

To define the notion of solution, we introduce a natural pointwise definition of $F^s_k u$ for functions $u$ that are merely continuous.
The following properties of $F^s_k u$ will be useful for our purposes. The proof is analogous to the one in [9], so we omit it here.

Lemma 6.5. Let $u, v \in C^0(\mathbb{R}^n)$ be convex functions. The following hold: (a) (Homogeneity) For any $\lambda > 0$, $F^s_k(\lambda u) = \lambda F^s_k u$.
(d) (Lower semicontinuity) Assume that $u \in C^{1,1}(\mathbb{R}^n)$. Then $F^s_k u$ is lower semicontinuous.

Definition 6.6. Let $u \in C^0(\mathbb{R}^n)$ be a convex function. We say that $u$ is a subsolution to (6.1) if the corresponding inequality holds pointwise, and analogously for supersolutions. We say that $u$ is a solution if it is both a subsolution and a supersolution.

Lemma 6.7. If $u$ and $v$ are subsolutions, then $\max\{u, v\}$ is a subsolution.
Proof. Let $w = \max\{u, v\}$. Then $w$ is continuous and convex. Fix $x_0 \in \mathbb{R}^n$. Without loss of generality, we may assume that $u(x_0) \ge v(x_0)$. Then $w(x_0) = u(x_0)$ and $w(x) \ge u(x)$ for any $x \in \mathbb{R}^n$. By monotonicity (see Lemma 6.5), we have $F^s_k w(x_0) \ge F^s_k u(x_0)$. Hence, $w$ is a subsolution.
We will show existence and uniqueness of solutions to (6.1) using Perron's method. The key ingredients are the comparison principle and the existence of a subsolution (lower barrier) and a supersolution (upper barrier). We state this in the following proposition. We omit the proof since it is similar to that in [9]. An immediate consequence of the comparison principle is the uniqueness of solutions.

Lemma 6.9 (Uniqueness). There exists at most one solution to (6.1).
Proof. Suppose by means of contradiction that there exist two functions $u, v \in C^0(\mathbb{R}^n)$ with $u \ne v$ satisfying (6.1). Then $|u(x) - v(x)| \to 0$ as $|x| \to \infty$. Hence, for any $\varepsilon > 0$, there exists a compact set $\Omega_\varepsilon \subset \mathbb{R}^n$, depending on $\varepsilon$, such that $|u - v| \le \varepsilon$ outside $\Omega_\varepsilon$. Moreover, for any $x_0 \in \mathbb{R}^n$, the function $v + \varepsilon$ is a supersolution, and by the comparison principle it follows that $u \le v + \varepsilon$ in $\mathbb{R}^n$. Similarly, we see that $v - \varepsilon$ is a subsolution and $v - \varepsilon \le u$. Letting $\varepsilon \to 0$, we get $u = v$ in $\mathbb{R}^n$, which is a contradiction.
To prove existence of a solution, we define $u$ as the supremum over the set $S$ of admissible subsolutions. Note that $S \ne \emptyset$, since $\varphi \in S$, and the supremum is finite, since $v \le \varphi + w$ for any $v \in S$. Moreover, $u$ is convex and Lipschitz. From $\varphi \le u \le \varphi + w$ and the upper bound for $w$ in Proposition 6.8, it follows that $u$ attains the prescribed data at infinity.

Proof. We will show the two-sided bound for any $x_0$. Indeed, the lower bound follows from convexity of $u$, so we only need to prove the upper bound. Take any $v \in S$ and fix $x_1$. Since $F^s_k$ is homogeneous of degree 1, concave, and translation invariant (see Lemma 6.5), the intermediate estimate holds. By (6.4) and the upper bound of $w$ in Proposition 6.8, part (c), we see that the error term tends to $0$ as $|x_0| \to \infty$, with $x_1$ fixed, since $1 - 2s < 0$. Then for all $\varepsilon > 0$, there is some compact set $\Omega_\varepsilon$, depending on $\varepsilon$ and $x_1$, such that $v(x_0) - \varepsilon \le \varphi(x_0)$ for all $x_0 \in \mathbb{R}^n \setminus \Omega_\varepsilon$.
Choosing $M = \varepsilon$, we see that $\{\phi < l + \varepsilon\} \subset B_R$ for some $R$ depending on $\varepsilon$. Hence, the set $D_{x_0} u(\varepsilon)$ is bounded.
To prove the claim, we distinguish two cases. If $u(0) = 0$, then $u$ attains an absolute minimum at $0$, so $\nabla u(0) = 0$. In particular, $l(x) = 0$ for all $x \in \mathbb{R}^n$, and thus (6.7) is clearly satisfied. Hence, it remains to show the claim when $u(0) > 0$.
We will prove it by contradiction. If (6.7) is not true, then there exists a sequence of points $\{x_j\}_{j=1}^\infty \subset \mathbb{R}^n$ such that $|x_j| \to \infty$ as $j \to \infty$ and $\lim_{j \to \infty} (\phi - l)(x_j) < \infty$.
Using that $\phi$ is continuous and homogeneous of degree 1, and letting $j \to \infty$, we obtain the limit identity. Since $l$ is a supporting plane of $u$, we know that $u(x) \ge l(x)$ for all $x \in \mathbb{R}^n$, and thus $u(\lambda e) \ge l(\lambda e) = \phi(\lambda e) + u(0)$.

Proposition 5.4. Let $x \in \mathbb{R}^n$, and suppose we are under the assumptions of Corollary 4.5.

Definition 6.2. Let $u \in C^0(\mathbb{R}^n)$. (a) We say that a linear function $l(y) = y \cdot p + b$, with $p \in \mathbb{R}^n$ and $b \in \mathbb{R}$, is a supporting plane of $u$ at a point $x$ if $l(x) = u(x)$ and $l(y) \le u(y)$ for all $y \in \mathbb{R}^n$. (b) We define the subdifferential of $u$ at a point $x$ as the set $\partial u(x)$ of all vectors $p \in \mathbb{R}^n$ such that $l(y) = y \cdot p + b$ is a supporting plane of $u$ at $x$, for some $b \in \mathbb{R}$.

Definition 6.3. Let $u \in C^0(\mathbb{R}^n)$ be a convex function. For $x_0 \in \mathbb{R}^n$, we define the sets $D_{x_0} u(t)$ appearing in Section 5.